
nhaliday : tech   511

Michael Akilian: Worker-in-the-loop Retrospective
Over the last ten years, many companies have created human-in-the-loop services that combine a mix of humans and algorithms. Now that some time has passed, we can tease out some patterns from their collective successes and failures. As someone who started a company in this space, my hope is that this retrospective can help prospective founders, investors, or companies navigating this space save time and fund more impactful projects.

A service is considered human-in-the-loop if it organizes its workflows with the intent to introduce models or heuristics that learn from the work of the humans executing the workflows. In this post, I will make reference to two common forms of human-in-the-loop:

User-in-the-loop (UITL): The end-user is interacting with suggestions from a software heuristic/ML system.
Worker-in-the-loop (WITL): A worker is paid to monitor suggestions from a software heuristic/ML system developed by the same company that pays the worker, but for the ultimate benefit of an end-user.
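
[ed.: a minimal sketch of the WITL loop described above, in Python with hypothetical names; the model proposes, a paid worker reviews low-confidence cases, and the worker's corrections are logged as future training data:]

    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        output: str
        confidence: float

    def handle_task(task, predict, review, training_log, threshold=0.9):
        suggestion = predict(task)          # heuristic/ML system proposes first
        if suggestion.confidence >= threshold:
            return suggestion.output        # confident enough: fully automated path
        final = review(task, suggestion)    # worker monitors/corrects the suggestion
        training_log.append((task, final))  # corrections become training data
        return final

    # hypothetical usage:
    log = []
    handle_task("categorize: refund request",
                predict=lambda t: Suggestion("billing", 0.42),
                review=lambda t, s: "billing/refunds",
                training_log=log)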
techtariat  reflection  business  tech  postmortem  automation  startups  hard-tech  ai  machine-learning  human-ml  cost-benefit  analysis  thinking  business-models  things  dimensionality  exploratory  markets  labor  economics  tech-infrastructure  gig-econ 
12 weeks ago by nhaliday
You’re Probably Asking the Wrong People For Career Advice | Hunter Walk
Here’s what I believe: when considering a specific career path decision or evaluating an offer with a particular company, I’ve found people tend to concentrate mostly on the opinions and inputs of two groups: their friends in similar jobs and the most “successful” people they know within the industry. Seems like a reasonable strategy, right? Depends.

...

Ok, so who do advice seekers usually *undervalue*? (A) People who know you very deeply regardless of expertise in your specific professional work and (B) individuals who have direct experience with the company, role and people you’re considering.
techtariat  career  advice  communication  strategy  working-stiff  tech  judgement  decision-making  theory-of-mind  expert-experience  track-record  arbitrage  cost-benefit  contrarianism  rhetoric 
december 2019 by nhaliday
Introduction · CTF Field Guide
also has some decent looking career advice and links to books/courses if I ever get interested in infosec stuff
guide  security  links  list  recommendations  contest  puzzles  hacker  init  adversarial  systems  traces  accretion  programming  debugging  assembly  c(pp)  metal-to-virtual  career  planning  jobs  books  course  learning  threat-modeling  tech  working-stiff 
december 2019 by nhaliday
Ask HN: Getting into NLP in 2018? | Hacker News
syllogism (spaCy author):
I think it's probably a bad strategy to try to be the "NLP guy" to potential employers. You'd be much better off being a software engineer on a project with people with ML or NLP expertise.

NLP projects fail a lot. If you line up a job as a company's first NLP person, you'll probably be setting yourself up for failure. You'll get handed an idea that can't work, you won't know enough about how to push back to change it into something that might, etc. After the project fails, you might get a chance to fail at a second one, but maybe not a third. This isn't a great way to move into any new field.

I think a cunning plan would be to angle to be the person who "productionises" models.
...
--
...

Basically, don't just work on having more powerful solutions. Make sure you've tried hard to have easier problems as well --- that part tends to be higher leverage.

https://news.ycombinator.com/item?id=14008752
https://news.ycombinator.com/item?id=12916498
https://algorithmia.com/blog/introduction-natural-language-processing-nlp
hn  q-n-a  discussion  tech  programming  machine-learning  nlp  strategy  career  planning  human-capital  init  advice  books  recommendations  course  unit  links  automation  project  examples  applications  multi  mooc  lectures  video  data-science  org:com  roadmap  summary  error  applicability-prereqs  ends-means  telos-atelos  cost-benefit 
november 2019 by nhaliday
Ask HN: What's a promising area to work on? | Hacker News
hn  discussion  q-n-a  ideas  impact  trends  the-bones  speedometer  technology  applications  tech  cs  programming  list  top-n  recommendations  lens  machine-learning  deep-learning  security  privacy  crypto  software  hardware  cloud  biotech  CRISPR  bioinformatics  biohacking  blockchain  cryptocurrency  crypto-anarchy  healthcare  graphics  SIGGRAPH  vr  automation  universalism-particularism  expert-experience  reddit  social  arbitrage  supply-demand  ubiquity  cost-benefit  compensation  chart  career  planning  strategy  long-term  advice  sub-super  commentary  rhetoric  org:com  techtariat  human-capital  prioritizing  tech-infrastructure  working-stiff  data-science 
november 2019 by nhaliday
Zettelkästen? | Hacker News
Here’s a LessWrong post that describes it (including the insight “I honestly didn’t think Zettelkasten sounded like a good idea before I tried it” which I also felt).

yeah doesn't sound like a good idea to me either. idk

the linked post: https://pinboard.in/u:nhaliday/b:7a49d1d287f5
hn  commentary  techtariat  germanic  productivity  workflow  notetaking  exocortex  gtd  explore-exploit  business  comparison  academia  tech  ratty  lesswrong  idk  thinking  neurons  network-structure  software  tools  app  metabuch  writing  trees  graphs  skeleton  meta:reading  wkfly  worrydream  stay-organized  structure  multi 
october 2019 by nhaliday
What do executives do, anyway? - apenwarr
To paraphrase the book, the job of an executive is: to define and enforce culture and values for their whole organization, and to ratify good decisions.

That's all.

Not to decide. Not to break ties. Not to set strategy. Not to be the expert on every, or any topic. Just to sit in the room while the right people make good decisions in alignment with their values. And if they do, to endorse it. And if they don't, to send them back to try again.

There's even an algorithm for this.
techtariat  business  sv  tech  entrepreneurialism  management  startups  books  review  summary  culture  info-dynamics  strategy  hi-order-bits  big-picture  thinking  checklists  top-n  responsibility  organizing 
september 2019 by nhaliday
Geoff Greer's site: Burnout is in the Mind
I sometimes wonder if burnout is the western version of fan death. When you think about it, burnout makes little sense. People get depressed and tired from… what, exactly? Working too much? Working too hard? Excessive drudgery? Bull. We are working less than ever before. Just over a century ago, the average work week exceeded 60 hours. Today, it’s 33.[1] Past occupations also involved toil and danger far greater than any employment today. Yet burnout is a modern phenomenon. Strange, eh?

...

I’m not saying those who claim to be burnt-out are faking. I don’t doubt that burnout describes a real phenomenon. What I do doubt is the accepted cause (work) and the accepted cure (time off from work). It seems much more likely that burnout is a form of depression[3], which has a myriad of causes and cures.

It is only after making all this noise about burnout that I feel comfortable suggesting the following: Don’t worry about working too much. The important thing is to avoid depression. People more knowledgeable than I have written on that subject, but to sum up their advice: Get out. Exercise. Try to form healthy habits. And stay the hell away from negative media such as cable news and Tumblr.
techtariat  labor  discipline  productivity  contrarianism  reflection  tech  realness  stress  causation  roots  psycho-atoms  health  oss  github  stamina  working-stiff  vitality 
september 2019 by nhaliday
The Effect of High-Tech Clusters on the Productivity of Top Inventors
I use longitudinal data on top inventors based on the universe of US patents 1971 - 2007 to quantify the productivity advantages of Silicon-Valley style clusters and their implications for the overall production of patents in the US. I relate the number of patents produced by an inventor in a year to the size of the local cluster, defined as a city × research field × year. I first study the experience of Rochester NY, whose high-tech cluster declined due to the demise of its main employer, Kodak. Due to the growth of digital photography, Kodak employment collapsed after 1996, resulting in a 49.2% decline in the size of the Rochester high-tech cluster. I test whether the change in cluster size affected the productivity of inventors outside Kodak and the photography sector. I find that between 1996 and 2007 the productivity of non-Kodak inventors in Rochester declined by 20.6% relative to inventors in other cities, conditional on inventor fixed effects. In the second part of the paper, I turn to estimates based on all the data in the sample. I find that when an inventor moves to a larger cluster she experiences significant increases in the number of patents produced and the number of citations received.

...

In a counterfactual scenario where the quality of U.S. inventors is held constant but their geographical location is changed so that all cities have the same number of inventors in each field, inventor productivity would increase in small clusters and decline in large clusters. On net, the overall number of patents produced in the US in a year would be 11.07% smaller.
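
[ed.: a stylized version of the specification described above, assuming a log-log patent-count regression with inventor and year fixed effects (the paper's exact functional form may differ):

    \log(\text{patents}_{it}) = \beta \, \log(\text{ClusterSize}_{c(i),\,f(i),\,t}) + \alpha_i + \gamma_t + \varepsilon_{it}

where \alpha_i is the inventor fixed effect, so \beta is identified from within-inventor changes in cluster size such as the Kodak collapse.]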

[ed.: I wonder whether the benefits of less concentration (eg, lower cost of living propping up demographics) are actually smaller than the downsides overall.]
study  economics  growth-econ  innovation  roots  branches  sv  tech  econ-productivity  density  urban-rural  winner-take-all  polarization  top-n  pro-rata  distribution  usa  longitudinal  intellectual-property  northeast  natural-experiment  population  endogenous-exogenous  intervention  counterfactual  cost-benefit 
september 2019 by nhaliday
Mars Direct | West Hunter
Send Mr Bezos. He even looks like a Martian.
--
Throw in Zuckerberg and it’s a deal…
--
We could send twice as many people half-way to Mars.

--

I don’t think that the space station has been worth anything at all.

As for a lunar base, many of the issues are difficult and one ( effects of low-gee) is probably impossible to solve.

I don’t think that there are real mysteries about what is needed for a kind-of self-sufficient base – it’s just too hard and there’s not much prospect of a payoff.

That said, there may be other ways of going about this that are more promising.

--

Venus is worth terraforming: no gravity problems. Doable.

--

It’s not impossible that Mars might harbor microbial life – with some luck, life with a different chemical basis. That might be very valuable: there are endless industrial processes that depend upon some kind of fermentation.
Why, without acetone fermentation, there might not be a state of Israel.
--
If we used a reasonable approach, like Orion, I think that people would usefully supplement those robots.

https://westhunt.wordpress.com/2019/01/11/the-great-divorce/
Jeff Bezos isn’t my favorite guy, but he has ability and has built something useful. And an ugly, contested divorce would be harsh and unfair to the children, who have done nothing wrong.

But I don’t care. The thought of tens of billions of dollars being spent on lawyers and PIs offers the possibility of a spectacle that will live forever, far wilder than the antics of Nero or Caligula. It could make Suetonius look like Pilgrim’s Progress.

Have you ever wondered whether tens of thousands of divorce lawyers should be organized into legions or phalanxes? This is our chance to finally find out.
west-hunter  scitariat  commentary  current-events  trump  politics  troll  space  expansionism  frontier  cost-benefit  ideas  speculation  roots  deep-materialism  definite-planning  geoengineering  wild-ideas  gravity  barons  amazon  facebook  sv  tech  government  debate  critique  physics  mechanics  robotics  multi  lol  law  responsibility  drama  beginning-middle-end  direct-indirect 
september 2019 by nhaliday
CakeML
some interesting job openings in Sydney listed here
programming  pls  plt  functional  ocaml-sml  formal-methods  rigor  compilers  types  numerics  accuracy  estimate  research-program  homepage  anglo  jobs  tech  cool 
august 2019 by nhaliday
Who Owns Huawei? by Christopher Balding, Donald C. Clarke :: SSRN
• Given the public nature of trade unions in China, if the ownership stake of the trade union committee is genuine, and if the trade union and its committee function as trade unions generally function in China, then Huawei may be deemed effectively state-owned.

• Regardless of who, in a practical sense, owns and controls Huawei, it is clear that the employees do not.
study  economics  polisci  law  china  asia  government  leviathan  managerial-state  business  tech  contracts  trade  nationalism-globalism  network-structure  finance  securities 
august 2019 by nhaliday
Information Processing: Beijing 2019 Notes
Trump, the trade war, and US-China relations came up frequently in discussion. Chinese opinion tends to focus on the long term. Our driver for a day trip to the Great Wall was an older man from the countryside, who has lived only 3 years in Beijing. I was surprised to hear him expressing a very balanced opinion about the situation. He understood Trump's position remarkably well -- China has done very well trading with the US, and owes much of its technological and scientific development to the West. A recalibration is in order, and it is natural for Trump to negotiate in the interest of US workers.

China's economy is less and less export-dependent, and domestic drivers of growth seem easy to identify. For example, there is still a lot of low-hanging fruit in the form of "catch up growth" -- but now this means not just catching up with the outside developed world, but Tier 2 and Tier 3 cities catching up with Tier 1 cities like Beijing, Shanghai, Shenzhen, etc.

China watchers have noted the rapidly increasing government and private sector debt necessary to drive growth here. Perhaps this portends a future crisis. However, I didn't get any sense of impending doom for the Chinese economy. To be fair there was very little inkling of what would happen to the US economy in 2007-8. Some of the people I met with are highly placed with special knowledge -- they are among the most likely to be aware of problems. Overall I had the impression of normalcy and quiet confidence, but perhaps this would have been different in an export/manufacturing hub like Shenzhen. [ Update: Today after posting this I did hear something about economic concerns... So situation is unclear. ]

Innovation is everywhere here. Perhaps the most obvious is the high level of convenience from the use of e-payment and delivery services. You can pay for everything using your mobile (increasingly, using just your face!), and you can have food and other items (think Amazon on steroids) delivered quickly to your apartment. Even museum admissions can be handled via QR code.

A highly placed technologist told me that in fields like AI or computer science, Chinese researchers and engineers have access to in-depth local discussions of important arXiv papers -- think StackOverflow in Mandarin. Since most researchers here can read English, they have access both to Western advances, and a Chinese language reservoir of knowledge and analysis. He anticipates that eventually the pace and depth of engineering implementation here will be unequaled.

IVF and genetic testing are huge businesses in China. Perhaps I'll comment more on this in the future. New technologies, in genomics as in other areas, tend to be received more positively here than in the US and Europe.

...

Note Added: In the comments AG points to a Quora post by a user called Janus Dongye Qimeng, an AI researcher in Cambridge UK, who seems to be a real China expert. I found these posts to be very interesting.

Infrastructure development in poor regions of China

Size of Chinese internet social network platforms

Can the US derail China 2025? (Core technology stacks in and outside China)

Huawei smartphone technology stack and impact of US entity list interdiction (software and hardware!)

Agriculture at Massive Scale

US-China AI competition

More recommendations: Bruno Maçães is one of my favorite modern geopolitical thinkers. A Straussian of sorts (PhD under Harvey Mansfield at Harvard), he was Secretary of State for European Affairs in Portugal, and has thought deeply about the future of Eurasia and of US-China relations. He spent the last year in Beijing and I was eager to meet with him while here. His recent essay Equilibrium Americanum appeared in the Berlin Policy Journal. Podcast interview -- we hope to have him on Manifold soon :-)
hsu  scitariat  china  asia  thucydides  tech  technology  ai  automation  machine-learning  trends  the-bones  links  reflection  qra  q-n-a  foreign-policy  world  usa  trade  nationalism-globalism  great-powers  economics  research  journos-pundits  straussian 
july 2019 by nhaliday
Computer latency: 1977-2017
If we look at overall results, the fastest machines are ancient. Newer machines are all over the place. Fancy gaming rigs with unusually high refresh-rate displays are almost competitive with machines from the late 70s and early 80s, but “normal” modern computers can’t compete with thirty to forty year old machines.

...

If we exclude the game boy color, which is a different class of device than the rest, all of the quickest devices are Apple phones or tablets. The next quickest device is the blackberry q10. Although we don’t have enough data to really tell why the blackberry q10 is unusually quick for a non-Apple device, one plausible guess is that it’s helped by having actual buttons, which are easier to implement with low latency than a touchscreen. The other two devices with actual buttons are the gameboy color and the kindle 4.

After the iphones and non-kindle button devices, we have a variety of Android devices of various ages. At the bottom, we have the ancient palm pilot 1000 followed by the kindles. The palm is hamstrung by a touchscreen and display created in an era with much slower touchscreen technology and the kindles use e-ink displays, which are much slower than the displays used on modern phones, so it’s not surprising to see those devices at the bottom.

...

Almost every computer and mobile device that people buy today is slower than common models of computers from the 70s and 80s. Low-latency gaming desktops and the ipad pro can get into the same range as quick machines from thirty to forty years ago, but most off-the-shelf devices aren’t even close.

If we had to pick one root cause of latency bloat, we might say that it’s because of “complexity”. Of course, we all know that complexity is bad. If you’ve been to a non-academic non-enterprise tech conference in the past decade, there’s a good chance that there was at least one talk on how complexity is the root of all evil and we should aspire to reduce complexity.

Unfortunately, it's a lot harder to remove complexity than to give a talk saying that we should remove complexity. A lot of the complexity buys us something, either directly or indirectly. When we looked at the input of a fancy modern keyboard vs. the apple 2 keyboard, we saw that using a relatively powerful and expensive general purpose processor to handle keyboard inputs can be slower than dedicated logic for the keyboard, which would both be simpler and cheaper. However, using the processor gives people the ability to easily customize the keyboard, and also pushes the problem of “programming” the keyboard from hardware into software, which reduces the cost of making the keyboard. The more expensive chip increases the manufacturing cost, but considering how much of the cost of these small-batch artisanal keyboards is the design cost, it seems like a net win to trade manufacturing cost for ease of programming.

...

If you want a reference to compare the kindle against, a moderately quick page turn in a physical book appears to be about 200 ms.

https://twitter.com/gravislizard/status/927593460642615296
almost everything on computers is perceptually slower than it was in 1983
https://archive.is/G3D5K
https://archive.is/vhDTL
https://archive.is/a3321
https://archive.is/imG7S

linux terminals: https://lwn.net/Articles/751763/
techtariat  dan-luu  performance  time  hardware  consumerism  objektbuch  data  history  reflection  critique  software  roots  tainter  engineering  nitty-gritty  ui  ux  hci  ios  mobile  apple  amazon  sequential  trends  increase-decrease  measure  analysis  measurement  os  systems  IEEE  intricacy  desktop  benchmarks  rant  carmack  system-design  degrees-of-freedom  keyboard  terminal  editors  links  input-output  networking  world  s:**  multi  twitter  social  discussion  tech  programming  web  internet  speed  backup  worrydream  interface  metal-to-virtual  latency-throughput  workflow  form-design  interface-compatibility  org:junk  linux 
july 2019 by nhaliday
LeetCode - The World's Leading Online Programming Learning Platform
very much targeted toward interview prep
https://www.quora.com/Is-LeetCode-Online-Judges-premium-membership-really-worth-it
This data is especially valuable because you get to know a company's interview style beforehand. For example, most questions that appeared in Facebook interviews have short solutions, typically not more than 30 lines of code. Their interview process focuses on your ability to write clean, concise code. On the other hand, Google-style interviews lean more on the analytical side and are algorithm-heavy, typically with multiple solutions to a question - each with a different run-time complexity.
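
[ed.: a hypothetical illustration of the short, clean style described; the classic two-sum question in well under 30 lines of Python:]

    def two_sum(nums, target):
        """Return indices of two numbers in nums that sum to target, else None."""
        seen = {}                       # value -> index, for values scanned so far
        for i, x in enumerate(nums):
            if target - x in seen:      # complement already seen: done
                return seen[target - x], i
            seen[x] = i
        return None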
programming  tech  career  working-stiff  recruiting  interview-prep  algorithms  problem-solving  oly-programming  multi  q-n-a  qra  comparison  stylized-facts  facebook  google  cost-benefit  homo-hetero  startups  organization  alien-character  🖥  contest  puzzles  accretion  transitions  money-for-time 
june 2019 by nhaliday
Hardware is unforgiving
Today, anyone with a CS 101 background can take Geoffrey Hinton's course on neural networks and deep learning, and start applying state of the art machine learning techniques in production within a couple months. In software land, you can fix minor bugs in real time. If it takes a whole day to run your regression test suite, you consider yourself lucky because it means you're in one of the few environments that takes testing seriously. If the architecture is fundamentally flawed, you pull out your copy of Feathers' “Working Effectively with Legacy Code” and you apply minor fixes until you're done.

This isn't to say that software isn't hard, it's just a different kind of hard: the sort of hard that can be attacked with genius and perseverance, even without experience. But, if you want to build a ship, and you "only" have a decade of experience with carpentry, milling, metalworking, etc., well, good luck. You're going to need it. With a large ship, “minor” fixes can take days or weeks, and a fundamental flaw means that your ship sinks and you've lost half a year of work and tens of millions of dollars. By the time you get to something with the complexity of a modern high-performance microprocessor, a minor bug discovered in production costs three months and five million dollars. A fundamental flaw in the architecture will cost you five years and hundreds of millions of dollars[2].

Physical mistakes are costly. There's no undo and editing isn't simply a matter of pressing some keys; changes consume real, physical resources. You need enough wisdom and experience to avoid common mistakes entirely – especially the ones that can't be fixed.
techtariat  comparison  software  hardware  programming  engineering  nitty-gritty  realness  roots  explanans  startups  tech  sv  the-world-is-just-atoms  examples  stories  economics  heavy-industry  hard-tech  cs  IEEE  oceans  trade  korea  asia  recruiting  britain  anglo  expert-experience  growth-econ  world  developing-world  books  recommendations  intricacy  dan-luu  age-generation  system-design  correctness  metal-to-virtual  psycho-atoms  move-fast-(and-break-things)  kumbaya-kult 
june 2019 by nhaliday
The End of the Editor Wars » Linux Magazine
Moreover, even if you assume a broad margin of error, the polls aren't even close. With all the various text editors available today, Vi and Vim continue to be the choice of over a third of users, while Emacs is well back in the pack, no longer a competitor for most popular text editor.

https://www.quora.com/Are-there-more-Emacs-or-Vim-users
I believe Vim is actually more popular, but it's hard to find any real data on it. The best source I've seen is the annual StackOverflow developer survey where 15.2% of developers used Vim compared to a mere 3.2% for Emacs.

Oddly enough, the report noted that "Data scientists and machine learning developers are about 3 times more likely to use Emacs than any other type of developer," which is not necessarily what I would have expected.

[ed. NB: Vim still dominates overall.]

https://pinboard.in/u:nhaliday/b:6adc1b1ef4dc

Time To End The vi/Emacs Debate: https://cacm.acm.org/blogs/blog-cacm/226034-time-to-end-the-vi-emacs-debate/fulltext

Vim, Emacs and their forever war. Does it even matter any more?: https://blog.sourcerer.io/vim-emacs-and-their-forever-war-does-it-even-matter-any-more-697b1322d510
Like an episode of “Silicon Valley”, a discussion of Emacs vs. Vim used to have a polarizing effect that would guarantee a stimulating conversation, regardless of an engineer’s actual alignment. But nowadays, diehard Emacs and Vim users are getting much harder to find. Maybe I’m in the wrong orbit, but looking around today, I see that engineers are equally or even more likely to choose any one of a number of great (for any given definition of ‘great’) modern editors or IDEs such as Sublime Text, Visual Studio Code, Atom, IntelliJ (… or one of its siblings), Brackets, Visual Studio or Xcode, to name a few. It’s not surprising really — many top engineers weren’t even born when these editors were at version 1.0, and GUIs (for better or worse) hadn’t been invented.

...

… both forums have high traffic and up-to-the-minute comment and discussion threads. Some of the available statistics paint a reasonably healthy picture — Stackoverflow’s 2016 developer survey ranks Vim 4th out of 24 with 26.1% of respondents in the development environments category claiming to use it. Emacs came 15th with 5.2%. In combination, over 30% is, actually, quite impressive considering they’ve been around for several decades.

What’s odd, however, is that if you ask someone — say a random developer — to express a preference, the likelihood is that they will favor one or the other even if they have used neither in anger. Maybe the meme has spread so widely that all responses are now predominantly ritualistic, and represent something more fundamental than peoples’ mere preference for an editor? There’s a rather obvious political hypothesis waiting to be made — that Emacs is the leftist, socialist, centralized state, while Vim represents the right and the free market, specialization and capitalism red in tooth and claw.

How is Emacs/Vim used in companies like Google, Facebook, or Quora? Are there any libraries or tools they share in public?: https://www.quora.com/How-is-Emacs-Vim-used-in-companies-like-Google-Facebook-or-Quora-Are-there-any-libraries-or-tools-they-share-in-public
In Google there's a fair amount of vim and emacs. I would say at least every other engineer uses one or another.

Among Software Engineers, emacs seems to be more popular, about 2:1. Among Site Reliability Engineers, vim is more popular, about 9:1.
--
People use both at Facebook, with (in my opinion) slightly better tooling for Emacs than Vim. We share a master.emacs and master.vimrc file, which contains the bare essentials (like syntactic highlighting for the Hack language). We also share a Ctags file that's updated nightly with a cron script.

Beyond the essentials, there's a group for Emacs users at Facebook that provides tips, tricks, and major-modes created by people at Facebook. That's where Adam Hupp first developed his excellent mural-mode (ahupp/mural), which does for Ctags what ido did for file finding and buffer switching.
--
For emacs, it was very informal at Google. There wasn't a huge community of Emacs users at Google, so there wasn't much more than a wiki and a couple language styles matching Google's style guides.

https://trends.google.com/trends/explore?date=all&geo=US&q=%2Fm%2F07zh7,%2Fm%2F01yp0m

https://www.quora.com/Why-is-interest-in-Emacs-dropping
And it is still that. It’s just that emacs is no longer unique, and neither is Lisp.

Dynamically typed scripting languages with garbage collection are a dime a dozen now. Anybody in their right mind developing an extensible text editor today would just use python, ruby, lua, or JavaScript as the extension language and get all the power of Lisp combined with vibrant user communities and millions of lines of ready-made libraries that Stallman and Steele could only dream of in the 70s.

In fact, in many ways emacs and elisp have fallen behind: 40 years after Lambda, the Ultimate Imperative, elisp is still dynamically scoped, and it still doesn’t support multithreading — when I try to use dired to list the files on a slow NFS mount, the entire editor hangs just as thoroughly as it might have in the 1980s. And when I say “doesn’t support multithreading,” I don’t mean there is some other clever trick for continuing to do work while waiting on a system call, like asynchronous callbacks or something. There’s start-process which forks a whole new process, and that’s about it. It’s a concurrency model straight out of 1980s UNIX land.

But being essentially just a decent text editor has robbed emacs of much of its competitive advantage. In a world where every developer tool is scriptable with languages and libraries an order of magnitude more powerful than cranky old elisp, the reason to use emacs is not that it lets a programmer hit a button and evaluate the current expression interactively (which must have been absolutely amazing at one point in the past).

https://www.reddit.com/r/emacs/comments/bh5kk7/why_do_many_new_users_still_prefer_vim_over_emacs/

more general comparison, not just popularity:
Differences between Emacs and Vim: https://stackoverflow.com/questions/1430164/differences-between-Emacs-and-vim

https://www.reddit.com/r/emacs/comments/9hen7z/what_are_the_benefits_of_emacs_over_vim/

https://unix.stackexchange.com/questions/986/what-are-the-pros-and-cons-of-vim-and-emacs

https://www.quora.com/Why-is-Vim-the-programmers-favorite-editor
- Adrien Lucas Ecoffet,

Because it is hard to use. Really.

However, the second part of this sentence applies to just about every good editor out there: if you really learn Sublime Text, you will become super productive. If you really learn Emacs, you will become super productive. If you really learn Visual Studio… you get the idea.

Here’s the thing though, you never actually need to really learn your text editor… Unless you use vim.

...

For many people new to programming, this is the first time they have been a power user of… well, anything! And because they’ve been told how great Vim is, many of them will keep at it and actually become productive, not because Vim is particularly more productive than any other editor, but because it didn’t provide them with a way to not be productive.

They then go on to tell their friends how great Vim is, and their friends go on to become power users and tell their friends in turn, and so forth. All these people believe they became productive because they changed their text editor. Little do they realize that they became productive because their text editor changed them[1].

This is in no way a criticism of Vim. I myself was a beneficiary of such a phenomenon when I learned to type using the Dvorak layout: at that time, I believed that Dvorak would help you type faster. Now I realize the evidence is mixed and that Dvorak might not be much better than Qwerty. However, learning Dvorak forced me to develop good typing habits because I could no longer rely on looking at my keyboard (since I was still using a Qwerty physical keyboard), and this has made me a much more productive typist.

Technical Interview Performance by Editor/OS/Language: https://triplebyte.com/blog/technical-interview-performance-by-editor-os-language
[ed.: I'm guessing this is confounded to all hell.]

The #1 most common editor we see used in interviews is Sublime Text, with Vim close behind.

Emacs represents a fairly small market share today at just about a quarter the userbase of Vim in our interviews. This nicely matches the 4:1 ratio of Google Search Trends for the two editors.

...

Vim takes the prize here, but PyCharm and Emacs are close behind. We’ve found that users of these editors tend to pass our interview at an above-average rate.

On the other end of the spectrum is Eclipse: it appears that someone using either Vim or Emacs is more than twice as likely to pass our technical interview as an Eclipse user.

...

In this case, we find that the average Ruby, Swift, and C# users tend to be stronger, with Python and Javascript in the middle of the pack.

...

Here’s what happens after we select engineers to work with and send them to onsites:

[Python does best.]

There are no wild outliers here, but let’s look at the C++ segment. While C++ programmers have the most challenging time passing Triplebyte’s technical interview on average, the ones we choose to work with tend to have a relatively easier time getting offers at each onsite.

The Rise of Microsoft Visual Studio Code: https://triplebyte.com/blog/editor-report-the-rise-of-visual-studio-code
This chart shows the rates at which each editor's users pass our interview compared to the mean pass rate for all candidates. First, notice the preeminence of Emacs and Vim! Engineers who use these editors pass our interview at significantly higher rates than other engineers. And the effect size is not small. Emacs users pass our interview at a rate 50… [more]
news  linux  oss  tech  editors  devtools  tools  comparison  ranking  flux-stasis  trends  ubiquity  unix  increase-decrease  multi  q-n-a  qra  data  poll  stackex  sv  facebook  google  integration-extension  org:med  politics  stereotypes  coalitions  decentralized  left-wing  right-wing  chart  scale  time-series  distribution  top-n  list  discussion  ide  parsimony  intricacy  cost-benefit  tradeoffs  confounding  analysis  crosstab  pls  python  c(pp)  jvm  microsoft  golang  hmm  correlation  debate  critique  quora  contrarianism  ecosystem  DSL  techtariat  org:com  org:nat  cs 
june 2019 by nhaliday
Should I go for TensorFlow or PyTorch?
Honestly, most experts that I know love Pytorch and detest TensorFlow. Karpathy and Justin from Stanford for example. You can see Karpathy's thoughts and I've asked Justin personally and the answer was sharp: PYTORCH!!! TF has lots of PR but its API and graph model are horrible and will waste lots of your research time.

--

...

Updated Mar 12
Update after 2019 TF summit:

TL/DR: previously I was in the pytorch camp but with TF 2.0 it’s clear that Google is really going to try to have parity with, or be better than, Pytorch in all aspects where people voiced concerns (ease of use/debugging/dynamic graphs). They seem to be allocating more resources to development than Facebook, so the longer term currently looks promising for Google. Prior to TF 2.0 I thought that the Pytorch team had more momentum. One area where FB/Pytorch is still stronger: Google is a bit more closed and doesn’t seem to release reproducible cutting-edge models such as AlphaGo, whereas FAIR released OpenGo, for instance. Generally you will end up running into models that are only implemented in one framework or the other, so chances are you might end up learning both.
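
[ed.: the "ease of use/debugging/dynamic graphs" point, concretely; a minimal PyTorch sketch (TF 2.0's eager mode behaves similarly), assuming torch is installed:]

    import torch

    x = torch.randn(3, requires_grad=True)
    y = (x ** 2).sum()   # the graph is built on the fly as ops execute
    y.backward()         # gradients computed immediately, no session or compile step
    print(x.grad)        # equals 2*x; values are inspectable mid-computation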
q-n-a  qra  comparison  software  recommendations  cost-benefit  tradeoffs  python  libraries  machine-learning  deep-learning  data-science  sci-comp  tools  google  facebook  tech  competition  best-practices  trends  debugging  expert-experience  ecosystem  theory-practice  pragmatic  wire-guided  static-dynamic  state  academia  frameworks  open-closed 
may 2019 by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information[1].

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a Udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx
https://news.ycombinator.com/item?id=4531549

https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that the authors missed when they wrote the code.
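
[ed.: what this looks like in practice; a minimal Hypothesis property-based test (assuming pip install hypothesis and a pytest-style runner), checking an encode/decode round-trip instead of hand-picked cases:]

    from hypothesis import given, strategies as st

    def encode(s):
        """Toy run-length encoder: 'aaab' -> [('a', 3), ('b', 1)]."""
        out = []
        for ch in s:
            if out and out[-1][0] == ch:
                out[-1] = (ch, out[-1][1] + 1)
            else:
                out.append((ch, 1))
        return out

    def decode(pairs):
        return "".join(ch * n for ch, n in pairs)

    @given(st.text())              # Hypothesis generates many strings,
    def test_roundtrip(s):         # shrinking any failure to a minimal case
        assert decode(encode(s)) == s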

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge of trivia becomes [ed.: missing the word "valued" here, I think?] more than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-glob 
may 2019 by nhaliday
Information Processing: Moore's Law and AI
Hint to technocratic planners: invest more in physicists, chemists, and materials scientists. The recent explosion in value from technology has been driven by physical science -- software gets way too much credit. From the former we got a factor of a million or more in compute power, data storage, and bandwidth. From the latter, we gained (perhaps) an order of magnitude or two in effectiveness: how much better are current OSes and programming languages than Unix and C, both of which are ~50 years old now?

...

Of relevance to this discussion: a big chunk of AlphaGo's performance improvement over other Go programs is due to raw compute power (link via Jess Riedel). The vertical axis is ELO score. You can see that without multi-GPU compute, AlphaGo has relatively pedestrian strength.
hsu  scitariat  comparison  software  hardware  performance  sv  tech  trends  ai  machine-learning  deep-learning  deepgoog  google  roots  impact  hard-tech  multiplicative  the-world-is-just-atoms  technology  trivia  cocktail  big-picture  hi-order-bits 
may 2019 by nhaliday
its-not-software - steveyegge2
You don't work in the software industry.

...

So what's the software industry, and how do we differ from it?

Well, the software industry is what you learn about in school, and it's what you probably did at your previous company. The software industry produces software that runs on customers' machines — that is, software intended to run on a machine over which you have no control.

So it includes pretty much everything that Microsoft does: Windows and every application you download for it, including your browser.

It also includes everything that runs in the browser, including Flash applications, Java applets, and plug-ins like Adobe's Acrobat Reader. Their deployment model is a little different from the "classic" deployment models, but it's still software that you package up and release to some unknown client box.

...

Servware

Our industry is so different from the software industry, and it's so important to draw a clear distinction, that it needs a new name. I'll call it Servware for now, lacking anything better. Hardware, firmware, software, servware. It fits well enough.

Servware is stuff that lives on your own servers. I call it "stuff" advisedly, since it's more than just software; it includes configuration, monitoring systems, data, documentation, and everything else you've got there, all acting in concert to produce some observable user experience on the other side of a network connection.
techtariat  sv  tech  rhetoric  essay  software  saas  devops  engineering  programming  contrarianism  list  top-n  best-practices  applicability-prereqs  desktop  flux-stasis  homo-hetero  trends  games  thinking  checklists  dbs  models  communication  tutorial  wiki  integration-extension  frameworks  api  whole-partial-many  metrics  retrofit  c(pp)  pls  code-dive  planning  working-stiff  composition-decomposition  libraries  conceptual-vocab  amazon  system-design  cracker-prog  tech-infrastructure  blowhards  client-server  project-management 
may 2019 by nhaliday
Why books don’t work | Andy Matuschak
https://www.spreaker.com/user/10197011/designing-and-developing-new-tools-for-t
https://twitter.com/andy_matuschak/status/1190675776036687878
https://archive.is/hNIFG
https://archive.is/f9Bwh
hmm: "zettelkasten like note systems have you do a linear search for connections, that gets exponentially more expensive as your note body grows",
https://twitter.com/Meaningness/status/1210309788141117440
https://archive.is/P6PH2
https://archive.is/uD9ls
https://archive.is/Sb9Jq

https://twitter.com/Scholars_Stage/status/1199702832728948737
https://archive.is/cc4zf
I reviewed today my catalogue of 420~ books I have read over the last six years and I am in despair. There are probably 100~ whose contents I can tell you almost nothing about—nothing noteworthy anyway.
techtariat  worrydream  learning  education  teaching  higher-ed  neurons  thinking  rhetoric  essay  michael-nielsen  retention  better-explained  bounded-cognition  info-dynamics  info-foraging  books  communication  lectures  contrarianism  academia  scholar  design  meta:reading  studying  form-design  writing  technical-writing  skunkworks  multi  broad-econ  wonkish  unaffiliated  twitter  social  discussion  backup  reflection  metameta  podcast  audio  interview  impetus  space  open-problems  questions  tech  hard-tech  startups  commentary  postrat  europe  germanic  notetaking  graphs  network-structure  similarity  intersection-connectedness  magnitude  cost-benefit  multiplicative 
may 2019 by nhaliday
Links 3/19: Linkguini | Slate Star Codex
How did the descendants of the Mayan Indians end up in the Eastern Orthodox Church?

Does Parental Quality Matter? Study using three sources of parental variation that are mostly immune to genetic confounding finds that “the strong parent-child correlation in education is largely causal”. For example, “the parent-child correlation in education is stronger with the parent that spends more time with the child”.

Before and after pictures of tech leaders like Jeff Bezos, Elon Musk, and Sergey Brin suggest they’re taking supplemental testosterone. And though it may help them keep looking young, Palladium points out that there might be other effects from having some of our most powerful businessmen on a hormone that increases risk-taking and ambition. They ask whether the new availability of testosterone supplements is prolonging Silicon Valley businessmen’s “brash entrepreneur” phase well past the point where they would normally become mature respectable elders. But it also hints at an almost opposite take: average testosterone levels have been falling for decades, so at this point these businessmen would be the only “normal” (by 1950s standards) men out there, and everyone else would be unprecedently risk-averse and boring. Paging Peter Thiel and everyone else who talks about how things “just worked better” in Eisenhower’s day.

China’s SesameCredit social monitoring system, widely portrayed as dystopian, has an 80% approval rate in China (vs. 19% neutral and 1% disapproval). The researchers admit that although all data is confidential and they are not affiliated with the Chinese government, their participants might not believe that confidently enough to answer honestly.

I know how much you guys love attacking EAs for “pathological altruism” or whatever terms you’re using nowadays, so here’s an article where rationalist community member John Beshir describes his experience getting malaria on purpose to help researchers test a vaccine.

Some evidence against the theory that missing fathers cause earlier menarche.

John Nerst of EverythingStudies’ political compass.
ratty  yvain  ssc  links  multi  biodet  behavioral-gen  regularizer  causation  contrarianism  education  correlation  parenting  developmental  direct-indirect  time  religion  christianity  eastern-europe  russia  latin-america  other-xtian  endocrine  trends  malaise  stagnation  thiel  barons  tech  sv  business  rot  zeitgeist  outcome-risk  critique  environmental-effects  poll  china  asia  authoritarianism  alt-inst  sentiment  policy  n-factor  individualism-collectivism  pro-rata  technocracy  managerial-state  civil-liberty  effective-altruism  subculture  wtf  disease  parasites-microbiome  patho-altruism  self-interest  lol  africa  experiment  medicine  expression-survival  things  dimensionality  degrees-of-freedom  sex  composition-decomposition  analytical-holistic  systematic-ad-hoc  coordination  alignment  cooperate-defect  politics  coalitions  ideology  left-wing  right-wing  summary  exit-voice  redistribution  randy-ayndy  welfare-state 
march 2019 by nhaliday
Manifold – man·i·fold /ˈmanəˌfōld/ many and various.
https://infoproc.blogspot.com/2019/01/a-grand-experiment.html
Silicon Valley (Big Tech and startups and VC)
Financial Markets
Academia (Good, Bad, and Ugly)
The View from Europe
The View from Asia (Life in PRC? Fear and Loathing of PRC?)
Frontiers of Science (AI, Genomics, Physics, ...)
Frontiers of Rationality
The Billionaire Life
MMA / UFC
What Millennials think us old folks don't understand
True things that you are not allowed to say
Bubbles that are ready to pop?
Under-appreciated Genius?
Overrated Crap and Frauds?
podcast  audio  stream  hsu  scitariat  science  frontier  interview  physics  genetics  biotech  technology  bio  interdisciplinary  spearhead  multi  genomics  sv  tech  finance  academia  europe  EU  china  asia  wealth  class  fighting  age-generation  westminster  censorship  truth  cycles  economics  people  realness  arbitrage  subculture  ratty  rationality 
march 2019 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are only as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
What Peter Thiel thinks about AI risk - Less Wrong
TL;DR: he thinks it's an issue but also feels AGI is very distant, and is hence less worried about it than Musk.

I recommend the rest of the lecture as well; it's a good summary of "Zero to One", followed by a good Q&A.

For context, in case anyone doesn't realize: Thiel has been MIRI's top donor throughout its history.

other stuff:
nice interview question: "what do you know to be true that not everyone agrees on?"
"learning from failure is overrated"
cleantech is a huge market but hard to compete in
software makes for easy monopolies (zero marginal costs, network effects, etc.)
for most of history, inventors did not benefit much (continuous competition)
ethical behavior is a luxury of monopoly
ratty  lesswrong  commentary  ai  ai-control  risk  futurism  technology  speedometer  audio  presentation  musk  thiel  barons  frontier  miri-cfar  charity  people  track-record  venture  startups  entrepreneurialism  contrarianism  competition  market-power  business  google  truth  management  leadership  socs-and-mops  dark-arts  skunkworks  hard-tech  energy-resources  wire-guided  learning  software  sv  tech  network-structure  scale  marginal  cost-benefit  innovation  industrial-revolution  economics  growth-econ  capitalism  comparison  nationalism-globalism  china  asia  trade  stagnation  things  dimensionality  exploratory  world  developing-world  thinking  definite-planning  optimism  pessimism  intricacy  politics  war  career  planning  supply-demand  labor  science  engineering  dirty-hands  biophysical-econ  migration  human-capital  policy  canada  anglo  winner-take-all  polarization  amazon  business-models  allodium  civilization  the-classics  microsoft  analogy  gibbon  conquest-empire  realness  cynicism-idealism  org:edu  open-closed  ethics  incentives  m 
february 2018 by nhaliday
Reid Hoffman and Peter Thiel and technology and politics - Marginal REVOLUTION
econotariat  marginal-rev  links  video  interview  thiel  barons  randy-ayndy  cryptocurrency  ai  communism  individualism-collectivism  civil-liberty  sv  tech  automation  speedometer  stagnation  technology  politics  current-events  trends  democracy  usa  malthus  zero-positive-sum  china  asia  stanford  news  org:local  polarization  economics  cycles  growth-econ  zeitgeist  housing  urban-rural  california  the-west  decentralized  privacy  anonymity  inequality  multi  winner-take-all  realpolitik  machiavelli  error  order-disorder  leviathan  dirty-hands  the-world-is-just-atoms  heavy-industry  embodied  engineering  reflection  trump  2016-election  pessimism  definite-planning  optimism  left-wing  right-wing  steel-man  managerial-state  orwellian  vampire-squid  contrarianism  age-generation  econ-productivity  compensation  time-series  feudal  gnosis-logos 
february 2018 by nhaliday
The Automated Public Sphere by Frank Pasquale :: SSRN
This article first describes the documented, negative effects of online propagandists’ interventions (and platforms’ neglect) in both electoral politics and the broader public sphere (Part I). It then proposes several legal and educational tactics to mitigate platforms’ power, or to encourage or require them to exercise it responsibly (Part II). The penultimate section (Part III) offers a concession to those suspicious of governmental intervention in the public sphere: some regimes are already too authoritarian or unreliable to be trusted with extensive powers of regulation over media (whether old or new media), or intermediaries. However, the inadvisability of extensive media regulation in disordered societies only makes this agenda more urgent in well-ordered societies, lest predictable pathologies of the automated public sphere degrade their processes of democratic will formation.
study  media  propaganda  info-dynamics  internet  automation  bots  social  facebook  google  tech  politics  polisci  law  rhetoric  regulation  madisonian 
november 2017 by nhaliday
design patterns - What is MVC, really? - Software Engineering Stack Exchange
The model manages the fundamental behaviors and data of the application. It can respond to requests for information, respond to instructions to change the state of its information, and even notify observers in event-driven systems when information changes. This could be a database, or any number of data structures or storage systems. In short, it is the data and data-management of the application.

The view effectively provides the user interface element of the application. It'll render data from the model into a form that is suitable for the user interface.

The controller receives user input and makes calls to model objects and the view to perform appropriate actions.

...

Though this answer has 21 upvotes, I find the sentence "This could be a database, or any number of data structures or storage systems. (tl;dr: it's the data and data-management of the application)" horrible. The model is the pure business/domain logic, and it can and should be so much more than the data management of an application. I also differentiate between domain logic and application logic. A controller should never contain business/domain logic or talk to a database directly.
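
To make the separation concrete, here is a minimal sketch of the pattern in Python — hypothetical names throughout, not any particular framework's API. The model owns the data and the domain rules and notifies observers; the view only renders; the controller only translates input into model calls.

class TaskModel:
    """Model: owns data and domain logic; notifies observers on change."""
    def __init__(self):
        self._tasks = []
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def add_task(self, title):
        if not title.strip():              # domain rule lives in the model
            raise ValueError("task title must be non-empty")
        self._tasks.append(title)
        for notify in self._observers:     # event-driven update of views
            notify(list(self._tasks))

class TaskView:
    """View: renders model data for the user; holds no domain logic."""
    def render(self, tasks):
        print("Tasks:", ", ".join(tasks) if tasks else "(none)")

class TaskController:
    """Controller: translates user input into model calls."""
    def __init__(self, model, view):
        self.model = model
        model.subscribe(view.render)

    def handle_input(self, line):
        self.model.add_task(line)

model, view = TaskModel(), TaskView()
controller = TaskController(model, view)
controller.handle_input("write retrospective")   # prints: Tasks: write retrospective

Note the direction of the dependencies: the view knows nothing about the controller, and the model knows nothing about either — it just publishes changes to whoever subscribed.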
q-n-a  stackex  explanation  concept  conceptual-vocab  structure  composition-decomposition  programming  engineering  best-practices  pragmatic  jargon  thinking  metabuch  working-stiff  tech  🖥  checklists  code-organizing  abstraction  project-management 
october 2017 by nhaliday
Americans Used to be Proud of their Universities | The American Conservative
Some Notes on the Finances of Top Chinese Universities: https://www.insidehighered.com/blogs/world-view/some-notes-finances-top-chinese-universities
A glimpse into the finances of top Chinese universities suggests they share more than we might have imagined with American flagship public universities, but also that claims of imminent “catch up” might be overblown
news  org:mag  right-wing  reflection  history  early-modern  pre-ww2  mostly-modern  europe  germanic  britain  gibbon  trends  rot  zeitgeist  usa  china  asia  sinosphere  higher-ed  academia  westminster  comparison  analogy  multi  org:edu  money  monetary-fiscal  data  analysis  pro-rata  cs  tech  realness  social-science  the-world-is-just-atoms  science  innovation  is-ought  truth  identity-politics 
october 2017 by nhaliday
Definite optimism as human capital | Dan Wang
I’ve come to the view that creativity and innovative capacity aren’t a fixed stock, coiled and waiting to be released by policy. Now, I know that a country will not do well if it has poor infrastructure, interest rate management, tax and regulation levels, and a whole host of other issues. But getting them right isn’t sufficient to promote innovation; past a certain margin, when they’re all at rational levels, we ought to focus on promoting creativity and drive as a means to propel growth.

...

When I say “positive” vision, I don’t mean that people must see the future as a cheerful one. Instead, I’m saying that people ought to have a vision at all: A clear sense of how the technological future will be different from today. To have a positive vision, people must first expand their imaginations. And I submit that an interest in science fiction, the material world, and proximity to industry all help to refine that optimism. I mean to promote imagination by direct injection.

...

If a state has lost most of its jobs for electrical engineers, or nuclear engineers, or mechanical engineers, then fewer young people in that state will study those practices, and technological development in related fields slows down a little further. When I bring up these thoughts on resisting industrial decline to economists, I'm unsatisfied with their responses. They tend to respond by tautology ("By definition, outsourcing improves on the status quo") or arithmetic (see: gains from comparative advantage, Ricardo). These kinds of logical exercises are not enough. I would like for more economists to consider a human capital perspective for preserving manufacturing expertise (to some degree).

I wonder if the so-called developed countries should be careful of their own premature deindustrialization. The US industrial base has faltered, but there is still so much left to build. Until we’ve perfected asteroid mining and super-skyscrapers and fusion rockets and Jupiter colonies and matter compilers, we can’t be satisfied with innovation confined mostly to the digital world.

Those who don't mind the decline of manufacturing employment like to say that people have moved on to higher-value work. But I'm not sure that this is usually the case. Even if there's an endlessly capacious service sector to absorb job losses in manufacturing, it's often the case that these new jobs feature lower productivity growth and involve greater rent-seeking. Not everyone is becoming hedge fund managers and machine learning engineers. According to the BLS, the bulk of service jobs are in 1. government (22 million), 2. professional services (19m), 3. healthcare (18m), 4. retail (15m), and 5. leisure and hospitality (15m). In addition to often being low-paying yet still competitive, a great deal of service sector jobs tend to stress capacity for emotional labor over capacity for manual labor. And it's the latter that tends to be more present in fields involving technological upgrading.

...

Here’s a bit more skepticism of service jobs. In an excellent essay on declining productivity growth, Adair Turner makes the point that many service jobs are essentially zero-sum. I’d like to emphasize and elaborate on that idea here.

...

Call me a romantic, but I’d like everyone to think more about industrial lubricants, gas turbines, thorium reactors, wire production, ball bearings, underwater cables, and all the things that power our material world. I abide by a strict rule never to post or tweet about current political stuff; instead I try to draw more attention to the world of materials. And I’d like to remind people that there are many things more edifying than following White House scandals.

...

First, we can all try to engage more actively with the material world, not merely the digital or natural world. Go ahead and pick an industrial phenomenon and learn more about it. Learn more about the history of aviation, and what it took to break the sound barrier; gaze at the container ships as they sail into port, and keep in mind that they carry 90 percent of the goods you see around you; read about what we mold plastics to do; meditate on the importance of steel in civilization; figure out what’s driving the decline in the cost of solar energy production, or how we draw electricity from nuclear fission, or what it takes to extract petroleum or natural gas from the ground.

...

Here’s one more point that I’d like to add on Girard at college: I wonder if to some extent current dynamics are the result of the liberal arts approach of “college teaches you how to think, not what to think.” I’ve never seen much data to support this wonderful claim that college is good at teaching critical thinking skills. Instead, students spend most of their energies focused on raising or lowering the status of the works they study or the people around them, giving rise to the Girardian terror that has gripped so many campuses.

How Technology Grows (a restatement of definite optimism): https://danwang.co/how-technology-grows/

College as an incubator of Girardian terror: http://danwang.co/college-girardian-terror/
It’s hard to construct a more perfect incubator for mimetic contagion than the American college campus. Most 18-year-olds are not super differentiated from each other. By construction, whatever distinctions any does have are usually earned through brutal, zero-sum competitions. These tournament-type distinctions include: SAT scores at or near perfection; being a top player on a sports team; gaining master status from chess matches; playing first instrument in state orchestra; earning high rankings in Math Olympiad; and so on, culminating in gaining admission to a particular college.

Once people enter college, they get socialized into group environments that usually continue to operate in zero-sum competitive dynamics. These include orchestras and sport teams; fraternities and sororities; and many types of clubs. The biggest source of mimetic pressure is the classes. Everyone starts out by taking the same intro classes; those seeking distinction throw themselves into the hardest classes, or seek tutelage from star professors, and try to earn the highest grades.

Mimesis Machines and Millennials: http://quillette.com/2017/11/02/mimesis-machines-millennials/
In 1956, a young Liverpudlian named John Winston Lennon heard the mournful notes of Elvis Presley’s Heartbreak Hotel, and was transformed. He would later recall, “nothing really affected me until I heard Elvis. If there hadn’t been an Elvis, there wouldn’t have been the Beatles.” It is an ancient human story. An inspiring model, an inspired imitator, and a changed world.

Mimesis is the phenomenon of human mimicry. Humans see, and they strive to become what they see. The prolific Franco-Californian philosopher René Girard described the human hunger for imitation as mimetic desire. According to Girard, mimetic desire is a mighty psychosocial force that drives human behavior. When attempted imitation fails (e.g. I want, but fail, to imitate my colleague's promotion to VP of Business Development), mimetic rivalry arises. According to mimetic theory, periodic scapegoating—the ritualistic expelling of a member of the community—evolved as a way for archaic societies to diffuse rivalries and maintain the general peace.

As civilization matured, social institutions evolved to prevent conflict. To Girard, sacrificial religious ceremonies first arose as imitations of earlier scapegoating rituals. In the mimetic worldview, healthy social institutions perform two primary functions:

1. They satisfy mimetic desire and reduce mimetic rivalry by allowing imitation to take place.
2. They thereby reduce the need to diffuse mimetic rivalry through scapegoating.
Tranquil societies possess and value institutions that are mimesis tolerant. These institutions, such as religion and family, are Mimesis Machines. They enable millions to see, imitate, and become new versions of themselves. Mimesis Machines satiate the primal desire for imitation and produce happy, contented people. Through Mimesis Machines, Elvis fans can become Beatles.

Volatile societies, on the other hand, possess and value mimesis resistant institutions that frustrate attempts at mimicry, and mass produce frustrated, resentful people. These institutions, such as capitalism and beauty hierarchies, are Mimesis Shredders. They stratify humanity, and block the ‘nots’ from imitating the ‘haves’.
techtariat  venture  commentary  reflection  innovation  definite-planning  thiel  barons  economics  growth-econ  optimism  creative  malaise  stagnation  higher-ed  status  error  the-world-is-just-atoms  heavy-industry  sv  zero-positive-sum  japan  flexibility  china  outcome-risk  uncertainty  long-short-run  debt  trump  entrepreneurialism  human-capital  flux-stasis  cjones-like  scifi-fantasy  labor  dirty-hands  engineering  usa  frontier  speedometer  rent-seeking  econ-productivity  government  healthcare  essay  rhetoric  contrarianism  nascent-state  unintended-consequences  volo-avolo  vitality  technology  tech  cs  cycles  energy-resources  biophysical-econ  trends  zeitgeist  rot  alt-inst  proposal  multi  news  org:mag  org:popup  philosophy  big-peeps  speculation  concept  religion  christianity  theos  buddhism  politics  polarization  identity-politics  egalitarianism-hierarchy  inequality  duplication  society  anthropology  culture-war  westminster  info-dynamics  tribalism  institutions  envy  age-generation  letters  noble-lie 
october 2017 by nhaliday