
nhaliday : frontier (311)

The Future of Mathematics? [video] | Hacker News
https://news.ycombinator.com/item?id=20909404
Kevin Buzzard (the Lean guy)

- general reflection on proof assistants/theorem provers
- Thomas Hales's Formal Abstracts project, etc.
- thinks that, of the available theorem provers, Lean is "[the only one currently available that may be capable of formalizing all of mathematics eventually]" (goes into more detail right at the end, e.g. quotient types)
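For a sense of what machine-checked mathematics looks like, here is a minimal Lean 4 sketch of my own (not from the talk):

    -- a named theorem whose proof the Lean kernel checks mechanically,
    -- discharged here by a core-library lemma:
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- the kernel also certifies facts that hold by computation:
    example : 2 + 2 = 4 := rfl

Formalizing research-level mathematics means scaling this discipline up to whole theories, which is where design choices like Lean's quotient types start to matter.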
hn  commentary  discussion  video  talks  presentation  math  formal-methods  expert-experience  msr  frontier  state-of-art  proofs  rigor  education  higher-ed  optimism  prediction  lens  search  meta:research  speculation  exocortex  skunkworks  automation  research  math.NT  big-surf  software  parsimony  cost-benefit  intricacy  correctness  programming  pls  python  functional  haskell  heavyweights  research-program  review  reflection  multi  pdf  slides  oly  experiment  span-cover  git  vcs  teaching  impetus  academia  composition-decomposition  coupling-cohesion  database  trust  types  plt  lifts-projections  induction  critique  beauty  truth  elegance  aesthetics 
october 2019 by nhaliday
Mars Direct | West Hunter
Send Mr Bezos. He even looks like a Martian.
--
Throw in Zuckerberg and it’s a deal…
--
We could send twice as many people half-way to Mars.

--

I don’t think that the space station has been worth anything at all.

As for a lunar base, many of the issues are difficult and one (effects of low-gee) is probably impossible to solve.

I don’t think that there are real mysteries about what is needed for a kind-of self-sufficient base – it’s just too hard and there’s not much prospect of a payoff.

That said, there may be other ways of going about this that are more promising.

--

Venus is worth terraforming: no gravity problems. Doable.

--

It’s not impossible that Mars might harbor microbial life – with some luck, life with a different chemical basis. That might be very valuable: there are endless industrial processes that depend upon some kind of fermentation.
Why, without acetone fermentation, there might not be a state of Israel.
--
If we used a reasonable approach, like Orion, I think that people would usefully supplement those robots.

https://westhunt.wordpress.com/2019/01/11/the-great-divorce/
Jeff Bezos isn’t my favorite guy, but he has ability and has built something useful. And an ugly, contested divorce would be harsh and unfair to the children, who have done nothing wrong.

But I don’t care. The thought of tens of billions of dollars being spent on lawyers and PIs offers the possibility of a spectacle that will live forever, far wilder than the antics of Nero or Caligula. It could make Suetonius look like Pilgrim’s Progress.

Have you ever wondered whether tens of thousands of divorce lawyers should be organized into legions or phalanxes? This is our chance to finally find out.
west-hunter  scitariat  commentary  current-events  trump  politics  troll  space  expansionism  frontier  cost-benefit  ideas  speculation  roots  deep-materialism  definite-planning  geoengineering  wild-ideas  gravity  barons  amazon  facebook  sv  tech  government  debate  critique  physics  mechanics  robotics  multi  lol  law  responsibility  drama  beginning-middle-end  direct-indirect 
september 2019 by nhaliday
Karol Kuczmarski's Blog – A Haskell retrospective
Even in this hypothetical scenario, I posit that the value proposition of Haskell would still be a tough sell.

There is this old quote from Bjarne Stroustrup (creator of C++) where he says that programming languages divide into those everyone complains about, and those that no one uses.
The first group consists of old, established technologies that managed to accrue significant complexity debt through years and decades of evolution. All the while, they’ve been adapting to the constantly shifting perspectives on what are the best industry practices. Traces of those adaptations can still be found today, sticking out like a leftover appendix or residual tail bone — or like the built-in support for XML in Java.

Languages that “no one uses”, on the other hand, haven’t yet passed the industry threshold of sufficient maturity and stability. Their ecosystems are still cutting edge, and their future is uncertain, but they sometimes champion some really compelling paradigm shifts. As long as you can bear with things that are rough around the edges, you can take advantage of their novel ideas.

Unfortunately for Haskell, it manages to combine the worst parts of both of these worlds.

On one hand, it is a surprisingly old language, clocking more than two decades of fruitful research around many innovative concepts. Yet on the other hand, it bears the signs of a fresh new technology, with relatively few production-grade libraries, scarce coverage of some domains (e.g. GUI programming), and not too many stories of commercial successes.

There are many ways to do it
String theory
Errors and how to handle them
Implicit is better than explicit
Leaky modules
Namespaces are apparently a bad idea
Wild records
Purity beats practicality
techtariat  reflection  functional  haskell  programming  pls  realness  facebook  pragmatic  cost-benefit  legacy  libraries  types  intricacy  engineering  tradeoffs  frontier  homo-hetero  duplication  strings  composition-decomposition  nitty-gritty  error  error-handling  coupling-cohesion  critique  ecosystem  c(pp)  aphorism 
august 2019 by nhaliday
The Scholar's Stage: Book Notes—Strategy: A History
https://twitter.com/Scholars_Stage/status/1151681120787816448
https://archive.is/Bp5eu
Freedman's book is something of a shadow history of Western intellectual thought between 1850 and 2010. Marx, Tolstoy, Foucault, game theorists, economists, business law--it is all in there.

Thus the thoughts prompted by this book have surprisingly little to do with war.
Instead I am left with questions about the long-term trajectory of Western thought. Specifically:

*Has America really dominated Western intellectual life in the post-45 world as much as English speakers seem to think it has?
*Has the professionalization/credential-ization of Western intellectual life helped or harmed our ability to understand society?
*Will we ever recover from the 1960s?
wonkish  unaffiliated  broad-econ  books  review  reflection  summary  strategy  war  higher-ed  academia  social-science  letters  organizing  nascent-state  counter-revolution  rot  westminster  culture-war  left-wing  anglosphere  usa  history  mostly-modern  coordination  lens  local-global  europe  gallic  philosophy  cultural-dynamics  anthropology  game-theory  industrial-org  schelling  flux-stasis  trends  culture  iraq-syria  MENA  military  frontier  info-dynamics  big-peeps  politics  multi  twitter  social  commentary  backup  defense 
july 2019 by nhaliday
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. But we can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.
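To make the incoherence concrete, here is the standard textbook illustration (my addition, not the essay's): the infinitesimal derivation of the derivative of x^2 runs

    \frac{(x+\varepsilon)^2 - x^2}{\varepsilon}
      = \frac{2x\varepsilon + \varepsilon^2}{\varepsilon}  % dividing: requires \varepsilon \neq 0
      = 2x + \varepsilon
      \approx 2x                                           % discarding: treats \varepsilon as 0

The division step requires ε ≠ 0, while the final step discards ε as if it were 0; limits (and, much later, non-standard analysis) resolved exactly this equivocation.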

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that while editing Mathematical Reviews that “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]
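To give the flavor of the numbered-step style (a toy example of my own, not Lamport's), a structured proof that the square of an even number is even might read:

    THEOREM. If n is even, then n^2 is even.
    <1>1. n = 2k for some integer k.
          PROOF: definition of "even".
    <1>2. n^2 = 2(2k^2).
          PROOF: square <1>1: (2k)^2 = 4k^2 = 2(2k^2).
    <1>3. Q.E.D.
          PROOF: by <1>2, n^2 is twice an integer, hence even.

Every leaf step is small and explicit enough to check mechanically, which is what makes the style a halfway house to fully machine-checked proofs.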

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (e.g. Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.
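As a back-of-the-envelope sketch of that scaling claim (my own arithmetic; the line counts are from the passage above, while the residual defect density is an illustrative assumption):

    # Expected residual bugs if defects scale linearly with lines of code.
    # 0.5 defects/KLOC is an assumed post-release density, chosen only for
    # illustration; published industry figures vary widely.
    DEFECTS_PER_KLOC = 0.5

    systems = {
        "MS-DOS kernel":       4_000,
        "Windows Server 2003": 50_000_000,
        "Debian repo (2007)":  323_551_126,
    }

    for name, loc in systems.items():
        bugs = loc / 1_000 * DEFECTS_PER_KLOC
        print(f"{name:>20}: ~{bugs:,.0f} expected residual bugs")

Even before any diseconomies of scale, the linear estimate alone puts tens of thousands to over a hundred thousand latent bugs in the larger code bases.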

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc(x) = (sin x)/x.

Someone found the following result in an algebra package: ∫_0^∞ sinc(x) dx = π/2
They then found the following results:

...

So of course when they got:

∫_0^∞ sinc(x) sinc(x/3) sinc(x/5) ⋯ sinc(x/15) dx = (467807924713440738696537864469/935615849440640907310521750000) π
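It's easy to verify with exact rational arithmetic just how close that coefficient is to 1/2 (a quick check of my own):

    import math
    from fractions import Fraction

    # coefficient of pi in the sinc(x)*sinc(x/3)*...*sinc(x/15) integral
    c = Fraction(467807924713440738696537864469,
                 935615849440640907310521750000)

    deficit = Fraction(1, 2) - c
    print(float(deficit))            # ~7.4e-12
    print(float(deficit) * math.pi)  # the integral misses pi/2 by ~2.3e-11

So the pattern of exact π/2 answers finally fails by roughly 2.3e-11, which is exactly why it looked like a floating-point bug rather than mathematics.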

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb. 16th, 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]
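A toy illustration of that intensional/extensional gap, using the open-source SymPy CAS (my own example, not the course's):

    import sympy as sp

    x = sp.Symbol('x')

    # Intensionally, x/x cancels to 1 unconditionally at construction...
    print(x / x)              # 1

    # ...but extensionally, the *function* x -> sin(x)/x has a hole at 0:
    f = sp.sin(x) / x
    print(f.subs(x, 0))       # nan  (0/0: the pointwise view sees the hole)
    print(sp.limit(f, x, 0))  # 1    (the limit exists all the same)

The algebraic rewrite is valid as an identity of expressions but only almost-everywhere valid as an identity of functions, which is the mismatch the course was about.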

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high-security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitely a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.
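To make the "embedded assertion" style concrete, here is a hedged Python sketch of my own (runtime contracts standing in for what SPARK or Dafny would prove statically; far weaker than the machine-checked Leftpad proofs):

    def leftpad(c: str, n: int, s: str) -> str:
        """Pad s on the left with character c to length max(n, len(s))."""
        out = c * max(n - len(s), 0) + s
        # Embedded assertions: the three properties a full verification
        # would prove for all inputs, here only checked per call.
        assert len(out) == max(n, len(s))           # correct length
        assert out.endswith(s)                      # original string is a suffix
        assert set(out[:len(out) - len(s)]) <= {c}  # prefix is all pad chars
        return out

    print(leftpad("0", 5, "42"))  # 00042

The same three properties could instead be stated as an independent theorem or encoded in a dependent type; only the point in the pipeline where they are checked changes.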

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
Home is a small, engineless sailboat (2018) | Hacker News
Her deck looked disorderly; metal pipes lying on either side of the cabin, what might have been a bed sheet or sail cover (or one and the same) bunched between oxidized turnbuckles and portlights. A purple hula hoop. A green bucket. Several small, carefully potted plants. At the stern, a weathered tree limb lashed to a metal cradle – the arm of a sculling oar. There was no motor. The transom was partially obscured by a wind vane and Alexandra’s years of exposure to the elements were on full display.

...

Sean is a programmer, a fervent believer in free open source code – software programs available to the public to use and/or modify free of charge. His only computer is the Raspberry Pi he uses to code and control his autopilot, which he calls pypilot. Sean is also a programmer for and regular contributor to OpenCPN Chart Plotter Navigation, free open source software for cruisers. “I mostly write the graphics or the way it draws the chart, but a lot more than that, like how it draws the weather patterns and how it can calculate routes, like you should sail this way.”

from the comments:
Have also read both; they're fascinating in different ways. Paul Lutus has a boat full of technology (diesel engine, laptop, radio, navigation tools, and more) but his book is an intensely - almost uncomfortably - personal voyage through his psyche, while he happens to be sailing around the world. A diary of reflections on life, struggles with people, views on science, observations on the stars and sky and waves, poignant writing on how being at sea affects people, while he happens to be sailing around the world. It's better for that, more relatable as a geek, sadder and more emotional; I consider it a good read, and I reflect on it a lot.
Captain Slocum's voyage of 1896(?) is so different; he took an old clock, and not much else, he lashes the tiller and goes down below for hours at a time to read or sleep without worrying about crashing into other boats, he tells stories of mouldy cheese induced nightmares during rough seas or chasing natives away from robbing him, or finding remote islands with communities of slightly odd people. Much of his writing is about the people he meets - they often know in advance he's making a historic voyage, so when he arrives anywhere, there's a big fuss, he's invited to dine with local dignitaries or captains of large ships, gifted interesting foods and boat parts, there's a lot of interesting things about the world of 1896. (There's also quite a bit of tedious place names and locations and passages where nothing much happens, I'm not that interested in the geography of it).
hn  commentary  oceans  books  reflection  stories  track-record  world  minimum-viable  dirty-hands  links  frontier  allodium  prepping  navigation  oss  hacker 
july 2019 by nhaliday
C++ Core Guidelines
This document is a set of guidelines for using C++ well. The aim of this document is to help people to use modern C++ effectively. By “modern C++” we mean effective use of the ISO C++ standard (currently C++17, but almost all of our recommendations also apply to C++14 and C++11). In other words, what would you like your code to look like in 5 years’ time, given that you can start now? In 10 years’ time?

https://isocpp.github.io/CppCoreGuidelines/
“Within C++ is a smaller, simpler, safer language struggling to get out.” – Bjarne Stroustrup

...

The guidelines are focused on relatively higher-level issues, such as interfaces, resource management, memory management, and concurrency. Such rules affect application architecture and library design. Following the rules will lead to code that is statically type safe, has no resource leaks, and catches many more programming logic errors than is common in code today. And it will run fast - you can afford to do things right.

We are less concerned with low-level issues, such as naming conventions and indentation style. However, no topic that can help a programmer is out of bounds.

Our initial set of rules emphasize safety (of various forms) and simplicity. They may very well be too strict. We expect to have to introduce more exceptions to better accommodate real-world needs. We also need more rules.

...

The rules are designed to be supported by an analysis tool. Violations of rules will be flagged with references (or links) to the relevant rule. We do not expect you to memorize all the rules before trying to write code.

contrary:
https://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
This will be a long wall of text, and kinda random! My main points are:
1. C++ compile times are important,
2. Non-optimized build performance is important,
3. Cognitive load is important. I don’t expand much on this here, but if a programming language or a library makes me feel stupid, then I’m less likely to use it or like it. C++ does that a lot :)
programming  engineering  pls  best-practices  systems  c(pp)  guide  metabuch  objektbuch  reference  cheatsheet  elegance  frontier  libraries  intricacy  advanced  advice  recommendations  big-picture  novelty  lens  philosophy  state  error  types  concurrency  memory-management  performance  abstraction  plt  compilers  expert-experience  multi  checking  devtools  flux-stasis  safety  system-design  techtariat  time  measure  dotnet  comparison  examples  build-packaging  thinking  worse-is-better/the-right-thing  cost-benefit  tradeoffs  essay  commentary  oop  correctness  computer-memory  error-handling  resources-effects  latency-throughput 
june 2019 by nhaliday
Philip Guo - Research Design Patterns
List of ways to generate research directions. Some are pretty specific to applied CS.
techtariat  nibble  academia  meta:research  scholar  cs  systems  list  top-n  checklists  ideas  creative  frontier  memes(ew)  info-dynamics  innovation  novelty  the-trenches  tactics 
may 2019 by nhaliday
Why is Software Engineering so difficult? - James Miller
basic message: No silver bullet!

most interesting nuggets:
Scale and Complexity
- Windows 7 > 50 million LOC
Expect a staggering number of bugs.

Bugs?
- Well-written C and C++ code contains some 5 to 10 errors per 100 LOC after a clean compile, but before inspection and testing.
- At a 5% rate any 50 MLOC program will start off with some 2.5 million bugs.

Bug removal
- Testing typically exercises only half the code.

Better bug removal?
- There are better ways to do testing that do produce fantastic programs.
- Are we sure about this fact?
* No, it's only an opinion!
* In general Software Engineering has ....
NO FACTS!

So why not do this?
- The costs are unbelievable.
- It’s not unusual for the qualification process to produce a half page of documentation for each line of code.
pdf  slides  engineering  nitty-gritty  programming  best-practices  roots  comparison  cost-benefit  software  systematic-ad-hoc  structure  error  frontier  debugging  checking  formal-methods  context  detail-architecture  intricacy  big-picture  system-design  correctness  scale  scaling-tech  shipping  money  data  stylized-facts  street-fighting  objektbuch  pro-rata  estimate  pessimism  degrees-of-freedom  volo-avolo  no-go  things  thinking  summary  quality  density  methodology 
may 2019 by nhaliday
Reverse salients | West Hunter
Edison thought in terms of reverse salients and critical problems.

“Reverse salients are areas of research and development that are lagging in some obvious way behind the general line of advance. Critical problems are the research questions, cast in terms of the concrete particulars of currently available knowledge and technique and of specific exemplars or models that are solvable and whose solutions would eliminate the reverse salients.”

What strikes you as an important current example of a reverse salient, and the associated critical problem or problems?
west-hunter  scitariat  discussion  science  technology  innovation  low-hanging  list  top-n  research  open-problems  the-world-is-just-atoms  marginal  definite-planning  frontier  🔬  speedometer  ideas  the-trenches  hi-order-bits  prioritizing  judgement 
may 2019 by nhaliday
Manifold – man·i·fold /ˈmanəˌfōld/ many and various.
https://infoproc.blogspot.com/2019/01/a-grand-experiment.html
Silicon Valley (Big Tech and startups and VC)
Financial Markets
Academia (Good, Bad, and Ugly)
The View from Europe
The View from Asia (Life in PRC? Fear and Loathing of PRC?)
Frontiers of Science (AI, Genomics, Physics, ...)
Frontiers of Rationality
The Billionaire Life
MMA / UFC
What Millennials think us old folks don't understand
True things that you are not allowed to say
Bubbles that are ready to pop?
Under-appreciated Genius?
Overrated Crap and Frauds?
podcast  audio  stream  hsu  scitariat  science  frontier  interview  physics  genetics  biotech  technology  bio  interdisciplinary  spearhead  multi  genomics  sv  tech  finance  academia  europe  EU  china  asia  wealth  class  fighting  age-generation  westminster  censorship  truth  cycles  economics  people  realness  arbitrage  subculture  ratty  rationality 
march 2019 by nhaliday
John Dee - Wikipedia
John Dee (13 July 1527 – 1608 or 1609) was an English mathematician, astronomer, astrologer, occult philosopher,[5] and advisor to Queen Elizabeth I. He devoted much of his life to the study of alchemy, divination, and Hermetic philosophy. He was also an advocate of England's imperial expansion into a "British Empire", a term he is generally credited with coining.[6]

Dee straddled the worlds of modern science and magic just as the former was emerging. One of the most learned men of his age, he had been invited to lecture on the geometry of Euclid at the University of Paris while still in his early twenties. Dee was an ardent promoter of mathematics and a respected astronomer, as well as a leading expert in navigation, having trained many of those who would conduct England's voyages of discovery.

Simultaneously with these efforts, Dee immersed himself in the worlds of magic, astrology and Hermetic philosophy. He devoted much time and effort in the last thirty years or so of his life to attempting to commune with angels in order to learn the universal language of creation and bring about the pre-apocalyptic unity of mankind. However, Robert Hooke suggested in the chapter Of Dr. Dee's Book of Spirits, that John Dee made use of Trithemian steganography, to conceal his communication with Elizabeth I.[7] A student of the Renaissance Neo-Platonism of Marsilio Ficino, Dee did not draw distinctions between his mathematical research and his investigations into Hermetic magic, angel summoning and divination. Instead he considered all of his activities to constitute different facets of the same quest: the search for a transcendent understanding of the divine forms which underlie the visible world, which Dee called "pure verities".

In his lifetime, Dee amassed one of the largest libraries in England. His high status as a scholar also allowed him to play a role in Elizabethan politics. He served as an occasional advisor and tutor to Elizabeth I and nurtured relationships with her ministers Francis Walsingham and William Cecil. Dee also tutored and enjoyed patronage relationships with Sir Philip Sidney, his uncle Robert Dudley, 1st Earl of Leicester, and Edward Dyer. He also enjoyed patronage from Sir Christopher Hatton.

https://twitter.com/Logo_Daedalus/status/985203144044040192
https://archive.is/h7ibQ
mind meld

Leave Me Alone! Misanthropic Writings from the Anti-Social Edge
people  big-peeps  old-anglo  wiki  history  early-modern  britain  anglosphere  optimate  philosophy  mystic  deep-materialism  science  aristos  math  geometry  conquest-empire  nietzschean  religion  christianity  theos  innovation  the-devil  forms-instances  god-man-beast-victim  gnosis-logos  expansionism  age-of-discovery  oceans  frontier  multi  twitter  social  commentary  backup  pic  memes(ew)  gnon  🐸  books  literature 
april 2018 by nhaliday
Harnessing Evolution - with Bret Weinstein | Virtual Futures Salon - YouTube
- ways to get out of Malthusian conditions: expansion to new frontiers, new technology, redistribution/theft
- some discussion of existential risk
- wants to change humanity's "purpose" to one that would be safe in the long run; the important thing is that it has to be an ESS, i.e. an evolutionarily stable strategy (maybe he wants a singleton?)
- not too impressed by transhumanism (wouldn't identify with a brain emulation)
video  interview  thiel  expert-experience  evolution  deep-materialism  new-religion  sapiens  cultural-dynamics  anthropology  evopsych  sociality  ecology  flexibility  biodet  behavioral-gen  self-interest  interests  moloch  arms  competition  coordination  cooperate-defect  frontier  expansionism  technology  efficiency  thinking  redistribution  open-closed  zero-positive-sum  peace-violence  war  dominant-minority  hypocrisy  dignity  sanctity-degradation  futurism  environment  climate-change  time-preference  long-short-run  population  scale  earth  hidden-motives  game-theory  GT-101  free-riding  innovation  leviathan  malthus  network-structure  risk  existence  civil-liberty  authoritarianism  tribalism  us-them  identity-politics  externalities  unintended-consequences  internet  social  media  pessimism  universalism-particularism  energy-resources  biophysical-econ  politics  coalitions  incentives  attention  epistemic  biases  blowhards  teaching  education  emotion  impetus  comedy  expression-survival  economics  farmers-and-foragers  ca 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI. Even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
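That parenthetical claim is easy to check with a toy simulation (mine, not Hanson's): give everyone independent module strengths, score each task as the average of a random subset of modules, and the shared modules alone induce a positive manifold, i.e. a common factor:

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_modules, n_tasks = 2000, 50, 20

    # Independent module strengths: no g factor is built in.
    modules = rng.normal(size=(n_people, n_modules))

    # Each task draws on a random subset of 10 modules.
    scores = np.empty((n_people, n_tasks))
    for t in range(n_tasks):
        used = rng.choice(n_modules, size=10, replace=False)
        scores[:, t] = modules[:, used].mean(axis=1)

    corr = np.corrcoef(scores, rowvar=False)
    off_diag = corr[~np.eye(n_tasks, dtype=bool)]
    print(f"mean inter-task correlation: {off_diag.mean():.2f}")  # ~0.2, positive

Because any two tasks overlap in some modules on average, task scores correlate positively and a factor analysis would recover a "g"-like first factor.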

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox
We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods. This results in a considerable sharpening of the Fermi paradox.
pdf  study  article  essay  anthropic  fermi  space  expansionism  bostrom  ratty  philosophy  xenobio  ideas  threat-modeling  intricacy  time  civilization  🔬  futurism  questions  paradox  risk  physics  engineering  interdisciplinary  frontier  technology  volo-avolo  dirty-hands  ai  automation  robotics  duplication  iteration-recursion  von-neumann  data  scale  magnitude  skunkworks  the-world-is-just-atoms  hard-tech  ems  bio  bits  speedometer  nature  model-organism  mechanics  phys-energy  relativity  electromag  analysis  spock  nitty-gritty  spreading  hanson  street-fighting  speed  gedanken  nibble 
march 2018 by nhaliday
The Coming Technological Singularity
Within thirty years, we will have the technological
means to create superhuman intelligence. Shortly after,
the human era will be ended.

Is such progress avoidable? If not to be avoided, can
events be guided so that we may survive? These questions
are investigated. Some possible answers (and some further
dangers) are presented.

_What is The Singularity?_

The acceleration of technological progress has been the central
feature of this century. I argue in this paper that we are on the edge
of change comparable to the rise of human life on Earth. The precise
cause of this change is the imminent creation by technology of
entities with greater than human intelligence. There are several means
by which science may achieve this breakthrough (and this is another
reason for having confidence that the event will occur):
o The development of computers that are "awake" and
superhumanly intelligent. (To date, most controversy in the
area of AI relates to whether we can create human equivalence
in a machine. But if the answer is "yes, we can", then there
is little doubt that beings more intelligent can be constructed
shortly thereafter.)
o Large computer networks (and their associated users) may "wake
up" as a superhumanly intelligent entity.
o Computer/human interfaces may become so intimate that users
may reasonably be considered superhumanly intelligent.
o Biological science may find ways to improve upon the natural
human intellect.

The first three possibilities depend in large part on
improvements in computer hardware. Progress in computer hardware has
followed an amazingly steady curve in the last few decades [16]. Based
largely on this trend, I believe that the creation of greater than
human intelligence will occur during the next thirty years. (Charles
Platt [19] has pointed out that AI enthusiasts have been making claims
like this for the last thirty years. Just so I'm not guilty of a
relative-time ambiguity, let me be more specific: I'll be surprised if
this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human
intelligence drives progress, that progress will be much more rapid.
In fact, there seems no reason why progress itself would not involve
the creation of still more intelligent entities -- on a still-shorter
time scale. The best analogy that I see is with the evolutionary past:
Animals can adapt to problems and make inventions, but often no faster
than natural selection can do its work -- the world acts as its own
simulator in the case of natural selection. We humans have the ability
to internalize the world and conduct "what if's" in our heads; we can
solve many problems thousands of times faster than natural selection.
Now, by creating the means to execute those simulations at much higher
speeds, we are entering a regime as radically different from our human
past as we humans are from the lower animals.
org:junk  humanity  accelerationism  futurism  prediction  classic  technology  frontier  speedometer  ai  risk  internet  time  essay  rhetoric  network-structure  ai-control  morality  ethics  volo-avolo  egalitarianism-hierarchy  intelligence  scale  giants  scifi-fantasy  speculation  quotes  religion  theos  singularity  flux-stasis  phase-transition  cybernetics  coordination  cooperate-defect  moloch  communication  bits  speed  efficiency  eden-heaven  ecology  benevolence  end-times  good-evil  identity  the-self  whole-partial-many  density 
march 2018 by nhaliday
Antinomia Imediata – experiments in a reaction from the left
https://antinomiaimediata.wordpress.com/lrx/
So, what is the Left Reaction? First of all, it’s reaction: opposition to the modern rationalist establishment, the Cathedral. It opposes the universalist Jacobin program of global government, favoring a fractured geopolitics organized through long-evolved complex systems. It’s profoundly anti-socialist and anti-communist, favoring market economy and individualism. It abhors tribalism and seeks a realistic plan for dismantling it (primarily informed by HBD and HBE). It looks at modernity as a degenerative ratchet, whose only way out is intensification (hence clinging to crypto-marxist market-driven acceleration).

How can any of this still be on the *Left*? It defends equality of power, i.e. freedom. This radical understanding of liberty is deeply rooted in leftist tradition and has been consistently abhorred by the Right. LRx is not democrat, is not socialist, is not progressive and is not even liberal (in its current, American use). But it defends equality of power. Its utopia is individual sovereignty. Its method is paleo-agorism. The anti-hierarchy of hunter-gatherer nomads is its understanding of the only realistic objective of equality.

...

In more cosmic terms, it seeks only to fulfill the Revolution’s side in the left-right intelligence pump: mutation or creation of paths. Proudhon’s antinomy is essentially about this: the collective force of the socius, evinced in moral standards and social organization vs the creative force of the individuals, that constantly revolutionize and disrupt the social body. The interplay of these forces create reality (it’s a metaphysics indeed): the Absolute (socius) builds so that the (individualistic) Revolution can destroy so that the Absolute may adapt, and then repeat. The good old formula of ‘solve et coagula’.

Ultimately, if the Neoreaction promises eternal hell, the LRx sneers “but Satan is with us”.

https://antinomiaimediata.wordpress.com/2016/12/16/a-statement-of-principles/
Liberty is to be understood as the ability and right of all sentient beings to dispose of their persons and the fruits of their labor, and nothing else, as they see fit. This stems from their self-awareness and their ability to control and choose the content of their actions.

...

Equality is to be understood as the state of no imbalance of power, that is, of no subjection to another sentient being. This stems from their universal ability for empathy, and from their equal ability for reason.

...

It is important to notice that, contrary to usual statements of these two principles, my standpoint is that Liberty and Equality here are not merely compatible, meaning they could coexist in some possible universe, but rather they are two sides of the same coin, complementary and interdependent. There can be NO Liberty where there is no Equality, for the imbalance of power, the state of subjection, will render sentient beings unable to dispose of their persons and the fruits of their labor[1], and it will limit their ability to choose over their rightful jurisdiction. Likewise, there can be NO Equality without Liberty, for restraining sentient beings’ ability to choose and dispose of their persons and fruits of labor will render some more powerful than the rest, and establish a state of subjection.

https://antinomiaimediata.wordpress.com/2017/04/18/flatness/
equality is the founding principle of (and ultimately indistinguishable from) freedom. of course, it’s only in one specific sense of “equality” that this sentence is true.

to try and eliminate the bullshit, let’s turn to networks again:

a node’s degrees of freedom are the number of nodes it is connected to in a network. freedom is maximal when the network is symmetrically connected, i.e. when all nodes are connected to each other and thus there is no topographical hierarchy (middlemen) – in other words, flatness.
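A toy computation of this notion (my own illustration): compare a fully connected five-node network with a star that routes everything through a middleman:

    # degrees of freedom per node = its degree (number of neighbors)
    def degrees(edges, n):
        deg = [0] * n
        for a, b in edges:
            deg[a] += 1
            deg[b] += 1
        return deg

    n = 5
    complete = [(i, j) for i in range(n) for j in range(i + 1, n)]  # flat
    star = [(0, i) for i in range(1, n)]                            # hierarchy

    print(degrees(complete, n))  # [4, 4, 4, 4, 4] - symmetric, maximal freedom
    print(degrees(star, n))      # [4, 1, 1, 1, 1] - the hub monopolizes freedom

In the flat network every node has the maximal degree n-1; in the star, freedom concentrates in the middleman, which is the hierarchy the passage opposes.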

in this understanding, the maximization of freedom is the maximization of entropy production, that is, of intelligence. As Land puts it:

https://antinomiaimediata.wordpress.com/category/philosophy/mutualism/
gnon  blog  stream  politics  polisci  ideology  philosophy  land  accelerationism  left-wing  right-wing  paradox  egalitarianism-hierarchy  civil-liberty  power  hmm  revolution  analytical-holistic  mutation  selection  individualism-collectivism  tribalism  us-them  modernity  multi  tradeoffs  network-structure  complex-systems  cybernetics  randy-ayndy  insight  contrarianism  metameta  metabuch  characterization  cooperate-defect  n-factor  altruism  list  coordination  graphs  visual-understanding  cartoons  intelligence  entropy-like  thermo  information-theory  order-disorder  decentralized  distribution  degrees-of-freedom  analogy  graph-theory  extrema  evolution  interdisciplinary  bio  differential  geometry  anglosphere  optimate  nascent-state  deep-materialism  new-religion  cool  mystic  the-classics  self-interest  interests  reason  volo-avolo  flux-stasis  invariance  government  markets  paying-rent  cost-benefit  peace-violence  frontier  exit-voice  nl-and-so-can-you  war  track-record  usa  history  mostly-modern  world-war  military  justice  protestant-cathol 
march 2018 by nhaliday
Existential Risks: Analyzing Human Extinction Scenarios
https://twitter.com/robinhanson/status/981291048965087232
https://archive.is/dUTD5
Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?
Can anyone suggest a different first approximation that would get more votes?

https://twitter.com/robinhanson/status/981335898502545408
https://archive.is/RpygO
How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For instance, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point - that intelligent life arose on our planet - is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
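The net example is easy to simulate (my own sketch): under truncated sampling, the observed maximum tells you almost nothing about the true maximum:

    import random

    random.seed(1)
    NET_LIMIT = 3.0  # the net can only catch fish up to 3 inches

    # a pond whose biggest fish far exceed what the net can catch
    pond = [random.uniform(0.5, 12.0) for _ in range(10_000)]

    catch = [f for f in pond if f <= NET_LIMIT][:100]  # first hundred caught
    print(f"true max:     {max(pond):.1f} in")   # ~12.0
    print(f"observed max: {max(catch):.1f} in")  # ~3.0, regardless of the truth

Swap "the net catches only small fish" for "observations require an observer" and the same truncation contaminates inferences about the probability of intelligent life.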
bostrom  ratty  miri-cfar  skunkworks  philosophy  org:junk  list  top-n  frontier  speedometer  risk  futurism  local-global  scale  death  nihil  technology  simulation  anthropic  nuclear  deterrence  environment  climate-change  arms  competition  ai  ai-control  genetics  genomics  biotech  parasites-microbiome  disease  offense-defense  physics  tails  network-structure  epidemiology  space  geoengineering  dysgenics  ems  authoritarianism  government  values  formal-values  moloch  enhancement  property-rights  coordination  cooperate-defect  flux-stasis  ideas  prediction  speculation  humanity  singularity  existence  cybernetics  study  article  letters  eden-heaven  gedanken  multi  twitter  social  discussion  backup  hanson  metrics  optimization  time  long-short-run  janus  telos-atelos  poll  forms-instances  threat-modeling  selection  interview  expert-experience  malthus  volo-avolo  intel  leviathan  drugs  pharma  data  estimate  nature  longevity  expansionism  homo-hetero  utopia-dystopia 
march 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are only as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.
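The "exponential feedback loops" claim can be made concrete with a toy model (my own illustration, not from the article; all constants are arbitrary): capability growing in proportion to itself gives ordinary exponential growth, while growth proportional to the square of capability blows up in finite time, which is the usual mathematical cartoon of a singularity.

def grow(rate, i0=1.0, dt=1e-3, t_max=10.0, cap=1e12):
    """Forward-Euler integration of dI/dt = rate(I); stops if I explodes."""
    i, t = i0, 0.0
    while t < t_max and i < cap:
        i += rate(i) * dt
        t += dt
    return t, i

t1, i1 = grow(lambda i: i)      # dI/dt = I   -> plain exponential growth
t2, i2 = grow(lambda i: i * i)  # dI/dt = I^2 -> finite-time blowup near t = 1/I0
print(f"linear feedback:    I({t1:.2f}) ~ {i1:.3g}  (still finite at t_max)")
print(f"quadratic feedback: hits the cap already at t ~ {t2:.2f}")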

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
What Peter Thiel thinks about AI risk - Less Wrong
TL;DR: he thinks it's an issue, but he also feels AGI is very distant and is hence less worried about it than Musk.

I recommend the rest of the lecture as well; it's a good summary of "Zero to One", with a good Q&A afterwards.

For context, in case anyone doesn't realize: Thiel has been MIRI's top donor throughout its history.

other stuff:
nice interview question: "thing you know is true that not everyone agrees on?"
"learning from failure overrated"
cleantech a huge market, hard to compete
software makes for easy monopolies (zero marginal costs, network effects, etc.; toy model sketched after these notes)
for most of history inventors did not benefit much (continuous competition)
ethical behavior is a luxury of monopoly
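One way to see the monopoly point in the notes above (a toy Metcalfe-style model; every number is made up): if each user's utility grows with the number of other users while the marginal cost of serving a user is near zero, a small early lead compounds.

def utility_per_user(n_users, quality=1.0, network_coeff=1e-4):
    # Each user's utility = standalone product quality + a network term
    # that grows with how many other users are already on the product.
    return quality + network_coeff * n_users

a, b = 510_000, 490_000  # two rival products with a nearly even start
for year in range(10):
    ua, ub = utility_per_user(a), utility_per_user(b)
    movers = int(0.10 * (a + b) * (ua - ub) / (ua + ub))  # users drift to higher utility
    a, b = a + movers, b - movers
    print(f"year {year}: A = {a:>9,}   B = {b:>9,}")
# A's small head start compounds every year, since the larger network is
# strictly more useful to each user; with near-zero marginal cost there is
# no supply-side brake either -- the market tips toward one winner.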
ratty  lesswrong  commentary  ai  ai-control  risk  futurism  technology  speedometer  audio  presentation  musk  thiel  barons  frontier  miri-cfar  charity  people  track-record  venture  startups  entrepreneurialism  contrarianism  competition  market-power  business  google  truth  management  leadership  socs-and-mops  dark-arts  skunkworks  hard-tech  energy-resources  wire-guided  learning  software  sv  tech  network-structure  scale  marginal  cost-benefit  innovation  industrial-revolution  economics  growth-econ  capitalism  comparison  nationalism-globalism  china  asia  trade  stagnation  things  dimensionality  exploratory  world  developing-world  thinking  definite-planning  optimism  pessimism  intricacy  politics  war  career  planning  supply-demand  labor  science  engineering  dirty-hands  biophysical-econ  migration  human-capital  policy  canada  anglo  winner-take-all  polarization  amazon  business-models  allodium  civilization  the-classics  microsoft  analogy  gibbon  conquest-empire  realness  cynicism-idealism  org:edu  open-closed  ethics  incentives  m 
february 2018 by nhaliday
Fermi paradox - Wikipedia
Rare Earth hypothesis: https://en.wikipedia.org/wiki/Rare_Earth_hypothesis
Fine-tuned Universe: https://en.wikipedia.org/wiki/Fine-tuned_Universe
something to keep in mind:
Puddle theory is a term coined by Douglas Adams to satirize arguments that the universe is made for man.[54][55] As stated in Adams' book The Salmon of Doubt:[56]
Imagine a puddle waking up one morning and thinking, “This is an interesting world I find myself in, an interesting hole I find myself in, fits me rather neatly, doesn't it? In fact, it fits me staggeringly well, must have been made to have me in it!” This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it's still frantically hanging on to the notion that everything's going to be all right, because this World was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.
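For scale, the standard arithmetic behind the paradox is the Drake equation; here is a minimal sketch where every parameter value is an illustrative assumption (the Rare Earth debate is precisely over how small factors like f_i are):

# Drake-style estimate of detectable civilizations in the galaxy.
# Every value below is an illustrative assumption, not a measured fact.
R_star = 1.5     # new stars per year in the Milky Way
f_p    = 0.9     # fraction of stars with planets
n_e    = 0.5     # habitable planets per star that has planets
f_l    = 0.1     # fraction of habitable planets where life starts
f_i    = 0.01    # fraction of those that evolve intelligence (Rare Earth crux)
f_c    = 0.1     # fraction of those that emit detectable signals
L      = 10_000  # years a civilization stays detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"expected detectable civilizations: N ~ {N:.2f}")  # ~0.68 with these inputs
# Nudge f_l or f_i up two orders of magnitude and N lands in the thousands:
# plausible-looking inputs span "we're alone" to "the galaxy should be loud",
# and the observation-selection argument above says our own existence can't
# pin those factors down.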
article  concept  paradox  wiki  reference  fermi  anthropic  space  xenobio  roots  speculation  ideas  risk  threat-modeling  civilization  nihil  🔬  deep-materialism  new-religion  futurism  frontier  technology  communication  simulation  intelligence  eden  war  nuclear  deterrence  identity  questions  multi  explanans  physics  theos  philosophy  religion  chemistry  bio  hmm  idk  degrees-of-freedom  lol  troll  existence 
january 2018 by nhaliday
Information Processing: Mathematical Theory of Deep Neural Networks (Princeton workshop)
"Recently, long-past-due theoretical results have begun to emerge. These results, and those that will follow in their wake, will begin to shed light on the properties of large, adaptive, distributed learning architectures, and stand to revolutionize how computer science and neuroscience understand these systems."
hsu  scitariat  commentary  links  research  research-program  workshop  events  princeton  sanjeev-arora  deep-learning  machine-learning  ai  generalization  explanans  off-convex  nibble  frontier  speedometer  state-of-art  big-surf  announcement 
january 2018 by nhaliday
The Roman Virtues
These are the qualities of life to which every citizen should aspire. They are the heart of the Via Romana--the Roman Way--and are thought to be those qualities which gave the Roman Republic the moral strength to conquer and civilize the world:
Auctoritas--"Spiritual Authority": The sense of one's social standing, built up through experience, Pietas, and Industria.
Comitas--"Humor": Ease of manner, courtesy, openness, and friendliness.
Clementia--"Mercy": Mildness and gentleness.
Dignitas--"Dignity": A sense of self-worth, personal pride.
Firmitas--"Tenacity": Strength of mind, the ability to stick to one's purpose.
Frugalitas--"Frugalness": Economy and simplicity of style, without being miserly.
Gravitas--"Gravity": A sense of the importance of the matter at hand, responsibility and earnestness.
Honestas--"Respectability": The image that one presents as a respectable member of society.
Humanitas--"Humanity": Refinement, civilization, learning, and being cultured.
Industria--"Industriousness": Hard work.
Pietas--"Dutifulness": More than religious piety; a respect for the natural order socially, politically, and religiously. Includes the ideas of patriotism and devotion to others.
Prudentia--"Prudence": Foresight, wisdom, and personal discretion.
Salubritas--"Wholesomeness": Health and cleanliness.
Severitas--"Sternness": Gravity, self-control.
Veritas--"Truthfulness": Honesty in dealing with others.

THE ROMAN CONCEPT OF FIDES: https://www.csun.edu/~hcfll004/fides.html
"FIDES" is often (and wrongly) translated 'faith', but it has nothing to do with the word as used by Christians writing in Latin about the Christian virute (St. Paul Letter to the Corinthians, chapter 13). For the Romans, FIDES was an essential element in the character of a man of public affairs, and a necessary constituent element of all social and political transactions (perhaps = 'good faith'). FIDES meant 'reliablilty', a sense of trust between two parties if a relationship between them was to exist. FIDES was always reciprocal and mutual, and implied both privileges and responsibilities on both sides. In both public and private life the violation of FIDES was considered a serious matter, with both legal and religious consequences. FIDES, in fact, was one of the first of the 'virtues' to be considered an actual divinity at Rome. The Romans had a saying, "Punica fides" (the reliability of a Carthaginian) which for them represented the highest degree of treachery: the word of a Carthaginian (like Hannibal) was not to be trusted, nor could a Carthaginian be relied on to maintain his political elationships.

Some relationships governed by fides:

VIRTUS
VIRTUS, for the Roman, does not carry the same overtones as the Christian 'virtue'. But like the Greek andreia, VIRTUS has a primary meaning of 'acting like a man' (vir) [cf. the Renaissance virtù], and for the Romans this meant first and foremost 'acting like a brave man in military matters'. VIRTUS was to be found in the context of 'outstanding deeds' (egregia facinora), and brave deeds were the accomplishments which brought GLORIA ('a reputation'). This GLORIA was attached to two ideas: FAMA ('what people think of you') and dignitas ('one's standing in the community'). The struggle for VIRTUS at Rome was above all a struggle for public office (honos), since it was through high office, to which one was elected by the People, that a man could best show his manliness, which led to military achievement--which would lead in turn to a reputation and votes. It was the duty of every aristocrat (and would-be aristocrat) to maintain the dignitas which his family had already achieved and to extend it to the greatest possible degree (through higher political office and military victories). This system resulted in a strong built-in impetus in Roman society to engage in military expansion and conquest at all times.
org:junk  org:edu  letters  history  iron-age  mediterranean  the-classics  conquest-empire  civilization  leviathan  morality  ethics  formal-values  philosophy  status  virtu  list  personality  values  things  phalanges  alien-character  impro  dignity  power  nietzschean  martial  temperance  patience  duty  responsibility  coalitions  coordination  organizing  counter-revolution  nascent-state  discipline  self-control  cohesion  prudence  health  embodied  integrity  honor  truth  foreign-lang  top-n  canon  religion  theos  noblesse-oblige  egalitarianism-hierarchy  sulla  allodium  frontier  prepping  tradition  trust  culture  society  social-capital  jargon  hari-seldon  wisdom  concept  conceptual-vocab  good-evil  reputation  multi  exegesis-hermeneutics  stoic  new-religion  lexical  paganism 
january 2018 by nhaliday
Books 2017 | West Hunter
Arabian Sands
The Aryans
The Big Show
The Camel and the Wheel
Civil War on Western Waters
Company Commander
Double-edged Secrets
The Forgotten Soldier
Genes in Conflict
Hive Mind
The horse, the wheel, and language
The Penguin Atlas of Medieval History
Habitable Planets for Man
The genetical theory of natural selection
The Rise of the Greeks
To Lose a Battle
The Jewish War
Tropical Gangsters
The Forgotten Revolution
Egil’s Saga
Shapers
Time Patrol

Russo: https://westhunt.wordpress.com/2017/12/14/books-2017/#comment-98568
west-hunter  scitariat  books  recommendations  list  top-n  confluence  2017  info-foraging  canon  🔬  ideas  s:*  history  mostly-modern  world-war  britain  old-anglo  travel  MENA  frontier  reflection  europe  gallic  war  sapiens  antiquity  archaeology  technology  divergence  the-great-west-whale  transportation  nature  long-short-run  intel  tradecraft  japan  asia  usa  spearhead  garett-jones  hive-mind  economics  broad-econ  giants  fisher  space  iron-age  medieval  the-classics  civilization  judaism  conquest-empire  africa  developing-world  institutions  science  industrial-revolution  the-trenches  wild-ideas  innovation  speedometer  nordic  mediterranean  speculation  fiction  scifi-fantasy  time  encyclopedic  multi  poast  critique  cost-benefit  tradeoffs  quixotic 
december 2017 by nhaliday
Lynn Margulis | West Hunter
Margulis went on to theorize that symbiotic relationships between organisms are the dominant driving force of evolution. There certainly are important examples of this: as far as I know, every complex organism that digests cellulose manages it thru a symbiosis with various prokaryotes. Many organisms with a restricted diet have symbiotic bacteria that provide essential nutrients – aphids, for example. Tall fescue, a popular turf grass on golf courses, carries an endosymbiotic fungus. And so on, and so on.

She went on to oppose neodarwinism, particularly rejecting inter-organismal competition (and population genetics itself). From Wiki: [She also believed that proponents of the standard theory “wallow in their zoological, capitalistic, competitive, cost-benefit interpretation of Darwin – having mistaken him… Neo-Darwinism, which insists on [the slow accrual of mutations by gene-level natural selection], is in a complete funk.”[8]]

...

You might think that Lynn Margulis is an example of someone that could think outside the box because she’d never even been able to find it in the first place – but that’s more true of autistic types [like Dirac or Turing], which I doubt she was in any way. I’d say that some traditional prejudices [dislike of capitalism and individual competition], combined with the sort of general looniness that leaves one open to unconventional ideas, drove her in a direction that bore fruit, more or less by coincidence. A successful creative scientist does not have to be right about everything, or indeed about much of anything: they need to contribute at least one new, true, and interesting thing.

https://westhunt.wordpress.com/2017/11/25/lynn-margulis/#comment-98174
“A successful creative scientist does not have to be right about everything, or indeed about much of anything: they need to contribute at least one new, true, and interesting thing.” Yes – it’s like old bands. As long as they have just one song in heavy rotation on the classic rock stations, they can tour endlessly – it doesn’t matter that they have only one or even no original members performing. A scientific example of this phenomenon is Kary Mullis. He’ll always have PCR, even if a glowing raccoon did greet him with the words, “Good evening, Doctor.”

Nobel Savage: https://www.lrb.co.uk/v21/n13/steven-shapin/nobel-savage
Dancing Naked in the Mind Field by Kary Mullis

jet fuel can't melt steel beams: https://westhunt.wordpress.com/2017/11/25/lynn-margulis/#comment-98201
You have to understand a subject extremely well to make arguments why something couldn’t have happened. The easiest cases involve some purported explanation violating a conservation law of physics: that wasn’t the case here.

Do I think you’re a hotshot, deeply knowledgeable about structural engineering, properties of materials, using computer models, etc? A priori, pretty unlikely. What are the odds that you know as much simple mechanics as I do? A priori, still pretty unlikely. Most likely, you’re talking through your hat.

Next, the conspiracy itself is unlikely: quite a few people would be involved – unlikely that none of them would talk. It’s not that easy to find people that would go along with such a thing, believe it or not. The Communists were pretty good at conspiracy, but people defected, people talked: not just Whittaker Chambers, not just Igor Gouzenko.
west-hunter  scitariat  discussion  people  profile  science  the-trenches  innovation  discovery  ideas  turing  giants  autism  👽  bio  evolution  eden  roots  darwinian  capitalism  competition  cooperate-defect  being-right  info-dynamics  frontier  curiosity  creative  multi  poast  prudence  org:mag  org:anglo  letters  books  review  critique  summary  lol  genomics  social-science  sociology  psychology  psychiatry  ability-competence  rationality  epistemic  reason  events  terrorism  usa  islam  communism  coordination  organizing  russia  dirty-hands  degrees-of-freedom  alignment 
november 2017 by nhaliday
The Science of Roman History: Biology, Climate, and the Future of the Past (Hardcover and eBook) | Princeton University Press
Forthcoming April 2018

How the latest cutting-edge science offers a fuller picture of life in Rome and antiquity
This groundbreaking book provides the first comprehensive look at how the latest advances in the sciences are transforming our understanding of ancient Roman history. Walter Scheidel brings together leading historians, anthropologists, and geneticists at the cutting edge of their fields, who explore novel types of evidence that enable us to reconstruct the realities of life in the Roman world.

Contributors discuss climate change and its impact on Roman history, and then cover botanical and animal remains, which cast new light on agricultural and dietary practices. They exploit the rich record of human skeletal material--both bones and teeth--which forms a bio-archive that has preserved vital information about health, nutritional status, diet, disease, working conditions, and migration. Complementing this discussion is an in-depth analysis of trends in human body height, a marker of general well-being. This book also assesses the contribution of genetics to our understanding of the past, demonstrating how ancient DNA is used to track infectious diseases, migration, and the spread of livestock and crops, while the DNA of modern populations helps us reconstruct ancient migrations, especially colonization.

Opening a path toward a genuine biohistory of Rome and the wider ancient world, The Science of Roman History offers an accessible introduction to the scientific methods being used in this exciting new area of research, as well as an up-to-date survey of recent findings and a tantalizing glimpse of what the future holds.

Walter Scheidel is the Dickason Professor in the Humanities, Professor of Classics and History, and a Kennedy-Grossman Fellow in Human Biology at Stanford University. He is the author or editor of seventeen previous books, including The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (Princeton).
books  draft  todo  broad-econ  economics  anthropology  genetics  genomics  aDNA  measurement  volo-avolo  environment  climate-change  archaeology  history  iron-age  mediterranean  the-classics  demographics  health  embodied  labor  migration  walter-scheidel  agriculture  frontier  malthus  letters  gibbon  traces 
november 2017 by nhaliday
If Quantum Computers are not Possible Why are Classical Computers Possible? | Combinatorics and more
As most of my readers know, I regard quantum computing as unrealistic. You can read more about it in my Notices AMS paper and its extended version (see also this post) and in the discussion of Puzzle 4 from my recent puzzles paper (see also this post). The amazing progress and huge investment in quantum computing (which I present and update routinely in this post) will put my analysis to the test in the next few years.
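The contrast with classical computing comes down to fault tolerance. A minimal sketch (my own illustration, not Kalai's model, and it assumes independent bit-flip noise — exactly the assumption his argument questions) of how a classical repetition code suppresses errors below a threshold:

import random

def logical_error_rate(p, n_copies=5, trials=100_000):
    """Majority-vote repetition code under *independent* bit-flip noise:
    encode one bit as n_copies bits, flip each with prob. p, decode by vote."""
    failures = sum(
        sum(random.random() < p for _ in range(n_copies)) > n_copies // 2
        for _ in range(trials)
    )
    return failures / trials

for p in (0.01, 0.1, 0.4):
    print(f"physical error rate {p}: logical error rate ~ {logical_error_rate(p):.4f}")
# Below p = 0.5 the encoded bit beats a bare one, and adding copies helps
# exponentially -- the classical analogue of the fault-tolerance threshold.
# Kalai's contention is that realistic quantum noise is correlated and does
# not behave like this independent model.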
tcstariat  mathtariat  org:bleg  nibble  tcs  cs  computation  quantum  volo-avolo  no-go  contrarianism  frontier  links  quantum-info  analogy  comparison  synthesis  hi-order-bits  speedometer  questions  signal-noise 
november 2017 by nhaliday
Frontier Culture: The Roots and Persistence of “Rugged Individualism” in the United States
In a classic 1893 essay, Frederick Jackson Turner argued that the American frontier promoted individualism. We revisit the Frontier Thesis and examine its relevance at the subnational level. Using Census data and GIS techniques, we track the frontier throughout the 1790–1890 period and construct a novel, county-level measure of historical frontier experience. We document the distinctive demographics of frontier locations during this period—disproportionately male, prime-age adult, foreign-born, and illiterate—as well as their higher levels of individualism, proxied by the share of infrequent names among children. Many decades after the closing of the frontier, counties with longer historical frontier experience exhibit more prevalent individualism and opposition to redistribution and regulation. We take several steps towards a causal interpretation, including an instrumental variables approach that exploits variation in the speed of westward expansion induced by prior national immigration inflows. Using linked historical Census data, we identify mechanisms giving rise to a persistent frontier culture. Greater individualism on the frontier was not driven solely by selective migration, suggesting that frontier conditions may have shaped behavior and values. We provide evidence suggesting that rugged individualism may be rooted in its adaptive advantage on the frontier and the opportunities for upward mobility through effort.
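Their individualism proxy (share of infrequent children's names) is simple to operationalize; a minimal sketch with hypothetical toy data and an arbitrary rarity cutoff (the paper's actual definition differs):

import pandas as pd

# Hypothetical toy data standing in for linked Census records.
df = pd.DataFrame({
    "county":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "child_name": ["John", "John", "Mary", "Zebulon",
                   "John", "Mary", "Mary", "John"],
})

# Call a name "infrequent" if it falls outside the most common names overall;
# the top-2 cutoff here is arbitrary.
top_names = df["child_name"].value_counts().nlargest(2).index
df["infrequent"] = ~df["child_name"].isin(top_names)

# County-level proxy: share of children bearing infrequent names.
print(df.groupby("county")["infrequent"].mean())
# county A -> 0.25 (one Zebulon in four), county B -> 0.0:
# A scores as the more "individualist" county on this proxy.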

https://twitter.com/whyvert/status/921900860224897024
https://archive.is/jTzSe

The Origins of Cultural Divergence: Evidence from a Developing Country: http://economics.handels.gu.se/digitalAssets/1643/1643769_37.-hoang-anh-ho-ncde-2017-june.pdf
Cultural norms diverge substantially across societies, often even within the same country. In this paper, we test the voluntary settlement hypothesis, proposing that individualistic people tend to self-select into migrating out of reach from collectivist states towards the periphery and that such patterns of historical migration are reflected even in the contemporary distribution of norms. For more than one thousand years during the first millennium CE, northern Vietnam was under an exogenously imposed Chinese rule. From the eleventh to the eighteenth centuries, ancient Vietnam gradually expanded its territory through various waves of southward conquest. We demonstrate that areas being annexed earlier into ancient Vietnam are nowadays more (less) prone to collectivist (individualist) culture. We argue that the southward out-migration of individualist people was the main mechanism behind this finding. The result is consistent across various measures obtained from an extensive household survey and robust to various control variables as well as to different empirical specifications, including an instrumental variable estimation. A lab-in-the-field experiment also confirms the finding.
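Both papers lean on instrumental-variable estimation; a minimal synthetic sketch of why IV recovers the causal effect when a confounder biases OLS (all data simulated, variable names purely hypothetical):

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                        # instrument (e.g., exogenous settlement timing)
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor (frontier exposure)
y = 1.0 * x + 0.5 * u + rng.normal(size=n)    # outcome (individualism); true effect = 1.0

ols = np.polyfit(x, y, 1)[0]                  # biased upward: x and y share the confounder u
x_hat = np.polyfit(z, x, 1)[0] * z            # first stage: part of x explained by z alone
iv = np.polyfit(x_hat, y, 1)[0]               # second stage: regress y on the fitted values
print(f"OLS slope ~ {ols:.2f} (biased up), IV slope ~ {iv:.2f} (near the true 1.0)")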
pdf  study  economics  broad-econ  cliometrics  path-dependence  evidence-based  empirical  stylized-facts  values  culture  cultural-dynamics  anthropology  usa  frontier  allodium  the-west  correlation  individualism-collectivism  measurement  politics  ideology  expression-survival  redistribution  regulation  political-econ  government  migration  history  early-modern  pre-ww2  things  phalanges  🎩  selection  polisci  roots  multi  twitter  social  commentary  scitariat  backup  gnon  growth-econ  medieval  china  asia  developing-world  shift  natural-experiment  endo-exo  endogenous-exogenous  hari-seldon 
october 2017 by nhaliday
Genome Editing
This collection of articles from the Nature Research journals provides an overview of current progress in developing targeted genome editing technologies. A selection of protocols for using and adapting these tools in your own lab is also included.
news  org:sci  org:nat  list  links  aggregator  chart  info-foraging  frontier  technology  CRISPR  biotech  🌞  survey  state-of-art  article  study  genetics  genomics  speedometer 
october 2017 by nhaliday