
nhaliday : crux   16

Why read old philosophy? | Meteuphoric
(This story would suggest that physics students are maybe missing out on learning the styles of thought that produce progress in physics. My guess is that instead they learn them in grad school, when they are doing research themselves, by emulating their supervisors, and that the helpfulness of this might partially explain why Nobel prizewinner advisors beget Nobel prizewinner students.)

The story I hear about philosophy—and I actually don’t know how much of it is true—is that as bits of philosophy come to have any methodological tools other than ‘think about it’, they break off and become their own sciences. So this would explain philosophy’s lone status in studying old thinkers rather than impersonal methods—philosophy is the lone ur-discipline, with no impersonal methods beyond thinking itself.

This suggests a research project: try summarizing what Aristotle is doing rather than Aristotle’s views. Then write a nice short textbook about it.
ratty  learning  reading  studying  prioritizing  history  letters  philosophy  science  comparison  the-classics  canon  speculation  reflection  big-peeps  iron-age  mediterranean  roots  lens  core-rats  thinking  methodology  grad-school  academia  physics  giants  problem-solving  meta:research  scholar  the-trenches  explanans  crux  metameta  duplication  sociality  innovation  quixotic  meta:reading  classic 
june 2018 by nhaliday
Christian ethics - Wikipedia
Christian ethics is a branch of Christian theology that defines virtuous behavior and wrong behavior from a Christian perspective. Systematic theological study of Christian ethics is called moral theology, possibly with the name of the respective theological tradition, e.g. Catholic moral theology.

Christian virtues are often divided into four cardinal virtues and three theological virtues. Christian ethics includes questions regarding how the rich should act toward the poor, how women are to be treated, and the morality of war. Christian ethicists, like other ethicists, approach ethics from different frameworks and perspectives. The approach of virtue ethics has also become popular in recent decades, largely due to the work of Alasdair MacIntyre and Stanley Hauerwas.[2]

...

The seven Christian virtues come from two sets of virtues. The four cardinal virtues are Prudence, Justice, Restraint (or Temperance), and Courage (or Fortitude). The cardinal virtues are so called because they are regarded as the basic virtues required for a virtuous life. The three theological virtues are Faith, Hope, and Love (or Charity).

- Prudence: also described as wisdom, the ability to judge which actions are appropriate at a given time
- Justice: also considered as fairness, the most extensive and most important virtue[20]
- Temperance: also known as restraint, the practice of self-control, abstention, and moderation tempering the appetition
- Courage: also termed fortitude, forbearance, strength, endurance, and the ability to confront fear, uncertainty, and intimidation
- Faith: belief in God, and in the truth of His revelation, as well as obedience to Him (cf. Rom 1:5; 16:26)[21][22]
- Hope: expectation of and desire for receiving; refraining from despair and the capability of not giving up; the belief that God will be eternally present in every human's life and will never give up on His love.
- Charity: a supernatural virtue that helps us love God and our neighbors as we love ourselves.

Seven deadly sins: https://en.wikipedia.org/wiki/Seven_deadly_sins
The seven deadly sins, also known as the capital vices or cardinal sins, are a grouping and classification of vices of Christian origin.[1] Behaviours or habits are classified under this category if they directly give birth to other immoralities.[2] According to the standard list, they are pride, greed, lust, envy, gluttony, wrath, and sloth,[2] which are also contrary to the seven virtues. These sins are often thought to be abuses or excessive versions of one's natural faculties or passions (for example, gluttony abuses one's desire to eat).

originally:
1 Gula (gluttony)
2 Luxuria/Fornicatio (lust, fornication)
3 Avaritia (avarice/greed)
4 Superbia (pride, hubris)
5 Tristitia (sorrow/despair/despondency)
6 Ira (wrath)
7 Vanagloria (vainglory)
8 Acedia (sloth)

Golden Rule: https://en.wikipedia.org/wiki/Golden_Rule
The Golden Rule (which can be considered a law of reciprocity in some religions) is the principle of treating others as one would wish to be treated. It is a maxim that is found in many religions and cultures.[1][2] The maxim may appear as _either a positive or negative injunction_ governing conduct:

- One should treat others as one would like others to treat oneself (positive or directive form).[1]
- One should not treat others in ways that one would not like to be treated (negative or prohibitive form).[1]
- What you wish upon others, you wish upon yourself (empathic or responsive form).[1]
The Golden Rule _differs from the maxim of reciprocity captured in do ut des—"I give so that you will give in return"—and is rather a unilateral moral commitment to the well-being of the other without the expectation of anything in return_.[3]

The concept occurs in some form in nearly every religion[4][5] and ethical tradition[6] and is often considered _the central tenet of Christian ethics_[7] [8]. It can also be explained from the perspectives of psychology, philosophy, sociology, human evolution, and economics. Psychologically, it involves a person empathizing with others. Philosophically, it involves a person perceiving their neighbor also as "I" or "self".[9] Sociologically, "love your neighbor as yourself" is applicable between individuals, between groups, and also between individuals and groups. In evolution, "reciprocal altruism" is seen as a distinctive advance in the capacity of human groups to survive and reproduce, as their exceptional brains demanded exceptionally long childhoods and ongoing provision and protection even beyond that of the immediate family.[10] In economics, Richard Swift, referring to ideas from David Graeber, suggests that "without some kind of reciprocity society would no longer be able to exist."[11]
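(Aside, not from the article: the "reciprocal altruism" and reciprocity points can be made concrete with a toy iterated prisoner's dilemma. The payoff values and the two strategies below are standard textbook choices, used here purely for illustration.)

```python
import itertools

# Toy sketch: reciprocity ("tit for tat") sustains mutual cooperation against
# itself, while unconditional defection locks both players into the low payoff.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strat1, strat2, rounds=100):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h1, h2), strat2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2); s1 += p1; s2 += p2
    return s1, s2

for a, b in itertools.combinations_with_replacement([tit_for_tat, always_defect], 2):
    print(a.__name__, "vs", b.__name__, "->", play(a, b))
# tit_for_tat vs tit_for_tat     -> (300, 300)
# tit_for_tat vs always_defect   -> (99, 104)
# always_defect vs always_defect -> (100, 100)
```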

...

hmm, Meta-Golden Rule already stated:
Seneca the Younger (c. 4 BC–65 AD), a practitioner of Stoicism (c. 300 BC–200 AD), expressed the Golden Rule in his essay regarding the treatment of slaves: "Treat your inferior as you would wish your superior to treat you."[23]

...

The "Golden Rule" was given by Jesus of Nazareth, who used it to summarize the Torah: "Do to others what you want them to do to you." and "This is the meaning of the law of Moses and the teaching of the prophets"[33] (Matthew 7:12 NCV, see also Luke 6:31). The common English phrasing is "Do unto others as you would have them do unto you". A similar form of the phrase appeared in a Catholic catechism around 1567 (certainly in the reprint of 1583).[34] The Golden Rule is _stated positively numerous times in the Hebrew Pentateuch_ as well as the Prophets and Writings. Leviticus 19:18 ("Forget about the wrong things people do to you, and do not try to get even. Love your neighbor as you love yourself."; see also Great Commandment) and Leviticus 19:34 ("But treat them just as you treat your own citizens. Love foreigners as you love yourselves, because you were foreigners one time in Egypt. I am the Lord your God.").

The Old Testament Deuterocanonical books of Tobit and Sirach, accepted as part of the Scriptural canon by the Catholic Church, Eastern Orthodoxy, and the Non-Chalcedonian Churches, express a _negative form_ of the golden rule:

"Do to no one what you yourself dislike."

— Tobit 4:15
"Recognize that your neighbor feels as you do, and keep in mind your own dislikes."

— Sirach 31:15
Two passages in the New Testament quote Jesus of Nazareth espousing the _positive form_ of the Golden rule:

Matthew 7:12
Do to others what you want them to do to you. This is the meaning of the law of Moses and the teaching of the prophets.

Luke 6:31
Do to others what you would want them to do to you.

...

The passage in the book of Luke then continues with Jesus answering the question, "Who is my neighbor?", by telling the parable of the Good Samaritan, indicating that "your neighbor" is anyone in need.[35] This extends to all, including those who are generally considered hostile.

Jesus' teaching goes beyond the negative formulation of not doing what one would not like done to oneself, to the positive formulation of actively doing good to another that, if the situations were reversed, one would desire that the other would do for them. This formulation, as indicated in the parable of the Good Samaritan, emphasizes the need for positive action that brings benefit to another, not simply restraining oneself from negative activities that hurt another. Taken as a rule of judgment, both formulations of the golden rule, the negative and positive, are equally applicable.[36]

The Golden Rule: Not So Golden Anymore: https://philosophynow.org/issues/74/The_Golden_Rule_Not_So_Golden_Anymore
Pluralism is the most serious problem facing liberal democracies today. We can no longer ignore the fact that cultures around the world are not simply different from one another, but profoundly so; and the most urgent area in which this realization faces us is in the realm of morality. Western democratic systems depend on there being at least a minimal consensus concerning national values, especially in regard to such things as justice, equality and human rights. But global communication, economics and the migration of populations have placed new strains on Western democracies. Suddenly we find we must adjust to peoples whose suppositions about the ultimate values and goals of life are very different from ours. A clear lesson from events such as 9/11 is that disregarding these differences is not an option. Collisions between worldviews and value systems can be cataclysmic. Somehow we must learn to manage this new situation.

For a long time, liberal democratic optimism in the West has been shored up by suppositions about other cultures and their differences from us. The cornerpiece of this optimism has been the assumption that whatever differences exist they cannot be too great. A core of ‘basic humanity’ surely must tie all of the world’s moral systems together – and if only we could locate this core we might be able to forge agreements and alliances among groups that otherwise appear profoundly opposed. We could perhaps then shelve our cultural or ideological differences and get on with the more pleasant and productive business of celebrating our core agreement. One cannot fail to see how this hope is repeated in order to buoy optimism about the Middle East peace process, for example.

...

It becomes obvious immediately that no matter how widespread we want the Golden Rule to be, there are some ethical systems that we have to admit do not have it. In fact, there are a few traditions that actually disdain the Rule. In philosophy, the Nietzschean tradition holds that the virtues implicit in the Golden Rule are antithetical to the true virtues of self-assertion and the will-to-power. Among religions, there are a good many that prefer to emphasize the importance of self, cult, clan or tribe rather than of general others; and a good many other religions for whom large populations are simply excluded from goodwill, being labeled as outsiders, heretics or … [more]
article  letters  philosophy  morality  ethics  formal-values  religion  christianity  theos  n-factor  europe  the-great-west-whale  occident  justice  war  peace-violence  janus  virtu  list  sanctity-degradation  class  lens  wealth  gender  sex  sexuality  multi  concept  wiki  reference  theory-of-mind  ideology  cooperate-defect  coordination  psychology  cog-psych  social-psych  emotion  cybernetics  ecology  deep-materialism  new-religion  hsu  scitariat  aphorism  quotes  stories  fiction  gedanken  altruism  parasites-microbiome  food  diet  nutrition  individualism-collectivism  taxes  government  redistribution  analogy  lol  troll  poast  death  long-short-run  axioms  judaism  islam  tribalism  us-them  kinship  interests  self-interest  dignity  civil-liberty  values  homo-hetero  diversity  unintended-consequences  within-without  increase-decrease  signum  ascetic  axelrod  guilt-shame  patho-altruism  history  iron-age  mediterranean  the-classics  robust  egalitarianism-hierarchy  intricacy  hypocrisy  parable  roots  explanans  crux  s 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few, large packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have so far seen this in computer science (CS) and AI as well, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:
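(Illustrative sketch only, not the metric from the Science paper Hanson cites: one simple stand-in "lumpiness" statistic is the share of all citations captured by the top 1% of papers, computed here on synthetic lognormal citation counts.)

```python
import numpy as np

rng = np.random.default_rng(0)

def top_share(citations, top_frac=0.01):
    """Share of total citations captured by the top `top_frac` of papers."""
    citations = np.sort(citations)[::-1]
    k = max(1, int(len(citations) * top_frac))
    return citations[:k].sum() / citations.sum()

# Two synthetic "fields" with the same median citation count but different spread.
field_a = rng.lognormal(mean=1.0, sigma=1.0, size=100_000)
field_b = rng.lognormal(mean=1.0, sigma=1.8, size=100_000)

print(f"top-1% share, sigma=1.0: {top_share(field_a):.2f}")  # ~0.09
print(f"top-1% share, sigma=1.8: {top_share(field_b):.2f}")  # ~0.30
# The lumpier field concentrates far more of its citations in a few big hits;
# Hanson's point is that empirically this kind of statistic looks similar across fields.
```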

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
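(A tiny simulation of the parenthetical claim, my own sketch rather than Hanson's: if each task's score is an average over many independently varying modules, people's scores on different tasks still correlate, because tasks share modules, and a common "g"-like factor emerges. All sizes and distributions below are arbitrary illustrative choices.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_modules, n_tasks, modules_per_task = 1000, 50, 30, 20

# Module abilities vary independently across people: no built-in general factor.
abilities = rng.normal(size=(n_people, n_modules))

# Each task's performance is the average of the modules it happens to rely on.
task_scores = np.empty((n_people, n_tasks))
for t in range(n_tasks):
    used = rng.choice(n_modules, size=modules_per_task, replace=False)
    task_scores[:, t] = abilities[:, used].mean(axis=1)

# Tasks correlate anyway, simply because they share modules.
corr = np.corrcoef(task_scores, rowvar=False)
off_diag = corr[~np.eye(n_tasks, dtype=bool)]
print(f"mean inter-task correlation: {off_diag.mean():.2f}")  # noticeably > 0

# And a single "g"-like component accounts for a large chunk of the variance.
eigvals = np.linalg.eigvalsh(corr)
print(f"variance share of first component: {eigvals[-1] / eigvals.sum():.2f}")
```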

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above, the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
AI-complete - Wikipedia
In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI.[1] To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.

AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.[2]

Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation. This property can be useful, for instance to test for the presence of humans as with CAPTCHAs, and for computer security to circumvent brute-force attacks.[3][4]

...

AI-complete problems are hypothesised to include:

Bongard problems
Computer vision (and subproblems such as object recognition)
Natural language understanding (and subproblems such as text mining, machine translation, and word sense disambiguation[8])
Dealing with unexpected circumstances while solving any real world problem, whether it's navigation or planning or even the kind of reasoning done by expert systems.

...

Current AI systems can solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempt to "scale up" their systems to handle more complicated, real world situations, the programs tend to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they fail as unexpected circumstances outside of their original problem context begin to appear. When human beings are dealing with new situations in the world, they are helped immensely by the fact that they know what to expect: they know what all things around them are, why they are there, what they are likely to do and so on. They can recognize unusual situations and adjust accordingly. A machine without strong AI has no other skills to fall back on.[9]
concept  reduction  cs  computation  complexity  wiki  reference  properties  computer-vision  ai  risk  ai-control  machine-learning  deep-learning  language  nlp  order-disorder  tactics  strategy  intelligence  humanity  speculation  crux 
march 2018 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.
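(A toy illustration of that brittleness point, not from the conversation: even plain gradient descent on a badly scaled least-squares problem only reaches the noise floor for a narrow band of learning rates within a fixed step budget. The data and parameter values below are assumptions chosen to make the effect visible.)

```python
import numpy as np

rng = np.random.default_rng(1)
# Two features on very different scales make the problem ill-conditioned.
X = rng.normal(size=(200, 2)) * np.array([1.0, 10.0])
true_w = np.array([2.0, -0.03])
y = X @ true_w + 0.1 * rng.normal(size=200)  # noise floor: MSE ~ 0.01

def final_mse(lr, steps=500):
    """Fixed-budget gradient descent; returns the final training MSE."""
    w = np.zeros(2)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
        if np.linalg.norm(w) > 1e6:  # clearly diverging
            return float("inf")
    return float(np.mean((X @ w - y) ** 2))

for lr in [1e-4, 1e-3, 1e-2, 3e-2, 1e-1]:
    print(f"lr={lr:g}  final MSE={final_mse(lr):.3f}")
# Only lr near 1e-2 gets close to the noise floor within the budget: smaller
# rates stall on the poorly scaled direction, larger rates diverge outright.
```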

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
Why Information Grows – Paul Romer
thinking like a physicist:

The key element in thinking like a physicist is being willing to push simultaneously to extreme levels of abstraction and specificity. This sounds paradoxical until you see it in action. Then it seems obvious. Abstraction means that you strip away inessential detail. Specificity means that you take very seriously the things that remain.

Abstraction vs. Radical Specificity: https://paulromer.net/abstraction-vs-radical-specificity/
books  summary  review  economics  growth-econ  interdisciplinary  hmm  physics  thinking  feynman  tradeoffs  paul-romer  econotariat  🎩  🎓  scholar  aphorism  lens  signal-noise  cartoons  skeleton  s:**  giants  electromag  mutation  genetics  genomics  bits  nibble  stories  models  metameta  metabuch  problem-solving  composition-decomposition  structure  abstraction  zooming  examples  knowledge  human-capital  behavioral-econ  network-structure  info-econ  communication  learning  information-theory  applications  volo-avolo  map-territory  externalities  duplication  spreading  property-rights  lattice  multi  government  polisci  policy  counterfactual  insight  paradox  parallax  reduction  empirical  detail-architecture  methodology  crux  visual-understanding  theory-practice  matching  analytical-holistic  branches  complement-substitute  local-global  internet  technology  cost-benefit  investing  micro  signaling  limits  public-goodish  interpretation  elegance  meta:reading  intellectual-property  writing 
september 2016 by nhaliday
