
robertogreco : ai   122

ColouriseSG
"A deep learning colouriser prototype specifically for old Singaporean photos."
color  colorization  photography  tools  ai 
5 weeks ago by robertogreco
Styles of Democracy | the A-Line
"Increasingly, since the Supreme Court some thirty-plus years ago ruled to allow unlimited funding by private and corporate interests, the United States has steadily moved toward political degeneration and corrupting abuse of democracy’s frameworks. This issue stands at the forefront of any discussion regarding democracy’s present and future reality. I see no institutional change of any sort since Trump’s hijacked election outcome. Mid-term congressional voting will doubtless produce a déjà vu, entrenching a new era of external manipulation that may assert an ongoing debasement of American institutional compromise and failure. The philosophical query, what governmental styles are possible, preferable, to be pursued, in the aftermath of coordinated de facto treason acknowledges the specter of a blithe dismantling of this nation’s tradition of democratic turmoil generated solely from within American political culture. A pernicious acceptance of outside political leverage as a new norm promises to dismantle both the legitimacy of democratic autonomy and authority as well as the tenuous usefulness of checks and balances among inter-governmental political responsibilities…institutional scrutiny that, alone, allows the flawed creativity and untrammeled rivalry of capitalistic interests to thrive despite human frailty and institutional stupidity.

The era of professional political energy may have come to a close, replaced by mafioso crony collusion. However that plays out, nothing short of a profound retrenchment of democratic idealism exercised with a maximum of commitment and canny political judgment is likely to reverse, or undo, the demise underway. I see a theoretical opening for some degree of hope. Trump has so violated standards of individual maturity, professional good sense, public decency and day-to-day truthfulness that broad public revulsion may curtail his deceitful assault on the general well being.

However that plays out, I see the present moment as inaugurating a significant transformation of American political reality. First, Marx was correct to view large deformations of institutional authority and state power to appear on occasion, first, as tragedy and, later, as farce. The events of 9/11 in Manhattan that fulfilled the “Project for the New American Century” – implicitly calling for a catastrophic event on the order of Pearl Harbor – changed the equation of American influence and global intervention as a calculus of irredeemably tragic decimation. The intervention of Russia in Trump’s electoral college victory in 2016, the successful confluence of treason and treachery, has produced enlarging institutional and cultural deformations at once farcical and dauntingly horrific. Quite literally, the entire narrative of American idealism and benevolence has been challenged, reversed and put into ongoing self-disabling dysfunction. Jeffersonian definitions of human dignity and freedom, always placebos to avoid confronting American racist cruelty, are now being eviscerated by the enlarging truth of Marx’s awareness of capital inequities (a strenuous falling rate of profit driven by excess accumulation). A long feared mega-depression, eclipsing the one that aided Hitler’s rise ninety years ago, appears to be crawling inexorably toward global reality. If, somehow, such an apocalyptic event spanning Europe, Asia and the United States is further postponed, the reprieve will not prove the superior wisdom of capitalist managers or the inherent fairness or flexibility of capitalist institutions. Its delay may wait until further depreciation of the global labor force gains momentum from increased robotic displacements.

Second, the epochal transformation of the digital era’s instantaneous social media reinforcement of tribal divisions has put the traditional pace of democratic logic not merely “at risk” but, in fact, under siege. This early stage of political dishevelment, within a span of decades, will be exacerbated by quantum computing speed and the spread of artificial intelligence. One needs only read several of the recently crafted protocols that the Future of Life Institute (influenced by Elon Musk, David Chalmers, Martin Rees, Lawrence Krauss, Nick Bostrom and Max Tegmark) have put forward to grasp a full measure of institutional transformations and upheavals gathering steady momentum: a) that AI research and implementation must hold to the goal of beneficial, precisely opposed to unfocused and potentially malicious, intelligence; b) the need to update legal systems to keep pace with AI; c) assurance that AI builders and stakeholders will enforce moral responsibility in developing their technological innovations; d) economic prosperity that accrues from AI must be shared to the benefit of humanity as a whole; e) long term alterations to life on earth must be projected and managed with profound care and resolute attention.

My point here is to suggest that our contemporary crisis in democratic well being is fundamentally a crisis of and within capitalism itself, very much resembling Terry Eagleton’s cautionary warning, in Why Marx Was Right, that “the essential irrationality of the drive for capital accumulation…subordinates everything to the requirements of [its] self-expansion,” which are hostile to earth’s ecological dynamics (237). To that hostility, I’ll add the ineradicable priority of human health, cultural and political sanity, as well as once imagined rights of individual liberty, dignity and access to the contested possibility of justice."
jimmerod  capitalism  economics  ecology  sustainability  marxism  terryeagleton  capitalaccumulation  democracy  justice  society  socialjustice  us  humanism  socialmedia  politics  ai  elonmusk  davidchalmers  martinrees  lawrencekrauss  nickbostrom  maxtegmark 
12 weeks ago by robertogreco
Silicon Valley Thinks Everyone Feels the Same Six Emotions
"From Alexa to self-driving cars, emotion-detecting technologies are becoming ubiquitous—but they rely on out-of-date science"
emotions  ai  artificialintelligence  2018  psychology  richfirth-godbehere  faces 
january 2019 by robertogreco
James Bridle on New Dark Age: Technology and the End of the Future - YouTube
"As the world around us increases in technological complexity, our understanding of it diminishes. Underlying this trend is a single idea: the belief that our existence is understandable through computation, and more data is enough to help us build a better world.

In his brilliant new work, leading artist and writer James Bridle surveys the history of art, technology, and information systems, and reveals the dark clouds that gather over our dreams of the digital sublime."
quantification  computationalthinking  systems  modeling  bigdata  data  jamesbridle  2018  technology  software  systemsthinking  bias  ai  artificialintelligence  objectivity  inequality  equality  enlightenment  science  complexity  democracy  information  unschooling  deschooling  art  computation  computing  machinelearning  internet  email  web  online  colonialism  decolonization  infrastructure  power  imperialism  deportation  migration  chemtrails  folkliterature  storytelling  conspiracytheories  narrative  populism  politics  confusion  simplification  globalization  global  process  facts  problemsolving  violence  trust  authority  control  newdarkage  darkage  understanding  thinking  howwethink  collapse 
september 2018 by robertogreco
Silicon Valley Is Turning Into Its Own Worst Fear
"Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism."



"Insight is precisely what Musk’s strawberry-picking AI lacks, as do all the other AIs that destroy humanity in similar doomsday scenarios. I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations. Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what “good” means with “whatever the market decides.”"



"
It’d be tempting to say that fearmongering about superintelligent AI is a deliberate ploy by tech behemoths like Google and Facebook to distract us from what they themselves are doing, which is selling their users’ data to advertisers. If you doubt that’s their goal, ask yourself, why doesn’t Facebook offer a paid version that’s ad free and collects no private information? Most of the apps on your smartphone are available in premium versions that remove the ads; if those developers can manage it, why can’t Facebook? Because Facebook doesn’t want to. Its goal as a company is not to connect you to your friends, it’s to show you ads while making you believe that it’s doing you a favor because the ads are targeted.

So it would make sense if Mark Zuckerberg were issuing the loudest warnings about AI, because pointing to a monster on the horizon would be an effective red herring. But he’s not; he’s actually pretty complacent about AI. The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue. Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted. (Of course, they saw nothing wrong with this strategy when they were the ones engaging in it; it’s only the possibility that someone else might be better at it than they were that gives them cause for concern.)

There’s a saying, popularized by Fredric Jameson, that it’s easier to imagine the end of the world than to imagine the end of capitalism. It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.

Which brings us back to the importance of insight. Sometimes insight arises spontaneously, but many times it doesn’t. People often get carried away in pursuit of some goal, and they may not realize it until it’s pointed out to them, either by their friends and family or by their therapists. Listening to wake-up calls of this sort is considered a sign of mental health.

We need for the machines to wake up, not in the sense of computers becoming self-aware, but in the sense of corporations recognizing the consequences of their behavior. Just as a superintelligent AI ought to realize that covering the planet in strawberry fields isn’t actually in its or anyone else’s best interests, companies in Silicon Valley need to realize that increasing market share isn’t a good reason to ignore all other considerations. Individuals often reevaluate their priorities after experiencing a personal wake-up call. What we need is for companies to do the same — not to abandon capitalism completely, just to rethink the way they practice it. We need them to behave better than the AIs they fear and demonstrate a capacity for insight."
ai  elonmusk  capitalism  siliconvalley  technology  artificialintelligence  tedchiang  2017  insight  intelligence  regulation  governance  government  johnperrybarlow  1996  autonomy  externalcontrols  corporations  corporatism  fredricjameson  excess  growth  monopolies  technosolutionism  ethics  economics  policy  civilization  libertarianism  aynrand  billgates  markzuckerberg 
december 2017 by robertogreco
Impakt Festival 2017 - Performance: ANAB JAIN. HQ - YouTube
[Embedded here: http://impakt.nl/festival/reports/impakt-festival-2017/impakt-festival-2017-anab-jain/ ]

"'Everything is Beautiful and Nothing Hurts': @anab_jain's expansive keynote @impaktfestival weaves threads through death, transcience, uncertainty, growthism, technological determinism, precarity, imagination and truths. Thanks to @jonardern for masterful advise on 'modelling reality', and @tobias_revell and @ndkane for the invitation."
[https://www.instagram.com/p/BbctTcRFlFI/ ]
anabjain  2017  superflux  death  aging  transience  time  temporary  abundance  scarcity  future  futurism  prototyping  speculativedesign  predictions  life  living  uncertainty  film  filmmaking  design  speculativefiction  experimentation  counternarratives  designfiction  futuremaking  climatechange  food  homegrowing  smarthomes  iot  internetofthings  capitalism  hope  futures  hopefulness  data  dataviz  datavisualization  visualization  williamplayfair  society  economics  wonder  williamstanleyjevons  explanation  statistics  williambernstein  prosperity  growth  latecapitalism  propertyrights  jamescscott  objectivity  technocrats  democracy  probability  scale  measurement  observation  policy  ai  artificialintelligence  deeplearning  algorithms  technology  control  agency  bias  biases  neoliberalism  communism  present  past  worldview  change  ideas  reality  lucagatti  alextaylor  unknown  possibility  stability  annalowenhaupttsing  imagination  ursulaleguin  truth  storytelling  paradigmshifts  optimism  annegalloway  miyamotomusashi  annatsing 
november 2017 by robertogreco
Zeynep Tufekci: We're building a dystopia just to make people click on ads | TED Talk | TED.com
"We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response."

[See also: "Machine intelligence makes human morals more important"
https://www.ted.com/talks/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important

"Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics.""]
zeyneptufekci  machinelearning  ai  artificialintelligence  youtube  facebook  google  amazon  ethics  computing  advertising  politics  behavior  technology  web  online  internet  susceptibility  dystopia  sociology  donaldtrump 
october 2017 by robertogreco
Ellen Ullman: Life in Code: "A Personal History of Technology" | Talks at Google - YouTube
"The last twenty years have brought us the rise of the internet, the development of artificial intelligence, the ubiquity of once unimaginably powerful computers, and the thorough transformation of our economy and society. Through it all, Ellen Ullman lived and worked inside that rising culture of technology, and in Life in Code she tells the continuing story of the changes it wrought with a unique, expert perspective.

When Ellen Ullman moved to San Francisco in the early 1970s and went on to become a computer programmer, she was joining a small, idealistic, and almost exclusively male cadre that aspired to genuinely change the world. In 1997 Ullman wrote Close to the Machine, the now classic and still definitive account of life as a coder at the birth of what would be a sweeping technological, cultural, and financial revolution.

Twenty years later, the story Ullman recounts is neither one of unbridled triumph nor a nostalgic denial of progress. It is necessarily the story of digital technology’s loss of innocence as it entered the cultural mainstream, and it is a personal reckoning with all that has changed, and so much that hasn’t. Life in Code is an essential text toward our understanding of the last twenty years—and the next twenty."
ellenullman  bias  algorithms  2017  technology  sexism  racism  age  ageism  society  exclusion  perspective  families  parenting  mothers  programming  coding  humans  humanism  google  larrypage  discrimination  self-drivingcars  machinelearning  ai  artificialintelligence  literacy  reading  howweread  humanities  education  publicschools  schools  publicgood  libertarianism  siliconvalley  generations  future  pessimism  optimism  hardfun  kevinkelly  computing 
october 2017 by robertogreco
GitHub - Microsoft/ELL: Embedded Learning Library
"The Embedded Learning Library (ELL) allows you to build and deploy machine-learned pipelines onto embedded platforms, like Raspberry Pis, Arduinos, micro:bits, and other microcontrollers. The deployed machine learning model runs on the device, disconnected from the cloud. Our APIs can be used either from C++ or Python.

This project has been developed by a team of researchers at Microsoft Research. It's a work in progress, and we expect it to change rapidly, including breaking API changes. Despite this code churn, we welcome you to try it and give us feedback!

A good place to start is the tutorial, which allows you to do image recognition on a Raspberry Pi with a web cam, disconnected from the cloud. The software you deploy to the Pi will recognize a variety of common objects on camera and print a label for the recognized object on the Pi's screen."
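
As a rough companion to the workflow that README describes (compile a model, then classify webcam frames entirely on the device), here is a minimal Python sketch. Only the OpenCV and NumPy calls are real; `predict` and `labels` are hypothetical stand-ins for whatever wrapper ELL actually compiles for a given model, since the library's exact API isn't quoted above.

```python
# Hedged sketch of on-device image recognition in the spirit of the ELL
# tutorial. `predict` and `labels` are hypothetical stand-ins for the
# wrapper ELL compiles for a specific model; only cv2/numpy calls are real.
import cv2
import numpy as np

def classify_frame(frame, predict, labels, input_size=(64, 64)):
    """Resize a webcam frame and run the locally compiled model on it."""
    resized = cv2.resize(frame, input_size).astype(np.float32).flatten()
    scores = predict(resized)          # inference happens on the Pi, no cloud
    return labels[int(np.argmax(scores))]

def run(predict, labels):
    cap = cv2.VideoCapture(0)          # the Pi's web cam
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            print(classify_frame(frame, predict, labels))
    finally:
        cap.release()
```
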
machinelearning  embedded  arduino  ai  raspberrypi  microsoft  code  microcontrollers  via:clivethompson 
july 2017 by robertogreco
David Byrne | Journal | ELIMINATING THE HUMAN
"My dad was an electrical engineer—I love the engineer's’ way of looking at the world. I myself applied to both art school AND to engineering school (my frustration was that there was little or no cross-pollination. I was told at the time that taking classes in both disciplines would be VERY difficult). I am familiar with and enjoy both the engineer's mindset and the arty mindset (and I’ve heard that now mixing one’s studies is not as hard as it used to be).

The point is not that making a world to accommodate oneself is bad, but that when one has as much power over the rest of the world as the tech sector does, over folks who don’t naturally share its worldview, then there is a risk of a strange imbalance. The tech world is predominantly male—very much so. Testosterone combined with a drive to eliminate as much interaction with real humans as possible—do the math, and there’s the future.

We’ve gotten used to service personnel and staff who have no interest or participation in the businesses where they work. They have no incentive to make the products or the services better. This is a long legacy of the assembly line, standardising, franchising and other practices that increase efficiency and lower costs. It’s a small step then from a worker that doesn’t care to a robot. To consumers, it doesn’t seem like a big loss.

Those who oversee the AI and robots will, not coincidentally, make a lot of money as this trend towards less human interaction continues and accelerates—as many of the products produced above are hugely and addictively convenient. Google, Facebook and other companies are powerful and yes, innovative, but the innovation curiously seems to have had an invisible trajectory. Our imaginations are constrained by who and what we are. We are biased in our drives, which in some ways is good, but maybe some diversity in what influences the world might be reasonable and may be beneficial to all.

To repeat what I wrote above—humans are capricious, erratic, emotional, irrational and biased in what sometimes seem like counterproductive ways. I’d argue that though those might seem like liabilities, many of those attributes actually work in our favor. Many of our emotional responses have evolved over millennia, and they are based on the probability that our responses, often prodded by an emotion, will more likely than not offer the best way to deal with a situation.

Neuroscientist Antonio Damasio wrote about a patient he called Elliot, who had damage to his frontal lobe that made him unemotional. In all other respects he was fine—intelligent, healthy—but emotionally he was Spock. Elliot couldn’t make decisions. He’d waffle endlessly over details. Damasio concluded that though we think decision-making is rational and machinelike, it’s our emotions that enable us to actually decide.

With humans being somewhat unpredictable (well, until an algorithm completely removes that illusion), we get the benefit of surprises, happy accidents and unexpected connections and intuitions. Interaction, cooperation and collaboration with others multiplies those opportunities.

We’re a social species—we benefit from passing discoveries on, and we benefit from our tendency to cooperate to achieve what we cannot alone. In his book, Sapiens, Yuval Harari claims this is what allowed us to be so successful. He also claims that this cooperation was often facilitated by a possibility to believe in “fictions” such as nations, money, religions and legal institutions. Machines don’t believe in fictions, or not yet anyway. That’s not to say they won’t surpass us, but if machines are designed to be mainly self-interested, they may hit a roadblock. If less human interaction enables us to forget how to cooperate, then we lose our advantage.

Our random accidents and odd behaviors are fun—they make life enjoyable. I’m wondering what we’re left with when there are fewer and fewer human interactions. Remove humans from the equation and we are less complete as people or as a society. “We” do not exist as isolated individuals—we as individuals are inhabitants of networks, we are relationships. That is how we prosper and thrive."
davidbyrne  2017  automation  ai  business  culture  technology  dehumanization  humanism  humanity  gigeconomy  labor  work  robots  moocs  socialmedia  google  facebook  amazon  yuvalharari  social  productivity  economics  society  vr  ebay  retail  virtualreality 
june 2017 by robertogreco
Eyes Without a Face — Real Life
"The American painter and sculptor Ellsworth Kelly — remembered mainly for his contributions to minimalism, Color Field, and Hard-edge painting — was also a prodigious birdwatcher. “I’ve always been a colorist, I think,” he said in 2013. “I started when I was very young, being a birdwatcher, fascinated by the bird colors.” In the introduction to his monograph, published by Phaidon shortly before his death in 2015, he writes, “I remember vividly the first time I saw a Redstart, a small black bird with a few very bright red marks … I believe my early interest in nature taught me how to ‘see.’”

Vladimir Nabokov, the world’s most famous lepidopterist, classified, described, and named multiple butterfly species, reproducing their anatomy and characteristics in thousands of drawings and letters. “Few things have I known in the way of emotion or appetite, ambition or achievement, that could surpass in richness and strength the excitement of entomological exploration,” he wrote. Tom Bradley suggests that Nabokov suffered from the same “referential mania” as the afflicted son in his story “Signs and Symbols,” imagining that “everything happening around him is a veiled reference to his personality and existence” (as evidenced by Nabokov’s own “entomological erudition” and the influence of a most major input: “After reading Gogol,” he once wrote, “one’s eyes become Gogolized. One is apt to see bits of his world in the most unexpected places”).

For me, a kind of referential mania of things unnamed began with fabric swatches culled from Alibaba and fine suiting websites, with their wonderfully zoomed images that give you a sense of a particular material’s grain or flow. The sumptuous decadence of velvets and velours that suggest the gloved armatures of state power, and their botanical analogue, mosses and plant lichens. Industrial materials too: the seductive artifice of Gore-Tex and other thermo-regulating meshes, weather-palimpsested blue tarpaulins and piney green garden netting (winningly known as “shade cloth”). What began as an urge to collect colors and textures, to collect moods, quickly expanded into the delicious world of carnivorous plants and bugs — mantises exhibit a particularly pleasing biomimicry — and deep-sea aphotic creatures, which rewardingly incorporate a further dimension of movement. Walls suggest piled textiles, and plastics the murky translucence of jellyfish, and in every bag of steaming city garbage I now smell a corpse flower.

“The most pleasurable thing in the world, for me,” wrote Kelly, “is to see something and then translate how I see it.” I feel the same way, dosed with a healthy fear of cliché or redundancy. Why would you describe a new executive order as violent when you could compare it to the callous brutality of the peacock shrimp obliterating a crab, or call a dress “blue” when it could be cobalt, indigo, cerulean? Or ivory, alabaster, mayonnaise?

We might call this impulse building visual acuity, or simply learning how to see, the seeing that John Berger describes as preceding even words, and then again as completely renewed after he underwent the “minor miracle” of cataract surgery: “Your eyes begin to re-remember first times,” he wrote in the illustrated Cataract, “…details — the exact gray of the sky in a certain direction, the way a knuckle creases when a hand is relaxed, the slope of a green field on the far side of a house, such details reassume a forgotten significance.” We might also consider it as training our own visual recognition algorithms and taking note of visual or affective relationships between images: building up our datasets. For myself, I forget people’s faces with ease but never seem to forget an image I have seen on the internet.

At some level, this training is no different from Facebook’s algorithm learning based on the images we upload. Unlike Google, which relies on humans solving CAPTCHAs to help train its AI, Facebook’s automatic generation of alt tags pays dividends in speed as well as privacy. Still, the accessibility context in which the tags are deployed limits what the machines currently tell us about what they see: Facebook’s researchers are trying to “understand and mitigate the cost of algorithmic failures,” according to the aforementioned white paper, as when, for example, humans were misidentified as gorillas and blind users were led to then comment inappropriately. “To address these issues,” the paper states, “we designed our system to show only object tags with very high confidence.” “People smiling” is less ambiguous and more anodyne than happy people, or people crying.

So there is a gap between what the algorithm sees (analyzes) and says (populates an image’s alt text with). Even though it might only be authorized to tell us that a picture is taken outside, then, it’s fair to assume that computer vision is training itself to distinguish gesture, or the various colors and textures of the slope of a green field. A tag of “sky” today might be “cloudy with a threat of rain” by next year. But machine vision has the potential to do more than merely to confirm what humans see. It is learning to see something different that doesn’t reproduce human biases and uncover emotional timbres that are machinic. On Facebook’s platforms (including Instagram, Messenger, and WhatsApp) alone, over two billion images are shared every day: the monolith’s referential mania looks more like fact than delusion."
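
The "very high confidence" policy the white paper describes amounts to a simple threshold over the model's predicted tags. A toy sketch follows; the tags, scores, and 0.95 cutoff are invented for illustration, not Facebook's actual system or values.

```python
# Surface only machine-generated tags above a confidence cutoff, as the
# quoted white paper describes. Tags, scores, and the 0.95 cutoff are
# illustrative assumptions, not Facebook's real values.
def alt_text(predictions, threshold=0.95):
    """predictions maps tag -> model confidence in [0, 1]."""
    kept = [tag for tag, score in predictions.items() if score >= threshold]
    return "Image may contain: " + ", ".join(kept) if kept else "Image"

print(alt_text({"outdoor": 0.99, "sky": 0.97, "people smiling": 0.96,
                "happy people": 0.62}))
# -> Image may contain: outdoor, sky, people smiling
```
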
2017  rahelaima  algorithms  facebook  ai  artificialintelligence  machinelearning  tagging  machinevision  at  ellsworthkelly  color  tombradley  google  captchas  matthewplummerfernandez  julesolitski  neuralnetworks  eliezeryudkowsky  seeing 
may 2017 by robertogreco
Physiognomy’s New Clothes – Blaise Aguera y Arcas – Medium
"In 1844, a laborer from a small town in southern Italy was put on trial for stealing “five ricottas, a hard cheese, two loaves of bread […] and two kid goats”. The laborer, Giuseppe Villella, was reportedly convicted of being a brigante (bandit), at a time when brigandage — banditry and state insurrection — was seen as endemic. Villella died in prison in Pavia, northern Italy, in 1864.

Villella’s death led to the birth of modern criminology. Nearby lived a scientist and surgeon named Cesare Lombroso, who believed that brigantes were a primitive type of people, prone to crime. Examining Villella’s remains, Lombroso found “evidence” confirming his belief: a depression on the occiput of the skull reminiscent of the skulls of “savages and apes”.

Using precise measurements, Lombroso recorded further physical traits he found indicative of derangement, including an “asymmetric face”. Criminals, Lombroso wrote, were “born criminals”. He held that criminality is inherited, and carries with it inherited physical characteristics that can be measured with instruments like calipers and craniographs [1]. This belief conveniently justified his a priori assumption that southern Italians were racially inferior to northern Italians.

The practice of using people’s outer appearance to infer inner character is called physiognomy. While today it is understood to be pseudoscience, the folk belief that there are inferior “types” of people, identifiable by their facial features and body measurements, has at various times been codified into country-wide law, providing a basis to acquire land, block immigration, justify slavery, and permit genocide. When put into practice, the pseudoscience of physiognomy becomes the pseudoscience of scientific racism.

Rapid developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era, in which machine-learned models embed biases present in the human behavior used for model development. Whether intentional or not, this “laundering” of human prejudice through computer algorithms can make those biases appear to be justified objectively.

A recent case in point is Xiaolin Wu and Xi Zhang’s paper, “Automated Inference on Criminality Using Face Images”, submitted to arXiv (a popular online repository for physics and machine learning researchers) in November 2016. Wu and Zhang’s claim is that machine learning techniques can predict the likelihood that a person is a convicted criminal with nearly 90% accuracy using nothing but a driver’s license-style face photo. Although the paper was not peer-reviewed, its provocative findings generated a range of press coverage. [2]

Many of us in the research community found Wu and Zhang’s analysis deeply problematic, both ethically and scientifically. In one sense, it’s nothing new. However, the use of modern machine learning (which is both powerful and, to many, mysterious) can lend these old claims new credibility.

In an era of pervasive cameras and big data, machine-learned physiognomy can also be applied at unprecedented scale. Given society’s increasing reliance on machine learning for the automation of routine cognitive tasks, it is urgent that developers, critics, and users of artificial intelligence understand both the limits of the technology and the history of physiognomy, a set of practices and beliefs now being dressed in modern clothes. Hence, we are writing both in depth and for a wide audience: not only for researchers, engineers, journalists, and policymakers, but for anyone concerned about making sure AI technologies are a force for good.

We will begin by reviewing how the underlying machine learning technology works, then turn to a discussion of how machine learning can perpetuate human biases."



"Research shows that the photographer’s preconceptions and the context in which the photo is taken are as important as the faces themselves; different images of the same person can lead to widely different impressions. It is relatively easy to find a pair of images of two individuals matched with respect to age, race, and gender, such that one of them looks more trustworthy or more attractive, while in a different pair of images of the same people the other looks more trustworthy or more attractive."



"On a scientific level, machine learning can give us an unprecedented window into nature and human behavior, allowing us to introspect and systematically analyze patterns that used to be in the domain of intuition or folk wisdom. Seen through this lens, Wu and Zhang’s result is consistent with and extends a body of research that reveals some uncomfortable truths about how we tend to judge people.

On a practical level, machine learning technologies will increasingly become a part of all of our lives, and like many powerful tools they can and often will be used for good — including to make judgments based on data faster and fairer.

Machine learning can also be misused, often unintentionally. Such misuse tends to arise from an overly narrow focus on the technical problem, hence:

• Lack of insight into sources of bias in the training data;
• Lack of a careful review of existing research in the area, especially outside the field of machine learning;
• Not considering the various causal relationships that can produce a measured correlation;
• Not thinking through how the machine learning system might actually be used, and what societal effects that might have in practice.

Wu and Zhang’s paper illustrates all of the above traps. This is especially unfortunate given that the correlation they measure — assuming that it remains significant under more rigorous treatment — may actually be an important addition to the already significant body of research revealing pervasive bias in criminal judgment. Deep learning based on superficial features is decidedly not a tool that should be deployed to “accelerate” criminal justice; attempts to do so, like Faception’s, will instead perpetuate injustice."
blaiseaguerayarcas  physiognomy  2017  facerecognition  ai  artificialintelligence  machinelearning  racism  bias  xiaolinwu  xizhang  race  profiling  racialprofiling  giuseppevillella  cesarelombroso  pseudoscience  photography  chrononet  deeplearning  alexkrizhevsky  ilyasutskever  geoffreyhinton  gillevi  talhassner  alexnet  mugshots  objectivity  giambattistadellaporta  francisgalton  samuelnorton  josiahnott  georgegiddon  charlesdarwin  johnhoward  thomasclarkson  williamshakespeare  isaacnewton  ernsthaeckel  scientificracism  jamesweidmann  faception  criminality  lawenforcement  faces  dorothealange  mikeburton  trust  trustworthiness  stephenjaygould  philippafawcett  roberthughes  testosterone  gender  criminalclass  aggression  risk  riskassessment  judgement  brianholtz  shermanalexie  feedbackloops  identity  disability  ableism  disabilities 
may 2017 by robertogreco
Your Camera Wants to Kill the Keyboard | WIRED
"SNAPCHAT KNEW IT from the start, but in recent months Google and Facebook have all but confirmed it: The keyboard, slowly but surely, is fading into obscurity.

Last week at Google’s annual developer conference, the company presented its vision for how it expects its users—more than a billion people—to interact with technology in the coming years. And for the most part, it didn’t involve typing into a search box. Instead, Google’s brass spent its time onstage touting the company’s speech recognition skills and showing off Google Lens, a new computer vision technology that essentially turns your phone’s camera into a search engine.

Technology has once again reached an inflection point. For years, smartphones relied on hardware keyboards, a holdover from the early days of cell phones. Then came multitouch. Spurred by the wonders of the first smartphone screens, people swiped, typed, and pinched. Now, the way we engage with our phones is changing once again thanks to AI. Snapping a photo works as well, if not better, than writing a descriptive sentence in a search box. Casually chatting with Google Assistant, the company’s omnipresent virtual helper, gets results as fast, if not faster, than opening Chrome and navigating from there. The upshot, as Google CEO Sundar Pichai explained, is that we’re increasingly interacting with our computers in more natural and emotive ways, which could mean using your keyboard a lot less.

Ask the people who build your technology, and they’ll tell you: The camera is the new keyboard. The catchy phrase is becoming something of an industry-wide mantra to describe the constant march toward more visual forms of communication. Just look at Snapchat. The company bet its business on the fact that people would rather trade pictures than strings of words. The idea proved so compelling that Facebook and Instagram unabashedly developed their own versions of the feature. “The camera has already become a pervasive form of communication,” says Roman Kalantari, the head creative technologist at the design studio Fjord. “But what’s the next step after that?”

For Facebook and Snapchat, it was fun-house mirror effects and goofy augmented reality overlays—ways of building on top of photos that you simply can’t with text. Meanwhile, Google took a decidedly more utilitarian approach with Lens, turning the camera into an input device much like the keyboard itself. Point your camera at a tree, and it’ll tell you the variety. Snap a pic of the new restaurant on your block, and it’ll pull up the menu and hours, even help you book a reservation. Perhaps the single most effective demonstration of the technology was also its dullest—focus the lens on a router’s SKU and password, and Google’s image recognition will scan the information, pass it along to your Android phone, and automatically log you into the network.

This simplicity is a big deal. No longer does finding information require typing into a search box. Suddenly the world, in all its complexity, can be understood just by aiming your camera at something. Google isn’t the only company buying into this vision of the future. Amazon’s Fire Phone from 2014 enabled image-based search, which meant you could point the camera at a book or a box of cereal and have the item shipped to you instantly via Amazon Prime. Earlier this year, Pinterest launched the beta version of Lens, a tool that allows users to take a photo of an object in the real world and surface related objects on the Pinterest platform. “We’re getting to the point where using your camera to discover new ideas is as fast and easy as typing,” says Albert Pereta, a creative lead at Pinterest, who led the development at Lens.

Translation: Words can be hard, and it often works better to show than to tell. It’s easier to find the mid-century modern chair with a mahogany leather seat you’re looking for when you can share what it looks like, rather than typing a string of precise keywords. “With a camera, you can complete the task by taking a photo or video of the thing,” explains Gierad Laput, who studies human computer interaction at Carnegie Mellon. “Whereas with a keyboard, you complete this task by typing a description of the thing. You have to come up with the right description and type them accordingly.”

The caveat, of course, is that the image recognition needs to be accurate in order to work. You have agency when you type something into a search box—you can delete, revise, retype. But with a camera, the device decides what you’re looking at and, even more crucially, assumes what information you want to see in return. The good (or potentially creepy) news is that with every photo taken, search query typed, and command spoken, Google learns more about you, which means over time your results grow increasingly accurate. With its deep trove of knowledge in hand, Google seems determined to smooth out the remaining rough edges of technology. It’ll probably still be a while before the keyboard goes extinct, but with every shot you take on your camera, it’s getting one step closer."
interface  ai  google  communication  images  cameras  2017  snapchat  facebook  smartphones  lizstinson  imagerecognition  pinterest  keyboards  input  romankalantari  technology  amazon  sundarpichai  albertpereta  gieradlaput 
may 2017 by robertogreco
Learning Gardens
[See also: https://www.are.na/blog/case%20study/2016/11/16/learning-gardens.html
https://www.are.na/edouard-u/learning-gardens ]

"Learning Gardens is a meta-organization to support grassroots non-institutional learning, exploration, and community-building.

At its simplest, this means we want to help you start and run your own learning group.

At its best, we hope you and your friends achieve nirvana."



"Our Mission

It's difficult to carve out time for focused study. We support learning groups in any discipline to overcome this inertia and build their own lessons, community, and learning styles.
If we succeed in our mission, participating groups should feel empowered and free of institutional shackles.

Community-based learning — free, with friends, using public resources — is simply a more sustainable and distributed form of learning for the 21st century. Peer-oriented and interest-driven study often fosters the best learning anyway.

Learning Gardens is an internet-native organization. As such, we seek to embrace transparency, decentralization, and multiple access points."



"Joining

Joining us largely means joining our slack. Say hello!

If you own or participate in your own learning group, we additionally encourage you to message us for further information.

Organization

We try to use tools that are free, open, and relatively transparent.

Slack to communicate and chat.
Github and Google Drive to build public learning resources.

You're welcome to join and assemble with us on Are.na, which we use to find and collect research materials. In a way, Learning Gardens was born from this network.

We also use Notion and Dropbox internally."



"Our lovely learning groups:

Mondays [http://mondays.nyc/ ]
Mondays is a casual discussion group for creative thinkers from all disciplines. Its simple aim is to encourage knowledge-sharing and self-learning by providing a space for the commingling of ideas, for reflective conversations that might otherwise not be had.

Pixel Lab [http://morgane.com/pixel-lab ]
A community of indie game devs and weird web artists — we're here to learn from each other and provide feedback and support for our digital side projects.

Emulating Intelligence [https://github.com/learning-gardens/_emulating_intelligence ]
EI is a learning group organized around the design, implementation, and implications of artificial intelligence as it is increasingly deployed throughout our lives. We'll weave together the theoretical, the practical, and the social aspects of the field and link it up to current events, anxieties, and discussions. To tie it all together, we'll experiment with tools for integrating AI into our own processes and practices.

Cybernetics Club [https://github.com/learning-gardens/cybernetics-club ]
Cybernetics Club is a learning group organized around the legacy of cybernetics and all the fields it has touched. What is the relevance of cybernetics today? Can it provide us the tools to make sense of the world today? Better yet, can it give us a direction for improving things?

Pedagogy Play Lab [http://ryancan.build/pedagogy-play-lab/ ]
A reading club about play, pedagogy, and learning meeting biweekly starting soon in Williamsburg, Brooklyn.

[http://millennialfocusgroup.info/ ]
monthly irl discussion. 4 reading, collaborating, presenting, critiquing, and hanging vaguely identity-oriented, creatively-inclined, internet-aware, structurally-experimental networked thinking <<<>>> intersectional thinking

Utopia School [http://www.utopiaschool.org/ ]
Utopia School is an ongoing project that shares information about both failed and successful utopian projects and work towards new ones. For us, utopias are those spaces and initiatives that re-imagine the world in some crucial way. The school engages and connects people through urgent conversations, with the goal of exploring, archiving and distributing collective knowledge throughout this multi-city project.

A Pattern Language [https://github.com/learning-gardens/pattern_language ]
Biweekly reading group on A Pattern Language, attempting to reinterpret the book for the current-day."

[See also: "Getting Started with Learning Gardens: An introduction of sorts"
http://learning-gardens.co/2016/08/13/getting_started.html

"Hi, welcome to this place.

If you’re reading this, you’re probably wondering where to start! Try sifting through some links on our site, especially our resources, Github Organization, and Google Drive.

If you’re tired of reading docs and this website in general, we’d highly recommend you join our lively community in real time chat. We’re using Slack for this. It’s great.

When you enter the chat, you’ll be dumped in a channel called #_landing_pad. This channel is muted by default so that any channels you join feel fully voluntary.

We’ve recently started a system where we append any ”Learning Gardens”-related channels with an underscore (_), so it’s easy to tell which channels are meta (e.g. #_help), and which are related to actual learning groups (e.g. #cybernetics).

Everything is up for revision." ]
education  learninggardens  learningnetworks  networks  slack  aldgdp  artschools  learning  howwelearn  sfsh  self-directed  self-directedlearning  empowerment  unschooling  deschooling  decentralization  transparency  accessibility  bookclubs  readinggroups  utopiaschool  apatternlanguage  christopheralexander  pedagogy  pedagogyplaylab  cyberneticsclub  emulatingintelligence  pixellab  games  gaming  videogames  mondays  creativity  multidisciplinary  crossdisciplinary  interdisciplinary  ai  artificialintelligence  distributed  online  web  socialmedia  édouardurcades  artschool 
december 2016 by robertogreco
Werner-Herzog comenta en I am Werner Herzog, the filmmaker. AMA.
"Q: You’ve covered everything from the prehistoric Chauvet Cave to the impending overthrow of not-so-far-off futuristic artificial intelligence. What about humankind's history/capability terrifies you the most?

A: It's a difficult question, because it encompasses almost all of human history so far. What is interesting about this paleolithic cave is that we see with our own eyes the origins, the beginning of the modern human soul. These people were like us, and what their concept of art was, we do not really comprehend fully. We can only guess.

And of course now today, we are into almost futuristic moments where we create artificial intelligence and we may not even need other human beings anymore as companions. We can have fluffy robots, and we can have assistants who brew the coffee for us and serve us to the bed, and all these things. So we have to be very careful and should understand what basic things, what makes us human, what essentially makes us into what we are. And once we understand that, we can make our educated choices, and we can use our inner filters, our conceptual filters. How far would we use artificial intelligence? How far would we trust, for example into the logic of a self-driving car? Will it crash or not if we don't look after the steering wheel ourselves?

So, we should make a clear choice, what we would like to preserve as human beings, and for that, for these kinds of conceptual answers, I always advise to read books. Read read read read read! And I say that not only to filmmakers, I say that to everyone. People do not read enough, and that's how you create critical thinking, conceptual thinking. You create a way of how to shape your life. Although, it seems to elude us into a pseudo-life, into a synthetic life out there in cyberspace, out there in social media. So it's good that we are using Facebook, but use it wisely."
via:savasavasava  wernerherzog  2016  reading  ai  artificialintelligence  humanity  humans  humanism  criticalthinking  conceptualthinking  thinking  howwethink  howwelearn  socialmedia  cyberspace  redditama 
july 2016 by robertogreco
The Bot Power List 2016 — How We Get To Next
"Science fiction is full of bots that hurt people. HAL 9000 kills one astronaut and tries to kill another in 2001: A Space Odyssey; Ava in Ex Machina expertly manipulates the humans she meets to try and escape her cell; the T-800 is known as The Terminator for obvious reasons.

Even more common, though, are those bots clever and sentient enough to have real personality but undone through their naïveté — from Johnny Five in Short Circuit to the robotic cop in RoboCop, sci-fi is great at examining the dangers of greater intelligence when it’s open to manipulation or lacking concrete moral direction. A smarter bot, a more powerful bot, is also a bot that has more power to do evil things, and in the process expose human hubris.

These are all fictional examples, of course, but since we’re starting to see the tech industry shift its focus toward conversational bots as the future of, well, everything, maybe it offers us a useful way to define the power that a bot has. In this case, we’ll say that a bot is powerful if it could do powerfully evil things if it wanted to.

We’ve asked a number of experts to suggest what they think are the most powerful bots around today, in what is still an early stage for the industry. Together, those suggestions make up our first-ever Bot Power List."
bots  2016  googlenow  alexa  siri  ai  xiaoice  wordsmith  watson  hellobarbie  jillwatson  viv  cortana  amazon  apple  google  microsoft  facebook  eliza  luvo  lark  quartznews  hala  cyberlover  murdock  bendixon  brucewilcox  neomy  deepdrumpf  rbs  josephweizenbaum  irenechang  ibm  mattel 
june 2016 by robertogreco
A Neural Network Playground
"Tinker With a Neural Network Right Here in Your Browser.
Don’t Worry, You Can’t Break It. We Promise.

Um, What Is a Neural Network?

It’s a technique for building a computer program that learns from data. It is based very loosely on how we think the human brain works. First, a collection of software “neurons” are created and connected together, allowing them to send messages to each other. Next, the network is asked to solve a problem, which it attempts to do over and over, each time strengthening the connections that lead to success and diminishing those that lead to failure. For a more detailed introduction to neural networks, Michael Nielsen’s Neural Networks and Deep Learning is a good place to start. For a more technical overview, try Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
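
As a concrete companion to that description, here is a minimal sketch of the same idea in plain NumPy (not the playground's own code): a tiny network whose connection weights are repeatedly strengthened or weakened until it learns XOR.

```python
# Minimal sketch of the idea described above: software "neurons" connected
# by weights that get strengthened or weakened until the network solves a
# problem (here, XOR). Illustrative only; not the playground's own code.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20_000):
    hidden = sigmoid(X @ W1 + b1)                 # forward pass
    output = sigmoid(hidden @ W2 + b2)

    d_out = (y - output) * output * (1 - output)  # backward pass (MSE loss)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)

    W2 += lr * hidden.T @ d_out                   # strengthen/weaken connections
    b2 += lr * d_out.sum(axis=0)
    W1 += lr * X.T @ d_hid
    b1 += lr * d_hid.sum(axis=0)

print(np.round(output, 2))   # typically approaches [[0], [1], [1], [0]]
```
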

This Is Cool, Can I Repurpose It?

Please do! We’ve open sourced it on GitHub with the hope that it can make neural networks a little more accessible and easier to learn. You’re free to use it in any way that follows our Apache License. And if you have any suggestions for additions or changes, please let us know.

We’ve also provided some controls below to enable you to tailor the playground to a specific topic or lesson. Just choose which features you’d like to be visible below then save this link, or refresh the page.

(Customization controls include: show test data, discretize output, play button, learning rate, activation, regularization, regularization rate, problem type, which dataset, ratio of train data, noise level, batch size, and number of hidden layers.)
What Do All the Colors Mean?

Orange and blue are used throughout the visualization in slightly different ways, but in general orange shows negative values while blue shows positive values.

The data points (represented by small circles) are initially colored orange or blue, which correspond to positive one and negative one.

In the hidden layers, the lines are colored by the weights of the connections between neurons. Blue shows a positive weight, which means the network is using that output of the neuron as given. An orange line shows that the network is assigning a negative weight.

In the output layer, the dots are colored orange or blue depending on their original values. The background color shows what the network is predicting for a particular area. The intensity of the color shows how confident that prediction is.

Credits

This was created by Daniel Smilkov and Shan Carter. This is a continuation of many people’s previous work — most notably Andrej Karpathy’s convnet.js and Chris Olah’s articles about neural networks. Many thanks also to D. Sculley for help with the original idea and to Fernanda Viégas and Martin Wattenberg and the rest of the Big Picture and Google Brain teams for feedback and guidance."
neuralnetworks  data  computing  deeplearning  ai  danielsmilkov  shancarter 
april 2016 by robertogreco
San Andreas State: Animal Cam
"San Andreas Deer Cam is a live video stream from a computer running a hacked modded version of Grand Theft Auto V, hosted on Twitch.tv. The mod creates a deer and follows it as it wanders throughout the 100 square miles of San Andreas, a fictional state in GTA V based on California. The deer has been programmed to control itself and make its own decisions, with no one actually playing the video game. The deer is ‘playing itself’, with all activity unscripted… and unexpected. In the past 48 hours, the deer has wandered along a moonlit beach, caused a traffic jam on a major freeway, been caught in a gangland gun battle, and been chased by the police.

For more information about the San Andreas Deer Cam project, click here. [http://bwatanabe.com/GTA_V_WanderingDeer.html ]

To donate to the San Andreas Deer Cam, click here. All donations go directly to The Humane Society."
gta  gtav  deer  animals  ai  videogames  games  gaming  brentwatanabe 
march 2016 by robertogreco
Master of Go Board Game Is Walloped by Google Computer Program - The New York Times
"Mr. Hassabis said AlphaGo did not try to consider all the possible moves in a match, as a traditional artificial intelligence machine like Deep Blue does. Rather, it narrows its options based on what it has learned from millions of matches played against itself and in 100,000 Go games available online.

Mr. Hassabis said that a central advantage of AlphaGo was that “it will never get tired, and it will not get intimidated either.”

Kim Sung-ryong, a South Korean Go master who provided commentary during Wednesday’s match, said that AlphaGo made a clear mistake early on, but that unlike most human players, it did not lose its “cool.”

“It didn’t play Go as a human does,” he said. “It was a Go match with human emotional elements carved out.”

Mr. Lee said he knew he had lost the match after AlphaGo made a move so unexpected and unconventional that he thought “it was impossible to make such a move.”"
via:tealtan  alphago  ai  artificialintelligence  go  2016  games  deepmind  leesedol 
march 2016 by robertogreco
Chats with Bots | BBH Labs
"AI bots are everywhere. Or at least, chatter about chatbots is everywhere. The slick new Quartz app wants to msg you the news. Forbes launched their own official Telegram newsbot yesterday. Will 2016 be the year of the bot, the year we start chatting and stop worrying about whether the person(a) at the other end of the chat is human or not?

At Labs we like to get stuck in and get our hands dirty. Metaphorically. So we fired up Telegram, added some bots to our contact list, and started chatting. And here’s the resulting chat, screengrabbed for your edification."
bots  api  telegram  quartz  interface  ai  artificialintelligence  2016  jeremyettinghausen 
march 2016 by robertogreco
The Future of Chat Isn’t AI — Medium
"So if not AI, then what? What will bots let you do that was never possible before?

We think the answer is actually quite simple: For the first time ever, bots will let you instantly interact with the world around you. This is best illustrated through something that I experienced recently.

During last year’s baseball playoffs, I went to a Blue Jays game at the Rogers Centre. I was running late, so I went straight to my seat to catch as much of the game as I could. But when I got there, I realized I was the only one of my friends without a beer. So, with no beer guy in sight, I turned back to go get a beer. After 10 minutes of waiting in line, I finally got back to my seat. I had missed two home runs.

But good news! In the future, this will never have to happen again. The stadium is developing an app that will let you order from your seat. So next time, I won’t have to miss a beat — I’ll just order through the app. It will be great. Or will it?

Imagine I had sat down and found that there was a sticker on the back of the chair in front of me that said, “Want a beer? Download our app!” Sounds great! I’d unlock my phone, go to the App Store, search for the app, put in my password, wait for it to download, create an account, enter my credit card details, figure out where in the app I actually order from, figure out how to input how many beers I want and of what type, enter my seat number, and then finally my beer would be on its way.

Actually, I would have been better off just waiting in line.

And yet there are so many of these types of apps: apps to order train tickets at stations; apps to order food at restaurants; and apps to order movie tickets at theatres. Everyone wants you to just “download our app!” And yet, after spending millions of dollars developing them, how many people actually use them? My guess: not a lot.

But imagine the stadium one more time, except now instead of spending millions to develop an app, the stadium had spent thousands to develop a simple, text-based bot. I’d sit down and see a similar sticker: “Want a beer? Chat with us!” with a chat code beside it. I’d unlock my phone, open my chat app, and scan the code. Instantly, I’d be chatting with the stadium bot, and it’d ask me how many beers I wanted: “1, 2, 3, or 4.” It’d ask me what type: “Bud, Coors, or Corona.” And then it’d ask me how I wanted to pay: Credit card already on file (**** 0345), or a new card.

Chat app > Scan > 2 > Bud > **** 0345. Done."
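
[A minimal Python sketch of the text-only ordering flow described above. The prompts, beer list, and card number come from the example; everything else is invented.]

```python
# Minimal sketch of the stadium-bot exchange: a few fixed prompts, no app
# download, no account creation.
def beer_bot():
    count = input("Want a beer? How many: 1, 2, 3, or 4? ").strip()
    brand = input("What type: Bud, Coors, or Corona? ").strip()
    payment = input("Card on file (**** 0345) or a new card? ").strip()
    return f"Order placed: {count} x {brand}, charged to {payment}."

if __name__ == "__main__":
    print(beer_bot())
```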



"To be clear, this is just the beginning of the bots era, and there are many developments to come. The leaders in this space — Kik, WeChat, Line, Facebook, Slack, and Telegram — all have their own ideas about how this is all going to play out. But one thing I think we can all agree on is that chat is going to be the world’s next great operating system: a Bot OS (or, as we like to call it, BOS).

These developments open up new and giant opportunities for consumers, developers, and businesses. Chat apps will come to be thought of as the new browsers; bots will be the new websites. This is the beginning of a new internet."
chat  ai  artificialintelligence  2016  tedlivingston  kik  slack  telegram  facebook  ui  ux  interface  api  wechat  bots  qrcodes 
march 2016 by robertogreco
How to Think About Bots | Motherboard
"Who is responsible for the output and actions of bots, both ethically and legally? How does semi-autonomy create ethical constraints that limit the maker of a bot?"



"Given the public and social role they increasingly play—and whatever responsibility their creators assume—the actions of bots, whether implicitly or explicitly, have political outcomes. The last several years have seen a rise in bots being used to spread political propaganda, stymie activism and bolster social media follower lists of public figures. Activists can use bots to mobilize people around social and political causes. People working for a variety of groups and causes use bots to inject automated discourse on platforms like Twitter and Reddit. Over the last few years both government employees and opposition activists in Mexico have used bots in attempts to sway public opinion. Where do we draw the line between propaganda, public relations and smart communication?

Platforms, governments and citizens must step in and consider the purpose, and future, of bot technology before manipulative anonymity becomes a hallmark of the social bot."
bots  robots  ethics  ai  artificialintelligence  twitter  bot-ifesto  programming  coding  automation  samuelwoolley  danahboyd  meredithbroussard  madeleineelish  lainnafader  timhwang  alexislloyd  giladlotan  luisdanielpalacios  allisonparrish  giladrosner  saiphsavage  smanthashorey  socialbots  oliviataters  politics  policy 
march 2016 by robertogreco
Education Outrage: Now it is Facebook's turn to be stupid about AI
"What could Facebook be thinking here? We read stories to our children for many reasons. These are read because they have been around a long time, which is not a great reason. The reason to read frightening stories to children has never ben clear to me. The only value I saw in doing this sort of thing as a parent was to begin a discussion with the child about the story which might lead somewhere interesting. Now my particular children had been living in the real world at the time so they had some way to relate to the story because of their own fears, or because of experiences they might have had.

Facebook’s AI will be able to relate to these stories by matching words it has seen before. Oh good. It will not learn anything from the stories because it cannot learn anything from any story. Learning from stories means mapping your experiences (your own stories) to the new story and finding some commonalities and some differences. It also entails discussing those commonalties and differences with someone who is willing to have that conversation with you. In order to do that you have to be able to construct sentences on your own and be able to interpret your own experiences through conversations with your friends and family.

Facebook’s “AI” will not be doing this because it can’t. It has had no experiences. Apparently its experience is loading lots of text and counting patterns. Too bad there isn’t a children’s story about that.

Facebook hasn’t a clue about AI, but it will continue to spend money and accomplish nothing until AI is declared to have failed again,"
rogerschanck  2016  facebook  ai  artificialintelligence  algorithms  via:audreywatters  context  experience  understanding  stories  storytelling 
february 2016 by robertogreco
From AI to IA: How AI and architecture created interactivity - YouTube
"The architecture of digital systems isn't just a metaphor. It developed out of a 50-year collaborative relationship between architects and designers, on one side, and technologists in AI, cybernetics, and computer science, on the other. In this talk at the O'Reilly Design Conference in 2016, Molly Steenson traces that history of interaction, tying it to contemporary lessons aimed at designing for a complex world."
mollysteenson  2016  ai  artificialintelligence  douglasenglebart  symbiosis  augmentation  christopheralexander  nicholasnegroponte  richardsaulwurman  architecture  physical  digital  mitmedialab  history  mitarchitecturemachinegroup  technology  compsci  computerscience  cybernetics  interaction  structures  computing  design  complexity  frederickbrooks  computers  interactivity  activity  metaphor  marvinminsky  heuristics  problemsolving  kent  wardcunningham  gangoffour  objectorientedprogramming  apatternlanguage  wikis  agilesoftwaredevelopment  software  patterns  users  digitalspace  interactiondesign  terrywinograd  xeroxparc  petermccolough  medialab 
february 2016 by robertogreco
Why Do I Have to Call This App ‘Julie’? - The New York Times
"Technologies speak with recorded feminine voices because women “weren’t normally there to be heard,” Helen Hester, a media studies lecturer at Middlesex University, told me. A woman’s voice stood out. For example, an automated recording of a woman’s voice used in cockpit navigation becomes a beacon, a voice in stark contrast with that of everyone else, when all the pilots on board are men.

Ms. Hester lives in London, where the spectral sound of robotic women is piping from nearly every corner. Enter the Underground and you hear a disembodied woman announcing “the next station is Mornington Crescent” and the train’s signature canned message, “please mind the gap between the train and the platform.”

A similar voice — emotionless, timeless, with an accent difficult to place — emits from clocks and traffic lights, and inside elevators and supermarkets. The “coldness, the forthrightness of the voice” is what Ms. Hester finds striking. What human speaks with such emotionless authority? And, as Ms. Hester points out: “It’s not real authority. There’s a maternal edge to all of it. It is personal guidance rather than definite directions.”

And, she says, these voices can even play into people’s expectations of male authority because they aren’t actual women. People hear a woman’s voice, realize it is robotic, and “imagine a male programmer” did the actual work.

No one seems to market tech products in the image of the most famous virtual assistant in film history. Hal from “2001: A Space Odyssey” was so brilliant and manly that it attempted to kill off the crew of the spacecraft it was built to manage. Instead, people build what I call “Stepford apps.” These are the Internet’s answer to those old sci-fi robots in dresses mopping floors with manufactured enthusiasm."
ai  artificialintelligence  gender  joannemcneil  voices  siri  cortana  alexa  2015  sexism  apple  amazon  microsoft 
december 2015 by robertogreco
I spent a weekend at Google talking with nerds about charity. I came away … worried. - Vox
"To be fair, the AI folks weren't the only game in town. Another group emphasized "meta-charity," or giving to and working for effective altruist groups. The idea is that more good can be done if effective altruists try to expand the movement and get more people on board than if they focus on first-order projects like fighting poverty.

This is obviously true to an extent. There's a reason that charities buy ads. But ultimately you have to stop being meta. As Jeff Kaufman — a developer in Cambridge who's famous among effective altruists for, along with his wife Julia Wise, donating half their household's income to effective charities — argued in a talk about why global poverty should be a major focus, if you take meta-charity too far, you get a movement that's really good at expanding itself but not necessarily good at actually helping people.

And you have to do meta-charity well — and the more EA grows obsessed with AI, the harder it is to do that. The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession. And it's hard to imagine that yoking EA to one of the whitest and most male fields (tech) and academic subjects (computer science) will do much to bring more people from diverse backgrounds into the fold.

The self-congratulatory tone of the event didn't help matters either. I physically recoiled during the introductory session when Kerry Vaughan, one of the event's organizers, declared, "I really do believe that effective altruism could be the last social movement we ever need." In the annals of sentences that could only be said with a straight face by white men, that one might take the cake.

Effective altruism is a useful framework for thinking through how to do good through one's career, or through political advocacy, or through charitable giving. It is not a replacement for movements through which marginalized peoples seek their own liberation. If EA is to have any hope of getting more buy-in from women and people of color, it has to at least acknowledge that."
charity  philanthropy  ethics  2015  altruism  dylanmatthews  google  siliconvalley  ai  artificialintelligence 
november 2015 by robertogreco
Facebook, communication, and personhood - Text Patterns - The New Atlantis
"William Davies tells us about Mark Zuckerberg's hope to create an “ultimate communication technology,” and explains how Zuckerberg's hopes arise from a deep dissatisfaction with and mistrust of the ways humans have always communicated with one another. Nick Carr follows up with a thoughtful supplement:
If language is bound up in living, if it is an expression of both sense and sensibility, then computers, being non-living, having no sensibility, will have a very difficult time mastering “natural-language processing” beyond a certain rudimentary level. The best solution, if you have a need to get computers to “understand” human communication, may to be avoid the problem altogether. Instead of figuring out how to get computers to understand natural language, you get people to speak artificial language, the language of computers. A good way to start is to encourage people to express themselves not through messy assemblages of fuzzily defined words but through neat, formal symbols — emoticons or emoji, for instance. When we speak with emoji, we’re speaking a language that machines can understand.

People like Mark Zuckerberg have always been uncomfortable with natural language. Now, they can do something about it.

I think we should be very concerned about this move by Facebook. In these contexts, I often think of a shrewd and troubling comment by Jaron Lanier: “The Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?” In this sense, the degradation of personhood is one of Facebook's explicit goals, and Facebook will increasingly require its users to cooperate in lowering their standards of intelligence and personhood."
williamdavies  markzuckerberg  communication  technology  2015  facebook  alanjacobs  jaronlanier  turingtest  ai  artificialintelligence  personhood  dehumanization  machines 
september 2015 by robertogreco
silentrob/superscript · GitHub
"SuperScript is a dialog system + bot engine for creating human-like conversation chat bots. It exposes an expressive script for crafting dialogue and features text-expansion using wordnet and Information Retrieval and extraction using ConceptNet."
bots  chatbots  code  github  ai 
august 2015 by robertogreco
Teaching Machines and Turing Machines: The History of the Future of Labor and Learning
"In all things, all tasks, all jobs, women are expected to perform affective labor – caring, listening, smiling, reassuring, comforting, supporting. This work is not valued; often it is unpaid. But affective labor has become a core part of the teaching profession – even though it is, no doubt, “inefficient.” It is what we expect – stereotypically, perhaps – teachers to do. (We can debate, I think, if it’s what we reward professors for doing. We can interrogate too whether all students receive care and support; some get “no excuses,” depending on race and class.)

What happens to affective teaching labor when it runs up against robots, against automation? Even the tasks that education technology purports to now be able to automate – teaching, testing, grading – are shot through with emotion when done by humans, or at least when done by a person who’s supposed to have a caring, supportive relationship with their students. Grading essays isn’t necessarily burdensome because it’s menial, for example; grading essays is burdensome because it is affective labor; it is emotionally and intellectually exhausting.

This is part of our conundrum: teaching labor is affective not simply intellectual. Affective labor is not valued. Intellectual labor is valued in research. At both the K12 and college level, teaching of content is often seen as menial, routine, and as such replaceable by machine. Intelligent machines will soon handle the task of cultivating human intellect, or so we’re told.

Of course, we should ask what happens when we remove care from education – this is a question about labor and learning. What happens to thinking and writing when robots grade students’ essays, for example. What happens when testing is standardized, automated? What happens when the whole educational process is offloaded to the machines – to “intelligent tutoring systems,” “adaptive learning systems,” or whatever the latest description may be? What sorts of signals are we sending students?

And what sorts of signals are the machines gathering in turn? What are they learning to do?
Often, of course, we do not know the answer to those last two questions, as the code and the algorithms in education technologies (most technologies, truth be told) are hidden from us. We are becoming, as law professor Frank Pasquale argues, a “black box society.” And the irony is hardly lost on me that one of the promises of massive collection of student data under the guise of education technology and learning analytics is to crack open the “black box” of the human brain.

We still know so little about how the brain works, and yet, we’ve adopted a number of metaphors from our understanding of that organ to explain how computers operate: memory, language, intelligence. Of course, our notion of intelligence – its measurability – has its own history, one wrapped up in eugenics and, of course, testing (and teaching) machines. Machines now both frame and are framed by this question of intelligence, with little reflection on the intellectual and ideological baggage that we carry forward and hard-code into them."



"We’re told by some automation proponents that instead of a future of work, we will find ourselves with a future of leisure. Once the robots replace us, we will have immense personal freedom, so they say – the freedom to pursue “unproductive” tasks, the freedom to do nothing at all even, except I imagine, to continue to buy things.
On one hand that means that we must address questions of unemployment. What will we do without work? How will we make ends meet? How will this affect identity, intellectual development?

Yet despite predictions about the end of work, we are all working more. As games theorist Ian Bogost and others have observed, we seem to be in a period of hyper-employment, where we find ourselves not only working numerous jobs, but working all the time on and for technology platforms. There is no escaping email, no escaping social media. Professionally, personally – no matter what you say in your Twitter bio that your Tweets do not represent the opinions of your employer – we are always working. Computers and AI do not (yet) mark the end of work. Indeed, they may mark the opposite: we are overworked by and for machines (for, to be clear, their corporate owners).

Often, we volunteer to do this work. We are not paid for our status updates on Twitter. We are not compensated for our check-in’s in Foursquare. We don’t get kick-backs for leaving a review on Yelp. We don’t get royalties from our photos on Flickr.

We ask our students to do this volunteer labor too. They are not compensated for the data and content that they generate that is used in turn to feed the algorithms that run TurnItIn, Blackboard, Knewton, Pearson, Google, and the like. Free labor fuels our technologies: Forum moderation on Reddit – done by volunteers. Translation of the courses on Coursera and of the videos on Khan Academy – done by volunteers. The content on pretty much every “Web 2.0” platform – done by volunteers.

We are working all the time; we are working for free.

It’s being framed, as of late, as the “gig economy,” the “freelance economy,” the “sharing economy” – but mostly it’s the service economy that now comes with an app and that’s creeping into our personal not just professional lives thanks to billions of dollars in venture capital. Work is still precarious. It is low-prestige. It remains unpaid or underpaid. It is short-term. It is feminized.

We all do affective labor now, cultivating and caring for our networks. We respond to the machines, the latest version of ELIZA, typing and chatting away hoping that someone or something responds, that someone or something cares. It’s a performance of care, disguising what is the extraction of our personal data."



"Personalization. Automation. Management. The algorithms will be crafted, based on our data, ostensibly to suit us individually, more likely to suit power structures in turn that are increasingly opaque.

Programmatically, the world’s interfaces will be crafted for each of us, individually, alone. As such, I fear, we will lose our capacity to experience collectivity and resist together. I do not know what the future of unions looks like – pretty grim, I fear; but I do know that we must enhance collective action in order to resist a future of technological exploitation, dehumanization, and economic precarity. We must fight at the level of infrastructure – political infrastructure, social infrastructure, and yes technical infrastructure.

It isn’t simply that we need to resist “robots taking our jobs,” but we need to challenge the ideologies, the systems that loathe collectivity, care, and creativity, and that champion some sort of Randian individual. And I think the three strands at this event – networks, identity, and praxis – can and should be leveraged to precisely those ends.

A future of teaching humans not teaching machines depends on how we respond, how we design a critical ethos for ed-tech, one that recognizes, for example, the very gendered questions at the heart of the Turing Machine’s imagined capabilities, a parlor game that tricks us into believing that machines can actually love, learn, or care."
2015  audreywatters  education  technology  academia  labor  work  emotionallabor  affect  edtech  history  highered  highereducation  teaching  schools  automation  bfskinner  behaviorism  sexism  howweteach  alanturing  turingtest  frankpasquale  eliza  ai  artificialintelligence  robots  sharingeconomy  power  control  economics  exploitation  edwardthorndike  thomasedison  bobdylan  socialmedia  ianbogost  unemployment  employment  freelancing  gigeconomy  serviceeconomy  caring  care  love  loving  learning  praxis  identity  networks  privacy  algorithms  freedom  danagoldstein  adjuncts  unions  herbertsimon  kevinkelly  arthurcclarke  sebastianthrun  ellenlagemann  sidneypressey  matthewyglesias  karelčapek  productivity  efficiency  bots  chatbots  sherryturkle 
august 2015 by robertogreco
Matt Jones: Jumping to the End -- Practical Design Fiction on Vimeo
[Matt says (http://magicalnihilism.com/2015/03/06/my-ixd15-conference-talk-jumping-to-the-end/ ):

"This talk summarizes a lot of the approaches that we used in the studio at BERG, and some of those that have carried on in my work with the gang at Google Creative Lab in NYC.

Unfortunately, I can’t show a lot of that work in public, so many of the examples are from BERG days…

Many thanks to Catherine Nygaard and Ben Fullerton for inviting me (and especially to Catherine for putting up with me clowning around behind here while she was introducing me…)"]

[At ~35:00:
“[(Copy)Writers] are the fastest designers in the world. They are amazing… They are just amazing at that kind of boiling down of incredibly abstract concepts into tiny packages of cognition, language. Working with writers has been my favorite thing of the last two years.”
mattjones  berg  berglondon  google  googlecreativelab  interactiondesign  scifi  sciencefiction  designfiction  futurism  speculativefiction  julianbleecker  howwework  1970s  comics  marvel  marvelcomics  2001aspaceodyssey  fiction  speculation  technology  history  umbertoeco  design  wernerherzog  dansaffer  storytelling  stories  microinteractions  signaturemoments  worldbuilding  stanleykubrick  details  grain  grammars  computervision  ai  artificialintelligence  ui  personofinterest  culture  popculture  surveillance  networks  productdesign  canon  communication  johnthackara  macroscopes  howethink  thinking  context  patternsensing  systemsthinking  systems  mattrolandson  objects  buckminsterfuller  normanfoster  brianarthur  advertising  experiencedesign  ux  copywriting  writing  film  filmmaking  prototyping  posters  video  howwewrite  cognition  language  ara  openstudioproject  transdisciplinary  crossdisciplinary  interdisciplinary  sketching  time  change  seams  seamlessness 
march 2015 by robertogreco
The real robot economy and the bus ticket inspector | Science | The Guardian
"Hidden in these everyday, mundane interactions are different moral or ethical questions about the future of AI: if a job is affected but not taken over by a robot, how and when does the new system interact with a consumer? Is it ok to turn human social intelligence – managing a difficult customer – into a commodity? Is it ok that a decision lies with a handheld device, while the human is just a mouthpiece?

What does this mean for the second wave robot economy?

Mike Osborne and Carl Benedikt Frey from Oxford University have studied the risk of automation in the US economy, concluding that 47 per cent of jobs in the current workforce are at high risk of computerisation. They come to this conclusion by looking for jobs that can’t be automated; the 47 per cent is what’s left over. In their model, there are three bottlenecks that prevent automation:
…occupations that involve complex perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks are unlikely to be substituted by computer capital over the next decade or two.


These are bottlenecks which technological advances will find it hard to overcome. The authors predict that the next decade will see steps forward in the algorithms that automate cognitive tasks, including cutting edge techniques like machine learning, artificial intelligence and mobile robotics.

This second wave of the robot economy follows a first wave that automated manufacturing and repetitive manual tasks. So many of the desk jobs that our parents and grandparents would have done, like typing and manual data entry, are now becoming obsolete. And according to Osborne and Frey, some of the jobs that are most at risk of automation, were formerly present in droves at many city offices. This includes the likes of accountants, legal clerks and book keepers - dying breeds, and casualties of the robot economy. But Osborne and Frey think that tasks like navigating complex environments, creative thinking and social influence and persuasion will not be automated as part of these advances.

Some of my colleagues are interested in the second kind of task – creativity. They are working with Osborne and Frey to understand how resistant the creative economy is to automation: how many jobs in the creative economy involve truly creative tasks (if that’s not tautologous). Preliminary results look pretty good for creative occupations. 87 per cent are at low or no risk of automation.

Maybe service occupations where persuasion and influence are important will be saved too. The bus ticket inspector requires exactly the kind of social intelligence that Osborne and Frey argue a machine cannot replicate. But this doesn’t take into account the subtleties I witnessed on the top deck of the 76. It may not be job titles or wages that are most affected by the day-to-day of a robot economy. Automation of parts of a job, or of the context that someone works in, means that jobs not taken by machines are fundamentally changed in other ways. We may become slaves to hardwired decision-making systems.

To avoid this, we need to design human-machine jobs with the humans who will be part of them. I met Carla Brodley, Computer Scientist from Northeastern University in the US a few months ago. She applies advanced computing techniques to medical imaging, diagnosis and neuroscience. Brodley has publicly argued that the most interesting problems for machine learning come from real world uses of these computational techniques. She says the tough bit of her job is knowing when and how to bring the expert - doctor, radiologist, scientist - into the design of the algorithm. But she is avid that the success of her work depends entirely on this kind of user-led computational design. We need to find a Brodley for the bus ticket inspector.
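
[An illustrative toy in Python, not Osborne and Frey's actual model: an occupation counts as hard to automate when it leans heavily on one of the three bottlenecks they name, and everything left over is treated as at risk. The occupations and scores below are invented.]

```python
# Toy only, not Osborne and Frey's methodology; it mirrors their framing
# in miniature: bottleneck-heavy jobs resist automation, the rest do not.
BOTTLENECKS = ("perception_manipulation", "creative", "social")

occupations = {
    "data entry clerk":     {"perception_manipulation": 0.1, "creative": 0.1, "social": 0.1},
    "bus ticket inspector": {"perception_manipulation": 0.4, "creative": 0.2, "social": 0.8},
    "illustrator":          {"perception_manipulation": 0.5, "creative": 0.9, "social": 0.3},
}

for job, scores in occupations.items():
    shielded = max(scores[b] for b in BOTTLENECKS) >= 0.5  # relies on a bottleneck task
    print(f"{job}: automation risk {'low' if shielded else 'high'}")
```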

[via: "'The real robot economy and the bus ticket inspector' @pesska on why we need user-led computational design."
https://twitter.com/Superflux/status/567745423163789312 ]
automation  robots  2015  design  jessicabland  computationaldesign  technology  london  mikeosborne  carlbenediktfrey  computerization  economics  services  socialintelligence  ai  artificialintelligence 
february 2015 by robertogreco
Eyeo 2014 - Claire Evans on Vimeo
"Science Fiction & The Synthesized Sound – Turn on the radio in the year 3000, and what will you hear? When we make first contact with an alien race, will we—as in "Close Encounters of the Third Kind"—communicate through melody? If the future has a sound, what can it possibly be? If science fiction has so far failed to produce convincing future music, it won’t be for lack of trying. It’s just that the problem of future-proofing music is complex, likely impossible. The music of 1,000 years from now will not be composed by, or even for, human ears. It may be strident, seemingly random, mathematical; like the “Musica Universalis” of the ancients, it might not be audible at all. It might be the symphony of pure data. It used to take a needle, a laser, or a magnet to reproduce sound. Now all it takes is code. The age of posthuman art is near; music, like mathematics, may be a universal language—but if we’re too proud to learn its new dialects, we’ll find ourselves silent and friendless in a foreign future."
claireevans  sciencefiction  scifi  music  future  sound  audio  communication  aesthetics  robertscholes  williamgibson  code  composition  2014  johncage  film  history  ai  artificialintelligence  machines  universality  appreciation  language  turingtest 
february 2015 by robertogreco
Valley Of The Meatpuppets on Huffduffer
"The Valley of the Meatpuppets is an ethereal space where people, agents, thingbots, action heroes and big dogs coexist. In this new habitat, we are forming complex relationships with nebulous surveillance systems, machine intelligences and architectures of control, confronting questions about our freedom and capacity to act under invisible constraints."
anabjain  2014  dconstruct  dconstruct2014  bigdog  surveillance  machineintelligence  ai  artificialintelligence  technology  design  systesmthinking  individualism  privacy  future  wearable  wearables  nsa  complexity  googleglass  intenetofthings  control 
september 2014 by robertogreco
Deep Belief by Jetpac - teach your phone to recognize any object on the App Store on iTunes
"Teach your iPhone to see! Teach it to recognize any object using the Jetpac Deep Belief framework running on the phone.

See the future - this is the latest in Object Recognition technology, on a phone for the first time.

The app helps you to teach the phone to recognize an object by taking a short video of that object, and then teach it what is not the object, by taking a short video of everything around, except that object. Then you can scan your surroundings with your phone camera, and it will detect when you are pointing at the object which you taught it to recognize.

We trained our Deep Belief Convolutional Neural Network on a million photos, and like a brain, it learned concepts of textures, shapes and patterns, and combining those to recognize objects. It includes an easily-trainable top layer so you can recognize the objects that you are interested in.

If you want to build custom object recognition into your own iOS app, you can download our Deep Belief SDK framework. It's an implementation of the Krizhevsky convolutional neural network architecture for object recognition in images, running in under 300ms on an iPhone 5S, and available under an open BSD License."
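
[A hedged sketch of the "easily-trainable top layer" idea, not Jetpac's code: a big pretrained convnet is treated as a frozen feature extractor, and a small classifier on top learns "object" vs "not the object" from the two short videos. Random vectors stand in for real convnet features, and scikit-learn is an assumption, not something the app description mentions.]

```python
# Sketch of the frozen-convnet-plus-trainable-top-layer idea. Random
# vectors stand in for convnet features; logistic regression plays the
# role of the trainable top layer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
object_frames = rng.normal(loc=1.0, size=(200, 512))      # video of the object
background_frames = rng.normal(loc=0.0, size=(200, 512))  # video of everything else

X = np.vstack([object_frames, background_frames])
y = np.array([1] * 200 + [0] * 200)

top_layer = LogisticRegression(max_iter=1000).fit(X, y)

new_frame = rng.normal(loc=1.0, size=(1, 512))            # camera pointed at the object
print("object detected:", bool(top_layer.predict(new_frame)[0]))
```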

[via: https://medium.com/message/the-fire-phone-at-the-farmers-market-34f51c2ba885 petewarden ]

[See also: http://petewarden.com/2014/04/08/how-to-add-a-brain-to-your-smart-phone/ ]
applications  ios  ios7  iphone  ipad  objects  objectrecognition  identification  objectidentification  mobile  phones  2014  learning  deepbelief  petewarden  ai  artificialintelligence  cameras  computervision  commonplace  deeplearning 
june 2014 by robertogreco
The Fire Phone at the farmers market — The Message — Medium
"With the exception of a few paintings, all of Amazon’s demo “items” were commercial products: things with ISBNs, bar codes, and/or spectral signatures. Things with price tags.

We did not see the Fire Phone recognize a eucalyptus tree.

There is reason to suspect the Fire Phone cannot identify a goldfinch.

And I do not think the Fire Phone can tell me which of these “items” is kale.

This last one is the most troubling, because a system that greets a bag of frozen vegetables with a bar code like an old friend but draws a blank on a basket of fresh greens at the farmers market—that’s not just technical. That’s political.

But here’s the thing: The kale is coming.

There’s an iPhone app called Deep Belief, a tech demo from programmer Pete Warden. It’s free."



"If Amazon’s Fire Phone could tell kale from Swiss chard, if it could recognize trees and birds, I think its polarity would flip entirely, and it would become a powerful ally of humanistic values. As it stands, Firefly adds itself to the forces expanding the commercial sphere, encroaching on public space, insisting that anything interesting must have a price tag. But of course, that’s Amazon: They’re in The Goldfinch detection business, not the goldfinch detection business.

If we ever do get a Firefly for all the things without price tags, we’ll probably get it from Google, a company that’s already working hard on computer vision optimized for public space. It’s lovely to imagine one of Google’s self-driving cars roaming around, looking everywhere at once, diligently noting street signs and stop lights… and noting also the trees standing alongside those streets and the birds perched alongside those lights.

Lovely, but not likely.

Maybe the National Park Service needs to get good at this.

At this point, the really deeply humanistic critics are thinking: “Give me a break. You need an app for this? Buy a bird book. Learn the names of trees.” Okay, fine. But, you know what? I have passed so much flora and fauna in my journeys around this fecund neighborhood of mine and wondered: What is that? If I had a humanistic Firefly to tell me, I’d know their names by now."
amazon  technology  robinsloan  objects  objectrecognition  identification  objectidentification  firefly  mobile  phones  2014  jeffbezos  consumption  learning  deepbelief  petewarden  ai  artificialintelligence  cameras  computervision  commonplace  deeplearning 
june 2014 by robertogreco
Episode Eighty Six: Solid 2 of 2; Requests - GOV.UK 2018; Next
"Today, reading LinkedIn recommendations as they came in felt like reading eulogies. Apart from me not quite being dead. Not yet, at least. Or, I was dead and I hadn't realised it yet. It doesn't matter, anyway: all the recommendations from people I've enjoyed working with over the past three years just feel, unfortunately, like double-edged knives - ultimately good but only really readable with a twist.

Right now is a bad time, one of those terrible times when it doesn't even really matter that one of my good friends has pulled me aside, insisted that I have something to eat and sat patiently with me in a pizza joint while I stare off into space and mumble. It doesn't matter that he's great and doing these things for me and telling me that this too will pass: I am hearing all of the words that he's saying, the sounds he's making that make all the little bits of air vibrate and hit my ear and undergo some sort of magic transformation as they get understood in my brain. But they don't connect. Understanding is different from feeling. And right now, I'm feeling useless and broken and disconnected and above all, sad. But I can't feel those things. I have meetings to go to. Hustle to hust. Against what felt at times like the relentless optimism of an O'Reilly conference I had to finally hide away for a while, behind a Diet Coke and a slice of cheesecake, because dealing with that much social interaction was just far too draining.

And so I'm hiding again tonight, instead of out with friends, because it's just too hard to smile and pretend that everything's OK when it's demonstrably not."



"Over the past couple of days at Solid it's become almost painfully apparent that the Valley, in broad terms, is suffering from a chronic lack of empathy in terms of how it both sees and deals with the rest of the world, not just in communicating what it's doing and what it's excited about, but also in its acts. Sometimes these are genuine gaffes - mistakes that do not betray a deeper level of consideration, thinking or strategy. Other times, they *are* genuine, and they betray at the very least a naivety as to consequence or second-order impact (and I'm prepared to accept that without at least a certain level of naivety or lack of consideration for impact we'd find it pretty hard as a species to ever take advantage of any technological advance), but let me instead perhaps point to a potential parallel. 

There are a bunch of people worried about what might happen if, or when, we finally get around to a sort of singularity event and we have to deal with a genuine superhuman artificial intelligence that can think (and act) rings around us, never mind improving its ability at a rate greater than O(n). 

One of the reasons to be afraid of such a strong AI was explained by Eliezer Yudkowsky:

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

And here's how the rest of the world, I think, can unfairly perceive Silicon Valley: Silicon Valley doesn't care about humans, really. Silicon Valley loves solving problems. It doesn't hate you and it doesn't love you, but you do things that it can use for something else. Right now, those things include things-that-humans-are-good-at, like content generation and pointing at things. Right now, those things include things like getting together and making things. But solving problems is more fun than looking after people, and sometimes solving problems can be rationalised away as looking after people because hey, now that $20bn worth of manufacturing involved in making planes has gone away, people can go do stuff that they want to do, instead of having to make planes!

Would that it were that easy.

So anyway. I'm thinking about the Internet of Things and how no one's done a good job of branding it or explaining it or communicating it to Everyone Else. Because that needs doing.

--

As ever, thanks for the notes. Keep them coming in. If you haven't said hi already, please do and let me know where you heard about my newsletter. And if you like the newsletter, please consider telling some friends about it."
danhon  2014  siliconvalley  ai  empathy  problemsolving  society  californianideology  unemplyment  capitalism  depression  elizeryudkowsky  humans  singularity 
may 2014 by robertogreco
George Dyson: No Time Is There--- The Digital Universe and Why Things Appear To Be Speeding Up - The Long Now
"The digital big bang

When the digital universe began, in 1951 in New Jersey, it was just 5 kilobytes in size. "That's just half a second of MP3 audio now," said Dyson. The place was the Institute for Advanced Study, Princeton. The builder was engineer Julian Bigelow. The instigator was mathematician John von Neumann. The purpose was to design hydrogen bombs.

Bigelow had helped develop signal processing and feedback (cybernetics) with Norbert Wiener. Von Neumann was applying ideas from Alan Turing and Kurt Gödel, along with his own. They were inventing and/or gates, addresses, shift registers, rapid-access memory, stored programs, a serial architecture—all the basics of the modern computer world, all without thought of patents. While recuperating from brain surgery, Stanislaw Ulam invented the Monte Carlo method of analysis as a shortcut to understanding solitaire. Shortly Von Neumann's wife Klári was employing it to model the behavior of neutrons in a fission explosion. By 1953, Nils Barricelli was modeling life itself in the machine—virtual digital beings competed and evolved freely in their 5-kilobyte world.

In the few years they ran that machine, from 1951 to 1957, they worked on the most difficult problems of their time, five main problems that are on very different time scales—26 orders of magnitude in time—from the lifetime of a neutron in a bomb's chain reaction measured in billionths of a second, to the behavior of shock waves on the scale of seconds, to weather prediction on a scale of days, to biological evolution on the scale of centuries, to the evolution of stars and galaxies over billions of years. And our lives, measured in days and years, is right in the middle of the scale of time. I still haven't figured that out."

Julian Bigelow was frustrated that the serial, address-constrained, clock-driven architecture of computers became standard because it is so inefficient. He thought that templates (recognition devices) would work better than addresses. The machine he had built for von Neumann ran on sequences rather than a clock. In 1999 Bigelow told George Dyson, "Sequence is different from time. No time is there." That's why the digital world keeps accelerating in relation to our analog world, which is based on time, and why from the perspective of the computational world, our world keeps slowing down.

The acceleration is reflected in the self-replication of computers, Dyson noted: "By now five or six trillion transistors per second are being added to the digital universe, and they're all connected." Dyson is a kayak builder, emulating the wood-scarce Arctic natives to work with minimum frame inside a skin craft. But in the tropics, where there is a surplus of wood, natives make dugout canoes, formed by removing wood. "We're now surrounded by so much information," Dyson concluded, "we have to become dugout canoe builders. The buzzword of last year was 'big data.' Here's my definition of the situation: Big data is what happened when the cost of storing information became less than the cost of throwing it away."

--Stewart Brand"

[See also: http://blog.longnow.org/02014/04/04/george-dyson-seminar-flashback-no-time-is-there/ ]
data  longnow  georgedyson  computing  history  stewartbrand  2013  ai  artificialintelligence  time  julianbigelow 
april 2014 by robertogreco
Patent US8156160 - Poet personalities - Google Patents
"A method of generating a poet personality including reading poems, each of the poems containing text, generating analysis models, each of the analysis models representing one of poems and storing the analysis models in a personality data structure. The personality data structure further includes weights, each of the weights associated with each of the analysis models. The weights include integer values."



BACKGROUND
This invention relates to generating poetry from a computer.

A computer may be used to generate text, such as poetry, to an output device and/or storage device. The displayed text may be in response to a user input or via an automatic composition process. Devices for generating poetry via a computer have been proposed which involve set slot grammars in which certain parts of speech, that are provided in a list, are selected for certain slots.

SUMMARY
In an aspect, the invention features a method of generating a poet personality including reading poems, each of the poems containing text, generating analysis models, each of the analysis models representing one of poems and storing the analysis models in a personality data structure. The personality data structure further includes weights, each of the weights associated with each of the analysis models. The weights include integer values.

In another aspect a poet's assistant method including loading a word processing program, receiving a word in the word processing program provided by a user, displaying poet windows in response to receiving the word and processing the word in each of the windows. The poet windows may include combinations of a finish word window, a finish line window and a finish poem window. Processing the word in the finish word window includes loading an analysis model, locating the word in the analysis model and generating a proposed word in conjunction with the author analysis model.

The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.



"1. A computer-implemented method of generating a poet personality comprising:
analyzing by one or more computers a plurality of poems, each of the poems containing a plurality of words;

generating by the one or more computers a plurality of analysis models, each of said analysis models representing one of said plurality of poems, by

marking by the one or more computers words in the poems with rhyme numbers with words that rhyme with each other having the same rhyme number;

generating by the one or more computers a data structure that specifies n-grams found in the text, with each analysis model having a set of weights, bigram, trigram and quadgram exponents; and

storing the plurality of analysis models in a personality data structure including a set of parameters that control poetry generation using the personality data structure."
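
[A rough Python illustration of the steps the claim describes (shared rhyme numbers plus an n-gram "analysis model" with integer weights), not the patented implementation. The last-three-letters rhyme key and the tiny corpus are crude stand-ins.]

```python
# Rough illustration only: mark rhyming line-endings with shared rhyme
# numbers, build bigram counts as a small weighted model, sample new lines.
import random
from collections import defaultdict

random.seed(2)
poems = ["the night is long and cold",
         "a story never told",
         "we wander through the dark",
         "and listen for the lark"]

# Crude rhyme key: the last three letters stand in for phonetic analysis.
rhyme_numbers = {}
for line in poems:
    key = line.split()[-1][-3:]
    rhyme_numbers.setdefault(key, len(rhyme_numbers))

# Bigram counts act as the "analysis model" with integer weights.
bigrams = defaultdict(lambda: defaultdict(int))
for line in poems:
    words = line.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def generate_line(start="the", length=5):
    words = [start]
    for _ in range(length - 1):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate_line())
print({line.split()[-1]: rhyme_numbers[line.split()[-1][-3:]] for line in poems})
```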
poetry  poets  patents  raykurzweil  google  johnkeklak  1999  via:soulellis  artificialintelligence  ai 
march 2014 by robertogreco
Why Her Will Dominate UI Design Even More Than Minority Report | Wired Design | Wired.com
"In Her, the future almost looks more like the past."



"Jonze had help in finding the contours of this slight future, including conversations with designers from New York-based studio Sagmeister & Walsh and an early meeting with Elizabeth Diller and Ricardo Scofidio, principals at architecture firm DS+R. As the film’s production designer, Barrett was responsible for making it a reality.

Throughout that process, he drew inspiration from one of his favorite books, a visual compendium of futuristic predictions from various points in history. Basically, the book reminded Barrett what not to do. “It shows a lot of things and it makes you laugh instantly, because you say, ‘those things never came to pass!’” he explains. “But often times, it’s just because they over-thought it. The future is much simpler than you think.”

That’s easy to say in retrospect, looking at images of Rube Goldbergian kitchens and scenes of commute by jet pack. But Jonze and Barrett had the difficult task of extrapolating that simplification forward from today’s technological moment.

Theo’s home gives us one concise example. You could call it a “smart house,” but there’s little outward evidence of it. What makes it intelligent isn’t the whizbang technology but rather simple, understated utility. Lights, for example, turn off and on as Theo moves from room to room. There’s no app for controlling them from the couch; no control panel on the wall. It’s all automatic. Why? “It’s just a smart and efficient way to live in a house,” says Barrett.

Today’s smartphones were another object of Barrett’s scrutiny. “They’re advanced, but in some ways they’re not advanced whatsoever,” he says. “They need too much attention. You don’t really want to be stuck engaging them. You want to be free.” In Barrett’s estimation, the smartphones just around the corner aren’t much better. “Everyone says we’re supposed to have a curved piece of flexible glass. Why do we need that? Let’s make it more substantial. Let’s make it something that feels nice in the hand.”"
her  spikejonze  design  ai  film  technology  ui  future  minorityreport  diller+scofidio  elizabethdiller  lizdiller  dillerscofidio  designfiction  speculativedesign  speculativefiction 
january 2014 by robertogreco
Omniorthogonal: Hostile AI: You’re soaking in it!
"Corporations are at least somewhat constrained by the need to actually provide some service that is useful to people. Exxon provides energy, McDonald’s provides food, etc. The exception to this seems to be the financial industry. These institutions consume vast amounts of wealth and intelligence to essentially no human end. Of all human institutions, these seem the most parasitical and dangerous. Because of their ability to extract wealth, they are also siphoning off great amounts of human energy and intelligence — they have their own parallel universe of high-speed technology, for instance.

The financial system as a whole functions as a hostile AI. It has its own form of intelligence, it has interests that are distant or hostile to human goals. It is quite artificial, and quite intelligent in an alien sort of way. While it is not autonomous in the way we envision killer robots or Skynet, it is effectively autonomous of human control, which makes it just as dangerous."

[via and more: http://mini.quietbabylon.com/post/44276219648/the-singularity-already-happened-we-got-corporations ]
ai  singularity  corporations  corporatism  economics  finance  parasitism  2013 
march 2013 by robertogreco
SYNDICATED COLUMN: You're Not Underemployed. You're Underpaid. | Ted Rall's Rallblog
"The solution is clear: to guarantee everyone, whether or not he or she holds a job, a minimum salary sufficient to cover housing, transportation, education, medical care and, yes, discretionary income. Unfortunately, we’re stuck in an 18th century mindset. We’re nowhere close to detaching money from work. The Right wants to get rid of the minimum wage. On the Left, advocates for a Universal Living Wage nevertheless stipulate that a decent income should go to those who work a 40-hour week.

Ford proposes a Basic Income Guarantee based on performance of non-work activities; volunteering at a soup kitchen would be considered compensable work. But even this “radical” proposal doesn’t go far enough.

Whatever comes next, revolutionary overthrow or reform of the existing system, Americans are going to have to accept a reality that will be hard for a nation of strivers to take: we’re going to have to start paying people to sit at home."
universallivingwage  gamechanging  workweek  shiftlessness  ai  tedrall  2012  automation  economics  work  via:leisurearts  leisurearts  productivity  basicincomeguarantee  labor  martinford  post-productiveeconomy  universalbasicincome  artleisure  ubi 
december 2012 by robertogreco
The Singularity is not coming - Cognitive Social Web - A better web, for a better world.
"I would like to simply argue that scientific progress is in fact linear, and this despite the capitalization of past results into current research (“accelerating returns”), and despite an exponentially increasing population of scientists and engineers working on advancing it (resource explosion). And since I don’t want to argue in the realm of opinion, I am going to propose a simple, convincing mathematical model of the progress of science. Using the same model, I’ll point out that a hypothetical self-improving AI would actually see its own research progress and intelligence stagnate soon enough, rather than explode —unless we provide it with exponentially increasing computing resources, in which case it may do linear progress (or even better, given a fast enough exponential rate of resource increase). … Intelligence is just a skill, more precisely a meta-skill that defines your ability to get new skills. But imagination is a fucking superpower. Do not rely solely on your intelligence and hard work to make an impact on this world, or even luck, it’s not going to work. After all the total quantity of intelligence and hard work available around is millionfold what you can provide —you’re just a drop of water in the ocean. Rather use your imagination, the one thing that makes you a beautiful unique snowflake. Intelligence and hard work should be merely at the service of our imagination. Think outside of the box. Break out. Shake the axioms."
singularity  intelligence  AI  imagination  2012  science  via:Preoccupations 
august 2012 by robertogreco
Bruce Sterling's Turing Centenary Speech | Beyond The Beyond | Wired.com
Discussed: weirdness, femininity, AI skepticism, the aesthetics of computational art. Sort of a mess but consistently interesting.
ai  technology  gender  via:jbushnell  brucesterling  newaesthetic  art  alanturing 
july 2012 by robertogreco
Q&A;: Hacker Historian George Dyson Sits Down With Wired's Kevin Kelly | Wired Magazine | Wired.com
"In some creation myths, life arises out of the earth; in others, life falls out of the sky. The creation myth of the digital universe entails both metaphors. The hardware came out of the mud of World War II, and the code fell out of abstract mathematical concepts. Computation needs both physical stuff and a logical soul to bring it to life…"

"…When I first visited Google…I thought, my God, this is not Turing’s mansion—this is Turing’s cathedral. Cathedrals were built over hundreds of years by thousands of nameless people, each one carving a little corner somewhere or adding one little stone. That’s how I feel about the whole computational universe. Everybody is putting these small stones in place, incrementally creating this cathedral that no one could even imagine doing on their own."
artificialintelligence  ai  software  nuclearbombs  stanulam  hackers  hacking  alanturing  coding  klarivanneumann  nilsbarricelli  MANIAC  digitaluniverse  biology  digitalorganisms  computers  computing  freemandyson  johnvanneumann  interviews  creation  kevinkelly  turing'smansion  turing'scathedral  turing  wired  history  georgedyson 
february 2012 by robertogreco
Gardens and Zoos – Blog – BERG
"So, much simpler systems that people or pets can find places in our lives as companions. Legible motives, limited behaviours and agency can illicit response, empathy and engagement from us.

We think this is rich territory for design as the things around us start to acquire means of context-awareness, computation and connectivity.

As we move from making inert tools – that we are unequivocally the users of – to companions, with behaviours that animate them – we wonder whether we should go straight from this…

Ultimately we’re interested in the potential for new forms of companion species that extend us. A favourite project for us is Natalie Jeremijenko’s “Feral Robotic Dogs” – a fantastic example of legibility, seamful-ness and BASAAP…

We need to engage with the complexity and make it open up to us.

To make evident, seamful surfaces through which we can engage with puppy-smart things."
williamsburroughs  chrisheathcote  nataliejeremijenko  companionship  simplicity  context-awareness  artificialintelligence  ai  behavior  empathy  2012  interactiondesign  interaction  internetofthings  basaap  robots  future  berglondon  berg  mattjones  design  spimes  iot 
january 2012 by robertogreco
Why Siri Is (Probably) So Good • Quisby
"If anybody’s wondering why Siri is so good when the 4S comes out in a few weeks, this is almost certainly why. (I highly doubt the iPhone’s CPU isn’t capable of processing speech recognition on its own. And I just heard Gruber on 5by5 live speculating that the phone takes a first pass at interpreting the Siri command before sending it to the cloud, suggesting the cloud isn’t there for interpretation, having actually used it.) Pretty interesting—and, ultimately, unsurprising—that Google and Apple are responsible for what are probably the biggest advances in speech recognition in decades. Fuck your stupid iPhone 5 rumours, this is some insane future shit."
siri  apple  2011  iphone  ios  google  speechrecognition  ai  richardgaywood  technology 
october 2011 by robertogreco
The Smell of Control: Fear, Focus, Trust - we make money not art
"What should a robot smell like? Kevin Grennan has augmented three existing industrial robots with 'sweat glands'. Each uses a specific property of human sub-conscious behaviour in response to a chemical stimulus: one makes humans about to undergo surgery more trustful, another one makes women working in production line more focused and the third one is a bomb disposal robot that emits the smell of fear.

The contrast between the physical anti-anthropomorphic nature of the machines and the olfactory anthropomorphism highlights the absurd nature of the trickery at play in all anthropomorphism…

The Smell of Control: Fear, Focus, Trust also involved demonstrating the limits of anthropomorphism. The video of the android's birthday shows a lovely android attempting to recreate the most straightforward moment of a birthday celebration: blowing out the candles on the birthday cake…"
kevingrennan  robots  design  anthropomorphism  androids  behavior  ai  senses  smell  uncannyvalley  2011  wmmna  fear  control  trust  réginedebatty 
july 2011 by robertogreco
rep.licants.org, a virtual prosthesis for the online introvert - we make money not art
"rep.licants.org allows people to install a bot on their Facebook and/or Twitter account. The bot will combine the activity the user is already having on other channels such as youtube or flickr with a set of keywords selected by the user to attempt and simulate that person's activity, feeding their account with more frequent updates, engaging in discussions with other users and adding new people to their list of contacts."
wmmna  bots  rep.licants.org  socialmedia  introverts  facebook  flickr  twitter  wikileaks  mobile  matthieucherubini  automation  ai  turing  2011 
june 2011 by robertogreco
Where the F**k Was I? (A Book) | booktwo.org
"Where Selvadurai is interested in the space between two human cultural identities, I suppose I am interested in the space where human and artificial cultures overlap. (“Artificial” is wrong; feels—what? Prejudiced? Colonial? Anthropocentric? Carboncentric?)

There are no digital natives but the devices themselves; no digital immigrants but the devices too. They are a diaspora, tentatively reaching out into the world to understand it and themselves, and across the network to find and touch one another. This mapping is a byproduct, part of the process by which any of us, separate and indistinct so long, find a place in the world."
books  iphone  maps  mobile  data  jamesbridle  shyamselvaduri  kevinslavin  digitalnatives  digital  devices  internet  web  singularity  mapping  place  meaning  meaningmaking  digitalimmigrants  understanding  learning  exploration  networkedlearning  networks  ai  2011 
june 2011 by robertogreco
Playing Duchamp | Login
"Marcel Duchamp is widely recognized for his contribution to conceptual art, but his lifelong obsession was the game of chess, in which he achieved the rank of Master. Working with the records of his chess matches, I have created a computer program to play chess as if it were Marcel Duchamp. I invite all artists, skilled and unskilled at this classic game, to play against a Duchampian ghost."
art  chess  ai  games  marcelduchamp 
march 2011 by robertogreco
Artificial Empathy – Blog – BERG
"Artificial Empathy is at the core of B.A.S.A.A.P. – it’s what powers Kacie Kinzer’s Tweenbots, and it’s what Byron and Nass were describing in The Media Equation to some extent, which of course brings us back to Clippy.

Clippy was referenced by Alex in her talk, and has been resurrected again as an auto-critique to current efforts to design and build agents and ‘things with behaviour’

One thing I recalled which I don’t think I’ve mentioned in previous discussions was that back in 1997, when Clippy was at the height of his powers – I did something that we’re told (quite rightly to some extent) no-one ever does – I changed the defaults.

You might not know, but there were several skins you could place on top of Clippy from his default paperclip avatar – a little cartoon Einstein, an ersatz Shakespeare… and a number of others."
ai  robotics  emotion  design  artificialempathy  empathy  bigdog  robots  mattjones  berg  berglondon  machines  dogs  behavior  adaptivepotentiation  play  seriousplay  toys  culture  human  basaap  emotionalrobots  emoticons  alexdeschamps-sonsino  reallyinterestinggroup  2011  animals 
february 2011 by robertogreco
n+1: N1BReading, Part 2
"The Lost Books of the Odyssey by Zachary Mason—that book was awesome. It came out in 2007 from a tiny publisher & was republished by FSG last year, at which point my esteemed friend Mansbach gave it a review…I think he was less enthusiastic than I have since become. The book is not just a game w/ the Odyssey…but a genuine rewriting of it. For what was the thing about Odysseus? He was crafty; he was smarter than everyone else. But what did it mean to be smarter than a bunch of peasants; what did it mean to be a logician 600 years before the birth of Pythagoras? Mason puts the ingeniousness, the cleverness, & the math back into Odysseus & back also into contemporary literature. It’s interesting that, according to the jacket copy, Mason in his day-to-day life works on AI: Computers too are pre-logical, full of force but lacking reason. Working with computers all those years, Mason must himself have come to feel like Odysseus among the Agamemnon-era Greeks." —Keith Gessen
books  odyssey  lists  n+1  zacharymason  math  ai  literature 
february 2011 by robertogreco
RORY HYDE PROJECTS / BLOG » Blog Archive » ‘Know No Boundaries’: an interview with Matt Webb of BERG London
"we attempt to invent things and create culture. It’s not just enough to invent something and see it once, you have to change the world around you, get underneath it, interfere with it somehow, because otherwise you’re just problem solving. And I wont say that design has an exclusive hold over this – you can invent things and change culture with art, music, business practices, ethnography, market research; all of these are valid too – design just happens to be the way we do it…our things should be hopeful, and not just functional…beautiful, inventive and mainstream…you could see our work as experimental, or science-fiction, or futuristic…our design is essentially a political act. We design ‘normative’ products, normative being that you design for the world as it should be. Invention is always for the world as it should be, and not for the world you are in…Design these products and you’ll move the world just slightly in that direction."
mattwebb  berg  berglondon  design  invention  hope  culture  change  purpose  innovation  scifi  sciencefiction  designfiction  beauty  future  inventingthefuture  speculative  speculativedesign  fractionalai  ai  brucesterling  evolutionarysoup  storytelling  isaacasimov  arthurcclarke  argoscatalog  schooloscope  behavior  evocativeobjects  collaboration  functionalism  technology  architecture  people  structure  groups  experience  interdisciplinary  tinkering  multidisciplinary  play  playfulness  crossdisciplinary  flip  gamechanging 
january 2011 by robertogreco
The Do Lectures | Matt Webb
"Matt Webb is MD of the design studio BERG, which invents products and designs new media. Projects include Popular Science+ for the Apple iPad, solid metal phone prototypes for Nokia, a bendy map of Manhattan called Here & There, and an electronic puppet that brings you closer to your friends.

Matt speaks on design and technology, is co-author of Mind Hacks - cognitive psychology for a general audience - and if you were to sum up his design interests in one word, it would be “politeness.” He lives in London in a flat with a wonky floor."
mattwebb  design  designfiction  computing  ai  scifi  sciencefiction  berg  berglondon  future  futurism  retrofuture  space  speculativedesign  2010  dolectures  books  film  thinkingnebula  nebulas  history  automation  toys  productdesign  iphone  schooloscope  redlaser  mechanicalturk  magic  virtualpets  commoditization  robotics  anyshouse  twitter  internetofthings  ubicomp  anybots  faces  pareidolia  fractionalai  fractionalhorsepower  andyshouse  weliveinamazingtimes  spacetravel  spaceexploration  spimes  iot 
october 2010 by robertogreco
Matt Webb – What comes after mobile « Mobile Monday Amsterdam
"Matt Webb talks about how slightly smart things have invaded our lives over the past years. People have been talking about artificial intelligence for years but the promise has never really come through. Matt shows how the AI promise has transformed and now seems to be coming to us in the form of simple toys instead of complex machines. But this talks is about much more then AI, Matt also introduces chatty interfaces & hard math for trivial things."

[via: http://preoccupations.tumblr.com/post/1157711285/what-comes-after-mobile-matt-webb ]
mattwebb  berg  berglondon  future  mobile  technology  ai  design  productinvention  invention  spacebinding  timebinding  energybinding  spimes  internetofthings  anybot  ubicomp  glowcaps  geography  context  privacy  glanceableuse  cloud  embedded  chernofffaces  understanding  math  mathematics  augmentedreality  redlaser  neuralnetworks  mechanicalturk  shownar  toys  lanyrd  iot  ar 
september 2010 by robertogreco
B.A.S.A.A.P. – Blog – BERG [Be As Smart As A Puppy]
"Imagine a household of hunchbots.

Each of them working across a little domain within your home. Each building up tiny caches of emotional intelligence about you, cross-referencing them with machine learning across big data from the internet. They would make small choices autonomously around you, for you, with you – and do it well. Surprisingly well. Endearingly well.

They would be as smart as puppies. …

That might be part of the near-future: being surrounded by things that are helping us, that we struggle to build a model of how they are doing it in our minds. That we can’t directly map to our own behaviour. A demon-haunted world. This is not so far from most people’s experience of computers (and we’re back to Byron and Nass) but we’re talking about things that change their behaviour based on their environment and their interactions with us, and that have a certain mobility and agency in our world."
berg  berglondon  mattjones  hunch  priorityinbox  gmail  biomimicry  design  future  intelligence  uncannyvalley  adamgreenfield  everyware  ubicomp  internetofthings  data  ai  machinelearning  spimes  basaap  biomimetics  iot 
september 2010 by robertogreco
Students, Meet Your New Teacher, Mr. Robot - NYTimes.com
"Standing on a polka-dot carpet at a preschool on the campus of the University of California, San Diego, a robot named RUBI is teaching Finnish to a 3-year-old boy.

RUBI looks like a desktop computer come to life: its screen-torso, mounted on a pair of shoes, sprouts mechanical arms and a lunchbox-size head, fitted with video cameras, a microphone and voice capability. RUBI wears a bandanna around its neck and a fixed happy-face smile, below a pair of large, plastic eyes.

It picks up a white sneaker and says kenka, the Finnish word for shoe, before returning it to the floor. “Feel it; I’m a kenka.”

In a video of this exchange, the boy picks up the sneaker, says “kenka, kenka” — and holds up the shoe for the robot to see.

In person they are not remotely humanlike, most of today’s social robots. Some speak well, others not at all. Some move on two legs, others on wheels. Many look like escapees from the Island of Misfit Toys."
robots  robotics  education  autism  ai  schools  teaching  ucsd 
july 2010 by robertogreco
Siri - Your Virtual Personal Assistant
"No more endless clicking on links and pages to get things done on the Internet. Delegate the work to Siri and relax while Siri takes care of it for you.
ai  mobile  applications  iphone  semanticweb  search  voice  ios 
july 2010 by robertogreco
Self-organizing map - Wikipedia
"A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. Self-organizing maps are different from other artificial neural networks in the sense that they use a neighborhood function to preserve the topological properties of the input space."
maps  mathematics  networks  optimization  datamining  database  clustering  classification  algorithms  ai  learning  programming  research  statistics  visualization  neuralnetworks  mapping  som  self-organizingmaps 
june 2010 by robertogreco
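
For anyone curious how the self-organizing map described in the entry above behaves in practice, here is a minimal NumPy sketch. The grid size, learning-rate schedule and Gaussian neighbourhood are illustrative assumptions of my own, not details taken from the Wikipedia article.

```python
# Minimal self-organizing map sketch (illustrative parameters, not from the
# bookmarked article). Trains a 2-D grid of weight vectors on unlabelled data.
import numpy as np

def train_som(data, grid_w=10, grid_h=10, epochs=100, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 2-D SOM on `data` of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    # One weight vector per node on the 2-D grid.
    weights = rng.random((grid_w, grid_h, n_features))
    # Grid coordinates, used by the neighbourhood function.
    gx, gy = np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij")

    for epoch in range(epochs):
        # Learning rate and neighbourhood radius shrink as training proceeds.
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 1e-3

        for x in rng.permutation(data):
            # Best-matching unit: the node whose weights are closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood around the BMU on the grid.
            grid_dist2 = (gx - bi) ** 2 + (gy - bj) ** 2
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))
            # Pull each node's weights toward x, scaled by the neighbourhood.
            weights += lr * h[..., None] * (x - weights)

    return weights

# Example: map 3-D colour vectors onto a 10x10 grid.
colours = np.random.default_rng(1).random((200, 3))
som = train_som(colours)
```

The neighbourhood function is what makes the map topology-preserving: nodes near the best-matching unit on the grid are pulled toward the same input, so neighbouring nodes end up representing similar inputs.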
Artificial Intelligence Brings Musicians Back From the Dead, Allowing All-Stars of All Time to Jam | Popular Science
"New software, developed by North Carolina-based Zenph Sound Innovations, is something like a Pandora for live musical style; sophisticated software analyzes musicians based on how they sound on old, archaic recordings. The software can then reconstruct songs as they would have sounded if those musicians had recorded in a modern studio and on superior media.

But it doesn't end there. Zenph is working on a means to not only recreate old performances, but to dissect a style to the point that it can manifest an artist's personal touch into pieces he or she never performed in life. Meaning the software could potentially lift Jimmy Page out of Black Dog and replace him with, say, Jimi Hendrix, just to see how it sounds."
music  annabelscheme  podcast  simulation  ai  simulations 
march 2010 by robertogreco
collision detection: Garry Kasparov, cyborg [more: http://snarkmarket.com/2010/5194]
"What I love about Kasparov’s algorithm — “Weak human + machine + better process was superior to a strong computer alone and … superior to a strong human + machine + inferior process” — is that it suggests serious rewards accrue to those who figure out the best way to use thought-enhancing software. (Or rather, those who figure out a way that’s best for them; people always use tools in slightly different, idiosyncratic ways.) The process matters as much as the software itself. How often do you check it? When do you trust the help it’s offering, and when do you ignore it?"
chess  transhumanism  ai  technology  psychology  future  cyborg  gaming  computers  computing  process  garrtkasparov 
february 2010 by robertogreco
uwnews.org | Computers unlock more secrets of the mysterious Indus Valley script | University of Washington News and Information
"The team led by a University of Washington researcher has used computers to extract patterns in ancient Indus symbols. The study, published this week in the Proceedings of the National Academy of Sciences, shows distinct patterns in the symbols' placement in sequences and creates a statistical model for the unknown language."
language  linguistics  ai  ancienthistory  ancientcivilization  indusvalley 
august 2009 by robertogreco
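
As a rough illustration of what "patterns in the symbols' placement in sequences" can mean computationally, here is a toy bigram model over a made-up sign corpus. It is a sketch of the general idea only, not the statistical model used in the PNAS study; the corpus and sign names are hypothetical.

```python
# Toy bigram model over a made-up sign corpus (illustration only; this is
# not the method from the PNAS paper referenced above).
from collections import Counter, defaultdict

def bigram_model(sequences):
    """Estimate P(next sign | current sign) from a list of sign sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {
        a: {b: n / sum(followers.values()) for b, n in followers.items()}
        for a, followers in counts.items()
    }

# Hypothetical corpus: each string is one inscription, each character one sign.
corpus = ["ABAC", "ABAD", "CABA", "ABAB"]
model = bigram_model(corpus)
print(model["A"])  # relative frequencies of the signs that follow "A"
```

Strong asymmetries in these conditional frequencies are the kind of non-random sequence structure the study points to; a purely random sign system would show roughly uniform follower distributions.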
BLDGBLOG: A Drone Amidst the Ruins
"Accompanying Napoleon's expeditionary force was a kind of secondary army of "savants": scientists, researchers, archaeologists, linguists, and other scholars who were there, ostensibly, to produce a scientific record of Nile civilization, but who, conveniently for Napoleon, also "offered moral cover for the invasion." ... "what would the 21st-century equivalent of these savants be? How interesting, I'd suggest, to imagine an army of Artificially Intelligent, wireless translation drones sent into the ruins of ancient temple complexes; they descend through otherwise inaccessible partly collapsed passages and domed vaults beneath hillsides in order to interpret the walls around them, narrating for the first time a vast and unfolding dream of gods and ancient earthquakes, their LEDs reflecting in colored glass mosaics on the floor. Maybe they'd even use Twitter."
bldgblog  napoleon  egypt  future  ai  drones  history 
april 2009 by robertogreco
Do-ism « Magical Nihilism [see also: http://brainfood.howies.co.uk/footprints/instorematic/]
"I’m a designer that mainly works with digital materials, and while the pleasure of tinkering with a machine is something that I get quite a lot in software, to tinker in hardware and software (especially Meccano) is a rarer thing. It seems to activate a way of thinking with the eye, the mind and the hand that is entirely natural, and the playful problem-solving instincts of childhood come rushing back. Kevin Kelly writes in an essay about Artificial Intelligence that problem-solving is not just an abstract process of the mind, but something that happens in the world, and brands those who don’t believe this as indulging in ‘thinkism’. The intelligence of the hand, and the eye, and the body, working with material things in the world, instead of abstract symbols in a computer you might call ‘Do-ism’."
make  do-ism  mattjones  tangible  childhood  making  tinkering  russelldavies  kevinkelly  ai  thinkism  tcsnmy 
february 2009 by robertogreco
YouTube - robot generates a conception of itself
"it looks like a four-armed starfish, but so far it's unaware of its own shape. After flailing its arms for a while, however, the robot gets a sense of its design and begins to walk. The real feat comes when engineers remove a part of its leg: The robot senses a change in its structure and begins walking in a different way to compensate. The demonstration is the first proof that a robot can generate a conception of itself and then adapt to damage, a handy skill to have in unpredictable environments."
via:adamgreenfield  robots  intelligence  robotics  ai  consciousness 
october 2008 by robertogreco
Darwin at Home in Ten Minutes [see also: http://www.darwinathome.org/]
"This is a video of animations of Darwin at Home creatures which result from survival of the fittest and random mutation. Narrative by Gerald de Jong the author of the software behind the project."
locomotion  mathematics  science  evolution  biology  animation  modeling  via:kottke  graphics  walking  ai 
october 2008 by robertogreco
Kevin Kelly -- The Technium - Another One for the Machine
"Last week...a software program running on borrowed supercomputers...beat a US Go professional...Go has been Turing'd [as well as chess and checkers]. Driving a car has been Turing'd. The list of human cognitive activities that normal humans believe computers can't do is very short; Make art. Create a novel, symphony, movie. Have a conversation. Laugh at a joke. Are there other things people popularly believe computers can't do?"
go  chess  checkers  turing  singularity  future  ai  computing 
august 2008 by robertogreco
Kevin Kelly on the next 5,000 days of the web | Video on TED.com
"At the 2007 EG conference, Kevin Kelly shares a fun stat: The World Wide Web, as we know it, is only 5,000 days old. Now, Kelly asks, how can we predict what's coming in the next 5,000 days?"
onemachine  kevinkelly  via:grahamje  spimes  ubicomp  internet  ubiquitous  cloudcomputing  cloud  brain  convergence  digital  ai  semanticweb  future  futurism  predictions  technology  ted  statistics  data  email  communication  computing  computers  trends  media  web  networks 
july 2008 by robertogreco
Intel: Human and computer intelligence will merge in 40 years
"Most aspects of our lives, in fact, will be very different as we close in on the year 2050. Computing will be less about launching applications and more about living lives in which computers are inextricably woven into our daily activities."
everyware  future  intelligence  singularity  via:preoccupations  metaverse  ubicomp  virtualworlds  ai  computing  intel 
july 2008 by robertogreco
Edge 250 - ENGINEERS' DREAMS By George Dyson
"Data that are associated frequently by search requests are locally replicated—establishing physical proximity, in the real universe, that is manifested computationally as proximity in time. Google was more than a map. Google was becoming something else
georgedyson  sciencefiction  scifi  singularity  google  intelligence  artificial  ai  dreaming  science  programming  fiction  internet  literature 
july 2008 by robertogreco
Infoporn: Tap Into the 12-Million-Teraflop Handheld Megacomputer
"next stage in technological evolution is...the One Machine...hardware is assembled from our myriad devices, its software is written by our collective online behavior...the Machine also includes us. After all, our brains are programming & underpinning it"
computing  wired  cloud  kevinkelly  cloudcomputing  evolution  singularity  science  innovation  infodesign  collectiveintelligence  intelligence  computers  human  networks  mobile  mind  visualization  internet  future  brain  crowdsourcing  ai  data  it  learning2.0  trends  storage 
july 2008 by robertogreco
http://vavatch.co.uk/guide/
"This is a complete walkthrough for the Internet game of the Spielberg movie 'A.I.'. It gives away everything and speculates like mad. It's written linearly and assumes no background knowledge. You'll be able to find out about the latest updates of The Gu
ai  arg  walkthrough  games  gaming  marketing  cloudmakers  tv  film  microsoft  mysteries 
june 2008 by robertogreco