robertogreco : deeplearning (8 bookmarks)

Impakt Festival 2017 - Performance: ANAB JAIN. HQ - YouTube
[Embedded here: http://impakt.nl/festival/reports/impakt-festival-2017/impakt-festival-2017-anab-jain/ ]

"'Everything is Beautiful and Nothing Hurts': @anab_jain's expansive keynote @impaktfestival weaves threads through death, transcience, uncertainty, growthism, technological determinism, precarity, imagination and truths. Thanks to @jonardern for masterful advise on 'modelling reality', and @tobias_revell and @ndkane for the invitation."
[https://www.instagram.com/p/BbctTcRFlFI/ ]
anabjain  2017  superflux  death  aging  transience  time  temporary  abundance  scarcity  future  futurism  prototyping  speculativedesign  predictions  life  living  uncertainty  film  filmmaking  design  speculativefiction  experimentation  counternarratives  designfiction  futuremaking  climatechange  food  homegrowing  smarthomes  iot  internetofthings  capitalism  hope  futures  hopefulness  data  dataviz  datavisualization  visualization  williamplayfair  society  economics  wonder  williamstanleyjevons  explanation  statistics  williambernstein  prosperity  growth  latecapitalism  propertyrights  jamescscott  objectivity  technocrats  democracy  probability  scale  measurement  observation  policy  ai  artificialintelligence  deeplearning  algorithms  technology  control  agency  bias  biases  neoliberalism  communism  present  past  worldview  change  ideas  reality  lucagatti  alextaylor  unknown  possibility  stability  annalowenhaupttsing  imagination  ursulaleguin  truth  storytelling  paradigmshifts  optimism  annegalloway  miyamotomusashi  annatsing
november 2017 by robertogreco
Physiognomy’s New Clothes – Blaise Aguera y Arcas – Medium
"In 1844, a laborer from a small town in southern Italy was put on trial for stealing “five ricottas, a hard cheese, two loaves of bread […] and two kid goats”. The laborer, Giuseppe Villella, was reportedly convicted of being a brigante (bandit), at a time when brigandage — banditry and state insurrection — was seen as endemic. Villella died in prison in Pavia, northern Italy, in 1864.

Villella’s death led to the birth of modern criminology. Nearby lived a scientist and surgeon named Cesare Lombroso, who believed that brigantes were a primitive type of people, prone to crime. Examining Villella’s remains, Lombroso found “evidence” confirming his belief: a depression on the occiput of the skull reminiscent of the skulls of “savages and apes”.

Using precise measurements, Lombroso recorded further physical traits he found indicative of derangement, including an “asymmetric face”. Criminals, Lombroso wrote, were “born criminals”. He held that criminality is inherited, and carries with it inherited physical characteristics that can be measured with instruments like calipers and craniographs [1]. This belief conveniently justified his a priori assumption that southern Italians were racially inferior to northern Italians.

The practice of using people’s outer appearance to infer inner character is called physiognomy. While today it is understood to be pseudoscience, the folk belief that there are inferior “types” of people, identifiable by their facial features and body measurements, has at various times been codified into country-wide law, providing a basis to acquire land, block immigration, justify slavery, and permit genocide. When put into practice, the pseudoscience of physiognomy becomes the pseudoscience of scientific racism.

Rapid developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era, in which machine-learned models embed biases present in the human behavior used for model development. Whether intentional or not, this “laundering” of human prejudice through computer algorithms can make those biases appear to be justified objectively.

A recent case in point is Xiaolin Wu and Xi Zhang’s paper, “Automated Inference on Criminality Using Face Images”, submitted to arXiv (a popular online repository for physics and machine learning researchers) in November 2016. Wu and Zhang’s claim is that machine learning techniques can predict the likelihood that a person is a convicted criminal with nearly 90% accuracy using nothing but a driver’s license-style face photo. Although the paper was not peer-reviewed, its provocative findings generated a range of press coverage. [2]

Many of us in the research community found Wu and Zhang’s analysis deeply problematic, both ethically and scientifically. In one sense, it’s nothing new. However, the use of modern machine learning (which is both powerful and, to many, mysterious) can lend these old claims new credibility.

In an era of pervasive cameras and big data, machine-learned physiognomy can also be applied at unprecedented scale. Given society’s increasing reliance on machine learning for the automation of routine cognitive tasks, it is urgent that developers, critics, and users of artificial intelligence understand both the limits of the technology and the history of physiognomy, a set of practices and beliefs now being dressed in modern clothes. Hence, we are writing both in depth and for a wide audience: not only for researchers, engineers, journalists, and policymakers, but for anyone concerned about making sure AI technologies are a force for good.

We will begin by reviewing how the underlying machine learning technology works, then turn to a discussion of how machine learning can perpetuate human biases."



"Research shows that the photographer’s preconceptions and the context in which the photo is taken are as important as the faces themselves; different images of the same person can lead to widely different impressions. It is relatively easy to find a pair of images of two individuals matched with respect to age, race, and gender, such that one of them looks more trustworthy or more attractive, while in a different pair of images of the same people the other looks more trustworthy or more attractive."



"On a scientific level, machine learning can give us an unprecedented window into nature and human behavior, allowing us to introspect and systematically analyze patterns that used to be in the domain of intuition or folk wisdom. Seen through this lens, Wu and Zhang’s result is consistent with and extends a body of research that reveals some uncomfortable truths about how we tend to judge people.

On a practical level, machine learning technologies will increasingly become a part of all of our lives, and like many powerful tools they can and often will be used for good — including to make judgments based on data faster and fairer.

Machine learning can also be misused, often unintentionally. Such misuse tends to arise from an overly narrow focus on the technical problem, hence:

• Lack of insight into sources of bias in the training data;
• Lack of a careful review of existing research in the area, especially outside the field of machine learning;
• Not considering the various causal relationships that can produce a measured correlation;
• Not thinking through how the machine learning system might actually be used, and what societal effects that might have in practice.

Wu and Zhang’s paper illustrates all of the above traps. This is especially unfortunate given that the correlation they measure — assuming that it remains significant under more rigorous treatment — may actually be an important addition to the already significant body of research revealing pervasive bias in criminal judgment. Deep learning based on superficial features is decidedly not a tool that should be deployed to “accelerate” criminal justice; attempts to do so, like Faception’s, will instead perpetuate injustice."
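
[A toy sketch, not from the article, of the traps above: a "criminality" classifier that scores well only because a photographic-context feature (here, whether the subject smiles, as in ID photos versus mugshot-style photos) is confounded with the label. All names and numbers are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data: y = 1 for "convicted" photos, y = 0 for ID photos.
# Smiling is a property of the photo source, not the person, yet it is
# strongly confounded with the label.
y = rng.integers(0, 2, n)
smile = rng.binomial(1, np.where(y == 1, 0.1, 0.8))  # context feature
noise = rng.normal(size=(n, 5))                      # uninformative "face" features
X = np.column_stack([smile, noise])

clf = LogisticRegression().fit(X[:1500], y[:1500])
print("accuracy with confound:", clf.score(X[1500:], y[1500:]))   # ~0.85

# Rescore with the confound removed: everyone now smiles at the same
# rate, and the apparent signal evaporates.
X_fair = np.column_stack([rng.binomial(1, 0.5, 500), noise[1500:]])
print("accuracy without confound:", clf.score(X_fair, y[1500:]))  # ~0.5
```

A headline accuracy figure in a setup like this measures the photographic confound, not any property of faces.]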
blaiseaguerayarcas  physiognomy  2017  facerecognition  ai  artificialintelligence  machinelearning  racism  bias  xiaolinwu  xizhang  race  profiling  racialprofiling  giuseppevillella  cesarelombroso  pseudoscience  photography  chrononet  deeplearning  alexkrizhevsky  ilyasutskever  geoffreyhinton  gillevi  talhassner  alexnet  mugshots  objectivity  giambattistadellaporta  francisgalton  samuelnorton  josiahnott  georgegiddon  charlesdarwin  johnhoward  thomasclarkson  williamshakespeare  isaacnewton  ernsthaeckel  scientificracism  jamesweidmann  faception  criminality  lawenforcement  faces  dorothealange  mikeburton  trust  trustworthiness  stephenjaygould  philippafawcett  roberthughes  testosterone  gender  criminalclass  aggression  risk  riskassessment  judgement  brianholtz  shermanalexie  feedbackloops  identity  disability  ableism  disabilities
may 2017 by robertogreco
Image-to-Image Demo - Affine Layer
"Recently, I made a Tensorflow port of pix2pix by Isola et al., covered in the article Image-to-Image Translation in Tensorflow. I've taken a few pre-trained models and made an interactive web thing for trying them out. Chrome is recommended.

The pix2pix model works by training on pairs of images such as building facade labels to building facades, and then attempts to generate the corresponding output image from any input image you give it. The idea is straight from the pix2pix paper, which is a good read."
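
[Not from the port itself, but a minimal sketch of the objective the pix2pix paper describes: the generator is trained against an adversarial term plus an L1 distance to the paired target image, with lambda = 100 in Isola et al. Variable names are illustrative.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_fake_logits, generated, target, lam=100.0):
    # Adversarial term: push the discriminator to call the fake "real".
    adv = bce(tf.ones_like(disc_fake_logits), disc_fake_logits)
    # L1 term: stay close to the paired ground-truth output image.
    l1 = tf.reduce_mean(tf.abs(target - generated))
    return adv + lam * l1
```

The L1 term ties the output to the specific paired image; the adversarial term keeps the result from collapsing into a blurry average.]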

[See also: https://phillipi.github.io/pix2pix/

"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either."



"Here we show comprehensive results from each experiment in our paper. Please see the paper for details on these experiments.

Effect of the objective
Cityscapes
Facades

Effect of the generator architecture
Cityscapes

Effect of the discriminator patch scale
Cityscapes
Facades

Additional results
Map to aerial
Aerial to map
Semantic segmentation
Day to night
Edges to handbags
Edges to shoes
Sketches to handbags
Sketches to shoes"]
machinelearning  art  drawing  via:meetar  deeplearning  neuralnetworks 
february 2017 by robertogreco
A Neural Network Playground
"Tinker With a Neural Network Right Here in Your Browser.
Don’t Worry, You Can’t Break It. We Promise.

Um, What Is a Neural Network?

It’s a technique for building a computer program that learns from data. It is based very loosely on how we think the human brain works. First, a collection of software “neurons” are created and connected together, allowing them to send messages to each other. Next, the network is asked to solve a problem, which it attempts to do over and over, each time strengthening the connections that lead to success and diminishing those that lead to failure. For a more detailed introduction to neural networks, Michael Nielsen’s Neural Networks and Deep Learning is a good place to start. For a more technical overview, try Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
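
[A minimal sketch of the idea in that paragraph, in plain numpy: a tiny two-layer network whose connection weights are repeatedly strengthened or weakened against the error on a toy problem (XOR). Sizes and the learning rate are arbitrary choices, not from the playground.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # input -> hidden connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # hidden -> output connections
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)        # hidden "neuron" activations
    p = sigmoid(h @ W2 + b2)        # the network's predictions
    # Backpropagate the squared error through both layers...
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * (1 - h ** 2)
    # ...and nudge every connection against its share of the error.
    W2 -= 0.5 * h.T @ dp; b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(axis=0)

print(p.round(2).ravel())  # moves toward [0, 1, 1, 0]
```
]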

This Is Cool, Can I Repurpose It?

Please do! We’ve open sourced it on GitHub with the hope that it can make neural networks a little more accessible and easier to learn. You’re free to use it in any way that follows our Apache License. And if you have any suggestions for additions or changes, please let us know.

We’ve also provided some controls below to enable you to tailor the playground to a specific topic or lesson. Just choose which features you’d like to be visible below then save this link, or refresh the page.

Show test data
Discretize output
Play button
Learning rate
Activation
Regularization
Regularization rate
Problem type
Which dataset
Ratio train data
Noise level
Batch size
# of hidden layers

What Do All the Colors Mean?

Orange and blue are used throughout the visualization in slightly different ways, but in general orange shows negative values while blue shows positive values.

The data points (represented by small circles) are initially colored orange or blue, which correspond to negative one and positive one.

In the hidden layers, the lines are colored by the weights of the connections between neurons. Blue shows a positive weight, which means the network is using that output of the neuron as given. An orange line shows that the network is assigning a negative weight.

In the output layer, the dots are colored orange or blue depending on their original values. The background color shows what the network is predicting for a particular area. The intensity of the color shows how confident that prediction is.

Credits

This was created by Daniel Smilkov and Shan Carter. This is a continuation of many people’s previous work — most notably Andrej Karpathy’s convnet.js and Chris Olah’s articles about neural networks. Many thanks also to D. Sculley for help with the original idea and to Fernanda Viégas and Martin Wattenberg and the rest of the Big Picture and Google Brain teams for feedback and guidance."
neuralnetworks  data  computing  deeplearning  ai  danielsmilkov  shancarter 
april 2016 by robertogreco
Random Radicals: A Fake Kanji Experiment
[via:
http://prostheticknowledge.tumblr.com/post/136754440176/random-radicals-continuation-of-project-by
"As humans, we are able to communicate with others by drawing pictures, and somehow this has evolved into modern language. The ability to express our thoughts is a very powerful tool in our society. Being able to write is generally more difficult than just being able to read, and this is especially true for the Chinese language. From personal experience, being able to write Chinese is a lot more difficult than just being able to read Chinese and requires a greater understanding of the language.

We now have machines that can help us accurately classify images and read handwritten characters. However, for machines to gain a deeper understanding of the content they are processing, they will also need to be able to generate such content. The next natural step is to have machines draw simple pictures of what they are thinking about, and develop an ability to express themselves. Seeing how machines produce drawings may also provide us with some insights into their learning process.

In this work, we have trained a machine to learn Chinese characters by exposing it to a Kanji database. The machine learns by trying to form invariant patterns of the shapes and strokes that it sees, rather than recording exactly what it sees into memory, kind of like how our own brains work. Afterwards, using its neural connections, the machine attempts to write something out, stroke-by-stroke, onto the screen."]
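
[Illustrative sketch only (untrained weights, hypothetical names) of the stroke-by-stroke loop the excerpt describes: a recurrent cell emits one pen step at a time, and each step is fed back as the next input. The model in the post uses a mixture-density output layer; a plain dense head stands in here.

```python
import tensorflow as tf

cell = tf.keras.layers.LSTMCell(256)
head = tf.keras.layers.Dense(3)            # -> (dx, dy, pen-down logit)

step = tf.zeros([1, 3])                    # start-of-character input
state = [tf.zeros([1, 256]), tf.zeros([1, 256])]
strokes = []
for _ in range(50):                        # draw up to 50 pen steps
    out, state = cell(step, state)
    dx, dy, pen_logit = tf.unstack(head(out)[0])
    pen = tf.cast(tf.sigmoid(pen_logit) > 0.5, tf.float32)
    step = tf.reshape(tf.stack([dx, dy, pen]), [1, 3])  # feed back in
    strokes.append(step.numpy().ravel())   # accumulate (dx, dy, pen)
```
]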

[See also: http://blog.otoro.net/2015/12/28/recurrent-net-dreams-up-fake-chinese-characters-in-vector-format-with-tensorflow/
via: http://prostheticknowledge.tumblr.com/post/136134267951/recurrent-net-dreams-up-fake-chinese-characters

"… I think a more interesting task is to generate data, which I view as an extension to classifying data. Like how being able to write a Chinese character demonstrate more understanding than merely knowing how to read that character, I think being able to generate content is also key to understanding that content. Being able generate a picture of a 22 year old attractive lady is much more impressive than merely being able to estimate that the this woman is likely around 22 years of age.

An example of a generative task is the translation machines developed to translate English into another language in real time. Generative art and music has been increasingly popular. Recently, there has been work on using techniques such as generative adversarial networks (GANs) to generate bitmap pictures of fake images that look like real ones, like fake cats, fake faces, fake bedrooms and even fake anime characters, and to me, those problems are a lot more exciting to work on, and a natural extension to classification problems."]

[See also: http://www.genekogan.com/works/a-book-from-the-sky.html
via: http://prostheticknowledge.tumblr.com/post/136157512956/a-book-from-the-sky-%E5%A4%A9%E4%B9%A6-another-neural-network

"A Book from the Sky 天书

Another Neural Network Chinese character project - this one by Gene Kogan which generates new Kanji from a handwritten dataset:

These images were created by a deep convolutional generative adversarial network (DCGAN) trained on a database of handwritten Chinese characters, made with code by Alec Radford based on the paper by Radford, Luke Metz, and Soumith Chintala in November 2015.

The title is a reference to the 1988 book by Xu Bing, who composed thousands of fictitious glyphs in the style of traditional Mandarin prints of the Song and Ming dynasties.

A DCGAN is a type of convolutional neural network which is capable of learning an abstract representation of a collection of images. It achieves this via competition between a “generator” which fabricates fake images and a “discriminator” which tries to discern if the generator’s images are authentic (more details). After training, the generator can be used to convincingly generate samples reminiscent of the originals.

… a DCGAN is trained on a labeled subset of ~1M handwritten simplified Chinese characters, after which the generator is able to produce fake images of characters not found in the original dataset."]
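
[A minimal sketch of the generator/discriminator competition described above. Architectures are omitted and every name is illustrative, not taken from Radford's code.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def gan_train_step(generator, discriminator, g_opt, d_opt, real_images, z_dim=100):
    z = tf.random.normal([tf.shape(real_images)[0], z_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(z, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fakes, training=True)
        # Discriminator: call authentic images real and fabrications fake.
        d_loss = (bce(tf.ones_like(real_logits), real_logits) +
                  bce(tf.zeros_like(fake_logits), fake_logits))
        # Generator: make its fabrications pass as authentic.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
```

After enough alternating steps, sampling the generator on fresh noise is what produces images like the fake glyphs described.]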
art  deeplearning  kanji  chinese  machinelearning  neuralnetworks 
january 2016 by robertogreco
Deep Belief by Jetpac - teach your phone to recognize any object on the App Store on iTunes
"Teach your iPhone to see! Teach it to recognize any object using the Jetpac Deep Belief framework running on the phone.

See the future - this is the latest in Object Recognition technology, on a phone for the first time.

The app helps you teach the phone to recognize an object by taking a short video of that object, and then teach it what is not the object by taking a short video of everything around except that object. Then you can scan your surroundings with your phone camera, and it will detect when you are pointing at the object which you taught it to recognize.

We trained our Deep Belief Convolutional Neural Network on a million photos, and like a brain, it learned concepts of textures, shapes and patterns, combining those to recognize objects. It includes an easily-trainable top layer so you can recognize the objects that you are interested in.

If you want to build custom object recognition into your own iOS app, you can download our Deep Belief SDK framework. It's an implementation of the Krizhevsky convolutional neural network architecture for object recognition in images, running in under 300ms on an iPhone 5S, and available under an open BSD License."
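
[A sketch of the "easily-trainable top layer" idea, assuming the frozen pretrained network is used only to turn frames into feature vectors. `extract_features` is a hypothetical stand-in for the real network's penultimate-layer activations, and the frame data below is random placeholder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(frames):
    # Stand-in for the frozen convnet: a fixed random projection so the
    # sketch runs end to end. In the SDK this role would be played by the
    # Krizhevsky-style network's penultimate layer.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(frames.shape[1], 256))
    return frames @ W

positive_frames = np.random.rand(200, 4096)   # short video of the object
negative_frames = np.random.rand(200, 4096)   # short video of everything else

X = extract_features(np.vstack([positive_frames, negative_frames]))
y = np.array([1] * 200 + [0] * 200)
top_layer = LogisticRegression(max_iter=1000).fit(X, y)  # the only part trained per object
```

Fitting just this small top layer per object is presumably what makes training on the phone fast.]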

[via: https://medium.com/message/the-fire-phone-at-the-farmers-market-34f51c2ba885 petewarden ]

[See also: http://petewarden.com/2014/04/08/how-to-add-a-brain-to-your-smart-phone/ ]
applications  ios  ios7  iphone  ipad  objects  objectrecognition  identification  objectidentification  mobile  phones  2014  learning  deepbelief  petewarden  ai  artificialintelligence  cameras  computervision  commonplace  deeplearning 
june 2014 by robertogreco
The Fire Phone at the farmers market — The Message — Medium
"With the exception of a few paintings, all of Amazon’s demo “items” were commercial products: things with ISBNs, bar codes, and/or spectral signatures. Things with price tags.

We did not see the Fire Phone recognize a eucalyptus tree.

There is reason to suspect the Fire Phone cannot identify a goldfinch.

And I do not think the Fire Phone can tell me which of these “items” is kale.

This last one is the most troubling, because a system that greets a bag of frozen vegetables with a bar code like an old friend but draws a blank on a basket of fresh greens at the farmers market—that’s not just technical. That’s political.

But here’s the thing: The kale is coming.

There’s an iPhone app called Deep Belief, a tech demo from programmer Pete Warden. It’s free."



"If Amazon’s Fire Phone could tell kale from Swiss chard, if it could recognize trees and birds, I think its polarity would flip entirely, and it would become a powerful ally of humanistic values. As it stands, Firefly adds itself to the forces expanding the commercial sphere, encroaching on public space, insisting that anything interesting must have a price tag. But of course, that’s Amazon: They’re in The Goldfinch detection business, not the goldfinch detection business.

If we ever do get a Firefly for all the things without price tags, we’ll probably get it from Google, a company that’s already working hard on computer vision optimized for public space. It’s lovely to imagine one of Google’s self-driving cars roaming around, looking everywhere at once, diligently noting street signs and stop lights… and noting also the trees standing alongside those streets and the birds perched alongside those lights.

Lovely, but not likely.

Maybe the National Park Service needs to get good at this.

At this point, the really deeply humanistic critics are thinking: “Give me a break. You need an app for this? Buy a bird book. Learn the names of trees.” Okay, fine. But, you know what? I have passed so much flora and fauna in my journeys around this fecund neighborhood of mine and wondered: What is that? If I had a humanistic Firefly to tell me, I’d know their names by now."
amazon  technology  robinsloan  objects  objectrecognition  identification  objectidentification  firefly  mobile  phones  2014  jeffbezos  consumption  learning  deepbelief  petewarden  ai  artificialintelligence  cameras  computervision  commonplace  deeplearning 
june 2014 by robertogreco
At the Core of the Apple Store: Images of Next Generation Learning (full-length and abridged article) | Big Picture
"What are the essential features of the Apple Store’s learning culture?

* The learning experience is highly personalized and focused on the interests and needs of the individual customer.

* Customers can make mistakes with little risk of failure or embarrassment. Thinking and tinkering with the help of a staff member provide opportunities for deep learning.

* Challenges are real and embedded in the customer’s learning and work.

* Assessment is built right into the learning, focusing specifically on what needs to be accomplished.

A disruptive innovation? We think so. The Apple Store has created a new type of learning environment that allows individuals to learn anything, at any time, at any level, from experts, expert practitioners, and peers."
apple  applestore  learning  schooldesign  innovation  via:cervus  education  lcproject  technology  williamgibson  geniusbar  retail  studioclassroom  openstudio  thirdplaces  problemsolving  teaching  unschooling  deschooling  personalization  individualized  challenge  disruption  assessment  deeplearning  21stcenturylearning  learningspaces  thirdspaces 
december 2010 by robertogreco
