
robertogreco : johnhoward (2)

Physiognomy’s New Clothes – Blaise Aguera y Arcas – Medium
"In 1844, a laborer from a small town in southern Italy was put on trial for stealing “five ricottas, a hard cheese, two loaves of bread […] and two kid goats”. The laborer, Giuseppe Villella, was reportedly convicted of being a brigante (bandit), at a time when brigandage — banditry and state insurrection — was seen as endemic. Villella died in prison in Pavia, northern Italy, in 1864.

Villella’s death led to the birth of modern criminology. Nearby lived a scientist and surgeon named Cesare Lombroso, who believed that brigantes were a primitive type of people, prone to crime. Examining Villella’s remains, Lombroso found “evidence” confirming his belief: a depression on the occiput of the skull reminiscent of the skulls of “savages and apes”.

Using precise measurements, Lombroso recorded further physical traits he found indicative of derangement, including an “asymmetric face”. Criminals, Lombroso wrote, were “born criminals”. He held that criminality is inherited, and carries with it inherited physical characteristics that can be measured with instruments like calipers and craniographs [1]. This belief conveniently justified his a priori assumption that southern Italians were racially inferior to northern Italians.

The practice of using people’s outer appearance to infer inner character is called physiognomy. While today it is understood to be pseudoscience, the folk belief that there are inferior “types” of people, identifiable by their facial features and body measurements, has at various times been codified into country-wide law, providing a basis to acquire land, block immigration, justify slavery, and permit genocide. When put into practice, the pseudoscience of physiognomy becomes the pseudoscience of scientific racism.

Rapid developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era, in which machine-learned models embed biases present in the human behavior used for model development. Whether intentional or not, this “laundering” of human prejudice through computer algorithms can make those biases appear to be justified objectively.

A recent case in point is Xiaolin Wu and Xi Zhang’s paper, “Automated Inference on Criminality Using Face Images”, submitted to arXiv (a popular online repository for physics and machine learning researchers) in November 2016. Wu and Zhang’s claim is that machine learning techniques can predict the likelihood that a person is a convicted criminal with nearly 90% accuracy using nothing but a driver’s license-style face photo. Although the paper was not peer-reviewed, its provocative findings generated a range of press coverage. [2]

Many of us in the research community found Wu and Zhang’s analysis deeply problematic, both ethically and scientifically. In one sense, it’s nothing new: claims linking appearance to character are as old as physiognomy itself. However, the use of modern machine learning (which is both powerful and, to many, mysterious) can lend these old claims new credibility.

In an era of pervasive cameras and big data, machine-learned physiognomy can also be applied at unprecedented scale. Given society’s increasing reliance on machine learning for the automation of routine cognitive tasks, it is urgent that developers, critics, and users of artificial intelligence understand both the limits of the technology and the history of physiognomy, a set of practices and beliefs now being dressed in modern clothes. Hence, we are writing both in depth and for a wide audience: not only for researchers, engineers, journalists, and policymakers, but for anyone concerned about making sure AI technologies are a force for good.

We will begin by reviewing how the underlying machine learning technology works, then turn to a discussion of how machine learning can perpetuate human biases."



"Research shows that the photographer’s preconceptions and the context in which the photo is taken are as important as the faces themselves; different images of the same person can lead to widely different impressions. It is relatively easy to find a pair of images of two individuals matched with respect to age, race, and gender, such that one of them looks more trustworthy or more attractive, while in a different pair of images of the same people the other looks more trustworthy or more attractive."



"On a scientific level, machine learning can give us an unprecedented window into nature and human behavior, allowing us to introspect and systematically analyze patterns that used to be in the domain of intuition or folk wisdom. Seen through this lens, Wu and Zhang’s result is consistent with and extends a body of research that reveals some uncomfortable truths about how we tend to judge people.

On a practical level, machine learning technologies will increasingly become a part of all of our lives, and like many powerful tools they can and often will be used for good — including to make judgments based on data faster and fairer.

Machine learning can also be misused, often unintentionally. Such misuse tends to arise from an overly narrow focus on the technical problem, hence:

• Lack of insight into sources of bias in the training data;
• Lack of a careful review of existing research in the area, especially outside the field of machine learning;
• Not considering the various causal relationships that can produce a measured correlation;
• Not thinking through how the machine learning system might actually be used, and what societal effects that might have in practice.

Wu and Zhang’s paper illustrates all of the above traps. This is especially unfortunate given that the correlation they measure — assuming that it remains significant under more rigorous treatment — may actually be an important addition to the already significant body of research revealing pervasive bias in criminal judgment. Deep learning based on superficial features is decidedly not a tool that should be deployed to “accelerate” criminal justice; attempts to do so, like Faception’s, will instead perpetuate injustice."
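
To make the first trap above concrete, here is a minimal Python sketch. It is not from the article, and it is not Wu and Zhang’s data or method; every number and variable name is hypothetical. In the simulation, appearance is generated independently of actual offending, yet because enforcement is biased against one appearance, a classifier trained on conviction labels scores well above chance:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

# A superficial facial feature, statistically independent of behavior.
feature = rng.random(n) < 0.5
# Actual offending, also independent of the feature.
offended = rng.random(n) < 0.1
# Biased enforcement: offenders with the feature are convicted far more
# often than offenders without it; a small false-conviction rate for all.
p_convicted = np.where(offended, np.where(feature, 0.9, 0.3), 0.01)
convicted = rng.random(n) < p_convicted

# Build a class-balanced dataset, as face-classification studies often do.
pos = np.flatnonzero(convicted)
neg = rng.choice(np.flatnonzero(~convicted), size=len(pos), replace=False)
idx = np.concatenate([pos, neg])
X, y = feature[idx].reshape(-1, 1).astype(float), convicted[idx]

model = LogisticRegression().fit(X, y)
print("accuracy vs. conviction labels:", model.score(X, y))  # ~0.62, above chance
# Yet the feature says nothing about who actually offended:
print("P(offended | feature) :", offended[feature].mean())   # ~0.10
print("P(offended | ~feature):", offended[~feature].mean())  # ~0.10

The classifier’s apparent accuracy here measures only the bias baked into its labels, not anything about criminality: a small-scale instance of the “laundering” of human prejudice through computer algorithms described above.
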
blaiseaguerayarcas  physiognomy  2017  facerecognition  ai  artificialintelligence  machinelearning  racism  bias  xiaolinwu  xizhang  race  profiling  racialprofiling  giuseppevillella  cesarelombroso  pseudoscience  photography  chrononet  deeplearning  alexkrizhevsky  ilyasutskever  geoffreyhinton  gillevi  talhassner  alexnet  mugshots  objectivity  giambattistadellaporta  francisgalton  samuelnorton  josiahnott  georgegiddon  charlesdarwin  johnhoward  thomasclarkson  williamshakespeare  isaacnewton  ernsthaeckel  scientificracism  jamesweidmann  faception  criminality  lawenforcement  faces  dorothealange  mikeburton  trust  trustworthiness  stephenjaygould  philippafawcett  roberthughes  testosterone  gender  criminalclass  aggression  risk  riskassessment  judgement  brianholtz  shermanalexie  feedbackloops  identity  disability  ableism  disabilities
may 2017 by robertogreco
Guardian Unlimited | Comment is free | A decade of John Howard has left a country of timidity, fear and shame
"There was a strange sense that Australia, which had seemed so often to sleepwalk, mesmerised, through the past 11 years, had suddenly woken up. But where it might go and what it might do and be, no one any longer knew."
australia  politics  psychology  richardflanagan  johnhoward  georgewbush 
november 2007 by robertogreco
