Credit: Xiaolin Wu, Xi Zhang
Two scientists from China’s Shanghai Jiao Tong University have revealed that the only thing their AI system needs to distinguish criminals from law-abiding citizens is pictures of their faces. But don't get too excited: whilst this suggests we could one day arrest criminals for their future crimes, as in Minority Report, it is highly unlikely.
In their paper ‘Automated Inference on Criminality using Face Images’, published on the arXiv pre-print server, Xiaolin Wu and Xi Zhang from China’s Shanghai Jiao Tong University used an artificial intelligence program to detect whether a person is a criminal based on facial features alone. Although the experiments were successful, the researchers' data sets were tightly controlled.
The dataset they used contained standard ID photographs of Chinese males between the ages of 18 and 55, free from facial hair, scars and other markings. The researchers claimed that the photographs they used were not police mugshots and that, of the 730 criminals, 235 had committed violent crimes "including murder, rape, assault, kidnap, and robbery."
According to Motherboard, they also completely removed any human bias by using finely controlled data sets. "In fact, we got our first batch of results a year ago. We went through very rigorous checking of our data sets, and also ran many tests searching for counterexamples but failed to find any," said Wu.
The researchers tested four machine learning algorithms in total. Each algorithm was fed a set of 1,856 facial images, half of which showed convicted criminals. Each algorithm used a different method to analyse facial features and classify each image as belonging to either a criminal or a law-abiding citizen.
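The paper does not publish its training code, but the setup described above — a binary classifier trained on face-derived feature vectors, with half the images labelled criminal — can be sketched as follows. This is purely illustrative: the feature extraction is simulated with random vectors, and the choice of logistic regression is my own stand-in, not one of the authors' four algorithms.

```python
# Minimal sketch (NOT the authors' code) of a binary "criminality"
# classifier experiment. Face features are simulated with random
# numbers purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_images, n_features = 1856, 128               # 1,856 images, as in the paper
X = rng.normal(size=(n_images, n_features))    # stand-in for face features
y = np.repeat([0, 1], n_images // 2)           # half criminal (1), half not (0)

# Hold out 20% of the images for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # near chance (~0.5) on random features
```

On genuinely random features the accuracy hovers around chance, which is exactly the point of the authors' claim: their reported numbers only mean something if the real face features carry signal.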
The results were startling. They found that all four classifiers were successful at inferring criminality just by looking at images of faces, and that the criminals had differing facial features compared with those not convicted of crimes. Moreover, "the variation among criminal faces is significantly greater than that of the non-criminal faces," Xiaolin and Xi write.
"All four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic," the researchers write. "Also, we find some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle." The best classifier, a convolutional neural network, achieved 89.51 percent accuracy in the tests.
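Two of the structural features the researchers name are simple geometric measurements over facial landmarks. The sketch below shows one plausible way to compute them; the landmark coordinates are hypothetical, and "nose-mouth angle" is taken here to mean the angle at the nose tip subtended by the two mouth corners, which is my reading rather than the paper's definition.

```python
# Hedged illustration of two geometric face features mentioned in the
# paper. All landmark coordinates are invented (x, y) pixel positions.
import math

left_eye_inner  = (120.0, 100.0)
right_eye_inner = (160.0, 100.0)
nose_tip        = (140.0, 130.0)
mouth_left      = (125.0, 160.0)
mouth_right     = (155.0, 160.0)

def distance(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Eye inner corner distance: straight-line distance between the
# inner corners of the two eyes.
eye_inner_distance = distance(left_eye_inner, right_eye_inner)

# Nose-mouth angle (one plausible reading): angle at the nose tip
# subtended by the two mouth corners, via the law of cosines.
a = distance(nose_tip, mouth_left)
b = distance(nose_tip, mouth_right)
c = distance(mouth_left, mouth_right)
nose_mouth_angle = math.degrees(
    math.acos((a * a + b * b - c * c) / (2 * a * b)))
```

A classifier would be trained on a vector of many such measurements per face, rather than on any one of them in isolation.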
"By extensive experiments and vigorous cross validations, we have demonstrated that via supervised machine learning, data-driven face classifiers are able to make reliable inference on criminality," the researchers concluded.
So, as you might expect, this has kicked up quite a bit of fuss and controversy, and one of the researchers said: "We have been accused on Internet of being irresponsible socially". One critic thought the whole thing was a joke until they realised there was a genuine paper behind it. Others have questioned the validity of the paper on the basis that one of the researchers has a Gmail account!
Whatever your stance on this, in my opinion it is just research, and it has yet to be validated across people of different genders, races, ages and facial expressions. I suspect the difficulty in this particular application of machine learning is accuracy: it needs to be nearly 100 percent accurate, otherwise you could end up classifying innocent people as criminals (false positives) or classifying criminals as innocent (false negatives). Having said that, I think this is yet another example of deep neural networks surprising us all with their uncanny ability to make predictions.
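The false positive/false negative concern is easy to make concrete. The sketch below computes both rates from a hypothetical confusion matrix (all numbers invented), and shows why a headline accuracy figure alone hides how many innocent people get flagged.

```python
# Why accuracy alone is not enough: false positive and false negative
# rates from a HYPOTHETICAL confusion matrix for a "criminality"
# classifier. All counts are invented for illustration.
tp, fp = 85, 11   # predicted criminal: correctly (tp) / wrongly (fp)
fn, tn = 15, 89   # predicted innocent: wrongly (fn) / correctly (tn)

accuracy = (tp + tn) / (tp + fp + fn + tn)  # 0.87 overall

# Fraction of innocent people wrongly flagged as criminals.
false_positive_rate = fp / (fp + tn)        # 0.11

# Fraction of criminals wrongly classified as innocent.
false_negative_rate = fn / (fn + tp)        # 0.15
```

Even with 87 percent accuracy, 11 out of every 100 innocent people in this made-up example would be flagged as criminals, which is exactly why such a system could never be acceptable at anything short of near-perfect accuracy.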