Someone is fidgeting in a long line at an airport security gate. Is that person simply nervous about the wait?
Or is this a passenger who has something sinister to hide?
Even highly trained TSA (Transportation Security Administration) airport security officers still have a hard time telling whether someone is lying or telling the truth – despite the billions of dollars and years of study that have been devoted to the subject.
Now, University of Rochester researchers are using data science and online crowdsourcing to create a screening system that can more accurately detect deception based on facial and verbal cues.
They also hope to minimize instances of racial and ethnic profiling that TSA critics contend often occur when passengers are pulled aside under the agency’s Screening of Passengers by Observation Techniques (SPOT) program.
“Basically, our system is like Skype on steroids,” says Tay Sen, a PhD student in the lab of Ehsan Hoque, an assistant professor of computer science. Sen is lead author of two new papers accepted for major computing conferences hosted by IEEE (Institute of Electrical and Electronics Engineers) and ACM (Association for Computing Machinery). The papers describe the framework the lab has used to create the largest video deception dataset so far – and why some smiles are more deceitful than others.
“A lot of times people tend to look a certain way or show some kind of facial expression when they’re remembering things,” Sen said. “And when they are given a computational question, they have another kind of facial expression.”
An advantage of the crowdsourcing approach is that it allows researchers to tap into a far larger pool of research participants, and to gather data far more quickly, than would be possible if participants had to be brought into a lab, Hoque says. So far, the researchers have gathered 1.3 million frames of facial expressions from 151 pairs of individuals (dyads) playing the game.
Data science is enabling the researchers to quickly analyze all that data in novel ways. For example, they used facial feature analysis software to identify which of 43 facial muscles were being used in a given frame, and to assign a numerical weight to each.
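As a rough illustration of that step, the sketch below assembles a per-frame feature matrix of facial action-unit intensities. The column naming follows OpenFace’s convention (e.g., “AU06_r” for cheek-raiser intensity); the article does not say which analysis tool or file format the lab actually used, so the schema here is an assumption.

```python
# Minimal sketch: build a per-frame matrix of facial action-unit (AU)
# intensities from a facial-analysis tool's output. Column names follow an
# OpenFace-style convention and are illustrative, not the lab's actual schema.
import pandas as pd

def load_au_features(csv_path: str) -> pd.DataFrame:
    frames = pd.read_csv(csv_path)
    # Keep only the intensity columns (one numerical weight per muscle group
    # per frame), dropping timestamps, confidence scores, and other metadata.
    au_cols = [c for c in frames.columns if c.startswith("AU") and c.endswith("_r")]
    return frames[au_cols]

# Hypothetical usage on one participant's video:
# features = load_au_features("dyad_042_witness.csv")
# print(features.shape)   # (n_frames, n_action_units)
```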
The researchers then fed the results into the University’s supercomputer, using a machine learning technique called clustering, which looks for patterns without being assigned any predetermined labels or categories.
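The article does not name the specific clustering algorithm, so the following is only a plausible sketch of that unsupervised step using k-means, with k = 5 chosen to match the five smile-related clusters the team reports.

```python
# Minimal sketch of the unsupervised step: group frames by their action-unit
# intensity pattern without any truth/lie labels. k-means with k = 5 is an
# assumption for illustration, not necessarily the lab's exact method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_frames(features: np.ndarray, n_clusters: int = 5) -> np.ndarray:
    scaled = StandardScaler().fit_transform(features)
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return model.fit_predict(scaled)   # one cluster id per frame

# labels = cluster_frames(features.to_numpy())
# np.bincount(labels) then shows how many frames fall into each "face" cluster.
```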
“It told us there were basically five kinds of smile-related ‘faces’ that people made when responding to questions,” Sen said. The one most frequently associated with lying was a high intensity version of the so-called Duchenne smile involving both cheek/eye and mouth muscles. This is consistent with the “Duping Delight” theory that “when you’re fooling someone, you tend to take delight in it,” Sen explained.
More puzzling was the discovery that honest witnesses would often contract their eyes, but not smile at all with their mouths. “When we went back and replayed the videos, we found that this often happened when people were trying to remember [something],” Sen said. “This showed they were concentrating and trying to recall honestly.”
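The two patterns described above can be stated in terms of standard FACS action units: AU6 (the cheek raiser, the involuntary eye/cheek muscle) and AU12 (the lip corner puller, the mouth smile). The sketch below labels frames accordingly; the intensity threshold is an illustrative assumption, not a value from the papers.

```python
# Minimal sketch: distinguish a Duchenne-like smile (cheek/eye AND mouth
# engaged) from eye contraction without a mouth smile. The 1.0 intensity
# threshold is assumed for illustration only.
import pandas as pd

def classify_smile(frame: pd.Series, threshold: float = 1.0) -> str:
    cheek = frame.get("AU06_r", 0.0) >= threshold   # cheek raiser
    mouth = frame.get("AU12_r", 0.0) >= threshold   # lip corner puller
    if cheek and mouth:
        return "duchenne"   # both eye/cheek and mouth muscles active
    if cheek:
        return "eye_only"   # eye contraction without a mouth smile
    return "other"

# features["smile_type"] = features.apply(classify_smile, axis=1)
```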
So will these findings tip off liars to simply change their facial expressions?
Not likely. The tell-tale strong Duchenne smile associated with lying involves “a cheek muscle you cannot control,” Hoque says. “It is involuntary.”
“In the end, we still want humans to make the final decision,” Hoque says. “But as they are interrogating, it is important to provide them with some objective metrics that they could use to further inform their decisions.”
Md Kamrul Hasan, Zachary Teicher, Minh Tran, Matthew Levin, Yiming Yang, Kurt Haut, and Raiyan Baten – all students in the Hoque lab – also contributed to the research.
Students featured in the video (in order of appearance): Diana Maloku ’21, Kurtis Haut ’18, Shagun Bose ’20, Manny Rodriguez ’18, Ethan Wei ’19, Melissa Kagaju ’20.