If I were asked where to locate the essence of physical identity, I would answer without hesitation, “in the immune system,” the part of our physiology tasked with distinguishing each of us from everybody and everything else in the world. The face comes in second. By messing with images of our face, technology messes with who we are. My message here is consistent with my overall view of technology. We need to be deliberate and careful in how we use it, especially when we capture, process, analyze and present images of ourselves.
Debate about universal facial expressions goes big
Software does pretty well at recognizing facial expressions. That’s the good news. Researchers subjected over 6 million YouTube videos to artificial-intelligence-driven programs. The AI was trained using images of faces whose emotions had been labeled by human raters. The program then compared, by sophisticated electronic means (thereby earning the dubious designation “intelligent”), the labeled videos with a pile of unlabeled ones, which the system then tagged with emotions. Humans who rated a sample of the resulting labels found the machine’s designations to be pretty darn accurate at identifying emotion from facial expression alone. An automated way to sort emotions expressed in faces could be of great use in research and maybe in clinical practice. Now the bad news. The system works well only within cultures. It isn’t very accurate at identifying emotions across different groups because--it seems pretty obvious to me--there is huge variation in how people of different cultures contextualize and manifest emotion. I fear that herein lies yet another potentially harmful blenderizing and stigmatizing manifestation of the unrecognized racist, sexist and other “-ist” assumptions that are baked into the well-meaning humans who construct and analyze data, as well as into the databases themselves. “AI” does not mean bias free.
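For the technically curious, the workflow just described--train on human-labeled examples, then let the model tag unlabeled ones--can be sketched in a few lines. This is a toy illustration only, not the researchers’ actual pipeline: the two “features” and the data below are invented, and a real system would extract features from video frames with a deep neural network rather than a simple nearest-centroid rule.

```python
# Toy sketch: learn emotion labels from human-rated examples, then
# tag unlabeled ones. Data and feature names are entirely made up.

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled):
    """labeled: list of (feature_vector, emotion) pairs from human raters.
    Returns one average 'prototype' vector per emotion."""
    by_label = {}
    for vec, label in labeled:
        by_label.setdefault(label, []).append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def predict(model, vec):
    """Tag an unlabeled vector with its nearest prototype's emotion."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], vec))

# Hypothetical 2-D features (say, mouth curvature and brow height)
labeled = [([0.9, 0.1], "happy"), ([0.8, 0.2], "happy"),
           ([0.1, 0.9], "sad"),   ([0.2, 0.8], "sad")]
model = train(labeled)
print(predict(model, [0.85, 0.15]))  # → happy
```

The within-culture problem shows up exactly here: if every training pair came from one cultural group, the learned prototypes encode that group’s display conventions, and faces from other groups land nearer the wrong prototype.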
A Markerless 2D Video, Facial Feature Recognition–Based, Artificial Intelligence Model to Assist With Screening for Parkinson Disease: Development and Usability Study
Journal of Medical Internet Research 2021
Everybody knows that people with Parkinson’s disease have tremors. There are a couple dozen other, less well-known signs and symptoms of the malady. One of them is loss of mobility in the face. In some people Parkinson’s affects the muscles that drive facial expression, resulting in a sober, unreactive look. These scientists found that a program to evaluate video images, developed with the help of AI, was pretty good at predicting who had Parkinson’s and who didn’t, based only on 10- to 15-second videos of their responses to requests to make facial expressions, including smiling and holding a poker face. It remains to be seen whether there is actually a need for a machine to tell clinicians what they themselves can discern at a glance.
Development and evaluation of a machine learning-based point-of-care screening tool for genetic syndromes in children: a multinational retrospective study
(I covered this article in a previous blog. But it fits so well into the topic of this one that I’m including it again. I expect you’ll find the terminology in this version more acceptable than in the last.)
The Lancet Digital Health 2021
When I was a pediatrics intern our academic service regularly encountered infants and children whose facial features suggested they’d been born with some sort of congenital problem. Too often that was all we could say about them. A pediatric neurologist friend who retired seven years ago estimated that during his long career he and his colleagues were able to pin an exact diagnosis on only 30% of children born with a handicap. Thanks to an array of new scientific tools, including DNA analysis, geneticists can identify a significantly larger percentage of variations from standard (however you define “standard”--a subject for another essay) than we did when I was in training and when my friend was practicing.
The face is especially likely to bear signs that something in the genome is different. This article reports on a machine learning program that was about 90% accurate in pairing images of faces with congenital syndromes. The best thing about the software is that it can be used remotely, in locations where professionals highly trained in pediatric genetics may be hard to come by. Early diagnosis and intervention can make an enormous difference in these kids’ quality of life over a lifetime. Not to be underplayed is the fact that the program did just as well with Hispanic and African children as it did with White ones, and only a little less well (80% accurate) with Asian children. Blindness to race is pretty unusual in image processing programs. (See the preceding and the next review.)
Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency
Proceedings of the Association for Computing Machinery 2021
Let me say yet again: facial recognition is imperfect and too often unfair. Twitter ran a public contest to uncover biases in its image-cropping software, which was designed to choose salient features of customer images to display as previews. The winning entries demonstrated that the software favored White people over Black people and women over men, to name a few of the prejudices found embedded in it. In other words, a photo of a Black person (even President Obama) and a White person together was significantly more likely to display the White person in the preview. Similarly, the image of a woman was likely to prevail over that of a man. To its credit, Twitter has dropped its cropping algorithms altogether.
A Pandemic of Dysmorphia: “Zooming” into the Perception of Our Appearance
Facial Plastic Surgery and Aesthetic Medicine 2022
How the COVID pandemic has changed our world! (This is my entry in the understatement of the year contest.) A couple of years ago who even knew what Zoom was? Today, for many of us, it’s an essential tool for work and for socializing. As Marshall McLuhan argued in his influential 1967 book, The Medium is the Massage,* the technology humans use to communicate--speech, drums, smoke signals, the written word, sign language, telegraph, telephone, television, and now email, social media and Zoom, to name a few--shapes not just our messages but ourselves. Facebook (very aptly named), with its incessant presentation of users’ ideal selves (sort of like a never-ending Christmas letter), breeds tons of insecurity in readers who compare their bland lives to the spiffed-up ones presented on their “friends’” home pages.
This article makes the case that Zoom can directly affect how we see ourselves, and not in a good way. In 2020 a Harvard dermatologist noticed a big spike in patients requesting procedures to tune up their faces, motivated by how they appeared to themselves on video calls during the pandemic. The physician, Dr. Kourosh, referred to her patients’ video screens as “funhouse mirrors” that returned distorted reflections. Every television regular knows that, even with the best lighting and video equipment, it takes a makeup artist and a hair stylist just to look “normal,” let alone glamorous. Kourosh named the phenomenon of disliking your looks based on what you see on a video monitor “Zoom dysmorphia.” “Dysmorphia” is the medical term for a distorted view of one’s body. Before Zoom dysmorphia there was Snapchat dysmorphia: people unhappy with their appearance on Snapchat. As the communication landscape continues to change at the speed of light, I expect we’ll see many more electronically induced dysmorphias than even Marshall McLuhan could have imagined.
* Yes, the word is “Massage,” not “Message”--a florid example of the sort of wordplay and mindplay that characterized McLuhan’s writing.