First, a note to my loyal readers, who may wonder where I’ve been since I released my last set of reviews on April 28. I’ve been in Greeley, Colorado and in Glencoe, Illinois. To be sure, the change of residence from Greeley to Glencoe (from G to shining G) took plenty of time and attention, both before and after the moving date in mid-August. Be assured that I’m living happily in my new location and feeling more settled all the time.

It’s not the move that has thrown sand in the gears of my literary output so much as the coronavirus pandemic, which, as of this writing, just gets worse and worse. At this moment Illinois has nearly the highest incidence of Covid-19 of all the states, though I’d be just as careful to limit my potential exposure to the virus if I lived in a state with less disease. Information and opinion about the pandemic have bombarded us, at ever-increasing levels, since the coronavirus first reared its ugly head early this year. I came to feel that my little trickle of words about information technology in healthcare would be drowned out by the roar of science, opinion, and advice, both good and bad, emanating from all sides.

At least now the din from the election and its aftermath, if not gone, is every day more ignorable. And several effective vaccines are being distributed as we speak. I feel an opening, and I’m going to jump right into it. So here it is, the next of what I expect will be a regular series of blogs.

Please listen to my podcast interview, recently recorded by the American Association for Physician Leadership, which can be found here. I’ll let you know when there are new articles, blogs, and podcasts (as well as TV interviews and masked rallies). Thank you for your allegiance. I look forward to hearing from you.

- Marc Ringel, MD
The articles reviewed in this installment all have something to do with how we humans use our eyes, and with the subtle and not-so-subtle ways that information technology can influence what we perceive.
The authors of this study found that simply placing a picture of the patient’s face in the banner at the head of their electronic medical record was enough to reduce wrong-patient prescribing errors by about one-third for patients presenting to the emergency room of Brigham and Women’s, a large tertiary care hospital in Boston.

View Article
It’s hard to make a convincing argument yet (and maybe it never will be) that information machines are conscious. But it is wishful thinking to assume that, just because they are dumb mechanical servants, they don’t have human biases. On their own, the devices have no prejudices. But they are rife with those of their masters, who designed and programmed them and chose what data to feed into their artificial brains. Biases pervade all sorts of supposedly neutral machine-learning systems that are tasked with helping to decide (or, even worse, with deciding outright) who gets, for example, bail, a mortgage, or a spot in college. This is one more study showing just how tilted the electronic playing field can be, in this case for the women and men of the 115th U.S. Congress. The study employed two datasets: one of uniform headshots (referred to among journalists as “mugshots”) of the senators and representatives, and a second set of images that these politicians tweeted of themselves. The images were classified by Google Cloud Vision, a widely used visual recognition system. The system was 10% less likely to correctly identify a woman congressperson than a man. The machine also assigned labels to the people in the images. The top three labels for men were “official,” “businessperson,” and “spokesperson.” For women they were “smile,” “chin,” and “outerwear.” And so on. (There’s much more.)

View Article
This article reports on a study of five third-year U.S. radiology residents from multiple institutions. Their accuracy at reading nearly 2,000 single-view (AP) chest radiographs was compared to that of an AI system that had been trained on more than 300,000 x-ray studies. Gold-standard interpretations were generated by triple consensus of expert radiologists. Overall, the residents and the AI system performed similarly. The machine was better at not overlooking routine findings, such as heart enlargement or the presence of tubes, while the residents outperformed their silicon colleague in interpreting subtler findings, such as nodules and misplaced tubes. The day when radiology interpretation can be reliably performed without human oversight may still be a ways off. But automated image recognition will play an ever-increasing role in healthcare, including in the radiology suite.

View Article
If you ask most any clinician about their electronic medical record (EMR), they’ll give you an earful. It’s been well documented that, thanks in large part to EMRs, physicians now spend more time looking at computer screens than they do looking at patients. EMRs are notoriously cumbersome. It sometimes feels as if ease of use was the last thing on the minds of the programmers and engineers who designed them. Everybody knows that ease of use is by far the best predictor of the commercial success of any technology, unless the purchaser or standard-enforcer is not the same person as the user, which is the case almost everywhere healthcare services are delivered. Providers of EMRs are waking up (painstakingly slowly) to the need to make the products they produce or mandate friendlier to the clinicians on healthcare’s front lines. This article from the Journal of Medical Internet Research describes how an automated eye-tracking device, which records where an EMR user’s gaze lands on the screen, can supply input data to an artificial intelligence program tasked with learning, and then anticipating, the user’s information-seeking habits within a patient record.

View Article
Every product and service imaginable, including healthcare providers, is rated online. As any sensible person knows, these rating systems, while sometimes providing valuable information to consumers, are fraught with unreliable information, the result of poor sampling, gaming, or just plain ill will. This study included 866 people, recruited via Amazon, who reviewed three fictitious nurse practitioners (NPs) with purported patient reviews running from mostly positive to mostly negative. The researchers found that when a 68-second video portraying the NP interacting with a patient in a kind way was added to the patient reviews, research subjects were significantly more likely to say they would consult that NP for care, even in the face of negative patient reviews.

View Article
A long time ago an older friend, a well-regarded novelist, told me she was glad to have grown up before Disney distributed its versions of fairy tales, because she much preferred the settings and characters she’d conjured in her own head, based on the stories she heard from her parents when she was a little girl. The authors of this study report that subjects interviewed after listening to an audiobook version of a story expressed more engagement with it than after they’d watched the video version instead. Their assessments were corroborated by biometric sensors (skin temperature, sweating as measured by conductivity, and pulse rate) that correlate with level of emotionality. Every healthcare professional knows, or ought to know, that when the goal is engagement, it works much better to speak directly to a patient than to show them a video about their condition.

View Article