Is Undeserved Faith in Technology Leading Us Down Blind Alleys?
Two recent articles on the expanding use of technology in schools and medicine illustrate the potential flaws in following the technology industry’s mantra of “fail fast and fix it later”.
“This School Banned iPads, Going Back to Regular Textbooks—But What Does the Science Say?”, a blog post by Jenn Ryan, describes an Australian public school’s rationale for abandoning its use of iPads and offers a good series of point-counterpoint arguments on both sides of the issue. My takeaway from the scientific findings is that it’s very unclear whether iPads helped the broad population of students improve either their academic skills or their technology skills. In short, the school spent tens of thousands of dollars on an unproven technology and had no improvements to show for it. Despite these findings, Ms. Ryan reported that some parents were upset by the decision to back away from the iPad mandate. Why?
However, parents had mixed reactions, some saying they believed digital devices were essential for modern education….
Ms. Ryan, after looking at the evidence for the use of iPads, came to a different conclusion:
Parents who object saying that modern technology usage is a necessary skill for most job markets aren’t wrong; however, placing an emphasis on learning with iPads hardly seems to be the solution—a simple technology course or at-home use of these devices could suffice.
The second article on this topic, by Kaiser Health News’ Liz Szabo, has the following title:
A Reality Check On Artificial Intelligence: Are Health Care Claims Overblown?
As happens when the tech industry gets involved, hype surrounds the claims that artificial intelligence will help patients and even replace some doctors.
Ms. Szabo surveys the current thinking on AI in health care and finds widespread optimism and enthusiasm. She opens with this:
Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.
IBM boasted that its AI could “outthink cancer.” Others say computer systems that read X-rays will make radiologists obsolete.
“There’s nothing that I’ve seen in my 30-plus years studying medicine that could be as impactful and transformative” as AI, said Dr. Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.
Even the Food and Drug Administration ― which has approved more than 40 AI products in the past five years ― says “the potential of digital health is nothing short of revolutionary.”
But then she immediately begins to throw cold water on the idea:
Yet many health industry experts fear AI-based products won’t be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra “fail fast and fix it later,” is putting patients at risk ― and that regulators aren’t doing enough to keep consumers safe.
Ms. Szabo notes that AI innovation in health care has become a magnet for venture capitalists. She offers a lengthy account of AI products that have fallen short of their promise, and of how the FDA has failed to adequately oversee these new products, many of which are lightly regulated. One of the major problems these products face is a lack of sound data from which to develop their algorithms.
Many AI developers cull electronic health records because they hold huge amounts of detailed data, (Stanford researcher) Cho said. But those developers often aren’t aware that they’re building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.
A KHN investigation published in March found sometimes life-threatening errors in patients’ medication lists, lab tests and allergies.
Using bad data to drive AI decisions has often produced an increase in “false positives”, which in turn can lead to needless tests and needless anguish for patients. Who can stop Big Data entrepreneurs seeking to profit from health care? Ms. Szabo has the answer:
In view of the risks involved, doctors need to step in to protect their patients’ interests, said Dr. Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.
“While it is the job of entrepreneurs to think big and take risks,” Saini said, “it is the job of doctors to protect their patients.”
But Ms. Szabo overlooks another potential source of intervention: a robustly funded, more muscular regulatory agency. If entrepreneurs are encouraged to “think big and take risks” and doctors are supposed to “protect their patients”, then regulatory agencies are supposed to enforce existing regulations. After reading Ms. Szabo’s article, it seems that their mission is compromised.