
“Hey Alexa! Give Me Some Two-Digit Multiplication Problems to Work On!”

February 24, 2021

I have been fascinated by the concept of Artificial Intelligence (AI) for decades. I can recall reading a book in the 1980s about efforts at Carnegie Mellon to develop a robot that could walk across a part of the campus that was heavily travelled by students and dotted with obstacles like water fountains and trees. The book underscored how much of learned human behavior, like avoiding an oncoming pedestrian or a landscape feature, requires an incredible succession of mathematical calculations.

But AI has come a LONG way since the 1980s… and the GPS technology and miniaturization of computers that have emerged since then make AI ubiquitous, as this NYTimes article by Craig Smith illustrates. The pervasiveness of AI has several legal, ethical, and practical consequences. Whether the benefits of AI outweigh the potential for harm is imponderable… but a debate on this issue NOW would be helpful, especially given the damage AI could do if it were used for ill.

The Times article does a good job of explaining what AI is and how it has come to permeate our lives. “Smart” appliances, the identification of books and movies (and articles) we might like by a media outlet, pop-up ads on social media, and the videos YouTube recommends for us are all the result of AI. My iPhone now opens by recognizing my face and my computer screen comes to life when I touch it just so. All of these features feed our need for instant gratification and convenience, but they also provide a trove of personal data that Apple can sell to third-party vendors. They also lead to the possibility of a world where everything I write, every reaction I offer on social media, every comment I make online, every email I send could be accessed.

The darkest dystopian world would be one where a totalitarian government is in place and has access to and complete control over the web. Under such a non-liberal government, those whose views do not conform with the party in power could be denied access to the web or (ahem) “persuaded” to cease putting “seditious” information online. A country like China already controls news sources in this fashion, and totalitarian leaders around the globe are identifying insurgents and resisters by monitoring online communication. In such a non-liberal country, schools would use AI to identify the children who are “gifted and talented” and segregate them at an early age from their peers, who would receive schooling designed to limit their ability to think independently. Orwell’s imagined world where three totalitarian governments rule the globe, define history, and decide what information the masses need is plausible if AI is used to allow a small group to control everyone.

But here’s a Utopian spin on that “dark” scenario. What if nations around the world agreed that global warming was an urgent problem that defied marketplace controls and used AI to monitor everyone’s use of carbon? What if nations around the world decided that the AI social media algorithms that promote discord were wrong and banned them entirely? What if schools around the world decided that their children should be given the chance to learn at a rate that makes sense to them and have the opportunity to deeply pursue those topics that interest them the most?

We are at an important crossroads in terms of managing information. The debates on “what to do with social media” and the displacement of workers by technology are debates about AI, as are the debates on the extent to which we want to embrace “personalization” in schools. Those debates should begin with the end in mind. Here are some questions that underlie the debates on AI:

  • Do we believe that ALL information should be made available to ALL citizens or is some information so toxic it should be banned altogether? If we DO want to ban some information, who decides what is banned? The government? A “council” appointed by the media? The marketplace?
  • Do we believe that clearly false information should be banned from circulation online or do we trust the end-users to sort out fact from fiction? If we DO want to ban false information, who decides what is banned?
  • Do we want divergent, free-thinking, independent, lifelong learners in the future, or do we want citizens who unquestioningly believe what “authorities” tell them? If we want the former, are our schools preparing students for that future… or are they preparing them for the dystopian future?