Posts Tagged ‘technology’

When No Community Exists a School Bus Can Be a Hub

February 11, 2020

This idea for providing Pre-K programs to remote rural families, or more accurately, child-rearers, touches all the bases. It offers literacy, support services for children, and a wide array of social services for adults… and it is inexpensive. This makes much more sense than trying to get 3-year-olds to use computers to learn how to read.

In Lockport, Fear Wins Out Over Privacy and Facial Recognition is In Place

February 10, 2020

I posted earlier about the Lockport (NY) School District’s decision to spend millions of dollars to implement facial recognition software designed by Aegis in its schools, a decision made two years ago that received pushback from several community members, the State Education Department, and the ACLU. I read this weekend in Davey Alba’s SFGate article that despite these appeals, and after addressing the State’s concerns, the district is moving forward with the plan. The ACLU’s spokesperson did an eloquent job of explaining the negative consequences:

“Subjecting 5-year-olds to this technology will not make anyone safer, and we can’t allow invasive surveillance to become the norm in our public spaces,” said Stefanie Coyle, education counsel for the New York Civil Liberties Union. “Reminding people of their greatest fears is a disappointing tactic, meant to distract from the fact that this product is discriminatory, unethical and not secure.”

And make no mistake: fear WAS the selling point and technology was the clear antidote:

Robert LiPuma, the Lockport City School District’s director of technology, said he believed that if the technology had been in place at Marjory Stoneman Douglas High School in Parkland, Fla., the deadly 2018 attack there may never have happened.

“You had an expelled student that would have been put into the system, because they were not supposed to be on school grounds,” LiPuma said. “They snuck in through an open door. The minute they snuck in, the system would have identified that person.”

…When the system is on, LiPuma said, the software looks at the faces captured by the hundreds of cameras and calculates whether those faces match a “persons of interest” list made by school administrators.

That list includes sex offenders in the area, people prohibited from seeing students by restraining orders, former employees who are barred from visiting the schools and others deemed “credible threats” by law enforcement.

If the software detects a person on the list, the Aegis system sends an alert to one of 14 rotating part- and full-time security personnel hired by Lockport, LiPuma said. The human monitor then looks at a picture of the person in the database to “confirm” or “reject” a match with the person on the camera.

If the operator rejects the match, the alert is dismissed. If the match is confirmed, another alert goes out to a handful of district administrators, who decide what action to take.

The technology will also scan for guns. The chief of the Lockport Police Department, Steven Abbott, said that if a human monitor confirmed a gun that Aegis had detected, an alert would automatically go to both administrators and the Police Department.
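The escalation chain described in these paragraphs (detection, watch-list match, review by a human monitor, then escalation to administrators or police) can be sketched as a simple decision routine. To be clear, this is purely illustrative: the names, data shapes, and logic below are my own assumptions about such a human-in-the-loop pipeline, not anything taken from Aegis's actual software.

```python
from dataclasses import dataclass
from enum import Enum

class AlertLevel(Enum):
    NONE = 0               # no alert, or the match was rejected by the monitor
    ADMIN_ESCALATION = 1   # confirmed watch-list match, sent to administrators
    POLICE_NOTIFIED = 2    # confirmed weapon detection, police alerted as well

@dataclass
class Detection:
    face_id: str           # identifier assigned by the (hypothetical) matcher
    is_weapon: bool = False

def route_alert(detection: Detection, watch_list: set, monitor_confirms: bool) -> AlertLevel:
    """Route a camera detection through the workflow the article describes:
    match, human monitor review, then escalation."""
    if detection.is_weapon:
        # Per the article, a confirmed gun detection alerts administrators
        # AND the police department automatically.
        return AlertLevel.POLICE_NOTIFIED if monitor_confirms else AlertLevel.NONE
    if detection.face_id not in watch_list:
        return AlertLevel.NONE
    # A watch-list match goes first to one of the rotating human monitors,
    # who confirms or rejects it before anyone else is alerted.
    return AlertLevel.ADMIN_ESCALATION if monitor_confirms else AlertLevel.NONE
```

What the sketch makes plain is that every consequential step still hinges on a human judgment call about a machine-proposed match, which is precisely where misidentification risk enters.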

So now, the citizens of Lockport can presumably rest easy… that is, unless the Aegis software mistakes one of their children for someone on the “persons of interest” list made by the school administrators… or they resemble anyone on a secret list put together by the administrators and police… or they are a person of color.

In Lockport, black students are disproportionately disciplined. In the 2015-16 school year, 25% of suspended students in the district were black even though enrollment was only 12% black, according to data from the federal Department of Education.

LiPuma, the director of technology, said he believed that Lockport’s system was accurate. He also said he, as well as some other school officials, would like to add suspended students to the watch list in the future, despite the State Education Department’s recent directive that Lockport make it clear in its policy that it is “never” to use the system “to create or maintain student data.” Most school shootings in the past decade, LiPuma said, were carried out by students.

“The frustration for me as a technology person is we have the potential” to prevent a school shooting, he said. “If something happens, I’m not going to feel any better about that, but it wasn’t my decision. That’s on State Ed.”

Jason Nance, a law professor at the University of Florida who focuses on education law and policy, warned that listing students as “persons of interest” could have unintended consequences.

“If suspended students are put on the watch list, they are going to be scrutinized more heavily,” he said, which could lead to a higher likelihood that they could enter into the criminal justice system.

Jayde McDonald, a political science major at Buffalo State College, grew up as one of the few black students in Lockport public schools. She said she thought it was too risky for the school to install a facial recognition system that could automatically call the police.

“Since the percentages for the false matches are so high, this can lead to very dangerous and completely avoidable situations,” McDonald said.

So an unproven technology with the potential to inaccurately profile potential offenders is being sold to a school district based on the premise, put forth by a district technologist, that the Parkland shooting would have been prevented had this new product been in place. What’s wrong with this picture? Why are we allowing fear to dominate the lives of our school children? How can this be reversed?

Anonymous eSchool News Contributor Offers Chilling School Safety Ideas

January 20, 2020

This article from eSchool News, posted by an anonymous contributor, offers a list of ways the school district he or she oversees is dealing with school safety issues… and the solutions offered are chilling. In the name of school safety, all of the students in the district are completely forfeiting THEIR anonymity and privacy as adults monitor their every move within the school and every word they write. I hope that the school board and parents in the district led by the “contributor” have thought about the kind of citizens they are developing in the name of safety.

Cost-Cutting Conservative Canadian Leaders Reveal True Purpose of E-Learning: Saving Money!

January 16, 2020

The Toronto Star uncovered documents indicating that Ontario’s Conservative Premier Doug Ford’s vision for the expansion of e-learning had nothing to do with improving opportunities for students and everything to do with saving money. As reported in Press Progress the Star wrote:

“A ‘confidential’ government document obtained by the Star shows Premier Doug Ford’s government considered keeping online learning optional until 2024 and planned to slash school board funding while creating courses to sell to other jurisdictions at a profit …

Marked “not for distribution,” the six-page document also envisioned allowing students to get high school diplomas “entirely online” starting in September 2024 …”

The Star report offered more details, indicating an intent to cut funding to school boards by $34.8 million starting September 2020, $55.8 million in 2021, $56.7 million in 2022 and $57.4 million in 2023-24, with that level of savings continuing in perpetuity, while offering “…a full catalogue of online ‘gold standard’ courses,” an oxymoron to be sure.

The memo also called for school boards to gradually increase their on-line offerings and go into the business of marketing their courses to other districts outside of the province in order to generate revenue.

The Ministry of Education did not dispute the existence of the document, but it did contend that the notion of replacing teachers with computers was not part of the overall plan and that privatization was not part of its long-term agenda. I doubt that many teachers or school boards trust those words after hearing for months that e-learning was all about students.


Is Undeserved Faith in Technology Leading Us Down Blind Alleys?

January 6, 2020

Two recent articles on the expanding use of technology in schools and medicine illustrate the potential flaws in following the technology industry’s mantra of “fail fast and fix it later”.

“This School Banned iPads, Going Back to Regular Textbooks—But What Does the Science Say?”, a blog post by Jenn Ryan, describes the rationale behind an Australian public school’s decision to abandon its use of iPads and offers a good series of point-counterpoint arguments on both sides of the issue. My takeaway from reading the scientific findings is that it’s very unclear that iPads helped the broad population of students improve their academic skills or their technology skills. In short, the school spent tens of thousands of dollars investing in an unproven technology and had no improvements to show for it. Despite these findings, Ms. Ryan reported that some parents were upset at the decision to back away from the iPad mandate. Why?

However, parents had mixed reactions, some saying they believed digital devices were essential for modern education….

Ms. Ryan, after looking at the evidence for the use of iPads, came to a different conclusion:

Parents who object saying that modern technology usage is a necessary skill for most job markets aren’t wrong; however, placing an emphasis on learning with iPads hardly seems to be the solution—a simple technology course or at-home use of these devices could suffice.

The second article on this topic, by Kaiser Health Network’s Liz Szabo has the following title:

A Reality Check On Artificial Intelligence: Are Health Care Claims Overblown?

As happens when the tech industry gets involved, hype surrounds the claims that artificial intelligence will help patients and even replace some doctors.

Ms. Szabo describes the current thinking in terms of AI and health care and finds that there is widespread optimism and enthusiasm. She opens with this:

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could “outthink cancer.” Others say computer systems that read X-rays will make radiologists obsolete.

“There’s nothing that I’ve seen in my 30-plus years studying medicine that could be as impactful and transformative” as AI, said Dr. Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the Food and Drug Administration ― which has approved more than 40 AI products in the past five years ― says “the potential of digital health is nothing short of revolutionary.”

But then she immediately begins to throw cold water on the idea:

Yet many health industry experts fear AI-based products won’t be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra “fail fast and fix it later,” is putting patients at risk ― and that regulators aren’t doing enough to keep consumers safe.

Ms. Szabo notes that AI innovations in health care have become a magnet for venture capitalists and offers a lengthy description of various AI innovations that have fallen short of their promise and of how the FDA has fallen short of the mark in overseeing these new products, many of which are lightly regulated. One of the major problems these new products face is lack of sound data to use to develop their algorithms.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, (Stanford researcher) Cho said. But those developers often aren’t aware that they’re building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients’ medication lists, lab tests and allergies.

Using bad data to formulate AI decisions has often resulted in an increase in “false positives,” which, in turn, can lead to needless tests and needless anguish for patients. Who can stop Big Data entrepreneurs who are seeking to make a profit from health care? Ms. Szabo has the answer:

In view of the risks involved, doctors need to step in to protect their patients’ interests, said Dr. Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

“While it is the job of entrepreneurs to think big and take risks,” Saini said, “it is the job of doctors to protect their patients.”

But Ms. Szabo overlooks another potential source for intervention: a robustly funded and more muscular regulatory agency. If entrepreneurs are encouraged to “think big and take risks” and doctors are supposed to “protect their patients,” regulatory agencies are supposed to enforce existing regulations… and after reading Ms. Szabo’s article it seems that their mission is compromised.


Today’s Collegians are Surveilled 24/7, in Keeping with In Loco Parentis Standards Set By Students’ Parents

December 27, 2019

I was initially appalled when I read the headline of Drew Harwell’s Washington Post article that appeared earlier this week. Its title, “Colleges are turning students’ phones into surveillance machines, tracking the locations of hundreds of thousands”, led me to wonder why college students were accepting this surveillance… until I reflected on the upbringing of today’s students.

The students entering college today are the first generation to go through their lives being surveilled from cradle to campus. Their parents almost certainly had baby monitors in their rooms and, as part of the post-Columbine generation, they likely attended schools with video monitors in the hallways. When they entered adolescence, their parents purchased cell phones and provided them with phone service, enabling the parents to track their every movement, check on every text and phone call, and monitor their screen time. In short, in loco parentis, the concept that colleges should keep track of students in the same fashion as parents, is far different in the age of telecommunications than it was when I entered college in the 1960s and when my children entered in the 1980s and 1990s. I was not surprised to read the reaction of one parent who was pleased with the impact of this kind of monitoring:

Some parents, however, wish their children faced even closer supervision. Wes Grandstaff, who said his son, Austin, transformed from a struggling student to college graduate… said the added surveillance was worth it…

He now says he wishes schools would share the data with parents, too. “I just cut you a $30,000 check,” he said, “and I can’t find out if my kid’s going to class or not?”

The article also offers a chilling description of how acceptable this kind of monitoring is to students today and how administrators justify its use based on the results:

This style of surveillance has become just another fact of life for many Americans. A flood of cameras, sensors and microphones, wired to an online backbone, now can measure people’s activity and whereabouts with striking precision, reducing the mess of everyday living into trend lines that companies promise to help optimize.

Americans say in surveys they accept the technology’s encroachment because it often feels like something else: a trade-off of future worries for the immediacy of convenience, comfort and ease. If a tracking system can make students be better, one college adviser said, isn’t that a good thing?

As a parent who did not have a baby monitor, I can appreciate the “convenience, comfort and ease” that such a device offers. It would have saved many trips up and down stairs to see if my daughter was really taking a nap and many nights of shuttling between our bedroom and hers when she was fighting a childhood illness. And as a high school disciplinarian in the late 1970s I would have appreciated the ability to remotely monitor distant hallways and to track students who were wandering off campus instead of attending class. But as a parent and school administrator, I have some misgivings about the overreach of technology, especially when it is being used to classify students and predict misbehavior as described in the article:

A classifier algorithm divides the student body into peer groups — “full-time freshmen,” say, or “commuter students” — and the system then compares each student to “normal” behavior, as defined by their peers. It also generates a “risk score” for students based around factors such as how much time they spent in community centers or at the gym.

The students who deviate from those day-to-day campus rhythms are flagged for anomalies, and the company then alerts school officials in case they want to pursue real-world intervention.
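The peer-group comparison quoted above amounts to a simple anomaly-detection scheme: establish each group’s baseline behavior, then flag students who deviate from it. Here is a minimal sketch of that idea, with invented data fields and a z-score threshold standing in for whatever proprietary “risk score” the vendor actually computes:

```python
from statistics import mean, stdev

def flag_anomalies(students, activity_log, z_threshold=2.0):
    """Flag students whose logged activity deviates sharply from their
    peer group's 'normal', roughly as the article describes.
    Data shapes here are invented for illustration."""
    # Group student ids by their assigned peer group
    # ("full-time freshmen", "commuter students", etc.)
    groups = {}
    for s in students:
        groups.setdefault(s["group"], []).append(s["id"])

    flagged = []
    for members in groups.values():
        scores = [activity_log.get(m, 0) for m in members]
        if len(scores) < 2:
            continue  # stdev needs at least two data points
        mu, sigma = mean(scores), stdev(scores)
        if sigma == 0:
            continue  # everyone behaves identically; nothing deviates
        for m in members:
            z = (activity_log.get(m, 0) - mu) / sigma
            if abs(z) > z_threshold:
                flagged.append(m)  # deviates from the peer-group rhythm
    return flagged
```

Note what even this toy version shows: the “anomaly” is defined entirely by conformity to the peer group’s average, so any student whose routine simply differs, for any reason, gets flagged for possible intervention.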

And what might that intervention look like? In one case cited in the article, the university sent an adviser to knock on the student’s door. On one level, that kind of intercession seems invasive. Yet if the gathered data suggested the student was suicidal or, worse, contemplating and capable of carrying out some kind of shooting, the institution would be faulted if it failed to act. This kind of conundrum contributes to the mixed responses of students, responses that are ultimately fatalistic given the ceaseless “advancement” of technology:

Students disagree on whether the campus-tracking systems are a breach of privacy, and some argue they have nothing to hide. But one feeling is almost universally shared, according to interviews with more than a dozen students and faculty members: that the technology is becoming ubiquitous, and that the people being monitored — their peers, and themselves — can’t really do anything about it.

But some administrators and students are rightfully concerned. Here’s the reaction of a disaffected administrator:

“It embodies a very cynical view of education — that it’s something we need to enforce on students, almost against their will,” said Erin Rose Glass, a digital scholarship librarian at the University of California San Diego. “We’re reinforcing this sense of powerlessness … when we could be asking harder questions, like: Why are we creating institutions where students don’t want to show up?”

And here’s a disenchanted student’s reaction:

“We’re adults. Do we really need to be tracked?” said Robby Pfeifer, a sophomore at Virginia Commonwealth University in Richmond, which recently began logging the attendance of students connected to the campus’ WiFi network. “Why is this necessary? How does this benefit us? … And is it just going to keep progressing until we’re micromanaged every second of the day?”

Mr. Harwell does an admirable job of providing a balanced perspective on this difficult issue, and his closing paragraphs reveal the paradox at the heart of 24/7 surveillance:

Joanna Grama, an information-security consultant and higher-education specialist who has advised the Department of Homeland Security on data privacy, said she doubted most students knew they were signing up for long-term monitoring when they clicked to connect to the campus WiFi.

She said she worries about school-performance data being used as part of a “cradle-to-grave profile” trailing students as they graduate and pursue their careers. She also questions how all this digital nudging can affect students’ daily lives.

“At what point in time do we start crippling a whole generation of adults, human beings, who have been so tracked and told what to do all the time that they don’t know how to fend for themselves?” she said. “Is that cruel? Or is that kind?”

Here’s What’s Happening in SOME American Teenage Bedrooms— And It Isn’t Good News to this Geezer

December 2, 2019

I just finished reading a recent NYTimes article by Taylor Lorenz titled “Here’s What’s Happening in the American Teenage Bedroom”… and I am in despair if this is the way American teenagers are defining “success”. The article describes a 15-year-old suburban Philadelphia teenager named Rowan Winch who is making $10,000 a month through various on-line entrepreneurial undertakings. But, as the article notes, Rowan Winch is not interested in money.

Rowan, like most teenagers on the internet, wasn’t after fame or money, though he made a decent amount — at one point $10,000 a month and more, he said. What Rowan wanted was clout.

On the internet, clout is a social currency that can be used to obtain just about anything. Rack up enough while you’re young, and doors everywhere begin to open. College recruiters notice you. Job opportunities and internships come your way. Your social status among peers rises, money flows in. Even fame becomes a possibility, if that’s what you’re after.

The description of Rowan Winch ceaselessly entering posts on his phone brought to mind scenes from the movie The Social Network, based on the “life story” of Mark Zuckerberg, the role model of all tech geeks. And based on what I’ve read about Mr. Zuckerberg, he, too, was after clout more than he was after wealth or fame.

I find the quest for clout even more distressing and disturbing than the quest for money or fame… for “clout” seems to be an anodyne tech term for POWER… and those who seek clout, like those who seek power, are not interested in the ends of power, only in its acquisition. The notion of someone spending hours on end staring at screens, searching for memes that attract clicks from others, trying to accumulate “clout,” strikes me as a soulless undertaking.

One of the story lines in the article was how Instagram suspended one of Rowan Winch’s most popular sites, leaving him temporarily bereft. He showed resilience, though… with a plan to replace his blocked Instagram site with a YouTube site. Here’s his explanation of why, which concludes the story:

“With YouTube I want to get big enough so the people that inspired me are my friends. It was like that with my meme pages,” he said.

“The more followers you have, the more voice you have,” he said. “The more clout you have, the more power you have.”

When a teenager believes “followers” are “friends,” it is evident we need to increase social-emotional learning in schools.

