
The Hazards of Unfettered Data Sharing and Poorly Crafted Red Flag Legislation

March 4, 2019

Over fifteen years ago I wrote an article for Education titled “A Homeland Security Bill for Public Education,” an article that advocated the sharing of pertinent information among social service case managers, medical professionals, school districts, and police. I reasoned that the various staff members’ confidentiality pledges precluded them from sharing important information with the other agencies in a timely fashion, and I cited several cases from my work experience where such sharing would have benefitted the clients they were striving to help.

Of late, similar recommendations have come forth in the form of “red flag” legislation that would allow police or family members to take weapons away from individuals with documented mental health problems, individuals who might pose a danger to themselves or others. These “red flag” laws seem eminently reasonable. Indeed, some NRA officials and politicians who reflexively oppose any effort to limit anyone’s access to any weapons whatsoever are seemingly open to considering “gun violence restraining orders.”

After reading a recent Motherboard article about the use of data collected by social service agencies, schools, and police in Canada and in several US cities, though, I am having second thoughts about my recommendations and about the efficacy of “red flag” legislation. The Motherboard article underscores the importance of carefully crafting any legislation and/or regulations that deal with data access, for once an individual receives a “red flag” it is difficult to reverse that designation. The article opens with these paragraphs:

Police, social services, and health workers in Canada are using shared databases to track the behaviour of vulnerable people—including minors and people experiencing homelessness—with little oversight and often without consent.

Documents obtained by Motherboard from Ontario’s Ministry of Community Safety and Correctional Services (MCSCS) through an access to information request show that at least two provinces—Ontario and Saskatchewan—maintain a “Risk-driven Tracking Database” that is used to amass highly sensitive information about people’s lives. Information in the database includes whether a person uses drugs, has been the victim of an assault, or lives in a “negative neighborhood.”

The Risk-driven Tracking Database (RTD) is part of a collaborative approach to policing called the Hub model that partners cops, school staff, social workers, health care workers, and the provincial government.

As you can see, the description of the “Hub model” is eerily similar to what I recommended in my 2003 article. But when I wrote that article, I did not foresee the advent of facial recognition technology… or the widespread use of data warehousing by schools, the medical profession, social service agencies, and law enforcement… or the avalanche of data that would be collected by social media sites. With all of these technology tools in play, it would appear that some kind of failsafe algorithm might emerge, a means of identifying an at-risk individual with laser-like accuracy. Such pinpointing would presumably target those individuals likely to engage in mass shootings or other crimes. But it raises the problem of how and when to engage law enforcement officials and how and when to compel an individual to seek treatment for mental illness.

As Valerie Steeves, a University of Ottawa criminologist, noted in a VICE article on the use of the Hub model: “As soon as you’re identified [as at-risk], it changes how people interact with you. At that point, you become the problem: ‘we need to watch you, all the time, so we can fix you.’” As one who worked for six years as a high school disciplinarian, I can recall how difficult it was for a youngster who misbehaved as a freshman to shed his or her image as a “troublemaker”… and, as we’ve seen in recent years, Google never forgets. Ill-advised posts on social media can limit one’s opportunities as much as poor report cards or low SAT scores.

If we hope to use the massive amounts of data we are collecting on individuals to screen them for “risky behavior” or “mental fitness,” we need to enact legislation that sets clear guidelines for the collection and use of that data. We now have surveillance cameras gathering data in schools, shopping areas, at intersections, and, in some cases, on our phones and on our home computers. Who owns that data? Who decides how it can be used? Social media records our “likes” and “loves,” the things that make us laugh, the things that make us cry, and the things that make us angry. Who can buy that data? Who has access to it? Virtually all of our purchases and media consumption result in the collection of data, making it possible for some agency to determine the books we read, the movies we watch, the foods we purchase, the places where we plan to take our vacations, and the major purchases, like houses and cars, we are examining on-line. Who has access to this data? How is it being used?

Fifteen years ago, I thought that the notion of data sharing was straightforward. The school district’s guidance counselor assigned to a student, the social worker assigned to that student, the probation officer working with that student, the mental health counselor working with that student, and the physician(s) working with that student should all feel free to share information with each other. Each clearly had the student’s well-being at heart, and each would benefit from sharing whatever they knew without completing reams of paperwork or getting clearance through their chains-of-command. Now, I’m not so sure, particularly when the data platforms like those used in the “Hub model” are privately owned and operated and there are no clear parameters on how and when the data are purged.

These questions are complicated and thorny. Presumably we would want to know that someone who is planning a mass shooting has acquired a stockpile of weapons. We would also want to be able to confiscate weapons from a potential terrorist and to know who is communicating with on-line ISIS recruiters. But is everyone who is stockpiling weapons a threat to us? Is everyone who is researching Arabic and Muslim websites a potential terrorist? Is a website purporting to be an ISIS recruitment site a bona fide site?

It would be helpful to have these issues brought to the forefront now, before the data being collected are made available to whoever is willing to pay for them for whatever purposes they wish. I just googled myself: 36,000+ results came forth in 0.42 seconds. The 8th item on the list, from MyLife.com, indicates that I once lived in Portland, OR. That is demonstrably false… but there it is for all to see and draw their own conclusions. I’m leaving it there because there is no way I can keep track of all the misinformation that is accumulating. But if I were identified as someone “we need to watch…, all the time, so we can fix you,” I might not sleep too soundly as the misinformation accumulates.


