Digital safety 2019: 2 parts of its emerging ‘middle layer’

Anne Collier
Jan 23, 2019 · 8 min read


Part 1 of this series covered 2018’s highlights. This part aims to shed light on some of the interesting ideas and developments people have surfaced for a better Internet in 2019 and beyond. (Part 3 looks at the middle layer build-out’s huge next step: Facebook’s coming Oversight Board.)

So what is this “middle layer”? It’s a way of thinking about Internet safety that people have actually been discussing and building, in various ways, for most of this decade. We just haven’t thought of it as a whole. It’s like the proverbial elephant we know is there, but we’re all blindfolded, each busy dealing with only our part of it, usually in the areas of prevention education, intervention (including law enforcement’s), content moderation or regulation. I suggest that we think together about the whole animal.

Why a “middle layer”? For a long time we’ve been working the problem on just two levels: in the cloud and on the ground. In the “cloud” it’s content moderation, machine-learning algorithms for detection before harm happens, transparency reports, and other self-regulatory tools. The “ground” is a whole array of traditional solutions, such as regulation, 911 and law enforcement, lawsuits, school discipline, hotlines providing care for specific risks and groups (e.g., domestic violence, sexual assault, depression, suicidal crisis) and of course parenting.

All of that is needed, but it’s not enough. Not anymore. Our new, fast-changing, global yet very personal media environment calls for new approaches to regulation and user care. We now need to be working consciously on three levels, and it’s on the middle level that some really interesting thinking has been going on, especially in the areas of regulation and moderation.

The regulation part

Regulation as we know it is not enough. “What we don’t hear nearly enough,” wrote University of Toronto law professor Gillian Hadfield in Quartz last summer, “is the call to invent the future of regulation. And that’s a problem.” Interestingly, even the platforms are on board with that, Wired reports. Facebook CEO Mark Zuckerberg announced an independent court of appeals for content decisions, according to Quartz.

Whatever shape that takes, it’s the independent part that defines the middle layer — not part of what platforms do and not part of what government does — though it certainly works with both.

“Our existing regulatory tools…are not up to the task. They are not fast enough, smart enough, or accountable enough to achieve what we want them to achieve,” Dr. Hadfield wrote. I would add the obvious: they also work only country by country, while the content on the platforms is global; even a single thread on Twitter, for example, can cross multiple national boundaries.

What we need now is regulation that 1) folds in tech expertise that keeps up with the pace of tech change, 2) allows laws to be reviewed and adapted to changing needs, maybe even given an expiration date, and 3) draws on multiple perspectives: not just policymakers’ or those of the companies being regulated, but those of the age and demographic groups regulators aim to protect, and of researchers!

‘Super-regulation’

For that first criterion, Dr. Hadfield calls for “super-regulation” — “‘super’ because it elevates governments out of the business of legislating the ground-level details and into the business of making sure that a new competitive market of [licensed] private regulators operates in the public interest.”

These private regulators fit the description of “middle layer” because they’d have to keep both governments and “regulatory clients” happy “in order to keep their license to regulate.” Keeping their clients happy means developing “easier, less costly and more flexible ways of implementing regulatory controls.”

This layer of competitive independent regulators has actually been developing for some time, Hadfield says. She gives examples such as Microsoft “leading efforts to build global standards for privacy and cybersecurity and Google submitting to AI safety principles.” Other, slightly different, parts of the new regulatory layer have been in development too, as described by researcher Tijana Milosevic in her new book, Protecting Children Online?

Some ideas offered by researcher and author Tarleton Gillespie, such as a “public ombudsman” or a “social media council,” could fall into the regulation category, the moderation category, or both. “Each platform,” he writes in Wired, “could be required to have a public ombudsman who both responds to public concerns and translates those concerns to policy managers internally; or a single ‘social media council’ could field public complaints and demand accountability from the platforms,” he adds, citing a concept fielded by the international NGO Article19. “By focusing on procedures, such oversight could avoid the appearance of imposing a political viewpoint.” That would be imperative, because to be effective the middle layer has to be credible to all whom it serves. Independence from the platforms, and in some countries from government, is key.

The moderation part

Think of content moderation as user care. It both protects users and defines “the boundaries of appropriate public speech,” writes Dr. Gillespie in his 2018 book Custodians of the Internet. The thing is, most of that protection and definition is internal to the platforms — to the cloud. It’s being done by private companies, not by governments and traditional care providers such as crisis hotlines or even 911 (on the ground).

There are several problems with that. The platforms have neither the context nor the expertise to provide real care. All they can do is delete content, which can help a lot in some cases, but — as Gillespie spells out in detail in his book — a lot of content doesn’t get deleted. Not necessarily intentionally on the platforms’ part and not only because of sheer volume, but because deletion decisions are sometimes really complicated. One person’s free speech is another person’s harm. And images considered commonplace in one country can be extremely incendiary and dangerous in another. Potential impacts on the ground often can’t be imagined by platform moderators who’ve never been to the place where the incendiary content was posted.

Another problem is that what we’re talking about here is not mainly about technology, even though so many of us (especially those not born with the Internet) think it is. It’s actually about our humanity. What happens on the Internet is rooted in people’s everyday lives and relationships, so getting content deleted is often (though not always) more like taking a painkiller than getting at the source of the pain. That’s why Internet help needs some way to connect with on-the-ground help, such as parents, school administrators, risk prevention experts and mental healthcare specialists; they’re the ones qualified to get at the real issue and help alleviate the pain. I suggest that connection has to happen through an intermediary, a middle layer, that can provide context and screen out what isn’t accurate or actionable.

Filling the context gap

Because what’s happening on the ground, in offline life, is the real context of what we see online. In his hearing on Capitol Hill last spring, Facebook CEO Mark Zuckerberg suggested algorithms were getting better and better and would eventually solve the context problem. Yes, maybe for some kinds of content, but not cyberbullying, one of the most prevalent online risks for kids. Nothing is more contextual or more constantly changing, within a single peer group at a single school, let alone among hundreds of millions of youth in every country on the planet. Even Facebook says in its latest Transparency Report that harassment is content of a “personal nature” that’s hard to detect and proactively remove without context. I agree, and suspect school administrators do too. It’s hard to understand what hurts whom, and what is or isn’t intended to hurt, without talking with the kids involved in even a single peer group, much less a whole school. [Children are just one of many vulnerable classes in the Internet user care discussion, but probably the oldest case study to consider and the one with the largest body of research.]

So a middle layer of moderation has been developing in the form of “Internet helplines” throughout Europe, in Brazil, and in Australia and New Zealand. Some have folded Internet content deletion into longstanding mental healthcare helplines serving children. Others became part of longstanding charities such as Save the Children Denmark and Child Focus in Belgium. Some, like SaferNet in Brazil, were nonprofit startups created just for the Internet, and still others, such as Australia’s eSafety Commissioner’s Office and New Zealand’s Netsafe, are to some degree part of national governments. But the government-based ones are not regulators, and so far they seem to meet that crucial trust criterion of staying apart from national politics.

Help in 2 directions

These helplines provide help in two directions: up to the platforms and down to users. To the platforms, the greatest service is context (because how can algorithms, or the people who write them, tell social cruelty from an inside joke that only looks cruel to an outsider?). Context makes abuse reports actionable so harmful content can come down. The great majority of abuse reports the platforms get are what they call “false positives”: not actionable. There are all kinds of reasons for that, from users not knowing how to report, to users abusing the system, to users reporting content that doesn’t (without context) seem to violate Terms of Service. And then there’s the content that hurts users but doesn’t violate the Terms. There is so much the platforms can’t possibly know, which is why they need help, and why they need to acknowledge that they do.

To users, Internet helplines offer help that neither the platforms nor services on the ground can provide: understanding the issue in the context of its occurrence, where to go for the best on-the-ground help in their country, how to gather the evidence of online harm the app or platform needs in order to take proper action, and when content should be sent to the platform for suggested deletion. I say “suggested” because obviously only the platforms can decide what violates their own Terms of Service; still, that independent, trusted third party can cut through a lot of guesswork.

Trust is essential

I’m not saying these services are perfect; they’re only a very important start in building out an effective middle layer. There’s much work to do, including developing uniform best practices for helplines worldwide, and I hope the build-out stops being ad hoc and piecemeal. But the work has begun, and it’s independent of the platforms and in many cases even of government. Trust is essential: to be effective, the operations in this new layer need the trust of users, platforms and governments.

As it’s built out, the middle layer will provide more and more support for people, platforms and policymakers, enabling each to serve the others better. Closing the circle of care, you might say. Now it just needs to be built out more proactively and strategically, not in reaction to tragedies and laws (sometimes badly written ones), drawing on the expertise of all stakeholders, including our children.

Anne Collier is founder and executive director of The Net Safety Collaborative, home of the U.S.’s social media helpline for schools. She has been writing about youth and digital media at NetFamilyNews.org since before there were blogs and advising tech companies since 2009 (full-length bio here).
