elevenM’s Tessa Loftus on parenting the internet generation and the opportunity for digital media platforms to improve the experiences of everyone who is growing up on the internet.
An incident…
On a recent Friday evening, I was confronted by the familiar parental experience of comforting a child who is crying and throwing up. But while the experience may have been familiar, the situation was anything but.
Just as I was finishing my workday on the Friday in question, the parent of one of my son’s friends texted me, saying “Your son is chatting with someone else from school on Discord and the language is getting a bit intense and I wondered if you wanted to intervene.”
Thinking that this person might be just a bit sensitive to boisterous boys getting shouty (my son and his friends seem to have only one volume setting, and it’s ‘full’), I went to check on my son, to find that he was playing Minecraft with a different friend and was not on Discord at all.
My initial confusion quickly turned to concern when she sent me a screenshot of someone saying to a new user in their group “This is [my son’s name] and you need to get off this group c***”.
Am I a parent or a cyber security incident manager?
At this point I got serious and quickly learnt something important — cyber security investigations may be complex and difficult, but they are nothing compared to attempting to get precise information from a tween boy, even one who wants to be helpful. Did he have Discord, had he used it, what was his username? These comparatively simple requests for information, and the assurance that I knew this person was not him, were met with tears and increasing panic. He couldn’t remember his username (how can you forget your username for a chat platform?!), didn’t have Discord on his computer anymore, didn’t know why or what happened to it, couldn’t remember when he last used it…
This is the point at which he went into full panic-attack mode and started throwing up, and I was forced to abandon cyber-security duties for parenting duties…
We tend to think of cyber security in the workplace context, where logs are kept and experts are available. Or we think about personal online security, such as password habits and not sharing personal information with strangers. But with no logs, no in-house IT department, and the world’s worst witness, where does one even start? We checked emails (it turns out that my son has meticulous email management habits and deletes everything he’s actioned, making it impossible to find usernames, password reset requests or in fact any other information), the browser cache, and browser-saved passwords, but beyond concluding that my son’s primary email address was not linked to the platform that he mysteriously no longer had on his computer, there was little information available for us to find.
In my role as in-house cyber security operations centre to my kids, I found that my children have learnt their cyber security and privacy lessons well — they use handles that don’t reflect their true names and they only communicate in closed groups with people they know in real life. The advice that I give out in the workplace daily — use strong passwords, be conscious of what platforms you share what information on, protect your real identity by managing your digital identity — has been smoothly absorbed by my children. As, I suspect, it has been by many of their peers.
But, sadly, good information management habits are not enough to protect kids on digital platforms, and these kinds of experiences raise a bigger discussion. Many people (and certainly many parents) don’t know that the reason many tech platforms require users to be over 13 is that, in the US, additional protections, controls and consent processes are required on any platform being used by people under 13. Social media platforms, understandably if self-servingly, chose to operate in the simplest possible regulatory environment. However, nearly everyone knows, and certainly tech companies know, that children under the age of 13 access the vast majority of platforms.
Consequently, most digital platforms are not designed to be used by kids (I would argue that a 13-year-old kid still needs more protection than one who is 17, but that is a whole other discussion), and this affects a whole range of elements, from reporting to functional use to content controls. But digital platforms are an important part of kids’ lives and inarguably a part of their future, and as much as I’d like to control exactly what my kids are doing online, they also need to learn all of this: how to protect themselves, how to seek help, how to report issues, and what to do when confronted by an anonymous malicious impersonator. As with an actual playground, the digital spaces our children play and learn in need to allow some room for mistakes, and some inherent understanding of the more reckless approach to life taken by small humans who will voluntarily climb trees and leap from heights.
How do we make this system better?
There’s an important lesson for all organisations here, and that’s to think honestly about who your users are, how they’ll be using your product, and what they need from it. A recent report from Thorn shows that 40-45% of kids aged 9-12 use the regular social media platforms (Facebook, Instagram, Snapchat etc), and that they are twice as likely to block someone inappropriate as they are to inform a parent or adult. Kids will be using digital products and platforms, and they need to be able to understand how things like reporting and blocking are accessed on whatever platform they are on. If digital platforms were honest about who their users are (and certainly all of my son’s almost-but-not-quite-old-enough friends use Discord), then it might also be easier for parents to don their cyber security hat if something does go wrong, because that would also be factored into the design.
In releasing their ‘age-appropriate design code’, the UK Information Commissioner said “For all the benefits the digital economy can offer children, we are not currently creating a safe space for them to learn, explore and play. This statutory code of practice looks to change that, not by seeking to protect children from the digital world, but by protecting them within it.”
Tech platforms need to take their fingers out of their ears and start moving the bar higher than ‘this is the minimum level for regulatory compliance’. As technology becomes increasingly interwoven into the fabric of our environment, and we lose choice and control in how and where we interact and what we disclose, the need for stronger ethical standards grows. Digital platforms must move away from ‘plausible deniability’ as a basis for business, otherwise this will be only the beginning of Friday nights spent comforting a crying child while at a loss as to how to protect them.
Photo by Jessica Ticozzelli from Pexels