The Internet Will Never Be Kid-Friendly

Let’s stop trying to make it so and instead make it adults-only.

Despite decades of legislative efforts, corporate dollars, and parental anxiety, the internet is still no place for kids. Child-proofing the internet is failing technically, legally, and culturally. For many, this is a sign that we need to work harder, but I propose a different lesson: let’s stop trying to make the internet kid-friendly. Instead, let’s pivot to developing an adults-only internet.

The technical realities of child-proofing the internet have long plagued advocates, and the situation isn’t getting better. Parents know they can’t manage the risks their children face online and are begging for help, despite an endless barrage of parental controls, content filters, and monitoring tools. Every ‘safety’ measure becomes another IT burden on parents, and those who could do the most to shore up these measures do very little. Apple lets app developers determine the appropriate age rating for their own apps, so it’s perhaps unsurprising that, in just a day, researchers found 200 apps with inappropriate content that were rated for children. And neither the Apple nor the Android app store bothers with labels for data collection and sharing, which is rampant among kids’ apps. Parents take to forums for help with YouTube Kids after they’ve tried to set filters, clear the cache, and flag shows, “but it keeps going back to the really dark content.”

Age verification, on the other hand, is already commonplace in many settings adults use every day, like online banking, sports betting, and alcohol delivery. Eight states now accept digital driver’s licenses in Apple Wallet, with dozens more considering it. Louisiana’s digital ID system lets adults verify their age in under 45 seconds without revealing any personal information. Other options are already emerging, from analyzing email networks to advanced cryptographic techniques that can verify age while preserving anonymity. Modern operating systems could easily provide age verification with a simple one-time check and then send an ‘adult’ signal; no additional information need be shared. While companies spend billions annually on content moderation and child safety features, a simple adults-only approach could slash these costs if the tech industry just invested up front in age-verification infrastructure.
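To make the “one-time check, then an ‘adult’ signal” idea concrete, here is a minimal sketch of how such an attestation could be structured. This is illustrative only: the key name, token fields, and use of a shared-secret HMAC are my assumptions for a self-contained example; a real system would use hardware-backed asymmetric signatures or zero-knowledge proofs rather than a shared key.

```python
import hashlib
import hmac
import json
import time

# Hypothetical stand-in for a hardware-backed, OS-held attestation key.
OS_ATTESTATION_KEY = b"device-enclave-secret"


def issue_adult_token() -> dict:
    """After a one-time identity check, the OS mints a token carrying only
    a boolean claim and an expiry -- no name, no birthdate, no ID number."""
    claim = {"adult": True, "exp": int(time.time()) + 86400}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(OS_ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}


def verify_adult_token(token: dict) -> bool:
    """A website checks the signature and expiry; it learns nothing else
    about the person behind the device."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(OS_ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, token["sig"])
        and token["claim"].get("adult") is True
        and token["claim"]["exp"] > time.time()
    )
```

The point of the design is what the token omits: the verifying site receives a yes/no answer with an expiry, so the one-time identity check never travels beyond the device.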

Legally, things are even messier. California’s Age-Appropriate Design Code, the latest attempt to childproof the internet, was blocked by federal courts, and for some good reasons. The law created a complex regulatory maze requiring companies to create detailed risk assessments, collect more personal information to estimate age while paradoxically promising privacy, and make subjective judgments about what content might harm children at different levels of development. Courts found this approach unconstitutional; it violated free speech rights and created so much uncertainty that many websites could not figure out how to comply.

Existing laws designed to protect kids’ data—namely the Children’s Online Privacy Protection Act (COPPA) and the Family Educational Rights and Privacy Act (FERPA)—represent the regulatory failure we can continue to expect on this path. The two laws combine to create a loophole that allows schools to “consent” on behalf of parents to expose their children to edtech companies that profit as student data empires. One such company, PowerSchool, boasts that it holds data on 75% of North American K-12 students and recently exposed more than 6,500 school districts, 62.5 million students, and over 9.5 million educators to hackers in a data breach. When parents ask where their children’s data is or how it’s being used, neither the schools nor the edtech platforms have answers. These complex rules create regulatory vulnerabilities instead of protecting kids. In contrast, a straightforward adults-only internet would require a simple content-neutral rule, similar to how we handle adult spaces, materials, and substances in the physical world, from liquor stores to betting parlors.

Culturally, we should not want the internet to be Disney World, sanitized and infantilized for universal consumption. The internet, at its best, is both wild and vital—a space for authentic human expression and serious civic engagement. The internet should be a place for debating controversial ideas, discussing politics, organizing and advocating, exploring facets of identity and communities, making sense of disturbing news, and experimentation. Artists should be able to push boundaries, while activists should be able to coordinate protests. Businesses should be able to operate knowing their customers aren’t children draining their parents’ bank accounts. Moms should be able to share content about breastfeeding strategies without being blocked. Adults should be able to flirt on dating apps without stumbling upon tweens and teens. When we try to make the internet a place for kids, it becomes milquetoast: bland, barely nutritious, and easily digestible. That’s not the internet we need. Of course, getting kids off the internet won’t do much for the seemingly inevitable degradation of content or the emptiness of ad-saturated feeds, but child-proofing strips the internet of whatever edge it has left. Most critically, our attempts to make the internet safe for children actively undermine its essential role as a forum for democratic discourse and social progress.

We’re pouring resources into increasingly complex solutions that fail to make the internet safe for children. And for what? Kids aren’t thriving online.

The internet exposes children to disturbing content at every turn. There’s of course porn: one study found that most kids have seen online pornography by age 12, that 15% were ten or younger, that 64% stumbled across it accidentally while hanging out online, and that over half had been exposed to violent pornography, including depictions of choking, clear pain, or rape. Even mainstream social media and video platforms regularly host sexual content seen by kids. The live element of social media means children can encounter graphic real-time tragedies beyond the reach of filters—from suicides streamed on popular apps to fights broadcast live.

Then there’s the damaging everyday social media content: pro-anorexia communities with millions of views, viral challenges that have sent thousands of kids to emergency rooms, and endless streams of cruel pranks and bullying behavior that normalize harm. Instagram’s own internal research revealed that 32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse. On top of the heinous content come the weighty issues beyond children’s emotional capacity to process—from the many global crises to the daily flow of human tragedy, all served up at rapid pace in bite-sized, sensationalized headlines for maximum shock value.

The exposure of children to unsavory characters online takes countless forms. By the time a child reaches 13 (before which they are supposed to be protected by COPPA), online ad companies already hold 72 million data points about them. A whopping 96% of edtech apps share data with third parties, including advertisers. Social media platforms make $11 billion a year in ad revenue by vacuuming up every click, view, and interaction, creating sophisticated portraits of our children’s interests and vulnerabilities. Messaging features—buried within everything from gaming platforms to homework apps—create endless opportunities for contact with strangers. Sextortion schemes targeting young boys have exploded, with predators leveraging social engineering and adolescent impulsivity to obtain compromising content that’s then used for blackmail—schemes that in some cases have driven victims to suicide.

But even the minority of children who somehow avoid pornography and direct exploitation are still losing something profoundly important. Every hour spent scrolling is an hour lost from the essential experiences of childhood. While kids stare at screens, they’re missing out on the physical development that comes from outdoor play, the emotional intelligence built through face-to-face interactions, and the creative problem-solving skills sparked by unstructured exploration. American children spend significantly less time playing outside than previous generations and more than seven hours a day in front of a screen.

And those numbers don’t account for time at school. The amount of time students spend on screens in school varies widely, from one to several hours a day, and for every hour students are on screens, they spend 38 minutes off-task. On average, they don’t last six minutes before accessing social media, messaging friends, or engaging in other digital distractions. And now AI tools are being integrated with the same lack of foresight, consensus, or evidence. According to the CDC in 2023, 40% of students had persistent feelings of sadness or hopelessness, 20% of students seriously considered attempting suicide, and nearly one in ten actually tried to kill themselves. Some children are so anxious that basic activities like cooking and riding bikes around the neighborhood are introduced as “independence therapy.” For a growing number of kids, AI companions are sparing them the important, hard lessons of friendship and dating—a trend that made headlines last year when a Florida boy killed himself to be with his AI girlfriend.

Recent legislation demonstrates growing international recognition that certain digital spaces should be restricted to adults. Florida’s groundbreaking social media ban prohibits children under 14 from holding a social media account and requires parental consent for 14- and 15-year-olds, while establishing strict consent and age verification requirements. Australia similarly banned children under 16 from using social media. Multiple U.S. states, including Arkansas, Mississippi, Texas, and Utah, have enacted laws requiring adult websites to implement age verification systems. France has already blocked major pornography websites that failed to implement robust age verification systems, with French courts explicitly prioritizing child protection over unrestricted internet access. 

Building on these efforts, we can imagine something more transformative: an internet-free childhood and a child-free internet. Just as we accept that certain spaces and activities are reserved for adults, we could create a clear boundary between the adult digital world and the rich offline world of childhood by recognizing that independent internet access, like driving or voting, is fundamentally an adult capability. Through a combination of state laws, age-gating infrastructure, and cultural expectations, we could return the internet to its proper place as an adult space, while reclaiming childhood from screens.

Just as we teach teenagers to drive in controlled settings before they hit the open road, we can introduce digital skills through structured, supervised environments as children approach the age of adulthood. Schools can teach professional computer skills without throwing students into the wider internet, preparing them for the digital workplace without sacrificing their development to screens. Meanwhile, homes and communities would reinvest in the spaces and activities that nurture childhood development. As the common expression puts it, we should all “touch grass,” but at the very least we should ensure that children do. An adults-only internet would be better for everyone. Kids don’t need to be online, any more than they need to be behind the wheel, in a voting booth, or drinking alcohol. We should invest in the legal, technical, and cultural infrastructures that support aging into the responsibilities of being online as an adult, not find more ways to keep our kids on screens.