
When Platforms Generate Harm

  • Writer: Jacqueline Vickery
  • Jan 14
  • 5 min read

What the Grok controversy reveals about youth safety, platform design, and responsibility


The recent coverage surrounding Elon Musk’s Grok AI, embedded within X, has prompted renewed global concern about technology, harm, and responsibility, particularly as it relates to young people. Reports that Grok has been used to generate thousands of sexualized images of real individuals, including children, have rightly provoked outrage. What matters now, however, is not only condemning specific outputs, but understanding what this moment reveals about how platform power, design decisions, and accountability have shifted in ways that directly shape young people’s lives.


What the Grok episode makes clear is that harm no longer emerges only through social interaction that platforms then amplify. It is being actively produced through platform design choices made with full knowledge of how they would likely be used. Generative AI systems like Grok were built within a media and cultural environment where the nonconsensual sexualization of bodies, especially those of girls, has long been normalized and weaponized. When systems generate sexualized images of minors, the harm reflects deliberate design choices made despite extensive prior warnings, revealing whose safety is prioritized and which harms are treated as acceptable collateral in the pursuit of growth and technical capability.


How Platform Design Changes Adolescent Risk


From a youth-centered perspective, platform design matters because of how it intersects with ordinary adolescent development. Curiosity about bodies, emerging sexuality, and sexual exploration are normal parts of adolescence, and peer and media cultures have long provided spaces for experimentation around these topics. In earlier digital contexts, those dynamics carried risks, including nonconsensual image sharing or premature exposure to sexual content, and required adult guidance and care. As I've previously argued, platform affordances shaped interaction and could accelerate harm, but they were not themselves the primary source of risk. Generative AI changes that relationship by relocating harm from social interaction into platform infrastructure, making the system an active source of risk rather than a space where existing risks are merely intensified.


Adolescence is a period in which desire, humor, fantasy, and social boundaries are still being worked out, often through experimentation rather than through fully formed ethical frameworks. For young people navigating both emerging sexuality and emerging technology at the same time, this convergence matters. Tools that manipulate or generate sexualized images of real individuals can feel adjacent to other forms of sexual media, even though the consequences are far more severe. What may feel like playful experimentation—can the app make me look sexier naked? What does my crush look like in a bikini?—can be translated instantly into images that violate dignity, strip consent from the process, and, in some cases, cross into illegal territory, regardless of intent or awareness.


This is why young people cannot reasonably be expected to manage these risks on their own. The confusion does not stem from recklessness or indifference, but from platforms designed to blur distinctions and make serious actions feel casual, reversible, or fun. Uploading a photo of a friend, a crush, or even oneself may not feel risky in the moment, particularly when the interface frames the action as harmless experimentation rather than sexual violation. Yet once an image enters these systems, control is lost. Images can be altered, stored, shared, or repurposed in ways young people neither anticipate nor agree to, with lasting consequences.


For parents and caregivers, the implication is not simply the need for more cautionary warnings, but for a different way of framing risk altogether. These platforms are not environments where harm arises mainly from how people treat one another, as in pornography or sexting, but are systems that generate outcomes independently of intent, relationship, or judgment. Helping young people recognize the shift from social risk to platform risk, especially when uploading images of real people, including themselves, means shifting the conversation from “be careful what you share” to “understand what kind of system you are entering,” and why some tools create risks no amount of individual care can reliably contain.


What Media Literacy Can—and Cannot—Do


As someone who has spent years teaching and advocating for media literacy, I want to be clear about what literacy can and cannot do in this moment. Media literacy can support young people in understanding consent, navigating peer dynamics, and critically assessing the digital environments they inhabit. But you cannot critically interpret your way out of having a sexualized image of yourself fabricated without your knowledge. You cannot adjust privacy settings to prevent someone else from prompting a system to generate your body. You cannot refuse consent to a process that does not ask for it. The harm does not arise only from misreading content or making a risky choice; it also arises from the existence of the platform itself.


Media literacy was never meant to function as a substitute for safety by design, nor as a defense against systems that operate as active agents of harm. Treating it as such shifts responsibility away from the institutions that build and deploy these tools and onto individuals—often children—who are least equipped to bear it.


A Duty of Care for Young People


What the Grok controversy demands is accountability grounded in a clear duty of care and a recognition of children’s rights, including the right to participate in public and digital life without unmitigated exposure to foreseeable harm. When platforms deploy systems with known risks, minimal guardrails, and predictable consequences involving minors, the appropriate response is enforceable responsibility.


Legal frameworks in the United States have struggled to keep pace with platform power, and protections such as Section 230 have often been interpreted in ways that shield companies from responsibility for the environments they design. That history complicates accountability, but it does not eliminate the obligation to act.


We already understand how responsibility works in shared physical spaces. At a community pool, for example, we teach children how to swim and adults supervise, but we do not assume individual vigilance is the only safeguard. We trust that the space itself is designed and regulated for safety, and thus most parents are not asking to see water testing reports before letting their children jump in. That trust rests on enforceable standards, regulations, and accountability.


Digital platforms that function as social and cultural spaces for young people should be held to no less a standard. When safety and risk are invisible, optional, or deferred until after harm occurs, trust erodes and responsibility has already failed. Platform architecture matters here because it shapes how young people interpret possibilities, what they are exposed to, what behaviors are encouraged, and which harms are prevented before anyone has to bear them.


Protection in this context means enforceable standards, third-party oversight, and real consequences when design decisions place children at risk. From the standpoint of youth development and youth rights, the protection of minors and the right to safe participation must be treated as non-negotiable conditions of platform design. A meaningful duty of care requires taking seriously the harms history has already made clear, choosing not to build systems that enable them, and accepting accountability when those warnings are ignored. What is at stake are the conditions under which young people are asked to grow, participate, and belong in a digital world they did not choose, but are nonetheless expected to navigate safely.
