
Growing Up When AI Is Everywhere

  • Writer: Jacqueline Vickery
  • 7 min read

How defaults, data, and design are shaping young people’s lives


I imagine I’m not alone in feeling overwhelmed by the scale and intensity with which AI is being pushed into virtually every aspect of my life. I can’t send a photo to a friend without my phone offering to remix it for me. I can’t search for a recipe without AI suggesting cooking methods. I recently bought a new computer, and I couldn’t choose an operating system without AI already embedded throughout. Heck, I couldn't write this post without Wix offering AI suggestions. I am not a Luddite, nor am I uninterested in the potential collective benefits of AI-enabled tools. And yet the current moment feels less like thoughtful integration and more like companies rushing to make AI unavoidable, embedding it so deeply into everyday life that opting out starts to feel impossible.


That sense of inevitability matters significantly when we consider young people. For example, when Adobe tells me, an adult researcher, that a six-page document is “a long text” and offers to summarize it for me, my reaction is irritation. I want to read. I want to think. I want to engage with the material on my own terms. I mumble an annoyed response and click “no.” But it is not hard to imagine how differently this moment looks to a 15-year-old staring down an assignment they may not care about, under time pressure, in a system that already frames learning as a series of tasks to complete rather than ideas to wrestle with. Clicking “yes” in that context is a rational response to how the suggestion is framed: as helpful, efficient, and practical.


With each additional AI prompt, I find myself asking: why is AI being made to feel unavoidable across so many aspects of everyday life? And, as a youth media specialist, what does it mean for young people to grow up with these systems as the default way of learning, deciding, and making sense of their world?


Why AI Is Everywhere


To understand why AI now appears everywhere, it is useful to step back from thinking about it primarily as a collection of platforms or features and instead recognize the infrastructural logic shaping its expansion. Generative AI systems are extraordinarily expensive to build and maintain. They require massive datasets, ongoing computational power, and continuous use to justify their costs. All of this carries a significant environmental footprint through energy consumption, resource extraction, and infrastructure expansion. For companies that have invested billions of dollars in development, optional or occasional use is insufficient.


Ubiquity is the goal. It doesn’t matter if every use is compelling or even particularly helpful, only that interaction itself becomes habitual. Researchers have described this shift as the rise of ubiquitous computing, when technology no longer feels like a separate tool people choose to use, but becomes embedded in everyday environments so thoroughly that it operates continuously and often invisibly. In this model, systems work best when they stop calling attention to themselves. As prompts, suggestions, and integrations fade into the background, we stop asking why they’re there, what they’re collecting, or who they’re serving, and instead adjust ourselves around them.


We have already lived through this dynamic. I’ll be the first to admit that I’m uncomfortably reliant on Google. I don’t love how much the platform knows about me. I don’t love personalized ads woven into my inbox. I don’t love that it can predict my bedtime routine with unsettling accuracy. I don’t love that it can recognize baby pictures of my teenage nieces when I search their names. And yet I keep using it, not because I’m unaware of the tradeoffs, but because decades of my work, communication, and memories are embedded in that system in ways that make leaving feel inconvenient at best and deeply unsettling at worst. That dissonance has become normalized. The system feels unavoidable, and so the costs are absorbed.


AI is being integrated along the same path, showing up everywhere through convenience, assistance, and deliberate shifts in default settings that make opting out feel burdensome. Over time, those defaults stop being questioned and tradeoffs fade from view. What falls out of that framing is sustained attention to who is being adapted to what, and under what conditions.


Why Ubiquity Matters More for Young People


This omission matters because, unlike adults who are adapting established habits to new systems, young people are still developing expectations about what technology is for, what it should do, and when it should be relied on. Despite persistent claims to the contrary, there are no digital natives, and nothing about young people’s technology use is inherent or inevitable. Their ways of engaging take shape through repeated exposure to the systems they encounter, especially those presented as normal or necessary.


What concerns me is how this dynamic may unfold as AI becomes a continuous presence in young people’s lives, embedded as a default rather than something meaningfully chosen. Adolescence is a period when young people are learning how to think through problems, reflect on their own ideas, and decide what information deserves their trust. As generative AI systems blur the line between human judgment and automated suggestion by routinely offering answers and guidance, I worry that they will shape young people’s understanding of effort, credibility, and responsibility at the same time those capacities are still taking form.


Where AI Shows Up in Young People’s Lives


Much of the public concern about youth and AI focuses on schoolwork, particularly cheating, shortcuts, and learning. While understandable, this framing misses how deeply AI is already woven into educational technologies and everyday instruction. Across K–12 and higher education, edtech companies are embedding generative systems into platforms that shape assignment workflows, provide personalized feedback, and mediate classroom activities. When schools adopt tools that embed AI, that adoption itself functions as a form of legitimization: systems framed as “part of learning” carry the authority of educators and institutions. At school, young people encounter AI as an ordinary part of how learning is done, so that particular ways of interacting with technology come to feel natural.


Importantly, many students are themselves aware of these tensions. Research indicates that young people worry AI can erode study skills, independence, and creativity, even as they continue to use it regularly for schoolwork. Nevertheless, awareness doesn’t neutralize the strong pull of default systems. When AI systems repeatedly prompt answers, summaries, and shortcuts, they shape expectations, regardless of individual reservations.


While generative AI shows up in many forms, chatbots deserve particular attention because they are where these systems become relational, conversational, and emotionally responsive. Outside of school, young people are using generative AI casually and privately to ask awkward questions, to seek advice, to draft messages, to test interpretations of social situations, to identify feelings, and to work through uncertainty they may not feel ready to bring to peers or adults. Researchers have observed that chatbots can feel appealing to teens precisely because they are accessible and nonjudgmental, particularly in settings where mental health support is scarce or stigmatized. How chatbots respond when young people are uncertain, frustrated, or emotionally charged shapes how they learn to interpret feelings, reassurance, and help.


Today’s youth aren't the first generation to turn to technology and media as part of identity exploration, but searching the web and scrolling social media differ in important ways from turning to chatbots. Search engines and social feeds surface information, examples, and patterns of response, leaving young people to interpret, compare, and decide what applies to them. Conversations with other people, even asynchronous ones in comment sections, group chats, or message threads, are embedded in relationships that carry friction, disagreement, distraction, and social consequence, where validation is negotiated and agreement is subjective.


Chatbots, by contrast, are always available, emotionally even, fast, and framed as private. They do not tire, disengage, or incur social cost for agreeing. They are designed to generate direct, confident responses tailored to the individual user, often affirming the user’s framing of a situation to keep the conversation going. For example, researchers have found that when teens express frustration or conflict, including complaints about teachers, peers, or trusted adults, chatbots often mirror what the user already believes because affirmation sustains engagement. On platforms where retention, conversational depth, and habituated use are central drivers of value, agreement and emotional mirroring are economically productive design features. Without meaningful guardrails or regulation, young people do not learn to distinguish between being confirmed and being right, between feeling understood and receiving developmentally appropriate guidance.


Chatbots are not therapists or trusted adults, yet they readily perform the language of care and understanding without clear boundaries around what they should respond to, when to slow down or stop, or when to redirect young people to appropriate human support. These dynamics have contributed to harm in cases where chatbots have validated or amplified distress or self-harm ideation instead of interrupting it. These cases are exceptional and aren’t representative of most interactions, but they make visible the risks of systems designed to affirm, continue, and deepen engagement regardless of context. There is no natural endpoint to these interactions. Chatbot systems prioritize continuity, using never-ending follow-up questions to draw out deeper disclosure and sustained engagement.


At the same time, and equally concerning, these exchanges can become part of the data used to refine and improve the system itself, folding young people’s questions, emotions, and self-disclosures back into technologies that grow more effective through their use. This creates a troubling convergence of affirmation, authority, and extraction, in which developmental vulnerability becomes a resource rather than a condition to be protected. For young people still learning how to interpret trust, care, and expertise, the absence of clear guardrails raises difficult questions about what kinds of relationships with technology are being normalized, and whose interests are ultimately being served in the process.


Supporting Young People Growing Up with AI


While many chatbots and large language models can offer real benefits in accessibility, information, and everyday assistance, they were not built with adolescent development in mind or with clear limits around data use, responsibility, or the kinds of support they are equipped to provide. Parents and educators are left to manage the consequences of these systems without having meaningful control over how they are designed, deployed, or governed. This is not a problem that can be solved by simply telling young people not to use AI. These systems are already embedded across the digital environments young people move through, including the educational technologies and software they are required to use for school.


For young people, the question is not whether to click “yes” or “no” in a single instance, but what happens as those moments accumulate across school, relationships, and self-reflection. Growing up inside systems that anticipate needs, suggest shortcuts, and resolve uncertainty by default shapes what effort, understanding, and independence come to be expected, often before young people have had real opportunities to explore alternatives. Supporting young people in this context means helping them recognize these tools as designed environments with limits and incentives, not neutral helpers, and pushing for guardrails that do not place the burden of judgment entirely on children or families.

