A new survey found that parents significantly underestimate their teens' use of generative artificial intelligence. The technology, popularized by the conversational platform ChatGPT, lets users ask nearly any question of an AI chatbot and receive a specific, detailed, original response.
The survey, conducted this spring by the nonprofit organization Common Sense Media, polled 1,045 U.S. adult parents and their teens. Overall, 70 percent of the teens said they used generative AI.
Of the teens who said they’d used at least one platform, only 37 percent of their parents knew about it. Almost a quarter of parents whose child had used the technology didn’t know it, and nearly 40 percent weren’t sure whether their teen had experimented with the tools.
Teens with generative AI experience said they most commonly used the platforms for homework help, to stave off boredom, and to translate text from one language to another. Many admitted to using generative AI for school assignments without their teacher’s permission. Some created new content using someone else’s voice or image, sometimes as a joke or to tease another person. Participants were also asked about their experience using AI to make explicit nonconsensual imagery; those findings will be included in a future report.
Amanda Lenhart, head of research for Common Sense Media, said that parents should start discussing the complexities of generative AI with their children so that they’re prepared for what they might encounter.
“You’ve got to talk about it because your kids are already using it, even if you don’t think they are,” Lenhart said.
What your teen should know about generative AI
While the technology may benefit teens in numerous ways, Lenhart noted that young users may not be aware of its potential to reproduce bias or to share incorrect or misleading information, a phenomenon often known as hallucination.
Teens also might not know when using generative AI tools amounts to cheating, or when creating imagery of someone without their consent violates that person’s privacy. They may not understand that the large language models upon which generative AI platforms are built can have different training, tuning, and guardrails.
“These tools pull from the best and the worst of the internet and the best and worst of humanity, and that means that sometimes what they share with us is both wrong and awful,” Lenhart said.
Parents may not know any of this if they haven’t read about or used generative AI themselves, which is why it’s helpful for adults to familiarize themselves with the technology as they begin discussing it with their child, Lenhart added.
She recommends that parents test out the technology for themselves. They may already spend time on platforms or services that incorporate generative AI without realizing it.
Google’s Gemini, for example, produces generative responses to user queries. Meta’s Facebook Messenger and WhatsApp have generative AI features as well; basic Meta AI prompts include “give me a personality quiz” and “how do I speed read?”
Teens on Snapchat likely already know its generative AI chatbot, My AI, which adults could experiment with and then discuss with their child, Lenhart said.
When engaging with teens about generative AI, parents can explore the technology alongside their child. Lenhart suggested asking it to write a silly poem, come up with Halloween costume ideas, or plan a week of meals. A parent could also watch their teen test a chatbot’s knowledge of a topic they’re passionate about, then discuss how to tell whether the AI got things right or wrong.
Whatever parents choose, Lenhart said the focus should be on helping teens identify how AI platforms do and don’t work, and on having that conversation together.
Generative AI at school
The survey suggests those discussions aren’t happening in the classroom, which Lenhart said is consequential.
Common Sense research shows that teens who do participate in complex conversations and lessons about the topic at school have a more nuanced understanding of generative AI’s pros and cons. But the survey found that this isn’t happening for over half of the young respondents.
Lenhart said it’s important for teens to be told the rules for using generative AI when completing schoolwork, which may vary from teacher to teacher.
“Most adolescents don’t want to cheat,” Lenhart said. “They want to use it in a way that is appropriate for learning, so tell them where that is so they can walk up to that line and not cross it.”
And according to the survey, suspicion of cheating may disproportionately affect Black students: Black teens were twice as likely as White and Latino teens to say that a teacher had incorrectly flagged their schoolwork as the product of generative AI when it wasn’t. Lenhart said the finding echoes historic inequities in school-based discipline for Black youth.
“AI tools are not exempt from the bias you probably already experience in your school,” Lenhart said.
The issue illustrates why it’s so important that adults, at school and at home, understand the challenges teens will face in the era of generative AI.