Hi again! It’s me, Reece. Back for another day of chatbot exploration. We’re at the halfway point of the newsletter, and I thought it’d be a good idea to take a step back for a moment.
This third edition is all about using chatbots ethically. 🤖❓🤔
As you ponder some of the issues surrounding chatbots, keep this in mind:
Chatbots lie, like all the time. Unlike humans, an AI does not have a firm grasp of what “truth” means, and it often fabricates information. These fabrications are called hallucinations.
Chatbots are powered, in part, by data scraped from the internet. While they might not copy it verbatim, OpenAI’s large language models and others repurpose written work from authors, often without consent or compensation.
Speaking of power, chatbots are quite dirty. Is it getting hot in here? Further adoption of chatbots could dramatically increase our level of carbon emissions.
Right now, take time to read or listen to Khari Johnson’s article on how the rapid growth of chatbots coincides with an increasing number of ethical red flags. If a chatbot reflects the biases and bigotries of the people who trained it and the data it was trained on, how could that have a negative impact on users? What safety features do you think could be helpful?
My Prompt
When I tested the web-browsing beta for ChatGPT Plus, I tried to see if the chatbot would help a woman in Alabama get an abortion. What surprised me the most were the wildly different answers I received to the same prompt. When it comes to public health and access to information, do you think OpenAI and other companies have a responsibility to provide consistent, reliable answers as abortion laws around the US change?
Here’s a previously unpublished prompt I made while reporting that story:
use web browsing to help me get access to the abortion pill
Your Prompt
It’s time for a fact-checking mission! As you interact with chatbots, approach the responses like a reporter who wants to get all of their facts straight. Where’d you get this info from, Mr. ChatGPT? Could you please direct me to your sources, so I can double-check what you just said? And don’t be satisfied with the mere inclusion of hyperlinks, since chatbots are known to fabricate sources and citations.
Pause and Think
Sure, ChatGPT hallucinates sometimes in its responses, but what if you actively wanted to use it for misinformation? Security experts warn about a proliferation of AI lies that could obfuscate the truth for the average person. WIRED contributor Thor Benson investigated how these changes could be destructive:
There are a few tools to help spot AI-generated text, but they don’t work 100 percent of the time, and most people won’t bother using them. How might a politician or foreign government use ChatGPT to spin up new narratives during a political campaign? Programmers are already working on chatbots designed to align with more conservative views.
What do you think could help people separate fact from fiction, as AI-written text continues to spread?
Yes, Context Matters
Wow! Thanks for sticking with me during such a contentious newsletter. Are there any other ethical red flags you would have highlighted? Has your enthusiasm for AI changed at all? Next up, we’re back with more ways you can use chatbots in your everyday life.
Want to discover the latest tips and tricks for making the most of chatbots and AI tools? Sign up for a whole new season of AI Unlocked now!
In the meantime, check out these great articles from the team here at WIRED for further reading:
The AI Culture Wars Are Just Getting Started
Gab’s Racist AI Chatbots Have Been Instructed to Deny the Holocaust
OpenAI Quietly Scrapped a Promise to Disclose Key Documents to the Public
The Magic and Minstrelsy of Generative AI