Using AI changes your brain— and it might be ruining everything
An MIT study published in June 2025 found that people who consistently used artificial intelligence programs, such as ChatGPT, exhibited lower levels of brain engagement and increasingly lazy behavior.
A Brief History of AI
Artificial intelligence as a concept has existed since 1927’s “Metropolis,” a film featuring a complex AI-assisted robot. By 1952, computer scientist Arthur Samuel had built a real system that trained a computer to play checkers, pioneering what he would call “machine learning.” Four years later, Dartmouth professor John McCarthy held a summer workshop on the emerging field, which he decided to call “artificial intelligence.”
The Department of Defense funded further research into the field in the 1960s. Influenced by the media of the time, that effort aimed at a “general AI”: a versatile machine capable of independent thought. When strong results failed to materialize, research shifted by the 1990s toward “narrow AI.” These programs can’t think or create anything on their own. Rather, they’re fed a complex set of data and trained by programmers to recognize and replicate patterns.
Hollywood was still ahead of reality, as 2008’s “Iron Man” introduced “Jarvis,” an AI assistant that would talk to its user, answer questions and control mechanical operations on command.
In the real world, advancements in technology fueled the 2010s wave of automation, revolutionizing industries based in manufacturing and assembly-line work. In 2017, Google introduced Google Lens, which used AI pattern-recognition to identify elements in user-taken pictures.
By 2022, generative AI models were introduced to consumers with the release of ChatGPT, where users could input prompts and get a response. That same year, Midjourney was released, with a model that would turn text prompts into a piece of digital art. Now, the power— and potential dangers— of AI were in the hands of everyone.
This was the beginning of what ChatGPT’s developer, OpenAI, called the “Intelligence Age.”
AI Invades Schools
The effects of a publicly available AI trickled into a plethora of ecosystems. Social media algorithms improved, video games got more complex, autocorrect got more predictive— and homework got a bit easier.
Increasingly, students started turning to AI platforms like ChatGPT to help with— or to fully complete— their school assignments.
Sophie Henkel, an elementary education major at GRCC, said that, even before AI, she had encountered children who were swapping critical thinking and hands-on experiences for digital substitutes.
“It’s a little bit concerning how reliant (kids) are on technology, and not using their imagination and playing, like how kids should be,” she said.
Nataliya Kosmyna, the primary author of the MIT study, worries that AI will worsen those problems by introducing the “absolutely bad and detrimental” idea that she calls “GPT kindergarten.”
Despite the potential concerns, schools are making an effort to integrate AI into their systems. Jonnathan Resendiz, Faculty Director of Grand Rapids Community College’s AI Incubator, had previously told The Collegiate that he sees “a future where AI has a bigger role in shaping how we learn… even influencing school policies to keep up with new tech.”
GRCC itself recently announced a “three-year initiative focused on helping students apply AI to real-world business challenges” through a seven-week capstone course in which students develop and use an AI model trained on data supplied by 20 local businesses.
Schools are, however, putting safeguards in place to counteract AI’s most significant drawback: misinformation. While misinformation on a national and global scale tends to be targeted and deliberately intended to cause confusion, AI-fueled misinformation is often unintentional, and it frequently goes unnoticed.
Since artificial intelligence is essentially a repository of data, the programs can’t fact-check themselves: they don’t know what’s right or wrong, only what they’ve been trained on. So, while their training data is mostly accurate, they frequently “hallucinate,” misinterpreting data and producing invented statements presented as fact. This can occur in as many as 79% of searches.
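To illustrate the pattern-replication idea, here is a minimal, hypothetical sketch in Python: a toy bigram generator that is nowhere near how a large model like ChatGPT is actually built, with a miniature “corpus” invented purely for illustration. It continues a prompt using only the word-frequency patterns it has seen, and nothing in the process ever checks whether the resulting sentence is true.

```python
import random
from collections import defaultdict

# Toy "training data": the only patterns this model will ever know.
corpus = (
    "the study found that students who used the tool wrote weaker essays "
    "and the study found that engagement dropped when the tool wrote the essays"
).split()

# Count which word follows which; this is the entire "knowledge" of the model.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start_word, length=12):
    """Continue from start_word by repeatedly sampling a statistically likely next word."""
    word, output = start_word, [start_word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:  # dead end: no learned pattern to follow
            break
        word = random.choice(followers)  # chosen by frequency, never by truth
        output.append(word)
    return " ".join(output)

print(generate("the"))
# Possible output: "the study found that engagement dropped when the tool wrote weaker essays"
# It reads fluently, but the "finding" is just recombined patterns, not a checked fact.
```

Scaled up by billions of parameters, the same basic dynamic of fluent pattern completion without a built-in fact-checker is what makes hallucinations possible.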
This results in what GRCC Professor of Technology and Humanities David Billings described as an AI pulling “several real facts, then something it just makes up, and then something from a French conspiracy website, (that it ties) together in a convincing way,” leading to a finished product not based in authentic research, but that the student might believe is genuine.
As a result, many schools have instituted anti-AI policies, even directing teachers to run students’ work through AI detectors. Students frequently find ways around the detectors, though. As reported by The Atlantic, some students combined responses from multiple AI programs or asked the system to include intentional typos. Newer models can also process images and abstract questions, letting students solve more complex assignments and produce personalized study guides.
Making matters more complicated, many teachers break their own AI policies. A Gallup poll found that 60% of K-12 teachers used an AI tool during the 2024–25 school year, and 32% of them used the tools at least weekly. Vanderbilt University administrators made headlines in 2023 when they used AI to write a mass email after a shooting at Michigan State University.
AI on the Brain
This surge in the use of artificial intelligence is especially concerning as the full effects AI can have on the brain come to light.
In the MIT study, 54 subjects, aged 18 to 34, were asked to write essays on a variety of SAT prompts. Those assigned to write their essays with ChatGPT routinely displayed weaker brain connectivity than the other groups.
The essays they wrote were called “soulless” by two English teachers who reviewed them, noting that they were less distinct from other essays in the group and less specific in their focus than their non-AI counterparts.
“Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels,” the study said.
Members of the group that weren’t allowed to use AI, however, showed high levels of neural connectivity and engaged more curiously with their writing topic.
When asked later to rewrite one of their essays, the group using AI “remembered little of their own essays,” and showed weaker brain waves— a sign that the subjects hadn’t engaged their deep memory.
Shockwaves and Brainwaves
The consequences of this could go beyond poor performance on an essay. A 2011 article in the journal Behavioural Brain Research found that new neurons formed in the hippocampus of adult brains become incorporated into the larger circuitry for learning and development only if they are engaged in “some kind of effortful learning experience”; otherwise, they will likely die. AI use, which often fails to engage these parts of the brain regularly enough, poses a serious threat to continued cognitive development.
These concerns haven’t stopped AI’s wide implementation in nearly every field. National University found that 77% of companies are “either using or exploring the use of AI in their businesses,” and the United Nations’ Trade and Development organization predicts that artificial intelligence will be a $4.8 trillion industry by 2033. Last April, Pew Research Center found that 79% of polled Americans use AI at least several times a day. In an executive order, President Trump said it “will play a critical role in how Americans of all ages learn new skills, consume information, and navigate their daily lives.”
Why, despite the known problems, is AI still so pervasive?
The Road is Still Generating
In a 2024 blog post, OpenAI CEO Sam Altman, who’s since said that investors were “overexcited” about AI, wrote that it “helps us accomplish much more than we ever could without AI; eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine… we can have shared prosperity to a degree that seems unimaginable today.”
That standard of shared prosperity is the goal for many leaders in the AI ecosystem. Using machine learning, these programs can, in theory, complete tasks faster and more efficiently than humans, and can even do things humans are incapable of, thanks to their ability to recognize and predict patterns.
What these experts don’t consider, though, is what AI means for humans. In practice, artificial intelligence isn’t just finding a cure for cancer and playing chess. It’s making music, writing stories, filling in for therapists, or even taking the place of friends.
For decades, movies and shows presented audiences with artificially augmented assistants, trained to do anything they could to make their users’ lives easier: the machine would do the hard stuff so the person could focus on living.
Our reality is not like the movies. In a world where 70% of people looking for work say it’s hard to find a job, where 770,000 people are homeless or living in shelters, where one-third of Americans struggle to afford groceries while Elon Musk is expected to soon become the world’s first trillionaire, it’s the machine that makes the art.
AI gets to live while we’re forced to survive.
Proponents of AI point to its problem-solving capabilities and immense potential as a blanket defense of why a generated clip of Will Smith eating spaghetti somehow has more merit than a human-created film.
But AI having the power to make things doesn’t mean it should have free rein over anything it’s capable of. In the uncomfortably pertinent words of Dr. Ian Malcolm in “Jurassic Park”: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
The news is important because it helps to maintain a shared sense of reality— if everyone has access to the truth, untainted by individual biases or propaganda, they can work to solve the problems that have plagued every society since the beginning of time. Progress is only possible when the information is accurate.
Education is worth pursuing because absorbing new information creates a more well-rounded baseline to approach any number of experiences. Being exposed to unique viewpoints and ideas makes it possible to synthesize them into an authentic point of view to bring into the world.
Art is worth engaging with because it represents the combined efforts of an enormous group of people, each putting their unique points of view into their work. As a result, the final product has something to say, a truth about humanity that will impact each member of its audience in a different and personal way.
Generating any of these things robs them of their soul— they exist, yes, but completely devoid of the reason they were created in the first place.
The dystopian view of AI has always been that it will use its sentience to overtake us when we least expect it. We find ourselves now in a different situation— one where AI doesn’t have sentience, but we’re willingly gambling ours to give it more to do.
AI is not inherently dangerous. It’s not inherently anything. It’s a tool. Like any tool, we have to decide how to use it.
In an interview with Vox, cognitive neuroscientist Sam Gilbert spoke to this threat of AI: what’s sinister about using a computer program to express our humanity for us isn’t that it’s capable of doing so. It’s that we want it to.
“The real risk may not be that we outsource too much thinking, but that we surrender our agency to decide which thoughts are worth thinking at all,” he said.



