Rapid advances in artificial intelligence (AI) have transformed how we approach everyday tasks. Generative AI tools like ChatGPT have become increasingly accessible, enabling people to complete homework, write code, finish projects, and even plan vacations with remarkable efficiency. These tools provide instant answers, saving time and streamlining workflows. However, there is growing concern that relying on AI for these tasks could have unintended consequences, particularly for our ability to think critically and solve problems independently. A recent study by researchers from Microsoft and Carnegie Mellon University explored this issue, highlighting how AI tools are reshaping the way we reason and recall information.
The study surveyed 319 workers across fields including computer science, mathematics, arts, and design, all of whom regularly use generative AI tools such as ChatGPT in their jobs. Participants shared 936 examples of how they used these tools and reported on how much critical thinking they applied while doing so. The findings revealed that employees who placed greater trust in AI's ability to perform a given task tended to apply less critical-thinking effort to it. This suggests that while AI can make workers more efficient, it may also discourage deep engagement with their work and foster long-term over-reliance on the tools, potentially eroding independent problem-solving skills. However, the study also found that when the stakes were higher—for example, when the accuracy of an answer was crucial—workers evaluated AI-generated results more thoroughly and cross-referenced them with other sources.
Not everyone trusts AI-generated answers without reservation. Workers who were more confident in their own skills tended to review AI outputs more critically, while those with less experience or expertise relied more heavily on the tools. Lev Tankelevitch, one of the Microsoft researchers involved in the study, emphasized the importance of reviewing AI-generated output, particularly for individuals who lack deep expertise in the subject matter. This aligns with a growing body of research suggesting that generative AI tools can hinder learning and critical thinking when used as a crutch. For instance, a 2024 study in Turkish schools found that students who used ChatGPT as a tutor for math practice performed worse on subsequent math tests than those who did not use the tool. The study concluded that leaning on AI as a learning aid can impede the development of critical thinking skills.
Tankelevitch argues that AI works best as a "thought partner" that complements human capabilities rather than replacing them. He cited another study in which students whose tutors were supported by AI were more likely to master topics, provided that educators guided the process by setting prompts and supplying context. This collaborative approach underscores the importance of human oversight in ensuring that AI tools enhance, rather than undermine, learning and problem-solving. It is worth noting, however, that Microsoft, as a provider of generative AI services, has a vested interest in encouraging continued use of these tools. Moreover, the study's sample size and its focus on a specific population limit its generalizability, making it difficult to draw definitive conclusions about broader impacts.
The implications of relying on AI extend beyond the workplace and education. Humans have long outsourced memory and problem-solving to new technologies—using Google Maps instead of memorizing directions, or relying on phone contacts rather than recalling numbers. AI, however, represents a more significant shift, because it offloads not just information storage but the thinking process itself. According to artificial intelligence researcher Michael Gerlich, this raises concerns about the erosion of critical thinking skills. Gerlich's research found that the more people rely on AI tools like ChatGPT, the lower their critical thinking abilities tend to be. He warns that if we use AI solely as a shortcut for thinking, it could indeed make us less capable of independent reasoning over time.
Gerlich suggests that the key to using AI responsibly is to treat it as a tool for testing hypotheses and exploring opposing viewpoints, not as a substitute for critical thinking. He emphasizes that AI tools like ChatGPT are not truly "thinking"; they predict words based on patterns in their training data. Powerful as they are, they lack human creativity, judgment, and depth of understanding. Gerlich believes that people who remain aware of their own thinking processes—and who actively question AI-generated outputs—are less likely to lose their critical thinking skills. In fact, he views skepticism about AI's impact as a positive sign of active critical engagement. One participant in Gerlich's study, for example, voiced concern about a growing reliance on AI tools: the time savings were real, but so was the worry about losing the ability to think through issues as thoroughly as before. This kind of self-awareness, Gerlich argues, is a crucial defense against the potential downsides of AI reliance.
Ultimately, the question of whether generative AI tools like ChatGPT are "making us dumber" remains open. There is evidence that over-reliance on these tools can diminish critical thinking skills, but there is also potential for AI to enhance learning and problem-solving when used judiciously. The key lies in a balanced approach—one that leverages AI's efficiency and capabilities while preserving the essential human qualities of creativity, skepticism, and independent reasoning. As we navigate this evolving landscape, we must stay vigilant about how AI shapes our thinking and actively cultivate the critical thinking skills that define us.