The opinions expressed herein are derived from our research and our own experiences in:
developing a few AI Agents,
observing student engagement across different variations of our AI classes,
engaging in discussions within AI committees and with attendees of BizAI 2024.
In the new era of artificial intelligence (AI) and large language models (LLMs), critical thinking skills have become more important than ever before. A 2022 survey by the World Economic Forum revealed that 78% of executives believed critical thinking would be a top-three skill for employees in the next five years, up from 56% in 2020 [1]. As AI systems become more advanced and capable of performing a wide range of tasks, it is crucial for humans to develop and maintain strong critical thinking abilities to effectively leverage these tools and make informed decisions.
One of the key reasons critical thinking is so valuable in this context is that LLMs excel at providing information and executing tasks based on their training data, but they often struggle with higher-level reasoning, problem decomposition, and decision-making. While an LLM can generate code, write articles, or answer questions, it may not always understand the broader context or implications of the task at hand. This is where human critical thinking comes into play.
For example, consider a scenario where a company wants to develop a new product. An LLM can assist by generating ideas, conducting market research, and even creating a project plan. However, it is up to the human decision-makers to critically evaluate the generated ideas, assess their feasibility and potential impact, and redirect the AI model where it made a mistake or missed something significant, such as company values, long-term goals, or potential risks.
Moreover, as the value of knowing "how" to perform a task decreases due to the capabilities of LLMs, the value of knowing "what" to do and "why" increases. Because AI can handle much of the "how," professionals are freed to focus on the "what" and the "why." By developing strong critical thinking abilities, professionals can collaborate effectively with AI systems, leveraging their strengths while compensating for their limitations. This synergy between human reasoning and AI capabilities can make professionals more productive, reduce costs, and help companies grow substantially. However, critical thinking skills must be actively cultivated and practiced. As professors, we need to think of ways to equip students with the tools and training necessary to thrive in an AI-driven world. Let us consider an example of how AI and critical thinking can be taught in tandem.
In one of our courses, we teach students how to use AI models effectively to augment their thought process and to plan AI agents for revamping business processes. Students explore how to design an AI agent that captures the tacit knowledge experts develop over years of experience, and then how another AI agent can use this tacit knowledge, in conjunction with Retrieval-Augmented Generation (RAG), as part of its context to generate decisions or content that mimics the complex decision making of an expert.
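To make the RAG pattern concrete for readers, the following is a minimal, illustrative sketch of the idea: expert "tacit knowledge" notes are retrieved by relevance to a question and placed into the context of a prompt. The notes, the word-overlap scoring rule (a stand-in for the embedding-based similarity search a real RAG system would use), and the prompt format are all hypothetical examples, not our actual course materials or a production pipeline.

```python
def retrieve(query: str, notes: list[str], top_k: int = 2) -> list[str]:
    """Rank notes by simple word overlap with the query.

    A real RAG system would use vector embeddings and a similarity
    search; word overlap keeps the sketch self-contained.
    """
    query_words = set(query.lower().split())
    scored = sorted(
        notes,
        key=lambda n: len(query_words & set(n.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query: str, notes: list[str]) -> str:
    """Assemble the most relevant expert notes into an LLM prompt context."""
    context = "\n".join(f"- {n}" for n in retrieve(query, notes))
    return f"Expert guidance:\n{context}\n\nQuestion: {query}"


# Hypothetical tacit-knowledge notes captured from an expert.
tacit_notes = [
    "Discount approvals above 15 percent require a margin review",
    "Long-term clients value delivery reliability over price",
    "New vendors are piloted on small orders before large contracts",
]

prompt = build_prompt("Should we offer a 20 percent discount", tacit_notes)
print(prompt)
```

The resulting prompt would then be sent to an LLM, which grounds its answer in the retrieved expert guidance rather than in its training data alone.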
Through this process, students not only learn technical skills related to AI and LLMs but also develop essential critical thinking abilities such as problem decomposition, strategic planning, and effective communication. They learn to view AI as a tool to augment and enhance their own thinking, rather than a replacement for human judgment and decision-making. They also gain a better understanding of the limitations of AI models. These AI models solve many "how" type problems that professionals previously had to spend significant time learning, planning, and working on. However, these models also come with their own set of challenges, such as limited context windows, limited reasoning abilities, and variability in responses. Hence, there is a strong need for students to prepare for AI integration in workplaces while accounting for AI models' limitations.
Teaching students to view LLMs as highly knowledgeable assistants that sometimes get confused and need direction is a valuable approach. It encourages students to take an active role in guiding and correcting the AI, rather than simply accepting its outputs at face value. They recognize that while AI can provide valuable insights and generate ideas, it is ultimately up to humans to critically evaluate and act upon that information. This understanding helps students develop a healthy and productive relationship with AI, one in which they are in control and can effectively leverage these tools to support their own learning and growth.
While the collaboration between humans and AI presents numerous opportunities, it is essential to be aware of potential drawbacks and risks. As AI models become more advanced and capable, there is a genuine concern that some early learners may become overly dependent on these tools. This over-reliance could diminish their critical thinking and problem-solving abilities, possibly fostering "intellectual laziness." Individuals might become less inclined to learn and explore new concepts on their own, relying instead on AI for answers. Further, they may lose faith in their own judgment and stop questioning the AI model's output. In one of our research studies, we observed this behavior among early-career software developers who relied too heavily on AI models. This situation could widen the divide between those who use AI to boost their productivity and those who lean on it too heavily.
To counter these risks, along with fostering critical-thinking abilities, we need to stress the need for critical engagement with AI. We should encourage students to actively scrutinize and question the outputs of AI. We also need to help students see that excessive reliance on AI can lead to a lack of depth in understanding and personal growth. By advocating for a strategy that values both AI resources and independent thinking skills, we can guide learners through this new landscape successfully.
As we look towards the future, the increasing importance of critical thinking skills in the AI era will have significant implications for job markets and educational curricula. Professionals who can effectively collaborate with AI systems and leverage their capabilities will be in high demand. Hence, faculty will need to adapt their programs to ensure that students understand the importance of using AI as a tool to augment their thinking, not as a replacement for it. Further, we must rethink our courses and place more emphasis on the "what," challenging students to apply their critical thinking skills to real-world problems and decision-making scenarios.
This is not a trivial task, and it will require collaboration and idea-sharing among faculty members. We have been actively exploring these issues and would greatly value the perspectives and insights of our colleagues on this topic. We welcome further discussions and encourage you to reach out to us to share your thoughts and experiences.
It's important to note that these insights are primarily anecdotal and have not undergone scientific scrutiny. Additionally, the research involving developers where we noted instances of intellectual laziness has not been validated yet through peer review.
World Economic Forum. (2022). The Future of Jobs Report 2022. Geneva, Switzerland.