The five-member executive leadership team came to this meeting prepared to make their case to the CEO and board chair as to where the company should invest in AI over the next six to 18 months.
Each had turned to their own favorite AI tool to generate what they believed would be an irrefutable business case for their position, including predicted cost/benefit analyses, revenue and profit projections, expected organizational impact, and timelines.
Not surprisingly, they used different types of prompts, devised by their team members (mostly interns and new hires).
The result: Each leader presented what they believed to be an airtight case for their own position, using conflicting (and in many cases inaccurate or outdated) information. At the close of the five presentations, the CEO whispered to the board chair, who nodded in agreement.
“We’re giving you each up to three minutes to succinctly describe your thought process and the steps you took to validate your assumptions,” said the CEO.
The leadership team members sputtered in protest. When asked by one whether they could bring in a team member to respond, the CEO laughed derisively. “Not a chance. You own your position, and you better be able to defend it yourself.”
Turns out, not one person could muster a convincing response, and the decision had to be shelved indefinitely, resulting in costly delays that put the company solidly behind its major competitors in its use of AI.
What’s the problem?
Using AI itself is not the problem. It’s the habitual cognitive offloading we do when we accept our AI agent’s responses without question that’s causing our judgment and critical thinking skills to atrophy, much like our muscles when we stop exercising.
Inexperienced workers aren't the only ones who are increasingly relying on AI to perform certain tasks, but they're the ones who are disproportionately at risk. Digital natives tend to have shorter attention spans and are less likely to question GenAI output because they don’t have the context or knowledge to question whether the data is valid or the conclusions make any sense. In addition, the workplace culture may not have explicit norms that require them to evaluate AI outputs independently.
If and when they do pass on their work to their managers (the so-called “humans in the loop") for review and revision, they’re missing the feedback loop that builds judgment. That's because their managers are often too busy or distracted to sanity-check the work, even when they know it was generated by AI. Or it could be that they think it looks okay as is.
Studies show that higher confidence in GenAI is associated with less critical thinking, while higher self-confidence (confidence in one's own abilities) is associated with more critical thinking. Let that sink in.
In research conducted by the Swiss Business School, survey participants expressed concerns about their growing dependence on AI tools. In another study, by Microsoft and Carnegie Mellon, survey respondents reported using no critical thinking skills whatsoever for 40% of the tasks they delegated to GenAI.
Trouble at the top: Misaligned data, unexamined conclusions
When leaders turn to AI to do their thinking, whether it’s to write reports, analyze data, track performance or make recommendations, they often draw conflicting conclusions. The reasons: They’re using different prompts, sometimes even using different AI tools, with no shared framework for evaluating the quality of AI-generated work, just like the leadership team in our opening scenario.
As a result, leaders are apt to make flawed decisions based on misaligned (and often wildly inaccurate) data.
The organizational blind spot
As organizations race to adopt AI, few seem to be thinking about the cognitive capabilities they need to protect along the way. Human Resources and Organizational Development leaders can be instrumental in facilitating conversations to help their organizations think through answers to questions like these:
- How will we ensure that younger generations have opportunities to develop the cognitive skills and knowledge they’ll need as the more seasoned workers retire?
- What does the “human in the loop” actually mean for our organization, and who should play that role?
- How can we ensure that people have the skills and motivation to apply critical thinking and sound judgment to make well-informed decisions?
- How can leaders play the role of mentors and coaches as their teams use AI for their work?
- Who is responsible for quality control, under what conditions?
The ultimate question that organizations need to be thinking about right now: What practices, habits, or norms can business leaders put into place to keep critical thinking alive in an AI-assisted workplace? If you have answers, I’d love to hear them and share them with others.
Founded in 1994 by Nancy Settle-Murphy, Guided Insights (formerly Chrysalis International) is a facilitation, training and strategic communications consulting firm based in Boxborough, MA, just 35 minutes from Boston and 20 minutes from Worcester.
The company's virtual team of seasoned facilitators, organizational development professionals, trainers and strategists is committed to helping teams achieve desired results more quickly by collaborating more successfully. A special area of focus for the firm is helping virtual teams who work across various cultures, functions and time zones.
www.guidedinsights.com