Thinking fast, slow and artificial - is cognitive surrender really taking over our brains when we face AI?
The recently published paper by two researchers from the Wharton School at the University of Pennsylvania, US - titled "Thinking - Fast, Slow and Artificial - How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender" - promises to introduce a new concept of "cognitive surrender" into the neuroscience books and to spook policy makers and others trying to contain the unruly, even erratic, rise of AI in various parts of social life.
Irene Petre
2/18/2026 - 4 min read


"Thinking - Fast, Slow and Artificial - How AI is reshaping Human Reasoning and the Rise of Cognitive Surrender" is by no means, an excellent paper, still warm off the presses - published just last week by the Wharton School of the University of Pennsylvania, US. It coins the concept of “cognitive surrender” in the face of widespread AI use and unchallenged AI generated outputs, arguing that the vast majority of people seem to adopt AI results with minimal or no verification, although it is now well known that AI tools, especially LLMs can often produce hallucinated answers (hallucination rates vary by model, provider, type of questions asked and how they are phrased and many other variables): https://www.visualcapitalist.com/sp/ter02-ranked-ai-hallucination-rates-by-model/.
Some key take-aways:
Using 1,372 subjects in an adapted Cognitive Reflection Test (CRT), the researchers show that, when given the choice, participants chose to consult AI with no additional verification in over 50% of cases
When AI was correct, test accuracy increased by 25% but
When AI produced incorrect answers (the researchers purposefully included faulty AI answers to test if participants reactivate deliberative processing and override faulty AI outputs), test accuracy decreased by 15%
Participants with higher trust in AI and lower need for cognition and fluid intelligence showed greater surrender to AI reasoning
However, when participants were given incentives and feedback, their test accuracy increased in all cases and more of them chose to verify and correct AI outputs (a rough illustrative sketch of these accuracy effects follows below).
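To make the direction of these effects concrete, here is a minimal back-of-the-envelope sketch in Python. It is our own illustration, not taken from the paper: the function expected_accuracy and every parameter value (consultation rate, AI correctness rate, verification rate, unaided baseline) are hypothetical placeholders chosen only to mirror the qualitative pattern above.

# Illustrative sketch only - not the study's model or data.
# Assumes unverified AI answers are adopted as-is, while verified or unaided
# items fall back on the participant's own (hypothetical) baseline accuracy.

def expected_accuracy(p_consult: float,     # share of items where AI is consulted
                      p_ai_correct: float,  # probability the AI answer is correct
                      p_verify: float,      # probability the AI answer is checked
                      baseline: float) -> float:
    """Expected accuracy when unverified AI answers are taken at face value."""
    adopted_blindly = p_consult * (1.0 - p_verify) * p_ai_correct
    own_reasoning = (1.0 - p_consult * (1.0 - p_verify)) * baseline
    return adopted_blindly + own_reasoning

# Hypothetical numbers: 60% consultation rate, almost no verification,
# against an assumed unaided baseline of 0.55.
baseline = 0.55
print(f"AI mostly right, unverified : {expected_accuracy(0.6, 0.95, 0.05, baseline):.2f}")
print(f"AI mostly wrong, unverified : {expected_accuracy(0.6, 0.20, 0.05, baseline):.2f}")
print(f"AI mostly wrong, half checked: {expected_accuracy(0.6, 0.20, 0.50, baseline):.2f}")

Under these assumed numbers, accuracy rises well above the unaided baseline when the AI is mostly right, falls below it when the AI is mostly wrong and goes unverified, and recovers part of the loss as soon as even half of the AI answers are checked - the same qualitative pattern the study reports.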
Building on D. Kahneman's famous book "Thinking, Fast and Slow", which discusses the two core human thinking systems (System 1 - fast, intuitive processing; System 2 - slow, deliberative and analytical), the authors propose a System 3 - artificial cognition that operates outside the brain and can introduce novel cognitive pathways. As their study shows, it can lead to cognitive surrender of Systems 1 and 2, and it can also boost confidence, even after AI errors.
Despite sounding alarm bells about unwise, unruly use of AI tools, the paper at the same time manages to quietly, and somewhat imprudently, lift AI reasoning (System 3) above human thinking: not only is System 2 more advanced than System 1 (so it would follow that System 3 is the most advanced of the three), but the paper also claims that, through intended or unintended design, System 3 has the power to subordinate or switch off human judgement, agency and accountability. The authors warn that agentic AI tools are no longer merely assisting decision-making (as a GPS would) but have become decision makers.
Whilst this study captures a widespread social phenomenon very well, it also has limitations in our view. There is evidence that performance on adapted Cognitive Reflection Tests correlates strongly with numeracy, acquired knowledge and mathematical ability (which are different from reflective ability), and these abilities do not tend to characterise the majority of the population, not even in developed countries with strong education systems. Therefore, when faced with such tests and given the possibility to use props such as LLMs, most people will use the prop (the AI assistant) without questioning its answer - because they lack the knowledge to question it, or because they feel unsure about their own mathematical abilities in what look like maths-based tests.
Another critique is that the study fails to recognise that many participants who chose to use AI assistants to help answer some of the CRT questions were using them much like a traditional computing tool (a pocket calculator): you key in an input (a mathematical operation or function in the case of the calculator, a question in the case of the AI assistant) and then take the output (the computational result or the AI answer) at face value, as generated by the tool, without questioning its correctness.
This means that when people use AI in this way, they not only place trust in the AI's ability to generate the correct answer but also relate to it as a tool - many of them without consciously intending to delegate decision-making authority to the AI "tool", or without realising that they do in fact delegate such authority and agency to it. Whether people intend to delegate authority to AI assistants, and whether they realise they are doing so, is likely the bigger question. The answer will likely depend on the social group people belong to, their age and level of education, and the purpose of their use of the AI tool (e.g. for daily operative work tasks - of their own free will or mandated by their employer - for technical research, for entertainment, for purely one-off utilitarian purposes such as passing a specific exam, or for companionship and other social reasons).
In the end we should not forget that we are all inherently "lazy", in the sense that we are programmed to seek the minimum effort. The Principle of Least Effort is known to characterise human behaviour all over the world: it posits that people naturally choose the course of action that requires the least perceived effort (physical, mental and emotional) to achieve a goal. This human trait may very well prove to be our Achilles' heel in our messy relationship with AI tools, the more some AI developers and designers try to exploit inherent human biases and foibles. The key remains for users to be well informed about new technologies and how they work, and for producers of AI tools to follow certain design ethics - and for that, regulators and academia sometimes really need to work more closely with the industry.
