30% of GenAI Projects Fail

“Let AI handle the menial tasks so humans can focus on complex problems.” That has always been the pitch for automation and AI tools, hasn’t it?
But who has actually proved this works? Researchers at Microsoft and Carnegie Mellon put it to the test, and the results should give you pause.
Higher AI use = Lower critical thinking
In essence, the study shows that “higher confidence in GenAI was associated with less critical thinking.” This means that if your employees trust AI to get the work done, they aren’t inclined to waste their cognitive energy on critically evaluating it. In the long run, they might lose out on necessary problem-solving skills altogether.
“Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved,” they note.
That gives us much to consider about how we use AI in the workplace (beyond efficiency and productivity metrics).
30% of GenAI projects are doomed to be jettisoned
At a recent event, Meta India’s Gaurav Geet Singh stated that “30% of all generative AI projects are expected to be discontinued due to unmet outcomes.” He was likely drawing on Gartner’s report from last year, which predicted much the same.
So how do you avoid ending up in that 30%? Gartner identifies four critical issues.
Poor data quality: Get your data scientists or GenAI vendors to ensure the data you have is of high quality and suitable for the use case you’ve chosen.
Inadequate risk controls: Set up robust security and governance systems for your AI projects. Some Tune AI thinking is here.
Escalating costs: Calculate the total cost of ownership (TCO) before you embark on experiments. A primer on AI costs is here. Talk to us to create your own personalized TCO.
Unclear business value: Set your goals right, sign up internal AI champions, design training programs, monitor use and sustain engagement. Let us help! Hit reply and I’ll find a time with you to discuss this.
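To make the TCO point above concrete, here is a minimal sketch of how you might tally a GenAI project’s total cost of ownership before committing. All line items and dollar figures below are hypothetical placeholders, not figures from Gartner or Tune AI; substitute your own vendor quotes and usage forecasts.

```python
# Hypothetical GenAI TCO estimate: one-time costs plus recurring
# costs over the planned project horizon. Every figure is a placeholder.

def estimate_tco(monthly_costs: dict[str, int], months: int,
                 one_time_costs: dict[str, int]) -> int:
    """Sum one-time costs plus recurring costs over `months`."""
    return sum(one_time_costs.values()) + months * sum(monthly_costs.values())

recurring = {              # USD per month (hypothetical)
    "model_api_usage": 4_000,
    "vector_db_hosting": 800,
    "monitoring_and_eval": 500,
}
one_time = {               # USD (hypothetical)
    "data_cleanup": 15_000,
    "fine_tuning": 10_000,
    "staff_training": 5_000,
}

print(estimate_tco(recurring, months=12, one_time_costs=one_time))  # 93600
```

Even a back-of-the-envelope model like this forces the conversation about recurring inference costs, which tend to dwarf the one-time setup spend as usage scales.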
Lessons from the AI Summit in Paris

Venue of the AI Action Summit, Paris
For those of us interested not just in using AI but also in understanding its larger impact on the world, the Artificial Intelligence Action Summit held in Paris was an eye-opener.
Not because any big decisions were made, but because everyone—from startups and business leaders to politicians and activists—is still thinking about how AI is changing life.
Countries are realizing that they can no longer afford to focus only on AI’s negative implications (Europe wants to cut back on regulation)
Europe and the US have major differences in how to regulate AI (the US thinks even cut-back versions are too much)
DeepSeek has created both hushed whispers around national security and bold claims about efficient and sustainable AI
“Public interest projects,” like Current AI, are being set up as a collaboration among EU governments and private companies
Suffice it to say, governments worldwide are watching closely and want to both enable and regulate AI effectively. However, as Kevin Roose of the New York Times writes, it feels like “watching policymakers on horseback trying to install seatbelts on a passing Lamborghini.”
We can only wait and carefully watch. In every edition of this newsletter, I’ll share what I’m seeing in the world of AI.
Best,
Anshuman Pandey
P.S. DeepSeek R1 is now available on Tune Chat