AI Needs Data, Not Hype

It’s been a good week for us at Tune AI. I’m delighted to share with you that Tune has been recognized in the enterprise AI agents category of CB Insights Game Changers 2025.
The recognition from the New York-based technology research firm is a strong validation of our belief that enterprise leaders need a flexible platform-plus-service approach to innovate with AI.
Read more about Tune AI’s approach and the CB Insights report here
New tool: Tried Notebook LM yet?
I was blown away when I tried Google’s new AI research assistant, Notebook LM. At its core, like most modern AI tools, it takes in multimodal input and creates summaries.
What’s actually fantastic is the audio overview feature. Notebook LM turns your sources into natural-sounding, engaging discussions that are an absolute game changer for content creation.
Try it out yourself
New challenge: GenAI is all great, but have you got useful data?
The State of AI study by Appen suggests that the biggest challenge you’ll face in implementing GenAI is sourcing and managing high-quality data.
Specifically, the study finds that:
- Custom data will be the primary method of training/fine-tuning models
- Data accuracy has dropped 9% since 2021 (a matter of critical concern)
- Data process bottlenecks are increasing, creating issues throughout the AI lifecycle
- To overcome these challenges, CTOs are demanding human-in-the-loop solutions over autonomous ones
Read more of the report here: Appen’s 2024 State of AI report
If you’re thinking about creating your own human-in-the-loop AI solutions, let me know if I can help. Hit reply and we’ll set up some time to discuss it.
New perspective: How to handle the uncanny valley of Generative AI?
Much of the discussion around GenAI has been about how to use it. In fact, I myself am most excited when there’s a new tool to try or a use case to solve.
But this article on AI’s uncanny valley in the MIT Technology Review made me sit up and rethink.
Ken Mugrage and Srinivasan Raguraman examine our mental models around AI and how they might inadvertently lead us to use it the wrong way, past a point of no return. They suggest that, as with other design antipatterns, subtle differences in context, assumptions, or mental models can change one’s experience or perception of an LLM.
They point to Professor Ethan Mollick’s argument that AI shouldn’t be understood as good software but instead as “pretty good people.”
This piece makes fantastic reading, taking our minds off the practicalities of use cases, efficiency, and ROI and toward the larger existential question of what AI does to us as humans.
Read more here: Reckoning with generative AI’s uncanny valley
New approach: Have you considered on-prem AI implementation?
AI and other modern technologies are inherently associated with the cloud. CTOs think of AWS or Azure when experimenting with generative AI. That’s only fair, given that the last few decades have delivered extraordinary benefits from the scalability, convenience, and cost advantages of the cloud.
But what about security? In GenAI, security is the cornerstone of success, and I believe that on-prem deployment offers a better alternative.
As promised in last week’s newsletter, I wrote about it in our latest blog post here.
I hope you enjoyed reading about the latest in GenAI as much as I did. Next week, we’ll discuss governance.
Best,
Anshuman Pandey
P.S. Llama 3.2 Vision Language model is now available on Tune Chat