The Hidden Costs of AI

OpenAI recently launched a 1-800 number that users can call (even from a landline) to access its GenAI tools. Texting via WhatsApp works, too. For those of us in cities with smartphones, the utility may seem negligible.
But it signals confidence in two things:
- Advancement in voice-to-text (imagine a GenAI call center, for instance; see the sketch after this list)
- Reach without stable Internet (think remote customers, traveling salespeople, etc.)
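To make the call-center idea concrete, here is a minimal sketch of a transcribe-then-respond loop using the OpenAI Python SDK. The file name, model choices and system prompt are illustrative assumptions, not a prescribed pipeline:

```python
# A minimal sketch of a "GenAI call center" loop, assuming the OpenAI Python
# SDK (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe a recorded caller utterance (voice-to-text).
with open("caller_audio.wav", "rb") as audio_file:  # illustrative file name
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Hand the transcript to a chat model to draft the agent's reply.
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a polite support agent."},
        {"role": "user", "content": transcript.text},
    ],
)

print(reply.choices[0].message.content)
```

A real deployment would stream audio and add text-to-speech for the reply, but those two API calls are the core of it.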
That’s not all. As part of its 12 days of shipmas, OpenAI has also launched ChatGPT Search, the collaborative writing/coding Canvas, Sora (finally!) and a new $200-per-month Pro tier.
Affordable supercomputer
On the hardware front, NVIDIA launched the Jetson Orin Nano Super Developer Kit. Small enough to fit in your palm, this $249 computer delivers up to 67 INT8 TOPS and 102 GB/s of memory bandwidth, a substantial jump over the original Orin Nano.

This is a great starting point, especially if you’re looking to take your GenAI on-prem. Write to me if you’d like to discuss whether it suits your needs.
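As a taste of what on-prem looks like in practice, here is a minimal sketch that queries a model served entirely on the device, assuming Ollama is installed on the Jetson and a small model has already been pulled; the model name is illustrative:

```python
# A minimal sketch of querying a model served on-device, assuming an Ollama
# server running on the Jetson. Ollama exposes a local REST API on port
# 11434 by default; "llama3.2" is an illustrative model choice.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "Summarize our on-prem deployment options in two sentences.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```

Nothing in that round trip leaves your network, which is the whole point of taking inference on-prem.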
Can every employee manage their AI?
Every employee with an AI assistant is - in a way - a manager. They need to train, review, fact-check and update the AI. This can fundamentally change the culture of your organization.
Imagine a junior programmer writing massive amounts of bad code and bringing it all to seniors for review! To avoid that bottleneck, all employees must have the critical skills to fact-check the AI.
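One concrete habit that helps: treat AI-generated code like any other untrusted contribution and gate it behind tests before a human ever reviews it. A minimal sketch, where the function and its edge cases are hypothetical stand-ins:

```python
# A minimal sketch of gating AI-generated code behind tests before review.
# `parse_price` stands in for any helper an assistant drafted; the cases
# are hypothetical edge cases a human reviewer would insist on.

def parse_price(text: str) -> float:
    """AI-drafted helper: extract a dollar amount like '$1,299.00'."""
    return float(text.strip().lstrip("$").replace(",", ""))

def test_parse_price():
    assert parse_price("$1,299.00") == 1299.0
    assert parse_price("  $5  ") == 5.0

def test_parse_price_rejects_garbage():
    # An assistant's first draft often misses failure cases like this one.
    try:
        parse_price("call for pricing")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for non-numeric input")

if __name__ == "__main__":
    test_parse_price()
    test_parse_price_rejects_garbage()
    print("all checks passed")
```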
GenAI can create vastly more content than humans alone. If everything hasn’t already been written about, it soon will be, by ChatGPT. How, then, will your writers come up with genuinely new ideas? Your employees need the skills to innovate and transform.
Scammers and miscreants will find newer, more realistic ways to operate. Your employees need awareness of, and caution about, emerging threats.
In essence, GenAI can do the job. Whether it does it right or not is up to you!
But aren’t the LLM makers managing their AI?
Well, apparently not. “Developers still understand little about how their general-purpose AI models operate,” found the recent International Scientific Report on the Safety of Advanced AI.
Sam Altman of OpenAI appears to accept that as well. “We certainly have not solved interpretability,” he said.
One partial solution could be open source. Being able to inspect an open-weight model and fine-tune it yourself mitigates the opacity to a considerable extent. Until we solve the org-culture problem AI creates, it’s best to be critical, aware and cautious.
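A minimal sketch of that inspectability, assuming the Hugging Face transformers library; the checkpoint name is illustrative (and may require access approval), and any open-weight causal LM works the same way:

```python
# A minimal sketch of inspecting an open-weight model locally, assuming the
# Hugging Face `transformers` library is installed. The checkpoint name is
# illustrative; any open-weight causal LM works the same way.
from transformers import AutoModelForCausalLM

model_name = "meta-llama/Llama-3.2-1B"  # illustrative open-weight checkpoint

model = AutoModelForCausalLM.from_pretrained(model_name)

# With the weights on disk you can examine the architecture and parameters
# directly, something a closed API never exposes.
print(model.config)  # layer counts, hidden sizes, vocabulary size, etc.
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")

# The same local copy is what you would fine-tune on your own data.
```

This does not make the model interpretable in the research sense, but it puts inspection and adaptation in your hands rather than the vendor’s.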
Read the entire International Scientific Report on the Safety of Advanced AI here.
Wishing you happy holidays 🎄 and I’ll be back next week!
Best,
Anshuman Pandey
P.S. Llama 3.3 70B is now available on Tune Chat