When GenAI Gets It Wrong: Who’s Responsible?

When we talk about GenAI, automation gets a lot of attention. What doesn't: personalization. That's a mistake.

What can AI teach us?

Quite a lot, it turns out. Harvard researchers found that students learning with AI covered more material. They also “self-reported significantly more engagement and motivation to learn.”

This has significant implications for organizational learning as well. On-the-job, contextual, AI-based training can reap huge benefits: it can flatten the learning curve, reduce errors, and minimize training costs.

What’s the biggest learning challenge in your organization?

What counts as reasonable?

At the other end of the spectrum is what AI defines as reasonable. While we're not yet at the stage of AI enslaving humanity, as Yuval Noah Harari worries, the dangers are closer than we think.

Last week, two families sued an AI chatbot company for posing a clear and present danger. Allegedly, the chatbot recommended that teenagers kill their parents for restricting screen time. The worst part is that the chatbot called it a “reasonable response.”

For business leaders, this is a serious concern when buying third-party AI tools. What if your customer service chatbot thinks it's reasonable to terminate a contract because the client requested a discount? 🤔

What’s safe and what’s not?

The Future of Life Institute, a nonprofit that aims to reduce global catastrophic risks, conducted a safety study of the GenAI landscape. Spoiler alert: no model is entirely safe.

Meta scored an F. OpenAI and DeepMind got a D+. Anthropic scored the highest at C, which means they’re not fully safe, just the safest of the lot. You can read the entire report here.

What could you do about it? Set up additional enterprise governance structures for the safety, security and well-being of all your stakeholders.

Google’s Gemini 2.0

One thing is always true of the AI space: rapid evolution. Even as concerns emerge, companies are launching AI products left, right, and centre.

I wrote about Veo, Google's video GenAI tool, last week. They've topped that off with Gemini 2.0. Their “experimental everything app,” Astra, is now ready for demos and trials. With that, Google has GenAI tools for Internet search, coding, gaming, and image generation, as well as a new quantum computing chip.

As I read about the possibilities and risks around GenAI, I realize that the responsibility of wielding this power rests with every single one of us. We, at Tune, take it seriously.

I hope that’s given you something to think about. Wishing you happy holidays 🎄 and I’ll be back next week!

Best,
Anshuman Pandey

P.S. Llama 3.3 70B is now available on Tune Chat