96.6% Embrace AI - Is It Overhyped?

Do you think AI has been helpful in your business? If yes, you are on the same side as the 96.6% of executives surveyed by Data & AI Leadership Exchange.

While it’s not surprising, it is indeed telling that fewer than 4% of us are AI cynics. With respect to Generative AI, we’re in a place of extreme investor excitement and confidence.

The AI bubble

New AI companies are springing up everywhere, and valuations are well into the billions. The financial mania surrounding GenAI technology is so widespread that some investors have begun wondering about a potential bubble.

As authors Byrne Hobart and Tobias Huber ask, “Would an artificial intelligence bubble be so bad?”

The Economist summarizes their central argument as follows:

They argue that a culture of risk aversion, shaped by aging populations, has led to economic stasis. Financial exuberance may help escape this trap, they suggest, by driving investment in technologies that offer potentially spectacular rewards for the world.

How would you feel if the AI mania was a bubble that eventually burst (think: dot-com bubble, US housing bubble, or more recently, the crypto bubble)?

In AI we trust?

While the exuberance in North America is palpable, the mood among our European counterparts is noticeably cooler.

  • Less than one-third of European respondents were aware of GenAI

  • Only 23% of respondents use GenAI for work at all!

  • Only 35% said their companies promote GenAI use at work

  • Nearly 30% of respondents are “extremely concerned” about deepfakes, misinformation, fake news and potential misuse of data

This cynicism - and, I daresay, aversion to new tech - is perhaps a good thing, too. It pushes us to strengthen the ethical and safety structures around the technology.

Who should impose these safeguards? The government, through regulation as in pharmaceuticals? Or the industry itself, as in entertainment and film?

AI safety

Dr. Brian Anderson, a family doctor from Massachusetts, is collaborating with big players like Mayo Clinic, Microsoft, Google, etc., to build “quality assurance labs” to evaluate AI tools for the healthcare industry.

Essentially, he’s arguing that healthcare can regulate itself. Would you feel comfortable and safe if your physician used an AI tool approved only by an industry body?

It will be exciting to see how these safeguards evolve.

AI for a new software development paradigm

Speaking of healthcare, startup founder Ethan Knox makes an interesting analogy about how we use agentic AI in coding today.

He calls it a “Doctor-Patient strategy” - “similar to how a doctor operates on a patient, but never the other way around.” I first thought it would be horrible if the patient operated on the doctor, but that’s not the point he’s making.

In fact, he’s saying that the code-bot approach to agentic AI treats the codebase as a single-layer, linear, one-dimensional thing when, in reality, it’s quite the opposite. As a result, we end up with “code rot.”

In response, he recommends writing GenAI-able code; the details are in his blog post.

When we speak of paradigm-shifting AI, is this what we mean?

Here’s wishing you a very happy new year. Until next time,

Best,
Anshuman Pandey

P.S. Llama 3.3 70B is now available on Tune Chat