Is the Giant AI Hype Machine Starting to Slow Down, or Just Taking a Short Break?


I’VE DONE MY BEST to stay away from the AI hype machine.

It hasn’t been easy. That’s because so many influential people have jumped on the Artificial Intelligence bandwagon, fueling both giddy excitement and, sometimes, fears of gloom and doom.

But don’t look to me for any great insight on that. I’m a lower-your-expectations kind of guy, and despite all the AI hype, I believe it is probably going to be a lot less earth-shattering than many people think.

That’s what being The Skeptical Guy does to you.

We’ve also reached the point in AI’s development where more and more cautionary voices are pushing back against the over-the-top AI hype.

Making the point about AI hype

Some articles from the past week seem to make this point:

  • Generative AI may be creating more work than it saves (From ZDNet)
    • Key insight: “There’s common agreement that generative artificial intelligence (AI) tools can help people save time and boost productivity. Yet … the back-end work to build and sustain large language models (LLMs) may need more human labor than the effort saved up front. … That’s the word from Peter Cappelli, a management professor at the University of Pennsylvania Wharton School, who spoke at a recent MIT event. On a cumulative basis, generative AI and LLMs may create more work for people than alleviate tasks. LLMs are complicated to implement, and ‘it turns out there are many things generative AI could do that we don’t really need doing,’ said Cappelli.”
  • Press Pause on the Silicon Valley Hype Machine  (From The New York Times)
    • Key insight: “The question (today) isn’t really whether AI is too smart and will take over the world. It’s whether AI is too stupid and unreliable to be useful. … It feels like … AI is not even close to living up to its hype. In my eyes, it’s looking less like an all-powerful being and more like a bad intern whose work is so unreliable that it’s often easier to do the task yourself. That realization has real implications for the way we, our employers and our government should deal with Silicon Valley’s latest dazzling new, new thing.”
  • Google is Playing a Dangerous Game with AI Search  (From The Atlantic)
    • Key insight: “Google has introduced a new feature that effectively allows it to play doctor … the search giant is rolling out its ‘AI overview’ feature to everyone in the United States … Many Google searches will return an AI-generated answer right underneath the search bar, above any links to outside websites. This includes questions about health. … But this is still a chatbot. In just a week, Google users have pointed out all kinds of inaccuracies with the new AI tool. … Health answers have been no exception; a number of flagrantly wrong or outright weird responses have surfaced. … (like) rocks are safe to eat.” 

The New York Times article takes the biggest shot at the never-ending hype over Artificial Intelligence, but the others also undercut the notion that AI is some sort of miracle technology that is going to change work and life as we know it.

Will AI actually get better anytime soon?

Peter Cappelli, the Wharton Business School professor known for his thoughtful insights and low-key delivery, really makes this case in the ZDNet story. It quotes him as saying:

“While AI is hyped as a game-changing technology, ‘projections from the tech side are often spectacularly wrong,’ he pointed out. ‘In fact, most of the technology forecasts about work have been wrong over time.’ He said the imminent wave of driverless trucks and cars, predicted in 2018, is an example of rosy projections that have yet to come true.”

And Julia Angwin, the founder of Proof News who also writes about tech policy, makes this point in The New York Times:

“Consider for a moment the possibility that perhaps A.I. isn’t going to get that much better anytime soon. After all, the A.I. companies are running out of new data on which to train their models, and they are running out of energy to fuel their power-hungry A.I. machines. Meanwhile, authors and news organizations (including The New York Times) are contesting the legality of having their data ingested into the A.I. models without their consent, which could end up forcing quality data to be withdrawn from the models.

Given these constraints, it seems just as likely to me that generative A.I. could end up like the Roomba, the mediocre vacuum robot that does a passable job when you are home alone but not if you are expecting guests.

Companies that can get by with Roomba-quality work will, of course, still try to replace workers. But in workplaces where quality matters — and where workforces such as screenwriters and nurses are unionized — A.I. may not make significant inroads.”

Who do you believe?

HERE’S MY TAKE: Is A.I. overhyped? It sure feels like it, but that doesn’t mean that Artificial Intelligence isn’t going to have a significant impact on work and life.

The notion that A.I. may be overhyped is a fairly new one, as Business Insider noted last month in an article titled It looks like it could be the end of the AI hype cycle. Although the article goes back and forth about the promise of AI, it does quote an Artificial Intelligence critic who makes the case that “the verdict is still out on whether the companies behind foundation AI models dependent on expensive chips can turn their products into viable, profitable businesses. And last month offered several reasons to raise doubt.”

But for every story that focuses on out-of-control AI hype, there are many more that dig into the potential promise of AI, as Fortune did last week in an interview with Josh Bersin, a well-known technology expert who is pretty bullish on the future of AI.

So who do you believe when it comes to AI — the optimists who frequently fuel the hype, or the skeptical types who think it’s all been overblown and will probably be less than meets the eye?

Maybe you’re like me and fall somewhere in the middle. My guess is that we’ll have a better sense of where AI is going sometime in 2024. That’s about as far into the future as I can see, but maybe you have a better view.

Other trends and insights 

  • 82% of 2024 college graduates are confident they will get a job (From Benefitsnews.com)
  • Job openings have fallen more than 30% from their peak in March 2022 (From HiringLab.org)
  • The future of the gig economy in California rests in the hands of the state’s Supreme Court (From CalMatters.org)
  • Why you should rethink return-to-office mandates (From FastCompany.com)
  • Philadelphia Mayor issues return-to-office order for thousands of city employees (From WHYY.org)
  • Millennials call it “quiet vacationing,” but it’s really remote work gone wrong — and it’s CEOs’ worst nightmare (From Fortune.com)
  • Employees are cheating on workplace drug tests at a record clip (From CBSNews.com)

And your latest dose of AI news … 

  • Study finds that employers appear more likely to offer interviews, higher pay to those with AI skills (From HRDive.com)
  • Is AI going to be the end of SEO as we know it? (From Build Mode by A.Team)
  • How chief learning officers are upskilling employees with generative AI (From WorkLife.news)
  • U.S. intelligence agencies were using generative AI 3 years before ChatGPT was released — but they still think it should be treated as a “crazy, drunk friend” (From Fortune.com)
  • OpenAI, WSJ Owner News Corp Strike Content Deal Valued at Over $250 Million (From WSJ.com)
  • Meta’s new AI council is composed entirely of white men (From TechCrunch.com)
  • OpenAI says its ChatGPT voice isn’t a Scarlett Johansson rip-off. Johansson disagrees (From FastCompany.com)
  • OpenAI Releases Former Staffers From Non-Disparagement Clauses (From Bloomberg.com)

Loyal Readers: I’ve been writing this weekly wrap-up for 20 years — from Workforce.com to TLNT.com to Fuel50 and now here on The Skeptical Guy. It would help to know what you think, so email comments to me at johnhollon@yahoo.com.
