
Artificial Intelligence

Huge AI funding leads to hype and 'grifting', warns DeepMind's Demis Hassabis

By Sanju Talukder · Published 2 months ago · 3 min read
Demis Hassabis © Samuel de Roman/Getty Images

British AI pioneer says the billions of dollars being poured into start-ups is obscuring scientific progress in the field.

The surge of money flooding into artificial intelligence has resulted in some crypto-like hype that is obscuring the incredible scientific progress in the field, according to Demis Hassabis, co-founder of DeepMind.

The chief executive of Google's AI research division told the Financial Times that the billions of dollars being poured into generative AI start-ups and products "brings with it a whole attendant bunch of hype and maybe some grifting and some other things that you see in other hyped-up areas, crypto or whatever".

"Some of that has spilled over into AI, which is a bit unfortunate. And it clouds the science and the research, which is phenomenal," he added. "In a way, AI's not hyped enough but in some senses it's too hyped. We're talking about all sorts of things that are just not real."

The launch of OpenAI's ChatGPT chatbot in November 2022 sparked an investor frenzy as start-ups raced to develop and deploy generative AI and attract venture capital funding.

VC groups invested $43.5bn in 2,500 AI start-up equity rounds last year, according to market analysts CB Insights.

Public market investors have also rushed into the so-called Magnificent Seven technology companies, including Microsoft, Alphabet and Nvidia, that are spearheading the AI revolution. Their rise has helped to propel global stock markets to their strongest first-quarter performance in five years.

But regulators are already scrutinising companies for making false AI-related claims. "One shouldn't greenwash and one shouldn't AI wash," said Gary Gensler, chair of the US Securities and Exchange Commission, in December.

In spite of some of the misleading hype about AI, Hassabis, who last week received a knighthood for services to science, said he remained convinced that the technology was one of the most transformative inventions in human history.

"I think we're only scratching the surface of what I believe is going to be possible over the next decade-plus," he said. "We're at the beginning, maybe, of a new golden era of scientific discovery, a new Renaissance."

The proof of concept for how AI could accelerate scientific research, he said, was DeepMind's AlphaFold model, released in 2021.

AlphaFold had helped predict the structures of 200mn proteins and was now being used by more than 1mn biologists around the world. DeepMind is also using AI to explore other areas of biology and accelerate research into drug discovery and delivery, materials science, mathematics, weather prediction and nuclear fusion technology. Hassabis said his goal had always been to use AI as the "ultimate tool for science".

DeepMind was founded in London in 2010 with the mission to achieve "artificial general intelligence" that matches all human cognitive capabilities. Some researchers have suggested that AGI may still be decades away, if attainable at all.

Hassabis said that one or two more critical breakthroughs were needed before AGI was reached. But he added: "I wouldn't be surprised if it happened in the next decade. I'm not saying it's definitely going to happen but I wouldn't be surprised. You could say about 50 per cent chance. And that timeline hasn't changed much since the start of DeepMind."

Given the potential power of AGI, Hassabis said it was better to pursue this mission through the scientific method rather than the hacker approach favoured by Silicon Valley. "I think we take a more scientific approach to building AGI because of its significance," he said.

The DeepMind founder advised the British government on the first global AI Safety Summit, held at Bletchley Park last year. Hassabis welcomed the continuing international dialogue on the subject, with subsequent summits due to be held by South Korea and France, and the creation of UK and US AI safety institutes.

"I think these are important first steps," he said. "But we've got a lot more to do and we need to hurry because the technology is exponentially improving."

Last week, DeepMind researchers released a paper outlining a new methodology, called SAFE, for reducing the factual errors, known as hallucinations, generated by large language models such as OpenAI's GPT and Google's Gemini. The unreliability of these models has led to lawyers making submissions with fictitious citations and deterred many companies from using them commercially.

Hassabis said DeepMind was exploring different ways of fact-checking and grounding its models by cross-checking responses against Google Search or Google Scholar, for example.

He compared this approach to the way its AlphaGo model had mastered the ancient game of Go: by double-checking its output, a large language model could also verify whether a response made sense and make adjustments. "It's a little bit like AlphaGo when it's making a move. You don't just spit out the first move that the network thinks about. It has some thinking time and does some planning," he said.

When challenged with authenticating 16,000 individual facts, SAFE agreed with crowdsourced human annotators 72 per cent of the time, but was 20 times cheaper.
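The verify-then-adjust idea described above can be sketched very roughly in code. This is a hypothetical toy illustration, not DeepMind's SAFE implementation: the function names are invented, and a small in-memory set of known statements stands in for a real lookup against Google Search or Google Scholar.

```python
# Toy sketch of a grounding loop: split a model response into individual
# claims, check each against an external source, and keep only the
# claims that the source corroborates. The in-memory "index" below is a
# stand-in for a real search backend.

SEARCH_INDEX = {
    "alphafold was released in 2021",
    "deepmind was founded in london in 2010",
}

def extract_claims(response: str) -> list[str]:
    """Naively treat each sentence as one atomic claim."""
    return [s.strip() for s in response.split(".") if s.strip()]

def is_supported(claim: str) -> bool:
    """Stand-in for querying a search backend for corroboration."""
    return claim.lower() in SEARCH_INDEX

def grounded_response(response: str) -> list[str]:
    """Return only the claims the external source supports."""
    return [c for c in extract_claims(response) if is_supported(c)]

# The hallucinated second claim is filtered out; only the first survives.
claims = grounded_response(
    "AlphaFold was released in 2021. DeepMind invented the telephone."
)
```

A production system would extract claims with a language model rather than sentence splitting, and score search results for support rather than demanding an exact match, but the control flow, generate, decompose, verify, revise, is the same.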
