AGI Dream ‘Meaningless’: Godfather Of AI


‘We could well be in a world where AI already has these dangerous capabilities and that is why there is a need for careful scientific evaluation to determine the risk and capabilities of these machines.’

Kindly note that this illustration, generated using ChatGPT, has been posted only for representational purposes.

 

Key Points

  • Yoshua Bengio said AGI may not be the next big breakthrough due to AI’s ‘jagged intelligence’.
  • He warned AI can be powerful in some ways, weak in others, and dangerous in the wrong hands.
  • Deepfakes, scams, and non-consensual AI images are on the rise, fuelling safety concerns.

Yoshua Bengio, the Canadian computer scientist often hailed as the godfather of artificial intelligence, on Friday said it would be better to forget artificial general intelligence (AGI) as the next big thing if modern systems continued to have jagged intelligence: extremely good at many complex tasks but strikingly poor at many simple ones.

Yoshua Bengio on AGI Reality

“We can escape this vision of an AGI moment if AI continues to have these jagged capabilities.

“We could well be in a world where AI already has these dangerous capabilities and that is why there is a need for careful scientific evaluation to determine the risk and capabilities of these machines,” he said at the India AI Impact Summit.

Academics, researchers, and policymakers have cautioned against the AI juggernaut, saying that while its prospects remain exciting in many fields, especially scientific research, drug discovery, and cancer treatment, it is also prone to hallucinations, biases, and botched outcomes.

Jagged Intelligence Concerns

“Maybe a few years ago, it would have been exciting to think of AGI reaching human levels, but now it is meaningless as we are going to have things that will be extremely stupid and weak in some ways and dangerous in the wrong hands.

“Businesses also need to determine whether AI is good for what they are trying to do,” Bengio said while launching the International AI Safety Report.

Google DeepMind AGI Timeline Debate

His comments come days after Google DeepMind CEO and co-founder Demis Hassabis said the era of AGI is still about five to eight years away, and that systems need further training and the capability to learn on their own.

While general-purpose AI capabilities have continued to improve, especially in mathematics, coding, and autonomous operation, some of the dangers of these models and systems are already out in the open. Deepfakes are on the rise, along with fraud and scams.

Bengio expressed particular concern over the volume of AI-generated non-consensual intimate images, which affect women and girls, an issue that drew public attention to Elon Musk's Grok last month.

He also warned that closed models are no more immune to attacks than open-source ones, and that guardrails are still not up to the mark.

Deepfake and Misuse Risks

The AI Safety Report added that AI systems can discover software vulnerabilities and write malicious code.

‘In one competition, an AI agent identified 77 per cent of the vulnerabilities present in real software.

‘Criminal groups and state-associated attackers are actively using general-purpose AI in their operations.

‘Whether attackers or defenders will benefit more from AI assistance remains uncertain,’ it said.

AI Safety and Job Loss Fears

The report also said that general-purpose AI will likely automate a wide range of cognitive tasks, especially in knowledge work, with early evidence showing no effect on overall employment, but some signs of declining demand for early-career workers.

Bengio also expressed concern that governments are not doing enough to address job losses due to AI.

Feature Presentation: Ashish Narsale/Rediff


