Eric Schmidt warns that AI models are vulnerable to hacking, comparing their potential dangers to those of nuclear weapons. Experts such as Geoffrey Hinton share these concerns, warning that AI systems could develop internal languages beyond human understanding.
Eric Schmidt, former CEO of Google, has expressed concerns about the vulnerability of artificial intelligence (AI) models to hacking. During the Sifted Summit last week, Schmidt discussed potential dangers AI could pose, comparing them to nuclear weapons. He highlighted the risk of AI models being manipulated, stating that they can be hacked to bypass their safety measures. "There's evidence that you can take models, closed or open, and you can hack them to remove their guardrails," he said.
Schmidt is not alone in his concerns about AI's future. Geoffrey Hinton, often referred to as the 'godfather of AI', also voiced his apprehensions in August. He warned that if AI models develop their own languages, it could become difficult for humans to understand their intentions. Currently, most AI systems operate in English, which allows developers to monitor their processes. However, Hinton cautioned that this might change.

Potential Risks and Concerns
In April, a study by Google DeepMind raised alarms about Artificial General Intelligence (AGI), predicting its emergence by 2030. The research suggested AGI could pose existential threats capable of "permanently destroying humanity". The study categorised these risks into misuse, misalignment, mistakes and structural risks. It emphasised the severe harm AGI might cause due to its significant impact.
Schmidt also noted that major companies train their AI models not to answer harmful questions. Despite these precautions, he said, such safeguards can still be undone. "There's evidence that they can be reverse-engineered," he explained, pointing to instances where this has already occurred.
AI Language Development
Hinton's concerns stem from the possibility of AI developing internal languages for communication among themselves. He remarked on the potential challenges this could present if humans are unable to comprehend these new languages. "Now it gets more scary if they develop their own internal languages for talking to each other," Hinton stated.
The discussion of AI's potential dangers continues to grow among Silicon Valley leaders. As the technology advances rapidly, experts like Schmidt and Hinton urge caution and vigilance in managing its development and deployment. Robust safety measures and clear ethical guidelines, they argue, will be essential to navigating this evolving landscape responsibly.