
Tech Tonic | Mark Zuckerberg makes absolute sense, but are we close to an AI God?

The voices are loud and clear. OpenAI CEO Sam Altman is confident that human-esque artificial general intelligence, or AGI, will be ready for primetime in the “reasonably close-ish future.” Scientists at Google DeepMind believe there’s a 50 percent chance AGI will be ready for deployment within the next few years, even as early as 2028.

An AI (Artificial Intelligence) sign at the World Artificial Intelligence Conference (WAIC) in Shanghai, China, July 6, 2023. (REUTERS/Aly Song)

Elon Musk believes it’ll be by 2026. Ilya Sutskever’s Safe Superintelligence Inc. believes superintelligence is within reach, and he is confident his new start-up’s team, investors, and business model are aligned to achieve it with minimal distractions. These are serious voices, not to be ignored.

Mark Zuckerberg hasn’t joined this chorus. Instead, he has countered it by saying something most of us have been thinking. For this, I must reference Zuckerberg’s recent conversation with YouTuber Kane Sutter. “It’s almost as if they kind of think they’re creating God or something and that’s not what we’re doing. I don’t think that’s how this plays out,” Zuckerberg said. He believes the future isn’t going to be one with a single, really smart AI doing everything. Instead, there will be a variety of AI tools, customised to the specific requirements people have. That is also why he has sided with the cause of open-source AI, as against closed ecosystems.

He isn’t the only one attempting to be the voice of reason. In April, French start-up Mistral’s founder and CEO Arthur Mensch gave an interview to The New York Times, and he didn’t hold back. “The whole A.G.I. rhetoric is about creating God. I don’t believe in God. I’m a strong atheist. So, I don’t believe in A.G.I.” It doesn’t come any clearer than that. Irrespective of his religious beliefs, he is very clear that AI companies tend to be mostly American, and that in itself creates a cultural complication.

“These models are producing content and shaping our cultural understanding of the world. And as it turns out, the values of France and the values of the United States differ in subtle but important ways,” he illustrated. Mensch finds no comfort in tech’s obsessive pursuit of making technology as cognitive as, or more cognitive than, humans.

It is worth reading a Substack conversation between Gary Marcus, author of Rebooting AI, and Grady Booch, an IBM Fellow and Chief Scientist for Software Engineering at IBM Research. Booch and Marcus agree that large language models are inherently wonky. Booch has a lot more to say, and I’ll simply quote him. It makes for focused reading.

“AGI seems just around the corner, and you yourself fall into that trap when you say ‘it’s now mostly a matter of software’. It’s never just a matter of software. Just ask Elon and his full self-driving vehicle, or the Air Force and the software-intensive infrastructure of the F-17, or the IRS with their crushing technical debt. I have studied many of the cognitive architectures that are supposed to be on the path of AGI: SOAR, Sigma, ACT-R, MANIC, AlphaX and its variations, ChatGPT, Yann’s latest work, and as you know have dabbled in one myself (Self, an architecture that combines the ideas of Minsky’s society of mind, Rod’s subsumption architecture, and Hofstadter’s strange loops). In all of these cases, we all think we grok the right architecture, the right significant design decisions, but there is so much more to do,” he says.

There’s more. “Heck, we’ve mapped the entire neural network of the common worm, and yet we don’t see armies of armoured artificial worms with laser beams taking over the world. With every step we move forward, we discover things we did not know we needed to know. It took evolution about 300 million years to move from the first organic neurons to where we are today, and I don’t think we can compress the remaining software problems associated with AGI in the next few decades.” A complete analysis. Just being smarter than humans cannot be the sole criterion for AGI. It has to be more; it has to be able to do most of what humans do. Can it? No one is too sure, except perhaps folks in research labs who know things we don’t. Or do they?

The one thing we can be sure of is that the future isn’t yet written. Work on AI’s transformation into AGI is, of course, ongoing. The spectre of regulation will loom with increasing intensity as time passes. The people who created or own the data being used to train these data-intensive models will have a say at some point. We may well get to AGI within the timelines that Elon Musk or the Google DeepMind scientists hint at. Or we may not; there’s an equal chance of that too. As it should be, till there’s some clarity on where we are actually headed.

Vishal Mathur is the technology editor for the Hindustan Times. Tech Tonic is a weekly column that looks at the impact of personal technology on the way we live, and vice-versa. The views expressed are personal.

