Artificial intelligence will surpass human intelligence decades earlier than previously predicted, according to the man known as the “father of AI.”
Ben Goertzel, the PhD mathematician and futurist who popularized the term “artificial general intelligence” (AGI), believes that AI is verging on an exponential escalation.
He made the prediction last week while closing out the 2024 Beneficial AGI Summit and Unconference, which was partially sponsored by his own firm, SingularityNET.
He said: “It seems quite plausible we could get to human-level AGI within, let’s say, the next three to eight years. Once you get to human-level AGI, within a few years you could get a radically superhuman AGI.”
InfoWars reports: The man who is sometimes called the “father of AI” admitted that he could be wrong, but he went on to predict that the only impediment to a runaway, ultra-advanced AI – far more advanced than its human makers – would be if the bot’s “own conservatism” advised caution.
“There are known unknowns and probably unknown unknowns,” Goertzel said. “No one has created human-level artificial general intelligence [AGI] yet; nobody has a solid knowledge of when we’re going to get there.” But unless the necessary processing power required, in Goertzel’s words, “a quantum computer with a million qubits or something,” an exponential escalation of AI struck him as inevitable. “Once you get to human-level AGI, within a few years you could get a radically superhuman AGI,” he said.
In recent years, Goertzel, well known for his work on Sophia the Robot, the first robot ever to be granted legal citizenship, has been investigating a concept he calls “artificial superintelligence” (ASI), which he defines as an AI so advanced that it matches all of the brain power and computing power of human civilization. According to him, three converging lines of evidence could support his thesis. First, he cited the updated work of Google’s longtime resident futurist and computer scientist Ray Kurzweil, who has developed a predictive model suggesting AGI will be achievable in 2029. Next, Goertzel pointed to the well-known improvements made to large language models (LLMs) within the past few years, which he said have “woken up so much of the world to the potential of AI.” Finally, he turned to his own research on an infrastructure designed to combine various types of AI, which he calls “OpenCog Hyperon.”
The new infrastructure would marry existing AI, like LLMs, with new forms of AI focused on other areas of cognitive reasoning beyond language, such as math, physics, or philosophy, to help create a more well-rounded, true AGI. Goertzel’s “OpenCog Hyperon” has attracted interest from others in the AI space, including Berkeley Artificial Intelligence Research (BAIR), which last month hosted an article he co-wrote with Databricks CTO Matei Zaharia and others.
The self-described panpsychist has suggested that researchers pursue the creation of a “benign superintelligence.” Goertzel has also proposed an AI-based cryptocurrency rating agency capable of identifying scam tokens and coins.