Imagine this: you’re being gently awakened by the soft tones of your personal assistant just as you’re nearing the end of your final sleep cycle.

A disembodied voice tells you about the emails you missed overnight and how they were answered in your absence. The same voice tells you that rain is expected this morning and advises you to put on your trench coat before leaving the house. As your car drives you to the office, your wristwatch announces that lunch has been pre-ordered for delivery from your local steakhouse because your iron levels recently dropped a little.

Anticipating all your needs before you even have a chance to realize them yourself is one of the possibilities of modern artificial intelligence. Some of Canada’s top AI researchers believe it could create a utopia for mankind — if AI doesn’t wipe out our species first.

While neither new nor simple, the conversation around AI and how it will affect the way we live can be divided into three parts: whether superintelligence, an entity that surpasses human intelligence, will be created; how such an entity might improve or destroy life as we know it; and what we can do now to control the consequences.

But regardless, observers in the field say the issue should be among the top priorities of world leaders.

The race for superintelligence

For the average person, today’s AI can be characterized by asking a device a question and hearing an answer within seconds, or by a phone that unlocks its mobile wallet when it recognizes your face.

These are the responses that follow a human prompt for a single task, the hallmark of artificial narrow intelligence (ANI). The next step is AGI, or artificial general intelligence, which is still in development but would give machines the ability to think and make decisions on their own, and therefore be more productive.

ASI, or artificial superintelligence, will function beyond human levels and is just years away, according to many in the field, including British-Canadian computer scientist Geoffrey Hinton, who spoke to CBC from Toronto, where he lives and serves as professor emeritus at the University of Toronto.

“If you want to know what it’s like to not have superintelligence, ask a chicken,” said Hinton, often referred to as one of the godfathers of AI.

WATCH | Has AI doomed us all? Computer scientist Geoffrey Hinton weighs in:

Has AI doomed us all? Here’s what the ‘godfather of AI’ says.

Geoffrey Hinton fears an AI takeover – but says there’s a way to stop it. The British-Canadian computer scientist known as the “godfather of AI” says governments should regulate how tech companies develop artificial intelligence to ensure that it is done safely.

“Almost all the leading researchers believe that we will achieve superintelligence. We will make things smarter than we are,” Hinton said. “I thought it would be 50 to 100 years. Now I think it’s probably five to 20 years before we get superintelligence. Maybe longer, but it’s coming faster than I thought.”

Jeff Clune, a computer science professor at the University of British Columbia and the Canada CIFAR AI Chair at the Vector Institute, an AI research nonprofit based in Toronto, echoes Hinton’s predictions regarding superintelligence.

“I certainly think there’s a chance, even a slim chance, that it could show up this year,” he said.

“We have entered an era in which superintelligence is possible with each passing month, and that possibility will increase with each passing month.”

Eradicating disease, streamlining irrigation systems, and perfecting food distribution are just a few of the ways humans could put superintelligence to use, helping to solve the climate crisis and end world hunger. However, experts caution against underestimating the power of AI, for better or worse.

The flip side of AI

Superintelligence, a sentient machine of the kind that conjures images of HAL from 2001: A Space Odyssey or Skynet from The Terminator, is considered inevitable by many, but it is not necessarily a death sentence for all of mankind.

Clune estimates there is a 30 to 35 percent chance that everything goes well in terms of humans maintaining control over superintelligence, meaning it could improve areas such as health care and education beyond our imagination.

The camera eye of the HAL 9000, the artificial intelligence computer from Stanley Kubrick’s 2001: A Space Odyssey, is what often comes to mind when people think of sentient machines.

“I would love to have a teacher who is extremely patient and can answer every question I have,” he said. “And in my experiences on this planet with humans, it’s rare, if not impossible, to find.”

He also says that superintelligence could help us “make death optional” by turbocharging science, eliminating everything from accidental death to cancer.

“Since the beginning of the scientific revolution, human scientific ingenuity has been constrained by time and resources,” he said.

“And if you have something smarter than us that you can make trillions of copies of in a supercomputer, you’re talking about a rate of scientific innovation that’s absolutely catalytic.”

Healthcare was one of the industries that Hinton agreed would benefit the most from an AI upgrade.

“In a few years we will have family doctors who have, literally, seen 100 million patients and know all the tests that have been done on you and your relatives,” Hinton told the BBC, a change that would help eliminate human error when it comes to diagnostics.

A 2018 survey commissioned by the Canadian Patient Safety Institute found that misdiagnosis topped the list of patient safety incidents reported by Canadians.

“The combination of an AI system and a doctor is much better than a doctor alone at dealing with difficult cases,” Hinton said. “And the system is only going to get better.”

British-Canadian computer scientist Geoffrey Hinton, known as the ‘godfather of AI,’ believes we will be able to maintain control over superintelligence, but says no one can predict exactly how much autonomy we will retain. (Evan Mitsui/CBC)

The dangerous business of superintelligence

However, if humans fail to maintain control, that glowing prediction could turn much darker, although most who work in the AI realm acknowledge that there are countless possible outcomes when artificial intelligence is involved.

Hinton, who won the Nobel Prize in Physics, made headlines last year when he told the BBC there was a 10 to 20 percent chance of AI making humans extinct in the next 30 years.

“We’ve never encountered anything more intelligent than ourselves before. And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing?” Hinton asked on the BBC’s Today programme.

“There is a mother and a child. Evolution has done a lot to allow the child to control the mother, but this is the only example I know of,” he said.

Speaking with CBC News, Hinton expanded on his parent-child analogy.

“If you have kids, when they’re young enough, one day they’ll try and tie their own shoelaces. And if you’re a good parent, you let them try and you might help them. But you’ve got to get somewhere, and after a while you say, ‘Okay, I’m going to do it today.’ That’s how it will be between us and the superintelligences,” he said.

“There are going to be things that we do and superintelligences just get fed up with the fact that we’re so incompetent and just replace us.”

About 10 years ago, SpaceX founder and Tesla Motors CEO Elon Musk told American astrophysicist Neil deGrasse Tyson that he believed AI would keep humans as pets.

Hinton says we will be kept the same way we keep lions.

“I don’t see why they wouldn’t. But we won’t control things anymore,” he said.

LISTEN | What this Nobel laureate fears about the future of AI:

As It Happens | 6:47 | The ‘godfather of AI’ won the Nobel for working to develop the technology he now fears.

Geoffrey Hinton has spent the last year or so sounding the alarm about the technology he helped create. Now his seminal work on artificial intelligence has won him and colleague John Hopfield the Nobel Prize in Physics. The University of Toronto computer scientist spoke to As It Happens host Nil Köksal.

And if humans aren’t considered worth keeping around for entertainment, Hinton believes we could be wiped out altogether, though he doesn’t think it’s helpful to play a guessing game about how humanity’s end might arrive.

“I don’t want to speculate about how they’re going to get rid of us. There’s a lot of ways they could do that. I mean, one obvious way is something biological, like a virus, that wouldn’t affect them. But who knows?”

How can we keep control?

While predictions about the scope of this technology and its timeframe may vary, researchers are united in their belief that superintelligence is inevitable.

The question that remains is whether humans will be able to stay in control or not.

For Hinton, the answer lies in electing politicians who make regulating AI a high priority.

“What we should be doing is encouraging governments to force the big companies to do more research on how to keep these things safe when they make them,” he said.

WATCH | How governments can regulate AI:


Nobel laureate Geoffrey Hinton on how governments should regulate AI

The winner of this year’s Nobel Prize in Physics is Geoffrey Hinton, a British-Canadian known as the ‘godfather of AI.’ He talks with CBC chief political correspondent Rosemary Barton about how governments should regulate the technology and its use in election campaigns.

However, Clune, who also serves as a senior research advisor at Google DeepMind, says many of the leading AI players have the right values and are “trying to get it right.”

“What worries me is a lot less the companies developing it, and more other countries trying to catch up, and other organizations that I think are far less scrupulous than the well-known AI labs.”

A practical solution Clune offers, much as happened in the nuclear age, is to bring all the major AI players into regular conversation. He believes everyone working on this technology should contribute to ensuring it is developed safely.

“This is the biggest roll of the dice that humans have made in history, bigger than the creation of nuclear weapons,” Clune said, adding that researchers around the world need to keep one another informed of their progress so that development can be slowed if needed.

“The stakes are so high, if we get it right, we get tremendous upside. And if we get it wrong, we could be talking about the end of human civilization.”




