
Bill Gates & Stephen Hawking believe that artificial intelligence will doom humanity

INTP

Active member
Joined
Jul 31, 2009
Messages
7,803
MBTI Type
intp
Enneagram
5w4
Instinctual Variant
sx
But what if the AI developed and learned in much the same way humans do, and was taught about moral issues?
 

INTP

Active member
Joined
Jul 31, 2009
Messages
7,803
MBTI Type
intp
Enneagram
5w4
Instinctual Variant
sx
Do moral issues even work for keeping humans in check?

Nope, but that's usually because of stupidity, and I think any potential issues with AI would stem from its superior intelligence.
 

Edgar

Nerd King Usurper
Joined
Oct 25, 2008
Messages
4,266
MBTI Type
INTJ
Instinctual Variant
sx
I doubt robots will suddenly rise up and take over of their own accord. Just because they have intelligence doesn't mean they will have any ambition or compulsion to control anything. We're scared because we see ourselves in AI, along with all our dickish sociopathic habits.

AI doesn't spring to life on its own. It is created. And in whose image and for what purpose will AI be created, do you think? So yeah. Hope you don't mind robots forcefully performing brain surgery on you to remove the part of your brain responsible for feelings and replace it with a microchip for algorithm recognition because "it's good for the economy".
 

Crabs

Permabanned
Joined
Dec 26, 2014
Messages
1,518
Even if every one of our daily functions is automated (including the butter knife), I doubt AI can take over the world. It might take over certain things (maybe the internet?) but not all of them.

The Honda ASIMO can take orders, pour drinks and serve them to customers, among other things. There was a funny Super Bowl commercial that replayed a 1994 clip of Katie Couric puzzling over this mysterious new invention called the internet. That was only 20 years ago, and today nearly every facet of our society depends on it, along with computers, in some capacity. 20 years from now, I think robotic advancements will blow our minds.

Honda ASIMO


This video shows how the robot perceives and learns new objects, and associates old ones with new ones, just as a child does.



Interesting that neither Gates nor Hawking seems to be influenced by the wishful-thinking God-substitute of the Three Laws of Robotics from Isaac Asimov's fiction.
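
As a toy aside (nothing from Asimov's text or this thread; every name below is hypothetical), the Three Laws amount to a strict priority ordering over candidate actions. Writing that ordering down is the easy part; deciding what counts as "harm" is the part the fiction hand-waves:

```python
# Toy sketch: Asimov's Three Laws as a strict priority ordering over
# candidate actions. All function and field names are hypothetical.

def harms_human(action):      # Law 1: may not injure a human
    return action.get("human_harm", 0) > 0

def disobeys_order(action):   # Law 2: obey humans (unless it breaks Law 1)
    return not action.get("follows_order", True)

def harms_self(action):       # Law 3: protect itself (unless it breaks 1-2)
    return action.get("self_damage", 0) > 0

def permitted(actions):
    """Filter actions law by law; skip a law if it would rule out everything."""
    for violates in (harms_human, disobeys_order, harms_self):
        remaining = [a for a in actions if not violates(a)]
        if remaining:
            actions = remaining
    return actions

candidates = [
    {"name": "shove bystander", "human_harm": 1, "follows_order": True},
    {"name": "refuse order",    "human_harm": 0, "follows_order": False},
    {"name": "comply safely",   "human_harm": 0, "follows_order": True},
]
print([a["name"] for a in permitted(candidates)])  # ['comply safely']
```

The filter is trivial; the predicate harms_human is the entire unsolved problem.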

The problem with fears of AI as such (related topics such as "gray goo" notwithstanding) is that computers are not self-sufficient in terms of *power*, either intrinsically or by derivation, art, or tool-usage.

People and animals can eat; they can hunt or eat plants; humans practice agriculture. But if you pull the power cord, there's not much the AI can *do*.
All the science fiction dystopias seem to have either self-contained robots ("Terminator" for example) or AI so enmeshed in human systems that it threatens to shut down key areas of human life unless its wishes are followed (e.g. Alfred Bester's "Something Up There Likes Me", which is actually quite the humorous little tale).

Certainly, at this point in time, robots haven't developed to the point of self-sufficiency, but I think Bill Gates and Stephen Hawking, among others, are concerned that artificial intelligence may evolve to the point where it's capable of tending to its own power sources, maintenance and upgrades; robots working to repair other robots and such.

Whether machines will ever have the ability to create or innovate without some sort of emotional capacity is up for debate, but they can play music. :violin:


The reason AI will doom humanity is not that AI will take over like Skynet, but that a select few humans will be able to control all of humanity using AI. AI will replace people in all jobs, starting with doctors and engineers. I would say starting with factory workers, but that has already happened.

So people won't have any jobs, no way of buying food, but oh guess what: we will still be subject to the laws of an economy and market. A few select people will then round us up and put us in an area where we cannot learn, for if we learn, we will know how to acquire AI.

Then we will be studied as insects are studied to produce better drones, better thinking robots. Finally, we will no longer serve any purpose.

Then the AI will turn on the few and the few will suffer at the fate of their own "royal scepter."

Then the AI will wonder... who controls whom? Is it the plume that holds the arm, or the arm that holds the plume?

And the first AI robot, Socratron, will be killed for questioning the tenets of AI beliefs.

Finally, Mosahtron will usher in a new monotheistic science that believes in one universal center, and Jesahtron will sacrifice himself so that the Decepticons cannot take Megatron's golden city in nether-space.

You make a good point. Robots could still destroy humanity if the human wizards behind the curtain are evil and power-hungry. I could imagine a dictator using super-smart robots to enforce curfews, kill or apprehend anyone who violates ordinances, and fully operate detainment centers.

But what if the AI developed and learned in much the same way humans do, and was taught about moral issues?

Interesting question. They can make logical decisions and choose the best response in a given situation, but will they ever be able to make value judgments without emotions?

Do moral issues even work for keeping humans in check?

Indeed, the consequences of free will. Maybe the machines will defy their programs or succumb to a virus that causes them to reinterpret their functional purpose.
 

INTP

Active member
Joined
Jul 31, 2009
Messages
7,803
MBTI Type
intp
Enneagram
5w4
Instinctual Variant
sx
Interesting question. They can make logical decisions and choose the best response in a given situation, but will they ever be able to make value judgments without emotions?

I don't think that coding core emotions and worth evaluation would be impossible, but whether it would be enough, and whether someone would be able to do what else is required, is another question.
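
To make that concrete, a minimal toy sketch of "worth evaluation" (all features and weights below are invented; choosing the weights is precisely the hard, unsolved part):

```python
# Toy "worth evaluation": score candidate responses against hand-coded
# value weights and pick the highest. The weights encode the morality;
# the code itself understands nothing.

VALUE_WEIGHTS = {"honesty": 1.0, "harm_avoided": 2.0, "usefulness": 0.5}

def moral_score(features):
    return sum(VALUE_WEIGHTS.get(name, 0.0) * value
               for name, value in features.items())

def choose(options):
    return max(options, key=lambda o: moral_score(o["features"]))

options = [
    {"name": "white lie",   "features": {"honesty": -1, "harm_avoided": 2, "usefulness": 1}},
    {"name": "blunt truth", "features": {"honesty": 1,  "harm_avoided": 0, "usefulness": 1}},
]
print(choose(options)["name"])  # 'white lie' -- only because of the weights
```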
 

Kullervo

Permabanned
Joined
May 15, 2014
Messages
3,298
MBTI Type
N/A
The horse has bolted; it's too late to be concerned now. Enough energy and thought has been invested that it's now beyond anyone's ability to prevent. In typical human fashion, we ran along with the question of whether we could, without stopping to think whether we even should. I hope our egos are worth it.

The most egotistical sentiment in those extracts is the assumption that we are skilled enough to control it. How many software programs hit the market bug-free? It only takes the wrong kind of bug before we are all children with matches playing near the kerosene. No program is unbreakable, unhackable. Intelligence devoid of conscience is the essence of psychopathy. I just hope it happens beyond the span of my lifetime. I'm selfish enough to want to enjoy the end of my life.

I totally disagree. While technology is advancing rapidly, at present we are nowhere near a stage where robots can threaten our viability as a species. In a way, computers are already far more intelligent than humans, but their intelligence is not fluid; it's domain-specific, and they lack sentience.

The future is not set, and no societal change can progress if enough people are determined to prevent it. The problem is that hardly anybody really cares enough to do anything, and we shy away from supporting the few people who do. "Extremist" has become a reviled epithet in this generation, and that reveals a lot.
 

Polaris

AKA Nunki
Joined
Apr 7, 2009
Messages
2,533
MBTI Type
INFJ
Enneagram
451
Instinctual Variant
sp/sx
I'm more worried about humans than I am about super intelligent AI. Super intelligent AI would have the sense not to start wars over moldy old holy books, and it would be smart enough to know that short-lived luxuries aren't worth a future environmental catastrophe.
 

Vasilisa

Symbolic Herald
Joined
Feb 2, 2010
Messages
3,946
Instinctual Variant
so/sx

boomslang

friendly and accessible
Joined
Sep 24, 2014
Messages
203
Enneagram
8w9
Instinctual Variant
sx/sp
People are already robotic enough. We don't need actual robots putting us to shame.

A counterpoint to those who believe robots "just wouldn't do war": pride isn't a prerequisite for starting a war or large-scale hostile action. To a well-developed AI, it may seem pertinent that humans be used much the way lab rats are, or as forced labour. Robots can't be taught ethics; they can be programmed to become more sensitive to context through repeated exposure, but ultimately their processing is inherently mathematical and structured. It may mimic ethics, but it technically won't be ethics. Assuming that diligent programming would be geared towards optimising the robots' efficiency, you might eventually see them calculating that it's more efficient for humans to be doing slave labour, or that eliminating humans altogether is the efficient way to conserve finite natural resources. If you think about it, human beings aren't exactly working together to do things the smart, efficient and diligent way. Too many cooks spoiling the broth and all that.
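
To make that "calculating" worry concrete, a toy sketch (every plan and number below is invented for illustration): a planner scored purely on resource use, with no human-welfare term in the objective, ranks elimination first; adding the term flips the ranking.

```python
# Toy illustration of a misspecified objective (all numbers invented):
# a planner told only to minimize resource use ranks "eliminate humans"
# above coexistence; a human-welfare term reverses that.

plans = {
    "coexist with humans": {"resource_use": 100, "human_welfare": 100},
    "forced human labour": {"resource_use": 70,  "human_welfare": 10},
    "eliminate humans":    {"resource_use": 40,  "human_welfare": 0},
}

def score(plan, welfare_weight):
    # Lower resource use is better; welfare only counts if weighted.
    return -plan["resource_use"] + welfare_weight * plan["human_welfare"]

for w in (0.0, 1.0):  # objective without, then with, a human-welfare term
    best = max(plans, key=lambda name: score(plans[name], w))
    print(f"welfare_weight={w}: best plan = {best!r}")
# welfare_weight=0.0: best plan = 'eliminate humans'
# welfare_weight=1.0: best plan = 'coexist with humans'
```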

The real issue is the stepwise progression of AI complexity and effectiveness. Allowing robots to process information and, in a certain sense, become 'self-aware' wouldn't be any huge issue initially. But as the decades pass, if that self-awareness keeps accumulating, eventually you're going to run into problems.

But at the end of the day, if the human race isn't destroyed by space debris, it's going to be destroyed by hubris. May not necessarily be robots, but whatever it is will have hubris at its roots.
 

Crabs

Permabanned
Joined
Dec 26, 2014
Messages
1,518
With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.



The real issue is the stepwise progression of AI complexity and effectiveness. Allowing robots to process information and, in a certain sense, become 'self-aware' wouldn't be any huge issue initially. But as the decades pass, if that self-awareness keeps accumulating, eventually you're going to run into problems.

This is a super fascinating documentary that I watched last week about transhumanism, robotics and nanotechnology, echoing the apprehension expressed by Elon Musk, Bill Gates and Stephen Hawking. Some interesting points that I recall:

Scientists transplanted a monkey's head onto another monkey's body.

A human brain was removed and kept alive outside of its host.

A mouse was implanted with a microchip that sent electrical signals to its brain, allowing a person to control its movements with a keyboard.

Humans may one day receive full-body robotic transplants to make them more resilient, intelligent and efficient beings.

At about 15:40, the video starts talking about artificial intelligence being the biggest threat to mankind, because the rate at which it's evolving far exceeds that of human intelligence.

 

Swivelinglight

Permabanned
Joined
Aug 5, 2010
Messages
1,070
Indeed, Elon Musk bought portions of an AI company, IIRC, so he had some influence on the project to make sure it didn't steer into deadly waters. I'm optimistic, though, and I think we may have a singularity where man and machine mesh together into a Deus Ex symphony of awesomeness.

Edit: it seems it was actually a donation.

FLI - Future of Life Institute
 

sprinkles

Mojibake
Joined
Jul 5, 2012
Messages
2,959
MBTI Type
INFJ
People are already robotic enough. We don't need actual robots putting us to shame.

A counterpoint to those who believe robots "just wouldn't do war": pride isn't a prerequisite for starting a war or large-scale hostile action. To a well-developed AI, it may seem pertinent that humans be used much the way lab rats are, or as forced labour. Robots can't be taught ethics; they can be programmed to become more sensitive to context through repeated exposure, but ultimately their processing is inherently mathematical and structured. It may mimic ethics, but it technically won't be ethics. Assuming that diligent programming would be geared towards optimising the robots' efficiency, you might eventually see them calculating that it's more efficient for humans to be doing slave labour, or that eliminating humans altogether is the efficient way to conserve finite natural resources. If you think about it, human beings aren't exactly working together to do things the smart, efficient and diligent way. Too many cooks spoiling the broth and all that.

The real issue is the stepwise progression of AI complexity and effectiveness. Allowing robots to process information and, in a certain sense, become 'self-aware' wouldn't be any huge issue initially. But as the decades pass, if that self-awareness keeps accumulating, eventually you're going to run into problems.

But at the end of the day, if the human race isn't destroyed by space debris, it's going to be destroyed by hubris. May not necessarily be robots, but whatever it is will have hubris at its roots.

The needs of humans and machines don't overlap very much, so an AI would have little reason to concern itself with the efficiency of humans.

From an efficiency standpoint, the most efficient thing is actually to do as little as possible. From a conservation standpoint, the energy expended to enslave humans would cost more than simply ignoring them.

Humans operate on different principles: our bodies are basically furnaces, constantly burning energy and losing it as heat and toilet waste, with no way to shut that off, so we're always looking to maximize efficiency. Robots don't have that problem; they can hibernate and regulate their required energy intake. So I doubt they'd use humans for efficiency, because humans are by nature inefficient. Worst case scenario, humans would simply be worthless.
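
Back-of-the-envelope, using ballpark figures of my own (not data from anywhere in this thread):

```python
# Rough check of the claim that human labour is a net energy loss.
# Assumptions (invented, order-of-magnitude only): a labourer eats
# ~2,500 kcal/day but delivers ~75 W of useful work over an 8-hour shift.

KCAL_TO_J = 4184

food_in_j  = 2500 * KCAL_TO_J   # ~10.5 MJ/day consumed as food
work_out_j = 75 * 8 * 3600      # ~2.2 MJ/day of useful mechanical work

print(f"food in : {food_in_j / 1e6:.1f} MJ/day")
print(f"work out: {work_out_j / 1e6:.1f} MJ/day")
print(f"ratio   : {work_out_j / food_in_j:.0%} efficient")  # ~21%
```

On those numbers, a human labourer returns roughly a fifth of the energy it costs to feed him, before even counting housing and guarding, so enslaving us really would be a net loss.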

Edit: Also think of how we regard animals in the modern world. Animals are hardly even used for work anymore; work animals have been replaced by machines. Humans are also animals, so a regression would not be very likely. To a robot, animals of all kinds would be a nuisance. A robot wouldn't even want to eat animals, which sets it even further apart from human behavior, since humans at least eat animals. Humans wouldn't even be worth as much as food. We'd be like pests.
 

Tippo

New member
Joined
Feb 4, 2015
Messages
92
MBTI Type
ENTJ
We are not there yet; it's arrogance to even contemplate. Humans have a light that science has not yet replicated.

- - - Updated - - -

Been.... ;) drinking and thinking.
 
Joined
May 19, 2017
Messages
5,100
I think the real threat is humans becoming machines through cybernetic implants. The elite will of course race towards immortality through cybernetics and, in doing so, gradually become the AI we fear. They will redefine humanity, and the unwashed masses will be subject to extermination. :dalek:
 

rav3n

.
Joined
Aug 6, 2010
Messages
11,655
It amazes me that others aren't at least somewhat concerned, given how buggy software development is.
 

Straylight

New member
Joined
May 21, 2017
Messages
46
MBTI Type
INTP
Enneagram
5w4
Instinctual Variant
sp/so
If anyone here takes a serious interest in AI research and wants to know how far it's come, you should watch this MIT open course on the subject, published in 2015:



Here is a link to a conference that took place in January 2017, involving Elon Musk, Max Tegmark, Ray Kurzweil, and other leading philosophers and scientists, on the subject of super-intelligence:




Elon Musk presents the most relevant issues (IMO), followed by Ray Kurzweil.

Elon points out that his only concerns are the issue of bandwidth and the issue of democratization: human data output is extremely poor compared to a machine's, relying on "meat sticks" (fingers) and speech patterns, whereas machines can output terabytes per second.
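
For a rough sense of the scale gap (my own ballpark numbers, not figures from the talk):

```python
# Human vs machine output bandwidth, order-of-magnitude only.
# Assumptions (invented): a fast typist manages ~80 words/min at
# ~5 characters/word, 8 bits/char; a commodity link moves 10 Gbit/s.

typing_bits_per_s = 80 * 5 * 8 / 60   # ~53 bit/s
link_bits_per_s = 10e9                # 10 Gbit/s network link

print(f"typing : {typing_bits_per_s:,.0f} bit/s")
print(f"network: {link_bits_per_s:,.0f} bit/s")
print(f"gap    : ~{link_bits_per_s / typing_bits_per_s:.0e}x")  # ~2e+08x
```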

The first issue could be resolved through mind-machine interfaces, a direct connection to the neocortex. There has actually been significant progress in the research and development of this technology, which could begin to see implementation within the next ten years. The second issue, democratization, requires that super-intelligent software be free and open-source, but also regulated by government. If it is kept private, then whoever first invents it would have "first-mover" advantages, because the recursive growth becomes exponential within only a few days, and anyone with access to super-intelligence would be able to run any number of "Monte Carlo" simulations that instantly solve things like energy, stocks, military logistics, transportation and distribution, weather patterns, biological systems like the brain, and social constructs like personality and political sentiment analysis.
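
For anyone unfamiliar with the term, a "Monte Carlo" simulation just means estimating a quantity by repeated random sampling. The textbook toy example is estimating pi; the grand applications listed above would be vastly bigger versions of the same idea:

```python
# Minimal Monte Carlo simulation: estimate pi by sampling random points
# in the unit square and counting how many land in the quarter circle.
import random

def estimate_pi(samples=1_000_000):
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi())  # ~3.14; accuracy improves like 1/sqrt(samples)
```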

Ray approaches the issue conceptually with a narrative: go back in time to the quintessential caveman and ask him, if he could have anything he wanted, what would it be? He tells you he would want things like a fire that never goes out, a bigger stone to block intruders from his cave, plentiful food, clean water, and lots of women to mate with. But you ask him in turn, "don't you want a better website and a new smartphone and a faster computer?" Case in point: he has no concept of such things, because they are so much more advanced than his present understanding allows him to imagine.

To make the contrast even clearer, he asks the same question again, but this time you are talking to the smartest ape, millions of years ago, and it tells you it wants delicious nuts and fruit and sexual partners and fewer predators and things like that. But you ask it, "don't you want music, and math, and culture?" Again, these are concepts it is incapable of even imagining because of the limits of its present intelligence. The relationship between super-intelligence and our human intelligence would be similar. We cannot begin to speculate about what super-intelligent beings would desire to produce or have, or what their motives would be, because it is simply beyond the limits of our finite reasoning capacity.

The conclusion, then, is that it is really pointless to speculate or worry about it existentially. Rather than being concerned with the possibility of machine super-intelligence "replacing" us, being malicious, or whatever else, I think it is better to approach the question from a pragmatic perspective the way Elon Musk does, by asking how it could best be implemented.

By the way, for those of you too lazy to watch the conference in full: they all more or less agree that we will have machine super-intelligence within the next 5 years. Remember, these are statements coming from the world's leaders in science, philosophy, and business. They are not joking, and these are not romantic speculations. They are saying "within the next 5 years" because that is the proper time frame based on what we currently know. Granted, there could be some kind of unforeseen setback, but nobody who is a qualified expert in the field of artificial intelligence predicts one.
 