
Can AI Wipe Out The Human Race?

Mal12345

Permabanned
Joined
Apr 19, 2011
Messages
14,532
MBTI Type
IxTP
Enneagram
5w4
Instinctual Variant
sx/sp
http://www.dailymail.co.uk/sciencet...ing-warns-rise-robots-disastrous-mankind.html

Google has set up an ethics board to oversee its work in artificial intelligence.

The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans.

One of its founders warned that artificial intelligence is the 'number 1 risk for this century,' and believes it could play a part in human extinction.

'Eventually, I think human extinction will probably occur, and technology will likely play a part in this,' DeepMind’s Shane Legg said in a recent interview.

Among all forms of technology that could wipe out the human species, he singled out artificial intelligence, or AI, as the 'number 1 risk for this century.'

The ethics board, revealed by the website The Information, is to ensure the projects are not abused.

Neuroscientist Demis Hassabis, 37, co-founded DeepMind Technologies just two years ago with the aim of trying to help computers think like humans.
 

Mal12345

Permabanned
Joined
Apr 19, 2011
Messages
14,532
MBTI Type
IxTP
Enneagram
5w4
Instinctual Variant
sx/sp
Don't be ridiculous, dude

Me ridiculous? Consider your avatar. Last season the Redskins came in just ahead of the joke team known as the Houston Texans. Even the talentless Raiders did better.
 

Ivy

Strongly Ambivalent
Joined
Apr 18, 2007
Messages
23,989
MBTI Type
INFP
Enneagram
6
Several posts moved to OT. Keep it civil, people.
 

Fluffywolf

Nips away your dignity
Joined
Mar 31, 2009
Messages
9,581
MBTI Type
INTP
Enneagram
9
Instinctual Variant
sp/sx
Not really, because AI cannot be held responsible for its actions. The creators of the AI would be responsible.
 

Mal12345

Permabanned
Joined
Apr 19, 2011
Messages
14,532
MBTI Type
IxTP
Enneagram
5w4
Instinctual Variant
sx/sp
Not really, because AI cannot be held responsible for its actions. The creators of the AI would be responsible.

You know what they're saying. It's like asking, "Can atomic bombs wipe out the human race?" We all know who makes these bombs. Not aliens, and not some ancient civilization existing deep inside the Earth.

However, the AI they're talking about would be the ultimate product: a device as intelligent as humans, or more so. We're not talking about human mimicry, but a machine so advanced that it can no longer be distinguished from any other intelligent, self-aware being.
 

Riva

Guest
AI wouldn't have much common sense; common sense wouldn't come naturally to it. It would have to be programmed in anyway.
 

Mal12345

Permabanned
Joined
Apr 19, 2011
Messages
14,532
MBTI Type
IxTP
Enneagram
5w4
Instinctual Variant
sx/sp
AI wouldn't have much common sense; common sense wouldn't come naturally to it. It would have to be programmed in anyway.

Well yes, just as we're all "programmed" to learn from experience, and to learn the practices and ways of the cultures we all live in. The same would be true of AI.
 

Ginkgo

Guest
http://io9.com/stephen-hawking-says...utm_source=io9_facebook&utm_medium=socialflow

The world's most famous physicist is warning about the risks posed by machine superintelligence, saying that it could be the most significant thing to ever happen in human history — and possibly the last.

As we've discussed extensively here at io9, artificial superintelligence represents a potential existential threat to humanity, so it's good to see such a high profile scientist both understand the issue and do his part to get the word out.

Hawking, along with computer scientist Stuart Russell and physicists Max Tegmark and Frank Wilczek, says that the potential benefits could be huge, but we cannot predict what we might achieve when AI is magnified — both good and bad.

Writing in The Independent, the scientists warn:

Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence".

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.



Nanotechnology + AI is a huge deal.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,986
With a group of despots supporting each other in the political sphere, shenanigans involving the safeguarding of nuclear weapons, and various resource shortfalls that we'll be facing in about four decades or so, I think we have a lot of competing trends that could destroy humankind first.
---

Unless our understanding of computer science and neuroscience is a lot further along than I am aware of, I think the destructive power AI holds is about the same as that of other technologies.

We recently had to put a moratorium on research into how species-hopping viruses work, for instance. Carbon nanotubes, which are used in research across many fields, have been found to have effects similar to asbestos--not world-ending, but there are a lot of things like this, and malicious scientists (or ones complicit with a despot) can easily brew new chemical cocktails that are weapons of mass destruction.
----

What separates "AI" from other forms of computing anyways?

A not-so-well-known fact is that much of the circuitry in most computer chips has been automatically constructed by algorithms for a very long time, because it would take human beings a ridiculous amount of time to design that much circuitry by hand. The automation used is quite sophisticated (touching on many problems similar to undecidability and the halting problem), but would probably not be considered "AI". Bioinformatics and "Big Data" in all realms also use "machine learning" algorithms to discover patterns in genes, metabolic pathways, image correction, financial markets, politics, social networks, personal habits, and a whole lot more.

Algorithms have long allowed computers and robots to do many things far better than humans can; this has been the case since the advent of electronic computation. Where "AI" ends and mere statistically informed software begins is very murky to me, and I have implemented many machine learning and "AI" algorithms myself.
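
For instance, most of the "machine learning" in the fields above boils down to statistical curve-fitting, something like this toy Python sketch (scikit-learn and the made-up data are my own choices for illustration, nothing more):

from sklearn.linear_model import LogisticRegression
import numpy as np

# Toy data: two clusters of 2-D points, labeled 0 and 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# "Learning" here is nothing more than fitting coefficients to data.
model = LogisticRegression().fit(X, y)
print(model.predict([[0.1, 0.2], [3.1, 2.9]]))  # pattern-matching, not thinking

Whether you call that "AI" or just applied statistics is exactly the murkiness I mean.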
 

Fluffywolf

Nips away your dignity
Joined
Mar 31, 2009
Messages
9,581
MBTI Type
INTP
Enneagram
9
Instinctual Variant
sp/sx
Yeah, I really don't see the issue. Unless you give AI the ability to manufacture and launch WMDs (realize how silly that sounds? :p ), I don't see how any kind of AI would ever be a threat.

On the other hand, the possible applications are endless. A sophisticated AI could sift through and organize data faster than thousands of people. It could definitely replace a lot of people's jobs, though. :p
 
Joined
Jun 6, 2007
Messages
7,312
MBTI Type
INTJ
I think it's certainly possible. As something designed by humans, AI is inherently imperfect and could behave in unpredictable ways. AI is a wonderful thing to an extent, such as automating duties that would be drudgery for humans. But given too much power, AI could cause tremendous problems. I'm not sure we'll ever be able to program a machine to have the same judgment that humans have, and that could make all the difference.
 

Typh0n

clever fool
Joined
Feb 13, 2013
Messages
3,497
Instinctual Variant
sx/sp
I don't think it sounds silly to assume AI could launch weapons so advanced we can't even understand them. AI can do more than humans could; the problem is that AI isn't AL (artificial life) and doesn't have flesh, hence it has no motives to act. It is simply programmed.
 

WhoCares

Guest
I really hope artificial intelligence doesn't think like humans, with all their irrational biases and distortions. What would be the point of that? We've got plenty of that going on already. But I do think AI will lead to the extinction of humans, and whether or not I think that is a bad thing is up for debate. Overall, as a species, I think we overrate ourselves and have done a great deal of damage to our little ecosystem with our 'smartest chimp in the room' mentality. The end of humanity will in no way be the end of life for the universe, and why shouldn't we come to our natural end anyway? Who knows, maybe it will be the best thing for the galaxy.

I'm a big subscriber to chaos theory, and I really do think we are dumb enough to create something and then lose control of it. I mean, we are dumb enough to genetically tamper with our own food supply before we even properly understand genetics. It's like giving a monkey a box of matches and a can of gasoline. Eventually the inevitable will happen, and not because the monkey understands the consequences of its actions. Boom!
 

Oaky

Travelling mind
Joined
Jan 15, 2009
Messages
6,180
MBTI Type
INTJ
Enneagram
5w6
Instinctual Variant
sp/so
You know what they're saying. It's like asking, "Can atomic bombs wipe out the human race?" We all know who makes these bombs. Not aliens, and not some ancient civilization existing deep inside the Earth.

However, the AI they're talking about would be the ultimate product: a device as intelligent as humans, or more so. We're not talking about human mimicry, but a machine so advanced that it can no longer be distinguished from any other intelligent, self-aware being.
I'm afraid we're still not near it. Closer, yes, but certainly not near. Technology has always been built on mathematics, while sympathy and empathy in a human can only be understood intuitively, through a particular intangible 'feel'; insofar as science knows, this has never been evident beyond the animal kingdom. Through the use of neural networks and genetic algorithms, a machine may in theory be able to mimic a human to an accurate level if programmed to do so, but certainly not think like one. In the matter of mimicking, artificial intelligence has no interest in doing 'good' or 'bad'; it cannot differentiate between them. It cannot instinctively make decisions based on fear, happiness, or anger; it can only mimic, based on visual sensory information.

We are also far from knowing the correct psychological variables and inputs for such neural networks. Our information on neuroscience, as it connects to psychology, is still premature, and even what we do know cannot simply be fed into a neural network frame without the correct probabilities in certain areas. So you can try to build a computer that learns over time, but it will never see reason and never intuit understanding through unpredictable human nature.

You also have to remember that it would have to learn the way a baby does, by watching interactions between two humans. The data that would have to go through the machine would include the visual recognition of the two humans, all the associations with them (mum, dad, neighbour, who it sees more often, etc.), the language, the tone of voice, the facial expressions, the actions, all the items involved in the actions, the setting, and so on. A baby absorbs all of this far more quickly and subconsciously; developing a machine that could do it alone, even with only the variables I've mentioned, is not yet possible.
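
To make the mimicry point concrete, here is a toy Python sketch (scikit-learn and the invented 'expression' data are assumptions purely for illustration): a small neural network can reproduce an observed input-to-output mapping without feeling or differentiating anything at all.

from sklearn.neural_network import MLPClassifier
import numpy as np

# Invented "observations": (smile intensity, brow furrow) -> reaction label,
# where 0 = respond neutrally and 1 = respond warmly.
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9],
              [0.2, 0.8], [0.7, 0.3], [0.3, 0.7]])
y = np.array([1, 1, 0, 0, 1, 0])

# The network associates inputs with outputs; no fear, happiness, or anger
# exists anywhere in it.
net = MLPClassifier(hidden_layer_sizes=(8,), solver='lbfgs',
                    random_state=1, max_iter=1000).fit(X, y)
print(net.predict([[0.85, 0.15]]))  # mimics the observed mapping, nothing more

The network interpolates the pattern it was shown; that is mimicry of behaviour, not thinking.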

Also, if I'm going to take the sci-fi nonsense about AI wiping out the human race seriously, I'd say that humans will always secure their technology when releasing it. In some sci-fi world, humans would probably develop an AI that can't learn 'bad' and can only do 'good', and this would be a force against the evil cyberbots or whatever. Just playing with imagination. A little silly to believe.
 

Oaky

Travelling mind
Joined
Jan 15, 2009
Messages
6,180
MBTI Type
INTJ
Enneagram
5w6
Instinctual Variant
sp/so
Oh, if we were to talk about missiles and nuclear weapons in warfare, it may be a different story altogether.
 

Avalon

New member
Joined
Oct 5, 2013
Messages
47
MBTI Type
INTP
I don't think AI would simply decide to exterminate humans on a whim. I believe their decisions would be based solely on mathematical probability: if they calculated that humans pose a threat to their existence, that threat, imaginary or real, might cause them to exterminate us. Perhaps humans of the future might provoke a war out of fear of being exterminated, trying to "unplug" them beforehand.
 

chubber

failed poetry slam career
Joined
Oct 18, 2013
Messages
4,413
MBTI Type
INTP
Enneagram
4w5
Instinctual Variant
sp/sx
Well, I would think that, depending on the personality of the AI, it could end up committing suicide of its own accord (by its own conclusion).
 