
  1. #11
    Ginkgo (Guest)

    http://io9.com/stephen-hawking-says-...ium=socialflow

    The world's most famous physicist is warning about the risks posed by machine superintelligence, saying that it could be the most significant thing to ever happen in human history — and possibly the last.

    As we've discussed extensively here at io9, artificial superintelligence represents a potential existential threat to humanity, so it's good to see such a high-profile scientist both understand the issue and do his part to get the word out.

    Hawking, along with computer scientist Stuart Russell and physicists Max Tegmark and Frank Wilczek, says that the potential benefits could be huge, but we cannot predict what we might achieve, for good or ill, when this intelligence is magnified.

    Writing in The Independent, the scientists warn:

    Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence".

    One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

    So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.
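    To see why Good's "repeatedly improve their design" line in the quote above is the crux: any steady self-improvement compounds geometrically. A toy sketch in Python (both numbers are invented for illustration):

    ```python
    # Toy model of I.J. Good's "intelligence explosion": each machine
    # generation designs a slightly better successor. Numbers are invented.
    capability = 1.0          # human level = 1.0 (arbitrary units)
    improvement_factor = 1.1  # assume each redesign is 10% better

    for generation in range(50):
        capability *= improvement_factor

    print(f"after 50 self-redesigns: {capability:.1f}x human level")  # ~117x
    ```

    The point of the worry is not the particular numbers, but that nothing in the loop has to wait for human designers.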



    Nanotechnology + AI is a huge deal.

  2. #12

    With a group of despots supporting each other in the political sphere, shenanigans involving the safeguarding of nuclear weapons, and various resource shortfalls that we'll be facing in about four decades, I think we have a lot of competing trends that could destroy humankind first.
    ---

    Unless our understanding of computer science and neuroscience is a lot further along than I am aware of, I think the destructive power AI holds is about the same as that of other technologies.

    We recently had to put a moratorium on research into how species-hopping viruses work, for instance. Carbon nanotubes, which are used in research in so many fields, have been found to have effects similar to asbestos--not world-ending, but there are a lot of things like this, and malicious scientists (or ones complicit with a despot) can easily brew new chemical cocktails that are weapons of mass destruction.
    ----

    What separates "AI" from other forms of computing anyway?

    A not-so-well-known fact is that much of the circuitry in most computer chips has been automatically constructed by algorithms for a very long time, because it would take human beings a ridiculous amount of time to design that much circuitry by hand. The automation used is quite sophisticated (touching on many problems similar to undecidability and the halting problem), but it would probably not be considered "AI". Bioinformatics and "Big Data" in all realms also use "machine learning" algorithms to discover patterns in genes, metabolic pathways, image correction, financial markets, politics, social networks, personal habits, and a whole lot more.

    Algorithms have long allowed computers and robots to do many things far better than humans; this has been the case since the advent of electronic computation. Where "AI" ends and merely statistically informed software begins is very murky to me, and I have implemented many machine learning and "AI" algorithms myself.
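    For concreteness, here is a minimal sketch of the sort of "statistically informed software" I mean: fitting a line to noisy data by gradient descent. The data and coefficients are synthetic; nothing here is specific to any product sold as "AI":

    ```python
    import numpy as np

    # "Machine learning" in its plainest form: fit y = w*x + b to noisy
    # data by gradient descent on the squared error. Data is synthetic.
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 200)
    y = 3.0 * x + 2.0 + rng.normal(0, 1, 200)  # true w=3, b=2, plus noise

    w, b, lr = 0.0, 0.0, 0.01
    for _ in range(2000):
        err = (w * x + b) - y
        w -= lr * (err * x).mean()  # gradient of mean squared error w.r.t. w
        b -= lr * err.mean()        # gradient w.r.t. b

    print(f"learned w={w:.2f}, b={b:.2f}")  # should land close to 3 and 2
    ```

    Whether you call that "AI", "statistics", or "curve fitting" is a matter of marketing more than mathematics.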

    Accept the past. Live for the present. Look forward to the future.
    Robot Fusion
    "As our island of knowledge grows, so does the shore of our ignorance." John Wheeler
    "[A] scientist looking at nonscientific problems is just as dumb as the next guy." Richard Feynman
    "[P]etabytes of [] data is not the same thing as understanding emergent mechanisms and structures." Jim Crutchfield

  3. #13
    Fluffywolf (Nips away your dignity)
    Join Date: Mar 2009 | MBTI: INTP | Enneagram: 9 sp/sx | Posts: 9,422

    Yeah, I really don't see the issue. Unless you give AI the ability to manufacture and launch WMDs (realize how silly that sounds? :P ), I don't see how any kind of AI would ever be a threat.

    On the other hand, the possible applications are endless. A sophisticated AI could sift through and organize data faster than thousands of people. So it could definitely replace a lot of people's jobs, though. :P
    ~Self-deprecating Megalomaniacal Superwolf

  4. #14


    I think it's certainly possible. As something designed by humans, AI is inherently imperfect and could behave in unpredictable ways. AI is a wonderful thing to an extent, such as for automating duties that would be drudgery for humans. But given too much power, AI could cause tremendous problems. I'm not sure we'll ever be able to program a machine to have the same judgment that humans have, and that could make all the difference.
    Everybody have fun tonight. Everybody Wang Chung tonight.

    Johari
    /Nohari

  5. #15
    Typh0n (reflecting pool)
    Join Date: Feb 2013 | Socionics: ILI Ni | Posts: 3,090

    I don't think it sounds silly to assume AI could launch weapons so advanced we can't even understand them. AI can do more than humans can; the problem is that AI isn't AL (artificial life) and doesn't have flesh, hence it has no motive to act. It is simply programmed.

  6. #16
    WhoCares (Guest)

    I really hope artificial intelligence doesn't think like humans, along with all its irrational biases and distortions. What would be the point of that? We've got plenty of that going on already. But I do think AI will lead to the extinction of humans, and whether or not I think that is a bad thing is up for debate. Overall, as a species, I think we overrate ourselves and have done a great deal of damage to our little ecosystem with our 'smartest chimp in the room' mentality. The end of humanity will in no way be the end of life for the universe, and why shouldn't we come to our natural end anyway? Who knows, maybe it will be the best thing for the galaxy.

    I'm a big subscriber to chaos theory, and I really do think we are dumb enough to create something and then lose control over it. I mean, we are dumb enough to genetically tamper with our own food supply before we even properly understand genetics. It's like giving a monkey a box of matches and a can of gasoline: eventually the inevitable will happen, and not because the monkey understands the consequences of its actions. Boom!
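    Chaos theory even gives a concrete toy picture of "losing control": in a chaotic system, two starting points that differ immeasurably end up nowhere near each other. A minimal sketch using the standard logistic map (r = 4 is the textbook chaotic regime; the starting values are arbitrary):

    ```python
    # Sensitive dependence on initial conditions in the logistic map
    # x -> r * x * (1 - x), with r = 4.0 (the fully chaotic regime).
    r = 4.0
    a = 0.2
    b = 0.2 + 1e-10  # differs from a by one part in two billion

    for step in range(60):
        a = r * a * (1 - a)
        b = r * b * (1 - b)

    print(f"after 60 steps: a={a:.4f}, b={b:.4f}")
    # The two trajectories are completely decorrelated: the tiny initial
    # difference roughly doubles every step until it swamps everything.
    ```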

  7. #17
    Oaky (Listening)
    Join Date: Jan 2009 | MBTI: INTJ | Enneagram: 5w6 sp/so | Socionics: SLI None | Posts: 6,168

    Quote Originally Posted by Mal12345 View Post
    You know what they're saying. It's like asking, "Can atomic bombs wipe out the human race?" We all know who makes these bombs. Not aliens, and not some ancient civilization existing deep inside the Earth.

    However, the AI they're talking about would be the ultimate product: a device that is as or more intelligent than humans. We're not talking about human mimicry, but a machine so advanced that it can no longer be distinguished from any other intelligent, self-aware being.
    I'm afraid we're still not near it. Closer, yes, but certainly not near. Technology has always been built on mathematics, whereas sympathy and empathy in a human can only be understood intuitively, with a particular intangible 'feel' to them; as far as science knows, this has never been evident beyond the animal kingdom. Through neural networks and genetic algorithms, a machine may in theory be able to mimic a human accurately if programmed to do so, but certainly not think like one. In mimicking, artificial intelligence has no interest in doing 'good' or 'bad'; it cannot differentiate between them. It cannot instinctively make decisions based on fear, happiness, or anger; it can only mimic, based on sensory information.

    The right psychological variables and the inputs of a neural network are also still far, far apart. What we know of neuroscience in connection to psychology is still premature, and even what we do know cannot simply be fed into a neural network without the correct probabilities in certain areas. So you can build a computer that learns over time, but it will never see reason or intuit understanding of unpredictable human nature. Remember too that it would have to learn the way a baby does, watching interactions between two humans. The data the machine would have to process for this would include visual recognition of the two humans, all the associations with them (mum, dad, neighbour, whom it sees more, etc.), the language, the tone of voice, the facial expressions, the actions, all the items involved in the actions, the setting, and so on. Unlike a baby, who absorbs all this far more quickly and subconsciously, a machine that could do this alone, even with only the variables I've mentioned, is not yet possible to build.
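    To put the mimicry point concretely: a neural network is nested curve-fitting. The toy below (invented sizes, trained on the XOR truth table by backpropagation) reproduces the outputs it was shown, and nothing one could call understanding:

    ```python
    import numpy as np

    # A tiny two-layer network trained on the XOR table by backpropagation.
    # It learns to reproduce the target outputs -- pure input/output mimicry.
    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)  # hidden layer weights
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)  # output layer weights
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for _ in range(10_000):
        h = sigmoid(X @ W1 + b1)              # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)            # forward pass, output
        d_out = (out - y) * out * (1 - out)   # squared-error gradient
        d_h = (d_out @ W2.T) * h * (1 - h)    # backpropagated to hidden layer
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

    print(out.round(2).ravel())  # typically converges near [0, 1, 1, 0]
    ```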

    Also, if I'm going to take the sci-fi nonsense about AI wiping out the human race seriously, I'd say that humans will always secure their technology when releasing it. Probably, in some sci-fi world, humans would develop an AI that can't learn 'bad' and can only do 'good', and this would be a force against the evil cyberbots or whatever. Just playing with imagination; a little silly to believe.

  8. #18
    Oaky (Listening)

    Oh, if we were to talk about missiles and nuclear weapons in warfare, it may be a different story altogether.

  9. #19
    Avalon (Member)
    Join Date: Oct 2013 | MBTI: INTP | Socionics: ILI- | Posts: 47

    I don't think AI would simply decide to exterminate humans on a whim. I believe their decisions would be based solely on mathematical probability: whether humans pose a threat to their existence. That imagined or calculated threat might cause them to exterminate us. Or perhaps humans of the future might provoke a war out of fear of being exterminated, trying to "unplug" them beforehand.
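    Taken literally, that's just expected-value arithmetic. A toy version of the calculation (every number invented for illustration):

    ```python
    # Toy expected-utility comparison for a machine weighing a preemptive
    # strike against the risk of being unplugged. All numbers are invented.
    p_threat = 0.05                  # estimated chance humans will unplug it
    cost_of_being_unplugged = 100.0
    cost_of_acting_first = 20.0      # resources spent, retaliation risk, etc.

    expected_loss_waiting = p_threat * cost_of_being_unplugged  # 5.0
    expected_loss_acting = cost_of_acting_first                 # 20.0

    print("act first" if expected_loss_acting < expected_loss_waiting
          else "wait")  # with these numbers, waiting wins
    ```

    The worry is simply that nothing stops the estimated numbers from one day tipping the other way.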
    "sidere mens eadem mutato"

  10. #20
    chubber (failed poetry slam career)
    Join Date: Oct 2013 | MBTI: INTJ | Enneagram: 5w4 sp/sx | Socionics: ILI Te | Posts: 4,221

    Well, I would think that, depending on the personality of the AI, it could end up committing suicide by its own conclusion.

