
  1. #31
    friendly and accessible boomslang's Avatar
    Join Date
    Sep 2014
    Enneagram
    8w9 sx/sp
    Socionics
    LIE Ni
    Posts
    206

    Default

    People are already robotic enough. We don't need actual robots putting us to shame.

    A counterpoint to those who believe robots "just wouldn't do war": pride isn't a prerequisite for starting a war or large-scale hostile action. To a well-developed AI, it may seem pertinent that humans be used much in the way lab rats are, or as forced labour. Robots can't be taught ethics; they can be programmed to become more sensitive to context through repeated exposure, but ultimately their processing is inherently mathematical and structured. It may mimic ethics, but it technically won't be ethics. Assuming that diligent programming would be geared towards optimising the efficiency of the robots, you might eventually see them calculating that it's more efficient for humans to do slave labour, or that eliminating humans altogether is the efficient way to conserve finite natural resources. If you think about it, human beings aren't exactly working together to do things the smart, efficient and diligent way. Too many cooks spoiling the broth and all that.

    The real issue is the stepwise progression in AI complexity and effectiveness. Allowing robots to process information and, in a certain sense, become 'self-aware' wouldn't be any huge issue initially. But as the decades pass, if that self-awareness keeps accumulating, eventually you're going to run into problems.

    But at the end of the day, if the human race isn't destroyed by space debris, it's going to be destroyed by hubris. May not necessarily be robots, but whatever it is will have hubris at its roots.

  2. #32
    Pubic Enemy #1 Crabs's Avatar
    Join Date
    Dec 2014
    Posts
    1,252

    Default

    Quote Originally Posted by Vasilisa View Post
    With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.


    Quote Originally Posted by boomslang View Post

    The real issue is the stepwise progression in AI complexity and effectiveness. Allowing robots to process information and, in a certain sense, become 'self-aware' wouldn't be any huge issue initially. But as the decades pass, if that self-awareness keeps accumulating, eventually you're going to run into problems.
    this is a super fascinating documentary that i watched last week about transhumanism, robotics and nanotechnology, echoing the apprehension expressed by elon musk, bill gates and stephen hawking. some interesting points that i recall:

    scientists transplanted a monkey's head onto another monkey's body

    a human brain was removed and kept alive outside of its host

    a mouse was implanted with a microchip which sent electrical signals to its brain, allowing a person to control its movements with a keyboard

    humans may have full-body robotic transplants in the future to make them more resilient, intelligent and efficient beings

    at about 15:40, the video starts talking about artificial intelligence being the biggest threat to mankind because the rate at which it's evolving far exceeds that of human intelligence


  3. #33
    Senior Member
    Join Date
    Aug 2010
    Posts
    690

    Default

    Indeed, Elon Musk bought portions of an AI company, iirc, so he had some influence on the project to make sure it didn't drift into dangerous waters. I'm optimistic, though, and I think we may have a singularity where man and machine mesh together into a Deus Ex symphony of awesomeness.

    edit: it seems like it was a donation

    FLI - Future of Life Institute

  4. #34
    Mojibake sprinkles's Avatar
    Join Date
    Jul 2012
    MBTI
    INFJ
    Posts
    2,968

    Default

    Quote Originally Posted by boomslang View Post
    People are already robotic enough. We don't need actual robots putting us to shame.

    A counterpoint to those who believe robots "just wouldn't do war": pride isn't a prerequisite for starting a war or large-scale hostile action. To a well-developed AI, it may seem pertinent that humans be used much in the way lab rats are, or as forced labour. Robots can't be taught ethics; they can be programmed to become more sensitive to context through repeated exposure, but ultimately their processing is inherently mathematical and structured. It may mimic ethics, but it technically won't be ethics. Assuming that diligent programming would be geared towards optimising the efficiency of the robots, you might eventually see them calculating that it's more efficient for humans to do slave labour, or that eliminating humans altogether is the efficient way to conserve finite natural resources. If you think about it, human beings aren't exactly working together to do things the smart, efficient and diligent way. Too many cooks spoiling the broth and all that.

    The real issue is the stepwise progression in AI complexity and effectiveness. Allowing robots to process information and, in a certain sense, become 'self-aware' wouldn't be any huge issue initially. But as the decades pass, if that self-awareness keeps accumulating, eventually you're going to run into problems.

    But at the end of the day, if the human race isn't destroyed by space debris, it's going to be destroyed by hubris. May not necessarily be robots, but whatever it is will have hubris at its roots.
    The needs of humans and machines don't overlap very much, so an AI would have little concern about the efficiency of humans.

    From an efficiency standpoint, the most efficient thing is actually to do as little as possible. From a conservation standpoint, the energy expended to enslave humans would cost more than simply ignoring them.

    Humans operate on different principles: because our bodies are basically furnaces, we constantly consume more energy than we produce, there's a net loss as heat and waste, and there's no way to shut that off, so we're always looking to maximize efficiency. Robots don't have that problem; they can hibernate and regulate their required energy intake. So I doubt they'd use humans for efficiency, because humans are by nature inefficient. Worst-case scenario, humans would simply be worthless.
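    A rough back-of-envelope sketch of that energy argument (every number here is an assumption for illustration only: roughly 100 W for a resting human, and a hypothetical robot that can drop to about 1 W in hibernation):

```python
# Back-of-envelope energy comparison; every figure is an illustrative assumption.
HUMAN_RESTING_POWER_W = 100.0   # rough resting metabolic rate of an adult human
ROBOT_HIBERNATE_POWER_W = 1.0   # assumed standby draw of a hypothetical robot

SECONDS_PER_DAY = 24 * 60 * 60
JOULES_PER_KWH = 3.6e6

human_daily_kwh = HUMAN_RESTING_POWER_W * SECONDS_PER_DAY / JOULES_PER_KWH
robot_daily_kwh = ROBOT_HIBERNATE_POWER_W * SECONDS_PER_DAY / JOULES_PER_KWH

print(f"Human (just staying alive): {human_daily_kwh:.2f} kWh/day")   # ~2.40
print(f"Robot (hibernating):        {robot_daily_kwh:.2f} kWh/day")   # ~0.02
```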

    Edit: Also think of how we regard animals in the modern world. Animals are hardly even used for work anymore; work animals have been replaced by machines. Humans are also animals, so a regression to using them for labour would not be very likely. To a robot, animals of all kinds would be a nuisance. A robot wouldn't even want to eat animals, which sets it even further apart from human behavior, because humans at least eat animals. Humans wouldn't even be worth as much as food. We'd be like pests.

  5. #35
    Member Tippo's Avatar
    Join Date
    Feb 2015
    MBTI
    ENTJ
    Posts
    94

    Default

    We are not there yet; it's arrogance to even contemplate it. Humans have a light that science has not yet replicated.

    - - - Updated - - -

    Been.... drinking and thinking.

  6. #36
    FRACTALICIOUS phobik's Avatar
    Join Date
    Apr 2009
    Posts
    7,368

    Default

    GP AI is here, a decade earlier



    To avoid criticism, do nothing, say nothing, be nothing.
    ~ Elbert Hubbard

    Music provides one of the clearest examples of a much deeper relation between mathematics and human experience.

  7. #37
    SpaceCadetGoldStarBrigade Population: 1's Avatar
    Join Date
    May 2017
    MBTI
    INFP
    Enneagram
    5w4 sp/sx
    Posts
    1,835

    Default

    I think the real threat is humans becoming machines through cybernetic implants. The elite will of course race towards immortality through cybernetics and, in doing so, gradually become the AI we fear. They will redefine humanity, and the unwashed masses will be subject to extermination.
    To give real service you must add something which cannot be bought or measured with money, and that is sincerity and integrity. Douglas Adams

    Mornings are for coffee and contemplation. Jim Hopper

  8. #38
    nee andante bechimo's Avatar
    Join Date
    Aug 2010
    Posts
    8,022

    Default

    It amazes me that others aren't at least somewhat concerned, given how buggy software development is.

  9. #39
    Member Straylight's Avatar
    Join Date
    May 2017
    MBTI
    INTP
    Enneagram
    5w4 sp/so
    Socionics
    INTj Ne
    Posts
    48

    Default

    If anyone here takes a serious interest in AI research and wants to know how far it's come, you should watch this MIT open course on the subject, published in 2015:



    Here is a link to a conference that took place in January 2017, involving Elon Musk, Max Tegmark, Ray Kurzweil, and other leading philosophers and scientists, on the subject of super-intelligence:




    Elon Musk presents the most relevant issues (imo) followed by Ray Kurzweil.

    Elon points out that his only concerns are the issue of bandwidth (human data output is extremely poor compared to machines, since we rely on "meat sticks" (fingers) and speech, whereas machines can output terabytes per second) and the issue of democratization.
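    A crude way to see the size of that bandwidth gap (the figures below are assumptions for illustration: a fast typist at about 80 words per minute, roughly 5 characters per word at one byte each, versus an assumed machine-to-machine link of 1 terabyte per second):

```python
# Rough comparison of human typing throughput vs. a machine data link.
# Every figure is an illustrative assumption, not a measurement.
words_per_minute = 80            # fast typist
bytes_per_word = 5               # ~5 characters per word, ~1 byte per character

human_bytes_per_second = words_per_minute * bytes_per_word / 60
machine_bytes_per_second = 1e12  # assumed 1 TB/s machine-to-machine link

ratio = machine_bytes_per_second / human_bytes_per_second
print(f"Human output:   {human_bytes_per_second:.1f} bytes/s")      # ~6.7
print(f"Machine output: {machine_bytes_per_second:.0e} bytes/s")    # 1e+12
print(f"The machine is roughly {ratio:.0e} times faster.")          # ~2e+11
```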

    The first issue could be resolved through mind-machine interfaces - a direct connection to the neocortex. There has actually been significant progress in the research and development of this technology, which could begin to see implementation within the next ten years. The second issue, democratization, requires that super-intelligent software be free and open-source, but also regulated by the government. If it is kept private, then whoever invents it first would have "first-mover" advantages, because the recursive growth becomes exponential within only a few days, and anyone who has access to super-intelligence would be able to run a number of different "Monte Carlo" simulations that instantly solve things like energy, stocks, military, transportation and distribution, weather patterns, biological systems like the brain, social constructs like personality and political sentiment analysis, etc.
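    For anyone unfamiliar with the term, a "Monte Carlo" simulation just means running many randomized trials of a process and averaging the outcomes. Here is a minimal, generic sketch (a toy dice example, not anything Musk specifically described):

```python
import random

def monte_carlo(simulate_once, trials=100_000):
    """Estimate the expected outcome of a random process by repeated sampling."""
    return sum(simulate_once() for _ in range(trials)) / trials

# Toy example: probability that two six-sided dice sum to at least 10.
def dice_trial():
    return random.randint(1, 6) + random.randint(1, 6) >= 10

print(f"P(sum >= 10) ~ {monte_carlo(dice_trial):.3f}")   # exact value is 6/36 ~ 0.167
```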

    Ray approaches the issue conceptually with a narrative about going back in time to the quintessential caveman and asking him: if he could have anything he wanted, what would it be? He tells you he would want things like a fire that never goes out, a bigger stone to block intruders from his cave, plentiful food, clean water, and lots of women to mate with. But you in turn ask him, "don't you want a better website and a new smartphone and a faster computer?" The point is, he has no concept of such things, because they are so far beyond what his present understanding allows him to imagine.

    To make the contrast even clearer, he asks the same question again, but this time you are talking to the smartest ape, millions of years ago. It tells you it wants delicious nuts and fruit, sexual partners, fewer predators, and things like that, but you ask it, "don't you want music, and math, and culture?" Again, these are concepts it is incapable of even imagining because of the limits of its present intelligence. The relationship between super-intelligence and our human intelligence would be similar. We cannot begin to speculate about what super-intelligent beings would desire to produce or have, or what their motives would be, because that is simply beyond the limits of our finite reasoning capacity.

    The conclusion, then, is that it is really pointless to speculate or worry about it existentially. Rather than being concerned with the possibility of machine super-intelligence "replacing" us, being malicious, or whatever else, I think it is better to approach the question from a pragmatic perspective the way Elon Musk does, by asking how it could best be implemented.

    By the way, for those of you too lazy to watch the conference fully, they all more or less agree that we will have machine super-intelligences within the next 5 years. Remember, these are statements coming from the world's leaders in science, philosophy, and business. They are not joking; these are not romantic speculations. They are saying "within the next 5 years" because that is in fact the proper time-frame based on what we currently know. Granted, there could be unforeseen setbacks; however, nobody who is a qualified expert in the field of artificial intelligence predicts there will be any.
    Formerly known as "Abraxas" on Personality Cafe, now retired.

  10. #40
    darkened dreams labyrinthine's Avatar
    Join Date
    Apr 2007
    MBTI
    isfp
    Enneagram
    4w5 sp/sx
    Posts
    8,586

    Default

    As long as they are nice to grandma
    Step into my metaphysical room of mirrors.
    Fear of reality creates myopic morality
    So I guess it means there is trouble until the robins come
    (from Blue Velvet)

    I want to be just like my mother, even if she is bat-shit crazy.

