
  1. #1
    Senior Member Mal12345's Avatar
    Join Date
    Apr 2011
    MBTI
    IxTP
    Enneagram
    5w4 sx/sp
    Socionics
    LII Ti
    Posts
    13,993

    Default Computers are getting Smarter

    http://www.dvice.com/2013-7-16/compu...four-year-olds

    The world's smartest computer has an IQ of 25.

    But when will computers match the creativity and mischievousness of a 4 year old?
    "Everyone has a plan till they get punched in the mouth." Mike Tyson
    “Culture?” says Paul McCartney. “This isn't culture. It's just a good laugh.”

  2. #2
    The Dark Lord The Wailing Specter's Avatar
    Join Date
    Jun 2013
    MBTI
    ENFP
    Enneagram
    6w7 sp/so
    Socionics
    ENFP Ne
    Posts
    3,267

    Default

    Quote Originally Posted by Mal+ View Post
    http://www.dvice.com/2013-7-16/compu...four-year-olds

    The world's smartest computer has an IQ of 25.

    But when will computers match the creativity and mischievousness of a 4 year old?
    Assuming exponential growth, not long…

  3. #3
    morose bourgeoisie
    Join Date
    Mar 2009
    MBTI
    INFP
    Posts
    3,859

    Default

I think it will be a while before computers are creative. Probably after quantum computers become more common, but I suspect that won't be enough.
Creativity is often measured by the ability to conjoin (seemingly) unrelated data in new and useful ways. Computers are not good at that at all. My hunch is that the randomness of creativity is made possible by the multiple inputs in the human system, meaning the constant ebb and flow of neurotransmitters, hormones, random environmental triggers, etc., none of which is experienced by a computer.
Maybe biologically based computers can have this type of randomness designed into their systems...like Blade Runner.

  4. #4
    Senior Member
    Join Date
    Dec 2008
    Posts
    4,226

    Default

    An attention-grabbing headline, nothing more.

    ConceptNet 4 did very well on a test of vocabulary and on a test of its ability to recognize similarities. “But ConceptNet 4 did dramatically worse than average on comprehension—the ‘why’ questions,”

    “If a child had scores that varied this much, it might be a symptom that something was wrong,” said Robert Sloan, professor and head of computer science at UIC, and one of the study’s authors.

    Programming a vocabulary into a computer system is relatively easy. I wouldn't call a spell-checker program 'intelligent' by any means. IBM's 'Deep Blue' beat Garry Kasparov at chess; is Deep Blue 'smart'? 'Watson' beat two Jeopardy! champions handily, is Watson 'smart'?

No. Deep Blue knows how to play chess; that's all it knows how to do. Watson is a little more intriguing in that it processes language-based questions and comes up with answers, but it does so by brute force, searching through its millions of pages of data for words that keep popping up in relation to the words in the question, and it spits out the words it 'thinks' are the answer based on a probability algorithm. You could ask Watson "Who won the World Series in 1907?" and it will very easily give you an answer. Ask a two-word question and it will struggle to find one; it doesn't 'understand' context.

    Nor does ConceptNet 4. What makes it seem 'smart' is the part of its programming that is a glorified dictionary. It 'knows' more words than a four-year-old, nothing more.
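To make that concrete, here's a toy sketch of the "words that keep popping up" idea in Python. Everything in it (the corpus snippets, the scoring rule) is invented for illustration; it is nothing like Watson's actual architecture, just brute-force word association in miniature.

```python
import re
from collections import Counter

# Toy "corpus" standing in for Watson's millions of pages
# (these snippets are made up for illustration).
corpus = [
    "The Chicago Cubs won the World Series in 1907, beating the Detroit Tigers.",
    "Garry Kasparov lost a chess match to IBM's Deep Blue in 1997.",
    "The Detroit Tigers lost the 1907 World Series to the Chicago Cubs.",
    "In 1907 the Cubs won their first World Series championship.",
]

def tokenize(text):
    return re.findall(r"[a-z0-9']+", text.lower())

def answer(question):
    """Score every non-question word by how often it co-occurs with the
    question's words, and return the top scorer -- pure brute force,
    no 'understanding' of context."""
    q_words = set(tokenize(question))
    scores = Counter()
    for page in corpus:
        tokens = tokenize(page)
        overlap = len(q_words & set(tokens))  # crude "relevance" of this page
        for word in tokens:
            if word not in q_words:
                scores[word] += overlap
    return scores.most_common(1)[0][0]

print(answer("Who won the World Series in 1907?"))  # prints "cubs"
```

Ask it a question the corpus doesn't cover and the same loop happily returns a confident nonsense answer, which is the point: it's probability over word co-occurrence, not comprehension.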

  5. #5
    & Badger, Ratty and Toad Mole's Avatar
    Join Date
    Mar 2008
    Posts
    18,536
    Quote Originally Posted by Stanton Moore View Post
I think it will be a while before computers are creative. Probably after quantum computers become more common, but I suspect that won't be enough.
Creativity is often measured by the ability to conjoin (seemingly) unrelated data in new and useful ways. Computers are not good at that at all. My hunch is that the randomness of creativity is made possible by the multiple inputs in the human system, meaning the constant ebb and flow of neurotransmitters, hormones, random environmental triggers, etc., none of which is experienced by a computer.
    Maybe biologically based computers can have this type of randomness designed into their systems...like Blade Runner.
    Creativity is over.

    Creativity is a function of literacy and literacy is now the content of the internet.

    And presence is the function of the internet.

    And so creativity has been subsumed by presence.

  6. #6
    redundant descriptor netzealot's Avatar
    Join Date
    Jan 2013
    MBTI
    ISTP
    Posts
    231

    Default

    Quote Originally Posted by 93JC View Post
No. Deep Blue knows how to play chess; that's all it knows how to do. Watson is a little more intriguing in that it processes language-based questions and comes up with answers, but it does so by brute force, searching through its millions of pages of data for words that keep popping up in relation to the words in the question, and it spits out the words it 'thinks' are the answer based on a probability algorithm. You could ask Watson "Who won the World Series in 1907?" and it will very easily give you an answer. Ask a two-word question and it will struggle to find one; it doesn't 'understand' context.
That is the misunderstanding about computer logic. Deep Blue doesn't 'know' anything... it operates pre-written functions, which have to be written based on things like what it means to win and the quantification of the available choices to create that condition in a massive array of if/then.

Basically, computers will never be able to "think" of something that humans cannot... they merely perform pre-written logic faster and more efficiently, and here is why: the perfect case of AI would be one where the logic was reverse-engineered and quantified perfectly. Considering that that is the idealization of AI, the perfect case of "learning" AI would be one where the process of reverse-engineering/quantification is performed perfectly by a computer and then re-integrated into its own logic. All very possible, in theory.

Let's say we had such learning-capable AI and pitted it against Garry Kasparov in its infantile form (knowing nothing). It would never grow to mimic what Deep Blue did, because a computer cannot know why it wants to win a game of chess in order to teach itself how to win. Deep Blue was specifically programmed to reverse-engineer the conditions of winning a game of chess by humans who understood the purpose it was designed for. Asking whether computers are good or evil is like asking whether a car is more or less evil than a bicycle... each just performs the same operation at a faster pace, but without a destination in mind they're static methods without an operator.

With that said, in a universal sense the most we could ever teach AI about its own purpose would be just that... to perform functions of intelligence faster. We may have some very capable computer technology in the future, and this technology could be used for any number of purposes (a robot army, even), but it will never become self-aware in a way that would lead to its own dominion, or be able to do anything on its own without some human motive behind it.
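The "massive array of if/then" point can be sketched with a toy minimax search over Nim (a pile of stones, each player removes 1 to 3, whoever takes the last stone wins). This is my illustration, not Deep Blue's code: the programmer supplies the win condition and the legal moves, and the machine only enumerates them.

```python
from functools import lru_cache

# Toy stand-in for exhaustive game search: the "win condition" (taking the
# last stone) and the move list (take 1, 2, or 3) are pre-written by a
# human; the machine just grinds through the consequences.

@lru_cache(maxsize=None)
def best_move(stones):
    """Return (move, can_win) for the player to act with `stones` left."""
    for take in (1, 2, 3):
        if take > stones:
            continue
        if take == stones:            # taking the last stone wins outright
            return (take, True)
        _, opponent_can_win = best_move(stones - take)
        if not opponent_can_win:      # leave the opponent a losing position
            return (take, True)
    return (1, False)                 # every legal move loses to perfect play

print(best_move(10))  # prints (2, True): take 2, leaving a multiple of 4
```

The program plays Nim perfectly without 'knowing' what a game is, or why winning matters: all of that meaning lives in the two lines a human wrote.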

  7. #7
    Senior Member
    Join Date
    Dec 2008
    Posts
    4,226

    Default

    Quote Originally Posted by LevelZeroHero View Post
    That is the misunderstanding about computer logic. Deep Blue doesn't 'know' anything... it operates pre-written functions, which have to be written based on things like what it means to win and the quantification of the available choices to create that condition in a massive array of if/then.
True. I used the word 'know' for lack of a better one. A computer doesn't really know anything; it is programmed. Deep Blue was programmed with the possible moves one could make in chess, and having been programmed with the best strategies for winning chess it beat Garry Kasparov through mathematical brute force. It didn't "figure out" how to beat Garry Kasparov by what we might call intuition or 'smarts'; it just did the preprogrammed math to calculate the moves that would give it the best probability of winning. The computer didn't do anything unless a human gave it instructions.

So when this AI system scores 25 on an IQ test, it is, as I said, the product of being a glorified dictionary, having been preprogrammed with a much greater vocabulary than a typical four-year-old has acquired. There's nothing special about that, nothing inherently 'smart' about that.


    (You might argue the same about people too. I know all sorts of random trivia: does that make me 'smart'? Or smarter than someone else? I would say no, it just means there's a lot of random junk facts in my memory. There's nothing special about it.)

  8. #8

    Default

    Quote Originally Posted by LevelZeroHero View Post
That is the misunderstanding about computer logic. Deep Blue doesn't 'know' anything... it operates pre-written functions, which have to be written based on things like what it means to win and the quantification of the available choices to create that condition in a massive array of if/then.

Basically, computers will never be able to "think" of something that humans cannot... they merely perform pre-written logic faster and more efficiently, and here is why: the perfect case of AI would be one where the logic was reverse-engineered and quantified perfectly. Considering that that is the idealization of AI, the perfect case of "learning" AI would be one where the process of reverse-engineering/quantification is performed perfectly by a computer and then re-integrated into its own logic. All very possible, in theory.

Let's say we had such learning-capable AI and pitted it against Garry Kasparov in its infantile form (knowing nothing). It would never grow to mimic what Deep Blue did, because a computer cannot know why it wants to win a game of chess in order to teach itself how to win. Deep Blue was specifically programmed to reverse-engineer the conditions of winning a game of chess by humans who understood the purpose it was designed for. Asking whether computers are good or evil is like asking whether a car is more or less evil than a bicycle... each just performs the same operation at a faster pace, but without a destination in mind they're static methods without an operator.

With that said, in a universal sense the most we could ever teach AI about its own purpose would be just that... to perform functions of intelligence faster. We may have some very capable computer technology in the future, and this technology could be used for any number of purposes (a robot army, even), but it will never become self-aware in a way that would lead to its own dominion, or be able to do anything on its own without some human motive behind it.
This perhaps gets philosophical, but who is to say that humans are not also just running pre-programmed instructions? Deep Blue beat Kasparov. It had a team of designers, programmers, and chess experts who created it in such a way that it beat Kasparov.

To say that a computer 'knows' something doesn't seem like a misnomer to me. Its way of 'knowing' may be different from human ways of 'knowing,' but we may come to realize that human knowledge is also quite mechanical.

We also call the information stored in our libraries and on the internet 'knowledge'. The way computers 'know' is similar to that.

Beyond that, there are dynamical systems that can produce chaotic behavior and randomness. These can be incorporated into computation.
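A standard textbook example of such a system is the logistic map; at r = 4 it is chaotic, so two nearly identical starting points quickly produce completely different trajectories. (The example is mine, for illustration.)

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).  At r = 4 the orbit is
# fully deterministic, yet its long-term behavior looks random.
def logistic_orbit(x0, r=4.0, steps=20):
    orbit = [x0]
    for _ in range(steps):
        x = orbit[-1]
        orbit.append(r * x * (1.0 - x))
    return orbit

# Sensitive dependence: start two orbits one part in a million apart
# and watch the gap between them after 20 iterations.
a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)
print(abs(a[-1] - b[-1]))
```

A deterministic rule this simple already defeats long-term prediction, which is one concrete sense in which "randomness" can be built into a computation.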

    Accept the past. Live for the present. Look forward to the future.
    Robot Fusion
    "As our island of knowledge grows, so does the shore of our ignorance." John Wheeler
    "[A] scientist looking at nonscientific problems is just as dumb as the next guy." Richard Feynman
    "[P]etabytes of [] data is not the same thing as understanding emergent mechanisms and structures." Jim Crutchfield

  9. #9
    Senior Member INTP's Avatar
    Join Date
    Jul 2009
    MBTI
    intp
    Enneagram
    5w4 sx
    Posts
    7,823

    Default

My ENTP friend is developing an artificial intelligence; shouldn't take long till it takes over the world.
    "Where wisdom reigns, there is no conflict between thinking and feeling."
    — C.G. Jung


