
Computers are getting Smarter

Stanton Moore

morose bourgeoisie
Joined
Mar 4, 2009
Messages
3,900
MBTI Type
INFP
I think it will be a while before computers are creative. Probably after quantum computers become more common, but I suspect that won't be enough.
Creativity is often measured by the ability to conjoin (seemingly) unrelated data in new and useful ways. Computers are not good at that at all. My hunch is that the randomness of creativity is made possible by the multiple inputs in the human system, meaning the constant ebb and flow of neurotransmitters, hormones, random environmental triggers, etc.; none of which is experienced by a computer.
Maybe biologically based computers can have this type of randomness designed into their systems...like Blade Runner.
 

93JC

Active member
Joined
Dec 17, 2008
Messages
3,989
An attention-grabbing headline, nothing more.

ConceptNet 4 did very well on a test of vocabulary and on a test of its ability to recognize similarities. “But ConceptNet 4 did dramatically worse than average on comprehension—the ‘why’ questions,”

“If a child had scores that varied this much, it might be a symptom that something was wrong,” said Robert Sloan, professor and head of computer science at UIC, and one of the study’s authors.

Programming a vocabulary into a computer system is relatively easy. I wouldn't call a spell-checker program 'intelligent' by any means. IBM's 'Deep Blue' beat Garry Kasparov at chess; is Deep Blue 'smart'? 'Watson' beat two Jeopardy! champions handily, is Watson 'smart'?

No. Deep Blue knows how to play chess; that's all it knows how to do. Watson is a little more intriguing in that it processes language-based questions and comes up with answers, but it does so by brute force: searching through its millions of pages of data for words that keep popping up in relation to the words in the question, then spitting out the words it 'thinks' are the answer based on a probability algorithm. You could ask Watson "Who won the World Series in 1907?" and it will very easily give you an answer. Ask it a two-word question and it will struggle to find an answer; it doesn't 'understand' context.
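To illustrate the idea (a toy sketch of my own, not Watson's actual pipeline; the corpus and scoring are invented for the example): score every word in a corpus by how often it co-occurs with the question's words, then spit out the top scorer.

import re
from collections import Counter

# Toy illustration of brute-force question answering: reward every
# corpus word for co-occurring with the question's words, then return
# the highest scorer. This is NOT Watson's real architecture.

CORPUS = [
    "The Chicago Cubs won the 1907 World Series against the Detroit Tigers.",
    "The Cubs swept Detroit in the 1907 World Series.",
    "The Cubs were National League champions in 1907 and 1908.",
]

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def answer(question):
    q_words = set(tokenize(question))
    scores = Counter()
    for sentence in CORPUS:
        words = tokenize(sentence)
        overlap = len(q_words & set(words))
        for w in words:
            if w not in q_words:      # only non-question words can be answers
                scores[w] += overlap  # reward co-occurrence with the question
    return scores.most_common(1)[0][0]

print(answer("Who won the World Series in 1907?"))  # -> 'cubs'

Nothing in there understands baseball; it counts. And a two-word question gives the scores almost nothing to work with, which is exactly the failure described above.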

Nor does ConceptNet 4. What makes it seem 'smart' is the part of its programming that is a glorified dictionary. It 'knows' more words than a four-year-old, nothing more.
 

Mole

Permabanned
Joined
Mar 20, 2008
Messages
20,284
I think it will be a while before computers are creative. Probably after quantum computers become more common, but I suspect that won't be enough.
Creativity is often measured by the ability to conjoin (seemingly) unrelated data in new and useful ways. Computers are not good at that at all. My hunch is that the randomness of creativity is made possible by the multiple inputs in the human system, meaning the constant ebb and flow of neurotransmitters, hormones, random environmental triggers, etc.; none of which is experienced by a computer.
Maybe biologically based computers can have this type of randomness designed into their systems...like Blade Runner.

Creativity is over.

Creativity is a function of literacy, and literacy is now the content of the internet.

And presence is the function of the internet.

And so creativity has been subsumed by presence.
 

netzealot

redundant descriptor
Joined
Jan 12, 2013
Messages
228
MBTI Type
ISTP
No. Deep Blue knows how to play chess; that's all it knows how to do. Watson is a little more intriguing in that it processes language-based questions and comes up with answers, but it does so by brute force: searching through its millions of pages of data for words that keep popping up in relation to the words in the question, then spitting out the words it 'thinks' are the answer based on a probability algorithm. You could ask Watson "Who won the World Series in 1907?" and it will very easily give you an answer. Ask it a two-word question and it will struggle to find an answer; it doesn't 'understand' context.

That is the misunderstanding about computer logic. Deep Blue doesn't 'know' anything... it executes pre-written functions, which have to be written based on things like what it means to win and the quantification of the available choices to create that condition, all in a massive array of if/then statements.
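To make that concrete, here is a toy sketch (mine, nothing to do with IBM's actual code) of pre-written win-seeking logic for Nim, a game vastly simpler than chess: one pile of sticks, each player takes 1 to 3 per turn, and whoever takes the last stick wins.

# Minimal game-tree search for one-pile Nim. The program doesn't 'want'
# to win; the programmer encoded what winning means, and the function
# exhaustively follows that encoding.

def best_move(sticks):
    """Return (move, True if the current player can force a win)."""
    for take in (1, 2, 3):
        if take == sticks:
            return take, True            # taking the last stick wins outright
        if take < sticks:
            _, opponent_can_win = best_move(sticks - take)
            if not opponent_can_win:     # leave the opponent in a losing spot
                return take, True
    return 1, False                      # every move loses against best play

move, winning = best_move(10)
print(move, winning)  # -> 2 True (leaving a multiple of 4 forces a win)

Deep Blue's search was incomparably larger and cleverer, but the principle is the same: a quantified definition of winning, ground out mechanically.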

Basically, computers will never be able to "think" of something that humans cannot... they merely perform pre-written logic faster and more efficiently, and here is why: the perfect case of AI would be one where the logic was reverse-engineered and quantified perfectly. Given that idealization of AI, the perfect case of "learning" AI would be one where the process of reverse-engineering and quantification is itself performed perfectly by a computer and then re-integrated into its own logic. All very possible, in theory.

Let's say we had such learning-capable AI and pitted it against Garry Kasparov in its infantile form (knowing nothing). It would never grow to mimic what Deep Blue did, because a computer cannot know why it wants to win a game of chess in order to teach itself how to win. Deep Blue was specifically programmed to reverse-engineer the conditions of winning a game of chess by humans who understood the purpose it was designed for. Asking whether computers are good or evil is like asking whether a car is more or less evil than a bicycle... a computer only does the same operations at a faster pace, and without a destination in mind it is a static method without an operator.

With that said, in a universal sense the most we could ever teach AI about its own purpose would be just that... to perform functions of intelligence faster. We may have some very capable computer technology in the future, and this technology could be used for any number of purposes (a robot army, even), but it will never become self-aware in a way that would lead to its own dominion, or be able to do anything on its own without some sort of human motive behind it.
 

93JC

Active member
Joined
Dec 17, 2008
Messages
3,989
That is the misunderstanding about computer logic. Deep Blue doesn't 'know' anything... it executes pre-written functions, which have to be written based on things like what it means to win and the quantification of the available choices to create that condition, all in a massive array of if/then statements.

True. I used the word 'know' for lack of a better one. A computer doesn't really know anything; it is programmed. Deep Blue was programmed with the possible moves one could make in chess and, having been programmed with the best strategies for winning, it beat Garry Kasparov through mathematical brute force. It didn't "figure out" how to beat Garry Kasparov by what we might call intuition or 'smarts'; it just did the preprogrammed math to calculate the moves that would give it the best probability of winning. The computer didn't do anything unless a human gave it instructions.

So when this AI system scores 25 on an IQ test, it is, as I said, the product of being a glorified dictionary, having been preprogrammed with a much greater vocabulary than a typical four-year-old has acquired. There's nothing special about that, nothing inherently 'smart' about that.


(You might argue the same about people too. I know all sorts of random trivia: does that make me 'smart'? Or smarter than someone else? I would say no, it just means there are a lot of random junk facts in my memory. There's nothing special about it.)
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
That is the misunderstanding about computer logic. Deep Blue doesn't 'know' anything... it executes pre-written functions, which have to be written based on things like what it means to win and the quantification of the available choices to create that condition, all in a massive array of if/then statements.

Basically, computers will never be able to "think" of something that humans cannot... they merely perform pre-written logic faster and more efficiently, and here is why: the perfect case of AI would be one where the logic was reverse-engineered and quantified perfectly. Given that idealization of AI, the perfect case of "learning" AI would be one where the process of reverse-engineering and quantification is itself performed perfectly by a computer and then re-integrated into its own logic. All very possible, in theory.

Let's say we had such learning-capable AI and pitted it against Garry Kasparov in its infantile form (knowing nothing). It would never grow to mimic what Deep Blue did, because a computer cannot know why it wants to win a game of chess in order to teach itself how to win. Deep Blue was specifically programmed to reverse-engineer the conditions of winning a game of chess by humans who understood the purpose it was designed for. Asking whether computers are good or evil is like asking whether a car is more or less evil than a bicycle... a computer only does the same operations at a faster pace, and without a destination in mind it is a static method without an operator.

With that said, in a universal sense the most we could ever teach AI about its own purpose would be just that... to perform functions of intelligence faster. We may have some very capable computer technology in the future, and this technology could be used for any number of purposes (a robot army, even), but it will never become self-aware in a way that would lead to its own dominion, or be able to do anything on its own without some sort of human motive behind it.

This perhaps gets philosophical, but who is to say that humans are not also just running pre-programmed instructions? Deep Blue beat Kasparov. It had a team of designers, programmers, and chess experts who created it in such a way that it beat Kasparov.

To say that a computer 'knows' something doesn't seem like a misnomer to me. Its way of 'knowing' may be different from human ways of 'knowing', but we may come to realize that human knowledge is also quite mechanical.

We also call the information stored in our libraries and on the internet 'knowledge'. The way computers 'know' is similar to that.

Beyond that, there are dynamical systems that can produce chaotic behavior and randomness. These can be incorporated into computation, as in the sketch below.
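For instance (a standard textbook example, not tied to any particular AI system), the logistic map is a one-line deterministic rule whose output becomes chaotic for r near 4:

# The logistic map x -> r*x*(1-x): deterministic arithmetic that
# produces chaotic, effectively unpredictable output for r near 4.

def logistic_stream(x0=0.123456789, r=3.99, n=10):
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)   # one deterministic update step
        yield x

for value in logistic_stream():
    print(round(value, 6))

Two starting values that differ in the tenth decimal place diverge completely within a few dozen iterations, which is one way to fold chaos-driven 'randomness' into an otherwise deterministic computation.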
 

INTP

Active member
Joined
Jul 31, 2009
Messages
7,803
MBTI Type
intp
Enneagram
5w4
Instinctual Variant
sx
My ENTP friend is developing an artificial intelligence; it shouldn't take long until it takes over the world.
 