
Brainlike Computers, Learning From Experience

Vasilisa

Symbolic Herald
Joined
Feb 2, 2010
Messages
3,946
Instinctual Variant
so/sx
Brainlike Computers, Learning From Experience
By JOHN MARKOFF
December 28, 2013
The New York Times

Excerpt:
PALO ALTO, Calif. — Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.

“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.

Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.
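
To make the recipe metaphor concrete, here is a minimal sketch of a hand-programmed classifier (a made-up example, not drawn from the article): every step is fixed in advance, and the program can only "recognize" what those steps anticipate.

Code:
# A hand-written "recipe": classify a grayscale image patch as bright or dark.
# Every step is fixed in advance by the programmer; nothing is learned.

def classify_patch(pixels):
    """pixels: a list of brightness values in the range 0..255."""
    # Step 1: sum the brightness values.
    total = sum(pixels)
    # Step 2: divide by the number of pixels to get the average.
    average = total / len(pixels)
    # Step 3: compare the average against a threshold chosen by the programmer.
    return "bright" if average > 127 else "dark"

print(classify_patch([200, 210, 190, 220]))  # -> bright
print(classify_patch([10, 30, 25, 40]))      # -> dark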

But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.

In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.
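
The article doesn't spell out how the network worked. As a much simpler stand-in for the general idea of learning structure from unlabeled data (not Google's method; the toy points and cluster count below are made up for illustration), here is a small k-means clustering sketch:

Code:
import numpy as np

# Toy "image features": unlabeled 2-D points drawn from two groups.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.5, (50, 2)),    # one unlabeled group
                  rng.normal(3.0, 0.5, (50, 2))])   # another unlabeled group

# Plain k-means: the algorithm discovers the two groups without any labels.
k = 2
centers = data[rng.choice(len(data), k, replace=False)]
for _ in range(20):
    # Assign each point to its nearest center.
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Move each center to the mean of its assigned points.
    centers = np.array([data[labels == j].mean(axis=0) for j in range(k)])

print("discovered cluster centers:\n", centers)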

The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.

“We have no clue,” he said. “I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.”

Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of 1s and 0s. They generally store that information separately in what is known, colloquially, as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.

The data — for instance, temperatures for a climate model or letters for word processing — are shuttled in and out of the processor’s short-term memory while the computer carries out the programmed action. The result is then moved to its main memory.
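
As a caricature of that shuttling (a simplification for illustration, not a model of any real processor), the pattern looks roughly like this:

Code:
# Caricature of the von Neumann pattern: memory and processing are separate,
# and data is moved back and forth between them one step at a time.

main_memory = {"temps_c": [21.5, 22.0, 23.4, 19.8], "result": None}

def run_program(memory):
    register = 0.0                       # tiny "short-term" working storage
    for value in memory["temps_c"]:      # fetch each datum from memory
        register += value                # compute in the processor
    register /= len(memory["temps_c"])
    memory["result"] = register          # write the result back to memory
    return memory

print(run_program(main_memory)["result"])   # average temperature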

The new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

They are not “programmed.” Rather the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
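
A lot is compressed into that paragraph. As a loose sketch of the general idea only (a toy leaky integrate-and-fire neuron with a made-up weight update, not IBM's or any real chip design), the behavior looks something like this:

Code:
# One neuron-like element: inputs arrive through weighted connections, the
# membrane "charge" leaks over time, and crossing a threshold produces a spike.
# When an input coincides with a spike, its weight is nudged up -- a toy
# stand-in for the "learning by adjusting connections" described above.

weights = [0.3, 0.3, 0.3]   # connection strengths (illustrative values)
threshold = 1.0
leak = 0.9                  # fraction of charge kept each time step
potential = 0.0

# Each row is one time step of incoming activity on the three inputs.
input_stream = [
    [1, 0, 1],
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 0],
]

for t, inputs in enumerate(input_stream):
    potential = potential * leak + sum(w * x for w, x in zip(weights, inputs))
    if potential >= threshold:
        print(f"t={t}: spike!  weights before update: {weights}")
        # Strengthen the connections that contributed to this spike.
        weights = [w + 0.05 * x for w, x in zip(weights, inputs)]
        potential = 0.0      # reset after firing
    else:
        print(f"t={t}: no spike (potential={potential:.2f})")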

“Instead of bringing data to computation as we do today, we can now bring computation to data,” said Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort. “Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.”

< Read the Full Story >

 

Alea_iacta_est

New member
Joined
Dec 3, 2013
Messages
1,834
At what point will we have to give them natural rights, do you think? Or is true Artificial Intelligence a dream on the horizon that will be crushed by the weight of reality?
 

Mal12345

Permabanned
Joined
Apr 19, 2011
Messages
14,532
MBTI Type
IxTP
Enneagram
5w4
Instinctual Variant
sx/sp
At what point will we have to give them natural rights, do you think? Or is true Artificial Intelligence a dream on the horizon that will be crushed by the weight of reality?

How do you distinguish intelligence created by man from intelligence created by God or Natural Design?
 

Mal12345

Permabanned
Joined
Apr 19, 2011
Messages
14,532
MBTI Type
IxTP
Enneagram
5w4
Instinctual Variant
sx/sp
I don't like the term "intelligent design" there as it assumes God, so I stated it as Natural Design.
 

Alea_iacta_est

New member
Joined
Dec 3, 2013
Messages
1,834
How do you distinguish intelligence created by man from intelligence created by God or Natural Design?

That's an even better question for those that adhere to Religious Natural Design.

If god made us in his own image to live freely, will we make machines in our own image and allow them to live freely?
 

Mal12345

Permabanned
Joined
Apr 19, 2011
Messages
14,532
MBTI Type
IxTP
Enneagram
5w4
Instinctual Variant
sx/sp
That's an even better question for those that adhere to Religious Natural Design.

If god made us in his own image to live freely, will we make machines in our own image and allow them to live freely?

That's Intelligent Design, not Natural Design, as stated in my second post. Natural Design is to say that there is a design to nature that has no direct connection to God or gods. Either way, if it's intelligent and has free will, whether artificial or not, then it should have rights.
 

Alea_iacta_est

New member
Joined
Dec 3, 2013
Messages
1,834
That's Intelligent Design, not Natural Design, as stated in my second post. Natural Design is to say that there is a design to nature that has no direct connection to God or gods. Either way, if it's intelligent and has free will, whether artificial or not, then it should have rights.

I agree; after all, we are nothing but mortal computers.
 

Alea_iacta_est

New member
Joined
Dec 3, 2013
Messages
1,834
How do you know?

Because we made computers to do work that specific individuals couldn't (excepting the infamous human calculators who can beat computers at their own game of computation speed). We built them on principles of logic, which we attempt to use as they do but are ultimately no match for the utterly unbiased, objective logic utilized by computers. We poured the logical side of our minds into the creation of computers, much as we poured the emotional side of our minds into art. Ultimately, however, the emotional portion of our minds is inherently systematic, driven by the fluctuations of chemicals in our brains, much like the computers' reasoning. This would suggest that we are computers, or more likely, that computers are human. (Assuming there are no constructs such as the soul or any other religious effects.)
 

Mal12345

Permabanned
Joined
Apr 19, 2011
Messages
14,532
MBTI Type
IxTP
Enneagram
5w4
Instinctual Variant
sx/sp
Because we made computers to do work that specific individuals couldn't (excepting the infamous human calculators who can beat computers at their own game of computation speed). We built them on principles of logic, which we attempt to use as they do but are ultimately no match for the utterly unbiased, objective logic utilized by computers. We poured the logical side of our minds into the creation of computers, much as we poured the emotional side of our minds into art. Ultimately, however, the emotional portion of our minds is inherently systematic, driven by the fluctuations of chemicals in our brains, much like the computers' reasoning. This would suggest that we are computers, or more likely, that computers are human. (Assuming there are no constructs such as the soul or any other religious effects.)

That is incorrect, because the logical side of a human differs fundamentally from that of a computer. Yes, in a sense, logic is logic. But a computer's form of logic is binary-based; it is an invention of the human mind, but it doesn't reflect the operation of a human mind.
 

Alea_iacta_est

New member
Joined
Dec 3, 2013
Messages
1,834
That is incorrect, because the logical side of a human differs fundamentally from that of a computer. Yes, in a sense, logic is logic. But a computer's form of logic is binary-based; it is an invention of the human mind, but it doesn't reflect the operation of a human mind.

I yield then. Apologies for my incorrect assumption.
 

Mal12345

Permabanned
Joined
Apr 19, 2011
Messages
14,532
MBTI Type
IxTP
Enneagram
5w4
Instinctual Variant
sx/sp
I yield then. Apologies for my incorrect assumption.

So when or how can a binary-based computer ever be considered intelligent?

When is it possible to distinguish between something that appears to be intelligent and that which really is intelligent?

If its intellect is binary-based, then won't it always be artificial 'intelligence', that is, "intellect" only mimicking human intelligence (although without the admixture of emotions and moral valuations)?

Isn't the attempt to see computers as intelligent just anthropomorphizing them, as when theists anthropomorphize God as having a human form of intellect only with infinitely greater capacity?
 

Alea_iacta_est

New member
Joined
Dec 3, 2013
Messages
1,834
So when or how can a binary-based computer ever be considered intelligent?

When is it possible to distinguish between something that appears to be intelligent and that which really is intelligent?

If its intellect is binary-based, then won't it always be artificial 'intelligence', that is, "intellect" only mimicking human intelligence (although without the admixture of emotions and moral valuations)?

Isn't the attempt to see computers as intelligent just anthropomorphizing them, as when theists anthropomorphize God as having a human form of intellect only with infinitely greater capacity?

We don't have any evidence to discredit the claim that intelligence is limited to human nature, but nor do we have any evidence suggesting that intelligence is exclusive to humankind.

Would it be possible to manufacture a human brain in a computer without the use of binary-based computing? I'm no expert in computer science.

It would be us attempting to humanize the intellect of computers, though we would have no true perspective as to whether or not the computer is actually intelligent simply because we only view intelligence through our own experience of it within ourselves.
 

Mal12345

Permabanned
Joined
Apr 19, 2011
Messages
14,532
MBTI Type
IxTP
Enneagram
5w4
Instinctual Variant
sx/sp
We don't have any evidence to discredit the claim that intelligence is limited to human nature, but nor do we have any evidence suggesting that intelligence is exclusive to humankind.

Would it be possible to manufacture a human brain in a computer without the use of binary-based computing? I'm no expert in computer science.

It would be us attempting to humanize the intellect of computers, though we would have no true perspective as to whether or not the computer is actually intelligent simply because we only view intelligence through our own experience of it within ourselves.

It's like saying, "I've anthropomorphized it to such an extent that I can consider it to be intelligent." Like pointing out that a computer can drive a car because its programming and sensory input tell it when there's an obstacle in the way. Then, if you do this enough times with enough examples of various "intelligent" computers, you finally decide that, yes, this is real intelligence.
 

Qlip

Post Human Post
Joined
Jul 30, 2010
Messages
8,464
MBTI Type
ENFP
Enneagram
4w5
Instinctual Variant
sp/sx
[MENTION=13589]Mal12345[/MENTION]

So when or how can a binary-based computer ever be considered intelligent?
When is it possible to distinguish between something that appears to be intelligent and that which really is intelligent?


The generally agreed-upon test for this is the Turing Test.

The Turing test is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. In the original illustrative example, a human judge engages in natural language conversations with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.
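
As a very rough sketch of the protocol only (the respondents below are stand-in functions made up for illustration, not real conversational programs), the shape of the game looks something like this:

Code:
import random

# Stand-in respondents invented for illustration -- not real chat programs.
def human_respondent(question):
    return "I suppose it depends on what you mean by that."

def machine_respondent(question):
    return "I suppose it depends on what you mean by that."  # perfect mimicry

def imitation_game(judge, rounds=1000):
    """Run the game many times; return how often the judge spots the machine."""
    correct = 0
    for _ in range(rounds):
        respondents = [("human", human_respondent), ("machine", machine_respondent)]
        random.shuffle(respondents)                   # hide who is who
        labels = [label for label, _ in respondents]
        answers = [fn("How was your day?") for _, fn in respondents]
        guess = judge(answers)                        # judge sees only the text
        if labels[guess] == "machine":
            correct += 1
    return correct / rounds

# A judge who cannot tell the answers apart can only guess at random,
# so accuracy near 0.5 is what "passing the test" looks like in this toy setup.
guessing_judge = lambda answers: random.randrange(len(answers))
print("judge's accuracy:", imitation_game(guessing_judge))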

The fact of the matter is that it's still a very open philosophical question whether, from a subjective conscious being's point of view, any other being has consciousness at all. So indistinguishability has to be a 'good enough' test.


If its intellect is binary-based, then won't it always be artificial 'intelligence', that is, "intellect" only mimicking human intelligence (although without the admixture of emotions and moral valuations)?

Concerning mimicry, see above about indistinguishability. As for the possibility of computers having intelligence and consciousness, the theory rests on the idea that everything in existence follows rules and is mathematically predictable. In other words, everything is representable by an equation, including us. Anything that can manipulate mathematical symbols has the ability to emulate those patterns, including consciousness and intelligence.
 

Mal12345

Permabanned
Joined
Apr 19, 2011
Messages
14,532
MBTI Type
IxTP
Enneagram
5w4
Instinctual Variant
sx/sp
[MENTION=13589]Mal12345[/MENTION]

So when or how can a binary-based computer ever be considered intelligent?
When is it possible to distinguish between something that appears to be intelligent and that which really is intelligent?


The generally agreed-upon test for this is the Turing Test.



The fact of the matter is that it's still a very open philosophical question whether, from a subjective conscious being's point of view, any other being has consciousness at all. So indistinguishability has to be a 'good enough' test.


If its intellect is binary-based, then won't it always be artificial 'intelligence', that is, "intellect" only mimicking human intelligence (although without the admixture of emotions and moral valuations)?

Concerning mimicry, see above about indistinguishability. As for the possibility of computers having intelligence and consciousness, the theory rests on the idea that everything in existence follows rules and is mathematically predictable. In other words, everything is representable by an equation, including us. Anything that can manipulate mathematical symbols has the ability to emulate those patterns, including consciousness and intelligence.

The Turing test begs the question of what intelligence is. Google brings up this definition of 'intelligence': "1. the ability to acquire and apply knowledge and skills." However necessary those two parameters are, they are not sufficient to define intelligence, because that definition leaves out the important factor of *what* is acquiring and applying the skills. If it turns out that anything (not just a mind) can have knowledge, skills, and human-like intelligence, then by that definition such capacities can also belong to machines. Furthermore, as alea pointed out, machines can then be considered persons and granted the same rights as humans.
 

Qlip

Post Human Post
Joined
Jul 30, 2010
Messages
8,464
MBTI Type
ENFP
Enneagram
4w5
Instinctual Variant
sp/sx
The Turing test begs the question of what intelligence is. Google brings up this definition of 'intelligence': "1. the ability to acquire and apply knowledge and skills." However necessary those two parameters are, they are not sufficient to define intelligence, because that definition leaves out the important factor of *what* is acquiring and applying the skills. If it turns out that anything (not just a mind) can have knowledge, skills, and human-like intelligence, then by that definition such capacities can also belong to machines. Furthermore, as alea pointed out, machines can then be considered persons and granted the same rights as humans.

What I posted does not contest that. I was just answering your questions with the current scientific views on such things.

As far as the factor of *what* is acquiring, that's entirely a political and cultural question, should such a situation ever become important.
 

garbage

Guest
So, basically neural nets on chips? Not a bad idea.

And, yeah, one of the most-used standards for machine intelligence is "good enough to replicate a human," which in turn often comes down to whether a set of people, handed a bunch of outputs from people and machines, can tell which type of entity produced which output.
 