
How to Grow a Mind: Statistics, Structure, and Abstraction

Vasilisa

Symbolic Herald
Joined
Feb 2, 2010
Messages
3,946
Instinctual Variant
so/sx
How to Grow a Mind: Statistics, Structure, and Abstraction (pdf)
Science
11 March 2011
by Joshua B. Tenenbaum, Charles Kemp, Thomas L. Griffiths, and Noah D. Goodman

Abstract
In coming to understand the world—in learning concepts, acquiring language, and grasping causal relations—our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?​
Excerpt: The Challenge: How Does the Mind Get So Much from So Little?
For scientists studying how humans come to understand their world, the central challenge is this: How do our minds get so much from so little? We build rich causal models, make strong generalizations, and construct powerful abstractions, whereas the input data are sparse, noisy, and ambiguous—in every way far too limited. A massive mismatch looms between the information coming in through our senses and the outputs of cognition.

Consider the situation of a child learning the meanings of words. Any parent knows, and scientists have confirmed, that typical 2-year-olds can learn how to use a new word such as “horse” or “hairbrush” from seeing just a few examples. We know they grasp the meaning, not just the sound, because they generalize: They use the word appropriately (if not always perfectly) in new situations. Viewed as a computation on sensory input data, this is a remarkable feat. Within the infinite landscape of all possible objects, there is an infinite but still highly constrained subset that can be called “horses” and another for “hairbrushes.” How does a child grasp the boundaries of these subsets from seeing just one or a few examples of each? Adults face the challenge of learning entirely novel object concepts less often, but they can be just as good at it.

Generalization from sparse data is central in learning many aspects of language, such as syntactic constructions or morphological rules. It presents most starkly in causal learning: Every statistics class teaches that correlation does not imply causation, yet children routinely infer causal links from just a handful of events, far too small a sample to compute even a reliable correlation! Perhaps the deepest accomplishment of cognitive development is the construction of larger-scale systems of knowledge: intuitive theories of physics, psychology, or biology or rule systems for social structure or moral judgment. Building these systems takes years, much longer than learning a single new word or concept, but on this scale too the final product of learning far outstrips the data observed.

Philosophers have inquired into these puzzles for over two thousand years, most famously as “the problem of induction,” from Plato and Aristotle through Hume, Whewell, and Mill to Carnap, Quine, Goodman, and others in the 20th century. Only recently have these questions become accessible to science and engineering by viewing inductive learning as a species of computational problems and the human mind as a natural computer evolved for solving them. The proposed solutions are, in broad strokes, just what philosophers since Plato have suggested. If the mind goes beyond the data given, another source of information must make up the difference. Some more abstract background knowledge must generate and delimit the hypotheses learners consider, or meaningful generalization would be impossible. Psychologists and linguists speak of “constraints;” machine learning and artificial intelligence researchers, “inductive bias;” statisticians, “priors.”

This article reviews recent models of human learning and cognitive development arising at the intersection of these fields. What has come to be known as the “Bayesian” or “probabilistic” approach to reverse-engineering the mind has been heavily influenced by the engineering successes of Bayesian artificial intelligence and machine learning over the past two decades and, in return, has begun to inspire more powerful and more humanlike approaches to machine learning. As with “connectionist” or “neural network” models of cognition in the 1980s (the last moment when all these fields converged on a common paradigm for understanding the mind), the labels “Bayesian” or “probabilistic” are merely placeholders for a set of interrelated principles and theoretical claims. The key ideas can be thought of as proposals for how to answer three central questions:
  1. How does abstract knowledge guide learning and inference from sparse data?
  2. What forms does abstract knowledge take, across different domains and tasks?
  3. How is abstract knowledge itself acquired?
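The role of priors described above can be made concrete with a toy sketch (not from the paper; the hypothesis names, example words, and flat prior are all made up for illustration). Each hypothesis is a candidate extension for a new word like “horse”; a Bayesian learner scores hypotheses that contain the observed examples, and the “size principle” (smaller extensions assign higher likelihood to each example) makes the narrow hypothesis win rapidly as examples accumulate:

```python
# Toy Bayesian concept learning: how a prior over nested hypotheses plus
# the "size principle" yields strong generalization from a few examples.
# All hypothesis names and extensions here are invented for illustration.
hypotheses = {
    "horses":  {"pony", "mare", "stallion"},
    "animals": {"pony", "mare", "stallion", "dog", "cat", "cow"},
    "objects": {"pony", "mare", "stallion", "dog", "cat", "cow",
                "hairbrush", "cup", "chair"},
}
prior = {name: 1 / len(hypotheses) for name in hypotheses}  # flat prior

def posterior(examples):
    """P(h | examples) ∝ P(h) * (1/|h|)^n for hypotheses containing all examples."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in examples):
            # Each example is assumed drawn uniformly from the extension,
            # so smaller extensions are exponentially favored as n grows.
            scores[name] = prior[name] * (1 / len(extension)) ** len(examples)
        else:
            scores[name] = 0.0
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

print(posterior(["pony"]))                      # mild preference for "horses"
print(posterior(["pony", "mare", "stallion"]))  # "horses" now dominates
```

One example only weakly favors “horses” over “animals”; three examples consistent with “horses” make the broader hypotheses a suspicious coincidence, which is one way to cash out what a constraint or prior buys the learner.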
 

Octarine

The Eighth Colour
Joined
Oct 14, 2007
Messages
1,351
MBTI Type
Aeon
Enneagram
10w
Instinctual Variant
so
This is an excellent topic, I'm watching the lecture now.
 

Octarine

The Eighth Colour
Joined
Oct 14, 2007
Messages
1,351
MBTI Type
Aeon
Enneagram
10w
Instinctual Variant
so
I find it interesting to contrast our current approaches of top-down vs bottom-up modelling. The problem with traditional AI is that its bottom-up methodology was quite alien compared to the neurological conception of the mind. Networking theory (including neural networks) took the field a long way, but even if we were able to provide the scale required, we still don't understand the substructure involved. There have still been promising attempts, such as the Blue Brain project (modelling a rat brain).

But in terms of building brains, we already have the technology (at least in principle - we don't have the scale required) to image neuronal activity in 3D in real time.

The likelihood is that given the trends in the computer industry, we would be able to build a model of similar complexity to the human brain (on the neuronal level) by 2040. If we are able to correctly image and model the neuronal structure of real brains, along with giving it the correct sensory inputs, this could lead to the development of something that resembles a real brain.

But that doesn't mean we will necessarily understand how it works and I guess that is where the machine learning approaches discussed come in.

Rather than necessarily creating AI that resembles humans, this approach still has many applications. One of the most promising is using these models to do science: based on prior inputs, devising and choosing between competing hypotheses, then making further judgements based on the results. Given the ever increasing complexity of data sets, at some point this will be a major force in science, although some will not like it due to the black-box style of the approach. But since when do we truly understand even simple scientific phenomena anyway? It is all abstraction, and how different is trusting the machine learning approach from trusting other humans practising the same things?
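Choosing between competing hypotheses given data can be sketched with a minimal Bayesian model comparison (my own toy example, not anything from the paper): two point hypotheses about a coin's bias, compared by their likelihood ratio on the observed flips.

```python
import math

def log_likelihood(p_heads, heads, tails):
    """Log-probability of the observed flips under a fixed heads-probability."""
    return heads * math.log(p_heads) + tails * math.log(1 - p_heads)

def bayes_factor(heads, tails, p1=0.5, p2=0.8):
    """Likelihood ratio P(data | H1) / P(data | H2) for two point hypotheses
    (H1: fair coin, H2: coin biased toward heads). BF > 1 favors H1."""
    return math.exp(log_likelihood(p1, heads, tails)
                    - log_likelihood(p2, heads, tails))

# 12 heads out of 15 flips: BF < 1, so the data favor the biased hypothesis.
print(bayes_factor(heads=12, tails=3))
```

Real scientific model selection adds priors over hypotheses and integrates over free parameters, but the core move, scoring rival explanations by how well they predict the data, is the same.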

Additionally, one of the major benefits of this is to bridge the gap between our bottom up brain modelling techniques and our fuzzy and qualitative notions of psychology.

It is interesting for example, that the model that they used to predict "Human action understanding" did not assume that the behaviour of the object was 'rational', but merely probabilistically rational, subject to random variation as well as uncertainty about the level of information inputs.
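That notion of "probabilistically rational" behaviour is commonly formalized as softmax (noisy-rational) choice: the agent picks actions with probability increasing in their utility, with a temperature-like parameter controlling how close to fully rational it is. A minimal sketch (the utilities and parameter values are invented for illustration):

```python
import math

def softmax_choice_probs(utilities, beta=1.0):
    """P(action) ∝ exp(beta * utility).
    beta → ∞ approaches fully rational choice; beta = 0 is uniformly random."""
    exps = [math.exp(beta * u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# An agent valuing goal A at 3 and goal B at 1:
print(softmax_choice_probs([3.0, 1.0], beta=1.0))   # mostly, not always, A
print(softmax_choice_probs([3.0, 1.0], beta=10.0))  # near-deterministic A
```

Inverting a model like this (inferring utilities from observed choices) is how such frameworks read goals off behaviour without assuming the agent is perfectly rational.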

In terms of the examples of childhood learning, what interests me is the actual development of the different sorts of sorting structures at different scales.

The transition from mutual exclusivity, to associations of multiple properties and similarity, to a tree structure and the separate notions of form and structure. These notions are likely to develop even if they aren't explicitly taught. I believe that given the right conditions, children could effectively teach themselves to read for example.

One of the questions discussed the traditional neural network approach - trees all the way, but I think that there is definitely some sorting that goes on over time that selects for 'form' type sorting hierarchies at higher scales and the tree like structures at lower scales. (There is also the newer idea of developing neural networks with 'hidden layers' which may permit this.)

I am also interested in how people treat discontinuities between the connections of the 'form' and treelike hierarchies. You would naively expect that people would try to eliminate as many discontinuities and contradictions in their cognitive model as possible. But this isn't necessarily how people function, and it often isn't necessary.

Personally, I have had the goal of constructing a single hierarchical tree for my complete set of experiences. But it is hard to demonstrate this, except on a simplified qualitative level.
 

Coriolis

Si vis pacem, para bellum
Staff member
Joined
Apr 18, 2010
Messages
27,192
MBTI Type
INTJ
Enneagram
5w6
Instinctual Variant
sp/sx
Great link, Vasilisa - thanks for posting. Will read/watch when I have a chance.
 

entropie

Permabanned
Joined
Apr 24, 2008
Messages
16,767
MBTI Type
entp
Enneagram
783
Americans and statistics, the neverending story continues.. :) I'll have a look when I have my mind free from university stuff again in September

 