
  1. #1
    Symbolic Herald Vasilisa's Avatar
    Join Date
    Feb 2010
    Posts
    4,128

    Default How to Grow a Mind: Statistics, Structure, and Abstraction

How to Grow a Mind: Statistics, Structure, and Abstraction (PDF)
    Science
    11 March 2011
    by Joshua B. Tenenbaum, Charles Kemp, Thomas L. Griffiths, and Noah D. Goodman

    Abstract
    In coming to understand the world—in learning concepts, acquiring language, and grasping causal relations—our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?
    Excerpt: The Challenge: How Does the Mind Get So Much from So Little?
    For scientists studying how humans come to understand their world, the central challenge is this: How do our minds get so much from so little? We build rich causal models, make strong generalizations, and construct powerful abstractions, whereas the input data are sparse, noisy, and ambiguous—in every way far too limited. A massive mismatch looms between the information coming in through our senses and the outputs of cognition.

    Consider the situation of a child learning the meanings of words. Any parent knows, and scientists have confirmed, that typical 2-year-olds can learn how to use a new word such as “horse” or “hairbrush” from seeing just a few examples. We know they grasp the meaning, not just the sound, because they generalize: They use the word appropriately (if not always perfectly) in new situations. Viewed as a computation on sensory input data, this is a remarkable feat. Within the infinite landscape of all possible objects, there is an infinite but still highly constrained subset that can be called “horses” and another for “hairbrushes.” How does a child grasp the boundaries of these subsets from seeing just one or a few examples of each? Adults face the challenge of learning entirely novel object concepts less often, but they can be just as good at it.
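    The kind of computation the review goes on to formalize can be sketched in miniature. The toy example below is not the authors' model: the hypothesis space, object names, and prior are all invented for illustration. It scores candidate word meanings (sets of objects) by Bayes' rule with the "size principle" likelihood, under which a few examples are enough to favor the smallest hypothesis consistent with them:

    ```python
    # Toy Bayesian concept learning with the "size principle":
    # candidate word meanings are sets of objects, the prior is uniform,
    # and each example is assumed drawn uniformly from the true concept,
    # so smaller consistent hypotheses get higher likelihood.
    # All hypothesis and object names here are invented.

    hypotheses = {
        "dalmatians": {"rex", "spot"},
        "dogs":       {"rex", "spot", "fido", "lassie"},
        "animals":    {"rex", "spot", "fido", "lassie", "whiskers", "tweety"},
    }

    def posterior(examples, hypotheses):
        """P(h | examples) ∝ P(h) * Π 1/|h|; zero if an example falls outside h."""
        scores = {}
        for name, members in hypotheses.items():
            if all(x in members for x in examples):
                scores[name] = (1.0 / len(members)) ** len(examples)  # size principle
            else:
                scores[name] = 0.0
        total = sum(scores.values())
        return {name: s / total for name, s in scores.items()}

    # One example is ambiguous; three examples drawn from the smallest
    # hypothesis make it strongly preferred.
    print(posterior(["rex"], hypotheses))
    print(posterior(["rex", "spot", "rex"], hypotheses))
    ```

    With one example, all three hypotheses remain live; after three examples consistent with "dalmatians", that hypothesis dominates, which is the sharpening-with-sparse-data behavior the excerpt describes.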

    Generalization from sparse data is central in learning many aspects of language, such as syntactic constructions or morphological rules. It presents most starkly in causal learning: Every statistics class teaches that correlation does not imply causation, yet children routinely infer causal links from just a handful of events, far too small a sample to compute even a reliable correlation! Perhaps the deepest accomplishment of cognitive development is the construction of larger-scale systems of knowledge: intuitive theories of physics, psychology, or biology or rule systems for social structure or moral judgment. Building these systems takes years, much longer than learning a single new word or concept, but on this scale too the final product of learning far outstrips the data observed.
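    The point about causal learning outpacing correlation can be made concrete with a toy Bayes-factor calculation (again, not from the paper; the assumed base rate of 0.5 and the trial counts are invented). Comparing "C causes E with unknown strength" against "E occurs independently of C" already favors a causal link after only four events:

    ```python
    # Toy causal inference from a handful of events, via a Bayes factor.
    # Assumptions (invented for illustration): under "no link" the effect E
    # has a known base rate of 0.5; under "link" its probability after the
    # candidate cause C is unknown, with a uniform prior.
    from math import factorial

    def ml_link(k, n):
        """Marginal likelihood of a specific sequence of n trials with k
        effect-occurrences, under 'C causes E with unknown strength p':
        ∫ p^k (1-p)^(n-k) dp = k!(n-k)!/(n+1)!  (uniform prior on p)."""
        return factorial(k) * factorial(n - k) / factorial(n + 1)

    def ml_no_link(k, n, base_rate=0.5):
        """Same sequence under 'E is independent of C' with a known base rate."""
        return base_rate**k * (1 - base_rate) ** (n - k)

    # Four trials; the effect followed the candidate cause every time.
    k, n = 4, 4
    bayes_factor = ml_link(k, n) / ml_no_link(k, n)
    print(bayes_factor)  # 3.2: the data already favor a causal link
    ```

    Four observations are far too few to estimate a correlation, yet the comparison between structured hypotheses extracts a usable signal from them.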

    Philosophers have inquired into these puzzles for over two thousand years, most famously as “the problem of induction,” from Plato and Aristotle through Hume, Whewell, and Mill to Carnap, Quine, Goodman, and others in the 20th century. Only recently have these questions become accessible to science and engineering by viewing inductive learning as a species of computational problems and the human mind as a natural computer evolved for solving them. The proposed solutions are, in broad strokes, just what philosophers since Plato have suggested. If the mind goes beyond the data given, another source of information must make up the difference. Some more abstract background knowledge must generate and delimit the hypotheses learners consider, or meaningful generalization would be impossible. Psychologists and linguists speak of “constraints;” machine learning and artificial intelligence researchers, “inductive bias;” statisticians, “priors.”

    This article reviews recent models of human learning and cognitive development arising at the intersection of these fields. What has come to be known as the “Bayesian” or “probabilistic” approach to reverse-engineering the mind has been heavily influenced by the engineering successes of Bayesian artificial intelligence and machine learning over the past two decades and, in return, has begun to inspire more powerful and more humanlike approaches to machine learning. As with “connectionist” or “neural network” models of cognition in the 1980s (the last moment when all these fields converged on a common paradigm for understanding the mind), the labels “Bayesian” or “probabilistic” are merely placeholders for a set of interrelated principles and theoretical claims. The key ideas can be thought of as proposals for how to answer three central questions:
    1. How does abstract knowledge guide learning and inference from sparse data?
    2. What forms does abstract knowledge take, across different domains and tasks?
    3. How is abstract knowledge itself acquired?
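    The third question, how abstract knowledge is itself acquired, is the one hierarchical Bayesian models address. A minimal sketch (invented for illustration, in the spirit of the "overhypothesis" examples this literature uses with bags of marbles): after seeing a few bags that are each internally uniform in color, a learner infers the abstract regularity "bags are uniform" and can then generalize from a single marble of a brand-new bag.

    ```python
    # Toy hierarchical ("overhypothesis") learning. Two abstract hypotheses
    # about bags of marbles, with two equally likely colors:
    #   "uniform": each bag is all one color (the color picked at random)
    #   "mixed":   each bag mixes the two colors 50/50
    # Scenario and numbers are invented for illustration.

    def lik(bag, hypothesis):
        """Likelihood of an observed sequence of marble colors from one bag."""
        if hypothesis == "uniform":
            # all draws share the bag's single color; 2 equally likely colors
            return 0.5 if len(set(bag)) == 1 else 0.0
        else:  # "mixed"
            return 0.5 ** len(bag)

    def posterior_overhypothesis(bags, prior=0.5):
        """P(uniform | bags): multiply in each bag's likelihood, normalize."""
        pu, pm = prior, 1 - prior
        for bag in bags:
            pu *= lik(bag, "uniform")
            pm *= lik(bag, "mixed")
        return pu / (pu + pm)

    # Three observed bags, three draws each, every bag internally uniform.
    bags = [["black"] * 3, ["white"] * 3, ["black"] * 3]
    p_uniform = posterior_overhypothesis(bags)

    # Prediction for a NEW bag after a single black marble:
    # under "uniform" the next draw is certainly black; under "mixed", 50/50.
    p_next_black = p_uniform * 1.0 + (1 - p_uniform) * 0.5
    print(p_uniform, p_next_black)
    ```

    The abstract knowledge (bags tend to be uniform) is learned from data at one level and then constrains inference at the level below, so one example of a new bag supports near-certain generalization.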

    the formless thing which gives things form!
    Found Forum Haiku Project


    Positive Spin | your feedback welcomed | Darker Criticism

  2. #2
    The Eighth Colour Octarine's Avatar
    Join Date
    Oct 2007
    MBTI
    Aeon
    Enneagram
    10w so
    Socionics
    LOL
    Posts
    1,366

    Default

    This is an excellent topic, I'm watching the lecture now.

  3. #3
    The Eighth Colour Octarine's Avatar
    Join Date
    Oct 2007
    MBTI
    Aeon
    Enneagram
    10w so
    Socionics
    LOL
    Posts
    1,366

    Default

    I find it interesting to contrast our current approaches of top-down vs. bottom-up modelling. The problem with traditional AI is that the bottom-up methodology it used was quite alien compared to the neurological conception of the mind. Network theory (including neural networks) took the field a long way, but even if we were able to provide the scale required, we still don't understand the substructure involved. There have still been promising attempts, such as the Blue Brain Project (modelling a rat brain).

    But in terms of building brains, we already have the technology (at least in principle - we don't have the scale required) to image neuronal activity in 3D in real time.

    The likelihood is that given the trends in the computer industry, we would be able to build a model of similar complexity to the human brain (on the neuronal level) by 2040. If we are able to correctly image and model the neuronal structure of real brains, along with giving it the correct sensory inputs, this could lead to the development of something that resembles a real brain.

    But that doesn't mean we will necessarily understand how it works and I guess that is where the machine learning approaches discussed come in.

    Rather than necessarily creating AI that resembles humans, this approach still has many applications. One of the most promising is using these models to do science: based on prior inputs, devise and choose between competing hypotheses, and then make further judgements based on the results. Given the ever-increasing complexity of data sets, at some point this will be a major force in science, although some will not like it due to the black-box style of approach. But since when do we truly understand even simple scientific phenomena anyway? It is all abstraction, and how is trusting the machine learning approach different from trusting other humans practising the same things?
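    The "choose between competing hypotheses, then judge again as results arrive" loop can be sketched as sequential Bayesian model comparison. Everything here is invented for illustration: three candidate models of a binary measurement's success rate and a made-up data stream.

    ```python
    # Sequential Bayesian model comparison: maintain a posterior over
    # competing models and update it after every new experimental result.
    # Models and data are invented for illustration.
    models = {"M_low": 0.2, "M_half": 0.5, "M_high": 0.8}
    posterior = {m: 1 / 3 for m in models}  # uniform prior over models

    data = [1, 1, 0, 1, 1, 1, 0, 1]  # observed binary outcomes, in order

    for x in data:
        # multiply in each model's likelihood for the new outcome, renormalize
        for m, rate in models.items():
            posterior[m] *= rate if x == 1 else 1 - rate
        z = sum(posterior.values())
        posterior = {m: p / z for m, p in posterior.items()}

    best = max(posterior, key=posterior.get)
    print(best, posterior[best])
    ```

    After eight observations the posterior concentrates on one model; the same loop scales to hypothesis spaces far too large for a human to sift by hand, which is where the "doing science" application comes in.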

    Additionally, one of the major benefits of this is to bridge the gap between our bottom up brain modelling techniques and our fuzzy and qualitative notions of psychology.

    It is interesting for example, that the model that they used to predict "Human action understanding" did not assume that the behaviour of the object was 'rational', but merely probabilistically rational, subject to random variation as well as uncertainty about the level of information inputs.
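    The "probabilistically rational" actor mentioned above is commonly modeled with a softmax (Luce/Boltzmann) choice rule: actions with higher expected utility are more likely, but not certain, to be chosen. A minimal sketch, with invented action names and utilities:

    ```python
    # Softmax choice rule for a noisily rational agent:
    # P(action) ∝ exp(beta * utility). Utilities are invented for illustration.
    from math import exp

    def choice_probs(utilities, beta):
        """beta → ∞ approaches fully rational choice; beta = 0 is random."""
        weights = {a: exp(beta * u) for a, u in utilities.items()}
        z = sum(weights.values())
        return {a: w / z for a, w in weights.items()}

    utilities = {"shortcut": 2.0, "long_path": 1.0, "wander": 0.0}
    print(choice_probs(utilities, beta=0.0))  # uniform: no rationality assumed
    print(choice_probs(utilities, beta=2.0))  # mostly, but not always, optimal
    ```

    Inverting this rule (inferring utilities from observed choices) is how such models recover goals from behavior without assuming the agent acts perfectly.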

    In terms of the examples of childhood learning, what interests me is the actual development of the different sorts of sorting structures at different scales.

    The transition runs from mutual exclusivity, to associations of multiple properties and similarity, to a tree structure and the separate notions of form and structure. These notions are likely to develop even if they aren't explicitly taught. I believe that, given the right conditions, children could effectively teach themselves to read, for example.

    One of the questions discussed the traditional neural network approach. I think that there is definitely some sorting that goes on over time that selects for 'form'-type sorting hierarchies at higher scales and tree-like structures at lower scales. (There is also the newer idea of developing neural networks with 'hidden layers', which may permit this.)
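    How tree-like structure can emerge from flat similarity data can be sketched with agglomerative clustering, which repeatedly merges the two closest groups into a binary tree. The animals and binary features below are invented for illustration:

    ```python
    # Minimal single-linkage agglomerative clustering: a binary tree
    # (nested tuples) grown from pairwise feature similarity.
    # Items and features (has_fur, has_feathers, lives_in_water, lays_eggs)
    # are invented for illustration.

    def distance(a, b):
        """Hamming distance between binary feature vectors."""
        return sum(x != y for x, y in zip(a, b))

    def build_tree(items):
        """Repeatedly merge the two closest clusters; return nested tuples."""
        # each cluster: (subtree, list of member feature vectors)
        clusters = [(name, [feats]) for name, feats in items.items()]
        while len(clusters) > 1:
            i, j = min(
                ((i, j) for i in range(len(clusters))
                        for j in range(i + 1, len(clusters))),
                key=lambda ij: min(
                    distance(a, b)
                    for a in clusters[ij[0]][1]
                    for b in clusters[ij[1]][1]
                ),
            )
            merged = ((clusters[i][0], clusters[j][0]),
                      clusters[i][1] + clusters[j][1])
            clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
            clusters.append(merged)
        return clusters[0][0]

    items = {
        "dog":    (1, 0, 0, 0),
        "cat":    (1, 0, 0, 0),
        "salmon": (0, 0, 1, 1),
        "eagle":  (0, 1, 0, 1),
    }
    print(build_tree(items))  # (('dog', 'cat'), ('salmon', 'eagle'))
    ```

    Nothing tree-shaped is in the input; the hierarchy is induced from similarities, which is one simple instance of the structure-discovery idea in the article.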

    I am also interested in how people treat discontinuities between the connections of the 'form' and tree-like hierarchies. You would naively expect that people would try to eliminate as many discontinuities and contradictions in their cognitive model as possible. But this isn't necessarily how people function, and it often isn't necessary.

    Personally, I have had the goal of constructing a single hierarchical tree for my complete set of experiences. But it is hard to demonstrate this, except on a simplified qualitative level.

  4. #4
    Analytical Dreamer Coriolis's Avatar
    Join Date
    Apr 2010
    MBTI
    INTJ
    Enneagram
    5w6 sp/sx
    Posts
    17,522

    Default

    Great link, Vasilisa - thanks for posting. Will read/watch when I have a chance.
    I've been called a criminal, a terrorist, and a threat to the known universe. But everything you were told is a lie. The truth is, they've taken our freedom, our home, and our future. The time has come for all humanity to take a stand...

  5. #5
    resonance entropie's Avatar
    Join Date
    Apr 2008
    MBTI
    entp
    Enneagram
    783
    Posts
    16,761

    Default

    Americans and statistics, the never-ending story continues... I'll have a look when my mind is free from university stuff again in September

    [URL]https://www.youtube.com/watch?v=tEBvftJUwDw&t=0s[/URL]

  6. #6
    Sniffles
    Guest

    Default

    Quote Originally Posted by entropie View Post
    Americans and statistics, the neverending story continues..
    We learned much from Gottfried Achenwall and his theories of Statistik.

