
Can the brain understand the brain?

ygolo

My termites win
For background:
Kolmogorov complexity - Wikipedia, the free encyclopedia
In algorithmic information theory (a subfield of computer science), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov-Chaitin complexity, stochastic complexity, algorithmic entropy, or program-size complexity) of an object such as a piece of text is a measure of the computational resources needed to specify the object.

PLoS Biology - Bridging Psychology and Mathematics: Can the Brain Understand the Brain?

We are told scientists are divided into experimentalists and theoreticians. The dialectic description of the dynamics of science, with one tribe gathering data and collecting evidence and another tribe providing form to these observations, has striking examples that argue for the importance of synthesis. The 16th century revolution, which settled the way in which we see the sky today, is probably one of the best examples of how comparatively ineffective each of these tribes can be in isolation. Tycho Brahe, the exquisite observer, who built, calibrated, and refined instruments to see in the sky what no one else could, collected the evidence to prove a theory that Copernicus had already stated years before (in a book he dedicated to the Pope). It was only many years later that Galileo established the bridge between theory and observation; he understood the data in terms of the theory and thereby cemented the revolution. Copernicus's statements, showed Galileo, were not only figments of his imagination; they were an adequate description of the universe as he and Brahe had observed it.

Since my first steps in biology, after a prompt departure from physics and mathematics, I have looked for such encounters between theory and experiment. I began studying the visual system and the series of fundamental works by Attneave (1954), Barlow (1960), and Atick (1992) on the relationship between our visual world and the way the brain processes it. Their research was based on a simple hypothesis: (1) the images that we see are highly redundant and (2) the optic nerve is a limited channel, thus the retina has to compress information. Compression, redundancy? How do such concepts relate to the biology of the brain?

In the middle of the last century, working on the problem of communications, languages, codes, and channels, Claude Shannon proposed a very elegant theory that formalized intuitive ideas on the essence (and the limits) of communications (Weaver and Shannon 1949). One of its key aspects was that, depending on the code and on the input, channels are not used optimally and thus do not carry all the information they potentially could. When we compress (zip) a file, we are actually rewriting it in a more efficient (though not necessarily more intelligible) language in which we can store the same amount of information in less space. This compression has, of course, a limit (we cannot convey the universe in a single point), and the mere existence of an optimal code is central to Shannon's idea. Attneave was probably the first to think that these ideas could help unravel how the brain worked, and in a long series of works relating these ideas to experimental data, it was shown that the retina's business is mainly to get rid of the redundancies in the visual world.
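To see the compression point concretely, here is a quick sketch (my own toy example, not from the essay; it uses only Python's standard zlib module): a highly redundant string collapses to a tiny fraction of its original size, while a random string of the same length barely shrinks at all.

```python
import os
import zlib

# A highly redundant "signal": the same short pattern repeated over and over.
redundant = b"ABAB" * 2500          # 10,000 bytes
# An essentially incompressible "signal" of the same length.
random_like = os.urandom(10_000)

for name, data in [("redundant", redundant), ("random", random_like)]:
    compressed = zlib.compress(data, level=9)
    print(f"{name:>9}: {len(data)} bytes -> {len(compressed)} bytes")
```

The redundant string compresses to a few dozen bytes, while the random one hardly shrinks (zlib can even make it slightly larger); in Shannon's terms, the first message is written in a wasteful code and the second is already near the limit.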

About four years ago, Jacob Feldman published a paper, similar in spirit, proposing a simple explanation for a long-standing problem in cognitive science about why some concepts are inherently more difficult to learn than others (Feldman 2000). An article whose first reference is to work carried out 50 years previously makes us suspect that an important gap is being filled. As in the previous experiments, Feldman borrowed a theory—he did not invent it—to explain longstanding and previously unexplained research in this area. Feldman's idea was based on a theory developed by Kolmogorov that established formal mathematical grounds to define and measure complexity. Kolmogorov's theory was focused on categories, which are just subsets of a universe, a bunch of exemplars that constitute a group. “Dogs” is a category in the universe of animals. Different statements can define the same category, for example, “big dogs” and “dogs that are not small” are the same group, and some information in the statement may be irrelevant because it does not aid in determining which elements belong or not. In the same way that Shannon spoke of a non-redundant code, Kolmogorov showed that categories could be defined with an optimal (non-redundant) statement. The length of this statement defines a measure of complexity termed Kolmogorov complexity.

To visualize the intuitive nature of his theory it helps to do a thought experiment. Imagine a set of objects defined by, say, three features: form, color, and size. And imagine, for simplicity, that each feature is binary, that is, there are only two possible cases. Size is big or small, color is yellow or red, and shape can be either a triangle or a circle. This defines, of course, a universe of eight objects. We can now define categories within this universe: for example, all the circles (a category of four exemplars), or all the big and yellow objects (two exemplars), or all the triangles that are not red (again two) (see Figure 1). We can also define a category by enumeration, for example, the small red triangle, the big yellow circle, and the small yellow circle (three exemplars). Some rules (and thus the groups defined by these rules) are intuitively easier to define than others. “All the circles” is an easier statement to make (and probably to remember) than “small circles and yellow big objects.” This notion of difficulty is what Kolmogorov's theory formalized, stating that complexity was the length of the shortest definition (among all the possible definitions) of a given set. From this thought experiment, we can understand the logic of Feldman's paper, which showed that Kolmogorov complexity is very closely related to our intuitive notion of conceptual difficulty. Feldman presented subjects with all possible categories (of a fixed number of exemplars) in different universes and showed that the critical parameter to rank the difficulty of a given subset was its Kolmogorov complexity. Moreover, by explicitly presenting all the members and the nonmembers of a category to naïve subjects, he showed that we can spontaneously reduce a category to its minimal form and remember it without any explicit instruction. Thus, what Feldman found, following the original ideas of Shepard, was that our psychological measure of complexity—our difficulty in defining and remembering a category or concept—is also determined by the Kolmogorov complexity that describes it.

[Figure at link]
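To get a feel for what "length of the shortest definition" means in this eight-object universe, here is a small brute-force sketch (my own construction, not Feldman's actual procedure; he counts literals in a minimized Boolean formula, which this approximates by searching over disjunctions of feature conjunctions):

```python
from itertools import combinations, product

# The thought-experiment universe: three binary features, eight objects.
FEATURES = [("size", ("big", "small")),
            ("color", ("yellow", "red")),
            ("shape", ("triangle", "circle"))]
UNIVERSE = [dict(zip([name for name, _ in FEATURES], values))
            for values in product(*[vals for _, vals in FEATURES])]

# A "term" is a conjunction of feature=value literals (the empty term is
# always true); a definition is a disjunction of such terms.
TERMS = [tuple((name, v) for (name, _), v in zip(FEATURES, choice) if v is not None)
         for choice in product(*[(None,) + vals for _, vals in FEATURES])]

def extension(definition):
    """Indices of the objects picked out by a disjunction of terms."""
    return frozenset(i for i, obj in enumerate(UNIVERSE)
                     if any(all(obj[f] == v for f, v in term) for term in definition))

def complexity(category):
    """Fewest literals in any definition (up to 4 terms, enough for 3 features)."""
    target, best = frozenset(category), None
    for n_terms in range(1, 5):
        for definition in combinations(TERMS, n_terms):
            if extension(definition) == target:
                length = sum(len(term) for term in definition)
                best = length if best is None else min(best, length)
    return best

circles = [i for i, o in enumerate(UNIVERSE) if o["shape"] == "circle"]
mixed = [i for i, o in enumerate(UNIVERSE)
         if (o["size"] == "small" and o["shape"] == "circle")
         or (o["size"] == "big" and o["color"] == "yellow")]
print(complexity(circles))   # 1 literal: "circle"
print(complexity(mixed))     # 4 literals: "small circle OR big yellow"
```

Running it ranks "all the circles" as far simpler than the mixed category, which is exactly the ordering Feldman found in people's learning difficulty.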

This essay is, in a way, about how we avoid becoming Borges's character Funes, who could not understand repeated observations as exemplars of a common rule and thus could not synthesize and categorize. Simply, he could not think. Probably the most disappointing moment of Feldman's paper comes at the very end, where it deals with its (somehow unavoidable) recursive quest. Understanding why some concepts are difficult to learn may itself be difficult to learn. Modern mathematics, together with Kolmogorov complexity and information theory, has taught us another fundamental concept that may be relevant when trying to understand the logic of the mind. In a long series of paradoxes enumerated by Bertrand Russell, Kurt Goedel, and others, we learn that a formal system that looks at itself is bound to fail. At the very end of his paper, Feldman writes, “In a sense, this final conclusion [that psychological complexity is Boolean complexity] may seem negative: human conceptual difficulty reflects intrinsic mathematical complexity after all, rather than some idiosyncratic and uniquely human bias.” Who invented mathematics? The Martians? On the contrary, I believe this result supports a more naturalistic and less Platonic conception of mathematics. Formal ideas in mathematics are not arbitrary constructions of an arbitrary architecture; rather, they reflect the workings of the brain like a massive collective cognitive experiment. Mathematics does not only serve to help us understand biology; mathematics is biology. We are not less original if our thoughts resemble our mental constructions, we are just consistent. It is within this loop, this unavoidable recursion—mathematics understanding the logic of the brain—that we will have an opportunity to test, as some conspire, whether among all the wonders evolution has come out with, the ultimate might be a brain good enough to avoid the risk of understanding itself.

Thoughts?
 

Octarine

The Eighth Colour
This is an interesting thread, pity it didn't get much attention. I hope you don't mind me bumping some of your old threads.

As pointed out in a recent thread, modelling one's own brain leads to an infinite regression, for much the same reason that attempting to encode a formal system within itself fails, for which we have the famous mathematical proof.

Modelling a brain in general, though, is in principle possible. The 'problem' is that, regardless of whether we have to model it at the neurological level or the quantum level, our methods of computation are always cruder than the natural system we are modelling. I think it follows that the brain we model will be more chaotic than our own underlying understanding of how it works.

The compression analogy is very powerful, both in terms of science - our models are less elegant and carry more overhead than the phenomena we are studying - and in terms of the mind itself.
I actually see the mind as basically a compressed model of the outside world, limited by our experiences and constantly refined by them. The brain is very specialised in terms of function. There are specialised circuits that perform algorithms of sorts, for things like edge identification closely wired to our visual circuits and harmonic detection connected to our aural systems, all as a way to reduce the complexity of the information presented to the other areas of the brain. Of course this also leads to traps and paradoxes - auditory and visual illusions. More generally, much of our cognition is based on generalisations that we apply to basically everything - and of course there are traps associated with that too.
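To make the 'specialised circuit' idea concrete, here is a toy sketch (plain NumPy with a hand-written convolution and a Sobel-style kernel, nothing anatomical about it): a filter that discards the uniform regions of an image and keeps only the changes, which is the sense in which such circuits strip redundancy before passing the signal on.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' 2-D correlation; plenty for a demonstration."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A flat image with a single vertical step edge down the middle.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# Sobel-style kernel that responds to horizontal changes in intensity.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = convolve2d(image, sobel_x)
print(np.abs(edges) > 0)   # True only in the columns straddling the edge
```

Most of the output is zero: the uniform regions carry no new information, and only the edge, the part worth passing on, survives.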

Not to belittle the field of AI, but we are a long way off being able to create an artificial mind with such 'efficiency' and the reality is that any breakthroughs will come from studying the mind in the first place.
 

King sns

New member
Octarine said:
"Not to belittle the field of AI, but we are a long way off being able to create an artificial mind with such 'efficiency' and the reality is that any breakthroughs will come from studying the mind in the first place."

Exactly, the brain is a two-way street. (With lots of side streets, I guess. :laugh:) Feeding, returning, and refining information. We need to find a way to let artificial intelligence not only store the information it takes in, but also link it all together to create solutions to problems (even simple everyday problems). And, frankly, that would probably require a lot more information and a full, in-depth understanding of the brain. Can the brain understand the brain? Hmmm.. I don't know. I'm also inclined to think that since the brain is still evolving, the information will keep changing even if we do reach a point where we think we get it; until then, we have some catching up to do.
 

Octarine

The Eighth Colour
King sns said:
"We need to find a way to let artificial intelligence not only store the information it takes in, but also link it all together to create solutions to problems (even simple everyday problems)."

Well, neural network type models have become more sophisticated over the years. Researchers have largely moved on from the early symbolic/formalised (in the mathematical sense) approaches that they tried at first.
But although we may think we understand the basic principles, we still don't know how to build a network with the complexity of the brain (e.g. how it links everything together), even if we had the computational technology to model it.
This is one of our best attempts so far:
http://en.wikipedia.org/wiki/Blue_Brain_Project
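Nothing remotely close to Blue Brain's level of biological detail, but for a stripped-down picture of what a 'neural network type model' is, here is a toy sketch (plain NumPy, my own example, not anything the project uses): a tiny two-layer network that learns XOR by gradient descent, the classic function a single formal neuron cannot represent and roughly where the symbolic-versus-network debate started.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic problem a single-layer perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, one output unit.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 0.5

for step in range(20_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass for squared error (plain gradient descent).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())   # should approach [0, 1, 1, 0]
```

The point is only that the mapping is learned from examples rather than written down as rules; scaling that idea up to anything brain-like is exactly the part nobody knows how to do yet.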

Of course this approach is quite different from that of the most visible successes of "AI" so far, for example the Watson supercomputer that beat human Jeopardy! champions recently.

http://en.wikipedia.org/wiki/IBM_Watson
 