
Induction and Deduction

LostInNerSpace

New member
Joined
Jan 25, 2008
Messages
1,027
MBTI Type
INTP
It seems to me that induction is really just deduction with hundreds (edit: not necessarily more than one) of hidden premises.

How is coming up with new information possible? There must be sets of rules, built through metaphor and experience that deductively lead to "novel" conclusions.

You may ask, well, how do we come up with the first premise? The answer is that it's probably genetically programmed in. Just like certain rules of language. All animals have premises about the world that they're born with.

One hidden premise in all induction is "the future resembles the past". The only way to justify this premise is with other inductive arguments which use the premise anyway. We could never come up with that premise ourselves -- no one ever questions it. It just "seems" obvious.

Sorry, those ideas were not presented in any sort of clear way... took a bunch of painkillers earlier, lol.

Thoughts?

Everything is relative. Even the most rigid-seeming axiom can be viewed in a slightly different light. Take the speed of light, for example. The speed of light, c, is only the speed of light in a vacuum. A vacuum is defined as the absence of matter, but I bet that definition excludes dark matter. It is believed that dark matter accounts for most of the matter in the Universe. If the Sun can bend light, why not dark matter? The speed of light would be different passing through some other matter such as water or some gas. Does dark matter have an effect on the speed of light?

I'm not a physicist; that was just to illustrate my point.

INTPs have what I would describe as massive multidimensional jigsaw puzzles in our heads. We are continually absorbing new knowledge and trying to fit that new knowledge into the puzzle. The fitting process is kind of random trial and error. The pieces can fit many different parts of the puzzle--hence multidimensional. The pieces can also form smaller independent puzzles that don't yet seem to fit anywhere in the main puzzle. Often, later on, we will find one or more suitable places to fit the smaller puzzles or other pieces that we could not fit before.

The whole puzzle is our big intuitive picture. It is our understanding of the natural world.

Any other INTPs agree with this assessment of how we think?
 

The_Liquid_Laser

Glowy Goopy Goodness
Joined
Jul 11, 2007
Messages
3,376
MBTI Type
ENTP
I guess I would say it's still a hidden premise because you wouldn't make that inference if you hadn't made similar guesses and been right in the past.

This doesn't necessarily have to be so. In an extreme case, say that you have never seen a jigsaw puzzle before. You come across a 500-piece puzzle with 499 pieces put together. Your mind sees what the picture looks like by filling in the missing piece. This is inductive reasoning, and it does not require past experience.
 

redacted

Well-known member
Joined
Nov 28, 2007
Messages
4,223
^Yeah it does. You need some sense of the concept of "completeness", which must either come from past experience or probably genetic programming.

Has to come from some premise.
 

LostInNerSpace

New member
Joined
Jan 25, 2008
Messages
1,027
MBTI Type
INTP
^Yeah it does. You need some sense of the concept of "completeness", which must either come from past experience or probably genetic programming.

Has to come from some premise.

What about an infinite jigsaw puzzle? There's no reason, except for practicality, that jigsaw puzzles have to be finite. Isn't that just life? When we are born we have only basic genetic knowledge: the basic tools necessary to learn more.
 

The_Liquid_Laser

Glowy Goopy Goodness
Joined
Jul 11, 2007
Messages
3,376
MBTI Type
ENTP
^Yeah it does. You need some sense of the concept of "completeness", which must either come from past experience or probably genetic programming.

Has to come from some premise.

You are correct that there is an unconscious premise at work, but I am saying that the underlying assumption does not require a temporal component.
 

redacted

Well-known member
Joined
Nov 28, 2007
Messages
4,223
Hmmm.

How do you know when to apply that unconscious premise then? And when you do apply it, you are assuming that the premise will work because it has worked before in similar situations.

I guess this is just a semantic debate. And I'm not really invested in this point.

I think you all get what I'm saying; induction is just an informal form of deduction with hidden/unconscious/unstated premises.
 

The_Liquid_Laser

Glowy Goopy Goodness
Joined
Jul 11, 2007
Messages
3,376
MBTI Type
ENTP
I think you all get what I'm saying; induction is just an informal form of deduction with hidden/unconscious/unstated premises.

I agree. One of my major views on scientific thinking (or simply thinking in general) is that we are all carrying around a large amount of unstated assumptions when we approach a problem, experiment, or situation. An astute person will attempt to uncover as many of these assumptions as possible. Doing so will greatly clarify reasoning and results.
 

Orangey

Blah
Joined
Jun 26, 2008
Messages
6,354
MBTI Type
ESTP
Enneagram
6w5
All of my logic training comes from a mathematical context, which is why I'm not familiar with it. Formal logic is very useful within the context of mathematics, but it's true that it's too rigorous to use formally with most people.

Ah, I see. I don't think it's the level of rigor, though, that impedes its applicability to natural language argument (ordinary argument). Sure that may be a factor that prevents most people from being able to use it, but it's my view that even if everyone had a firm grasp of the subject, its usefulness (in a natural language context) would still be limited. I've gone far enough off topic, though, so I will stop now :).
 

The_Liquid_Laser

Glowy Goopy Goodness
Joined
Jul 11, 2007
Messages
3,376
MBTI Type
ENTP
Ah, I see. I don't think it's the level of rigor, though, that impedes its applicability to natural language argument (ordinary argument). Sure that may be a factor that prevents most people from being able to use it, but it's my view that even if everyone had a firm grasp of the subject, its usefulness (in a natural language context) would still be limited. I've gone far enough off topic, though, so I will stop now :).

I don't think this is true from my experience. I find logic quite useful for my private reasoning, but I have to reword everything because most people don't know what I'm talking about otherwise. Also if I try to point out an error in someone else's reasoning they often don't know what I am talking about.
 

gotbeef

Permabanned
Joined
Sep 23, 2008
Messages
6
Deductive Reasoning

Neighbor 1: "Hi, there, new neighbor, it sure is a mighty nice day to be moving."
New Neighbor: "Yes, it is and people around here seem extremely friendly."
Neighbor 1: "So, what is it you do for a living?"
New Neighbor: "I am a professor at the University, I teach deductive reasoning."
Neighbor 1: "Deductive reasoning, what's that?"
New Neighbor: "Let me give you an example. I see you have a dog house out back. By that I deduce that you have a dog."
Neighbor 1: "That's right."
New Neighbor: "The fact that you have a dog, leads me to deduce that you have a family."
Neighbor 1: "Right again."
New Neighbor: "Since you have a family, I deduce that you have a wife."
Neighbor 1: "Correct."
New Neighbor: "And since you have a wife, I can deduce that you are heterosexual."
Neighbor 1: "Yup."
New Neighbor: "That is deductive reasoning."
Neighbor 1: "Cool."

Later that same day:

Neighbor 1: "Hey, I was talking to that new guy who moved in next door."
Neighbor 2: "Is he a nice guy?"
Neighbor 1: "Yes, and he has an interesting job."
Neighbor 2: "Oh, yeah, what does he do?"
Neighbor 1: "He is a professor of deductive reasoning at the University."
Neighbor 2: "Deductive reasoning, what is that?"
Neighbor 1: "Let me give you an example. Do you have a dog house?"
Neighbor 2: "No."
Neighbor 1: "Fag!"
 

LostInNerSpace

New member
Joined
Jan 25, 2008
Messages
1,027
MBTI Type
INTP
Neighbor 1: "Hi, there, new neighbor, it sure is a mighty nice day to be moving."
New Neighbor: "Yes, it is and people around here seem extremely friendly."
Neighbor 1: "So, what is it you do for a living?"
New Neighbor: "I am a professor at the University, I teach deductive reasoning."
Neighbor 1: "Deductive reasoning, what's that?"
New Neighbor: "Let me give you an example. I see you have a dog house out back. By that I deduce that you have a dog."
Neighbor 1: "That’s right."
New Neighbor: "The fact that you have a dog, leads me to deduce that you have a family."
Neighbor 1: "Right again."
New Neighbor: "Since you have a family, I deduce that you have a wife."
Neighbor 1: "Correct."
New Neighbor: "And since you have a wife, I can deduce that you are heterosexual."
Neighbor 1: "Yup."
New Neighbor: "That is deductive reasoning."
Neighbor 1: "Cool."

Later that same day:

Neighbor 1: "Hey, I was talking to that new guy who moved in next door."
Neighbor 2: "Is he a nice guy?"
Neighbor 1: "Yes, and he has an interesting job."
Neighbor 2: "Oh, yeah, what does he do?"
Neighbor 1: "He is a professor of deductive reasoning at the University."
Neighbor 2: "Deductive reasoning, what is that?"
Neighbor 1: "Let me give you an example. Do you have a dog house?"
Neighbor 2: "No."
Neighbor 1: "Fag!"


After looking more closely, I realized I was previously confusing deductive reasoning with inductive reasoning. I was saying deductive reasoning but meaning inductive reasoning, and saying inductive reasoning but meaning inference. I never was much good with labels. I think those adjustments should address the dog/fag problem. Inductive reasoning does not require all premises to be true; deductive reasoning does.
 

gotbeef

Permabanned
Joined
Sep 23, 2008
Messages
6
Inductive Reasoning

Wikipedia also has an interesting primer on inductive reasoning:

The problem of induction is the philosophical question of whether inductive reasoning is valid. That is, what is the justification for either:


  • generalizing about the properties of a class of objects based on some number of observations of particular instances of that class (for example, the inference that "all swans we have seen are white, and therefore all swans are white," before the discovery of black swans) or,
  • presupposing that a sequence of events in the future will occur as it always has in the past (for example, that the laws of physics will hold as they have always been observed to hold).

The problem calls into question all empirical claims made in everyday life or through the scientific method. Although the problem dates back to the Pyrrhonism of ancient philosophy, David Hume introduced it in the mid-18th century, with the most notable response provided by Karl Popper 2 centuries later.

Pyrrhonian skeptic Sextus Empiricus first questioned induction, reasoning that a universal rule could not be established from an incomplete set of particular instances. He wrote:

when they propose to establish the universal from the particulars by means of induction, they will effect this by a review of either all or some of the particulars. But if they review some, the induction will be insecure, since some of the particulars omitted in the induction may contravene the universal; while if they are to review all, they will be toiling at the impossible, since the particulars are infinite and indefinite

The focus upon the gap between the premises and conclusion present in the above passage appears different from Hume's focus upon the circular reasoning of induction. However, Weintraub claims in "The Philosophical Quarterly" that although Sextus' approach to the problem appears different, Hume's approach was actually an application of another argument raised by Sextus:

Those who claim for themselves to judge the truth are bound to possess a criterion of truth. This criterion, then, either is without a judge's approval or has been approved. But if it is without approval, whence comes it that it is truthworthy? For no matter of dispute is to be trusted without judging. And, if it has been approved, that which approves it, in turn, either has been approved or has not been approved, and so on ad infinitum.

Although the criterion argument applies to both deduction and induction, Weintraub believes that Sextus' argument "is precisely the strategy Hume invokes against induction: it cannot be justified, because the purported justification, being inductive, is circular." She concludes that "Hume's most important legacy is the supposition that the justification of induction is not analogous to that of deduction." She ends with a discussion of Hume's implicit sanction of the validity of deduction, which Hume describes as intuitive in a manner analogous to modern foundationalism.

In inductive reasoning, one makes a series of observations and infers a new claim based on them. For instance, from a series of observations that at sea level (approximately 14 psi) samples of water freeze at 0°C (32°F), it seems valid to infer that the next sample of water will do the same, or, in general, that at sea level water freezes at 0°C. That the next sample of water freezes under those conditions merely adds to the series of observations; it does not settle the matter, for two reasons. First, it is not certain, regardless of the number of observations, that water always freezes at 0°C at sea level. To be certain, it must be known that the law of nature is immutable. Second, the observations themselves do not establish the validity of inductive reasoning, except inductively. In other words, observations that inductive reasoning has worked in the past do not ensure that it will always work. This second problem is the problem of induction.

David Hume

David Hume described the problem in "An Enquiry concerning Human Understanding," §4, based on his epistemological framework. Here, "reason" refers to deductive reasoning and "induction" refers to inductive reasoning. First, Hume ponders the discovery of causal relations, which form the basis for what he refers to as "matters of fact." He argues that causal relations are found not by reason, but by induction. This is because for any cause, multiple effects are conceivable, and the actual effect cannot be determined by reasoning about the cause; instead, one must observe occurrences of the causal relation to discover that it holds. For example, when one thinks of "a billiard ball moving in a straight line toward another," one can conceive that the first ball bounces back with the second ball remaining at rest, the first ball stops and the second ball moves, or the first ball jumps over the second, etc. There is no reason to conclude any of these possibilities over the others. Only through previous observation can it be predicted, inductively, what will actually happen with the balls. In general, it is not necessary that causal relations in the future resemble causal relations in the past, as the opposite is always conceivable; for Hume, this is because the negation of the claim does not lead to a contradiction.

Next, Hume ponders the justification of induction. If all matters of fact are based on causal relations, and all causal relations are found by induction, then induction must be shown to be valid somehow. He uses the fact that induction assumes a valid connection between the proposition "I have found that such an object has always been attended with such an effect" and the proposition "I foresee that other objects which are in appearance similar will be attended with similar effects." One connects these two propositions not by reason, but by induction. This claim is supported by the same reasoning as that for causal relations above, and by the observation that even rationally inexperienced or inferior people can infer, for example, that touching fire causes pain. Hume challenges other philosophers to come up with a (deductive) reason for the connection. If he is right, then the justification of induction can be only inductive. But this begs the question; as induction is based on an assumption of the connection, it cannot itself explain the connection. In this way, the problem of induction is not only concerned with the uncertainty of conclusions derived by induction, but doubts the very principle through which those uncertain conclusions are derived.

So how does Hume get us out of the woods? Well, he says that although induction is not made by reason, we nonetheless perform it and benefit from it. He proposes a descriptive explanation for the nature of induction in §5 of the Enquiry, titled "Skeptical solution of these doubts". It is by custom or habit that one draws the inductive connection described above, and "without the influence of custom we would be entirely ignorant of every matter of fact beyond what is immediately present to the memory and senses." The result of custom is belief, which is instinctual and much stronger than imagination alone. Rather than unproductive radical skepticism about everything, Hume said that he was actually advocating a practical skepticism based on common sense, wherein the inevitability of induction is accepted. Someone who insists on reason for certainty might, for instance, starve to death, as they would not infer the benefits of food based on previous observations of nutrition.
 

groupie

Permabanned
Joined
Sep 25, 2008
Messages
5
A Case of Math Induction

All horses are the same color

Suppose that we have a set of 5 horses. We wish to prove that they are all the same color. Suppose that we had a proof that all sets of 4 horses were the same color. If that were true, we could prove that all five horses are the same color by removing a horse to leave a group of 4 horses. Do this in two ways, and we have two different groups of four horses. By our supposed existing proof, since these are groups of 4, all horses in them must be the same color. For example, the first, second, third and fourth horses constitute a group of 4, and thus must all be the same color; and the second, third, fourth and fifth horses also constitute a group of 4 and thus must also all be the same color. Since the two groups overlap in the second, third and fourth horses, the two colors must be one and the same, so all 5 horses in the group of five must be the same color.

But how are we to get a proof that all sets of 4 horses are the same color? We apply the same logic again. By the same process, a group of 4 horses could be broken down into groups of 3, and then a group of 3 horses could be broken down into groups of 2, and so on. Eventually we will reach a group size of 1, and it is obvious that all horses in a group of 1 horse must be the same color. By the same logic we can also increase the group size. A group of 5 horses can be increased to a group of 6, and so on upwards, so that all finite sized groups of horses must be the same color.

The argument above makes the implicit assumption that the two subsets of horses to which the induction assumption is applied have a common element. This is not true when n = 1, that is, when the original set only contains 2 horses. Indeed, let the 2 horses be horse A and horse B. When horse A is removed, it is true that the remaining horses in the set are the same color (only horse B remains). If horse B is removed instead, this leaves a different set containing only horse A, which may or may not be the same color as horse B. The problem in the argument is the assumption that because each of these 2 sets contains only one color of horse, the original set also contained only one color of horse. Because there are no common elements (horses) in the two sets, it is unknown whether the two horses share the same color. The proof forms a falsidical paradox: it seems to show something manifestly false by valid reasoning, but in fact the reasoning is flawed. The horse paradox exposes the pitfalls arising from failure to consider special cases for which a general statement may be false.
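To make the flawed step concrete, here is a quick Python sketch (purely illustrative, the horse numbering is hypothetical) that checks when the two subsets used in the induction step actually share a horse:

```python
# In the step from n-1 horses to n horses, the proof considers the subsets
# {1..n-1} (last horse removed) and {2..n} (first horse removed) and needs
# them to overlap so that "same color" can propagate between them.

def step_subsets_overlap(n):
    without_last = set(range(1, n))       # horses 1..n-1
    without_first = set(range(2, n + 1))  # horses 2..n
    return len(without_last & without_first) > 0

for n in range(2, 7):
    print(f"step to n={n}: subsets overlap? {step_subsets_overlap(n)}")
# step to n=2: False  <- {1} and {2} are disjoint, so the induction fails here
# step to n=3..6: True <- from 2 horses upward the step itself would be fine
```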
 

tblood

Permabanned
Joined
Sep 28, 2008
Messages
25
Abductive Reasoning

"The Gods have certainty, whereas to us as men conjecture [only is possible].."
-Alcmaion von Kroton


Abduction, or inference to the best explanation, is a method of reasoning in which one chooses the hypothesis that would, if true, best explain the relevant evidence. Abductive reasoning starts from a set of accepted facts and infers their most likely, or best, explanations. The term abduction is also sometimes used to just mean the generation of hypotheses to explain observations or conclusions, but the former definition is more common both in philosophy and computing.

Deduction

allows deriving b as a consequence of a. In other words, deduction is the process of deriving the consequences of what is assumed. Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. A deductive statement is based on accepted truths, e.g. All bachelors are unmarried men. It is true by definition and is independent of sense experience.

Induction

allows inferring a from multiple instantiations of b when a entails b. Induction is the process of inferring probable antecedents as a result of observing multiple consequents. An inductive statement requires perception for it to be true. For example, the statement 'it is snowing outside' is invalid until one looks or goes outside to see whether it is true or not. Induction requires sense experience.

Abduction

allows inferring a as an explanation of b. Because of this, abduction allows the precondition a to be inferred from the consequence b. Deduction and abduction thus differ in the direction in which a rule like "a entails b" is used for inference. As such abduction is formally equivalent to the logical fallacy "affirming the consequent" or post hoc ergo propter hoc, because there are multiple possible explanations for b. Unlike deduction and in some sense induction, abduction can produce results that are incorrect within its formal system. However, it can still be useful as a heuristic, especially when something is known about the likelihood of different causes for b.
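To see the three directions side by side, here is a toy Python sketch (the rule table and function names are mine, purely illustrative):

```python
# Toy illustration of using a rule "a entails b" in three directions.

RULES = {"rain": "wet_road", "sprinkler": "wet_road", "snow": "cold"}

def deduce(cause):
    """Deduction: from 'a' and 'a entails b', conclude 'b'. Truth-preserving."""
    return RULES.get(cause)

def abduce(effect):
    """Abduction: from 'b' and 'a entails b', hypothesize 'a'. Several causes
    may fit, which is why this is formally 'affirming the consequent'."""
    return [cause for cause, eff in RULES.items() if eff == effect]

def induce(observations):
    """Induction: from repeated (cause, effect) pairs, conjecture a rule."""
    return {cause: effect for cause, effect in observations}

print(deduce("rain"))                      # wet_road
print(abduce("wet_road"))                  # ['rain', 'sprinkler'] -- not unique!
print(induce([("rain", "wet_road")] * 3))  # {'rain': 'wet_road'}
```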

Here is an interesting book related to the subject:


Making Things Happen: A Theory of Causal Explanation
 

the.blanket.on.top

Permabanned
Joined
Sep 29, 2008
Messages
23
"The Gods have certainty, whereas to us as men conjecture [only is possible].."
-Alcmaion von Kroton


Abduction, or inference to the best explanation, is a method of reasoning in which one chooses the hypothesis that would, if true, best explain the relevant evidence. Abductive reasoning starts from a set of accepted facts and infers their most likely, or best, explanations. The term abduction is also sometimes used to just mean the generation of hypotheses to explain observations or conclusions, but the former definition is more common both in philosophy and computing.

Deduction

allows deriving b as a consequence of a. In other words, deduction is the process of deriving the consequences of what is assumed. Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. A deductive statement is based on accepted truths, e.g. All bachelors are unmarried men. It is true by definition and is independent of sense experience.

Induction

allows inferring a from multiple instantiations of b when a entails b. Induction is the process of inferring probable antecedents as a result of observing multiple consequents. An inductive statement requires perception for it to be true. For example, the statement 'it is snowing outside' is invalid until one looks or goes outside to see whether it is true or not. Induction requires sense experience.

Abduction

allows inferring a as an explanation of b. Because of this, abduction allows the precondition a to be inferred from the consequence b. Deduction and abduction thus differ in the direction in which a rule like "a entails b" is used for inference. As such abduction is formally equivalent to the logical fallacy "affirming the consequent" or post hoc ergo propter hoc, because there are multiple possible explanations for b. Unlike deduction and in some sense induction, abduction can produce results that are incorrect within its formal system. However, it can still be useful as a heuristic, especially when something is known about the likelihood of different causes for b.

Here it is an interesting book related to the subject:

0195189531.01._SX140_SCLZZZZZZZ_.jpg

Making Things Happen: A Theory of Causal Explanation

Explanation of why things happen is one of humans' most important cognitive operations. Abductive inference that generates explanatory hypotheses is an inherently risky form of reasoning because of the possibility of alternative explanations. Inferring that Paul has influenza because it explains his fever, aches, and cough is risky because other diseases such as meningitis can cause the same symptoms. People should only accept an explanatory hypothesis if it is better than its competitors, a form of inference that philosophers call inference to the best explanation. A cognitive model could perform this kind of inference by taking into account 3 criteria for the best explanation: consilience, which is a measure of how much a hypothesis explains; simplicity, which is a measure of how few additional assumptions a hypothesis needs to carry out an explanation; and analogy, which favors hypotheses whose explanations are analogous to accepted ones.


Garry Kasparov playing against Deep Blue, the first machine to win a chess match against a world champion.

In AI (artificial intelligence), the term "abduction" is often used to describe inference to the best explanation as well as the generation of hypotheses. In actual systems, these two processes can be continuous, for example in the PEIRCE tool for abductive inference described by Josephson and Josephson (primarily an engineering tool rather than a cognitive model). In ordinary life and in many areas of science the relation between what is explained and what does the explaining is usually looser than deduction. An alternative conception of this relation is provided by understanding an explanation as the application of a causal schema, which is a pattern that describes the relation between causes and effects.

Probabilistic Models

Another, more quantitative way of establishing a looser relation than deduction between explainers and their targets is to use probability theory. Salmon proposed that the key to explanation is statistical relevance, where a property C is relevant to a property B within a population A if the probability of B given A and C is different from the probability of B given A alone: P(B|A&C) ≠ P(B|A). Salmon later moved away from a statistical understanding of explanation toward a causal mechanism account, but other philosophers and artificial intelligence researchers have focused on probabilistic accounts of causality and explanation. The core idea here is that people explain why something happened by citing the factors that made it more probable than it would have been otherwise. The main computational method for modeling explanation probabilistically is Bayesian networks. A Bayesian network is a directed graph in which the nodes are statistical variables, the edges between them represent conditional probabilities, and no cycles are allowed: you cannot have A influencing B which in turn influences A. Causal structure and probability are connected by the Markov assumption, which says that a variable A in a causal graph is independent of all other variables that are not its effects, conditional on its direct causes in the graph. Bayesian networks are convenient ways of representing causal relationships, as in the figure below.

[Figure: causal map of a disease. In a Bayesian network, each node is a variable and each arrow indicates causality, represented by a conditional probability.]
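To make the statistical-relevance test concrete, here is a tiny Python sketch on invented data (the population and properties are hypothetical, chosen only to show the computation):

```python
# Salmon's statistical-relevance test on made-up records:
# C (smoking) is relevant to B (coughing) within population A (adults)
# if P(B | A & C) differs from P(B | A).

records = [  # each record: (is_adult, smokes, coughs) -- invented sample
    (True, True, True), (True, True, True), (True, True, False),
    (True, False, False), (True, False, False), (True, False, True),
]

def cond_prob(event, given):
    """Conditional relative frequency P(event | given) over the records."""
    matching = [r for r in records if given(r)]
    return sum(event(r) for r in matching) / len(matching)

coughs = lambda r: r[2]
p_b_a = cond_prob(coughs, lambda r: r[0])            # P(cough | adult)
p_b_ac = cond_prob(coughs, lambda r: r[0] and r[1])  # P(cough | adult & smokes)
print(p_b_a, p_b_ac)  # 0.5 vs ~0.667, so smoking is statistically relevant here
```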

Despite their computational and philosophical power, there are reasons to doubt the psychological relevance of Bayesian networks. First, there is abundant experimental evidence that reasoning with probabilities is not a natural part of people's inferential practice. Computing with Bayesian networks requires a very large number of conditional probabilities that people not working in statistics have had no chance to acquire. Second, there is no reason to believe that people have the sort of information about independence that is required to satisfy the Markov condition and to make inference in Bayesian networks computationally tractable. Third, although it is natural to represent causal knowledge as directed graphs, there are many scientific and everyday contexts in which such graphs should have cycles because of feedback loops. For example, marriage breakdown often occurs because of escalating negative affect, in which the negative emotions of one partner produce behaviors that increase the negative emotions of the other, which then produce behaviors that increase the negative emotions of the first partner. Such feedback loops are also common in the biochemical pathways needed to explain disease. Fourth, probability by itself is not adequate to capture people's understanding of causality. Hence it is not at all obvious that Bayesian networks are the best way to model explanation by human scientists. Even in statistically rich fields such as the social sciences, scientists rely on an intuitive, non-probabilistic sense of causality of the sort explained below.
 

the.blanket.on.top

Permabanned
Joined
Sep 29, 2008
Messages
23
Neural Network Models

One benefit of attempting neural analyses of explanation is that it becomes possible to incorporate multimodal aspects of cognitive processing that tend to be ignored in deductive, schematic, and probabilistic perspectives. In medicine, for example, doctors and researchers may employ visual hypotheses (say, about the shape and location of a tumor) to explain observations that can be represented using sight, touch, and smell as well as words. Moreover, the process of abductive inference has emotional inputs and outputs, because it is usually initiated when an observation is found to be surprising or puzzling, and it often results in a sense of pleasure or satisfaction when a satisfactory hypothesis is used to generate an explanation. Here is an outline of the process:


The process of abductive inference

The framework used proposes three basic principles of neural computation:

- Neural representations are defined by a combination of nonlinear encoding and linear decoding.
- Transformations of neural representations are linearly decoded functions of variables that are represented by a neural population.
- Neural dynamics are characterized by considering neural representations as control theoretic state variables.

These principles are applied to a particular neural system by identifying the interconnectivity of its subsystems, neuron response functions, neuron tuning curves, subsystem functional relations, and overall system behavior. The complexity of a representation is constrained by the dimensionality of the neural population that represents it. In rough terms, a single dimension in such a representation can correspond to one discrete "aspect" of that representation (e.g., speed and direction are the dimensional components of the vector quantity velocity). A hierarchy of representational complexity thus follows from neural activity defined in terms of one-dimensional scalars; vectors, with a finite but arbitrarily large number of dimensions; or functions, which are essentially continuous indexings of vector elements, thus ranging over infinite dimensional spaces. This framework provides for arbitrary computations to be performed in biologically realistic neural populations, and has been successfully applied to phenomena as diverse as lamprey locomotion, path integration by rats, and the Wason card selection task. The Wason task model, in particular, is structured very similarly to the model of abductive inference discussed. Both employ holographic reduced representations, a high-dimensional form of distributed representation.

Holographic reduced representations (HRRs) combine the neurological plausibility of distributed representations with the ability to maintain complex, embedded structural relations in a computationally efficient manner. This ability is common in symbolic models and is often singled out as deficient in distributed connectionist frameworks. HRRs have the important advantage of fixed dimensionality: the combination of two n-dimensional HRRs produces another n-dimensional HRR, rather than the 2n or even n^2 dimensionality one would obtain using tensor products. This avoids the explosive computational resource requirements of tensor products to represent arbitrary, complex structural relationships. HRR representations are constructed through the multiplicative circular convolution (denoted by x) and are decoded by the approximate inverse operation, circular correlation (denoted by #). In general, if C = AxB is encoded, then C#A ≈ B and C#B ≈ A. The approximate nature of the unbinding process introduces a degree of noise, proportional to the complexity of the HRR encoding in question and in inverse proportion to the dimensionality of the HRR. As noise tolerance is a requirement of any neurologically plausible model, this loss of representational information is acceptable, and the "cleanup" method of recognizing encoded HRR vectors using the dot product can be used to find the vector that best fits what was decoded. Note that HRRs may also be combined by simple superposition (i.e., addition): P = QxR + XxY, where P#R ≈ Q, P#X ≈ Y, and so on. The operations required for convolution and correlation can be implemented in a recurrent connectionist network.
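To make the algebra tangible, here is a minimal numpy sketch of HRR binding, unbinding, and cleanup (the dimensionality, seed, and vocabulary are made up; x is implemented as FFT-based circular convolution and # as circular correlation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512  # HRR dimensionality; higher n means less unbinding noise

def vec():
    """Random HRR vector with elements ~ N(0, 1/n), so its norm is about 1."""
    return rng.normal(0, 1 / np.sqrt(n), n)

def bind(a, b):
    """Circular convolution (the 'x' operation), computed via FFT."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def unbind(c, a):
    """Circular correlation (the '#' operation), the approximate inverse."""
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(a).conj()).real

role, filler = vec(), vec()
trace = bind(role, filler)   # C = A x B
noisy = unbind(trace, role)  # C # A ~= B, plus a little noise

vocab = {"filler": filler, "role": role, "other": vec()}
best = max(vocab, key=lambda k: vocab[k] @ noisy)  # dot-product cleanup
print(best)  # 'filler' -- cleanup recovers the bound item
```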

In brief, the new model of abductive inference involves several large, high-dimensional populations to represent the data stored via HRRs and learned HRR transformations (the main output of the model), and a smaller population representing emotional valence information (abduction only requires considering emotion on a scale from surprise to satisfaction, and hence needs only a single dimension, represented by as few as 100 neurons, to capture emotional changes). The model is initialized with a base set of causal encodings consisting of 100-dimensional HRRs combined in the form

antecedent x 'a' + relation x causes + consequent x 'b',

as well as HRRs that represent the successful explanation of a target 'x' (expl x 'x'). For the purposes of this model, only 6 different "filler" values were used, representing three such causal rules ('a' causes 'b', 'c' causes 'd', and 'e' causes 'f'). The populations used have between 2000 and 3200 neurons each and are 100- or 200-dimensional, which is at the lower end of what is required for accurate HRR cleanup. More rules and filler values would require larger and higher-dimensional neural populations, an expansion that is unnecessary for a simple demonstration of abduction using biologically plausible neurons.

Following detection of a surprising 'b', which could be an event, proposition, or any sensory or cognitive data that can be represented via neurons, the change in emotional valence spurs activity in the output population towards generating a hypothesized explanation. This process involves employing several neural populations (representing the memorized rules and HRR convolution/correlation operations) to find an antecedent involved in a causal relationship that has 'b' as the consequent. In terms of HRRs, this means producing (rule # antecedent) for [(rule # relation ≈ causes) and (rule # consequent ≈ 'b')]. This production is accomplished in the 2000-neuron, 100-dimensional output population by means of associative learning through recurrent connectivity and connection weight updating. As activity in this population settles, an HRR cleanup operation is performed to obtain the result of the learned transformation. Specifically, some answer is "chosen" if the cleanup result matches one encoded value significantly more than any of the others (i.e., is above some reasonable threshold value). After the successful generation of an explanatory hypothesis, the emotional valence signal is reversed from surprise (which drove the search for an explanation) to what can be considered pleasure or satisfaction derived from having arrived at a plausible explanation. This in turn induces the output population to produce a representation corresponding to the successful dispatch of the explanandum 'b': namely, the HRR expl_b = expl x 'b'. Upon settling, it can thus be said that the model has accepted the hypothesized cause obtained in the previous stage as a valid explanation for the target 'b'. Settling completes the abductive inference: emotional valence returns to a neutral level, which suspends learning in the output population and causes population firing to return to basal levels of activity.
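Reusing the hypothetical vec/bind/unbind helpers from the sketch above, the retrieval stage can be mimicked directly in vector algebra (again only an illustration; the actual model does this with spiking neural populations and learned transformations):

```python
# Encode the three causal rules from the text as HRRs of the form
# antecedent x 'a' + relation x causes + consequent x 'b', then abduce.

symbols = {s: vec() for s in ["a", "b", "c", "d", "e", "f",
                              "antecedent", "relation", "consequent", "causes"]}

def encode_rule(ante, cons):
    return (bind(symbols["antecedent"], symbols[ante])
            + bind(symbols["relation"], symbols["causes"])
            + bind(symbols["consequent"], symbols[cons]))

rules = [encode_rule("a", "b"), encode_rule("c", "d"), encode_rule("e", "f")]

def cleanup(v):
    """Return the vocabulary item whose vector best matches v."""
    return max(symbols, key=lambda k: symbols[k] @ v)

def abduce(surprising):
    """Find a stored rule whose consequent decodes to the surprising item,
    then decode its antecedent as the hypothesized cause."""
    for rule in rules:
        if cleanup(unbind(rule, symbols["consequent"])) == surprising:
            return cleanup(unbind(rule, symbols["antecedent"]))

print(abduce("b"))  # 'a' -- because "'a' causes 'b'" is among the stored rules
```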

The basic process of abduction outlined previously maps very well to the results obtained from the model. The output population generates a valid hypothesis when surprised (since "a causes b" is the best memorized rule available to handle surprising 'b'), and reversal of emotional valence corresponds to an acceptance of the hypothesis, and hence the successful explanation of 'b'. In sum, the model of abduction outlined here demonstrates how emotion can influence neural activity underlying a cognitive process. Emotional valence acts as a context gate that determines whether the output neural ensemble must conduct a search for some explanation for surprising input, or whether some generated hypothesis needs to be evaluated as a suitable explanation for the surprising input.

Models of Scientific Explanation
 

tblood

Permabanned
Joined
Sep 28, 2008
Messages
25
Abductive Reasoning as a Way of Worldmaking - Résumé/Abstract

Abductive Reasoning as a Way of Worldmaking

The author deals with the operational core of logic, i.e. its diverse procedures of inference, in order to show that logically false inferences may in fact be right because -- in contrast to logical rationality -- they actually enlarge our knowledge of the world. This does not only mean that logically true inferences say nothing about the world, but also that all our inferences are invented hypotheses the adequacy of which cannot be proved within logic but only pragmatically.

In conclusion the author demonstrates, through the relationship between rule-following and rationality, that it is most irrational to want to exclude the irrational: it may, at times, be most rational to think and infer irrationally. Focussing on the operational aspects of knowing as inferring does away with the hiatus between logic and life, cognition and the world (reality) -- or whatever other dualism one wants to invoke: knowing means inferring, inferring means rule-governed interpreting, interpreting is a constructive, synthetic act, and a construction that proves adequate (viable) in the world of experience, in life, in the praxis of living, is, to the constructivist mind, knowledge.

It is the practice of living which provides the orienting standards for constructivist thinking and its judgments of viability. The question of truth is replaced by the question of viability, and viability depends on the (right) kind of experiential fit.
 

tblood

Permabanned
Joined
Sep 28, 2008
Messages
25
Charles Sanders Peirce (1839-1914), the founder of pragmatism, spent 4 decades on the investigation of induction and deduction, models of thinking well established in logic, and supplemented them with an inferential procedure which he called abduction. He distinguished 3 kinds of inference: deduction, induction, and abduction. The abductive form was first called hypothetical, then abductive, then retroductive, and only at a later stage consistently abductive. In a semiotic theory of cognition abduction plays a decisive role because only by abduction can we add to our knowledge of the world. Peirce introduces the new kind of inference as "reasoning a posteriori," thus setting it apart from deduction a priori, and he replaces the three classical terms, major premise, minor premise, and conclusion, with his own terms: rule, case, and result. In this way, the sequential order in which the premises and the conclusion are known may be taken into account. Thus each of the 3 statements of the classical syllogism could in principle take any of the 3 positions, whether as rule, case, or result. The minor premise (the second premise of the classical syllogism), for example, may become the inferred conclusion, as is the case in abduction. The major premise, which contains the predicate, may naturally also be formulated as a rule (law): All humans are mortal. If X is human, X is mortal. Here is the most famous of all syllogisms, the Modus Barbara, the deductive model of inference of the first figure, the best-known instance of which has to do with the immortal Socrates :)

Major Premise (rule): All humans are mortal (MaP)
Minor Premise (case): Socrates is human (SaM)
_____________________________________________________
Conclusion (result): Socrates is mortal (SaP)

The categorical syllogism relates 3 concepts, S (subject), P (predicate), and M (middle), in 3 statements (major premise, minor premise, conclusion) in order to examine their validity. In Peirce's view, the goal of all inferential thinking is to discover something we do not know and thus enlarge our knowledge by considering something we do know.
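Incidentally, the redundancy of deduction can be made vivid in a proof assistant. In this minimal Lean 4 sketch (the axiom names are mine), the conclusion is literally nothing but the rule applied to the case:

```lean
-- Modus Barbara: the result is the rule applied to the case, which is why
-- deduction is truth-conserving yet adds nothing beyond its premises.
axiom Person : Type
axiom socrates : Person
axiom Human : Person → Prop
axiom Mortal : Person → Prop
axiom major : ∀ p : Person, Human p → Mortal p  -- rule: all humans are mortal
axiom minor : Human socrates                    -- case: Socrates is human
theorem result : Mortal socrates := major socrates minor  -- result: Socrates is mortal
```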

FORMS OF INFERENCE


Boxes with continuous lines contain that which is presupposed as given/true; boxes with dotted lines contain hypotheses that are inferred.

In logic as well as in the philosophy of science a valid deduction is considered to be truth-conserving; if the premises are true, the conclusion must be true, too. The price to be paid for this necessary truth, however, is that the information content of the conclusion is already implicitly contained in the premises. The "mortality of Socrates," the conclusion supplied by Modus Barbara, is nothing new; it was completely contained in the premises. Deduction, therefore, is not synthetic (content-increasing) and does not lead to new knowledge. It is analytically true (redundant) and has, therefore, been considered merely an "explanatory statement" in the more recent discussion. Deductive thinking proceeds from the general (the rule), through the subsumption of the singular case under the rule, to the assertion of the particular (the result), as the arrows in the figure indicate. In the case of induction the premises (the initial basis) are observational statements, and an inferred conclusion (e.g., a hypothetical rule: All M are P) is considered to be content-increasing, but not truth-conserving, because the inference is only a hypothesis that cannot be proved with ultimate certainty. Induction -- the converse of deduction -- progresses from the particular to the general. Therefore the arrows point from the bottom to the top.

For a long time, Peirce classified induction as a synthetic inference, until he had an insight of the greatest relevance to the philosophy of science, namely, that a valid induction already presupposes as a hypothesis the law or the general rule (M is P) which it is supposed to infer in the first place. For Peirce, inductive inferences must satisfy 2 conditions in order to be valid: the sample must be a random selection from the underlying totality, and the specific characteristic that is to be examined by means of the sample must have been defined before the sample is drawn. The significance of this requirement, called "pre-designation" by Peirce, for the definition of inductive inference is that the predicate P must already be known before the sample (S', S'', S''') is selected from the totality (M). If, however, the property to be examined must be defined before the sample is selected, this is only possible on the basis of a conjecture that the property exists in the corresponding totality before the inductive inference is made. How else could the property be known in advance of sample selection? Valid induction, therefore, already presupposes as a hypothesis the conclusion that is to be inferred. More precisely, inductive reasoning is based on a given hypothesis (M is P) and then, by means of samples (S', S''), seeks to establish the relative frequency (p) of the property (P) in the totality (M) with regard to that hypothesis... The condition that the property to be examined must be pre-designated in advance of sample selection makes Peirce conclude explicitly that induction cannot lead to new discoveries. This could mean that the scientist is bound to know already (implicitly) what he does not, in fact, know that he knows.

As it is logically excluded that there can be knowledge before knowing, the cognizing subjects must invent hypotheses on their own before any experience or experimentation takes place. Peirce's logical analysis shows, on the one hand, that induction does not belong among the synthetic forms of inference that, in one way or another, may enlarge our knowledge of the world. On the other hand, any kind of induction is dependent upon hypotheses which must have been constructed beforehand by cognizing subjects. And this process of construction is abductive, as far as its logical form is concerned. If, however, neither induction nor deduction enlarge our knowledge of the world, then abduction as the only knowledge-generating mechanism needs to become the central focus of discussion.
 

tblood

Permabanned
Joined
Sep 28, 2008
Messages
25
The abductive mode of inference involves 2 steps: in the first step a "phenomenon" to be explained or understood is presented (1), in Peirce's terminology a "result," which is the derived conclusion in the classical schema; then a second step introduces an available or newly constructed hypothesis (rule/law) (2), by means of which the case (3) is abducted. For Peirce, the function of abductive inference is forming an explanatory hypothesis. It is the only logical operation which introduces a new idea. Deduction proves that something must be; induction shows that something actually is operative; abduction merely suggests that something may be. It enables us to bridge the traditional gap between the arts and the sciences because it can be used as a model both of explanation and of understanding. Presented as an inverted modus ponens, the abductive schema looks like this:

[Figure: the abductive schema presented as an inverted modus ponens]


As can be seen, this form of reasoning runs from consequent to antecedent, from effects to causes, a fallacy in (traditional) logic (a fallacia consequentis). The causal formulation of the explanation of the phenomenon that the road is wet would run as follows: the road is wet because it is raining (because it has rained). The rain is the cause inferred from the effect (the consequent). Abduction, i.e., inferring causes from effects, represents an explanatory principle which, though logically invalid, may still be confirmed inductively. The (potential) confirmation of such hypotheses (logical inferences or theories) says nothing about reality (ontology) in the sense of a representation or mapping, but only that the hypotheses are functioning. Therefore, the structures of our logic(s) or our theories do not mirror the structures of things, nor are they derived (deduced) from them. With the primary act of the formulation of such a hypothesis, a paradigm, a rule, a method of measurement is provisionally laid down by means of which we then "measure" or "compare" what we conceive of as "nature." Only in the second act do the notions of "correct/incorrect," fitting/non-fitting, rule-conforming/rule-breaking, etc. arise.

The actual constructive act consists in the a priori specification of a "measuring method" by the cognizing subject (the scientific community), because the world cannot determine for us directly what kind of measuring method or paradigm we must use. And just as the measuring method must be specified before measuring can take place, so induction must be directed/governed/controlled by a hypothesis that has been constructed abductively beforehand. The world, or nature, thus functions as a selection mechanism, as a constraint, which determines whether our hypotheses fit or fail. In the latter case, the scientific community is forced to change its theories, paradigms, or conceptual systems in such a way that they allow the derivation of viable hypotheses that enlarge our knowledge of the world in the constructivist sense. Viable hypotheses, therefore, do not admit of positive statements about how the world really is, but only negative ones to the effect that other hypotheses do not work. We can, on the other hand, also draw the conclusion that, for a reality different from ours, we would need other "standards," other systems of categories, etc. to be able to orientate ourselves in it, because the ones we have would not work. If an abductive inference establishes itself in the scientific community as a new paradigm (a new explanation of a certain phenomenon, like Kepler's hypothesis of the elliptical orbit of the planet Mars), then the logic of the corresponding conceptual system has changed.

DIAGNOSTIC INFERENCES ARE ABDUCTIVE, TOO


An example of a medical diagnosis

Every diagnostic statement by a medical expert functions as an abductive inference.
 

reason

New member
Joined
Apr 26, 2007
Messages
1,209
MBTI Type
ESFJ
Abductive inference is a curious notion. It is often characterised as an inference to the best explanation; however, the best explanation is always true. Therefore, an abductive inference would be one where the conclusion is true regardless of the premises -- a strange kind of inference indeed.
 