After looking more closely, I realized I was previously confusing deductive reasoning with inductive reasoning. I was saying deductive reasoning but meaning inductive reasoning, and saying inductive reasoning but meaning inference. I never was much good with labels. I think those adjustments should address the dog/fag problem. Inductive reasoning does not require all predicates to be true; deductive reasoning does.
Thread: Induction and Deduction

09-23-2008, 08:22 PM #51

09-23-2008, 08:27 PM #52
 Join Date
 Sep 2008
 Posts
 6
Inductive Reasoning
Wikipedia also has an interesting primer on inductive reasoning:
The problem of induction is the philosophical question of whether inductive reasoning is valid. That is, what is the justification for either:
 generalizing about the properties of a class of objects based on some number of observations of particular instances of that class (for example, the inference that "all swans we have seen are white, and therefore all swans are white," before the discovery of black swans) or,
 presupposing that a sequence of events in the future will occur as it always has in the past (for example, that the laws of physics will hold as they have always been observed to hold).
The problem calls into question all empirical claims made in everyday life or through the scientific method. Although the problem dates back to the Pyrrhonism of ancient philosophy, David Hume introduced it in the mid-18th century, with the most notable response provided by Karl Popper 2 centuries later.
Pyrrhonian skeptic Sextus Empiricus first questioned induction, reasoning that a universal rule could not be established from an incomplete set of particular instances. He wrote:
when they propose to establish the universal from the particulars by means of induction, they will effect this by a review of either all or some of the particulars. But if they review some, the induction will be insecure, since some of the particulars omitted in the induction may contravene the universal; while if they are to review all, they will be toiling at the impossible, since the particulars are infinite and indefinite.
Those who claim for themselves to judge the truth are bound to possess a criterion of truth. This criterion, then, either is without a judge's approval or has been approved. But if it is without approval, whence comes it that it is trustworthy? For no matter of dispute is to be trusted without judging. And, if it has been approved, that which approves it, in turn, either has been approved or has not been approved, and so on ad infinitum.
In inductive reasoning, one makes a series of observations and infers a new claim based on them. For instance, from a series of observations that at sea level (approximately 14 psi) samples of water freeze at 0°C (32°F), it seems valid to infer that the next sample of water will do the same, or, in general, that at sea level water freezes at 0°C. That the next sample of water freezes under those conditions merely adds to the series of observations. Two problems arise. First, it is not certain, regardless of the number of observations, that water always freezes at 0°C at sea level; to be certain, it would have to be known that the laws of nature are immutable. Second, the observations themselves do not establish the validity of inductive reasoning, except inductively: observations that inductive reasoning has worked in the past do not ensure that it will always work. This second problem is the problem of induction.
David Hume
David Hume described the problem in "An Enquiry concerning Human Understanding," §4, based on his epistemological framework. Here, "reason" refers to deductive reasoning and "induction" refers to inductive reasoning. First, Hume ponders the discovery of causal relations, which form the basis for what he refers to as "matters of fact." He argues that causal relations are found not by reason, but by induction. This is because for any cause, multiple effects are conceivable, and the actual effect cannot be determined by reasoning about the cause; instead, one must observe occurrences of the causal relation to discover that it holds. For example, when one thinks of "a billiard ball moving in a straight line toward another," one can conceive that the first ball bounces back with the second ball remaining at rest, the first ball stops and the second ball moves, or the first ball jumps over the second, etc. There is no reason to conclude any of these possibilities over the others. Only through previous observation can it be predicted, inductively, what will actually happen with the balls. In general, it is not necessary that causal relations in the future resemble causal relations in the past, as it is always conceivable otherwise; for Hume, this is because the negation of the claim does not lead to a contradiction.
Next, Hume ponders the justification of induction. If all matters of fact are based on causal relations, and all causal relations are found by induction, then induction must be shown to be valid somehow. He uses the fact that induction assumes a valid connection between the proposition "I have found that such an object has always been attended with such an effect" and the proposition "I foresee that other objects which are in appearance similar will be attended with similar effects." One connects these two propositions not by reason, but by induction. This claim is supported by the same reasoning as that for causal relations above, and by the observation that even rationally inexperienced or inferior people can infer, for example, that touching fire causes pain. Hume challenges other philosophers to come up with a (deductive) reason for the connection. If he is right, then the justification of induction can be only inductive. But this begs the question; as induction is based on an assumption of the connection, it cannot itself explain the connection. In this way, the problem of induction is not only concerned with the uncertainty of conclusions derived by induction, but doubts the very principle through which those uncertain conclusions are derived.
So how does Hume get us out of the woods? Well, he says that although induction is not performed by reason, we nonetheless perform it and benefit from it. He proposes a descriptive explanation for the nature of induction in §5 of the Enquiry, titled "Skeptical solution of these doubts". It is by custom or habit that one draws the inductive connection described above, and "without the influence of custom we would be entirely ignorant of every matter of fact beyond what is immediately present to the memory and senses." The result of custom is belief, which is instinctual and much stronger than imagination alone. Rather than unproductive radical skepticism about everything, Hume said that he was actually advocating a practical skepticism based on common sense, wherein the inevitability of induction is accepted. Someone who insists on reason for certainty might, for instance, starve to death, as they would not infer the benefits of food based on previous observations of nutrition.

09-25-2008, 02:06 PM #53
 Join Date
 Sep 2008
 Posts
 5
A Case of Math Induction
All horses are the same color
Suppose that we have a set of 5 horses. We wish to prove that they are all the same color. Suppose that we had a proof that all sets of 4 horses were the same color. If that were true, we could prove that all five horses are the same color by removing a horse to leave a group of 4 horses. Do this in two ways, and we have two different groups of four horses. By our supposed existing proof, since these are groups of 4, all horses in them must be the same color. For example, the first, second, third and fourth horses constitute a group of 4, and thus must all be the same color; and the second, third, fourth and fifth horses also constitute a group of 4 and thus must also all be the same color. Because these two groups share the second, third and fourth horses, all 5 horses in the group of five must be the same color.
But how are we to get a proof that all sets of 4 horses are the same color? We apply the same logic again. By the same process, a group of 4 horses could be broken down into groups of 3, and then a group of 3 horses could be broken down into groups of 2, and so on. Eventually we will reach a group size of 1, and it is obvious that all horses in a group of 1 horse must be the same color. By the same logic we can also increase the group size. A group of 5 horses can be increased to a group of 6, and so on upwards, so that all finite-sized groups of horses must be the same color.
The argument above makes the implicit assumption that the two subsets of horses to which the induction assumption is applied have a common element. This is not true when n = 1, that is, when the original set contains only 2 horses. Indeed, let the 2 horses be horse A and horse B. When horse A is removed, it is true that the remaining horses in the set are the same color (only horse B remains). If horse B is removed instead, this leaves a different set containing only horse A, which may or may not be the same color as horse B. The problem in the argument is the assumption that because each of these 2 sets contains only one color of horse, the original set also contained only one color of horse. Because there are no common elements (horses) in the two sets, it is unknown whether the two horses share the same color. The proof forms a falsidical paradox; it seems to show something manifestly false by valid reasoning, but in fact the reasoning is flawed. The horse paradox exposes the pitfalls arising from failure to consider special cases for which a general statement may be false.
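The failure of the induction step can be checked mechanically. Here is a small sketch (an illustration added here, not part of the original argument) that tests whether the two subsets used in the step actually share a horse; the overlap vanishes exactly at the 2-horse case:

```python
# Check where the "all horses are the same color" induction step breaks:
# the two overlapping subsets share a common element only when the set
# has at least 3 members.

def subsets_overlap(n):
    """For a set of n+1 horses, remove the first or the last horse and
    check whether the two remaining subsets share at least one element."""
    horses = list(range(n + 1))
    without_first = set(horses[1:])
    without_last = set(horses[:-1])
    return len(without_first & without_last) > 0

# The induction step is only valid when the subsets overlap.
for n in range(1, 6):
    print(n + 1, "horses:", "overlap" if subsets_overlap(n) else "NO overlap")
```

For 2 horses the subsets are disjoint, so the color of one horse tells us nothing about the other, and the whole chain of inductions collapses at its base.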

09-28-2008, 05:09 PM #54
 Join Date
 Sep 2008
 Posts
 25
Abductive Reasoning
"The Gods have certainty, whereas to us as men conjecture [only is possible]."
Alcmaeon of Croton
Abduction, or inference to the best explanation, is a method of reasoning in which one chooses the hypothesis that would, if true, best explain the relevant evidence. Abductive reasoning starts from a set of accepted facts and infers their most likely, or best, explanations. The term abduction is also sometimes used to just mean the generation of hypotheses to explain observations or conclusions, but the former definition is more common both in philosophy and computing.
Deduction
allows deriving b as a consequence of a. In other words, deduction is the process of deriving the consequences of what is assumed. Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. A deductive statement is based on accepted truths, e.g. All bachelors are unmarried men. It is true by definition and is independent of sense experience.
Induction
allows inferring a from multiple instantiations of b when a entails b. Induction is the process of inferring probable antecedents as a result of observing multiple consequents. An inductive statement requires perception for it to be held true. For example, the statement 'it is snowing outside' cannot be verified until one looks or goes outside to see whether it is true or not. Induction requires sense experience.
Abduction
allows inferring a as an explanation of b. Because of this, abduction allows the precondition a to be inferred from the consequence b. Deduction and abduction thus differ in the direction in which a rule like "a entails b" is used for inference. As such, abduction is formally equivalent to the logical fallacy of affirming the consequent (or post hoc ergo propter hoc), because there are multiple possible explanations for b. Unlike deduction, and in some sense induction, abduction can produce results that are incorrect within its formal system. However, it can still be useful as a heuristic, especially when something is known about the likelihood of different causes for b.
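The three directions of inference can be made concrete with a toy sketch; the rule base and the names below are invented for illustration, standing in for the schematic rule "a entails b":

```python
# Toy illustration of the three directions of inference over a single
# invented rule "rain causes wet_road". Names are made up for the example.

RULES = {"rain": "wet_road"}  # antecedent -> consequent

def deduce(fact):
    """Deduction: from the antecedent and the rule, derive the consequent."""
    return RULES.get(fact)

def abduce(observation):
    """Abduction: from the consequent, hypothesize an antecedent.
    Note: other causes (e.g. a sprinkler) could explain the same observation."""
    return [a for a, b in RULES.items() if b == observation]

def induce(observations):
    """Induction: from repeated (antecedent, consequent) pairs, conjecture
    a general rule; never certain, only probable."""
    pairs = set(observations)
    return {a: b for a, b in pairs} if len(pairs) == 1 else None

print(deduce("rain"))                      # wet_road (truth-preserving)
print(abduce("wet_road"))                  # ['rain'] (an explanation, not a proof)
print(induce([("rain", "wet_road")] * 5))  # {'rain': 'wet_road'} (a conjectured rule)
```

Deduction runs the rule forward, abduction runs it backward, and induction tries to conjecture the rule itself from repeated observations.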
Here is an interesting book related to the subject:
Making Things Happen: A Theory of Causal Explanation

09-29-2008, 08:19 PM #55
 Join Date
 Sep 2008
 Posts
 23
Explanation of why things happen is one of humans' most important cognitive operations. Abductive inference that generates explanatory hypotheses is an inherently risky form of reasoning because of the possibility of alternative explanations. Inferring that Paul has influenza because it explains his fever, aches, and cough is risky because other diseases such as meningitis can cause the same symptoms. People should only accept an explanatory hypothesis if it is better than its competitors, a form of inference that philosophers call inference to the best explanation. A cognitive model could perform this kind of inference by taking into account 3 criteria for the best explanation: consilience, which is a measure of how much a hypothesis explains; simplicity, which is a measure of how few additional assumptions a hypothesis needs to carry out an explanation; and analogy, which favors hypotheses whose explanations are analogous to accepted ones.
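As a rough, invented illustration of how such a model might compare competing hypotheses, the three criteria can be combined into a single score (the weighting, symptom sets, and assumption counts below are assumptions made for this example, not taken from any actual cognitive model):

```python
# Minimal invented scoring sketch of "inference to the best explanation"
# using the three criteria named above.

def ibe_score(explains, assumptions, analogous_to_accepted):
    consilience = len(explains)          # how much the hypothesis explains
    simplicity = -assumptions            # fewer extra assumptions is better
    analogy = 1 if analogous_to_accepted else 0
    return consilience + simplicity + analogy

# Paul's symptoms: fever, aches, cough (assumed explanatory coverage below).
flu = ibe_score({"fever", "aches", "cough"}, assumptions=0,
                analogous_to_accepted=True)
meningitis = ibe_score({"fever", "aches"}, assumptions=2,
                       analogous_to_accepted=False)
best = "influenza" if flu > meningitis else "meningitis"
print(best)  # influenza
```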
Garry Kasparov playing against Deep Blue, the first machine to win a chess match against a world champion.
In AI (artificial intelligence), the term "abduction" is often used to describe inference to the best explanation as well as the generation of hypotheses. In actual systems, these two processes can be continuous, for example in the PEIRCE tool for abductive inference described by Josephson and Josephson (primarily an engineering tool rather than a cognitive model). In ordinary life and in many areas of science the relation between what is explained and what does the explaining is usually looser than deduction. An alternative conception of this relation is provided by understanding an explanation as the application of a causal schema, which is a pattern that describes the relation between causes and effects.
Probabilistic Models
Another, more quantitative way of establishing a looser relation than deduction between explainers and their targets is to use probability theory. Salmon proposed that the key to explanation is statistical relevance, where a property B in a population A is relevant to a property C if the probability of B given A and C is different from the probability of B given A alone: P(B|A&C) ≠ P(B|A). Salmon later moved away from a statistical understanding of explanation toward a causal mechanism account, but other philosophers and artificial intelligence researchers have focused on probabilistic accounts of causality and explanation. The core idea here is that people explain why something happened by citing the factors that made it more probable than it would have been otherwise. The main computational method for modeling explanation probabilistically is Bayesian networks. A Bayesian network is a directed graph in which the nodes are statistical variables, the edges between them represent conditional probabilities, and no cycles are allowed: you cannot have A influencing B which in turn influences A. Causal structure and probability are connected by the Markov assumption, which says that a variable A in a causal graph is independent of all other variables that are not its effects, conditional on its direct causes in the graph. Bayesian networks are convenient ways of representing causal relationships, as shown below.
Causal map of a disease. In a Bayesian network, each node is a variable and the arrow indicates causality represented by conditional probability.
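Salmon's statistical-relevance condition is easy to illustrate numerically. The joint distribution below is invented for the example; it shows a case where P(B|A&C) differs from P(B|A), so C is statistically relevant to B:

```python
# Tiny invented joint distribution illustrating statistical relevance.
# Each entry: (has_C, has_B) -> probability; everyone is in population A.
joint = {
    (True, True): 0.30, (True, False): 0.10,
    (False, True): 0.15, (False, False): 0.45,
}

p_b = sum(p for (_, b), p in joint.items() if b)        # P(B | A)
p_c = sum(p for (c, _), p in joint.items() if c)        # P(C | A)
p_b_given_c = sum(p for (c, b), p in joint.items() if c and b) / p_c

# 0.45 vs 0.75: the probabilities differ, so C is relevant to B.
print(round(p_b, 2), round(p_b_given_c, 2))
```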
Despite their computational and philosophical power, there are reasons to doubt the psychological relevance of Bayesian networks. First, there is abundant experimental evidence that reasoning with probabilities is not a natural part of people's inferential practice. Computing with Bayesian networks requires a very large number of conditional probabilities that people not working in statistics have had no chance to acquire. Second, there is no reason to believe that people have the sort of information about independence that is required to satisfy the Markov condition and to make inference in Bayesian networks computationally tractable. Third, although it is natural to represent causal knowledge as directed graphs, there are many scientific and everyday contexts in which such graphs should have cycles because of feedback loops. For example, marriage breakdown often occurs because of escalating negative affect, in which the negative emotions of one partner produce behaviors that increase the negative emotions of the other, which then produce behaviors that increase the negative emotions of the first partner. Such feedback loops are also common in the biochemical pathways needed to explain disease. Fourth, probability by itself is not adequate to capture people's understanding of causality. Hence it is not at all obvious that Bayesian networks are the best way to model explanation by human scientists. Even in statistically rich fields such as the social sciences, scientists rely on an intuitive, nonprobabilistic sense of causality of the sort explained below.

09-29-2008, 08:52 PM #56
 Join Date
 Sep 2008
 Posts
 23
Neural Network Models
One benefit of attempting neural analyses of explanation is that it becomes possible to incorporate multimodal aspects of cognitive processing that tend to be ignored by deductive, schematic, and probabilistic perspectives. In medicine, for example, doctors and researchers may employ visual hypotheses (say, about the shape and location of a tumor) to explain observations that can be represented using sight, touch, and smell as well as words. Moreover, the process of abductive inference has emotional inputs and outputs, because it is usually initiated when an observation is found to be surprising or puzzling, and it often results in a sense of pleasure or satisfaction when a satisfactory hypothesis is used to generate an explanation. Here is an outline of the process:
The process of abductive inference
The framework used proposes three basic principles of neural computation:
 Neural representations are defined by a combination of nonlinear encoding and linear decoding.
 Transformations of neural representations are linearly decoded functions of variables that are represented by a neural population.
 Neural dynamics are characterized by considering neural representations as control theoretic state variables.
These principles are applied to a particular neural system by identifying the interconnectivity of its subsystems, neuron response functions, neuron tuning curves, subsystem functional relations, and overall system behavior. The complexity of a representation is constrained by the dimensionality of the neural population that represents it. In rough terms, a single dimension in such a representation can correspond to one discrete "aspect" of that representation (e.g., speed and direction are the dimensional components of the vector quantity velocity). A hierarchy of representational complexity thus follows from neural activity defined in terms of one-dimensional scalars; vectors, with a finite but arbitrarily large number of dimensions; or functions, which are essentially continuous indexings of vector elements, thus ranging over infinite-dimensional spaces. This framework provides for arbitrary computations to be performed in biologically realistic neural populations, and has been successfully applied to phenomena as diverse as lamprey locomotion, path integration by rats, and the Wason card selection task. The Wason task model, in particular, is structured very similarly to the model of abductive inference discussed here. Both employ holographic reduced representations, a high-dimensional form of distributed representation.
Holographic reduced representations (HRRs) combine the neurological plausibility of distributed representations with the ability to maintain complex, embedded structural relations in a computationally efficient manner. This ability is common in symbolic models and is often singled out as deficient in distributed connectionist frameworks. HRRs have the important advantage of fixed dimensionality: the combination of two n-dimensional HRRs produces another n-dimensional HRR, rather than the 2n or even n² dimensionality one would obtain using tensor products. This avoids the explosive computational resource requirements of tensor products to represent arbitrary, complex structural relationships. HRR representations are constructed through multiplicative circular convolution (denoted by x) and are decoded by the approximate inverse operation, circular correlation (denoted by #). In general, if C = AxB is encoded, then C#A ≈ B and C#B ≈ A. The approximate nature of the unbinding process introduces a degree of noise, proportional to the complexity of the HRR encoding in question and in inverse proportion to the dimensionality of the HRR. As noise tolerance is a requirement of any neurologically plausible model, this loss of representational information is acceptable, and the "cleanup" method of recognizing encoded HRR vectors using the dot product can be used to find the vector that best fits what was decoded. Note that HRRs may also be combined by simple superposition (i.e., addition): P = QxR + XxY, where P#R ≈ Q, P#X ≈ Y, and so on. The operations required for convolution and correlation can be implemented in a recurrent connectionist network.
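The binding (x) and unbinding (#) operations can be sketched with NumPy using FFT-based circular convolution and correlation; the dimensionality and random seed below are arbitrary choices for the demonstration:

```python
# Sketch of HRR binding/unbinding: circular convolution encodes C = AxB,
# and circular correlation C#A approximately recovers B.
import numpy as np

rng = np.random.default_rng(0)
n = 512  # dimensionality; higher n means less unbinding noise

def hrr(n):
    # Random vector with expected unit norm (standard HRR convention).
    return rng.normal(0, 1 / np.sqrt(n), n)

def conv(a, b):
    """Circular convolution (binding), computed in the frequency domain."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n)

def corr(a, b):
    """Circular correlation (approximate unbinding)."""
    return np.fft.irfft(np.conj(np.fft.rfft(a)) * np.fft.rfft(b), n)

A, B, X = hrr(n), hrr(n), hrr(n)
C = conv(A, B)
decoded = corr(A, C)  # noisy approximation of B

# "Cleanup": the decoded vector matches B far better than an unrelated X.
print(decoded @ B > decoded @ X)  # True
```

The decoded vector is a noisy copy of B; the final dot-product comparison is the "cleanup" step, picking the stored vector that best matches what was unbound.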
In brief, the new model of abductive inference involves several large, high-dimensional populations to represent the data stored via HRRs and learned HRR transformations (the main output of the model), and a smaller population representing emotional valence information (abduction only requires considering emotion scaling from surprise to satisfaction, and hence needs only a single dimension, represented by as few as 100 neurons, to represent emotional changes). The model is initialized with a base set of causal encodings consisting of 100-dimensional HRRs combined in the form
antecedent x 'a' + relation x causes + consequent x 'b',
as well as HRRs that represent the successful explanation of a target 'x' (expl x 'x'). For the purposes of this model, only 6 different "filler" values were used, representing three such causal rules ('a' causes 'b', 'c' causes 'd', and 'e' causes 'f'). The populations used have between 2000 and 3200 neurons each and are 100- or 200-dimensional, which is at the lower end of what is required for accurate HRR cleanup. More rules and filler values would require larger and higher-dimensional neural populations, an expansion that is unnecessary for a simple demonstration of abduction using biologically plausible neurons.
Following detection of a surprising 'b', which could be an event, proposition, or any sensory or cognitive data that can be represented via neurons, the change in emotional valence spurs activity in the output population towards generating a hypothesized explanation. This process involves employing several neural populations (representing the memorized rules and HRR convolution/correlation operations) to find an antecedent involved in a causal relationship that has 'b' as the consequent. In terms of HRRs, this means producing (rule # antecedent) for [(rule # relation ≈ causes) and (rule # consequent ≈ 'b')]. This production is accomplished in the 2000-neuron, 100-dimensional output population by means of associative learning through recurrent connectivity and connection weight updating. As activity in this population settles, an HRR cleanup operation is performed to obtain the result of the learned transformation. Specifically, some answer is "chosen" if the cleanup result matches one encoded value significantly more than any of the others (i.e., is above some reasonable threshold value). After the successful generation of an explanatory hypothesis, the emotional valence signal is reversed from surprise (which drove the search for an explanation) to what can be considered pleasure or satisfaction derived from having arrived at a plausible explanation. This in turn induces the output population to produce a representation corresponding to the successful dispatch of the explanandum 'b': namely, the HRR expl_b = expl x 'b'. Upon settling, it can thus be said that the model has accepted the hypothesized cause obtained in the previous stage as a valid explanation for the target 'b'. Settling completes the abductive inference: emotional valence returns to a neutral level, which suspends learning in the output population and causes population firing to return to basal levels of activity.
The basic process of abduction outlined previously maps very well to the results obtained from the model. The output population generates a valid hypothesis when surprised (since "a causes b" is the best memorized rule available to handle surprising 'b'), and reversal of emotional valence corresponds to an acceptance of the hypothesis, and hence the successful explanation of 'b'. In sum, the model of abduction outlined here demonstrates how emotion can influence neural activity underlying a cognitive process. Emotional valence acts as a context gate that determines whether the output neural ensemble must conduct a search for some explanation for surprising input, or whether some generated hypothesis needs to be evaluated as a suitable explanation for the surprising input.
Models of Scientific Explanation

09-30-2008, 02:11 PM #57
 Join Date
 Sep 2008
 Posts
 25
Abductive Reasoning as a Way of Worldmaking: Résumé/Abstract
The author deals with the operational core of logic, i.e. its diverse procedures of inference, in order to show that logically false inferences may in fact be right because, in contrast to logical rationality, they actually enlarge our knowledge of the world. This means not only that logically true inferences say nothing about the world, but also that all our inferences are invented hypotheses whose adequacy cannot be proved within logic but only pragmatically.
In conclusion the author demonstrates, through the relationship between rule-following and rationality, that it is most irrational to want to exclude the irrational: it may, at times, be most rational to think and infer irrationally. Focussing on the operational aspects of knowing as inferring does away with the hiatus between logic and life, cognition and the world (reality), or whatever other dualism one wants to invoke: knowing means inferring, inferring means rule-governed interpreting, interpreting is a constructive, synthetic act, and a construction that proves adequate (viable) in the world of experience, in life, in the praxis of living, is, to the constructivist mind, knowledge.
It is the practice of living which provides the orienting standards for constructivist thinking and its judgments of viability. The question of truth is replaced by the question of viability, and viability depends on the (right) kind of experiential fit.

09-30-2008, 02:44 PM #58
 Join Date
 Sep 2008
 Posts
 25
Charles Sanders Peirce (1839–1914), the founder of pragmatism, spent 4 decades investigating induction and deduction, models of thinking well established in logic, and supplemented them with an inferential procedure which he called abduction. He distinguished 3 kinds of inference: deduction, induction, and abduction. The abductive form was first called hypothetical, then abductive, then retroductive, and only at a later stage consistently abductive. In a semiotic theory of cognition abduction plays a decisive role because only by abduction can we add to our knowledge of the world. Peirce introduces the new kind of inference as "reasoning a posteriori," thus setting it apart from deduction a priori, and he replaces the three classical terms, major premise, minor premise, and conclusion, with his own terms: rule, case, and result. In this way, the sequential order in which the premises and the conclusion are known may be taken into account. Thus each of the 3 statements of the classical syllogism could in principle take any of the 3 positions, whether rule, case, or result. The minor premise (the second premise of the classical syllogism), for example, may become the inferred conclusion, as is the case in abduction. The major premise, which contains the predicate, may naturally also be formulated as a rule (law): All humans are mortal. If X is human, X is mortal. Here is the most famous of all syllogisms, Modus Barbara, the deductive model of inference in the figure, the best-known instance of which has to do with the mortal Socrates:
Major Premise (rule): All humans are mortal...................(MaP)
Minor Premise (case): Socrates is human......................(SaM)
_____________________________________________________
Conclusion (result): Socrates is mortal..........................(SaP)
The categorical syllogism relates 3 concepts, S (subject), P (predicate), and M (middle), in 3 statements (major premise, minor premise, conclusion) in order to examine their validity. In Peirce's view, the goal of all inferential thinking is to discover something we do not know and thus enlarge our knowledge by considering something we do know.
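Modus Barbara can be sketched as set inclusion: if M ⊆ P (rule) and S ⊆ M (case), then S ⊆ P (result). The sets below are an invented toy encoding, not Peirce's notation:

```python
# Barbara as set inclusion (an invented toy encoding for illustration).
humans = {"Socrates", "Plato"}          # M (middle term)
mortals = humans | {"Fido"}             # P: all humans are mortal (M <= P)
socrates_set = {"Socrates"}             # S: Socrates is human (S <= M)

# The rule and the case jointly entail the result by transitivity of <=.
assert socrates_set <= humans <= mortals
print(socrates_set <= mortals)  # True: Socrates is mortal
```

As the text notes, the conclusion adds nothing new: it was already implicit in the two premises, which is exactly why deduction is truth-conserving but not content-increasing.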
FORMS OF INFERENCE
Boxes with continuous lines contain that which is presupposed as given/true; boxes with dotted lines contain hypotheses that are inferred.
In logic as well as in the philosophy of science a valid deduction is considered to be truth-conserving; if the premises are true, the conclusion must be true, too. The price to be paid for this necessary truth, however, is that the information content of the conclusion is already implicitly contained in the premises. The "mortality of Socrates," the conclusion supplied by Modus Barbara, is nothing new; it was completely contained in the premises. Deduction, therefore, is not synthetic (content-increasing) and does not lead to new knowledge. It is analytically true (redundant) and has, therefore, been considered merely an "explanatory statement" in the more recent discussion. Deductive thinking proceeds from the general (the rule), through the subsumption of the singular case under the rule, to the assertion of the particular (the result), as the arrows in the figure indicate. In the case of induction the premises (the initial basis) are observational statements, and an inferred conclusion (e.g., a hypothetical rule: All M are P) is considered to be content-increasing, but not truth-conserving, because the inference is only a hypothesis that cannot be proved with ultimate certainty. Induction, the converse of deduction, progresses from the particular to the general. Therefore the arrows point "from the bottom to the top."
For a long time, Peirce classified induction as a synthetic inference until he had an insight of the greatest relevance to the philosophy of science, namely, that a valid induction already presupposes as a hypothesis the law or general rule (M is P) which it is supposed to infer in the first place. For Peirce, inductive inferences must satisfy 2 conditions in order to be valid: the sample must be a random selection from the underlying totality, and the specific characteristic that is to be examined by means of the sample must have been defined before the sample is drawn. The significance of this requirement, called "predesignation" by Peirce, for the definition of inductive inference is that the predicate P must already be known before the sample (S′, S″, S‴) is selected from the totality (M). If, however, the property to be examined must be defined before the sample is selected, this is possible only on the basis of a conjecture that the property exists in the corresponding totality before the inductive inference is made. How else could the property be known in advance of sample selection? Valid induction, therefore, already presupposes as a hypothesis the conclusion that is to be inferred. More precisely, inductive reasoning is based on a given hypothesis (M is P) and then, by means of samples (S′, S″), seeks to establish the relative frequency (p) of the property (P) in the totality (M) with regard to that hypothesis. The condition that the property to be examined must be predesignated in advance of sample selection makes Peirce conclude explicitly that induction cannot lead to new discoveries. This could mean that the scientist is bound to know already (implicitly) what he does not, in fact, know that he knows.
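Peirce's predesignation requirement can be illustrated with a small sampling sketch: the property P is fixed before the sample is drawn, and induction then only estimates its relative frequency p in the totality M. The population and the 70% frequency below are invented for the example:

```python
# Predesignation sketch: P is fixed in advance; sampling only estimates
# the relative frequency p of P in the totality M, it discovers nothing new.
import random

random.seed(42)
M = [random.random() < 0.7 for _ in range(10_000)]  # totality: ~70% have P

# P was predesignated above; now draw a random sample and estimate p.
sample = random.sample(M, 500)
p_hat = sum(sample) / len(sample)
print(abs(p_hat - 0.7) < 0.1)  # the estimate tracks the presupposed frequency
```

The hypothesis "members of M may have P" had to exist before sampling; the sample can only refine p, which is Peirce's point that induction presupposes the very hypothesis it tests.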
As it is logically excluded that there can be knowledge before knowing, cognizing subjects must invent hypotheses on their own before any experience or experimentation takes place. Peirce's logical analysis shows, on the one hand, that induction does not belong among the synthetic forms of inference that, in one way or another, may enlarge our knowledge of the world. On the other hand, any kind of induction is dependent upon hypotheses which must have been constructed beforehand by cognizing subjects. And this process of construction is abductive, as far as its logical form is concerned. If, however, neither induction nor deduction enlarges our knowledge of the world, then abduction, as the only knowledge-generating mechanism, needs to become the central focus of discussion.

09302008, 03:34 PM #59
 Join Date
 Sep 2008
 Posts
 25
The abductive mode of inference involves two steps: in the first step, a "phenomenon" to be explained or understood is presented (1), in Peirce's terminology a "result," which is the derived conclusion in the classical schema; in the second step, an available or newly constructed hypothesis (rule/law) (2) is introduced, by means of which the case (3) is abducted. For Peirce, the function of abductive inference is forming an explanatory hypothesis. It is the only logical operation which introduces a new idea. Deduction proves that something must be; induction shows that something actually is operative; abduction merely suggests that something may be. It enables us to bridge the traditional gap between the arts and the sciences because it can be used as a model both of explanation and of understanding. Presented as an inverted modus ponens, the abductive schema looks like this:

Rule: If it rains, the road gets wet. (If A, then B.)
Result: The road is wet. (B.)
Case: Hence, it has (perhaps) rained. (A, adopted as a hypothesis.)
As can be seen, this form of reasoning runs from consequent to antecedent, from effects to causes, a fallacy in (traditional) logic (a fallacia consequentis). The causal formulation of the explanation of the phenomenon that the road is wet would run as follows: the road is wet because it is raining (because it has rained). The rain is the cause inferred from the effect (the consequent). Abduction, i.e., inferring causes from effects, represents an explanatory principle which, though logically invalid, may still be confirmed inductively. The (potential) confirmation of such hypotheses (logical inferences or theories) says nothing about reality (ontology) in the sense of a representation or mapping, but only that the hypotheses are functioning. Therefore, the structures of our logic(s) or our theories do not mirror the structures of things, nor are they derived (deduced) from them. With the primary act of formulating such a hypothesis, a paradigm, a rule, a method of measurement is provisionally laid down, by means of which we then "measure" or "compare" what we conceive of as "nature." Only in this second act do the notions of "correct/incorrect," fitting/non-fitting, rule-conforming/rule-breaking, etc., arise.
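The inverted modus ponens can be sketched as a search from effects back to candidate causes. This is a hypothetical toy (the rule base and function name are invented for illustration); its point is that abduction yields a *set of hypotheses*, not a proven conclusion, which is exactly why it is a fallacy in deductive logic yet the step that introduces a new idea.

```python
# A small rule base of "if cause, then effect" conditionals.
RULES = {
    "rain": "wet road",             # if it rains, the road is wet
    "street cleaning": "wet road",  # a rival explanation of the same effect
    "sunshine": "dry road",
}

def abduce(effect, rules):
    """Inverted modus ponens: given an observed effect (the consequent),
    return every antecedent that would explain it. Affirming the
    consequent -- logically invalid, but hypothesis-generating."""
    return sorted(cause for cause, result in rules.items() if result == effect)

print(abduce("wet road", RULES))   # ['rain', 'street cleaning']
```

That the call returns two rival hypotheses illustrates the text's point: confirmation must come afterwards, inductively, by testing which hypothesis continues to function.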
The actual constructive act consists in the a priori specification of a "measuring method" by the cognizing subject (the scientific community), because the world cannot determine for us directly what kind of measuring method or paradigm we must use. And just as the measuring method must be specified before measuring can take place, so induction must be directed, governed, and controlled by a hypothesis that has been constructed abductively beforehand. The world, or nature, thus functions as a selection mechanism, a constraint, which determines whether our hypotheses fit or fail. In the latter case, the scientific community is forced to change its theories, paradigms, or conceptual systems in such a way that they allow the derivation of viable hypotheses that enlarge our knowledge of the world in the constructivist sense. Viable hypotheses, therefore, do not admit of positive statements about how the world really is, but only negative ones, to the effect that other hypotheses do not work. We can, on the other hand, also draw the conclusion that, for a reality different from ours, we would need other "standards," other systems of categories, etc., to be able to orient ourselves in it, because the ones we have would not work. If an abductive inference establishes itself in the scientific community as a new paradigm (a new explanation of a certain phenomenon, like Kepler's hypothesis of the elliptical orbit of the planet Mars), then the logic of the corresponding conceptual system has changed.
DIAGNOSTIC INFERENCES ARE ABDUCTIVE, TOO
An example of a medical diagnosis
Every diagnostic statement by a medical expert functions as an abductive inference.
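A toy sketch of why diagnosis is abductive, under invented data (the disease/symptom table and the function `diagnose` are hypothetical, not medical knowledge): the physician observes effects (symptoms) and abduces candidate causes (diseases), which further tests must then confirm or eliminate.

```python
# Hypothetical disease -> symptom associations (illustrative only).
KNOWLEDGE = {
    "influenza":   {"fever", "cough", "fatigue"},
    "common cold": {"cough", "sneezing"},
    "allergy":     {"sneezing", "itchy eyes"},
}

def diagnose(symptoms, knowledge):
    """Abduce candidate diseases: rank causes by how many of the
    observed effects they would explain. A hypothesis list, not a verdict."""
    scores = {d: len(s & symptoms) for d, s in knowledge.items()}
    best = max(scores.values())
    return sorted(d for d, n in scores.items() if n == best and n > 0)

print(diagnose({"fever", "cough"}, KNOWLEDGE))   # ['influenza']
print(diagnose({"sneezing"}, KNOWLEDGE))         # ['allergy', 'common cold']
```

The second call returns two equally good hypotheses, which is the typical diagnostic situation: abduction narrows the field, and inductive testing decides between the survivors.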

09302008, 03:55 PM #60
Abductive inference is a curious notion. It is often characterised as inference to the best explanation; however, the best explanation is always true. Therefore, an abductive inference would be one where the conclusion is true regardless of the premises, a strange kind of inference indeed.
A criticism that can be brought against everything ought not to be brought against anything.