
Empiricism vs. Logic

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
I am not proposing a dichotomy.

Nevertheless, there does seem to be some hostility from people who pride themselves on empiricism towards the value of logical argument (from axioms to theorems).

I want to discuss this. I believe this is the correct forum. The philosophy forum would not invite as many empiricists.

People seem to believe that the heart of science is in its empiricism. I think the empirical approach is important. But I see a lot of people dismiss the logical-philosophical aspects of science due to the "lack of empiricism."

If you are one of these people, can you explain your position?

Consider the success of geometry (in all its forms, not just Euclidean). What makes it so successful?

Consider the failure of empirical approaches to things like the stock market. What makes them so unsuccessful? Can the approaches taken by the Santa Fe Institute help in this regard?
 

BlueScreen

Fail 2.0
Joined
Nov 8, 2008
Messages
2,668
MBTI Type
YMCA
Axioms and theorems are perfectly okay to use if you understand what they are and what they mean. Some idea of the real world is required for this. If you had science without experiments, then the logic would live in a fantasy world. If you had science without logic, then we'd probably be in the Stone Age. I'm not more for either. The balance depends on the context.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
I agree with this. A balance is needed.

However, it just seems to me that for many scientists, the approach of using axioms and theorems is simply dismissed as not being useful because what they are studying is "too complicated."

I am, in particular, looking at the approaches used in life and social sciences. Why are statistical approaches so heavily favored over deductive approaches?

We have the computing power now to do some amazingly complex simulations based on first principles. Examples include studying the formation of the solar system from the myriad of particles in the vicinity, and the simulation of circuits with hundreds of millions of transistors. There are tricks used here to make things tractable, but there may be tricks available in other fields as well. Computational chemistry has also been a great success, though I know less about this.
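To make concrete what "simulation from first principles" means here, a toy sketch (illustrative only, nothing like a production code; the softening constant and units are arbitrary choices of mine):

    # Toy N-body integrator: nothing empirical goes in beyond Newton's
    # law of gravitation, the masses, and the initial conditions.
    import numpy as np

    G = 6.674e-11  # gravitational constant, SI units

    def accelerations(pos, mass):
        # Pairwise inverse-square forces, softened slightly to avoid singularities.
        acc = np.zeros_like(pos)
        for i in range(len(mass)):
            r = pos - pos[i]                              # vectors from body i
            d3 = (np.sum(r**2, axis=1) + 1e-9) ** 1.5
            d3[i] = np.inf                                # no self-force
            acc[i] = G * np.sum(mass[:, None] * r / d3[:, None], axis=0)
        return acc

    def leapfrog(pos, vel, mass, dt, steps):
        # Kick-drift-kick scheme: symplectic, so energy stays bounded over long runs.
        acc = accelerations(pos, mass)
        for _ in range(steps):
            vel += 0.5 * dt * acc
            pos += dt * vel
            acc = accelerations(pos, mass)
            vel += 0.5 * dt * acc
        return pos, vel

Everything downstream (orbits, resonances, accretion) is deduced from those inputs; the "tricks" are in approximating the force sum, which is where real codes spend their ingenuity.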

Yet, most of what I see is computing power thrown at data-mining like activities. I would like some insight into knowing why that is.

Certainly, our "first principles" could be way off, but if the assumptions are kept small number, there is less of a chance that we are correct by simply having enough knobs to tweak.
 

Mole

Permabanned
Joined
Mar 20, 2008
Messages
20,284
Usually a mathematical discovery is made but only finds a practical use fifty or a hundred years later.

An interesting exception to this is non-scalar networks: the mathematics was invented in 1997, but a use was found one thousand years earlier by the Dominican Order.

However, the Dominicans did use non-scalar networks successfully to hunt heretics, even without the mathematics.

An interesting relationship between logic and the empirical.
 

spin-1/2-nuclei

New member
Joined
May 2, 2010
Messages
381
MBTI Type
INTJ
I agree with this. A balance is needed.

However, it just seems to me that for many scientists, the approach of using axioms and theorems is simply dismissed as not being useful because what they are studying is "too complicated."

In my research we do an even mix of theorizing, computation, and experimentation. I spend as much time working out the physics of a computational model as I do testing that model in the lab via real world experiments. These computations and theories we use to formulate our ideas are all based on first principles, but unfortunately we do not have the technology to take a completely ab initio approach; for some of the things I do, we'd be looking at an entire lifetime of processing time to get just a few of my calculations done. So we rely heavily on empirical knowledge for some aspects of our research since we have no other way to verify the validity of our assumptions.

So it really isn't useful for a scientist to do anything but verify that there is a cause-and-effect relationship with everything. There honestly isn't any room for philosophy as a stand-alone in our work. Something can be very logical and still lead you to the wrong conclusion without carefully analyzing the cause and effect at each step. If the logic is based on incorrect assumptions then you will get the wrong answer every time (quite logically, I might add). So in the complex sciences we have to test the validity of our assumptions through empirical methods (when the variables are too complex for ab initio computational methods). And sometimes, believe it or not, even though our assumptions are based on first principles and we follow airtight logic, we don't get what we expect when we run the experiments; usually, when we look at the individual system we are testing, we find one variable that causes something about an assumption based on a first principle to be wrong. Not because the theory is wrong, but because when you have over a thousand variables it is sometimes difficult to apply first principles to the system as a whole in any kind of useful way.

An example might be that typically you would expect functional group A to react very quickly with functional group B, only in the system we are using, for some reason, despite what was predicted from first principles, no reaction occurs in real life. So a crystal structure is taken of the starting material, NMRs are taken of the reaction intermediates, and from those we find that the molecule adopts an unexpected geometry in the transition state which makes any reaction between group A and group B impossible. We'd be out millions of dollars if we'd based an entire research project on the assumption that those two groups would react, when they clearly do not (even though theoretically they should when taken out of the context of that specific system), and this is why cause and effect are so important to most areas of science.
 

The_Liquid_Laser

Glowy Goopy Goodness
Joined
Jul 11, 2007
Messages
3,376
MBTI Type
ENTP
I think it's important to understand the strengths of logic vs. the strengths of data and then use them both appropriately. The advantages of logic are that logic is very resource-cheap, its conclusions are absolutely certain, and it's easier to remain impartial when using logic. The advantages of data are that it is tied to the real world, and it allows us to understand the details of a phenomenon instead of just the general principles.

So I find that it's better to rely as much on logic early on as you possibly can. Think about the problem so that you can get to the heart of the question that you really want to answer. You can use equations, graphs, Venn diagrams, etc... to really understand the problem without having to spend much in the way of resources. Then construct a hypothesis, so that you can clearly spell out the possible conclusions before you see the data. I.e. "If the data shows A then my hypothesis is correct, but if the data shows B, then I must conclude the opposite...." This sort of thing prevents confirmation bias from happening.
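A minimal sketch of that discipline in code (the coin example and its numbers are invented, purely for illustration): the decision rule is frozen before the data arrive, so the data cannot be reinterpreted to fit.

    from math import comb

    def binom_tail(k, n, p=0.5):
        # P(X >= k) for X ~ Binomial(n, p), computed exactly
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Pre-registered rule: with n = 100 flips, conclude "biased" only if the
    # chance of at least this many heads from a fair coin is below 5%.
    n, alpha = 100, 0.05
    heads = 61                          # observed only AFTER the rule is fixed
    print("biased" if binom_tail(heads, n) < alpha else "no evidence of bias")
    # here: "biased", since the tail probability is about 0.018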

Then you can collect data, perform experiments, etc.... This sort of thing tests your hypothesis and fills out the details that you might not have thought of to begin with. But really if you look at the approach I'm advocating, it relies on using logic as much as possible. That might be my personal bias, but I think that approach gives a way to reach dependable conclusions while making the most of resources.
 

erm

Permabanned
Joined
Jun 19, 2007
Messages
1,652
MBTI Type
INFP
Enneagram
5
I don't think logic, as humans use it, can take into account nearly enough variables to be truly accurate, nor can it account for all unknown and possible variables. It can never quite get precise enough predictions even if it is accurate and simply cannot take into account the full scope of what is occurring.

As such, it seems sensible to rely on empiricism as an anchor. So logic can both inspire and directly form frameworks from which the next experiment is performed, which it does with great efficiency.

As people have already said, this balance seems to work. One without the other would quickly become much less efficient in making new discoveries.

However, it just seems to me that for many scientists, the approach of using axioms and theorems is simply dismissed as not being useful because what they are studying is "too complicated."

I think that varies a lot depending on which field one is referring to.

Certainly, our "first principles" could be way off, but if the assumptions are kept small number, there is less of a chance that we are correct by simply having enough knobs to tweak.

I think that's precisely why Empiricism is so heavily relied upon in most fields, since even 'first principles' can be wrong.

Interestingly enough, I think psychology does this the most. Possibly because the odds of any principles, first or no, being wrong are very high in relation to something as complex and full of bias as human behaviour. Thus you see psychologists relying very heavily on empirical research, and only with great hesitation extrapolating past the raw data.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
Alright, we have some people participating now.

Usually a mathematical discovery is made but only finds a practical use fifty or a hundred years later.

An interesting exception to this is non-scalar networks: the mathematics was invented in 1997, but a use was found one thousand years earlier by the Dominican Order.

However, the Dominicans did use non-scalar networks successfully to hunt heretics, even without the mathematics.

An interesting relationship between logic and the empirical.

I had the impression that it was the empirical work that tends to show up first.

In my research we do an even mix of theorizing, computation, and experimentation. I spend as much time working out the physics of a computational model as I do testing that model in the lab via real world experiments.

Cool. That's the type of work I would really be interested in doing. As I said in my first post, I am not proposing a dichotomy. I know both approaches are needed.

These computations and theories we use to formulate our ideas are all based on first principles, but unfortunately we do not have the technology to take a completely ab initio approach; for some of the things I do, we'd be looking at an entire lifetime of processing time to get just a few of my calculations done.

I don't know the particulars of your case. However, I have found that many science departments don't take advantage of the engineering resources available at their school.

My B.Sc.'s are in Computer Engineering and Discrete Mathematics, and that makes me generally skeptical when people say that calculations would take a lifetime. I have regularly seen algorithm improvements that give 1000x the performance. In addition, parallel algorithms can often achieve super-linear scaling with the number of processors if you can properly make use of the caches available in the machine.
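As one concrete, simplified illustration of where such factors come from: in particle simulations, replacing the naive all-pairs neighbour search with a cell list turns an O(n^2) step into roughly O(n), before any parallelism is applied at all. A sketch, assuming 3-D positions in a NumPy array (my own toy code, not any particular package):

    import numpy as np
    from collections import defaultdict

    def neighbours_naive(pos, cutoff):
        # O(n^2): tests every pair of particles
        n = len(pos)
        return [(i, j) for i in range(n) for j in range(i + 1, n)
                if np.sum((pos[i] - pos[j]) ** 2) < cutoff ** 2]

    def neighbours_cell_list(pos, cutoff):
        # ~O(n): bin particles into cubic cells of side `cutoff`, then test
        # only pairs in the same or adjacent cells. Same answer, far cheaper.
        cells = defaultdict(list)
        for idx, p in enumerate(pos):
            cells[tuple((p // cutoff).astype(int))].append(idx)
        pairs = []
        for cell, members in cells.items():
            for offset in np.ndindex(3, 3, 3):        # this cell + its 26 neighbours
                other = tuple(c + o - 1 for c, o in zip(cell, offset))
                for i in members:
                    for j in cells.get(other, ()):
                        if i < j and np.sum((pos[i] - pos[j]) ** 2) < cutoff ** 2:
                            pairs.append((i, j))
        return pairs

Both functions return the same pairs; on a million particles the second is the difference between tractable and hopeless.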

Another resource that I think is not explored enough is the use of custom hardware to do calculations. Again, if the science departments were to engage with the computer engineers co-located with them, there may be enough impetus for the engineers to develop custom HW (using FPGAs most likely, but maybe even custom silicon).

I also see some potential for mixing analog computation along with the digital computation to get simulation results. These are of course engineering projects, and the science departments would likely need to move forward using the technology currently available. However, launching engineering projects may allow a return to previously intractable calculations much sooner than just waiting for the next generation of microprocessors.

So we rely heavily on empirical knowledge for some aspects of our research since we have no other way to verify the validity of our assumptions.

I figured as much. Ultimately, that is the only way to check initial assumptions.

So it really isn't useful for a scientist to do anything but verify that there is a cause-and-effect relationship with everything. There honestly isn't any room for philosophy as a stand-alone in our work.

Could you elaborate on what you mean by this?

Something can be very logical and still lead you to the wrong conclusion without carefully analyzing the cause and effect at each step. If the logic is based on incorrect assumptions then you will get the wrong answer every time (quite logically, I might add).

Certainly, I think empirical testing is indispensable. What I believe is that it may be worthwhile to revisit and correct the assumptions once they are proven wrong.

So in the complex sciences we have to test the validity of our assumptions through empirical methods (when the variables are too complex for ab initio computational methods). And sometimes, believe it or not, even though our assumptions are based on first principles and we follow airtight logic, we don't get what we expect when we run the experiments; usually, when we look at the individual system we are testing, we find one variable that causes something about an assumption based on a first principle to be wrong. Not because the theory is wrong, but because when you have over a thousand variables it is sometimes difficult to apply first principles to the system as a whole in any kind of useful way.

This confuses me. It seems like if you get incorrect results, either your assumptions are wrong or the calculations/deductions are (even if it is due to some variable being incorrectly handled or a rounding error). In fact, that is a logical consequence...essentially a proof by contradiction.

An example might be that typically you would expect functional group A to react very quickly with functional group B, only in the system we are using for some reason despite what was predicted from first principles no reaction occurs in real life, so a crystal structure is taken of the starting material and NMRs are taken of the reaction intermediates and from that we find that the molecule adopts an unexpected geometry in the transition state which makes any reaction between group A and group B impossible.

It seems to me, here, you either have a mystery pointing to missing/unknown first principles (why does the molecule adopt an unexpected geometry in the transition state?), or an improper input of ALL the relevant known first principles (the principles that lead to the unexpected geometry were missing from the original model).

We'd be out millions of dollars if we'd based an entire research project on the assumption that those two groups would react, when they clearly do not (even though theoretically they should when taken out of the context of that specific system) - and this is why cause and effect are so important to most areas of science.

Again, I believe empiricism is indispensable and, of course, you have to do what is practical, in terms of grants, etc. But it seems like, in the case mentioned, unexplored avenues of research are staring you in the face.

We had a Nobel laureate speak at our school, and he said that what is essential is tracking down what is unexplained. If your model says one thing should happen and experiment shows another thing happening, that is something unexplained.

I suppose the unexpected geometry is a partial explanation. But, if computation from first principles misses that the geometry in question should occur, that, by definition, means the computation was wrong. By logic, it also means that there was an incorrect "axiom" or an incorrect "deduction."

I don't think logic, as humans use it, can take into account nearly enough variables to be truly accurate, nor can it account for all unknown and possible variables. It can never quite get precise enough predictions even if it is accurate and simply cannot take into account the full scope of what is occurring.

Forever is a long time. Can we really say we'll never take into account the full scope of things?

Even accepting that to be true, correcting our theories when they disagree with experiment has led to a great many new discoveries and predictions.
 

erm

Permabanned
Joined
Jun 19, 2007
Messages
1,652
MBTI Type
INFP
Enneagram
5
Forever is a long time. Can we really say we'll never take into account the full scope of things?

Of course it might eventually happen, but I thought far-off future possibilities were irrelevant to the why of what is happening now.

Incompleteness theorems, Bell's theorem and similar may lend weight to the idea we will never take the full scope into account. Really though, the mere fact that there appears to be no way to know whether all factors have been taken into account suggests that one will always have to favour Empiricism over Rationalism to some degree.

Even accepting that to be true, correcting our theories when they disagree with experiment has led to a great many new discoveries and predictions.

That is the foundation of my point. It's the empirical evidence which acts as the foundation of the new theories. If the theory and the evidence disagree, the evidence is always considered the correct one, provided all the basic parameters of reliability have been met. In the end, scientific hypotheses, theses, theorems and such are all trying to predict the empirical, since the empirical is the reality, not just a representation (even if philosophical Idealism were true).

So whilst a balance of both works best, given human nature, empiricism remains the foundation.
 

spin-1/2-nuclei

New member
Joined
May 2, 2010
Messages
381
MBTI Type
INTJ
I don't know the particulars of your case. However, I have found that many science departments don't take advantage of the engineering resources available at their school.

Our lab collaborates with mathematicians and engineers all of the time. They often develop our custom software, instrumentation, etc. Regarding your comments on the error examples I gave, the issue really is not with the assumptions, because in a vacuum the assumptions would be correct, but when applied to the system mitigating factors produce unexpected results. Sometimes you will come across the only system ever encountered in the entire world where functional groups A and B don't react.

Imagine you have a ball of yarn: you can make assumptions about the physical and chemical properties of the yarn based on first principles, but if I ask you to stand at the top of a building, drop that ball of yarn, and predict exactly what the structure of the ball will be when it hits the ground, this becomes increasingly difficult. Especially when the weather, building, height, ground surface, and so on keep changing. So while the ball of yarn remains the same, and all of the first-principle assumptions related to a ball of yarn in a vacuum remain the same, the second you place that ball of yarn into the system the variables change, and it sometimes isn't even possible to identify all of the variables without empirical investigation.

This is why philosophy doesn't have much of a stand-alone place in science. Theorizing is extremely important, but without the elusive theory of everything we still need empirical evidence to back up cause-and-effect associations. You must have cause and effect in science because it is essential to be sure that one thing causes another when building upon those concepts.

Ab initio calculations give us the best predictions but they are time consuming and therefore not always practical so semiempirical approaches are often necessary. I've provided a link you might find useful below.

"How long do you expect it to take? If the world were perfect, you would tell your PC (voice input of course) to give you the exact solution to the Schrödinger equation and go on with your life. However, often ab initio calculations would be so time consuming that it would take a decade to do a single calculation, if you even had a machine with enough memory and disk space." - Introduction to Computational Chemistry

This article might also be useful - Computational chemistry - Wikipedia, the free encyclopedia
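A back-of-the-envelope way to see why (just the standard textbook argument, nothing specific to any one lab): the exact N-electron wavefunction \Psi(r_1, \ldots, r_N) lives in 3N dimensions, so representing it on even a coarse grid with G points per axis requires storing

    G^{3N} \text{ values}; \qquad G = 10,\ N = 20 \;\Rightarrow\; 10^{60}

numbers, which no machine can hold. Hartree-Fock, DFT, and the semiempirical methods are, in different ways, all strategies for refusing to pay that exponential price.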
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
That might be my personal bias, but I think that approach gives a way to reach dependable conclusions while making the most of resources.

I have a similar mindset. It is perhaps our mathematical training :D


Incompleteness theorems, Bell's theorem and similar may lend weight to the idea we will never take the full scope into account.

I think people invoke these theorems without understanding what they actually say. But a discussion of these things would be a bit off-topic (although interesting).

Really though, the mere fact that there appears to be no way to know whether all factors have been taken into account suggests that one will always have to favour Empiricism over Rationalism to some degree.

I think the number of factors being so large is an important point. But I think throwing our hands up and saying "intractable" as a knee-jerk reaction is a bit too much.

That is the foundation of my point. It's the empirical evidence which acts as the foundation of the new theories. If the theory and the evidence disagree, the evidence is always considered the correct one, provided all the basic parameters of reliability have been met. In the end, scientific hypotheses, theses, theorems and such are all trying to predict the empirical, since the empirical is the reality, not just a representation (even if philosophical Idealism were true).

So whilst a balance of both works best, given human nature, empiricism remains the foundation.

What is a "foundational" is very much based on how you look at things. Even to make a simple empirical observation, you are inherently making a myriad of assumptions. Delving into these assumptions (like what space and time actually are) has yielded some great discoveries.

Our lab collaborates with mathematicians and engineers all of the time. They often develop our custom software, instrumentation, etc.

No custom hardware though?

Regarding your comments on the error examples I gave, the issue really is not with the assumptions, because in a vacuum the assumptions would be correct, but when applied to the system mitigating factors produce unexpected results. Sometimes you will come across the only system ever encountered in the entire world where functional groups A and B don't react.

Imagine you have a ball of yarn: you can make assumptions about the physical and chemical properties of the yarn based on first principles, but if I ask you to stand at the top of a building, drop that ball of yarn, and predict exactly what the structure of the ball will be when it hits the ground, this becomes increasingly difficult. Especially when the weather, building, height, ground surface, and so on keep changing. So while the ball of yarn remains the same, and all of the first-principle assumptions related to a ball of yarn in a vacuum remain the same, the second you place that ball of yarn into the system the variables change, and it sometimes isn't even possible to identify all of the variables without empirical investigation.

I get what you are saying, and it may be a matter of semantics, but in the case of the ball of yarn--getting the weather parameters wrong IS getting a set of assumptions wrong. Just as having numerical errors build up IS getting the calculation wrong.
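A tiny, runnable instance of that second point (plain Python, nothing domain-specific): rounding error is not noise sitting on top of the calculation, it IS part of the calculation.

    # 0.1 has no exact binary representation, so the error compounds on every add.
    total = 0.0
    for _ in range(10 ** 6):
        total += 0.1
    print(total)        # prints something like 100000.00000133288, not 100000.0

    # An error-compensating summation recovers the exact answer:
    import math
    print(math.fsum(0.1 for _ in range(10 ** 6)))    # 100000.0

Same "first principles", different handling of the deductive steps, different answer.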

This is why philosophy doesn't have much of a stand-alone place in science. Theorizing is extremely important, but without the elusive theory of everything we still need empirical evidence to back up cause-and-effect associations. You must have cause and effect in science because it is essential to be sure that one thing causes another when building upon those concepts.

I am unclear on what you mean by "stand alone place." It seems to me that nothing can have a stand alone place in science.

Also, why does cause and effect being important (and I agree with you there) reduce the importance of philosophy in science?

To me, the rethinking of what is meant by space and time in relativity, and of what is meant by measurement in quantum physics, is extremely philosophical...and yet it still sheds light on cause and effect.

Ab initio calculations give us the best predictions but they are time consuming and therefore not always practical so semiempirical approaches are often necessary. I've provided a link you might find useful below.

"How long do you expect it to take? If the world were perfect, you would tell your PC (voice input of course) to give you the exact solution to the Schrödinger equation and go on with your life. However, often ab initio calculations would be so time consuming that it would take a decade to do a single calculation, if you even had a machine with enough memory and disk space." - Introduction to Computational Chemistry

This article might also be useful - Computational chemistry - Wikipedia, the free encyclopedia

That was interesting reading. Thanks.

Perhaps, I am naive about what is possible. But you yourself said that ab initio calculations tend to give the best results. I think it is important to reflect on why that is, and to know what it is you are giving up as you go further away from these sort of calculations.

A couple of statements in the link you gave stood out to me.

"Although most chemists avoid the true paper & pencil type of theoretical chemistry, keep in mind that this is what many Nobel prizes have been awarded for."

"What approximations are being made? Which are significant? This is how you avoid looking like a complete fool, when you successfully perform a calculation that is complete garbage. An example would be trying to find out about vibrational motions that are very anharmonic, when the calculation uses a harmonic oscillator approximation."

Frankly, the "How to do a computational research project" section in the link read as being fairly philosophical.
 

erm

Permabanned
Joined
Jun 19, 2007
Messages
1,652
MBTI Type
INFP
Enneagram
5
What is a "foundational" is very much based on how you look at things. Even to make a simple empirical observation, you are inherently making a myriad of assumptions. Delving into these assumptions (like what space and time actually are) has yielded some great discoveries.

Actually I think, in raw form, empiricism makes no assumptions. It naturally gets put together with assumptions, yes. I would classify those assumptions as anti-empiricist, since it is later empirical research that usually proves them wrong. Let us not delve into a semantic disagreement here though:

If you mean the empiricist assumptions behind science, then I would agree. There are many assumptions science makes in its practice that could be flawed, and many in the past which have been flawed. Locality, consistent space and time, and similar assumptions have come under fire recently, for example. They have come under fire mostly due to empiricism as it is about to be defined, however.

If you mean the fundamental form of empiricism, simple observation, nothing more, then I disagree, as pure empiricism is straight observation. If one sees an image, then one acknowledges the perception of an image. To take it any further is to move past this form of empiricism. This form is the closest we come to reality; it's a simple argument to suggest that it is reality. Any rationalisation, such as logical inference, which moves beyond that, is inherently more flawed than the original observation.

An example of this difference:

Raw empiricism: I appear to drop the stone, it appears to make circular ripples in the water.

Scientific empiricism: I dropped the stone, it made circular ripples in the water. It will do so again. (There are probably many more assumptions in there as well.)

To address your other point, I think scientists favour their form of empiricism because it is closer to raw empiricism than anything else they wield.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
What I am saying is that just as there are a myriad of variables that need to be accounted for when formulating a theory, there are a myriad of assumptions made when formulating an experiment or making an observation.

It is not possible to make even a simple observation without the whole weight of your own world view and the assumptions behind it coloring what you observe.

Think of the Michelson–Morley experiment: an incredibly well-designed experiment, but they assumed the existence of an aether. So when they got the results they got, they could not make sense of the results. It took a philosopher-scientist to finally shed light on what was at play.

In other words, there is no such thing as "raw empiricism."
 

erm

Permabanned
Joined
Jun 19, 2007
Messages
1,652
MBTI Type
INFP
Enneagram
5
I agree with all but:

In other words, there is no such thing as "raw empiricism."

That's a hasty claim.

Raw empiricism does happen; it's just extrapolated on top with assumptions, like you say. I'd be inclined to agree if you simply meant a human can't isolate raw empiricism.

It's that raw empiricism that has been the foundation of science since its early days as natural philosophy. It hasn't left that foundation, for good reason. Referring back to that raw data is the closest it gets to truth.

The further those steps are from the raw data, the more likely human error has stepped in. Yes, deduction is truth-preserving, but each seemingly deductive step is an area that could fall victim to human error. It can also be argued that deduction originates from raw empiricism, and that deduction rarely takes place over induction in reality.

Humans may naturally be separated from raw empiricism, but that does not change the fact that the fewer steps they are from it, the fewer assumptions they have made, and so the more likely it is that they are correct.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
It's that raw empiricism that has been the foundation of science since its early days as natural philosophy. It hasn't left that foundation, for good reason. Referring back to that raw data is the closest it gets to truth.

Let's leave it at that then.

The further those steps are from the raw data, the more likely human error has stepped in. Yes, deduction is truth-preserving, but each seemingly deductive step is an area that could fall victim to human error. It can also be argued that deduction originates from raw empiricism, and that deduction rarely takes place over induction in reality.

Logic and mathematics are the most confident knowledge we have. They are pre-empirical. They are the "language" in which empirical results must be expressed. You measure this or that to be a certain value (mathematical statements). This or that did or did not happen (logical propositions). You can't have human empiricism without logic and math; there would be no way to write down or talk about what you observed in precise terms without them.

I also believe that good "induction" is based on statistical techniques, which are in turn based on probability models, which are in turn based on set theory. In other words, accurate induction (through statistical means) is actually a deductive process in disguise.
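To make that concrete with the standard two-line derivation (nothing novel here): Bayes' rule, the engine of statistical induction, falls straight out of the set-theoretic definition of conditional probability,

    P(H \mid D) = \frac{P(H \cap D)}{P(D)} = \frac{P(D \mid H)\,P(H)}{P(D)},

so the "update on evidence" step is pure deduction; the genuinely inductive risk hides in the choice of the prior P(H) and the model P(D \mid H).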

Humans may naturally be separated from raw empiricism, but that does not change the fact that the fewer steps they are from it, the fewer assumptions they have made, and so the more likely it is that they are correct.

I disagree. Geometry is quite abstracted from the real world, as are equations like those of Quantum Electrodynamics. Nevertheless, it is this that we compare experiment to, and the equations are what are used in designing experiments in the first place.

They are many steps removed from raw data. But these are much more reliable than the "trend lines" and "supports and resistances" (taken directly off of the data from stock charts) that stock traders use.
 

erm

Permabanned
Joined
Jun 19, 2007
Messages
1,652
MBTI Type
INFP
Enneagram
5
Logic and mathematics are the most confident knowledge we have. They are pre-empirical. They are the "language" in which empirical results must be expressed. You measure this or that to be a certain value (mathematical statements). This or that did or did not happen (logical propositions). You can't have human empiricism without logic and math; there would be no way to write down or talk about what you observed in precise terms without them.

See now we'd have a debate that probably moves outside the scope of this thread. As I would argue that logic and mathematics are pre-rational. That it is only empirical observations that confirm and create logic and mathematics as they are.

It's a tough debate to enter, with little in the way of intellectual rewards (I find). As I don't think there's any certainty to be had on either side.

I also believe that good "induction" is based on statistical techniques, which are in turn based on probability models, which are in turn based on set theory. In other words, accurate induction (through statistical means) is actually a deductive process in disguise.

I agree that deduction is happening "underneath" induction. However, there are always assumptions behind inductive reasoning.

So dealing in pure probability may seem to eliminate assumptions, but in reality one can never get a truly accurate probability, let alone escape the inherent inaccuracy of truth values lower than 1.

I disagree. Geometry is quite abstracted from the real world, as are equations like those of Quantum Electrodynamics. Nevertheless, it is this that we compare experiment to, and the equations are what are used in designing experiments in the first place.

I don't see how anything can be abstracted from the "real" world. Geometry, I would say, is a series of patterns humans have found. It is very much grounded in the "real" world in this sense.

I tend to see the idea that abstract concepts have any sort of existence beyond the "real" world as the product of an illusion. Not to say it isn't true, merely that there is no evidence for it, so the rational position is one of ignorance on the matter.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
See now we'd have a debate that probably moves outside the scope of this thread. As I would argue that logic and mathematics are pre-rational. That it is only empirical observations that confirm and create logic and mathematics as they are.

It's a tough debate to enter, with little in the way of intellectual rewards (I find). As I don't think there's any certainty to be had on either side.

You should read Where Mathematics Comes From. It makes a compelling case that Mathematics basically comes from how we are wired as human beings, and through analogy.

If you are stating that analogies to real-world situations are what lead to mathematics, I would agree. But the analogies, and analogies of analogies, and so on, that make up mathematics are quite far removed from the raw data. Nevertheless, these analogies many steps away from the raw data are more reliable than the lines you can draw directly on stock charts.


I agree that deduction is happening "underneath" induction. However, there are always assumptions behind inductive reasoning.

So dealing in pure probability may seem to eliminate assumptions, but in reality one can never get a truly accurate probability, let alone escape the inherent inaccuracy of truth values lower than 1.

We're in agreement here. I thought in the previous post you were advocating induction over deduction.

I don't see how anything can be abstracted from the "real" world. Geometry, I would say, is a series of patterns humans have found. It is very much grounded in the "real" world.

It is certainly more abstract than directly drawing lines on raw data.

I tend to see the idea that abstract concepts have any sort of existence beyond the "real" world as the product of an illusion. Not to say it isn't true, merely that there is no evidence for it, so the rational position is one of ignorance on the matter.

Well, yeah. This discussion can go deep down a rabbit-hole. How do we even know a real world exists? We really can never have proof one way or another. Idealism vs. materialism vs. dualism, etc.
 

erm

Permabanned
Joined
Jun 19, 2007
Messages
1,652
MBTI Type
INFP
Enneagram
5
You should read Where Mathematics Comes From. It makes a compelling case that Mathematics basically comes from how we are wired as human beings, and through analogy.

If you are stating that analogies to real-world situations are what lead to mathematics, I would agree. But the analogies, and analogies of analogies, and so on, that make up mathematics are quite far removed from the raw data. Nevertheless, these analogies many steps away from the raw data are more reliable than the lines you can draw directly on stock charts.

I could give reading lists of people who have argued that our own brain wiring is itself the result of empirical observation.

This highlights why I think this is a fruitless path. As it ultimately comes down to a basic truth, such as A=A, being confirmed both rationally (logically) and empirically. What we debate about is a real chicken or egg dilemma, where which came first is at least very hard to distinguish, if not impossible (unlike in the case of a chicken or an egg).

It's mirrored in arguing whether logic came first, or the universe. Which one created the other, or did a third party make both?

I was hoping to sidetrack that path by saying empiricism, in raw form, accounts for all variables, whereas rationalism does not (because it depends on humans). Like you said though, such empiricism is a long shot away from what humans are actually capable of. It ultimately leads me to suspect that there is not as much difference between Empiricism and Rationalism as we like to think. One could propose raw rationalism, much like I have proposed raw empiricism, for example (which may be what you are doing, I don't know).
 

spin-1/2-nuclei

New member
Joined
May 2, 2010
Messages
381
MBTI Type
INTJ
"Although most chemists avoid the true paper & pencil type of theoretical chemistry, keep in mind that this is what many Nobel prizes have been awarded for."

"What approximations are being made? Which are significant? This is how you avoid looking like a complete fool, when you successfully perform a calculation that is complete garbage. An example would be trying to find out about vibrational motions that are very anharmonic, when the calculation uses a harmonic oscillator approximation."

Frankly, the "How to do a computational research project" section in the link read as being fairly philosophical.

The custom hardware is covered under instrumentation; a lot of what we do requires custom instrumentation for measuring our experiments.

My lab isn't comprised of "most chemists"; in fact many people in my lab are theoretical physicists, and we have quite a few chemical engineers. We are not strictly a physics or chemistry lab; we have a very large research group, many postdocs, and multiple collaborations, not just at this university but with many universities. It isn't uncommon for me or one of my lab mates to be at another university's physics, math, or chemistry department for months at a time working on new software, instrumentation, chemistry or whatever; most of us have dual advisors in two different departments. There are many areas of science that are extremely interdisciplinary now, so this is not that uncommon.

What you are confusing are unknown variables and assumptions. It isn't possible to know every variable when you attempt these things outside of the vacuum. When we place drugs, etc., in the human body, unexpected things often occur, not because our assumptions are wrong but because these drugs do not exist in a static environment.

You can't know what everyone's combination of health problems, diet, environment, use of personal hygiene products, weight, exercise habits, illegal drug use, etc. will be. It isn't possible to theorize or philosophize every possible system a drug will encounter. You can't think your way into knowing every possible unintended dietary interaction, or every possible environment that a person might live in, or every disease and possible combination of diseases; the processing power you would need to do this just for one individual would be incomprehensible....
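The arithmetic behind "incomprehensible" is easy to state (a deliberately conservative toy count of my own): even if every patient factor were a simple yes/no switch, k factors give

    2^k \text{ combinations}; \qquad 2^{40} \approx 1.1 \times 10^{12},

so a mere 40 binary factors already outnumber any feasible set of trials, and real factors (dose, diet, genetics) are nowhere near binary.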

So, while logic and first principles can tell us that a certain drug treats cancer in the vacuum or even on average in vivo, we can't know that this drug will be useless if a person has diabetes and drinks grapefruit juice in the morning, but only if they're an avid runner as well.

I can't know before I design a polymer that molecular motion will be stunted if I crosslink it with X and place it in an environment where there is a significant amount of humidity and condensation due to an improperly maintained heating/cooling system, whereby hydrogen bonding occurs whenever the humidity rises above 50%, but only if the temperature is also above 27C, and only if there is a 60% or less mixture of the crosslinking material.

I can't know that an imaging dye will be quenched, or that interference will occur via some other random medication or enzyme in the body, if a certain patient also suffers from disease X and is taking prescription drug Y at Z concentrations, whereas when everybody else is using the dye there are no observed problems with the photophysics or biomolecular functions.

This kind of information comes from manufacturing tests in the case of materials, clinical trials, etc. So while all of the assumptions that were based on theorizing and first principles turned out to be correct, and the material, drug, or imaging dye works in a vacuum, you end up with certain systems in which these products do not work, not because the assumptions about them are wrong but because it is impossible to define all of the possible permutations in all of the variables, or even to define all of the variables in every possible system.

When thinking of ab initio calculations, since I use them all of the time, and when compared to my semiempirical projects... this is the best way I can describe it. My one and only solely ab initio project has been mine since I started my PhD and will be some other physics or theory student's problem when I leave here, because I will not have enough time to finish the calculations before I graduate. I have had many, many, many semiempirical projects that have already gone on to publication and real-world application, whilst my ab initio project is still just an idea and a collection of theoretical data points. So for me, what is being lost when moving away from solely ab initio based projects is nothing in comparison to the progress, understanding, and real-world use that is gained from going with the more practical semiempirical approach whenever the project permits it.

I should also mention that it is typically the people that rely solely on theory without any experimentation to back it up that end up looking like fools. They place far too much confidence in their ability to accurately define all variables in every possible system that will be encountered, and those of us who realize our ability to do that is limited and instead test our theories via experimentation end up avoiding embarrassing retractions, loss of money, and the unenviable position of being responsible for one's ego holding back scientific progress.

I should also note that even ab initio projects can be proven wrong, inadequate, or incomplete when placed into unintended systems. The only difference is that they now get to spend another 20 or 30 years trying to revisit their predictions, only to have it all foiled by yet another exception to the rule on down the line. Our first principles describe best what takes place in a static environment; when placed into real-world use, a lot of the theory breaks down, not because it is wrong, but because other unintended interactions (also easily explained by current first-principle theories) interfere.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
I could give reading lists of people who have argued that our own brain wiring is itself the result of empirical observation.

This highlights why I think this is a fruitless path. As it ultimately comes down to a basic truth, such as A=A, being confirmed both rationally (logically) and empirically. What we debate about is a real chicken or egg dilemma, where which came first is at least very hard to distinguish, if not impossible (unlike in the case of a chicken or an egg).

It's mirrored in arguing whether logic came first, or the universe. Which one created the other, or did a third party make both?

I was hoping to sidetrack that path by saying empiricism, in raw form, accounts for all variables, whereas rationalism does not (because it depends on humans). Like you said though, such empiricism is a long shot away from what humans are actually capable of. It ultimately leads me to suspect that there is not as much difference between Empiricism and Rationalism as we like to think. One could propose raw rationalism, much like I have proposed raw empiricism, for example (which may be what you are doing, I don't know).

I think we have come to a point of mutual understanding, then.

I was not proposing a "raw rationalism" at all. I was simply proposing an alternate view of things.

What you are confusing are unknown variables and assumptions. It isn't possible to know every variable when you attempt these things outside of the vacuum. When we place drugs, etc., in the human body, unexpected things often occur, not because our assumptions are wrong but because these drugs do not exist in a static environment.

I think, at this point, it is a matter of semantics. Trust me when I say I understand what you are saying.

What I am saying is that from a strict logic (as in formal logic) standpoint, what variables you leave as unknown (even if you leave them at some statistically accurate value) are "starting points" (call them what you want) which are wrong for the particular situation for which they are wrong. This is almost a tautology.

In other words, you have not proven the laws of formal logic wrong by finding an example where a model fails. What you have done is proven that the model fails. Perhaps I am picking on a semantic point, but I hope you understand what I mean.

I'll take your word on the relative intractability of ab initio projects in your field. I did not intend to imply that anyone is a fool; I was just quoting the part about assumptions, which happened to contain that language.

I still wonder, though, if there aren't other techniques being left unexplored that would allow previously intractable problems to be tackled. For instance, the techniques that come from complexity and chaos theory.
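For what it's worth, the core phenomenon those fields formalize fits in a few lines (the logistic map below is a stand-in, purely for illustration, for any strongly coupled system): two states agreeing to ten decimal places disagree completely within about fifty steps, and the contribution of chaos theory is to quantify that rate (the Lyapunov exponent) rather than just declare the problem hopeless.

    # Sensitive dependence on initial conditions in the logistic map x -> 4x(1 - x).
    x, y = 0.3, 0.3 + 1e-10           # differ only in the 10th decimal place
    for step in range(1, 61):
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        if step % 10 == 0:
            print(step, abs(x - y))   # the gap roughly doubles per step (Lyapunov ~ ln 2)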
 