Alright, we have some people participating now.
Usually a mathematical discovery is made but only finds a practical use fifty or a hundred years later.
An interesting exception to this is non-scalar networks, for the mathematics was invented in 1997 but a use was found one thousand years earlier by the Dominican Order.
However, the Dominicans did use non-scalar networks successfully to hunt heretics, even without the mathematics.
An interesting relationship between logic and the empirical.
I had the impression that it was the empirical work that tends to show up first.
In my research we do an even mix of theorizing, computation, and experimentation. I spend as much time working out the physics of a computational model as I do testing that model in the lab via real-world experiments.
Cool. That's the type of work I would really be interested in doing. As I said in my first post, I am not proposing a dichotomy. I know both approaches are needed.
These computations and theories we use to formulate our ideas are all based on first principles, but unfortunately we do not have the technology to take a completely ab initio approach; for some of the things I do, we'd be looking at an entire lifetime of processing time to get just a few of my calculations done.
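To give a feel for the scale, here is a rough back-of-envelope sketch in Python. The O(n^7) exponent is typical of high-accuracy quantum-chemistry methods, and the one-hour reference job is invented purely for illustration, not measured:

```python
# Back-of-envelope sketch (not a benchmark) of why fully ab initio
# calculations blow up: assume a method whose cost grows as O(n**7)
# in the number of basis functions. The reference job (100 basis
# functions in 1 hour) is a made-up illustration.

def runtime_years(n_basis, ref_basis=100, ref_hours=1.0, exponent=7):
    """Extrapolate runtime from a small reference calculation."""
    hours = ref_hours * (n_basis / ref_basis) ** exponent
    return hours / (24 * 365)

for n in (100, 500, 1000, 2000):
    print(f"{n:5d} basis functions -> ~{runtime_years(n):,.1f} years")
```

Even a modestly larger system pushes the estimate from hours into centuries, which is the kind of wall I mean.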
I don't know the particulars of your case. However, I have found that many science departments don't take advantage of the engineering resources available at their school.
My B.Sc. degrees are in Computer Engineering and Discrete Mathematics, and that makes me generally skeptical when people say that calculations would take a lifetime. I have regularly seen algorithm improvements that give 1000x the performance. In addition, parallel algorithms can often achieve super-linear scaling with the number of processors if you can properly make use of the caches available in the machine.
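To make the cache point concrete, here is a minimal sketch using NumPy. The array size is arbitrary and the exact timings will vary by machine, but the contiguous traversal is typically several times faster than the strided one for identical arithmetic:

```python
# Minimal sketch of why memory layout matters: identical O(n^2) work,
# but contiguous (cache-friendly) reads vs. strided reads. On a
# C-order array, rows are contiguous in memory and columns are not.

import time
import numpy as np

a = np.random.rand(4000, 4000)  # ~128 MB, C-order (row-major)

def sum_by_rows(m):
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()   # contiguous reads: full cache lines used
    return total

def sum_by_cols(m):
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()   # strided reads: mostly cache misses
    return total

for f in (sum_by_rows, sum_by_cols):
    t0 = time.perf_counter()
    f(a)
    print(f"{f.__name__}: {time.perf_counter() - t0:.3f} s")
```

The super-linear parallel case is the same idea taken further: partition the data so that each processor's working set fits in its own cache.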
Another resource that I think is not explored enough is the use of custom hardware to do calculations. Again, if the science departments were to engage with the computer engineers co-located with them, there may be enough impetus for the engineers to develop custom HW (using FPGAs most likely, but maybe even custom silicon).
I also see some potential for mixing analog computation with the digital computation to get simulation results. These are of course engineering projects, and the science departments would likely need to move forward using the technology currently available. However, launching engineering projects may make previously intractable calculations reachable much sooner than just waiting for the next generation of microprocessors.
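As a purely hypothetical first step toward such a project, one would usually check in software whether the kernel survives the reduced-precision fixed-point arithmetic an FPGA datapath would use, before writing any HDL. A sketch, where the Q16.16 format is an assumption chosen for illustration:

```python
# Hypothetical pre-FPGA check: model a fixed-point datapath in software
# and compare it against floating point. Q16.16 is illustrative only.

import random

FRAC_BITS = 16  # Q16.16 fixed point

def to_fixed(x):
    return round(x * (1 << FRAC_BITS))

def to_float(q):
    return q / (1 << FRAC_BITS)

def fixed_dot(xs, ys):
    """Dot product as a fixed-point multiply-accumulate chain would compute it."""
    acc = 0
    for a, b in zip(xs, ys):
        acc += (to_fixed(a) * to_fixed(b)) >> FRAC_BITS  # rescale product to Q16.16
    return to_float(acc)

xs = [random.uniform(-1, 1) for _ in range(1000)]
ys = [random.uniform(-1, 1) for _ in range(1000)]
exact = sum(a * b for a, b in zip(xs, ys))
print(f"float: {exact:.6f}  fixed: {fixed_dot(xs, ys):.6f}")
```

If the accumulated error is acceptable for the science, that is a cheap signal that custom hardware is worth pursuing.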
So we rely heavily on empirical knowledge for some aspects of our research since we have no other way to verify the validity of our assumptions.
I figured as much. Ultimately, that is the only way to check initial assumptions.
So it really isn't useful for a scientist to do anything but verify that there is a cause-and-effect relationship in everything. There honestly isn't any room for philosophy as a standalone pursuit in our work.
Could you elaborate on what you mean by this?
Something can be very logical and still lead you to the wrong conclusion if you don't carefully analyze the cause and effect at each step. If the logic is based on incorrect assumptions, then you will get the wrong answer every time (quite logically, I might add).
Certainly, I think empirical testing is indispensable. What I believe is that it may be worthwhile to revisit and correct the assumptions once they are proven wrong.
So in the complex sciences we have to test the validity of our assumptions through empirical methods (when the variables are too complex for ab initio computational methods). Sometimes, believe it or not, even though our assumptions are based on first principles and we follow airtight logic, we don't get what we expect when we run the experiments. Usually, when we look at the individual system we are testing, we find one variable that makes an assumption based on a first principle wrong. Not because the theory is wrong, but because when you have over a thousand variables it is sometimes difficult to apply first principles to the system as a whole in any useful way.
This confuses me. It seems like if you get incorrect results, either your assumptions are wrong or the calculations/deductions are (even if it is due to some variable being incorrectly handled or rounding error). In fact, that is a logical consequence... essentially a proof by contradiction.
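In symbols, with A for the assumptions, D for the deductions, and P for the prediction:

```latex
% If assumptions and deductions jointly entail prediction P,
% and experiment shows not-P, then something in A or D must fail.
\[
\bigl( (A \land D) \rightarrow P \bigr) \land \lnot P
\;\vdash\; \lnot A \lor \lnot D
\]
```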
An example: typically you would expect functional group A to react very quickly with functional group B, but in the system we are using, despite what was predicted from first principles, no reaction occurs in real life. So a crystal structure is taken of the starting material, NMRs are taken of the reaction intermediates, and from those we find that the molecule adopts an unexpected geometry in the transition state, which makes any reaction between group A and group B impossible.
It seems to me, here, you either have a mystery pointing to missing/unknown first principles (why does the molecule adopt an unexpected geometry in the transition state?), or an improper input of ALL the relevant known first principles (the principles that lead to the unexpected geometry were missing from the original model).
We'd be out millions of dollars if we'd based an entire research project on the assumption that those two groups would react, when they clearly do not (even though theoretically they should when taken out of the context of that specific system). This is why cause and effect are so important to most areas of science.
Again, I believe empiricism is indispensable and, of course, you have to do what is practical, in terms of grants, etc. But it seems like, in the case mentioned, unexplored avenues of research are staring you in the face.
We had a Nobel laureate speak at our school, and he said that what is essential is tracking down the unexplained. If your model says one thing should happen and experiment shows another thing happening, that is something unexplained.
I suppose the unexpected geometry is a partial explanation. But, if computation from first principles misses that the geometry in question should occur, that, by definition, means the computation was wrong. By logic, it also means that there was an incorrect "axiom" or an incorrect "deduction."
I don't think logic, as humans use it, can take into account nearly enough variables to be truly accurate, nor can it account for all unknown and possible variables. Even when it is accurate, it can never produce precise enough predictions, and it simply cannot take in the full scope of what is occurring.
Forever is a long time. Can we really say we'll never take into account the full scope of things?
Even accepting that to be true, correcting our theories when they disagree with experiment has led to a great many new discoveries and predictions.