I thought you were in a physics field, where you are effectively modelling near-absolute outcomes (despite random variances in the physics that affect the objects, the physics itself is pretty much the end-all of everything). People are far removed from that kind of determinism, if you will.
I am an engineer, I make use of physics, but I'm not a physicist. I am a consumer of science, not a producer (for the most part).
I suppose the case for people having free will is much more sound than for particles. Also, even if free will didn't exist, it is obvious that people are much more complicated systems than man-made systems.
The exact correlations are maintained within the dataset, but psychology deals in pools of traits. It is critical to see where the pool is coming from to understand the results.
Consider a case: 10,000 people, of whom 1% test positive for ADD (100 people). The 10,000-person group is split 50/50 J/P.
You take the 100 ADD patients and give them the MBTI, and 100% of them test J. It is not accurate to say that Js are likely to be ADD (ie: there are 5,000 Js, but only 2% of them are ADD), while it is quite accurate to say that ADDs are Js.
Within that dataset, people say ADD -> J and J -> ADD... in reality, ADD -> J is very strong while J -> ADD is very weak.
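The asymmetry can be sketched in a few lines of Python, using the made-up numbers from the example above (10,000 people, 1% ADD, 50/50 J/P):

```python
# Hypothetical numbers from the ADD/MBTI example (illustrative only).
total = 10_000          # people tested
add = 100               # 1% test positive for ADD
js = total // 2         # 50/50 J/P split -> 5,000 Js
add_and_j = add         # all 100 ADD patients also test J

p_j_given_add = add_and_j / add    # P(J | ADD)
p_add_given_j = add_and_j / js     # P(ADD | J)

print(f"P(J | ADD) = {p_j_given_add:.0%}")   # -> 100%
print(f"P(ADD | J) = {p_add_given_j:.0%}")   # -> 2%
```

Same joint count, wildly different conditional probabilities depending on which way you divide.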
Of course you are right - the ADDs are all contained in the set of Js and there is a correlation... so it is predictive, if very weakly so. Problem is, people see numbers like IQ being correlated to high income when it is actually higher income being related to IQ. It's simply a barrier of entry. High IQ sure helps but it isn't all that predictive.
I shorthand that into predictive and not predictive, although both are truly predictive - it's just a matter of strength.
I think I understand. It's like having a test for a rare disease that is 99% accurate, meaning 99% of the people who have the disease test positive, while 99% of the people who don't have the disease test negative. But if the disease has an incidence rate of 1 in 1 million, then only 1 in 10,000 (roughly) of the people who test positive will have the disease.
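That's Bayes' rule in action. A quick sketch with the numbers from the example (assuming 99% sensitivity, 99% specificity, and a 1-in-1,000,000 incidence):

```python
# Base-rate sketch: a 99%-sensitive, 99%-specific test for a disease
# with a 1-in-1,000,000 incidence rate.
sensitivity = 0.99          # P(positive | disease)
specificity = 0.99          # P(negative | no disease)
prevalence = 1 / 1_000_000  # incidence rate

# Total probability of testing positive: true positives + false positives.
p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' rule: P(disease | positive).
p_disease_given_pos = sensitivity * prevalence / p_pos

print(f"P(disease | positive) = 1 in {1 / p_disease_given_pos:,.0f}")
```

The false positives from the 999,999 healthy people swamp the one true positive, so a positive result means roughly a 1-in-10,000 chance of actually having the disease.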
For example, it is very rare that upper managers have IQs below 110. However, the majority of 110 IQs do not become upper managers. People, however, assume that the two are equal. In this case the explanation is about barriers of entry, similar in academics and such.
Yes, technically it is a bit fuzzy, but I'm not a professional doing research. People misinterpret the results by not looking at the fundamental biases in the selection.
Correlations are correlations, and reliable predictions are reliable predictions. If the research is done with enough care, I have no qualms with that.
But what I am seriously skeptical of is the innateness of intelligence. I know studies have been done to control for "environment", but I find the interpretation of "environment" to be too narrowly defined. I think the work done on "mental set" will reveal the more powerful environmental factors (and perhaps explain the Flynn Effect as well). I believe nutrition is one environmental variable that does influence IQ, but I haven't checked on that for a while.
But beyond that, how do researchers control for "will", motivation, and proper coaching? I know that research on high performance and excellence (K. Anders Ericsson, and others) has shown "deliberate practice" to be a key determining variable.
Not to say there is no such thing as innate intelligence, but I think it is overplayed.
If I understand this correctly, translating it to IQ tests... you are saying: take the WAIS (for example), create a new test, give it to 1,000-odd people, check to see if it is g-loaded, and if not sufficiently so, write a new test and do it again? Eventually the random test-giving will generate a random result that emulates a sufficiently loaded g score?
Not exactly. I was referring to how the g-factor is actually called out. As far as I know, what is called "g" is a form of grouping of scores on various IQ tests. In other words, it is a "factor" generated from IQ tests to begin with. Testing for g-loading in tests then smacks of circular reasoning, and you could have used countless datasets to get your original "g" to begin with.
Every time I try digging into the origins of "g" and how it was originally constructed, I've come up empty. But since this is one of your interests, perhaps you can point me to some good sources.
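For what it's worth, the construction usually described is factor analysis over a battery of positively correlated subtests. A rough sketch of the idea, with an entirely made-up correlation matrix and the leading eigenvector standing in for a proper factor extraction:

```python
# Illustrative sketch (not real data): "g" as the first factor of a
# battery of positively correlated subtest scores, approximated here
# by the leading eigenvector of their correlation matrix.
import numpy as np

# Hypothetical correlations among four subtests. All positive: the
# "positive manifold" that factor analysis summarizes as g.
R = np.array([
    [1.0, 0.6, 0.5, 0.4],
    [0.6, 1.0, 0.5, 0.4],
    [0.5, 0.5, 1.0, 0.4],
    [0.4, 0.4, 0.4, 1.0],
])

eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
g_loading = np.abs(eigvecs[:, -1])         # loadings on the leading factor
g_variance = eigvals[-1] / eigvals.sum()   # share of variance "g" explains

print("loadings:", np.round(g_loading, 2))
print(f"variance explained: {g_variance:.0%}")
```

The circularity worry is visible here: "g" is whatever common factor falls out of whichever tests you fed in, so a test's g-loading depends on the battery used to define g in the first place.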
So are marks. SATs. GREs. So are sports, running... everything is. Skills are important; it's just another form of test. Not everyone can score 1600 (is it 2400 now?) on the SATs, and that determines your entrance into school. And not surprisingly, those who score lower on SATs tend not to do well in school.
Incidentally, I know many people who spent months and months studying for the GREs (even skipping classes and assignments) who raised their initial scores from the 1800 range on practice tests to 2400 (and achieved that on the real test). This was back when the analytical section was more "puzzle" based and had a possible score of 800.
On a general note, you may find this paper interesting reading.