Heh, statistically, crimes investigated with profiling are far more likely to be solved. The argument over resource allocation (i.e., is it worth spending those resources to solve a small number of crimes?) isn't the same thing. "It does work, but at what cost?" is different from "Does it work?"
Cost cannot be so wryly dismissed. The ends don't justify the means. I certainly believe statistical methods work. I hope I haven't given the impression that I believe they do not. The question is how well can we trust what is currently being done? (Which is certainly constrained by cost.)
Correlations imply relationships. Why is it relevant if it is cause-effect? The relationship still exists.
Correlations could just be coincidence. There are coincidences, you know. Also, there could be a common reason for seeing both things that correlate (like an unknown bias in the sample chosen, or in the testing procedure used). You can say the "relationship" exists in such cases, but I think you are stretching the word "relationship".
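To make the common-cause point concrete, here is a minimal sketch (Python, all numbers made up): two variables that never influence each other still come out strongly correlated, because both are driven by a hidden factor such as an unnoticed sampling bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden common cause (e.g., an unnoticed bias in how the sample was chosen).
hidden = rng.normal(size=n)

# x and y each depend on the hidden factor, but not on each other.
x = hidden + rng.normal(scale=0.5, size=n)
y = hidden + rng.normal(scale=0.5, size=n)

# Strong correlation (~0.8) despite there being no direct link at all.
print(np.corrcoef(x, y)[0, 1])
```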
Cause-Effect relationships are fundamental. You also implicitly assume cause-effect relationships in some of your examples.
Absolutely, if that is possible. If we could perfectly measure someone in a lab, that'd be great, but it doesn't work that way.
I think it does work that way (not in a lab, but directly) more often than not. In the hiring example, that would mean doing background checks, having future coworkers meet and greet candidates to see how well they get along, testing their critical thinking skills directly and in the context of the job they would be doing, and having experts examine their thinking process in very hard (but on-the-job) situations to see if they would perform well.
However, I was distilling the argument, not suggesting use. If, for example, the IQ of Ns is dramatically higher than that of Ss, there is a relationship between the two. If the sub-trait "Open to Ideas" is correlated with IQ, then there is a relationship between the two. And vice versa.
See, in this case, if you wanted people with high IQs, why not test for IQ, instead of checking for N vs. S or Openness-to-Experience?
The goal is understanding how things connect together... The whole point is to define relationships, not just to use them as you describe above.
We can find all the biases in our testing procedures we want. But we really need to determine whether what we see is an artifact of testing or something real. In the case of the N vs. S, Openness-to-Experience thing, could the fact that you are giving written test material have anything to do with why the "relationship" exists? Knowing cause-effect relationships is the vital goal, not mapping coincidences or biases in our own testing procedures/sampling methods.
It defines a relationship between them. 75% of the time, adding (z) will lead to a change in (x).
Here you are assuming a cause-effect relationship: (z) causes a change in (x). To establish this properly, you would need a good control group, a simple way to avoid selection bias, etc. Can you point me to a study that shows this sort of care? Say, in your following example?
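To be concrete about what "this sort of care" would look like, here is a minimal sketch (Python, made-up numbers, hypothetical setup): randomly assign subjects to receive (z) or not, so hidden traits average out across the two groups, then compare the means.

```python
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(loc=100, scale=15, size=200)   # (x) for 200 subjects

# Random assignment is what removes selection bias: each subject's hidden
# traits are equally likely to land in either group.
order = rng.permutation(200)
treated, control = baseline[order[:100]], baseline[order[100:]]

true_effect = 5.0                                    # hypothetical effect of adding (z)
treated = treated + true_effect + rng.normal(scale=3, size=100)
control = control + rng.normal(scale=3, size=100)

# With randomization, the difference in means estimates the causal effect.
print(treated.mean() - control.mean())
```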
For example, verbal IQ scores are correlated to attention given to children at a young age. In most cases, the attention has a relationship to the child developing verbal skills. As a parent, you have to decide if you will give your child attention or not; statistically, you can't be sure if you'll have an influence, but it's likely you will.
Again, how do you know here that it isn't true that intelligent parents pay more attention to kids and also have kids with high IQs? Was there really a controlled experiment done, where, say, the IQs of both parents were controlled and the only thing that varied was the attention given to kids by their parents? If so, please direct me to it. What factor analysis seems to be is a DOE (Design-of-Experiments) style analysis done to get the hypothesis before real rigorous testing can be done. But I do not think the rigorous follow-up is possible within most cost constraints.
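To illustrate why I keep asking for the parents' IQs to be controlled, here is a sketch on synthetic data (all numbers invented): attention and child verbal IQ correlate, yet a regression that includes parental IQ shows attention doing nothing.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

parent_iq = rng.normal(100, 15, n)
# Invented model: brighter parents give more attention AND have brighter kids,
# while the attention itself has zero effect.
attention = 0.05 * parent_iq + rng.normal(0, 1, n)
child_verbal_iq = 40 + 0.6 * parent_iq + rng.normal(0, 10, n)

# The naive correlation makes attention look like it "works"...
print(np.corrcoef(attention, child_verbal_iq)[0, 1])   # clearly positive

# ...but controlling for parental IQ drives the attention coefficient to ~0.
X = np.column_stack([np.ones(n), attention, parent_iq])
coef, *_ = np.linalg.lstsq(X, child_verbal_iq, rcond=None)
print(coef[1], coef[2])   # attention ~ 0, parent IQ ~ 0.6
```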
To create an actual situation where this is relevant, you can add a low-influence factor that you have to decide between. Measuring it after isn't helpful at all. And going with your gut doesn't average out any better than the odds would say it does. People are less objective in close situations than they are in normal situations; and even so, in normal situations, people tend to be less accurate than statistics.
I think you are referring to situations where we are measuring impersonal things like the odds in gambling and so on. I agree with you there (even though there are select individuals I would trust for having keen number sense). But I think human beings are much better at figuring out how well someone will do a particular job (especially people who are good at doing that job, and somewhat introspective/philosophical about it) than a test would be. I know there was a time when personality/IQ tests were given during job applications, but I don't know too many companies that do that now. I think they realized it wasn't yielding the results they desired. I know Google is a conspicuous exception (but they were just trying to be incredibly selective, and having any way to automatically weed out candidates is a time saver; some people weed out by length of resume, or by whether it follows the format, etc.).
Just because we live in an era where we can't measure the level of complexity (your version of fundamental) to 100% predict the outcome of every action doesn't mean that it doesn't serve a purpose. Prediction is always fuzzy. Your physics background uses absolutes - ellipses are perfectly defined in theory - but chaos still exists in practice.
Ah, the standard cop-out. You guessed my background wrong. I am not a physicist (but I think you are characterizing the "absolute" nature of their jobs incorrectly as well). Most of an engineer's job is design (with no math/stats involved). We make trade-offs by feel all the time. The idea is to achieve a "balance" in design requirements. It is plenty fuzzy. We do make "figures of merit", etc. But usually designs aimed at maximizing such figures of merit end up blowing up some other facet of the design that makes them unfeasible. That's why there aren't too many computer programs around that auto-design things (other than tiny subsets of the design).
The ellipses I mentioned are confidence ellipses (you can think of them as two-dimensional confidence intervals). There is a 90% confidence ellipse, a 95% confidence ellipse, etc. The 100% confidence interval is never used because it tells us nothing. You will not always be right. The point I was making is that the telemetry data should be trusted more than the static statistical model.
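For what it's worth, here is a sketch of how such an ellipse is computed (Python with NumPy/SciPy, made-up data). Strictly, this draws the ellipse expected to cover 90%/95% of the data points; a confidence ellipse for the mean would use the covariance divided by the sample size.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
data = rng.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 1.0]], size=500)

mean = data.mean(axis=0)
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # ellipse spread and orientation

for level in (0.90, 0.95):
    # In 2 dimensions, the squared Mahalanobis radius follows chi-square(2).
    r2 = chi2.ppf(level, df=2)
    semi_axes = np.sqrt(eigvals * r2)
    print(f"{level:.0%} ellipse semi-axes {semi_axes}, centred at {mean}")
```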
Let me give you an example... a real one from my life, this morning. I have two buses I can take to work every morning. I know that one is faster than the other - I've measured it. I know that one bus is often full and skips my stop, making up for the fact it is faster. I walk out this morning, and I feel that I should take the slower bus. While taking it, the faster bus, not full, passes me.
Now, I felt that way, but I'm not sure which one I should take. So I start timing them. Statistically, I will arrive earlier if I take the faster bus, despite the fact that it might be notably slower sometimes... but my gut was wrong because I mis-measure the extra time spent waiting as longer than it really is. That bias is actually normal. People often act when not acting is the best choice, just as people are inherently unable to understand risk outside of "the middle space".
Just because the situation is chaotic and I might not "always" know the right decision, on average, I can make the decision that will work for me. I can't measure the location of the buses perfectly, so I do the next best thing.
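The "next best thing" is just an expected-value comparison. With made-up numbers standing in for my actual timings, the arithmetic looks like this:

```python
# Made-up numbers standing in for the measured bus times.
fast_ride, slow_ride = 20.0, 28.0   # minutes of riding, once you board
p_full = 0.25                       # chance the fast bus is full and skips the stop
wait_for_next = 12.0                # extra wait if it skips you

expected_fast = fast_ride + p_full * wait_for_next   # 20 + 0.25*12 = 23 minutes
expected_slow = slow_ride                            # 28 minutes

# Despite sometimes being skipped, the fast bus wins on average.
print(f"fast: {expected_fast} min expected, slow: {expected_slow} min")
```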
Is that useless?
What if I was to start recording the date, weather, exact time, day of the week... then perform factor analysis to learn what factors may change the schedule/traffic/ridership? I might find, for example, that during the summer the faster bus is the right choice (because summer vacation means fewer riders), but in winter the slow bus is better (since the fast bus comes from a mountain). I don't need to know the cause - it would show clumps around dates, which I could use blind.
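A sketch of that "use it blind" idea (Python with pandas, hypothetical trip log): record each commute with its conditions, then just read off the lower average per condition, no causal story required.

```python
import pandas as pd

# Hypothetical trip log: one row per commute.
log = pd.DataFrame({
    "season":  ["summer", "summer", "winter", "winter", "winter", "summer"],
    "bus":     ["fast",   "slow",   "fast",   "slow",   "fast",   "slow"],
    "minutes": [21,        28,       35,       29,       38,       27],
})

# Mean travel time per (season, bus); pick whichever column is lower
# for the current season, without knowing why winter slows the fast bus.
print(log.groupby(["season", "bus"])["minutes"].mean().unstack("bus"))
```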
Just because your gut is wrong doesn't mean an expert's gut will be wrong. Have you read Blink: The Power of Thinking Without Thinking by Malcolm Gladwell? And quite frankly, the simple, common-sense timing (maybe you do it a few times) is rather simple and concrete. Judging someone's intelligence is a completely different exercise.
Oh, and good luck with your factor analysis on the bus-picking thing. I'm sure you'll find it worthwhile. Think of the minutes over your lifetime you will have saved over just picking the fast bus all the time, after one of the most rudimentary calculations.
If, statistically, wealth divergence is an accurate predictor of behaviour in a given situation, one can infer a generalised behaviour.
Again, you are inferring a cause-effect relationship when you generalize like this. What if all the people in the town you are setting up shop in are really honest and close-knit, and the founder is well liked by the townspeople? Again, looking at the specific circumstances of the situation you are in will largely make your statistical observations irrelevant.
I understand that this isn't fundamental enough for you, but since we cannot measure fundamentals accurately enough individually - the individual does measure this through their own statistical model and it is generally not accurate - the relationship is well known and is only being measured to see how valid that gut feeling is.
But you can measure fundamentals in this situation. First, there is the background check. Then you could place each person (easily done in an interview) in a situation where they could steal something valuable, seemingly with impunity, and see if he or she does.
re: inculcate ethics, it is shown that mitigating factors include harsh punishments and the % chance of being caught. Conditioning requires consistent negative feedback during the "training period". It is possible to control for, and experiments are done all the time. It's a lot tougher with people, yes, and fuzzier, but this is about factors.
What I am saying is that when it comes to people, it is not about factors, but about the people themselves.
Simply put, you assert that inculcating ethics would have an effect. You cannot measure the fundamentals, but you could measure it by taking many people, splitting them into two groups, then measuring the difference between the training/conditioning and the lack of it. How else can you know how effective something is? Gut instinct? Why would that be better?
If you did this study, it may or may not prove effective. If it doesn't, how do we know it isn't because the particular training method we used was poor? If it does, how do we know that we didn't have sample bias?
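That cuts both ways: a "not effective" result from a small study proves little, too. A rough sample-size check (the standard normal-approximation formula, with all effect numbers invented) shows how many people such a two-group study would actually need:

```python
from scipy.stats import norm

# Invented numbers: outcome spread vs. the training effect we hope to detect.
sigma, delta = 10.0, 2.0
alpha, power = 0.05, 0.80          # 5% false-positive rate, 80% power

z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(power)
n_per_group = 2 * ((z_a + z_b) * sigma / delta) ** 2

# ~392 people per group; a null result from a smaller study is uninformative.
print(round(n_per_group))
```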
What I am saying is that it is much simpler (maybe "gut instinct" was too colloquial a term) to simply have good judges of character judge the character of the people being hired. Note: the judges need to be people who are good at judging character, with a long career of being able to judge character--perhaps seasoned police or FBI interrogators.
Yes, although there are other factors that do matter. Overpaying still tends to follow the same wealth-gap curve although, strangely enough, it does cause people to work harder too!
It makes me work less hard. It's like it doesn't matter what I do, so why put in extra effort?
One should use what is most likely to be predictively accurate. I haven't found personal judgments very good so far - or at least, it has been shown that people always over-estimate their ability to judge things more accurately than statistics, yet consistently underperform them.
I can understand gamblers, etc. having that issue with things that are fairly concrete. But I think people (especially ones experienced at making such judgments from dynamic data) are better at making judgments about things like "intelligence", "personality", "potential for success on the job", etc.
Ah, the race card. And no, because the correlation gap isn't sufficient to apply to the individual, nor is IQ correlated significantly enough to job performance. Now, let's say you have to hire, blind, someone that has to move heavy boxes around. Do you hire a woman?
Ah, "the race card" card. This was simply my attempt at reductio ad absurdum. Of course, you have the option of believing the absurd in order to keep your viewpoint.
Regarding hiring a woman to move heavy boxes... if she is strong enough, yes, I would hire her. Why would I weed out women explicitly? I would simply put on the job description that the candidate will need to move X kg boxes regularly. Then in the interview, I would have her move several boxes, perhaps heavier than are needed on the job. The fact that she is a woman, and the fact that women on average are less strong than men, are irrelevant at this point. That irrelevance is what I am trying to highlight.
In both cases, the correct answer is to hire on what you can measure, to the best of your ability. If I had to hire someone from two distinct groups, blind, without knowing anything other than their group's average IQ, then yes, I would hire from the higher-IQ group. One makes decisions based upon the information one can get.
I have never known anyone to hire "blind". Keep in mind, if you allow yourself to start with a false hypothesis, you can claim nearly anything in conclusion. My very point is that the data I (as a somewhat experienced interviewer) get from personal interaction with a person is a better predictor of future performance on the job than IQ and personality tests.
In hiring situations, however, you have at least two alternatives - concrete grades/previous performance, and the ability to follow up with the individual (interview or otherwise). Those are individual traits.
In my hiring experience, previous grades, test scores and the like have told me very little in comparison with my direct interaction with candidates. In fact, high test scorers have consistently been disappointments (with a couple of exceptions). Again, with the interview, the previous data turns irrelevant.
I think that you mistakenly believe that social sciences attempt to use factor analysis for individual analysis. Psychiatrists are what would be akin to your "fundamental" view - they drug people that exhibit symptoms, then adjust the medications as they get feedback.
Well, psychiatrists work in a similar way to medical doctors, then. I'm not saying that it is the best thing, but really, in their situation, this approach is better: you need to look at the specifics. I'd like to see you fix your car using factor analysis on cars in general instead of direct observation of your specific car.
As far as my mistake about social science trying to make inferences about individuals based on statistical data, I hope I am wrong. But I think "intelligence", "personality", and "performance on the job" are extremely individualized (specific-case) things, and I see social scientists making claims about such things.