
Bull Sh*t Codified

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
I love statistics and its applications, particularly to things like pattern recognition, classification, computer vision, and learning.

However, the following technique is a quintessential example of the saying, "There are lies, damned lies, and statistics".

Factor analysis - Wikipedia, the free encyclopedia

Read the disadvantages section and you'll see why this technique is hopelessly unsatisfactory for studying something as complex as human beings.

It has been said by many that the fundamental problem with psychological testing is that test makers have to use prejudice in making their questions and assigning interpretations to their answers. No amount of statistical manipulation is going to correct this.

There is no such thing as "being unbiased" in any field of study. Even assuming a Gaussian distribution can be a horrible bias (as physicists know quite well). Weibull, Poisson, and uniform distributions are also quite common, as are bimodal distributions (very different from "binomial" distributions). A nuanced understanding of the Central Limit Theorem tells you that the natural place to assume a Gaussian is for sums and averages of independent random variables.
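A quick sketch of that nuance: individual uniform draws look nothing like a Gaussian, but their sums land near the Gaussian the Central Limit Theorem predicts. The sample sizes below are arbitrary choices for illustration.

```python
import random
import statistics

# Illustrative sketch: sums of independent non-Gaussian draws tend toward
# a Gaussian (Central Limit Theorem), even though each draw is uniform.
random.seed(42)

# Each sample is the sum of 30 independent Uniform(0, 1) draws.
sums = [sum(random.random() for _ in range(30)) for _ in range(10_000)]

mean = statistics.mean(sums)    # theory: 30 * 0.5 = 15
stdev = statistics.stdev(sums)  # theory: sqrt(30 / 12) ~ 1.58

print(f"mean ~ {mean:.2f}, stdev ~ {stdev:.2f}")
```

A histogram of `sums` would show the familiar bell shape, even though no single draw is remotely bell-shaped.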

To illustrate the utility of bias, consider the simple situation of measuring defect rates of products from a manufacturing facility. A very good facility may show no defects for as long as we have before defect rates must be published. No rational human being with an ounce of common sense would publish a defect rate of "zero" even though "that is what the data shows". The common procedure is to assume (absolutely necessary) a Weibull distribution with a positive defect rate (and other fitting parameters based on prior "bias") and to keep adjusting the parameters as more products come through, even while the observed defect count stays at zero. This is just one example of a "corner case" where a bias is absolutely necessary to make inferences that are meaningful. The more complex a system, the more corner cases will come up.
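A minimal sketch of the zero-defect idea, using a Gamma-Poisson conjugate update rather than a full Weibull fit, purely for illustration. The prior parameters here are assumptions for the example, not industry values.

```python
# Hedged sketch: estimating a defect rate after observing zero defects.
# The naive point estimate 0/n = 0 is useless for planning; a Bayesian
# update against a prior keeps the estimate strictly positive.

def posterior_rate(defects: int, units: int,
                   prior_shape: float = 0.5, prior_rate: float = 100.0) -> float:
    """Posterior mean defect rate under a Gamma(shape, rate) prior
    on a Poisson defect count (standard conjugate update)."""
    return (prior_shape + defects) / (prior_rate + units)

print(posterior_rate(0, 10_000))  # small, but strictly positive
```

Note how the estimate shrinks as more defect-free units are observed, but never reaches zero; that is the prior "bias" doing exactly the work the paragraph describes.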

There is something fundamentally wrong with correlational studies of systems as complex as even parts of human beings, let alone the more mysterious study of "mind" or "health" (let alone "mental health").

First, a reminder: correlation does not imply a cause-effect relationship. I repeat, correlation does not imply a cause-effect relationship.

The fundamental problem, however, is that correlational studies inherently ignore what is unique about a system, and it is that uniqueness which is vital to system function. This is true even for systems much simpler than human beings.
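The correlation-is-not-causation reminder can be illustrated with a toy simulation: two variables that never influence each other still correlate strongly because both follow a hidden common cause. The noise levels are arbitrary choices for the sketch.

```python
import random

# Toy sketch of "correlation without causation": a and b never interact,
# yet both track the hidden common cause z, so they correlate strongly.
random.seed(0)

z = [random.gauss(0, 1) for _ in range(5_000)]   # hidden common cause
a = [zi + random.gauss(0, 0.5) for zi in z]      # a depends only on z
b = [zi + random.gauss(0, 0.5) for zi in z]      # b depends only on z

def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    return cov / (vx * vy) ** 0.5

print(f"corr(a, b) ~ {corr(a, b):.2f}")  # high, yet neither causes the other
```

Intervening on `a` here would do nothing to `b`, which is exactly the trap of reading cause-effect into a correlation.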
 

Mycroft

The elder Holmes
Joined
Jun 7, 2007
Messages
1,068
MBTI Type
INTP
Enneagram
5w6
Instinctual Variant
so/sp
Read the disadvantages section and you'll see why this technique is hopelessly unsatisfactory for studying something as complex as human beings.

Well. That certainly hasn't been my experience of our species.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
Well. That certainly hasn't been my experience of our species.

Well, I suppose I have a lower threshold for complexity. I think it is near impossible to create a theory that yields reliable and testable predictions about an individual.

We are poor at predicting when someone is about to become a serial killer, and if the FBI profilers who came on during the Northern Virginia shootings are any indication, the speculation of your average Joe was just as good.

Similarly, I think we are horrible at predicting who the best and brightest are. Poincare, Feynman, and Crick are among history's greatest achievers of intellectual work, but their IQs paled in comparison to that of the world's highest IQ holder (an ENTP, I believe), who writes advice columns (not a bad thing, but as far as "intellectual" achievement goes?).

We can certainly find averages, correlations, and tendencies, but for the most part, I don't think they tell us much more than what our guts already do.

There are some notable recent exceptions in behavioral finance and behavioral economics (a redrawing of the boundaries of the social "sciences" for the better, I think; they are starting to ask better questions).

Quite frankly, without some better guesses, no amount of data/psychometrics/statistical manipulation is going to significantly improve our understanding. Certainly not the self-reinforcing world views created by factor analysis or MBTI.
 

Mycroft

The elder Holmes
Joined
Jun 7, 2007
Messages
1,068
MBTI Type
INTP
Enneagram
5w6
Instinctual Variant
so/sp
Your post prompted me to do a search. I'm quite taken aback by the number of people claiming to have IQs in excess of 200.
 

ptgatsby

Well-known member
Joined
Apr 24, 2007
Messages
4,476
MBTI Type
ISTP
Well, I suppose I have a lower threshold for complexity. I think it is near impossible to create a theory that yields reliable and testable predictions about an individual.

How about groups of people?

Your post prompted me to do a search. I'm quite taken aback by the number of people claiming to have IQs in excess of 200.

Online IQ tests are the bane of my existence. I'm very curious about IQ and intelligence in general... but you just can't have a real conversation about it outside of some very narrow fields.

It doesn't help that a real IQ test will run you a couple of hundred bucks at least (the KAIT and the WAIS here were 400-500 and 300-600, as estimated by one of the psychologists who referred me).
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
Your post prompted me to do a search. I'm quite taken aback by the number of people claiming to have IQs in the excess of 200.

Can you expand on that? How many people did you find, and where?

How about groups of people?

For large enough groups measuring simple enough traits, yes. Sometimes it may be useful to measure how much demand there will be for a product, and to make decisions like that.

But as far as understanding the human condition goes, I don't think they will tell us much more than the trends and patterns we already see without these approaches.

Psychometrics on "intelligence", "personality", and other complex things like that, I find futile. I think we will miss too many "corner cases" by taking statistical approaches.

To see how real science is done on complex systems, look at modern biology. The discovery of DNA, its structure, what genes actually express, etc.

For a slightly older example, you could also look at the development of chemistry (from the more macroscopic ideal gas laws, to the periodic table, to P-chem (quantum mechanical modeling and approximations based on it), statistical mechanics implying thermodynamics, etc.).

Certainly probability and statistics were important in the development of these sciences, but they hadn't become crutches used to claim the status of being "scientific". There was a pursuit of truth by whatever means made most sense. Plain and simple.
 

darlets

New member
Joined
Apr 29, 2007
Messages
357
Your post prompted me to do a search. I'm quite taken aback by the number of people claiming to have IQs in excess of 200.

I find it odd too. I could be wrong, but I thought I.Q. results had a mean of 100 and a standard deviation of 15.

An I.Q. of 190 is 6 S.D. away from the mean. That's a really, really small percentage of the population (99.99999980268% fall within 6 S.D., per Standard deviation - Wikipedia, the free encyclopedia).
I think you're getting to the one in a billion level.
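The one-in-a-billion figure can be checked with a short sketch, assuming the conventional mean-100, SD-15 norming mentioned above:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# IQ scores are conventionally normed to mean 100, SD 15.
z = (190 - 100) / 15         # 6 standard deviations above the mean
tail = 1.0 - normal_cdf(z)   # fraction of the population above 190

print(f"z = {z:.0f}, P(IQ > 190) ~ {tail:.2e}")  # roughly 1e-9
```

So under the normal model, an IQ of 190 corresponds to roughly one person in a billion, which is why claims of scores above 200 are hard to take at face value.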
 

Mycroft

The elder Holmes
Joined
Jun 7, 2007
Messages
1,068
MBTI Type
INTP
Enneagram
5w6
Instinctual Variant
so/sp
Can you expand on that? How many people did you find, and where?

I just ran a search for "world's highest IQ" on Google.
 

ptgatsby

Well-known member
Joined
Apr 24, 2007
Messages
4,476
MBTI Type
ISTP
I find it odd too. I could be wrong, but I thought I.Q. results had a mean of 100 and a standard deviation of 15.

An I.Q. of 190 is 6 S.D. away from the mean. That's a really, really small percentage of the population (99.99999980268% fall within 6 S.D., per Standard deviation - Wikipedia, the free encyclopedia).
I think you're getting to the one in a billion level.

One thing to note is that the ability to measure IQ decreases above 150... quite dramatically... so the upper threshold of most IQ systems, which is still fuzzy, is about 160-170.
 

Mycroft

The elder Holmes
Joined
Jun 7, 2007
Messages
1,068
MBTI Type
INTP
Enneagram
5w6
Instinctual Variant
so/sp
One thing to note is that the ability to measure IQ decreases above 150... quite dramatically... so the upper threshold of most IQ systems, which is still fuzzy, is about 160-170.

Makes sense. At that point nobody's smart enough to make questions for people that smart!
 

ptgatsby

Well-known member
Joined
Apr 24, 2007
Messages
4,476
MBTI Type
ISTP
But as far as understanding the human condition goes, I don't think they will tell us much more than the trends and patterns we already see without these approaches.

Fundamentally, why? What approach do you suggest in understanding social sciences? Or should we simply not attempt to look into it?

Psychometrics on "intelligence", "personality", and other complex things like that, I find futile. I think we will miss too many "corner cases" by taking statistical approaches.

Why does that matter, if it is done to help understand how certain traits, measured in large quantities, affect other quantities?

Take criminology, for example. Is it pointless to attempt to derive the main factors that drive crime?

To see how real science is done on complex systems, look at modern biology. The discovery of DNA, its structure, what genes actually express, etc.

For a slightly older example, you could also look at the development of chemistry (from the more macroscopic ideal gas laws, to the periodic table, to P-chem (quantum mechanical modeling and approximations based on it), statistical mechanics implying thermodynamics, etc.).

Certainly probability and statistics were important in the development of these sciences, but they hadn't become crutches used to claim the status of being "scientific". There was a pursuit of truth by whatever means made most sense. Plain and simple.

So suggest a better way of discovering truth...? Use the criminology example - how would you determine the contributions to general crime rates (and types of crime) without doing "factor analysis"? It's one of the most complex things I can think of in the social sciences, so if there is a better way of doing it, I'd be very interested.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
Fundamentally, why? What approach do you suggest in understanding social sciences? Or should we simply not attempt to look into it?

Perhaps I should clarify what I meant by "human condition". We can certainly measure demand for a product, rates of violent crime, number of accidents, etc. They tell us the state of the world as a whole and about the environment people live in (granted, the environment is largely created by humans). But trying to extrapolate from sample statistics to a particular person is foolhardy. You could say I ranked in the 99th percentile in IQ, and perhaps that means something in the aggregate (say, in aggregate the highest 1% of scorers have better job performance and higher average incomes, etc.), but to me personally it is nearly worthless information.

My reason is that the unique aspects of particular humans (the corner cases) dominate the averaging phenomenon. I think we should ask questions that will lead to meaningful answers. Taking already vague notions like "intelligence" and "personality" (which, as framed by psychologists, are traits of an individual) and trying to test them with pre-written tests (which yield incredibly noisy data) just doesn't seem promising to me. Keep in mind that Terman and others initially studied IQ to try to find "geniuses", and this is where IQ has the least relevance.

Why does that matter, if it is done to help understand how certain traits, measured in large quantities, affects other quantities?

Using sample data to make inferences about the general population is fine for concrete things that are countable or measurable in exceedingly straightforward ways. My point is that none of this tells us more about the human condition (as I use the term, the human condition is an individual-to-individual thing) than common sense does. If you could provide a good counter-example, maybe I would change my mind.

Take criminology, for example. Is it pointless to attempt to derive the main factors that drive crime?

This is slightly tangential, but it seems to have been futile so far. What has been discovered by criminologists that you can provide adequate evidence for?

But one thing to keep in mind is that crime takes place as concrete events (so it is countable). You can certainly track crime rates and see how, say, poverty (in terms of income), unemployment, or the size of the teens/twenties sub-population affect crime. I think this is legitimate. But I think it will teach us very little about ourselves or our neighbors as individuals.

So suggest the better way of discovering truth...? Use the criminology example - how would you do "factor analysis" without doing "factor analysis" to see the contributions to general crime rates (and types of crime). It's one of the most complex things I can think of, in terms of social sciences, so if there is a better way of doing it, I'd be very interested.

OK. I am rather encouraged by the development of behavioral finance and behavioral economics (a redrawing of the boundaries of the social "sciences").

Behavioral finance - Wikipedia, the free encyclopedia

This deals with heuristics and biases of human populations (and to me illustrates why a mass of people can be idiotic while the people that compose the mass may actually be quite intelligent).

I think the questions are more modest, and can lead to more reliable results. Perhaps eventually we will get to more fundamental results.

Note that I personally have some of the biases mentioned in the articles; some I don't have, and some are actually reversed from what the average population has. There is nothing fundamentally human we are discovering. We are simply finding tendencies. If we had a mass of people like me instead of our current mass, then the biases they mention would be different.

I hope my point is a little clearer now. While neurobiology finds things about humans (brain chemistry and structure) that are nearly universal, the social sciences simply measure the current states of variables. Nothing fundamental about being human. Perhaps that will change, but psychometrics is not going to help much in that regard. Quite frankly, I think neurobiology is going to beat psychology to revealing what really makes us humans tick (being conscious, having particular feelings, etc.).

Neurobiology - Wikipedia, the free encyclopedia

So for the criminology example:

  • looking at factors of a sample to infer things about the population (we can gain some insight into how environmental factors affect crime) -- legitimate
  • looking at factors of a sample to infer things about an individual (profiling, etc.) -- a waste of time
 
Last edited:

Mycroft

The elder Holmes
Joined
Jun 7, 2007
Messages
1,068
MBTI Type
INTP
Enneagram
5w6
Instinctual Variant
so/sp
...the social sciences simply measure the current states of variables. Nothing fundamental about being human. Perhaps that will change...

That will never change. Determining what, on a spiritual level, makes us human is not within the purview of science. If you want to know why we exist, what the point of it all is, etc., choose a religion and start practicin'.
 

ptgatsby

Well-known member
Joined
Apr 24, 2007
Messages
4,476
MBTI Type
ISTP
  • looking at factors of a sample to infer things about the population (we can gain some insight into how environmental factors affect crime) -- legitimate
  • looking at factors of a sample to infer things about an individual (profiling, etc.) -- a waste of time

This much I can more or less agree on. Mind you, profiling does help in criminology.

I see factor analysis as a valid approach to large number problems. For example, FFM is built upon factor analysis, but it's meant to be used in large numbers to find out how personality traits, even if self diagnosed, influence/are influenced/are related to other things. This differs from something like MBTI, which is more personal.

It really comes down to the concept that if you have 100 (x)s in a room, and 75% of them, statistically, are likely to have (y), the (x) doesn't gain since they don't know if it applies to them or not... however, someone on the outside benefits from knowing that if they pick someone from (x), they are likely to get (y).

And of course, that also applies to factors, such as applying (z) to influence 75% of (x) to be (y), and so forth.

I do dispute the concept that, for example, knowing that poverty increases crime isn't a personal thing. For example, it has been shown that it is relative disparity of wealth that urges people towards theft. That does imply things about the human condition, or at least, how most people react to certain situations.

But at the same time, it's not supportive of saying that one person in particular would act that way.

I think it's a matter of perspective. If you use it at the strategic level, meaning that if you are hiring many people from a poor background in a poor society and you have much wealth that could be taken, it is probable that you will have items stolen from you. The same thing can't be done at the personal level (ie: you can't arrest someone because they are poor and you are rich, just because statistics say they would steal from you).
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,988
That will never change. Determining what, on a spiritual level, makes us human is not within the purview of science. If you want to know why we exist, what the point of it all is, etc., choose a religion and start practicin'.

What I meant by fundamental is not this. Remember, I consider brain structure and chemistry fundamental. I am more asking "how", not "why".

This much I can more or less agree on. Mind you, profiling does help in criminology.

I know people believe it does, but I think the resources spent to pay profilers and study profiles could be better used elsewhere. Like having more police working the beat.

I see factor analysis as a valid approach to large number problems. For example, FFM is built upon factor analysis, but it's meant to be used in large numbers to find out how personality traits, even if self diagnosed, influence/are influenced/are related to other things. This differs from something like MBTI, which is more personal.

See, but what I am saying is that correlational studies are really ill-suited for this. I think the way personality traits influence other personality traits is fundamental to the person. My contention is that what you would be discovering from averages are just averages, just correlations. Correlations do not imply cause-effect relationships.

It really comes down to the concept that if you have 100 (x)s in a room, and 75% of them, statistically, are likely to have (y), the (x) doesn't gain since they don't know if it applies to them or not... however, someone on the outside benefits from knowing that if they pick someone from (x), they are likely to get (y).

But that is the problem, I contend: you would be better off looking for (y) in each person in the room, rather than looking for (x) and trying to infer (y) from it.

And of course, that also applies to factors, such as applying (z) to influence 75% of (x) to be (y), and so forth.

Again, finding a correlation between (z) in (x) and (y) does not imply a cause-effect relationship. You could increase (z) and have no effect whatsoever on (y). It may be that (z) depends on (y), not the other way around.

I do dispute the concept that, for example, knowing that poverty increases crime isn't a personal thing. For example, it has been shown that it is relative disparity of wealth that urges people towards theft. That does imply things about the human condition, or at least, how most people react to certain situations.

But at the same time, it's not supportive of saying that one person in particular would act that way.

That is the problem. What you are discovering is not fundamental. What would happen if you were to strongly inculcate ethics in each of the relatively poor (not a plausible experiment, but that's kind of my point) and then see whether theft remains as high?

I think it's a matter of perspective. If you use it at the strategic level, meaning that if you are hiring many people from a poor background in a poor society and you have much wealth that could be taken, it is probable that you will have items stolen from you. The same thing can't be done at the personal level (ie: you can't arrest someone because they are poor and you are rich, just because statistics say they would steal from you).

This "strategic" level use is something I have issues with. In your example, what if you also pay them well, and make it clear that working well and not stealing would mean a lifetime of increasing wealth? Would they still steal? Experiments like this are time-consuming and difficult to pull off. I think the correlations we find are of tenuous value, even at the "strategic" level. That is why, in this particular case, I think employers should rely more on the judgment and intuition of trusted hiring people, not statistics.

Just curious, would you also encourage companies not to hire black people for jobs that require intelligence because of the correlation between race and intelligence?

Perhaps a more concrete illustration would help (from the world of engineering again, since this is my background).

Let's say we are bringing down a satellite on purpose and need to retrieve it before the equipment on it sinks. Based on statistical models we built, we can compute a confidence ellipse for where the satellite will land. This we call "prior" information.

As the satellite comes down, we get telemetry data, from which we compute new confidence ellipses. One key decision we have to make is how much we believe the telemetry data vs. our initial statistical models (we have to place a weighting between these two things). The key to the decision is knowing how noisy our telemetry data is vs. how good our statistical model is. If we know our telemetry data is really noisy but our models are really reliable, then we would believe our statistical models and only let the telemetry data adjust our ellipse a little. On the other hand (usually the case), if our telemetry data is good and our statistical models are rough guesses at best, the ellipse we calculate at the beginning is nearly irrelevant (we just need it to initialize the algorithm).

I think the social sciences are faced with situations like the second case. The "telemetry" data is really good (i.e. your individual experiences with individuals are very accurate), but your "statistical models" (i.e. your prejudices based on past experiences, no matter how "scientifically" they are built) are quite poor in comparison.
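The weighting described in the satellite example can be sketched as a scalar Bayesian (Kalman-style) update; the numbers below are purely illustrative, not real telemetry values.

```python
# Minimal sketch of variance-weighted fusion: combine a prior estimate
# with a noisy measurement, trusting each in proportion to its precision
# (this is the scalar form of a Kalman update).

def fuse(prior: float, prior_var: float,
         measurement: float, meas_var: float) -> tuple[float, float]:
    """Return the fused estimate and its variance; the noisier source
    gets the smaller weight."""
    gain = prior_var / (prior_var + meas_var)
    estimate = prior + gain * (measurement - prior)
    variance = (1.0 - gain) * prior_var
    return estimate, variance

# Case 1: trustworthy model, noisy telemetry -> stay near the prior.
print(fuse(prior=0.0, prior_var=1.0, measurement=10.0, meas_var=100.0))

# Case 2: rough model, good telemetry -> the measurement dominates.
print(fuse(prior=0.0, prior_var=100.0, measurement=10.0, meas_var=1.0))
```

In the second case the prior mostly just initializes the algorithm, which is exactly the situation the post argues the social sciences are in.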
 

ptgatsby

Well-known member
Joined
Apr 24, 2007
Messages
4,476
MBTI Type
ISTP
I know people believe it does, but I think the resources spent to pay profilers and study profiles could be better used elsewhere. Like having more police working the beat.

Heh, statistically, crimes that use profiling are far more likely to be solved. The argument over resource allocation (ie: is it worth spending those resources to solve a small number of crimes) isn't the same thing. "It does work, but at what cost?" is different than "Does it work?"

See, but what I am saying is that correlational studies are really ill-suited for this. I think the way personality traits influence other personality traits is fundamental to the person. My contention is that what you would be discovering from averages are just averages, just correlations. Correlations do not imply cause-effect relationships.

Correlations imply relationships. Why is it relevant whether it is cause-effect? The relationship still exists.

But that is the problem, I contend: you would be better off looking for (y) in each person in the room, rather than looking for (x) and trying to infer (y) from it.

Absolutely, if that is possible. If we could perfectly measure someone in a lab, that'd be great, but it doesn't work that way.

However, I was distilling the argument, not suggesting a use. If, for example, the IQ of Ns is dramatically higher than that of Ss, there is a relationship between the two. If the sub-trait "Open to ideas" is correlated with IQ, then there is a relationship between the two. And vice versa. The goal is understanding how things connect together... The whole point is to define relationships, not just to use them as you describe above.

Again, finding a correlation between (z) in (x) and (y) does not imply a cause-effect relationship. You could increase (z) and have no effect whatsoever on (y). It may be that (z) depends on (y), not the other way around.

It defines a relationship between them. 75% of the time, adding (z) will lead to (y) in (x).

For example, verbal IQ scores are correlated to attention given to children at a young age. In most cases, the attention has a relationship to the child developing verbal skills. As a parent, you have to decide if you will give your child attention or not; statistically, you can't be sure if you'll have an influence, but it's likely you will.

To create an actual situation where this is relevant, you can add a low-influence factor that you have to decide between. Measuring it afterward isn't helpful at all. And going with your gut doesn't average out any better than the odds say it does. People are less objective in close situations than they are in normal situations; and even in normal situations, people tend to be less accurate than statistics.

Just because we live in an era where we can't measure the level of complexity (your version of fundamental) needed to predict the outcome of every action with 100% accuracy doesn't mean that it doesn't serve a purpose. Prediction is always fuzzy. Your physics background uses absolutes - ellipses are perfectly defined in theory - but chaos still exists in practice.

Let me give you an example... a real one from my life, this morning. I have two buses I can take to work every morning. I know that one is faster than the other - I've measured it. I know that one bus is often full and skips my stop, making up for the fact it is faster. I walk out this morning, and I feel that I should take the slower bus. While taking it, the faster bus, not full, passes me.

Now, I felt that way, but I'm not sure which one I should take. So I start timing them. Statistically, I will arrive earlier if I take the faster bus, despite the fact that it might be notably slower sometimes... but my gut was wrong because I misjudge the extra waiting time as longer than it really is. That bias is actually normal. People often act when not acting is the best choice, just as people are inherently unable to understand risk outside of "the middle space".

Even though the situation is chaotic and I might not "always" make the right decision, on average I can make the decision that will work for me. I can't measure the location of the buses perfectly, so I do the next best thing.
Is that useless?

What if I were to start recording the date, weather, exact time, day of the week... then perform factor analysis to find out which factors change the schedule/traffic/ridership...? I might learn, for example, that during the summer the faster bus is the right choice (because summer vacation means fewer riders), but in winter the slow bus is better (since the fast bus comes from a mountain). I don't need to know the cause - the analysis would show clumps around dates, which I could use blind.

That is the problem. What you are discovering is not fundamental. What would happen if you were to strongly inculcate ethics in each of the relatively poor (not a plausible experiment, but that's kind of my point) and then see whether theft remains as high?

If, statistically, wealth divergence is an accurate predictor of behaviour in a given situation, one can infer a generalised behaviour. I understand that this isn't fundamental enough for you, but we cannot measure fundamentals accurately enough individually - the individual does measure this through their own statistical model, and it is generally not accurate - so the relationship is well known and is only being measured to see how valid that gut feeling is.

Re: inculcating ethics - it has been shown that mitigating factors include harsh punishments and the % chance of being caught. Conditioning requires consistent negative feedback during the "training period". It is possible to control for, and experiments are done all the time. It's a lot tougher with people, yes, and fuzzier, but this is about factors.

Simply put, you assert that inculcating ethics would have an effect. You can't measure the fundamentals, but you could measure the effect by taking many people, splitting them into two groups, and then measuring the difference between the conditioned group and the control. How else can you know how effective something is? Gut instinct? Why would that be better?

This "strategic" level use is something I have issues with. In your example, what if you also pay them well, and make it clear that working well and not stealing would mean a lifetime of increasing wealth? Would they still steal?

Yes, although there are other factors that do matter. Overpaying still tends to follow the same wealth-gap curve, although strangely enough, does cause people to work harder too!

Experiments like this are time-consuming and difficult to pull off. I think the correlations we find are of tenuous value, even at the "strategic" level. That is why, in this particular case, I think employers should rely more on the judgment and intuition of trusted hiring people, not statistics.

One should use whatever is most likely to be predictively accurate. I haven't found personal judgments very good so far; at least, it has been shown that people consistently overestimate their ability to judge things more accurately than statistics, yet consistently underperform them.

Just curious, would you also encourage companies not to hire black people for jobs that require intelligence because of the correlation between race and intelligence?

Ah, the race card. And no, because the correlation gap isn't sufficient to apply to the individual, nor is IQ correlated significantly enough with job performance. Now, let's say you have to hire, blind, someone who has to move heavy boxes around. Do you hire a woman?

In both cases, the correct answer is to hire on what you can measure, to the best of your ability. If I had to hire someone from two distinct groups, blind, without knowing anything other than their group's average IQ, then yes, I would hire from the higher IQ group. One makes the decisions based upon the information one can get.

In hiring situations, however, you have at least two alternatives - concrete grades/previous performance, and the ability to follow up with the individual (interview or otherwise). Those are individual traits.

I think the social sciences are faced with situations like the second case. The "telemetry" data is really good (i.e., your individual experiences with individuals are very accurate), but your "statistical models" (i.e., your prejudices based on past experiences, no matter how "scientifically" they were formed) are quite poor in comparison.

I think that you mistakenly believe that social sciences attempt to use factor analysis for individual analysis. Psychiatrists are what would be akin to your "fundamental" view - they drug people that exhibit symptoms, then adjust the medications as they get feedback.
 

Heh, statistically, crimes that use profiling are far more likely to be solved. The argument over resource allocation (ie: is it worth spending those resources to solve a small number of crimes) isn't the same thing. "It does work, but at what cost?" is different than "Does it work?"

Cost cannot be so glibly dismissed. The ends don't justify the means. I certainly believe statistical methods work; I hope I haven't given the impression that I believe they do not. The question is how well we can trust what is currently being done (which is certainly constrained by cost).

Correlations imply relationships. Why is it relevant if it is cause-effect? The relationship still exists.

Correlations could be mere coincidence; coincidences do happen. There could also be a common reason for seeing both things that correlate (like an unknown bias in the sample chosen, or in the testing procedure used). You can say the "relationship" exists in such cases, but I think you are stretching the word "relationship".

Cause-Effect relationships are fundamental. You also implicitly assume cause-effect relationships in some of your examples.

Absolutely, if that is possible. If we could perfectly measure someone in a lab, that'd be great, but it doesn't work that way.

I think it does work that way (not in a lab, but directly) more often than not. In the hiring example, that would mean doing background checks, having future coworkers meet and greet candidates to see how well they get along, testing their critical thinking skills directly and in the context of the job they would be doing, and having experts examine their thinking process in very hard (but on-the-job) situations to see if they would perform well.

However, I was distilling the argument, not suggesting use. If, for example, the IQ of Ns is dramatically higher than that of Ss, there is a relationship between the two. If the sub-trait "Open to ideas" is correlated to IQ, then there is a relationship between the two. And vice versa.

See, in this case, if you wanted people with high IQs, why not test for IQ, instead of checking for N vs. S or Openness-to-Experience?

The goal is understanding how things connect together... The whole point is to define relationships, not just use it as you describe above.

We can find all the biases in our testing procedures we want. But we really need to determine whether what we see is an artifact of testing or something real. In the case of the N vs. S, Openness-to-Experience thing, could the fact that you are giving written test material have anything to do with why the "relationship" exists? Knowing cause-effect relationships is the vital goal, not mapping coincidences or biases in our own testing procedures/sampling methods.


It defines a relationship between them. 75% of the time, adding (z) will lead to (y) in (x).

Here you are assuming a cause-effect relationship. (z) causes (y) in (x). To establish this properly, you would need a good control group, and a simple way to avoid selection bias, etc. Can you point me to a study that shows this sort of care? Say in your following example?

For example, verbal IQ scores are correlated to attention given to children at a young age. In most cases, the attention has a relationship to the child developing verbal skills. As a parent, you have to decide if you will give your child attention or not; statistically, you can't be sure if you'll have an influence, but it's likely you will.

Again, how do you know here that it isn't simply that intelligent parents pay more attention to their kids and also have kids with high IQs? Was there really a controlled experiment done where, say, the IQs of both parents were controlled and the only thing that varied was the attention given to the kids? If so, please direct me to it. What factor analysis seems to be is a DOE (Design-of-Experiment) style analysis done to generate hypotheses before real rigorous testing can be done. But I do not think the rigorous follow-up is possible within most cost constraints.

To create an actual situation where this is relevant, you can add a low-influence factor that you have to decide between. Measuring it after isn't helpful at all. And going with your gut doesn't average out any better than the odds would say it does. People are less objective in close situations than they are in normal situations; and even so, in normal situations, people tend to be less accurate than statistics.

I think you are referring to situations where we are measuring impersonal things like the odds in gambling and so on. I agree with you there (even though there are select individuals I would trust for having a keen number sense). But I think human beings are much better at figuring out how well someone will do a particular job (especially people who are good at doing that job, and somewhat introspective/philosophical about it) than a test would be. I know there was a time when personality/IQ tests were given during job applications, but I don't know too many companies that do that now. I think they realized it wasn't yielding the results they desired. I know Google is a conspicuous exception (but they were just trying to be incredibly selective, and having any way to automatically weed out candidates is a time saver; some people weed out by length of resume, or by whether it follows the format, etc.).

Just because we live in an era where we can't measure the level of complexity (your version of fundamental) to 100% predict the outcome of every action doesn't mean that it doesn't serve a purpose. Prediction is always fuzzy. Your physics background uses absolutes - ellipses are perfectly defined in theory - but chaos still exists in practice.

Ah, the standard cop-out. You guessed my background wrong. I am not a physicist (though I think you are characterizing the "absolute" nature of their jobs incorrectly as well). Most of an engineer's job is design (with no math/stats involved). We make trade-offs by feel all the time. The idea is to achieve a "balance" among design requirements. It is plenty fuzzy. We do make "figures of merit" and so on, but usually designs aimed at maximizing such figures of merit end up blowing up some other facet of the design and become unfeasible. That's why there aren't too many computer programs around that auto-design things (other than tiny subsets of the design).

The ellipses I mentioned are confidence ellipses (you can think of them as two-dimensional confidence intervals). There is a 90% confidence ellipsoid, a 95% confidence ellipsoid, etc. The 100% confidence interval is never used because it tells us nothing. You will not always be right. The point I was making is that the telemetry data should be trusted more than the static statistical model.

Let me give you an example... a real one from my life, this morning. I have two buses I can take to work every morning. I know that one is faster than the other - I've measured it. I know that one bus is often full and skips my stop, making up for the fact it is faster. I walk out this morning, and I feel that I should take the slower bus. While taking it, the faster bus, not full, passes me.

Now, I felt that way, but I'm not sure which one I should take. So I start timing them. Statistically, I will arrive earlier if I take the faster bus, despite the fact it might be notably slower sometimes... but my gut was wrong because I mis-measure the extra time waiting as longer than it really is. That bias is actually normal. People often act when not acting is the best choice, just as people are inherently unable to understand risk outside of "the middle space".

Just because the situation is chaotic and I might not "always" know the right decision, on average, I can make the decision that will work for me. I can't measure the location of the buses perfectly, so I do the next best thing.
Is that useless?

What if I was to start recording the date, weather, exact time, day of the week... then perform factor analysis to learn what factors may change the schedule/traffic/ridership? For example, that during the summer the faster bus is the right choice (because summer vacation means fewer riders), but in winter the slow bus is better (since the fast bus comes from a mountain). I don't need to know the cause - it would show clumps around dates, which I could use blind.

Just because your gut is wrong doesn't mean an expert's gut will be wrong. Have you read Blink? Amazon.com: Blink: The Power of Thinking Without Thinking: Books: Malcolm Gladwell. And quite frankly, the common-sense timing (maybe you do it a few times) is rather simple and concrete. Judging someone's intelligence is a completely different exercise.

Oh, and good luck with your factor analysis on the bus-picking thing. I'm sure you'll find it worthwhile ;). Think of the minutes over your lifetime you will have saved over just picking the fast bus all the time, after one of the most rudimentary calculations.
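That rudimentary calculation is just an expected-value comparison. With hypothetical numbers (the ride times, skip probability, and extra wait below are all invented, not taken from the example above):

```python
# All numbers hypothetical: the "fast" bus takes 20 min but is full and
# skips the stop 30% of the time (add a 10 min wait for the next one);
# the "slow" bus reliably takes 27 min.
p_skip, fast_ride, extra_wait, slow_ride = 0.30, 20.0, 10.0, 27.0

exp_fast = (1 - p_skip) * fast_ride + p_skip * (extra_wait + fast_ride)
print("expected minutes, fast-bus strategy:", exp_fast)

# Minutes saved per commute by the better strategy, scaled up to
# roughly 250 workdays a year over a 40-year career, in hours
saved = abs(slow_ride - exp_fast)
print("hours saved over a working lifetime:", saved * 250 * 40 / 60)
```

With these made-up numbers the fast bus wins by a few minutes a day; the whole decision takes one line of arithmetic once the two averages are timed.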


If statistically, wealth divergence is an accurate predictor of behaviour in a given situation, one can infer a generalised behaviour.

Again, you are inferring a cause-effect relationship when you generalize like this. What if all the people in the town you are setting up shop in are really honest and close-knit, and the founder is well liked by the townspeople? Again, looking at the specific circumstances of the situation you are in will largely make your statistical observations irrelevant.

I understand that this isn't fundamental enough for you, but since we cannot measure fundamentals accurately enough individually - the individual does measure this through their own statistical model and it is generally not accurate - the relationship is well known and is only being measured to see how valid that gut feeling is.

But you can measure fundamentals in this situation. First, there is the background check. Then you could place each person (easily done in an interview) in a situation where he or she could steal something valuable, seemingly with impunity, and see whether he or she does.


re: inculcate ethics, it is shown that mitigating factors include harsh punishments and the % chance of being caught. Conditioning requires consistent negative feedback during the "training period". It is possible to control for this, and experiments are done all the time. It's a lot tougher with people, yes, and fuzzier, but this is about factors.

What I am saying is that when it comes to people, it is not about factors, but about the people themselves.

Simply put, you assert that inculcating ethics would have an effect. You cannot measure the fundamentals, but you could measure it by taking many people, splitting them into two groups, then measuring the difference between the trained/conditioned group and the untrained one. How else can you know how effective something is? Gut instinct? Why would that be better?

If you did this study, it may or may not prove the training effective. If it doesn't, how do we know it isn't because the particular training method we used was poor? If it does, how do we know we didn't have sample bias?

What I am saying is that it is much simpler (maybe "gut instinct" was too colloquial a term) to have good judges of character judge the character of the people being hired. Note: the judges need to be people with a long career of judging character well, perhaps seasoned police or FBI interrogators.

Yes, although there are other factors that do matter. Overpaying still tends to follow the same wealth-gap curve, although strangely enough, it does cause people to work harder too!

It makes me work less hard. It's as if it doesn't matter what I do, so why put in the extra effort?

One should use what is most likely to be predictively accurate. I haven't found personal judgments very good so far; or at least, it has been shown that people consistently over-estimate their ability to judge things more accurately than statistics, yet consistently underperform them.

I can understand gamblers, etc., having that issue with things that are fairly concrete. But I think people (especially ones experienced at making good judgments based on dynamic data) are better at making judgments about things like "intelligence", "personality", "potential for success on the job", etc.

Ah, the race card. And no, because the correlation gap isn't sufficient to apply to the individual, nor is IQ correlated significantly enough with job performance. Now, let's say you have to hire, blind, someone who has to move heavy boxes around. Do you hire a woman?

Ah, "the race card" card. This was simply my attempt at reductio ad absurdum. Of course, you have the option of believing the absurd in order to keep your viewpoint.

Regarding hiring a woman to move heavy boxes... if she is strong enough, yes, I would hire her. Why would I weed out women explicitly? I would simply state on the job description that the candidate will need to move X kg boxes regularly. Then in the interview, I would have her move several boxes, perhaps heavier than needed on the job. The fact that she is a woman, and the fact that women on average are less strong than men, are irrelevant at this point. That irrelevance is what I am trying to highlight.

In both cases, the correct answer is to hire on what you can measure, to the best of your ability. If I had to hire someone from two distinct groups, blind, without knowing anything other than their group's average IQ, then yes, I would hire from the higher IQ group. One makes the decisions based upon the information one can get.

I have never known anyone to hire "blind". Keep in mind that if you allow yourself to start with a false hypothesis, you can claim nearly anything in conclusion. My very point is that the data I (as a somewhat experienced interviewer) get from personal interaction with a person is a better predictor of future performance on the job than IQ and personality tests.

In hiring situations, however, you have at least two alternatives - concrete grades/previous performance, and the ability to follow up with the individual (interview or otherwise). Those are individual traits.

In my hiring experience, previous grades, test scores and the like have told me very little in comparison with my direct interaction with candidates. In fact, high test scorers have consistently been disappointments (with a couple of exceptions). Again, with the interview, the previous data becomes irrelevant.

I think that you mistakenly believe that social sciences attempt to use factor analysis for individual analysis. Psychiatrists are what would be akin to your "fundamental" view - they drug people that exhibit symptoms, then adjust the medications as they get feedback.

Well, psychiatrists work in a similar way to medical doctors then. I'm not saying that is the best thing, but really, in their situation, this approach is better. You need to look at the specifics. I'd like to see you fix your car using factor analysis on cars in general instead of direct observations of your specific car.

As far as my mistake about social science trying to make inferences about individuals based on statistical data, I hope I am wrong. But I think "intelligence", "personality", and "performance on the job" are extremely individualized (specific-case) things, and I see social scientists making claims about such things.
 