Now what is wrong with this survey...
19 samples across 14 studies... need I say more? I'll skim the surface of what is wrong with the research... but not give too much detail because I can't be arsed.
Multiple pieces of research conducted by multiple people tend to expose you to a lot of research error. A fair few were unpublished, so we have no record of how badly conducted the research was - psychologists are typically poor at doing research... if you want better-executed research, speak to a social researcher (being the most process-orientated) or a market researcher... psychologists focus too much on the hypothesis and not enough on the method...
This is a desk survey of other people's quant research...
Time: none of the samples appear to align on a sample period or relate to a definable population. A 12 year old in 1980 and a 12 year old in 1985 might have significant differences.
Synthesising quant data for secondary use requires data fusion to be done with any degree of credibility.
It gets worse....
Not only was the sample combined woolly, with non-defined ages, time periods etc... but the MBTI classification was not given in a number of the different surveys... rather, the research took the personality test conducted in some of the surveys and converted it to MBTI... NOW, the correlation between MBTI and the other personality tests was at times as poor as 0.75... so 25% wrong before you start (are we seeing the level of wrongness creeping up?).
MBTI is a segmentation tool with fairly light differentiation (personally I've seen a lot better segmentation tools - firmer, more rigorous). I'd suspect there is up to 4-9% misattribution... i.e. people falling between categories... and that is at the best of times.
Kids being asked to classify themselves with this device - yet another layer of vagueness.
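To see how these layers stack up, here's a back-of-the-envelope sketch. It follows the post's own reading of a 0.75 correlation as "75% agreement"; the misattribution and self-classification figures are illustrative assumptions, not measurements from the report:

```python
# Back-of-the-envelope error stacking -- illustrative numbers only.
conversion_agreement = 0.75   # test-to-MBTI correlation quoted above, read as agreement
misattribution_ok    = 0.91   # assume ~9% fall between categories (top of the 4-9% range)
self_classify_ok     = 0.95   # assumed extra noise from kids classifying themselves

# If the layers are independent, the chance a converted label survives all three:
p_label_survives = conversion_agreement * misattribution_ok * self_classify_ok
print(f"Chance a converted label survives all three layers: {p_label_survives:.0%}")
# roughly 65% -- i.e. about a third of converted classifications are already suspect
```

The independence assumption is generous; correlated errors could make it worse.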
So the samples don't match or align, we are forcing MBTI onto other personality profiles, the time periods don't align, we don't have a population definition for the normative data OR for the gifted, and I would place an educated guess on none of the surveys' definitions of gifted aligning with each other... or they would have defined it...
To then try and claim quantitative outcomes from the research is ridiculous.....
As a surveying method, this is BAD! BAD research... Desk research can be used in this case, but it still has firm strictures... better to draw insight from each individual report and use it as an information platform...
By that... read through each piece of research and list out the key findings...
One survey which did use MBTI found a high skew of N's... list that out, list out the big stuff, go through the reports and keep doing that...
How much alignment is there between the research findings of each individual report? Where there is a building body of evidence, it can be said that there is a strong likelihood that, on properly conducted primary research, we would expect XYZ to hold true... i.e. you use the existing research to plan out your hypotheses...
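A minimal sketch of that tally-the-findings approach (report names and findings here are placeholders, not the actual studies):

```python
from collections import Counter

# Placeholder "information platform": key findings listed per report.
findings_by_report = {
    "report_a": ["N skew in gifted", "I skew in gifted"],
    "report_b": ["N skew in gifted", "P skew in gifted"],
    "report_c": ["N skew in gifted", "I skew in gifted"],
}

# Count how many independent reports support each finding.
support = Counter(f for fs in findings_by_report.values() for f in fs)
for finding, n_reports in support.most_common():
    print(f"{finding}: supported by {n_reports} of {len(findings_by_report)} reports")
```

Findings backed by most of the reports become your strongest hypotheses for primary research; one-off findings stay on the watch list.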
From the research reports, the following are big enough to warrant further consideration:
N dominance in gifted samples... I'd say this is a big enough finding to warrant putting money on it holding true in a primary research project...
Increasing numbers of I's in gifted samples... I'd say this would also hold true with follow-up research; however, I would think the "I" proportion will be bigger in the gifted sample but not a majority of the gifted... i.e. below 49% of the gifted, yet much higher than the normative sample... I'd say this will hold true on primary research.
P increase in gifted... Personally I'd keep it as a hypothesis, but I don't think the data to date is strong enough to put any certainty on this as an outcome....
The only way this research should be used is qualitatively, to draw hypotheses; it cannot be used conclusively.
And for those interested in research... no single piece of research is perfect... there are always better ways to ask questions, quota samples need reweighting (adding in error), random samples skew randomly so are less representative, interviewers add bias, the non-interviewed add non-response bias, and the list goes on and on. The researcher's job is to lower the level of error and bias: strip out as much of the manageable error as possible and minimise the non-manageable error... it's more complex in reality... Aim for good enough for purpose... drug research sampling is the most disciplined of the absolute research, but... that requires absolute testing of impact, not attitude or behaviour (usually); market research works on behaviour and focuses largely on purchasing behaviour, social research on human habits...
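On the quota-reweighting point, here's a hypothetical post-stratification sketch showing where the extra error comes from (all group names, shares and counts are invented for illustration):

```python
# Post-stratification: reweight an achieved sample back to known population shares.
population_share = {"age_12_13": 0.50, "age_14_15": 0.50}   # assumed known population
achieved_counts  = {"age_12_13": 70,   "age_14_15": 30}     # skewed achieved sample

n = sum(achieved_counts.values())
weights = {group: population_share[group] / (achieved_counts[group] / n)
           for group in achieved_counts}

for group, w in weights.items():
    print(f"{group}: weight {w:.3f}")
# The under-represented group gets a weight above 1: each of those few
# respondents now counts for more, which inflates the variance -- the
# "adding in error" the post refers to.
```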
For those interested, read the report and make up your own mind.