Oh shit, using the Big Five to screen applicants is bad news. On the Big Five alone, I'd completely flunk an application for a tech company despite my resume and skillset.
It's not intended to be used for down-in-the-weeds purposes such as this one--it's intended for more abstract subjects, such as finding correlations, developing theories, furthering research, etc.
Big Data != Big Five
(It's Big Brother's secret sibling)
But agree with all of that. There is something sinister about screening people based on personality tests. Use it for career/personal development, by all means. But to exclude someone based on their being a specific type? That's just a dangerous new kind of -ism.
A psychologist involved in creating the algorithm discussed in the article defended it, and addressed the concerns about legality and potential for indirect discrimination, thusly:
" We've known for years that intelligence is the single best predictor of performance across all job types, but as an industry we can't really use it because intelligence tests tend to discriminate. That's why you see so many personality-style tests.
There are a lot of specific questions employers cannot ask (personal, disability, some criminal history) as well as protected classes which cannot be arbitrarily discriminated against. Protected classes include ethnicity/race, gender, and age (people over 40). We're constantly checking our assessments to ensure they do not discriminate against women, any ethnicity, or older applicants.
Things get trickier when you add the notion of job relevance. IF you are using a screening tool that discriminates, it MUST be job relevant. You cannot disproportionately screen out women who can't lift xx lbs over their head from a firefighting job if that's not something a firefighter actually has to do on the job. You CAN disproportionately screen out blind people for the job of fire truck driver because vision is obviously job relevant."
Thereby implying that intelligence isn't "job relevant"...?
Let's say the "more than zero but less than five social networks" heuristic becomes a standard to screen candidates at the first stage of the recruitment process (increasingly outsourced to a handful of CRAs).
That means people who refrain from using those sites are being discriminated against for exercising good sense/ethics (even as it's illegal to discriminate on the basis of a criminal record...). It also makes it necessary to open up one's life to public scrutiny in order to secure work.
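To make the heuristic concrete, here's a minimal Python sketch of what such a first-stage filter might look like. The function name, data structure, and the idea that the band is exclusive on both ends are all assumptions invented for illustration:

```python
# Hypothetical sketch of the "more than zero but less than five
# social networks" screening heuristic. Names and thresholds are
# invented for illustration, not taken from any real CRA system.

def passes_social_network_screen(num_networks: int,
                                 min_exclusive: int = 0,
                                 max_exclusive: int = 5) -> bool:
    """Return True if the applicant's social-network count falls
    strictly inside the acceptable band."""
    return min_exclusive < num_networks < max_exclusive

# Hypothetical applicants: name -> number of social networks used.
applicants = {"Alice": 0, "Bob": 2, "Carol": 7}
screened_in = [name for name, count in applicants.items()
               if passes_social_network_screen(count)]
# Alice, who uses zero networks (perhaps out of privacy concerns),
# is filtered out before a human ever reads her application.
```

The point of the sketch is how crude the rule is: it cannot distinguish a privacy-conscious applicant from a fictitious one, yet it runs before any human judgment is applied.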
Why is it "wrong" to discriminate on the basis of intelligence, but not wrong to discriminate on the basis of personality type or desire for privacy?
Part of the problem is that technology is much more agile than the legislature, which means lots of unethical loophole spinning.
Beyond ethics, from a technical standpoint, there is so much that can go wrong with this model, including, but not limited to:
- More often than not, Big Data is "dirty" and unreliable
- Correlation is not causation
- Psychologists make mistakes
- Programmers make mistakes translating psychologists' mistakes into working software
Why do people find it so easy to trust computers when they don't trust human judgment?
Why don't they understand that computers are only as good as the human judgments and data fed into them, and that, unlike human beings, they cannot reflect on their own output and say "that's garbage"?