I've often toyed with the idea of designing and proposing some sort of deep learning neural net that could look at the big data sets gathered over decades of MBTI, Big Five, Enneagram, and Socionics research in order to better model personality traits, using something like TensorFlow, R, or SciPy.

To briefly explain what I mean by this: deep learning neural nets designed to interpret visual information are "trained" by being exposed to huge data sets (usually millions of images). Without being told, they "learn" to spot the subtle features that categorically differentiate the images, such that they can begin to give very accurate labels for any image you show them (around 95% accuracy or more). However, they don't "look" at images the way we do. That is to say, the "features" they come to rely on to differentiate images often make no sense to us humans. For instance, you could add noise to an image - enough noise that a human can't tell what it is anymore - and the machine can still tell what it is. Nevertheless, these neural nets are extremely accurate (again, 95% of the time or more).
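To make the idea of a learned visual "feature" concrete, here is a minimal sketch of the core operation inside a convolutional net: a small filter slid across an image, producing a map of where a pattern occurs. The filter below is hand-written for illustration (a vertical-edge detector); a trained net learns thousands of such filters on its own, many of which look nothing like anything a human would design.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core op of a conv layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # Weighted sum of the patch under the filter at this position.
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# Tiny 8x8 test image: dark left half, bright right half.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# Vertical-edge filter: responds where brightness changes left to right.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])

response = conv2d(image, kernel)
# The response is large only along the columns where the edge sits,
# and zero in the flat dark and bright regions.
```

Stacking many layers of such filters (with learned, not hand-written, weights) is what lets the net build up from edges to textures to whole-object categories.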

This is what I mean by modeling personality using a deep learning neural net. It could be trained on the statistical data sets from personality research gathered over the years, in order to identify different personality types from images alone, without the need for self-reporting, test-taking, or observation by a trained specialist. In a sense, the neural net would be the trained specialist.

Essentially, you feed it the data sets, then you show it millions of images of people, each labeled with a personality type category, and it "learns" what statistical features of physical appearance correlate with a person's type. Then you test the accuracy of the program on examples it hasn't seen (it will probably be low at first, something like 70%) and let training continue to adjust the "weights" assigned to the neurons in each interpretive layer until the accuracy reaches 90% or more.
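The "adjusting the weights until accuracy improves" step above can be sketched in miniature. This is a toy sketch, not the proposed system: a single-layer classifier trained by gradient descent on synthetic 2-D data, with the two clusters standing in as placeholders for two hypothetical type categories. A real deep net applies the same idea (backpropagation) across many layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linearly separable clusters as stand-in "type" categories.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w = np.zeros(2)   # weights, all zero before training
b = 0.0           # bias

def accuracy(w, b):
    preds = (X @ w + b > 0).astype(int)
    return (preds == y).mean()

before = accuracy(w, b)  # untrained weights guess at chance level

for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)      # gradient of cross-entropy loss
    grad_b = (p - y).mean()
    w -= 0.1 * grad_w                    # the weight adjustment itself
    b -= 0.1 * grad_b

after = accuracy(w, b)  # accuracy climbs well above chance
```

The loop never inspects the weights by hand; the gradient of the loss tells it which direction to nudge each one, which is why modern nets with millions of weights can be trained at all.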

Come to think of it, I wouldn't be surprised if someone has already done this. I'll have to look into it more; it was just a thought I had recently for a master's thesis.