It would help if Fi weren't so hard to 'get' for people. Ti seems simple enough: it looks for inconsistencies, seeks precision in speaking, etc., whereas Fi is always described in some almost ethereal way, like you have to have wings to understand it. Fe is misunderstood by a lot of people too. I'm not always looking for harmony, but I know how to speak in order to achieve it if I WANT to (Fe).
Below is a bit of an explanation of Ti I can easily understand, as I do this a LOT. I like to be understood, I like to seek the objective truth, and I almost NEED precision. (One example: are you a secret, or even open, grammar nazi?) For instance, just recently, I felt incredibly uncomfortable (to the point of speaking out loud to MYSELF) when I heard a 'scientist' say they'd found the "oldest cave art." I loudly said, "THAT YOU KNOW OF. Just because another hasn't been FOUND yet, OMG." See? Annoying. *shrug* Precision.
Here's the excerpt. Maybe it'll help. There's plenty more on the net, of course:
Introverted Thinking (Ti)
Introverted Thinking often involves finding just the right word to clearly express an idea concisely, crisply, and to the point.
Using introverted Thinking is like having an internal sense of the essential qualities of something, noticing the fine distinctions that make it what it is and then naming it.
It also involves an internal reasoning process of deriving subcategories of classes and sub-principles of general principles.
These can then be used in problem solving, analysis, and refining of a product or an idea.
This process is evidenced in behaviors like taking things or ideas apart to figure out how they work.
The analysis involves looking at different sides of an issue and seeing where there is inconsistency. [Personal note here. Inconsistency SHOUTS at me when I see/hear/sense it. It will NOT be ignored, and I can't let it pass without studying it, thinking about it, finding what's behind it.]
In so doing, we search for a "leverage point" that will fix problems with the least amount of effort or the least damage to the system.
We engage in this process when we notice logical inconsistencies between statements and frameworks, using a model to evaluate the likely accuracy of what's observed.