Yes: life-changing, enlightening, devastating, sometimes all at once. I've been through a few.

Don't forget how one arrives at an axiom. Usually, one starts with axioms that aren't really axioms at all, but a set of self-consistent hypotheses that might later become theorems, which in turn might point to a core axiom.

There aren't many axioms at all, but ... I don't get to shape the theorems. I only get to shape the axioms. If I choose crappy axioms, my theorems are even worse.

I don't think all of them are axioms. Fi theorems are built on core axioms, each with a hypothesis and a conclusion, and over time they are "proven" (or not) by practical experience. (In fact, I don't think there are that many axioms at all. They tend to be global and few.) Theorems by their nature invite examination, so it is not as threatening for an Fi user to assess the initial hypothesis.

Perhaps for me the axioms are more changeable because they're relatively newer?

I'll accept that distinction, but something else is going to happen first: I'm going to determine whether the action that Fi is urging me to do is a damn fool thing to try in the first place!

I think it takes a while to articulate why; inside, you know the answer. One need not wait for the ability to articulate it before acting.

Part of what I'm working at is to have Ni/Te and Fi in harmony, such that the conclusions of either will be similar; or rather, that all will have contributed to the analysis and agree on at least similar courses of action. Fi becomes part of the core assumptions of Ni/Te, and Ni/Te play a strong role in shaping Fi. This way, should I react instinctively, it is likely to be something I would have chosen in the first place.

I have to change the axioms, not the theorems. Changing the theorems operates at too small a scale, and creates a meaningless patchwork of nitpicky rules that are not necessarily self-consistent, but are ad hoc attempts to address specific cases. The axioms determine much of who I am, and everything else follows from those choices. By choosing proper axioms, by rotating my frame of reference so that I have an orthogonal set of a few key values rather than a long list of things that annoy me, it's much easier to see what applies and how.

I find this interesting; I like how you are dissecting it. I would prefer the word "theorem" as described above, and basically throughout the rest of the post. Axioms are HUGE; they are not the smaller branches on the tree.

Te can sometimes result in spectacular personal failure, but whose fault is that? And was one really using Te, or simply employing rationalization to do exactly what one felt with Fi in the first place?

This is where we will disagree: unlike math, Te is not "good" at saying that certain axioms suck. (An axiom being an underlying, unprovable principle.) In fact, instinctively I think examining axioms with Te can itself sometimes be a spectacular personal failure.

In my case, a sample sucky axiom of which Te intrinsically approves is, "objective, logical reasoning is the best way to make decisions."

(I would assert that the above is an Fi axiom for most young Te users. Fi learns to instinctively trust logic, both for good and for ill.)

However, I'm not referring to evaluating an axiom alone, but rather in tandem with other axioms. Te can work out the objective consequences of axioms, especially if Ni (in my case) has had experience dealing with corresponding issues. Moreover, it can take a set of axioms and work with them in a plug-and-play fashion, figuring out which ones "fit together". It quickly becomes obvious (to Ni, at least) which axioms are the oddballs and bear a closer look. An axiom such as "let's make everyone happy" gets thrown out right quick, because it breaks everything else. (Most INTJs have done this instinctively, if not deliberately.) Te can present several sets of axioms for Fi to decide which it prefers; Fi can then adopt the preferred set.

Now, on the Fi side, there is not as huge a paradigm shift as you might imagine. Fi wouldn't choose that. Rather, there is a rotation, an adjustment, a realignment, such that the "core me" that is not any single cognitive function is satisfied with the new arrangement, and Fi learns to work with it. An analogy: thermodynamics and statistical mechanics are the same damn thing, the latter being a completely different model that nonetheless yields, for most purposes, the same quantifiable results as the former. The new model is qualitatively different, with very different axioms, yet it still essentially works like the prior set of rules, only much more refined.

Sometimes I've had a very good rule, but my actual understanding of it was weak, and so the "Fi axiom" was imperfect. I replace it with a better understanding of the same rule, which can look very, very different (especially if I try to articulate it), but it is intrinsically the same rule, better understood.

I don't think I could do any of this without having Fi and Te work together, each contributing its own strengths.