
Pacifism

Siúil a Rúin
when the colors fade
Joined: Apr 23, 2007
Messages: 14,044
MBTI Type: ISFP
Enneagram: 496
Instinctual Variant: sp/sx
I couldn't find a thread already started, but if there is, then feel free to merge this one.

Most of my life I have admired pacifists and aspired to be one myself. I understand the concept that all life is precious and that we shouldn't assume the role of deciding whose life gets to continue. There is some level of arrogance in deciding someone else should die. I have admired people like conscientious objectors, MLK, and many Buddhists who have constructed convincing philosophies in favor of non-violence.

On one level I want to be convinced of the validity of this, but I'm not. The second-order effect of allowing a dangerous person to live seems like another form of devaluing life. If I know someone will harm or kill another person, isn't allowing that person to live the same as allowing the other to die?

Then we enter the realm of utilitarian ethics, which gets disturbing if you delve deeply enough. I don't agree with all of that either. Another issue with the second-order-effect problem is that you can extend it further and further: any action we take can have negative results we cannot foresee. Is there a principled reason to draw the line at second-order effects, rather than at first-order effects or much further out?

I'm curious to read more philosophy and ideas about these issues so I can determine clearly where I stand. Right now I would kill to protect. It's not just an idea, but something I suspect I would actually do in the heat of the moment.
 

Magic Poriferan
^He pronks, too!
Joined: Nov 4, 2007
Messages: 14,081
MBTI Type: Yin
Enneagram: One
Instinctual Variant: sx/sp
Then we enter the realm of utilitarian ethics, which can get disturbing when delving deeply enough.

Not to me. :D

So, I see much the same problem with pacifism. Consequentially, pacifism is self-defeating if it prevents a person from killing one person who would themselves go on to kill many. And I don't want to hear anything about culpability. You always have culpability, even if you do nothing; you can never choose not to make a choice. In the end, the pacifist is the same as the person who doesn't pull the lever in the trolley problem: they choose to kill more people in order to maintain the symbolic appearance of killing fewer.

But regarding your question about orders of effect, that is indeed an interesting one I've been chewing on for some time. At the most basic level, I'd say one should do the best they can. You can only know so much; some degree of unforeseen consequences is inescapable. That doesn't mean you shouldn't at least try to carry out the most optimal course of action according to the knowledge available to you.

But that's operating as if you have a static amount of knowledge. The trickier part comes when we ask about the value of acquiring knowledge itself. If you don't know something, you could learn more, which in theory will give you better judgment in doing the right thing. However, it takes time (and perhaps other resources) to gain more knowledge, and there is a potentially infinite amount of knowledge you could gain. When do you have enough, and when does investing in gathering more knowledge become worse than acting on the knowledge you already have? At some point you cross the threshold into analysis paralysis or navel-gazing and fail to act optimally, but how do we know when that is?

I'm not entirely sure yet (and perhaps never will be). It's like going down the rabbit hole of doing cost-benefit analysis on cost-benefit analysis. It gets very meta very fast.
 