I myself don't assume that letting someone achieve a goal is inherently good (unless, very loosely, we take the other person's goal to be achieving happiness, and not at the greater expense of the happiness of others). I try to go by a kind of quantitative happiness principle.
The effects that take precedence would relate to a kind of philosophical number crunching, if you will. If you go by a typical mantra like the most happiness, for the most people, for the longest period of time, then you have something of an outline (my thinking gets more detailed than those words left in their own vagueness). Sometimes a prediction is very difficult to make, but sometimes, not so much.
The obvious one is that if you know everyone is going to die, then you know, about as well as you can know anything, that they aren't coming back, that the human race is done, and that there won't be humans around to appreciate anything anymore. That's a prediction I can make into eternity, and it can be factored into a moral analysis whenever it comes up (blowing up the earth is bad, mm'kay?). I know most predictions won't be that easy, but rather than being grounds for abandoning what is probably the most functional approach to morality, that should just be more incentive to gain insight.
And how would you know? You say you would, with the knowledge you have. How is that any easier than trying to know what I say you should try to know in moral matters?
If goodness is ultimately that internal, based that much on how you feel about yourself, then it really is of no relevance to anyone else, and I think it qualifies as a sort of solipsistic ethic.
I wish I could come up with a better way of putting that.
I often regret entering lengthy discussions on this forum, too. This time, I don't regret it yet. There's still time, however.