Keep in mind, as your net worth increases, in both cases your expected rate of return decreases (both options are lousy investments).
But let's see how the "bet everything, every time, on the same coin" strategy works out over the long run in terms of probability of ruin...
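To put rough numbers on it - these are made up for illustration, not from the thread: a 50/50 coin that pays 2.2x on heads and wipes you out on tails, i.e. a +10% EV per flip:

```python
# Made-up coin for illustration: 50/50, heads pays 2.2x your stake, tails
# wipes you out. EV per flip = 0.5 * 2.2 = 1.1, a +10% edge - and yet
# betting everything every time still ruins you with probability -> 1.

p_heads, payoff = 0.5, 2.2

for n in (1, 5, 10, 20, 50):
    p_ruin = 1 - p_heads ** n        # ruined unless every single flip is heads
    ev     = (p_heads * payoff) ** n # expected bankroll multiple
    print(f"{n:3d} flips: P(ruin) = {p_ruin:.6f}, EV = {ev:,.2f}x")
```

The expected value grows without bound while the probability of ever collecting it goes to zero - which is the whole problem with betting everything every time.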
Which was the point - neither is relevant. Pretend, for example, that your pension gets clawed back once you have a certain income - then flipping the coin once actually becomes weighted towards flipping, because the return doesn't bottom out at a true "zero". What if, instead of resetting to 0, you reset to 0.3x your original net worth, but you're unable to flip again? Or to an absolute number... or, in the pension case, to a graduated amount dependent on your final net worth.
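Using the same made-up coin as above, a floor at 0.3x changes the one-flip math like this:

```python
# Same made-up coin: 50/50, pays 2.2x on heads. Hard ruin resets you to
# zero; the clawback-style variant resets you to a 0.3x floor instead
# (and you can't flip again after a loss).

p_heads, payoff, floor = 0.5, 2.2, 0.3

ev_hard_zero  = p_heads * payoff + (1 - p_heads) * 0.0    # = 1.10
ev_with_floor = p_heads * payoff + (1 - p_heads) * floor  # = 1.25

print(f"one flip, ruin = 0:     EV = {ev_hard_zero:.2f}x, worst case 0.0x")
print(f"one flip, floor = 0.3x: EV = {ev_with_floor:.2f}x, worst case {floor}x")
```

With the downside capped, the single flip is strictly more attractive - which is the clawback point: the worst case was never really zero.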
I'm not mathematically astute enough to derive these models, I fear, so that isn't an option for me. I also feel that these models are inherently unsuitable - they work great at reducing data down to a model, but they rarely convey an understanding of how things actually work. I'd rather see simulations than the derived model, because I can step through a simulation to understand the options involved.
What would you consider the formula for the long run expected value of this process?
Meaning mine? I'm not sure, actually.
Lemme see, I'd need to know the degree of expected variation, the EV, and the degree of margin... or at least the point at which I would consider myself ruined.
If I were to use that scenario, my initial capital would be (a), borrowed against (b), the home residence. The relative cost of (a) decreases by 2% per year while the relative value of (b) increases by 2% per year. I take (a) and buy a structurally non-correlated asset, which I'll call (c); initially c = a, but (c) will increase at 2% on average. I then leverage (c)'s value, maintaining 50% equity, meaning I now have a second loan (d), initially d = c, and my holding of the asset becomes 2c. Over time, the relative cost of (d) also drops at 2% per year.
That makes sense so far. What is the risk of ruin here? I would define it as complete failure when a+d >= 2c+b - IOWs, when the FV of the loans is greater than the FV of the investments.
So the real value of (a+d) decreases at (a+d)/1.02 per year, while (2c+b) increases at the rate of (2c+b)*(1 + 0.02 + market real return)... now I just need to know the variation for c and b... hrmmm. Except that doesn't account for the cost of carrying (a+d). And that's correlated to the market rate... though I could fix the cost of a, while d would be floating... Hrmmm. And that depends on outside income, since the interest on a+d is a tax-deductible expense... which ignores further income being injected... and that matters, because excess income is added to the portfolio, changing the relative mix of c and d compared to a and b.
Now I remember why I can't do this.
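That said, even if I can't solve it in closed form, the bookkeeping above simulates directly. A rough Monte Carlo sketch - every rate and starting value below is a placeholder I made up, not derived from anything:

```python
import random

# Rough Monte Carlo of the scenario above, in real (inflation-adjusted)
# dollars: the interest-only loans a and d stay flat in nominal terms, so
# their real value shrinks ~2%/yr. Every number below is a made-up
# placeholder, including the starting values.

INFLATION   = 0.02   # erodes the real value of the loans
REAL_RETURN = 0.05   # assumed mean real return on the leveraged asset
VOL         = 0.15   # assumed stdev of its annual return
HOME_GROWTH = 0.02   # real growth of b, the home
INTEREST    = 0.04   # rate paid on a and d
TAX_RATE    = 0.40   # interest assumed deductible at this marginal rate
YEARS, TRIALS = 25, 10_000

def one_path():
    a = d = 100.0      # the two loans
    b = 400.0          # the home
    c = 200.0          # the leveraged holding (the "2c" above)
    for _ in range(YEARS):
        a /= 1 + INFLATION
        d /= 1 + INFLATION
        b *= 1 + HOME_GROWTH
        c *= 1 + random.gauss(REAL_RETURN, VOL)
        # after-tax carrying cost, paid by selling a slice of the holding
        c -= (a + d) * INTEREST * (1 - TAX_RATE)
        if a + d >= c + b:           # loans worth more than assets: ruin
            return True
    return False

ruined = sum(one_path() for _ in range(TRIALS))
print(f"P(ruin within {YEARS} years) ~ {ruined / TRIALS:.3f}")
```

The simulation just steps through the same recurrences I wrote out; the hard part is choosing the inputs and the correlations between them, which this ignores entirely.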

I'm going to stick with "I don't know" - I can't even begin to imagine calculating this with even the most complex of models. I know there are some models that let you plug this sort of thing in, but they're simplistic ROR measurements based on margin and the like, which isn't the whole picture.
Though I certainly don't mind anyone telling me how I could do this mathematically - I'm pretty sure I wouldn't be able to understand it, which makes it rather useless for working things out on my own.
Perhaps we can create a "ptgatsby's get-rich scheme" thread.
Scheme? *shrug* Simplify the problem if the leverage bothers you.
Do you pay off your mortgage or carry an interest-only long-term loan?
Do you invest the money that would otherwise go into your mortgage in the market?
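Here's the toy version of that choice, with invented numbers (a $300k interest-only loan at 4% vs. a market earning 7% nominal; amortization, taxes, and risk all ignored):

```python
# Toy version of the choice, everything invented: a $300k interest-only
# loan at 4% vs a market earning 7% nominal. Amortization, taxes, and
# risk (the whole point of this thread) are ignored.

LOAN, RATE, MARKET, YEARS = 300_000, 0.04, 0.07, 25

# Carry the loan, put the principal in the market.
invested      = LOAN * (1 + MARKET) ** YEARS
interest_paid = LOAN * RATE * YEARS             # simple sum, undiscounted
carry_net     = invested - LOAN - interest_paid # repay principal at the end

# Pay off the mortgage instead: no interest, no market exposure.
payoff_net = 0.0

print(f"carry-and-invest comes out ~${carry_net - payoff_net:,.0f} ahead")
```

The 7%-vs-4% spread is doing all the work there, and the risk of ruin is exactly what it leaves out.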
That's what I'm trying to determine. If it also applies to investments... then why own investment property for those advantages if I can get them through leverage? And then, if that is true, how is the ROR affected by having two scenarios running at the same time?
The only way I know to do that is to model it. I'm not sure of the correlations between housing and interest rates, housing and markets, interest rates and markets, and so forth... so modeling with historical data seems best to me.
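One way to do that without estimating the correlations directly: resample whole historical years, so whatever co-movement existed stays paired. A sketch - the table rows are placeholders; real use would load actual annual series:

```python
import random

# Block-bootstrap sketch: resample whole historical years so whatever
# co-movement existed between housing, rates, and the market stays paired,
# without estimating a correlation matrix. The rows are placeholders; real
# use would load actual annual series.

# (market_return, home_appreciation, loan_rate) for one historical year
HISTORY = [
    (0.12, 0.04, 0.05),
    (-0.08, 0.01, 0.06),
    (0.21, 0.07, 0.04),
    (0.05, -0.02, 0.07),
    # ... many more rows in real use
]

def simulate(years=25, assets=600.0, loans=200.0):
    for _ in range(years):
        mkt, home, rate = random.choice(HISTORY)  # one whole year at a time
        assets *= 1 + (mkt + home) / 2            # crude 50/50 asset blend
        assets -= loans * rate                    # carrying cost on the loans
        if loans >= assets:
            return True                           # ruin, as defined above
    return False

TRIALS = 10_000
print(sum(simulate() for _ in range(TRIALS)) / TRIALS)
```

Sampling a whole year at a time keeps housing, rates, and the market moving together the way they actually did, which is the part I don't trust myself to parameterize.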