I hadn't realized there was such a long discussion going on about "extrapolation" into the past.
You've made several good points here so let me see if I can further elaborate on what you've said and also present my viewpoint of things. (Warning: this will not be brief.)
Here is my take on the issue...
We don't have data on the laws of mechanics, the laws of gravitation, or the laws of electrodynamics going back millions of years, but it is scientifically valid to assume they worked then just as they do now. Beta decay is a similar phenomenon.
Admittedly this phenomenon has a bit of randomness built in, but beta decay is a law of nature as we know of these laws (in fact it was the first known interaction of the weak nuclear force). We have had plenty of evidence of how weak interactions work... not as much as for electrodynamic interactions, but plenty to consider it a law.
I think it is reasonable to assume that natural laws behave in the past as they do in the present. The problem when reconstructing the past is assuming that only said natural law is responsible for your current data. For example, say a person used current knowledge of gravity to extrapolate the paths of our solar system's planets back billions of years. If it were later discovered that another star system had passed very close to ours about 1 million years ago, then all of the previous calculations would be thrown off for earlier dates. Everything would have to be recalculated (if possible). In this situation there is nothing wrong with gravity. The problem is assuming that our solar system is a closed system.
Now the nice thing about a planet's orbit is that it really can only be affected by something with a significant gravitational field. On the other hand, if you think about the parent-daughter ratio of elements used in radioactive dating, there are all sorts of things that could affect the ratio over time: chemical reactions, earthquakes, erosion, underground water flow, organisms in the ground, etc.... There is nothing wrong with the decay rate itself, but there are other factors which affect the ratio of the two elements used in the time calculation.
Or perhaps I am making something out of nothing? Perhaps all radioactive materials exist in a state similar to temporal stasis where nothing in the outside world can affect them?
Actually I can see either viewpoint being correct. The only way to know which is correct is to perform some type of test. The best one I can think of is to find other models which match the results of radioactive dating. If there are several good ones, then the method is solid. If there are none, then the method should be highly suspect. I personally do not know if there exist independent models that confirm radioactive dating, but the ones I tried did not match, so I became skeptical.
(Aside: I also take issue with the illogical conclusion that all life descended from a single microorganism, but I'd rather stay on one topic, so I'll save that for a later time.)
Within this law, half-lives inherently have an error value associated with them. As AntisocialOne mentioned, radiocarbon dating is inadequate for geological time because its error is too large. But you can still date back to 26000 yrs. with an error of 163 years (the source didn't specify, but I believe this is the standard error, so we'd expect the standard deviation to be much greater). I don't know specifically whether there is a substance that decays with a standard error small enough for dating geological time, but whatever method we use, we'll have a confidence interval around the date we claim. Many people forget this fact. Using radiocarbon dating for geological time would have a large confidence interval indeed.
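To make the basic calculation concrete, here is a minimal sketch of how a date is read off from the decay law. The function name and the sample fractions are my own illustrative choices, not anything from the thread; the only inputs are the carbon-14 half-life (5730 years) and the remaining fraction of the parent isotope.

```python
import math

C14_HALF_LIFE = 5730.0  # years

def age_from_fraction(remaining_fraction):
    """Age implied by the fraction of the original C-14 still present:
    t = half_life * log2(N0 / N)."""
    return C14_HALF_LIFE * math.log2(1.0 / remaining_fraction)

# A sample retaining half of its C-14 dates to exactly one half-life:
print(age_from_fraction(0.5))    # 5730.0
# A quarter remaining means two half-lives:
print(age_from_fraction(0.25))   # 11460.0
```

Note that the measured fraction sits inside a logarithm, which is exactly where the confidence interval discussed above enters the date.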
This combination of fundamental law + confidence interval modeling is incredibly powerful and accurate, and it is used all the time in physics. Things do become more dubious, however, if you do not have a fundamental law, but only an empirical model that you try to extrapolate well into the past, or if you do not specify a confidence interval around a specific value.
Since the empirical model is really only a summary of data and does not embody an understanding of nature, it would be dubious to use it, and perhaps this is your objection. But only the half-life itself is empirical. The beta decay model is a law of nature (as we know laws of nature).
This is good, and I'd like to elaborate. Radioactive decay is very similar to a binomial probability model. An analogy often used goes something like this: say a person flips 80 quarters and then takes away all the coins that turned up heads. Then they flip again and remove again, and so on. The expected value is 40 quarters after the first flip, then 20 after the second, then 10, 5, etc.... In reality you usually don't get exactly 40 heads on the first flip; there is a margin of error (based on the standard deviation). If you use more quarters the standard deviation increases too, growing in proportion to the square root of the mean.
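The coin analogy is easy to simulate. This is just an illustrative sketch (the function name is mine): flip N fair coins, remove the heads, repeat, and watch the counts roughly halve each round while wobbling around the expected values.

```python
import random

def flip_and_remove(n_coins, rounds, rng):
    """Flip n_coins fair coins, remove the heads, repeat;
    return the surviving count after each round."""
    counts = []
    for _ in range(rounds):
        # Each coin survives (comes up tails) with probability 0.5.
        n_coins = sum(1 for _ in range(n_coins) if rng.random() >= 0.5)
        counts.append(n_coins)
    return counts

print(flip_and_remove(80, 4, random.Random(42)))  # roughly [40, 20, 10, 5]

# With far more coins the absolute spread grows (like sqrt of the mean),
# but the relative spread shrinks:
trials = [flip_and_remove(80000, 1, random.Random(i))[0] for i in range(200)]
mean = sum(trials) / len(trials)
std = (sum((t - mean) ** 2 for t in trials) / len(trials)) ** 0.5
print(mean, std)  # mean near 40000, std dev near sqrt(80000 * 0.25) ~ 141
```

The second experiment is the reason large samples behave more predictably: the standard deviation grows only as the square root of the count.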
Now apply this to the idea of radioactive decay. For carbon-14 (half-life 5730 years), each carbon-14 atom has a 0.5 probability of decaying during any given 5730-year interval. After 5730 years roughly half of the carbon-14 will have decayed, modified by the standard deviation and confidence interval. The standard deviation depends on the amount of carbon-14 originally present. This means that when you do the time calculation, the "+ or - x" is applied to the exponent rather than simply being "+ or - z years". And here I am referring only to the theoretical standard deviation that can be measured in a laboratory. The standard deviation of any sample taken in the field will always be greater than the theoretical standard deviation.
Now I don't really have a problem with this part of the theory. I'm just explaining it because I think it's useful to know how the calculation works, and to see how easily it could go wildly off. An increase in the standard deviation affects the exponent, so if the parent-daughter ratio of the sample elements varied significantly over time, it would have a huge effect on the final time calculation. This is a process that requires a high degree of accuracy in the original sample.