My little Radioactivity_Sim project has hit its first memory-usage barrier while attempting to calculate the fates of 10^7 Radium-224 nuclei. That is a big number, but it is nowhere near the numbers needed for practical use. The program will need to be capable of providing usable predictions for at least 10^3 moles of nuclei, which comes to 6.022 x 10^26 nuclei.
But 6.022 x 10^26 is much too large if limited to 10^7 individual calculations, right?
Not necessarily. There are some tricks I can employ to reduce the memory required per nucleus, and the first will be a sort of time sieve. The user inputs a certain time range, say from 12:00 noon to 12:05 five years from now, and the program runs through a huge number of nuclei event calculations, but it only saves the events which occur during that time range. Events falling outside the target period are thrown away without affecting the progression through the decay chain associated with them. This will effectively exchange computation time for simulation accuracy.
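The sieve idea might look something like the sketch below. This is not the actual Radioactivity_Sim code, just a minimal illustration with a toy, truncated Ra-224 chain and rounded half-lives: every decay time is still sampled, so the chain always progresses correctly, but only events landing inside the target window are stored.

```python
import random

# Toy, truncated Ra-224 decay chain; half-lives in seconds (rounded).
HALF_LIVES = {
    "Ra-224": 3.162e5,  # ~3.66 days
    "Rn-220": 55.6,
    "Po-216": 0.145,
    "Pb-212": 3.83e4,   # ~10.6 hours
}
CHAIN = ["Ra-224", "Rn-220", "Po-216", "Pb-212"]
LN2 = 0.6931471805599453

def simulate_nucleus(t_start, t_end, rng):
    """Walk one nucleus down the chain; keep only in-window events."""
    kept = []
    t = 0.0
    for isotope in CHAIN:
        # Decay time is exponentially distributed with rate ln(2)/t_half.
        t += rng.expovariate(LN2 / HALF_LIVES[isotope])
        if t > t_end:
            break  # every later event is also outside the window
        if t >= t_start:
            kept.append((t, isotope))  # the sieve: store only these
        # Events before t_start are discarded, but t still advances,
        # so the progression through the chain is unaffected.
    return kept

rng = random.Random(42)
window = (0.0, 1.0e5)  # seconds
events = [e for _ in range(1000) for e in simulate_nucleus(*window, rng)]
print(len(events), "events stored for 1000 nuclei")
```

Memory now scales with the number of events inside the window rather than with the total number of events computed, which is the whole point of the sieve.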
Will this measure be enough to get up to the target goal? Not really, no. Of course, if the number of events fitting into the time period is no greater than 10^7 then the program will not have the same memory issues, but there is still computation time to consider. If it currently takes 5 minutes to calculate 10^7 nuclei, then it would take 5.7287 x 10^11 millennia to complete a calculation of the target number of nuclei. That is too many millennia, so some predictive modeling is certain to be eventually required.
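For anyone who wants to check that estimate, here is the back-of-the-envelope arithmetic, assuming a constant 5 minutes per batch of 10^7 nuclei and 365-day years:

```python
# Runtime estimate: scale 5 minutes per 10^7 nuclei up to 10^3 moles.
AVOGADRO = 6.022e23
target_nuclei = 1e3 * AVOGADRO           # 10^3 moles -> 6.022e26 nuclei
batches = target_nuclei / 1e7            # number of 10^7-nucleus batches
total_minutes = batches * 5.0            # 5 minutes per batch
minutes_per_millennium = 1000 * 365 * 24 * 60
millennia = total_minutes / minutes_per_millennium
print(f"{millennia:.4e} millennia")      # -> 5.7287e+11 millennia
```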