We need a fundamentally different approach to these iterations. The final output is **highly** dependent on the initial state: adding a single new file should not fundamentally change the outcome, yet it does, radically. This is chaos theory at work rather than the deterministic progressive elaboration (refinement) process we want.

To dampen these fluctuations for multi-file inputs, we need a stratified iteration approach: a first stage randomly samples a percentage of the provided files and runs a few iterations on each subsample, and a second stage then iterates over the derivative results (analogous to bootstrapping in statistical modeling; see the sketch below). The intended outcome is that no single file disproportionately affects the result, especially when files vary widely in length and textual complexity.

In short, the process needs more statistical rigor, so that we can have higher confidence that the outcome of a given iterative run over a set of inputs is robust (in the sense of "classic" statistical sampling techniques) and approximates the "true" value (in terms of output text) that would be obtained by processing a hypothetical complete corpus of all knowledge (all words, phrases, sentences, thoughts, formulas, etc.).
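
As a rough illustration of the intended two-stage process, here is a minimal Python sketch. Everything in it is a placeholder rather than part of the existing pipeline: the `iterate` callable stands in for whatever step currently consolidates a list of input texts into a single refined text, and the sampling fraction and iteration counts are arbitrary defaults chosen only to show the shape of the idea.

```python
import random


def refine(texts, iterate, passes):
    """Apply `iterate` repeatedly, feeding its single output back in as the next input."""
    current = texts
    for _ in range(passes):
        current = [iterate(current)]
    return current[0]


def bootstrap_refine(files, iterate, sample_frac=0.3, n_subsamples=5,
                     subsample_passes=3, final_passes=3):
    """Two-stage, bootstrap-style iteration over a set of input files.

    Stage 1: draw several random subsamples of the files and refine each
             subsample independently for a few passes.
    Stage 2: refine the derivative results together, so that no single
             input file dominates the final output.
    """
    sample_size = max(1, int(len(files) * sample_frac))
    derivatives = []
    for _ in range(n_subsamples):
        subsample = random.sample(files, sample_size)
        derivatives.append(refine(subsample, iterate, subsample_passes))
    return refine(derivatives, iterate, final_passes)
```

With this shape, the variance contributed by any single file is diluted across subsamples before the final consolidation, which is the statistical-rigor property described above; the sampling fraction and number of subsamples would still need to be tuned empirically against file count and length variability.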