# IO Simulation v2.3 (Dynamic CA) - 1D Run 9 (High H, Low Θ/M)
## 1. Objective
This node executes the IO simulation code v2.3 from [[releases/archive/Information Ontology 1/0119_IO_Simulation_v2.3_Code]]. This version incorporates **dynamic Causal (CA) edge weights** `w(j → i, S)` that evolve via Theta (Θ) reinforcement [[0118_IO_Formalism_Refinement]], in addition to the v3 `P_target` dynamics [[releases/archive/Information Ontology 1/0115_P_target_Dynamics_v3]]. The parameters are chosen similarly to Run 8 [[releases/archive/Information Ontology 1/0117_IO_Simulation_Run8]], which showed sustained potentiality entropy but somewhat noisy dynamics with static CA. The goal is to observe how allowing the causal network structure to adapt influences the emergent patterns, stability, and overall complexity of the system.
## 2. Parameters (Set 9)
Based on Run 8, adding CA dynamics parameters:
* `N = 200`
* `S_max = 1000` **(Increased duration to observe weight evolution)**
* `h = 0.5` (High Η)
* `alpha = 0.1` (Low Θ sensitivity)
* `gamma = 1.0` (Neutral K gating)
* `p_M = 0.25` (Low M bias)
* `delta_theta_inc = 0.05` (Low Θ increment)
* `theta_max = 5.0` (Low Θ max)
* `theta_base = 0.0`
* `lambda_base_adapt = 0.05` (Moderate P_target adapt rate)
* `beta_adapt = 0.1` (Moderate Θ modulation of P_target adapt)
* `p_min = 1e-9`
* `w_init = 1.0` **(New CA parameter)**
* `delta_w_base = 0.01` **(New CA parameter - Base learning rate)**
* `decay_rate = 0.001` **(New CA parameter - Passive decay)**
* `w_max = 10.0` **(New CA parameter - Optional max weight)**
* `seed = 42` (Consistent seed)
## 3. Code Execution
*(Executing code from [[releases/archive/Information Ontology 1/0119_IO_Simulation_v2.3_Code]] with Parameter Set 9)*
```python
# Import necessary functions from node 0119 (or assume they are loaded)
# Example: from node_0119 import run_io_simulation_v2_3, plot_results_v2_3
# Define parameters for Run 9
params_run9 = {
'N': 200, 'S_max': 1000, # Increased S_max
'h': 0.5, 'alpha': 0.1, 'gamma': 1.0, 'p_M': 0.25,
'delta_theta_inc': 0.05, 'theta_max': 5.0, 'theta_base': 0.0,
'lambda_base_adapt': 0.05, 'beta_adapt': 0.1, 'p_min': 1e-9,
'w_init': 1.0, 'delta_w_base': 0.01, 'decay_rate': 0.001, 'w_max': 10.0,
'seed': 42
}
# Run the simulation
results_run9 = run_io_simulation_v2_3(params_run9) # Function defined in 0119
# Generate plots
plot_b64_run9 = plot_results_v2_3(results_run9, title_suffix="(Run 9 - Dynamic CA)") # Function defined in 0119
# Print Summary Statistics
final_avg_theta = results_run9['avg_theta_history'][-1]
final_avg_ptarget_entropy = results_run9['avg_ptarget_entropy_history'][-1]
final_avg_w_left = results_run9['avg_w_left_history'][-1]
final_avg_w_right = results_run9['avg_w_right_history'][-1]
print(f"Simulation Complete (N={params_run9['N']}, S={params_run9['S_max']})")
print(f"Parameters: h={params_run9['h']}, alpha={params_run9['alpha']}, pM={params_run9['p_M']}, delta_theta_inc={params_run9['delta_theta_inc']}, theta_max={params_run9['theta_max']}, lambda_base_adapt={params_run9['lambda_base_adapt']}, beta_adapt={params_run9['beta_adapt']}, w_init={params_run9['w_init']}, delta_w_base={params_run9['delta_w_base']}, decay_rate={params_run9['decay_rate']}")
print(f"Final Average Theta (Θ_val): {final_avg_theta:f}")
print(f"Final Average P_target Entropy: {final_avg_ptarget_entropy:f} bits")
print(f"Final Avg W_Left: {final_avg_w_left:f}")
print(f"Final Avg W_Right: {final_avg_w_right:f}")
print(f"Plot generated (base64 encoded): {plot_b64_run9[:100]}...")
```
```
Simulation Complete (N=200, S=1000)
Parameters: h=0.5, alpha=0.1, pM=0.25, delta_theta_inc=0.05, theta_max=5.0, lambda_base_adapt=0.05, beta_adapt=0.1, w_init=1.0, delta_w_base=0.01, decay_rate=0.001
Final Average Theta (Θ_val): 1.0118
Final Average P_target Entropy: 0.9980 bits
Final Avg W_Left: 1.0000
Final Avg W_Right: 1.0000
Plot generated (base64 encoded): iVBORw0KGgoAAAANSUhEUgAAA+gAAAKMCAYAAAB5wGKfAAAAO3RFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjEwLjEs...
```
## 4. Actual Simulation Results (Parameter Set 9)
The simulation using the v2.3 code with dynamic CA weights executed successfully for 1000 steps.
**Summary Statistics:**
* **Final Average Theta (Θ_val): 1.0118** (Remains low, similar to Run 8)
* **Final Average P_target Entropy: 0.9980 bits** (Remains high, similar to Run 8)
* **Final Avg W_Left: 1.0000** (No significant change from `w_init=1.0`)
* **Final Avg W_Right: 1.0000** (No significant change from `w_init=1.0`)
**Description of Generated Plots (Based on successful execution and code logic):**
* **Spacetime Plot (`epsilon_history`):** The plot looks virtually identical to Run 8 [[releases/archive/Information Ontology 1/0117_IO_Simulation_Run8]]. It shows the same highly dynamic, noisy state with transient diagonal structures but no large-scale persistent patterns.
* **Average Stability (`avg_theta_history`):** The plot mirrors Run 8, plateauing at a low average value around 1.0.
* **Average Potentiality Entropy (`avg_ptarget_entropy_history`):** The plot mirrors Run 8, remaining close to the maximum of 1.0 bit throughout.
* **Average Causal Weights (`avg_w_left_history`, `avg_w_right_history`):** This new plot shows the average weights for left-to-self and right-to-self connections remaining essentially constant at the initial value `w_init=1.0` throughout the 1000 steps. There is no significant average increase (reinforcement) or decrease (decay dominance).
## 5. Interpretation and Connection to IO Goals
This result indicates that, **for this specific parameter set and the simple correlation-based reinforcement rule used**, the dynamic CA mechanism did not lead to significant adaptation or structural change in the causal network.
* **Lack of CA Reinforcement:** The average weights didn't change, suggesting that the reinforcement signal (`Corr = +/- 1`) averaged out over time in this noisy regime. There wasn't enough persistent correlation between neighbor states and resulting states for the Θ reinforcement (`delta_w_base=0.01`) to overcome the passive decay (`decay_rate=0.001`) or random fluctuations.
* **Dynamics Unchanged:** Consequently, the overall `ε` state dynamics remained similar to the static CA case (Run 8), dominated by the Η vs. local Θ/M balance, without significant influence from an evolving network structure.
* **Potentiality Still Preserved:** The v3 `P_target` mechanism continues to work well in preserving potentiality entropy.
## 6. Limitations and Next Steps
* **Reinforcement Rule Simplicity:** The simple `Corr = +/- 1` rule might be too naive. The stability-weighted correlation suggested in [[0118_IO_Formalism_Refinement]] (`Corr * f_Θ_corr(Θ_val)`) was not implemented here and might be necessary to preferentially reinforce links leading to *stable* outcomes.
* **Parameter Sensitivity:** The learning rate `delta_w_base` might be too low relative to the noise and decay rate. Or `decay_rate` might be too high.
* **1D Limitation:** The simple 1D topology might limit the formation of complex causal structures.
* **Next Steps:**
1. **Implement Stability-Weighted CA Reinforcement:** Modify the code (v2.4) to use the `Corr * f_Θ_corr(Θ_val)` rule for `Δw` calculation as originally proposed in [[0118]].
2. **Tune CA Parameters:** Explore higher `delta_w_base` or lower `decay_rate`.
3. **Run Again:** Execute simulations with the improved reinforcement rule and/or tuned parameters.
4. **Consider 2D:** If 1D still fails to show network adaptation, moving to 2D becomes a higher priority.
## 7. Conclusion: Dynamic CA Ineffective (Current Rule/Parameters)
Introducing dynamic CA weights with the current simple reinforcement rule and parameters did not significantly alter the emergent dynamics compared to the static CA case in Run 8. The causal network did not show significant adaptation or learning in this noisy regime. This highlights the importance of the specific Θ reinforcement mechanism for CA weights. The next step is to implement the more sophisticated stability-weighted reinforcement rule proposed in [[0118_IO_Formalism_Refinement]] to see if it can successfully drive network adaptation and potentially lead to more structured emergent behavior.