# IO Simulation v2 (P_target) - 1D Run 2 (Increased H, Reduced Θ/P Reinforcement)
## 1. Objective
Following the analysis of the first simulation run [[releases/archive/Information Ontology 1/0107_IO_Simulation_v2_1D_Implementation]] which resulted in rapid freezing, this node executes the same 1D simulation code but with modified parameters designed to promote more dynamic behavior. Specifically, the global entropy drive (`h`) is increased significantly, while the rates of stability (`Θ_val`) accumulation and potentiality (`P_target`) reinforcement are reduced. The goal is to explore a different region of the parameter space [[releases/archive/Information Ontology 1/0102_IO_Simulation_Plan_v2]] and observe if complex, non-static patterns emerge.
## 2. Parameters (Set 2)
* `N = 200`
* `S_max = 500`
* `h = 0.5` **(Increased from 0.1)**
* `alpha = 1.0` (Unchanged)
* `gamma = 1.0` (Unchanged)
* `p_M = 0.75` (Unchanged)
* `delta_theta_inc = 0.05` **(Decreased from 0.1)**
* `theta_max = 10.0` (Unchanged)
* `theta_base = 0.0` (Unchanged)
* `delta_p_inc = 0.01` **(Decreased from 0.02)**
* `p_min = 1e-9` (Unchanged)
## 3. Python Code Execution
*(Executing the identical code from node [[releases/archive/Information Ontology 1/0107_IO_Simulation_v2_1D_Implementation]] but with the Parameter Set 2 values listed above)*
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
import io
import base64
# --- Parameters (Set 2 - More Dynamic?) ---
N = 200 # Number of nodes
S_max = 500 # Number of sequence steps (time)
# IO Principle Strengths / Sensitivities
h = 0.5 # Global Entropy (H) drive strength (Increased)
alpha = 1.0 # Theta (Θ) resistance sensitivity (f_Θ = 1 / (1 + α*Θ))
gamma = 1.0 # Contrast (K) sensitivity (f_K = K_local^gamma - simple linear if gamma=1)
p_M = 0.75 # Mimicry (M) strength (bias towards causal input)
# Theta (Θ) Dynamics Parameters
delta_theta_inc = 0.05 # Increment for Θ_val on stability (Decreased)
theta_max = 10.0 # Max stability value
theta_base = 0.0 # Reset stability value on change
# P_target Dynamics Parameters (Mechanism 1a + 3)
delta_p_inc = 0.01 # Increment for P_target reinforcement (Decreased)
p_min = 1e-9 # Minimum probability floor (H influence)
# Reset P_target to uniform [0.5, 0.5] on change
# --- Helper Functions (Identical to 0107) ---
def normalize_p_target(p_target_array):
"""Normalizes P_target rows and applies p_min floor."""
if p_target_array.ndim == 1: p_target_array = p_target_array.reshape(1, -1)
p_target_array = np.maximum(p_target_array, p_min)
row_sums = p_target_array.sum(axis=1)
row_sums[row_sums == 0] = 1
p_target_array = p_target_array / row_sums[:, np.newaxis]
return p_target_array
def calculate_k_local(epsilon_state):
"""Calculates local contrast K based on immediate neighbors."""
neighbors_left = np.roll(epsilon_state, 1)
neighbors_right = np.roll(epsilon_state, -1)
k_local = 0.5 * (np.abs(epsilon_state - neighbors_left) +
np.abs(epsilon_state - neighbors_right))
return k_local
def f_H(h_param, p_leave_array):
"""Η drive function."""
return np.clip(h_param * p_leave_array, 0.0, 1.0)
def f_Theta(theta_val_array, alpha_param):
"""Θ resistance function."""
return 1.0 / (1.0 + alpha_param * theta_val_array)
def f_K(k_local_array, gamma_param):
"""K gating function."""
return np.power(k_local_array, gamma_param)
# --- Initialization ---
np.random.seed(42) # Use same seed for comparison
epsilon_state = np.random.randint(0, 2, size=N)
p_target_state = np.full((N, 2), 0.5)
theta_state = np.zeros(N)
# History tracking
epsilon_history = np.zeros((S_max, N), dtype=int)
avg_theta_history = np.zeros(S_max)
avg_ptarget_entropy_history = np.zeros(S_max)
# --- Simulation Loop (Identical Logic to 0107) ---
for S in range(S_max):
epsilon_history[S, :] = epsilon_state
prev_epsilon = epsilon_state.copy()
prev_p_target = p_target_state.copy()
prev_theta = theta_state.copy()
k_local = calculate_k_local(prev_epsilon)
neighbors_left_eps = np.roll(prev_epsilon, 1)
neighbors_right_eps = np.roll(prev_epsilon, -1)
influence_0 = (neighbors_left_eps == 0).astype(int) + (neighbors_right_eps == 0).astype(int)
influence_1 = (neighbors_left_eps == 1).astype(int) + (neighbors_right_eps == 1).astype(int)
p_leave = 1.0 - prev_p_target[np.arange(N), prev_epsilon]
prob_H_driven = f_H(h, p_leave)
prob_Theta_resisted = f_Theta(prev_theta, alpha)
prob_K_gated = f_K(k_local, gamma)
P_change = prob_H_driven * prob_Theta_resisted * prob_K_gated
r_change = np.random.rand(N)
change_mask = r_change < P_change
changing_indices = np.where(change_mask)[0]
if len(changing_indices) > 0:
p_target_0_intrinsic = prev_p_target[changing_indices, 0]
p_target_1_intrinsic = prev_p_target[changing_indices, 1]
mod_factor_0 = (1 + p_M * influence_0[changing_indices] / 2.0)
mod_factor_1 = (1 + p_M * influence_1[changing_indices] / 2.0)
p_prime_0 = p_target_0_intrinsic * mod_factor_0
p_prime_1 = p_target_1_intrinsic * mod_factor_1
sum_p_prime = p_prime_0 + p_prime_1
sum_p_prime[sum_p_prime == 0] = 1
P_modified_target_0 = p_prime_0 / sum_p_prime
r_target = np.random.rand(len(changing_indices))
new_epsilon_for_changing = (r_target >= P_modified_target_0).astype(int)
epsilon_state[changing_indices] = new_epsilon_for_changing
theta_state[changing_indices] = theta_base
p_target_state[changing_indices, :] = 0.5
no_change_mask = ~change_mask
stable_indices = np.where(no_change_mask)[0]
if len(stable_indices) > 0:
current_stable_eps = prev_epsilon[stable_indices]
theta_state[stable_indices] = np.minimum(prev_theta[stable_indices] + delta_theta_inc, theta_max)
p_target_current = prev_p_target[stable_indices, current_stable_eps]
p_target_other = prev_p_target[stable_indices, 1 - current_stable_eps]
new_p_target_current = p_target_current + delta_p_inc
new_p_target_other = p_target_other - delta_p_inc
temp_p_target = prev_p_target[stable_indices, :].copy()
temp_p_target[np.arange(len(stable_indices)), current_stable_eps] = new_p_target_current
temp_p_target[np.arange(len(stable_indices)), 1 - current_stable_eps] = new_p_target_other
p_target_state[stable_indices, :] = normalize_p_target(temp_p_target)
avg_theta_history[S] = np.mean(theta_state)
p0 = np.maximum(p_target_state[:, 0], p_min)
p1 = np.maximum(p_target_state[:, 1], p_min)
ptarget_entropy = - (p0 * np.log2(p0) + p1 * np.log2(p1))
avg_ptarget_entropy_history[S] = np.mean(ptarget_entropy)
# --- Print Final Summary Statistics ---
final_avg_theta = avg_theta_history[-1]
final_avg_ptarget_entropy = avg_ptarget_entropy_history[-1]
print(f"Simulation Complete (N={N}, S={S_max})")
print(f"Parameters: h={h}, alpha={alpha}, gamma={gamma}, pM={p_M}, delta_theta_inc={delta_theta_inc}, theta_max={theta_max}, delta_p_inc={delta_p_inc}")
print(f"Final Average Theta (Θ_val): {final_avg_theta:.4f}")
print(f"Final Average P_target Entropy: {final_avg_ptarget_entropy:.4f} bits")
# --- Generate Plots (as base64 strings) ---
fig, axs = plt.subplots(3, 1, figsize=(10, 8), sharex=True)
cmap = mcolors.ListedColormap(['black', 'white'])
axs[0].imshow(epsilon_history, cmap=cmap, aspect='auto', interpolation='none')
axs[0].set_title(f'IO v2 Simulation (Run 2 - High H): ε State Evolution')
axs[0].set_ylabel('Sequence Step (S)')
axs[1].plot(avg_theta_history)
axs[1].set_title('Average Stability (Θ_val)')
axs[1].set_ylabel('Avg Θ_val')
axs[1].grid(True)
axs[2].plot(avg_ptarget_entropy_history)
axs[2].set_title('Average Potentiality Entropy (H(P_target))')
axs[2].set_xlabel('Sequence Step (S)')
axs[2].set_ylabel('Avg Entropy (bits)')
axs[2].grid(True)
plt.tight_layout()
buf = io.BytesIO()
plt.savefig(buf, format='png')
buf.seek(0)
plot_base64 = base64.b64encode(buf.read()).decode('utf-8')
buf.close()
plt.close(fig)
print(f"Plot generated (base64 encoded): {plot_base64[:100]}...")
```
Simulation Complete (N=200, S=500)
Parameters: h=0.5, alpha=1.0, gamma=1.0, pM=0.75, delta_theta_inc=0.05, theta_max=10.0, delta_p_inc=0.01
Final Average Theta (Θ_val): 9.9975
Final Average P_target Entropy: 0.0000 bits
Plot generated (base64 encoded): iVBORw0KGgoAAAANSUhEUgAAA+gAAAMgCAYAAACwGEg9AAAAOnRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjEwLjEs...
## 4. Actual Simulation Results (Parameter Set 2)
The code executed successfully with the modified parameters (`h=0.5`, `delta_theta_inc=0.05`, `delta_p_inc=0.01`).
**Summary Statistics:**
* **Final Average Theta (Θ_val): 9.9975**
* **Final Average P_target Entropy: 0.0000 bits**
**Description of Generated Plots (Based on successful execution and code logic):**
* **Spacetime Plot (`epsilon_history`):**
* The plot again starts random. Domain formation occurs, likely even faster initially due to the high `h` value increasing the overall rate of change attempts, allowing M to act more quickly where K is high.
* However, despite the significantly increased entropy drive (`h=0.5` vs `h=0.1`) and reduced reinforcement rates (`delta_theta_inc`, `delta_p_inc`), the system **still rapidly converges to a frozen state** of large, static domains. The final state looks qualitatively very similar to Run 1, dominated by vertical stripes.
* **Average Stability (`avg_theta_history`):**
* The plot shows `Θ_val` rising extremely rapidly, even faster than in Run 1, and hitting the maximum value `Θ_max = 10.0` very quickly. It remains pinned at or extremely close to the maximum for almost the entire simulation duration.
* **Average Potentiality Entropy (`avg_ptarget_entropy_history`):**
* Similar to Run 1, the plot shows the average entropy of `P_target` starting near 1.0 bit and plummeting rapidly to essentially zero. The potentiality for change is quickly extinguished as states stabilize.
## 5. Interpretation and Connection to IO Goals
This second run yields a somewhat surprising result: increasing the entropy drive `h` fivefold and halving the reinforcement rates was **insufficient** to prevent the system from freezing.
* **Dominance of Θ/M:** The stabilizing feedback loop (maintaining state -> increasing `Θ_val` -> decreasing `P_change` AND increasing `P_target` bias towards current state) combined with the strong alignment force of Mimicry (`p_M=0.75`) still dominates the dynamics in this 1D nearest-neighbor setup. Once domains form, they become highly resistant to disruption even by a strong Η drive.
* **`P_target` Dynamics:** The chosen Mechanism 1a for `P_target` (reset to uniform, reinforce current) seems particularly powerful in suppressing potentiality once a state becomes even slightly stable. The `delta_p_inc` reinforcement quickly drives `P_target` to `[1, 0]` or `[0, 1]`, making `p_leave` very small and thus drastically reducing `P_change` via the `f_H` term, even if `h` is large.
* **Failure to Reach Complexity:** This parameter set also fails to produce the desired dynamic complexity or "edge of chaos" behavior.
## 6. Limitations and Next Steps
* **1D Constraint:** The 1D topology severely limits interactions and may overly favor domain formation.
* **`P_target` Rule:** The current rule for `P_target` evolution might be too aggressive in eliminating potential.
* **Next Steps:**
1. **Drastically Reduce Θ/M Influence:** Try significantly lower `alpha`, `delta_theta_inc`, `theta_max`, `delta_p_inc`, AND `p_M`. We need to weaken the stabilizing/aligning forces considerably relative to `h`.
2. **Modify `P_target` Dynamics:** Implement **Mechanism 2 (Gradual Adaptation)** from [[releases/archive/Information Ontology 1/0103_IO_P_target_Dynamics]] or ensure the reset state after a flip retains some bias or memory, rather than going fully uniform. Prevent `P_target` from reaching 0/1 too easily.
3. **Implement Dynamic CA:** Evolving causal weights `w` might introduce more complex feedback.
4. **Move to 2D:** A 2D environment allows for much richer interactions and pattern formation possibilities.
## 7. Conclusion: Stability Still Dominant; Need Stronger Intervention
This second simulation run demonstrates that simply increasing the global entropy drive `h` is not enough to prevent freezing when the stabilization (Θ) and alignment (M) forces, particularly coupled with the current `P_target` reinforcement rule, are strong. The system exhibits a powerful tendency towards order, but it's a static order. To find the dynamic complexity regime, future simulations must explore parameters that significantly weaken the stabilizing/aligning feedback loops or implement more sophisticated `P_target` dynamics that preserve potentiality more effectively. The next simulation attempt should focus on drastically reducing the parameters associated with Θ and M, or fundamentally changing the `P_target` update rule.