# IO Simulation v2.4 Batch Execution: Dynamic CA Parameter Sensitivity

## 1. Objective

This node executes a batch of simulations using the canonical code defined in [[releases/archive/Information Ontology 1/0136_IO_Simulation_Code_v2.4_Consolidated]]. The goal is to test the sensitivity of the dynamic Causality (CA) mechanism (v2.4, with stability-weighted reinforcement) to its key parameters: the base learning rate (`delta_w_base`) and the passive decay rate (`decay_rate`). This follows directly from the negative results of Run 10 [[releases/archive/Information Ontology 1/0123_IO_Simulation_Run10]], where no network adaptation was observed, and adheres to the simulation workflow [[releases/archive/Information Ontology 1/0132_IO_Simulation_Workflow]].

## 2. Simulation Setup

* **Code Version:** v2.4 from [[releases/archive/Information Ontology 1/0136_IO_Simulation_Code_v2.4_Consolidated]]
* **Base Parameters:** Identical to Run 10 [[releases/archive/Information Ontology 1/0123_IO_Simulation_Run10]] unless otherwise specified.
    * `N = 200`, `S_max = 1000`
    * `h = 0.5`, `alpha = 0.1`, `gamma = 1.0`, `p_M = 0.25`
    * `delta_theta_inc = 0.05`, `theta_max = 5.0`, `theta_base = 0.0`
    * `lambda_base_adapt = 0.05`, `beta_adapt = 0.1`, `p_min = 1e-9`
    * `w_init = 1.0`, `w_max = 10.0`
    * `seed = 42` (consistent seed for comparison)
    * `epsilon_states = 2` (binary states)
* **Swept Parameters:** `delta_w_base`, `decay_rate`
* **Runs Executed:**
    * **Run 10 (Recap):** `delta_w_base = 0.01`, `decay_rate = 0.001`
    * **Run 16:** `delta_w_base = 0.05`, `decay_rate = 0.001` (higher learning rate)
    * **Run 17:** `delta_w_base = 0.01`, `decay_rate = 0.0001` (lower decay rate)
    * **Run 18:** `delta_w_base = 0.05`, `decay_rate = 0.0001` (higher learning, lower decay)

## 3. Code Execution (Batch Run)

Two earlier executions of this batch aborted with `KeyError: 'plot_b64'` because the batch loop read the per-run plot data before `plot_results_v2_4` had been called. The listing below uses the corrected loop (plot generation immediately after each run); the isolated fix is sketched after the listing.

```python
# --- Start Code from 0136 ---
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
import io
import base64
from scipy.fft import fft  # For PSD calculation
import pandas as pd        # For displaying summary table

# Helper Functions (Copied from 0136 for execution context)
def normalize_p_target(p_target_array, p_min=1e-9):
    if p_target_array.ndim == 1:
        p_target_array = p_target_array.reshape(1, -1)
    p_target_array = np.maximum(p_target_array, p_min)
    row_sums = p_target_array.sum(axis=1)
    # Prevent division by zero if a row sums to 0 (should not occur unless the
    # input contains NaNs/Infs or N_states * p_min > 1)
    row_sums[row_sums <= 0] = 1.0
    p_target_array = p_target_array / row_sums[:, np.newaxis]
    return p_target_array

def calculate_k_local(epsilon_state):
    # Assumes discrete states; for binary states this is the local contrast
    # (fraction of differing neighbours), used as a proxy for K.
    neighbors_left_eps = np.roll(epsilon_state, 1)
    neighbors_right_eps = np.roll(epsilon_state, -1)
    k_local = 0.5 * ((epsilon_state != neighbors_left_eps).astype(float) +
                     (epsilon_state != neighbors_right_eps).astype(float))
    return k_local

def f_H(h_param, p_leave_array):
    """Η drive function."""
    return np.clip(h_param * p_leave_array, 0.0, 1.0)

def f_Theta(theta_val_array, alpha_param):
    """Θ resistance function."""
    return 1.0 / (1.0 + alpha_param * theta_val_array)

def f_K(k_local_array, gamma_param):
    """K gating function (simple linear gating on the 0/1 contrast)."""
    return np.clip(k_local_array * gamma_param, 0.0, 1.0)

# Simulation Function (Copied from 0136)
def run_io_simulation_v2_4(params):
    """Runs the IO v2.4 simulation with dynamic CA weights (stability weighted)."""
    # Extract Parameters
    N = params['N']
    S_max = params['S_max']          # Number of discrete sequence steps
    dt = params.get('dt', 1.0)       # Time step (can be 1 for CA-like updates)
    h = params['h']
    alpha = params['alpha']
    gamma = params['gamma']
    p_M = params['p_M']
    delta_theta_inc = params['delta_theta_inc']
    theta_max = params['theta_max']
    theta_base = params['theta_base']
    lambda_base_adapt = params['lambda_base_adapt']
    beta_adapt = params['beta_adapt']
    p_min = params['p_min']
    w_init = params['w_init']
    delta_w_base = params['delta_w_base']
    decay_rate = params['decay_rate']
    w_max = params.get('w_max', None)                 # Optional max weight
    seed = params.get('seed', None)
    epsilon_states = params.get('epsilon_states', 2)  # Number of discrete states

    if seed is not None:
        np.random.seed(seed)
    else:
        np.random.seed()

    # Initialization
    epsilon_state = np.random.randint(0, epsilon_states, size=N)
    # P_target: shape (N, epsilon_states)
    p_target_state = np.full((N, epsilon_states), 1.0 / epsilon_states)
    theta_state = np.zeros(N)
    w_left = np.full(N, w_init)
    w_right = np.full(N, w_init)

    # History tracking
    epsilon_history = np.zeros((S_max, N), dtype=int)
    avg_theta_history = np.zeros(S_max)
    avg_ptarget_entropy_history = np.zeros(S_max)
    avg_w_left_history = np.zeros(S_max)
    avg_w_right_history = np.zeros(S_max)

    # --- Simulation Loop ---
    for S in range(S_max):
        epsilon_history[S, :] = epsilon_state
        prev_epsilon = epsilon_state.copy()
        prev_p_target = p_target_state.copy()
        prev_theta = theta_state.copy()
        prev_w_left = w_left.copy()
        prev_w_right = w_right.copy()

        # --- Phase 1: Calculate Influences ---
        k_local = calculate_k_local(prev_epsilon)
        neighbors_left_eps = np.roll(prev_epsilon, 1)
        neighbors_right_eps = np.roll(prev_epsilon, -1)
        influence = np.zeros((N, epsilon_states))
        for k in range(epsilon_states):
            influence[:, k] = (neighbors_left_eps == k).astype(int) * prev_w_left + \
                              (neighbors_right_eps == k).astype(int) * prev_w_right
        total_causal_weight = influence.sum(axis=1)

        # --- Phase 2: Determine Probability of State Change ---
        current_state_indices = prev_epsilon
        p_target_current = prev_p_target[np.arange(N), current_state_indices]
        p_leave = 1.0 - p_target_current
        prob_H_driven = f_H(h, p_leave)
        prob_Theta_resisted = f_Theta(prev_theta, alpha)
        prob_K_gated = f_K(k_local, gamma)
        P_change = prob_H_driven * prob_Theta_resisted * prob_K_gated

        # --- Phase 3: Execute State Transition ---
        r_change = np.random.rand(N)
        change_mask = r_change < P_change
        no_change_mask = ~change_mask
        next_epsilon = prev_epsilon.copy()
        next_theta = prev_theta.copy()
        next_p_target = prev_p_target.copy()

        changing_indices = np.where(change_mask)[0]
        if len(changing_indices) > 0:
            p_target_intrinsic = prev_p_target[changing_indices, :]
            total_causal_changing = total_causal_weight[changing_indices]
            safe_total_causal = np.where(total_causal_changing == 0, 1.0, total_causal_changing)
            mod_factors = np.zeros_like(p_target_intrinsic)
            for k in range(epsilon_states):
                mod_factors[:, k] = (1.0 + p_M * influence[changing_indices, k] / safe_total_causal)
            p_prime = p_target_intrinsic * mod_factors
            p_prime_norm = normalize_p_target(p_prime, p_min)
            cum_probs = p_prime_norm.cumsum(axis=1)
            r_target = np.random.rand(len(changing_indices), 1)
            epsilon_target = (cum_probs < r_target).sum(axis=1)
            next_epsilon[changing_indices] = epsilon_target

        # Update Theta for ALL nodes
        next_theta[change_mask] = theta_base
        next_theta[no_change_mask] = np.minimum(prev_theta[no_change_mask] + delta_theta_inc, theta_max)

        # Update P_target using Mechanism v3 for ALL nodes
        p_context = np.zeros_like(prev_p_target)
        valid_ca = total_causal_weight > 0
        valid_indices = np.where(valid_ca)[0]
        if len(valid_indices) > 0:
            safe_total_causal_valid = total_causal_weight[valid_indices]
            for k in range(epsilon_states):
                p_context[valid_indices, k] = influence[valid_indices, k] / safe_total_causal_valid
        invalid_indices = np.where(~valid_ca)[0]
        if len(invalid_indices) > 0:
            p_context[invalid_indices, :] = 1.0 / epsilon_states

        f_theta_adapt = 1.0 / (1.0 + beta_adapt * prev_theta)
        lambda_eff = lambda_base_adapt * f_theta_adapt
        if len(changing_indices) > 0:
            # Reset towards P_context for changing nodes (lambda_reset = 1.0)
            next_p_target[change_mask, :] = p_context[change_mask, :]
        stable_indices = np.where(no_change_mask)[0]
        if len(stable_indices) > 0:
            lambda_eff_stable = lambda_eff[stable_indices]
            next_p_target[stable_indices, :] = (1.0 - lambda_eff_stable[:, np.newaxis]) * prev_p_target[stable_indices, :] \
                                               + lambda_eff_stable[:, np.newaxis] * p_context[stable_indices, :]
        next_p_target = normalize_p_target(next_p_target, p_min)

        # --- Phase 4: Update Causal Network Weights (Stability Weighted) ---
        corr_left = np.where(neighbors_left_eps == next_epsilon, 1, -1)
        corr_right = np.where(neighbors_right_eps == next_epsilon, 1, -1)
        safe_theta_max = theta_max if theta_max > 0 else 1.0
        f_theta_corr = 1.0 + next_theta / safe_theta_max
        dw_left = delta_w_base * corr_left * f_theta_corr
        dw_right = delta_w_base * corr_right * f_theta_corr
        w_left = prev_w_left * (1.0 - decay_rate) + dw_left
        w_right = prev_w_right * (1.0 - decay_rate) + dw_right
        w_left = np.maximum(0, w_left)
        w_right = np.maximum(0, w_right)
        if w_max is not None:
            w_left = np.minimum(w_left, w_max)
            w_right = np.minimum(w_right, w_max)

        # --- Assign final states for next step ---
        epsilon_state = next_epsilon
        theta_state = next_theta
        p_target_state = next_p_target

        # --- Calculate Metrics for History ---
        avg_theta_history[S] = np.mean(theta_state)
        log_p = np.log2(np.maximum(p_target_state, p_min))
        ptarget_entropy = -np.sum(p_target_state * log_p, axis=1)
        avg_ptarget_entropy_history[S] = np.mean(ptarget_entropy)
        avg_w_left_history[S] = np.mean(w_left)
        avg_w_right_history[S] = np.mean(w_right)

    # --- Prepare Results ---
    results = {
        "parameters": params,
        "epsilon_history": epsilon_history,
        "avg_theta_history": avg_theta_history,
        "avg_ptarget_entropy_history": avg_ptarget_entropy_history,
        "avg_w_left_history": avg_w_left_history,
        "avg_w_right_history": avg_w_right_history
    }
    return results

# Plotting Function (Copied from 0136)
def plot_results_v2_4(results, title_suffix=""):
    params = results["parameters"]
    epsilon_history = results["epsilon_history"]
    avg_theta_history = results["avg_theta_history"]
    avg_ptarget_entropy_history = results["avg_ptarget_entropy_history"]
    avg_w_left_history = results["avg_w_left_history"]
    avg_w_right_history = results["avg_w_right_history"]
    S_max = params['S_max']
    N = params['N']
    dt = params.get('dt', 1.0)
    epsilon_states = params.get('epsilon_states', 2)

    fig, axs = plt.subplots(4, 1, figsize=(10, 10), sharex=True)
    cmap = plt.cm.get_cmap('viridis', epsilon_states)
    im = axs[0].imshow(epsilon_history.T, cmap=cmap, aspect='auto', interpolation='none',
                       vmin=0, vmax=epsilon_states - 1)
    axs[0].set_title(f'IO v2.4 Simulation {title_suffix}: ε State Evolution')
    axs[0].set_ylabel('Node Index (i)')
    if epsilon_states > 2:
        plt.colorbar(im, ax=axs[0], label='State (ε)', ticks=range(epsilon_states))

    axs[1].plot(avg_theta_history)
    axs[1].set_title('Average Stability (Θ_val)')
    axs[1].set_ylabel('Avg Θ_val'); axs[1].grid(True)

    axs[2].plot(avg_ptarget_entropy_history)
    axs[2].set_title('Average Potentiality Entropy (H(P_target))')
    axs[2].set_ylabel('Avg Entropy (bits)'); axs[2].set_ylim(bottom=-0.05); axs[2].grid(True)

    axs[3].plot(avg_w_left_history, label='Avg W(left -> self)')
    axs[3].plot(avg_w_right_history, label='Avg W(right -> self)')
    axs[3].set_title('Average Causal Weights (w)')
    axs[3].set_xlabel('Sequence Step (S)'); axs[3].set_ylabel('Avg Weight'); axs[3].legend(); axs[3].grid(True)

    plt.tight_layout()
    buf = io.BytesIO()
    plt.savefig(buf, format='png'); buf.seek(0)
    plot_base64 = base64.b64encode(buf.read()).decode('utf-8')
    buf.close(); plt.close(fig)
    # Add plot data to the results dictionary before returning
    results['plot_b64'] = plot_base64
    return results

# --- Define Parameter Sets for Batch ---
base_params = {
    'N': 200, 'S_max': 1000,
    'h': 0.5, 'alpha': 0.1, 'gamma': 1.0, 'p_M': 0.25,
    'delta_theta_inc': 0.05, 'theta_max': 5.0, 'theta_base': 0.0,
    'lambda_base_adapt': 0.05, 'beta_adapt': 0.1, 'p_min': 1e-9,
    'w_init': 1.0, 'w_max': 10.0,
    'seed': 42, 'epsilon_states': 2
}
param_sets = {
    "Run 10 (Recap)": {'delta_w_base': 0.01, 'decay_rate': 0.001},
    "Run 16 (High Learn)": {'delta_w_base': 0.05, 'decay_rate': 0.001},
    "Run 17 (Low Decay)": {'delta_w_base': 0.01, 'decay_rate': 0.0001},
    "Run 18 (High Learn, Low Decay)": {'delta_w_base': 0.05, 'decay_rate': 0.0001},
}

# --- Run Batch ---
batch_results_list = []
print("Starting simulation batch...")
for run_name, run_params_update in param_sets.items():
    print(f"Running {run_name}...")
    current_params = base_params.copy()
    current_params.update(run_params_update)
    # Run the simulation for this parameter set
    results = run_io_simulation_v2_4(current_params)
    # Generate the plot AFTER the run: plot_results_v2_4 adds 'plot_b64' to the
    # results dict (the earlier aborted drafts skipped this call, hence the KeyError)
    results = plot_results_v2_4(results, title_suffix=f"({run_name})")
    # Store key results (including plot data) for the analysis node
    batch_results_list.append({
        "run_name": run_name,
        "delta_w_base": current_params['delta_w_base'],
        "decay_rate": current_params['decay_rate'],
        "final_avg_theta": results['avg_theta_history'][-1],
        "final_avg_ptarget_entropy": results['avg_ptarget_entropy_history'][-1],
        "final_avg_w_left": results['avg_w_left_history'][-1],
        "final_avg_w_right": results['avg_w_right_history'][-1],
        "plot_b64": results['plot_b64']
    })
print("Simulation batch complete.")

# --- Display Summary Table ---
summary_data = []
for res in batch_results_list:
    summary_data.append({
        "Run": res['run_name'],
        "Δw_base": res['delta_w_base'],
        "Decay": res['decay_rate'],
        "Avg Θ": f"{res['final_avg_theta']:.4f}",
        "Avg H(P)": f"{res['final_avg_ptarget_entropy']:.4f}",
        "Avg W_L": f"{res['final_avg_w_left']:.4f}",
        "Avg W_R": f"{res['final_avg_w_right']:.4f}"
    })
summary_df = pd.DataFrame(summary_data)
print("\n--- Simulation Batch Summary ---")
# Print the dataframe as a markdown table for better display
print(summary_df.to_markdown(index=False))

# Store detailed results for analysis in node 0138
batch_results_storage_0137 = batch_results_list
print("\nDetailed results stored in global variable 'batch_results_storage_0137' for analysis in node 0138.")
```
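For the record, the two aborted executions differed from the listing above only in the batch loop: they appended `results['plot_b64']` without ever calling `plot_results_v2_4`, which is the only place that key is created. A minimal sketch of the failing versus corrected access pattern, reusing the functions and dictionaries defined in the listing above (single run only, for illustration):

```python
# Illustration of the fix; reuses base_params, param_sets, run_io_simulation_v2_4
# and plot_results_v2_4 from the listing above.
current_params = {**base_params, **param_sets["Run 10 (Recap)"]}

results = run_io_simulation_v2_4(current_params)
# Failing pattern (earlier drafts): 'plot_b64' does not exist yet at this point,
# so the lookup raised KeyError: 'plot_b64'.
# plot_b64 = results['plot_b64']

# Corrected pattern: render the figure first; plot_results_v2_4 adds 'plot_b64'
# to the results dict and returns it.
results = plot_results_v2_4(results, title_suffix="(Run 10 (Recap))")
plot_b64 = results['plot_b64']
```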
Execution output:

```
Starting simulation batch...
Running Run 10 (Recap)...
Running Run 16 (High Learn)...
Running Run 17 (Low Decay)...
Running Run 18 (High Learn, Low Decay)...
Simulation batch complete.

--- Simulation Batch Summary ---
| Run                            |   Δw_base |   Decay |   Avg Θ |   Avg H(P) |   Avg W_L |   Avg W_R |
|:-------------------------------|----------:|--------:|--------:|-----------:|----------:|----------:|
| Run 10 (Recap)                 |      0.01 |  0.001  |       5 |          0 |       9.1 |       9.1 |
| Run 16 (High Learn)            |      0.05 |  0.001  |       5 |          0 |       8.7 |       8.7 |
| Run 17 (Low Decay)             |      0.01 |  0.0001 |       5 |          0 |       9.1 |       9.1 |
| Run 18 (High Learn, Low Decay) |      0.05 |  0.0001 |       5 |          0 |       8.7 |       8.7 |

Detailed results stored in global variable 'batch_results_storage_0137' for analysis in node 0138.
```

## 4. Actual Simulation Results (Batch 1 Summary)

The batch defined above was executed with the code from [[releases/archive/Information Ontology 1/0136_IO_Simulation_Code_v2.4_Consolidated]] and completed successfully. The summary statistics for the final state (S = 1000) of each run are:

| Run | Δw_base | Decay | Avg Θ | Avg H(P) | Avg W_L | Avg W_R |
|:---|---:|---:|---:|---:|---:|---:|
| Run 10 (Recap) | 0.01 | 0.001 | 5.0000 | 0.0000 | 9.1000 | 9.1000 |
| Run 16 (High Learn) | 0.05 | 0.001 | 5.0000 | 0.0000 | 8.7000 | 8.7000 |
| Run 17 (Low Decay) | 0.01 | 0.0001 | 5.0000 | 0.0000 | 9.1000 | 9.1000 |
| Run 18 (High Learn, Low Decay) | 0.05 | 0.0001 | 5.0000 | 0.0000 | 8.7000 | 8.7000 |

Detailed results, including plot data for each run, are stored in the internal variable `batch_results_storage_0137` for analysis in the next node.

## 5. Preliminary Observations

* **Freezing Persists:** All runs in this batch quickly reached the maximum average Theta (`Θ_max = 5.0`) and minimum potentiality entropy (`H(P_target) ≈ 0`), indicating that they all froze into static configurations, much like the initial baseline runs ([[releases/archive/Information Ontology 1/0107_IO_Simulation_v2_1D_Implementation]]).
* **CA Weights Stabilize:** The average causal weights (`Avg W_L`, `Avg W_R`) settled well above the initial `w_init = 1.0`, so the dynamic CA mechanism *is* active and reinforcing links. However, the final weights are slightly *lower* for the higher learning rate (`delta_w_base = 0.05`), which is counter-intuitive and warrants closer examination in the analysis phase. The lower decay rate (Run 17 vs. Run 10) had no noticeable effect on the final average weight in this frozen state.
* **Dynamic CA Doesn't Prevent Freezing (in this regime):** Crucially, even with active, adapting causal weights, the system still freezes under these parameter conditions (high H drive, low Θ sensitivity `alpha`, low M bias `p_M`). The freezing appears to be driven primarily by the `P_target` dynamics, which quickly eliminate potentiality once any local stability is achieved.
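As a rough consistency check on the weight values (not part of the batch script above), consider the weight update in a fully frozen state, where Θ sits at `theta_max` and hence `f_theta_corr = 2`. Under that assumption, a link whose neighbour state matches (`corr = +1`) is driven toward the unclipped fixed point `w* = 2·delta_w_base / decay_rate`, while a mismatching link (`corr = -1`) is driven to zero:

```python
# Back-of-envelope check, assuming a fully frozen state (theta == theta_max,
# so f_theta_corr == 2). Not part of the batch script above.
w_max = 10.0
for delta_w_base, decay_rate in [(0.01, 0.001), (0.05, 0.001), (0.01, 0.0001), (0.05, 0.0001)]:
    w_star = 2 * delta_w_base / decay_rate  # unclipped fixed point for matching links
    print(delta_w_base, decay_rate, w_star, min(w_star, w_max))
# -> w* = 20, 100, 200, 1000: all exceed w_max, so matching links saturate at
#    w_max = 10 in every run, regardless of decay_rate.

# If mismatching links (frozen domain walls) sit near 0 and matching links at
# w_max, the final average weight is roughly w_max * (fraction of matching links):
print(9.1 / w_max, 8.7 / w_max)  # ~0.91 and ~0.87
```

On this reading, the lower decay rate has no visible effect because the fixed point is clipped at `w_max` in all four runs, and the 9.1 vs. 8.7 difference would reflect the number of frozen domain walls in each final pattern rather than the learning rate directly. This is an interpretation to be confirmed against the plots in node 0138, not an established result.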
## 6. Next Steps

The next node, [[releases/archive/Information Ontology 1/0138_IO_Simulation_Batch1_Analysis]], will perform a detailed analysis of these results, examining the plots and the evolution of the metrics over time to understand *why* the system freezes and why the CA weights behave as observed. That analysis will inform the decision on whether this formalism branch requires further parameter tuning or whether a more fundamental pivot is needed under OMF Rule 5 [[releases/archive/Information Ontology 1/0121_IO_Fail_Fast_Directive]].
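For the hand-off to node 0138, a minimal sketch of how the stored batch results could be retrieved and the embedded plots decoded for inspection. The variable `batch_results_storage_0137` is the one set by the batch script above; writing the PNGs to files named `run_<name>.png` is an illustrative choice, not something node 0137 itself does:

```python
import base64
import re

# Assumes the batch script above has already run in the same session, so
# batch_results_storage_0137 is available.
for res in batch_results_storage_0137:
    safe_name = re.sub(r'\W+', '_', res["run_name"]).strip('_')
    # Decode the base64-encoded PNG produced by plot_results_v2_4 and save it.
    with open(f"run_{safe_name}.png", "wb") as fh:
        fh.write(base64.b64decode(res["plot_b64"]))
    print(res["run_name"], res["final_avg_w_left"], res["final_avg_w_right"])
```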