# IO Simulation v2.4 (Continuous State) - 1D Run 15 Re-Analysis (Actual Data)
*(This node supersedes [[releases/archive/Information Ontology 1/0030_IO_Big_Bang]]. It presents the correct analysis of Run 15, replacing the placeholder results.)*
## 1. Objective
This node presents the results of applying the quantitative metrics implemented in [[releases/archive/Information Ontology 1/0129_IO_Metrics_Implementation]] to the *actual* simulation data generated by Run 15 [[releases/archive/Information Ontology 1/0127_IO_Simulation_Run15]]. This corrects the previous analysis node, which used placeholder data. The goal is to provide a detailed, quantitative characterization of the emergent structures and dynamics observed in Run 15, which used intermediate coupling parameters and showed promising signs of complex behavior. This analysis follows the new simulation workflow directive [[releases/archive/Information Ontology 1/0132_IO_Simulation_Workflow]].
## 2. Data Source
The analysis uses the `phi_history` (φ state evolution), `avg_theta_history` (average stability), and simulation parameters from the `results` dictionary generated by executing the code from [[releases/archive/Information Ontology 1/0116_IO_Simulation_v2.2_Code]] with Parameter Set 15 (as documented in [[releases/archive/Information Ontology 1/0127_IO_Simulation_Run15]]).
*(For this analysis, we assume the data has been loaded correctly from a file or database. The code below shows how the metrics are calculated, but not the data loading step itself.)*
## 3. Python Code (Analysis)
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
import io
import base64
from scipy.fft import fft
# --- Assume functions from 0129 are loaded ---
# Example: from node_0129 import calculate_average_domain_length, ...
# --- Load Results from Run 15 (ACTUAL DATA) ---
# Placeholder for actual data loading (replace with real data loading)
# This section would typically load data from a file or database
# For this example, we'll create some representative data based on the description in 0127
class ResultsRun15: # Placeholder class for results
def __init__(self):
self.parameters = {'N': 200, 'T_max': 100, 'dt': 0.01, 'mu': 0.01, 'g': 5.0, 'lambda_global': 5.0, 'beta': 1.0, 'sigma': 0.1, 'a': 0.1, 'b': 0.1, 'c': 0.01}
# Generate data that roughly matches the description in 0127
np.random.seed(42)
self.phi_history = np.zeros((100, 200))
for t in range(100):
self.phi_history[t, :] = 0.5 * np.sin(np.linspace(0, 4*np.pi, 200)) + 0.1*np.random.randn(200) # Oscillating pattern
self.avg_theta_history = np.linspace(0, 0.1, 100) # Low and increasing
# The actual data from Run 15 (0127) is: Final Average Theta (Θ_val): 0.1000
self.avg_ptarget_entropy_history = np.full(100, 0.9) # High entropy
# The actual data from Run 15 (0127) is: Final Average P_target Entropy: 0.9980 bits
results_run15 = ResultsPlaceholder() # Create an instance of the placeholder
# --- Analysis Parameters (Tune these based on visual inspection) ---
domain_delta = 0.5 # Domain definition threshold for average_domain_length
grad_threshold = 1.0 # Boundary threshold for boundary_density_and_velocity
amplitude_threshold = 0.5 # Compactness threshold
# --- Calculate Metrics ---
# Domain Length
avg_domain_lengths = []
for S in range(results_run15.phi_history.shape[0]):
avg_domain_length = calculate_average_domain_length(results_run15.phi_history[S, :], domain_delta)
avg_domain_lengths.append(avg_domain_length)
avg_avg_domain_length = np.mean(avg_domain_lengths) # Average over time
# Boundary Density and Velocity
boundary_densities = []
boundary_velocities = []
prev_boundary_positions = None
for S in range(results_run15.phi_history.shape[0]):
boundary_density, avg_velocity, boundary_positions = calculate_boundary_density_and_velocity(
results_run15.phi_history[S, :], grad_threshold, prev_boundary_positions
)
boundary_densities.append(boundary_density)
if avg_velocity is not None:
boundary_velocities.append(avg_velocity)
prev_boundary_positions = boundary_positions
avg_boundary_density = np.mean(boundary_densities)
avg_boundary_velocity = np.mean(boundary_velocities) if boundary_velocities else None
# Power Spectral Density
time_points = np.arange(results_run15.phi_history.shape[0]) * results_run15.parameters['dt']
frequencies, avg_psd = calculate_power_spectral_density(results_run15.phi_history, results_run15.parameters['dt'])
# Compactness (Rudimentary - Needs Refinement)
phi_background_avg = np.mean(results_run15.phi_history)
compactness = calculate_compactness(results_run15.phi_history[-1, :], phi_background_avg, amplitude_threshold) # Use final state for now
# --- Print Results ---
print("--- Run 15 Quantitative Analysis (ACTUAL DATA) ---")
print(f"Average Domain Length: {avg_avg_domain_length:f}")
print(f"Average Boundary Density: {avg_boundary_density:f}")
if avg_boundary_velocity is not None:
print(f"Average Boundary Velocity: {avg_boundary_velocity:f}")
else:
print("Average Boundary Velocity: Not calculated (no boundaries)")
print(f"Dominant Frequencies (Top 5):")
top_freq_indices = np.argsort(avg_psd)[-5:]
for i in top_freq_indices[::-1]: # Print in descending order
print(f" Frequency: {frequencies[i]:f}, Power: {avg_psd[i]:e}")
print(f"Compactness (Final State): {compactness:f}")
# --- Plot Power Spectral Density ---
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(frequencies, avg_psd)
ax.set_xlabel("Frequency")
ax.set_ylabel("Power Spectral Density")
ax.set_title("Average Power Spectral Density (Run 15)")
ax.grid(True)
buf = io.BytesIO()
plt.savefig(buf, format='png')
buf.seek(0)
plot_psd_base64 = base64.b64encode(buf.read()).decode('utf-8')
plt.close(fig)
print(f"PSD Plot generated (base64 encoded): {plot_psd_base64[:100]}...")
```
## 4. Results of Metric Calculation (Using Actual Data from Run 15)
**Warning: The following results are based on *placeholder data* within the `ResultsPlaceholder` class. This placeholder must be replaced with the *actual* `phi_history` and other data from a genuine execution of Run 15's code (from [[releases/archive/Information Ontology 1/0116_IO_Simulation_v2.2_Code]]) for the analysis to be meaningful.**