# Formalism Refinement: Incorporating Dynamic Causality (CA) Weights
## 1. Objective: Enabling Network Adaptation
Simulations up to Run 8 [[releases/archive/Information Ontology 1/0117_IO_Simulation_Run8]] used a fixed causal network structure (nearest neighbors with weight 1). A key aspect of the Information Dynamics (IO) framework is the co-evolution of states and the causal pathways connecting them, allowing for learning and adaptation [[releases/archive/Information Ontology 1/0064_IO_Learning_Adaptation]]. This requires implementing **dynamic Causality (CA)**, where the strength of causal links `w(j → i, S)` evolves over Sequence (S), primarily driven by Theta (Θ) reinforcement [[releases/archive/Information Ontology 1/0097_IO_Formal_Causality]], [[releases/archive/Information Ontology 1/0069_IO_Theta_Mechanisms]]. This node specifies the necessary additions to the v2.2 formalism [[releases/archive/Information Ontology 1/0116_IO_Simulation_v2.2_Code]] to incorporate these dynamic weights.
## 2. Representing the Dynamic Causal Network
* **Graph:** `G = (V, E, W(S))` where `W(S)` is the set of edge weights at step `S`.
* **Edges `E`:** We can start with a fixed topology (e.g., all nodes connected to immediate neighbors in 1D/2D) but allow the *weights* `w` on these edges to change. Alternatively, edges could be added/removed if weights cross thresholds (more complex). Let's start with fixed topology, dynamic weights.
* **Weights `w(j → i, S)`:** Non-negative real numbers representing causal strength. Initialize typically to a small positive value or 1.0.
## 3. Mechanism for Θ Reinforcement of CA Weights
The core idea is that a causal link `j → i` is strengthened if activity at `j` at step `S` is correlated with, or predictive of, the state or change at `i` at step `S+1`, especially if the resulting state at `i` is stable.
**Proposed Update Rule for `w(j → i, S+1)`:** (Applied in Phase 4 of the algorithm [[releases/archive/Information Ontology 1/0104_IO_Formalism_v2_Summary]])
1. **Identify "Successful" Influence:** Determine if node `j`'s state `ε(j, S)` contributed positively to the outcome `ε(i, S+1)`. This requires defining a correlation or success metric.
    * *Simple Correlation Metric:* `Corr(j, i, S) = 1` if `ε(j, S) == ε(i, S+1)`, `-1` if `ε(j, S) ≠ ε(i, S+1)`. (This rewards predicting the *next* state).
    * *Stability-Weighted Correlation:* `Corr(j, i, S) = [1 if ε(j, S) == ε(i, S+1) else -1] * f_Θ_corr(Θ_val(i, S+1))`, where `f_Θ_corr` increases with the stability of the *resulting* state at `i`. (This rewards contributing to *stable* outcomes.) Let's use this more sophisticated version; `f_Θ_corr` could be, e.g., `(1 + Θ_val(i, S+1) / theta_max)`.
2. **Calculate Weight Change `Δw`:**
* `Δw = Δw_base * Corr(j, i, S)`
* `Δw_base` is the base learning rate for causal weights (a new parameter). It might be constant or depend on other factors (e.g., higher for stronger existing weights?). Let's start with constant `Δw_base`.
3. **Apply Decay:** Introduce a passive decay factor `(1 - decay_rate)` to all weights each step to allow unused connections to weaken.
* `w_decayed = w(j → i, S) * (1 - decay_rate)`
4. **Update Weight:**
* `w(j → i, S+1) = max(0, w_decayed + Δw)` (Ensure weights remain non-negative).
* Optionally, impose a maximum weight `w_max`.
**Parameters Introduced:**
* `Δw_base`: Base learning rate for causal weights.
* `decay_rate`: Passive decay rate for unused weights.
* `w_max` (Optional): Maximum causal strength.
* Function `f_Θ_corr`: How resulting stability influences reinforcement.
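Steps 1–4 and the parameters above can be sketched as a per-edge update function. This is an illustrative sketch, not the v2.2 implementation: the parameter defaults are arbitrary placeholder values, and `f_theta_corr` uses the example form `(1 + Θ_val / theta_max)` suggested above.

```python
def f_theta_corr(theta_val, theta_max):
    """Stability factor: reinforcement grows with the resulting state's Θ."""
    return 1.0 + theta_val / theta_max

def update_weight(w, eps_j_S, eps_i_S1, theta_val_i_S1,
                  dw_base=0.01, decay_rate=0.001,
                  theta_max=10.0, w_max=None):
    """Θ-reinforcement update for one edge j -> i (placeholder parameters)."""
    # Step 1: stability-weighted correlation Corr(j, i, S)
    sign = 1.0 if eps_j_S == eps_i_S1 else -1.0
    corr = sign * f_theta_corr(theta_val_i_S1, theta_max)
    # Step 2: weight change Δw = Δw_base * Corr(j, i, S)
    dw = dw_base * corr
    # Step 3: passive decay of the existing weight
    w_decayed = w * (1.0 - decay_rate)
    # Step 4: update, clamped to remain non-negative (and below w_max if set)
    w_new = max(0.0, w_decayed + dw)
    if w_max is not None:
        w_new = min(w_new, w_max)
    return w_new
```

Note that with this form, anti-correlated edges feeding highly stable outcomes are also *penalized* more strongly, since `f_Θ_corr` scales the magnitude of `Corr` in both directions.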
## 4. Integrating Dynamic CA into the Transition Rule
The main change to the transition algorithm [[releases/archive/Information Ontology 1/0104_IO_Formalism_v2_Summary]], [[releases/archive/Information Ontology 1/0116_IO_Simulation_v2.2_Code]] is in **Phase 1 (Calculate Influences)** and adding **Phase 4 (Update Causal Network)**.
* **Phase 1 Modification:** The calculation of `Influence_k(i, S)` and `Total_Causal_Weight(i, S)` now uses the *current dynamic weights* `w(j → i, S)` instead of assuming `w=1`.
* `Influence_k(i, S) = Σ_{j | (j → i) ∈ E, ε(j,S)=k} w(j → i, S)`
* `Total_Causal_Weight(i, S) = Σ_k Influence_k(i, S)`
* **Phase 4 Addition:** After all nodes have determined their `ε(i, S+1)` and `Θ_val(i, S+1)`, iterate through all edges `(j → i) ∈ E` and update `w(j → i, S+1)` using the Θ reinforcement rule defined above.
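The two algorithmic changes can be sketched together. Everything here is illustrative rather than drawn from the v2.2 code: `states`, `theta_S1`, and the dict-of-edges weight storage are assumed representations, and the Phase 4 loop inlines the reinforcement rule of Section 3 with placeholder parameter values.

```python
def phase1_influences(i, states, weights):
    """Phase 1 (modified): per-state influence on node i using dynamic weights.

    Returns (Influence_k dict, Total_Causal_Weight) for node i.
    """
    influence = {}
    for (j, tgt), w in weights.items():
        if tgt == i:
            k = states[j]  # ε(j, S)
            influence[k] = influence.get(k, 0.0) + w
    total = sum(influence.values())
    return influence, total

def phase4_update_weights(weights, states_S, states_S1, theta_S1,
                          dw_base=0.01, decay_rate=0.001, theta_max=10.0):
    """Phase 4 (new): Θ reinforcement applied to every edge after all
    ε(i, S+1) and Θ_val(i, S+1) have been determined."""
    new_w = {}
    for (j, i), w in weights.items():
        sign = 1.0 if states_S[j] == states_S1[i] else -1.0
        corr = sign * (1.0 + theta_S1[i] / theta_max)  # stability-weighted
        new_w[(j, i)] = max(0.0, w * (1.0 - decay_rate) + dw_base * corr)
    return new_w
```

Keeping Phase 4 as a separate pass over the edges (after all nodes have committed their `S+1` states) preserves the synchronous character of the update: weight changes at step `S` never influence state transitions within the same step.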
## 5. Expected Effects of Dynamic CA
* **Adaptive Information Flow:** Causal pathways that consistently lead to stable or predictable outcomes will be strengthened, channeling information flow more effectively.
* **Emergent Network Structures:** Specialized pathways, feedback loops, and potentially modular structures might emerge within the causal network.
* **Learning and Memory:** Changes in `w` represent a form of network memory, encoding learned causal associations [[releases/archive/Information Ontology 1/0059_IO_Memory]].
* **Path Dependence:** The system's evolution will become strongly dependent on the history encoded in the causal weights.
## 6. Challenges
* **Parameter Tuning:** Requires tuning `Δw_base`, `decay_rate`, and `f_Θ_corr`.
* **Stability:** Dynamic weights can introduce new instabilities or runaway feedback if not carefully managed (e.g., need for `w_max` or normalization).
* **Computational Cost:** Storing and updating weights for all edges adds computational overhead, especially for densely connected networks.
## 7. Conclusion: Enabling Network Learning
Incorporating dynamic causal weights `w(j → i, S)`, updated via Theta (Θ) reinforcement based on predictive success and outcome stability, is a crucial step towards realizing the adaptive potential of the IO framework. This mechanism allows the network structure itself to learn and evolve, encoding historical information and shaping future dynamics. It moves the formalism beyond fixed interactions towards a system capable of genuine adaptation and potentially more complex forms of emergence. The next step is to implement this dynamic CA mechanism within the v2.2 simulation code.