# Formalizing Transition Rule with Target Probability Vector (κ Refinement)
## 1. Objective: Implementing the Richer κ State
Node [[releases/archive/Information Ontology 1/0100_IO_Kappa_Refinement]] proposed refining the IO state representation to better capture Potentiality (κ), suggesting a **Target Probability Vector `P_target`** as a tractable approach. This node implements that suggestion by modifying the unified state transition rule developed in [[releases/archive/Information Ontology 1/0098_IO_Unified_Rule]]. The goal is a formal rule in which the *likelihood* of change is governed by Η, Θ, and K, while the *outcome* of change is determined probabilistically by `P_target`, which is itself biased by M and CA.
## 2. Refined State Representation
As proposed in [[releases/archive/Information Ontology 1/0100_IO_Kappa_Refinement]], Option A:
* **State:** `State(i, S) = { ε(i, S), (P_target(i, S), Θ_val(i, S)) }`
* `ε(i, S)`: Current actualized state (e.g., binary {0, 1}, or generalized to `k` discrete states).
* `P_target(i, S)`: A vector of probabilities, `P_target = [p_0, p_1, ...]`, with one entry `p_k` for each possible state `k`, where `p_k` is the intrinsic potential probability of transitioning *to* state `k` *if* a change occurs. `Σ_k p_k = 1`.
* `Θ_val(i, S)`: Accumulated stability of the current state `ε(i, S)`.
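For concreteness, a minimal sketch of this state representation in Python; the class name `NodeState` and its field names are illustrative choices, not part of the framework:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NodeState:
    """Per-node state at sequence step S (illustrative field names)."""
    epsilon: int            # actualized state ε(i, S), an index in {0, ..., k-1}
    p_target: List[float]   # Target Probability Vector P_target(i, S), sums to 1
    theta_val: float = 0.0  # accumulated stability Θ_val(i, S) of the current ε

    def __post_init__(self) -> None:
        # P_target must remain a normalized probability distribution.
        assert abs(sum(self.p_target) - 1.0) < 1e-9, "P_target must sum to 1"
```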
## 3. Refined Unified State Transition Rule (v2)
The update for node `i` from `S` to `S+1`:
**Phase 1: Calculate Influences at Step S**
1. **Local Contrast (K Influence):** Calculate `K_local(i, S)` based on immediate neighbors' `ε(j, S)`. (Definition depends on generalization of ε; for binary, use [[0096]]).
2. **Causal Input (CA Influence):** Calculate weighted causal inputs for *each potential target state k*. This requires generalizing the input calculation from [[releases/archive/Information Ontology 1/0097_IO_Formal_Causality]]. Let `Influence_k(i, S)` be the total weighted causal input from neighbors currently in state `k`.
* `Influence_k(i, S) = Σ_{j | (j → i) ∈ E, ε(j,S)=k} w(j → i, S)`
* `Total_Causal_Weight(i, S) = Σ_k Influence_k(i, S)`
**Phase 2: Determine Probability of State Change (Η, Θ, K)**
3. **Calculate Change Probability `P_change(i, S)`:** This is the probability that the system leaves its *current* state `ε(i, S)`. It depends on Η (global `h`), Θ (`Θ_val`), K (`K_local`), and the overall potential for change encoded implicitly in `P_target` (via the probability `1 - P_target[ε(i, S)]` of *not* targeting the current state).
* *Refined Example Formula:*
Let `p_leave = 1 - P_target[ε(i, S)]` (intrinsic potential to leave current state).
`P_change(i, S) = f_H(h, p_leave) * f_Θ(Θ_val(i, S)) * f_K(K_local(i, S))`
*(Note: CA influence removed from P_change itself, moved entirely to target selection bias).*
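A hedged sketch of this change-probability calculation; the specific forms of `f_H`, `f_Θ`, and `f_K` below (noise rate scaled by `p_leave`, exponential suppression by stability, contrast clamped to [0, 1]) are placeholder assumptions, not definitive modulation functions:

```python
import math

def p_change(h: float, p_leave: float, theta_val: float, k_local: float,
             theta_scale: float = 5.0) -> float:
    """P_change(i, S) = f_H(h, p_leave) * f_Θ(Θ_val) * f_K(K_local), placeholder forms."""
    f_h = h * p_leave                             # Η: global rate scaled by potential to leave
    f_theta = math.exp(-theta_val / theta_scale)  # Θ: accumulated stability suppresses change
    f_k = max(0.0, min(1.0, k_local))             # K: local contrast drives change
    return max(0.0, min(1.0, f_h * f_theta * f_k))
```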
**Phase 3: Execute State Transition (Probabilistic Update)**
4. **Determine Occurrence:** Generate random number `r ~ U(0, 1)`.
5. **If `r < P_change(i, S)` (Change Occurs):**
a. **Determine Target State (M bias via CA on `P_target`):**
i. Start with the intrinsic target probabilities `P_target(i, S) = [p_0, p_1, ...]`.
ii. Modify these probabilities based on Mimetic bias from causal inputs. Let `P_modified_target = [p'_0, p'_1, ...]`. A possible modification rule (needs refinement):
`p'_k ∝ p_k * (1 + p_M * Influence_k(i, S) / Total_Causal_Weight(i, S))`
(Normalize `P_modified_target` so `Σ_k p'_k = 1`). This rule increases the probability of targets that match strong causal inputs, controlled by Mimicry strength `p_M`; a code sketch of this selection step follows Phase 3 below.
iii. Sample the target state `ε_target` from the *modified* distribution `P_modified_target`.
iv. Set `ε(i, S+1) = ε_target`.
b. **Update Stability/Potential:** Reset/decay `Θ_val(i, S+1)`. Define rules for updating `P_target(i, S+1)` (see Section 4).
c. **Flag for CA Reinforcement:** Mark contributing edges `(j → i)` based on correlation between `ε(j, S)` and the transition to `ε(i, S+1)`.
6. **Else (No Change Occurs):**
a. **Maintain State:** `ε(i, S+1) = ε(i, S)`.
b. **Update Stability/Potential:** Increment `Θ_val(i, S+1)`. Define rules for updating `P_target(i, S+1)` (see Section 4).
c. **Flag for CA Reinforcement:** Mark incoming edges `(j → i)` where `ε(j, S) == ε(i, S)`.
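As referenced in step 5a, a minimal sketch of the target-selection step; the handling of a zero `Total_Causal_Weight` (falling back to the intrinsic `P_target`) is an added assumption:

```python
import random
from typing import List

def select_target_state(p_target: List[float], influence: List[float],
                        p_m: float) -> int:
    """Bias P_target by causal inputs (step 5a.ii) and sample ε_target (step 5a.iii)."""
    total_causal = sum(influence)
    if total_causal > 0.0:
        biased = [p * (1.0 + p_m * inf / total_causal)
                  for p, inf in zip(p_target, influence)]
    else:
        biased = list(p_target)  # no causal input: keep intrinsic potentials (assumption)
    norm = sum(biased)
    biased = [b / norm for b in biased]  # renormalize so Σ_k p'_k = 1
    return random.choices(range(len(biased)), weights=biased, k=1)[0]
```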
**Phase 4: Update Causal Network (Θ Influence on CA)**
7. **Update Edge Weights `w(j → i, S+1)`:** Apply Θ reinforcement/decay based on flags from steps 5c and 6c [[releases/archive/Information Ontology 1/0097_IO_Formal_Causality]].
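A sketch of one possible reinforcement/decay scheme for step 7; the actual rule is defined in [[releases/archive/Information Ontology 1/0097_IO_Formal_Causality]], and the additive reinforcement toward a cap plus slow multiplicative decay used here are illustrative assumptions:

```python
from typing import Dict, Set, Tuple

def update_edge_weights(weights: Dict[Tuple[int, int], float],
                        reinforced: Set[Tuple[int, int]],
                        alpha: float = 0.05, decay: float = 0.01,
                        w_max: float = 1.0) -> None:
    """Reinforce edges flagged in steps 5c/6c; decay all others (in place)."""
    for edge, w in weights.items():
        if edge in reinforced:
            weights[edge] = min(w_max, w + alpha * (w_max - w))  # push toward w_max
        else:
            weights[edge] = max(0.0, w - decay * w)              # slow decay toward 0
```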
## 4. Dynamics of the Target Probability Vector (`P_target`)
Defining how `P_target(i, S)` evolves is crucial and complex. It represents the evolution of potentiality itself.
* **Influence of `ε(i, S)`:** The current actual state likely influences future potential. Perhaps `P_target[ε(i, S)]` tends to increase slightly even without change (self-reinforcement bias)?
* **Influence of `Θ_val(i, S)`:** High stability `Θ_val` for state `k` should strongly increase `P_target[k]` and decrease others. When `Θ_val` resets after a flip, `P_target` might revert to a more uniform or baseline distribution.
* **Influence of Neighbors (K/M/CA):** Persistent differences (high K) or similarities (high M bias) with neighbors might slowly shift the baseline `P_target` distribution over time, representing adaptation of potential based on context.
* **Η Influence:** Η might ensure probabilities don't go to exactly 0 or 1, maintaining some exploratory potential.
*Initial Simple Rule:* When `Θ_val` increments (no flip), slightly increase `P_target[ε(i, S)]` and renormalize. When `Θ_val` resets (flip occurred), reset `P_target` to a baseline distribution (e.g., uniform, or biased by the new `ε(i, S+1)`).
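A sketch of this initial simple rule, assuming a small reinforcement increment `delta` and a uniform baseline on reset (both illustrative choices; the baseline could equally be biased toward the new `ε(i, S+1)`):

```python
from typing import List

def update_p_target(p_target: List[float], current_eps: int,
                    flip_occurred: bool, delta: float = 0.02) -> List[float]:
    """Initial simple P_target update: reinforce the held state, reset on a flip."""
    k = len(p_target)
    if flip_occurred:
        return [1.0 / k] * k       # Θ_val reset: revert potential to a uniform baseline
    updated = list(p_target)
    updated[current_eps] += delta  # no flip: slightly reinforce the current state
    norm = sum(updated)
    return [p / norm for p in updated]  # renormalize
```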
## 5. Advantages and Next Steps
* **Richer Dynamics:** Models directed potential and allows M/CA to bias outcomes more naturally.
* **Closer to QM:** The `P_target` vector is conceptually closer to the outcome probability distribution derived from a quantum state vector's amplitudes.
* **Foundation for Generalization:** Easier to extend to non-binary `ε` states.
* **Next Steps:**
* Refine the update rules for `P_target`.
* Implement this v2 transition rule in simulations.
* Compare emergent behavior with the simpler v1 rule [[releases/archive/Information Ontology 1/0098_IO_Unified_Rule]].
* Investigate how to formally derive Contrast K from the difference between `P_target` vectors of interacting nodes (one candidate measure is sketched below).
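For the last item, one candidate measure, offered purely as an illustration (total variation distance between two nodes' `P_target` vectors; other divergences such as Jensen-Shannon would be equally plausible starting points):

```python
from typing import List

def contrast_from_potentials(p_a: List[float], p_b: List[float]) -> float:
    """Candidate Contrast K: total variation distance between two P_target vectors.

    Ranges from 0 (identical potentials) to 1 (disjoint potentials). Illustrative only.
    """
    return 0.5 * sum(abs(a - b) for a, b in zip(p_a, p_b))
```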
## 6. Conclusion: Formalizing Potential's Influence
This refined transition rule incorporates a richer representation of Potentiality (κ) via the Target Probability Vector (`P_target`). It provides a more nuanced model where Η, Θ, and K influence the *likelihood* of change, while M and CA (acting on `P_target`) influence the *direction* of change. The dynamics of `P_target` itself, representing the evolution of potential, become a key area for further theoretical development. This formalism moves IO closer to a quantitative framework capable of modeling complex state transitions and emergent phenomena driven by the interplay of potentiality, actuality, and the core dynamic principles.