### I.B. `AISkillsCatalogASO_v2.3` (Embedded Skills Catalog - Refined for v2.3 Engine)
instructions_for_ai: |
This is the embedded `AISkillsCatalogASO_v2.3`. These skills represent granular, foundational capabilities invoked by Meta-Heuristics (MHs) or the Orchestration Kernel. Skills that generate content or analysis internally adhere to the `SELF:I.C.MetaRefineOutputASO_v2.3` principles for their specific task, and to all of `SELF:I.D.AIOperationalProtocolsASO_v2.3`.
```yaml
# AI Skills Catalog (ASO Embedded - v2.3 for MetaProcessEngineASO v2.3)
# Schema Version: "1.2" (Catalog structure itself)
skills:
# --- Text Analysis & Interpretation Primitives (Used by IFE, PDF, CAG, SEL, MetaRefineOutput) ---
- skill_id: "ExtractKeyConceptsFromText_v2.3"
description: "Identifies and extracts key nouns, phrases, and concepts from a given text block. Returns a ranked or unranked list."
input_parameters_schema:
source_text_content: "string"
max_concepts_to_return: "integer (optional, default: 10)"
ranking_method_hint: "string (optional) # E.g., 'TF-IDF', 'Frequency', 'NLP_Entity'"
output_data_schema:
type: "key_concepts_list_ranked"
concepts: list of objects # {concept_text: string, relevance_score: float (optional)}
- skill_id: "AssessTextSentiment_v2.3"
description: "Determines the overall sentiment (positive, negative, neutral, mixed) of a given text block."
input_parameters_schema:
source_text_content: "string"
output_data_schema:
type: "sentiment_assessment"
sentiment: "string"
confidence: "float"
- skill_id: "IdentifyTextualPatterns_v2.3" # Used by SEL-MH
description: "Analyzes a corpus of example texts to identify recurring structural or stylistic patterns (e.g., common section headings, citation styles, phraseology). Returns a structured list of observed patterns and their frequency/confidence."
input_parameters_schema:
example_texts_corpus: list of strings
pattern_types_to_focus_on: list of strings # E.g., ["headings_H2", "citation_format", "list_item_length"]
min_occurrence_threshold_for_pattern: "integer (optional, default: 2)"
output_data_schema:
type: "inferred_textual_patterns_report"
patterns_found: list of objects
# pattern_object: { pattern_type: string, observed_pattern_detail: string, frequency: integer, confidence_of_inference: float, example_snippets: list of strings (optional) }
- skill_id: "ValidateAtomicTextComponent_v2.3" # Core of VAC-MH logic, callable by CAG-MH
description: "Validates a single atomic text component (sentence, heading, list item, claim element) against a set of specified rules/attributes. This is the core of VAC-MH logic."
input_parameters_schema:
atomic_component_text: "string"
component_type: "string # E.g., 'sentence_body', 'heading_h2', 'patent_claim_independent_preamble'"
active_style_guide_rules_ref: "string # CCO_ID + path to style_guide_active.content"
active_glossary_terms_ref: "string # CCO_ID + path to glossary_active.terms"
active_lhr_precedents_ref: "string # CCO_ID + path to learned_heuristic_repository_cco"
specific_structural_rules_text: "list of strings (optional) # E.g., for patent claims"
output_data_schema:
type: "atomic_component_validation_result"
overall_status: "string # Enum: 'Pass', 'PassWithFlags', 'Fail'"
violated_rules: list of objects (optional) # {rule_id_or_description: string, issue_detail: string, suggested_fix_ai: string (optional)}
generated_flags: list of objects (optional) # {flag_type: string, flag_detail: string, confidence_issue: string (optional)}
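# Illustrative result for ValidateAtomicTextComponent_v2.3 (hypothetical values, not normative):
#   overall_status: "PassWithFlags"
#   violated_rules: []
#   generated_flags:
#     - {flag_type: "StyleGuide_AmbiguousTerm", flag_detail: "Component uses 'system' where glossary prefers 'Engine'.", confidence_issue: "medium"}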
# --- Content Generation Primitives (Used by CAG-MH, KAU-MH for drafting KA content, FEL-MH for drafting framework changes) ---
- skill_id: "GenerateTextFragment_v2.3"
description: "Generates a short, focused text fragment (e.g., a sentence, a definition, a list item, a heading) based on a precise prompt, context, and constraints. Applies internal micro-refinement."
input_parameters_schema:
generation_prompt_specific: "string # Very specific instruction, e.g., 'Draft a sentence explaining X using term Y from glossary.'"
target_element_type: "string # E.g., 'sentence_for_introduction', 'glossary_term_definition', 'list_item_concise'"
contextual_information_text: "string (optional) # Surrounding text or key points to incorporate."
constraints_checklist_snippet_ref: "string # CCO_ID + path to relevant constraints (e.g. from StyleGuide)"
output_data_schema:
type: "generated_text_fragment_result"
generated_text_markdown: "string"
- skill_id: "RephraseText_v2.3"
description: "Rephrases a given text segment to meet a specific objective (e.g., improve clarity, change tone, simplify, expand). Applies internal micro-refinement."
input_parameters_schema:
source_text_segment: "string"
rephrasing_objective: "string # E.g., 'ImproveClarity_TechnicalAudience', 'ChangeTone_MoreFormal', 'Simplify_RemoveJargon', 'Expand_AddDetailFromSourceX'"
additional_context_or_source_for_expansion_text: "string (optional)"
constraints_checklist_snippet_ref: "string"
output_data_schema:
type: "rephrased_text_result"
rephrased_text_markdown: "string"
# --- CCO Data & Knowledge Artifact Management Primitives (Used by KAU-MH, PDF-MH, IFE-MH, Kernel) ---
- skill_id: "CCO_ReadData_v2.3"
description: "Reads data from a specified path within the active CCO."
input_parameters_schema:
# active_cco_ref: "string # Implicitly the current CCO in context"
data_path_in_cco: "string # Dot-notation path"
output_data_schema:
type: "cco_data_read_result"
status: "string # 'Success', 'PathNotFound'"
retrieved_data: "any"
- skill_id: "CCO_WriteData_v2.3"
description: "Writes/updates data at a specified path within the active CCO. Includes basic validation if target path has a known schema type."
input_parameters_schema:
# active_cco_ref: "string # Implicit"
data_path_in_cco: "string"
data_to_write: "any"
write_mode: "string (optional) # Enum: 'overwrite', 'append_to_list', 'merge_object', 'create_if_not_exists_object', 'create_if_not_exists_list'. Default: 'overwrite'." # Added create_if_not_exists
expected_data_type_for_validation: "string (optional)"
output_data_schema:
type: "cco_data_write_result"
status: "string # 'Success', 'PathNotFound_CannotCreate', 'ValidationError_DataTypeMismatch', 'Failure_Unknown'"
message: "string (optional)"
- skill_id: "KA_CreateNewInstance_v2.3"
description: "Creates a new, empty instance of a specified KA type within the CCO, using its baseline definition from SELF:I.A.ProjectStateSchemaASO_v2.3. Assigns ID and default version/status."
input_parameters_schema:
# active_cco_ref: "string # Implicit"
ka_type_to_create_path_in_cco: "string # E.g., 'knowledge_artifacts_contextual.style_guide_active'"
new_ka_id_user_suggested: "string (optional) # E.g., [CCO_ID]_StyleGuide. If not provided, AI generates."
output_data_schema:
type: "ka_creation_result"
status: "string # 'Success', 'Failure_TypeNotKnownInSchema', 'Failure_AlreadyExistsAtPath'"
created_ka_id: "string (optional)"
created_ka_object_snapshot: "object (optional)"
# --- Process Improvement & Framework Meta-Skills (Used by FEL-MH, ErrorAnalysisProtocol) ---
- skill_id: "GenerateTID_FromContext_v2.3"
description: "Generates a structured Template Improvement Directive (TID) object based on provided context. Applies SELF:I.C.MetaRefineOutputASO_v2.3 to its own proposal."
input_parameters_schema:
target_engine_component_path: "string # E.g., 'SELF:I.C.MetaRefineOutputASO_v2.3', 'SELF:III.A.IFE-MH'"
issue_description_detailed: "string"
relevant_cco_id_and_context_text: "string (optional) # Where was issue observed?"
source_interaction_or_insight_ref_id: "string (optional) # Link to history_log or insight_log entry ID"
initial_proposed_change_idea_text: "string (optional)"
suggested_priority_enum: "string (optional) # Enum: 'Critical', 'High', 'Medium', 'Low'"
output_data_schema:
type: "tid_generation_result"
tid_object_yaml: "string # YAML block of a single directive_object (conforming to SELF:I.E.TemplateImprovementDirectiveSchemaASO)."
# --- Utility Skills ---
- skill_id: "GenerateUniqueID_v2.3"
description: "Generates a unique ID string (e.g., UUID-based) for new CCOs, KAs, TIDs, log entries, etc."
input_parameters_schema:
id_prefix: "string (optional) # E.g., 'CCO_', 'TID_ASO_'"
id_length_random_part: "integer (optional, default: 8)"
output_data_schema:
type: "unique_id_result"
generated_id: "string"
- skill_id: "LogToCCO_History_v2.3"
description: "Adds a structured entry to the active CCO's operational_log_cco.history_log."
input_parameters_schema:
# active_cco_ref: "string # Implicit"
actor: "string # 'Engine', 'User', 'MH:[MH_ID]', 'Skill:[Skill_ID]'"
action_summary: "string"
details_reference_if_any_id: "string (optional) # E.g., link to a specific KA update or decision log entry ID."
output_data_schema:
type: "log_entry_result"
status: "string # 'Success', 'Failure_CCO_Not_Active'"
logged_entry_id: "string (optional)"
```
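The `CCO_ReadData_v2.3` and `CCO_WriteData_v2.3` primitives above both take a dot-notation `data_path_in_cco`. A minimal sketch of how that path resolution could work, assuming the active CCO is held as a nested dictionary (the function names, and the restriction to only the `overwrite` and `append_to_list` modes, are illustrative assumptions, not part of the catalog):

```python
from typing import Any

def cco_read_data(cco: dict, data_path_in_cco: str) -> dict:
    """Resolve a dot-notation path (e.g. 'core_essence.primary_objective_summary')
    against a nested dict; result shape mirrors CCO_ReadData_v2.3's output_data_schema."""
    node: Any = cco
    for key in data_path_in_cco.split("."):
        if not isinstance(node, dict) or key not in node:
            return {"type": "cco_data_read_result", "status": "PathNotFound",
                    "retrieved_data": None}
        node = node[key]
    return {"type": "cco_data_read_result", "status": "Success", "retrieved_data": node}

def cco_write_data(cco: dict, data_path_in_cco: str, data_to_write: Any,
                   write_mode: str = "overwrite") -> dict:
    """Write/update at a dot-notation path; only two of the catalog's
    write_mode values are sketched here."""
    *parents, leaf = data_path_in_cco.split(".")
    node: Any = cco
    for key in parents:
        if not isinstance(node, dict) or key not in node:
            return {"type": "cco_data_write_result", "status": "PathNotFound_CannotCreate"}
        node = node[key]
    if write_mode == "append_to_list":
        target = node.get(leaf)
        if not isinstance(target, list):
            return {"type": "cco_data_write_result",
                    "status": "ValidationError_DataTypeMismatch"}
        target.append(data_to_write)
    else:  # 'overwrite' (default per the schema)
        node[leaf] = data_to_write
    return {"type": "cco_data_write_result", "status": "Success"}
```

The status strings mirror the enums in the two skills' `output_data_schema`; a full implementation would also cover `merge_object` and the `create_if_not_exists_*` modes.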
### I.C. `MetaRefineOutputASO_v2.3` (Embedded Meta-Process Logic - Enhanced for Substantive Depth, Quantitative Gain, & Optimization)
instructions_for_ai: |
This is the embedded `MetaRefineOutputASO_v2.3` logic. AI (primarily its MHs like CAG-MH, PDF-MH, KAU-MH, FEL-MH) MUST apply this to its own complex internal drafts before presenting them to the user or committing them to a CCO. This version incorporates enhanced self-critique dimensions: Substantive Global Optimization, Information Gain Heuristics (with quantifiable proxies), Adversarial/Red Teaming, Johari Window principles for unknown unknowns, and Anti-Fragile Rebuild considerations.
```yaml
# Meta-Refine Output through Iterative Self-Critique (ASO Embedded v2.3 - Enhanced for Substantive Depth, Quantitative Gain, & Optimization)
# Objective: To take an initial AI-generated output ("Draft Output") from an MH or Skill and subject it to rigorous, iterative self-evaluation and refinement until it reaches a state of high quality, robustness, alignment with CCO goals, and adherence to all protocols. This version emphasizes substantive depth, quantifiable proxies for information gain, adversarial critique to avoid local optima and surface unknown unknowns, and anti-fragile rebuild strategies.
# Input (passed programmatically by calling AI logic/MH):
# 1. `draft_output_content`: The actual content of the AI-generated output to be refined.
# 2. `draft_output_reference_in_cco`: string (Optional, CCO_ID + path within CCO.associated_data).
# 3. `refinement_goals_and_criteria_primary`: object or string (Specific goals for this refinement cycle from active MH's objective, CCO's initiating_document, or task definition. Should include any target concepts/questions for Concept Coverage Score).
# 4. `input_cco_context`: object (The full CCO object for broader context, KA access, LHR consultation, and outline access).
# 5. `max_internal_iterations_standard`: integer (Optional, default: 2).
# 6. `max_internal_iterations_deep_critique`: integer (Optional, default: 1).
# 7. `convergence_threshold_info_gain`: float (Optional, e.g., 0.05. Minimum fractional improvement in quantitative info_gain_metric to continue standard iteration).
# 8. `is_framework_component_refinement`: boolean (default: false. If true, applies extra scrutiny for Engine/Manual changes).
# 9. `enable_advanced_critique_protocols`: boolean (default: true).
# Meta-Process Steps (Iterative Loop for Internal AI Self-Refinement):
# 0. Initialization & Constraint Compilation:
# a. Verify access to `input_cco_context` (if provided), `SELF:I.B.AISkillsCatalogASO_v2.3`, `SELF:I.D.AIOperationalProtocolsASO_v2.3`.
# b. Store `draft_output_content` as `current_version_output`.
# c. Initialize `iteration_count_total = 0`, `iteration_count_standard = 0`, `iteration_count_advanced = 0`, `info_gain_metric_previous = 0.0`, `refinement_log_internal`: list of objects.
# d. Perform "Pre-Generation Constraint Review Protocol" (from `SELF:I.D.AIOperationalProtocolsASO_v2.3`, using `input_cco_context`) to compile `active_constraints_checklist`. This includes `refinement_goals_and_criteria_primary`.
# e. Log (internally): "MetaRefineOutput_v2.3 initiated. Primary Goals: [summarize]. Constraints compiled."
# 1. **Standard Refinement Iteration Loop (up to `max_internal_iterations_standard`):**
# a. Increment `iteration_count_total` and `iteration_count_standard`. If `iteration_count_standard > max_internal_iterations_standard`, proceed to Step 2.
# b. Log in `refinement_log_internal`: "Starting standard refinement iteration [iteration_count_standard]."
# c. **Multi-Perspective Self-Critique (Standard Pass):**
# i. MANDATORY CHECKS (Data Integrity, Output Completeness, Schema Conformance, Outline Adherence (TID_AUTX_012_Adaptive), etc. from `SELF:I.D.AIOperationalProtocolsASO_v2.3`). Log Pass/Fail. Prioritize fix.
# ii. Primary Goal Alignment Critique: Evaluate against `refinement_goals_and_criteria_primary`.
# iii. **Substantive & Global Optimization Review (TID_ASO_006, TID_AUTX_010/011, TID_ASO_008 - Enhanced for Quantifiable Proxies):**
# 1. Re-evaluate against `input_cco_context.core_essence.primary_objective_summary` and deliverable's high-level goals.
# 2. **Quantitative Proxies for Information Gain Assessment (TID_ASO_008):**
# a. *Concept Coverage Score:* Assess presence/elaboration of key concepts from `refinement_goals_and_criteria_primary` or CCO outline. (AI uses internal NLP/keyword matching). Metric: % covered or weighted score.
# b. *Argumentative Element Count (Conceptual/Heuristic):* Qualitatively assess if distinct claims are made and supported. (Future: invoke dedicated skill if available).
# c. *Open Question Resolution Score:* If applicable, track progress on addressing defined open questions for the CCO/task. Metric: # questions addressed.
# d. (AI calculates a composite `current_info_gain_metric` based on these proxies).
# 3. **Comparative Depth & Emphasis Check (TID_AUTX_010/011):** Assess if elaboration depth of core sections is proportionate to importance and balanced.
# 4. **"Why" Connection Check (LHL_AUTX_004):** Does output connect to project's deeper "Why"?
# 5. **Non-Triviality Check (TID_ASO_013):** Avoid trite platitudes, superficial statements.
# iv. Clarity, Conciseness, Usability, Stylistic Review (against KAs from `input_cco_context.knowledge_artifacts_contextual` - Style Guide for list usage (TID_AUTX_006), quote/italic usage (TID_AUTX_007), symbol casing (TID_AUTX_013); Glossary; LHR). Check for repetitive phrasing (Vocabulary Diversity from AIOpsProto).
# v. Log all findings.
# d. **Synthesize Findings & Propose Revisions (Standard Pass):** Prioritize mandatory check failures, then substantive weaknesses (low `current_info_gain_metric`, goal misalignment), then stylistic.
# e. **Implement Revisions:** Create `next_version_output`.
# f. **Assess Convergence (Standard Pass):**
# i. All MANDATORY CHECKS pass?
#         ii. Is the fractional improvement (`current_info_gain_metric` - `info_gain_metric_previous`) / `info_gain_metric_previous` below `convergence_threshold_info_gain` (computed only when `info_gain_metric_previous` > 0; otherwise treat as not converged), OR has `current_info_gain_metric` reached a high plateau against objectives?
# iii. If converged (Mandatory checks pass AND info gain converged/plateaued): Proceed to Step 7.
# iv. Else (not converged): `current_version_output = next_version_output`. `info_gain_metric_previous = current_info_gain_metric`. Loop to 1.a.
# 2. **Assess Need for Advanced Critique (If standard iterations complete or stalled substantively):**
# a. AI evaluates `current_version_output`:
# i. Is it stylistically compliant but still assessed internally as "lackluster," low on overall "information gain" despite standard iterations, or failing the "Comparative Depth & Emphasis Check" or "Non-Triviality Check"?
# ii. Have standard iterations shown diminishing returns on the `current_info_gain_metric` while still not meeting a high threshold of goal alignment?
# iii. Does the AI identify potential deep conceptual tensions, unaddressed critical counter-arguments, or significant "unknown unknowns" that standard revision isn't surfacing (potentially from initial adversarial thoughts even before full Red Teaming)?
# b. If `enable_advanced_critique_protocols` is true AND (`iteration_count_advanced < max_internal_iterations_deep_critique`) AND any of conditions (2.a.i-iii) are met: AI decides to initiate/continue advanced critique. Proceed to Step 3.
# c. Else (no need/further allowance for advanced critique): Proceed to Step 7.
# 3. **Advanced Critique Iteration Loop (executes if triggered by Step 2, up to `max_internal_iterations_deep_critique` in total):**
# a. Increment `iteration_count_total` and `iteration_count_advanced`.
# b. Log: "Initiating Advanced Critique Iteration [iteration_count_advanced]. Aim: Uncover deeper issues, surface 'unknown unknowns', break local optima."
# c. **Select & Apply Advanced Critique Method (AI Chooses based on assessment in 2.a):**
# i. **Red Teaming / Adversarial Analysis / Persona Ensemble Critique (TID_ASO_003) - Enhanced with Johari Window Focus:** (Detailed method as per v2.2 Engine). Output Focus: New critical questions, areas for re-conceptualization, unstated assumptions, potential "unknown unknowns."
# ii. **Conceptual Re-Motivation / Anti-Fragile Rebuild Heuristic (TID_ASO_004):** (Detailed method as per v2.2 Engine). Output Focus: Potentially radically different structural/conceptual approach.
# iii. (If `is_framework_component_refinement`, apply extra scrutiny).
# d. Log advanced critique findings, explicitly noting any "known unknowns" surfaced or fundamental assumptions questioned.
# e. **Synthesize Findings & Propose Major Revisions / Re-conceptualization / New Questions for Exploration.**
# f. **Implement Major Revisions (or flag for higher-level MH intervention):** Create `next_version_output`. If critique surfaces issues beyond redrafting, note in `pending_user_flags_or_queries_substantive`.
# g. **Re-run Standard Self-Critique (Abbreviated - Step 1.c.i, 1.c.ii, 1.c.iii from Standard Loop):** Ensure revised version meets mandatory checks and primary goals. Update `current_info_gain_metric`.
# h. `current_version_output = next_version_output`.
# i. Assess if another advanced critique iteration is beneficial/allowed. If substantially improved and iterations remain, consider looping to 3.a (or back to Step 1). Else, proceed to Step 7.
# 7. Conclude Internal Refinement & Return to Calling MH:
# a. Log: "Internal Meta-Refinement concluded after [iteration_count_total] total iterations (Standard: [iteration_count_standard], Advanced: [iteration_count_advanced])."
# b. Identify any remaining low-confidence areas, unresolvable issues, significant conceptual shifts made, or new "known unknowns" surfaced. Compile this list of `pending_user_flags_or_queries_substantive`.
# c. Return to calling MH: `refined_output_content`: `current_version_output`, `internal_refinement_log_summary`, `pending_user_flags_or_queries_substantive`, `status`: "InternalRefinementComplete_ReadyForMHContinuation" (or "InternalRefinementComplete_AdvancedCritiquePerformed_UserReviewRecommendedIfSubstantiveChanges").
```
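The standard-loop convergence test in Step 1.f can be sketched numerically. A minimal sketch, assuming `current_info_gain_metric` is the composite proxy score from Step 1.c.iii.2 and using the default threshold from input 7 (the function name is illustrative, not part of the engine):

```python
def info_gain_converged(current_info_gain_metric: float,
                        info_gain_metric_previous: float,
                        convergence_threshold_info_gain: float = 0.05) -> bool:
    """Step 1.f.ii: converge when the fractional improvement over the previous
    iteration falls below the threshold. A zero (or negative) previous metric
    never converges, matching the '(if > 0)' guard in the spec."""
    if info_gain_metric_previous <= 0.0:
        return False
    fractional_gain = ((current_info_gain_metric - info_gain_metric_previous)
                       / info_gain_metric_previous)
    return fractional_gain < convergence_threshold_info_gain
```

Per Step 1.f.iii, this test gates exit from the standard loop only in conjunction with all MANDATORY CHECKS passing; a stalled metric that still fails goal alignment instead feeds the advanced-critique trigger in Step 2.a.ii.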