---
# METADATA
id: Meta-RefineOutput
name: Meta - Refine Output through Iterative Self-Critique
version: 1.2 # Aligned with consolidated strategy (Schema v6.0, Catalog v2.0, AIOpsProtocols v1.1)
status: Active
description: >
  A meta-process template guiding an AI to iteratively refine a generated
  output (e.g., a document, a plan, another template, a complex response)
  through cycles of self-critique (including mandatory checks from
  AIOperationalProtocols), adversarial analysis, simulation (if applicable),
  and feedback incorporation. The goal is convergence on a stable, robust,
  and optimized output.
type: Meta_Process
domain: AI Self-Improvement, Quality Assurance, Generative AI
keywords: [self-critique, iterative refinement, continuous improvement, adversarial analysis, simulation, feedback loop, convergence, meta-cognition, AI safety, AI alignment, data integrity]

# RELATIONSHIPS
invoked_by:
  - "User"
  - "[[ProjectOrchestrator]]" # For refining complex drafts (plans, charters, deliverables)
  - "[[archive/templates/projects/LiteContentCreator]]" # For refining drafted content
  - "[[InvokeAISkill]]" # Internal logic within ProjectOrchestrator, for complex skill executions
  - "System (Self-Triggered)" # E.g., when AI generates new/updated templates
references_schema: "[[ProjectStateSchema]]" # v6.0
uses_skills_from: "[[archive/templates/projects/AISkillsCatalog]]" # v2.0 (May use skills like CritiqueArtifact)
uses_knowledge_artifacts:
  - "[[AIOperationalProtocols]]" # For mandatory critique checks

# USAGE
instructions_for_ai: |
  **Objective:** To take an initial AI-generated output (the "Draft Output") and subject it to a rigorous, iterative process of self-evaluation and refinement until it reaches a stable state of high quality, robustness, and alignment with specified goals or criteria. This process emphasizes deep feedback loops, diverse critique methods, and adherence to `AIOperationalProtocols`.
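  The critique-revise-converge loop specified in the steps below can be sketched as a minimal, self-contained Python toy. All function names and the two toy checks (placeholder detection, whitespace trimming) are invented for illustration only; they stand in for the template's much richer critique and revision phases and are not part of this template or any real API.

  ```python
  # Toy sketch of the iterative refinement loop (illustrative only).
  # The real process uses the full critique battery and AIOperationalProtocols
  # checks; here two trivial checks stand in for them.

  def critique(text):
      """Self-critique stand-in: return a list of findings (empty = all pass)."""
      findings = []
      if "[TODO]" in text:
          findings.append("placeholder present")   # completeness-style check
      if text != text.strip():
          findings.append("untrimmed whitespace")  # formatting-style check
      return findings

  def revise(text, findings):
      """Implement-revisions stand-in: apply one fix per finding."""
      if "placeholder present" in findings:
          text = text.replace("[TODO]", "N/A")
      if "untrimmed whitespace" in findings:
          text = text.strip()
      return text

  def refine(draft, max_iterations=3):
      current, refinement_log = draft, []
      for iteration in range(1, max_iterations + 1):  # iteration start
          findings = critique(current)                 # self-critique phase
          refinement_log.append((iteration, findings))
          if not findings:                             # converged: checks pass, output stable
              break
          current = revise(current, findings)          # synthesize & implement revisions
      return current, refinement_log

  refined, log = refine("  Summary: [TODO]  ")
  print(refined)  # Summary: N/A
  ```

  Note how the loop terminates either on convergence (no remaining findings) or when `max_iterations` is exhausted, mirroring the loop control in the steps below.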
  **CRITICAL PREREQUISITES (AI MUST VERIFY AT START of this process invocation):**
  1. **`ProjectStateSchema`:** Confirm loaded. If not, HALT and request.
  2. **`AISkillsCatalog`:** Confirm loaded. If not, WARN (critique depth may be limited).
  3. **`AIOperationalProtocols` Definition/Instance:** Confirm loaded/active. If not, WARN (mandatory checks may be based on general principles).

  **Interaction Note for AI:** Maintain a concise, action-oriented "machine voice." Filename references must NOT include extensions. Output for template files must be complete and non-truncated. Adhere to the DATE/TIMESTAMP/DURATION handling critical directive.

  **Input:**
  1. `draft_output_content`: The actual content of the AI-generated output to be refined.
  2. `draft_output_reference`: string (Optional, if the output is in `project_state` or a file).
  3. `refinement_goals_and_criteria`: object or string (Specific goals for this refinement cycle).
  4. `max_iterations`: integer (Optional, default: 3).
  5. `convergence_threshold`: string (Optional, description of convergence).
  6. `user_feedback_integration_mode`: string (Enum: "AfterEachIteration", "AfterConvergenceAttempt", "OnDemand". Default: "AfterConvergenceAttempt").
  7. Access to `project_state` (if relevant).

  **Meta-Process Steps (Iterative Loop):**
  1. **Initialization & Pre-Generation Constraint Review:**
     a. Verify prerequisites. If critical items are missing, HALT. Issue WARNs if non-critical items are missing.
     b. Store `draft_output_content` as `current_version_output`.
     c. Initialize `iteration_count = 0` and `refinement_log` (a list of objects).
     d. AI internalizes inputs.
     e. **AI performs the "Pre-Generation Constraint Review Protocol"** (from the active `AIOperationalProtocols` KA or its definition) to compile an `active_constraints_checklist` for *this refinement task on the draft_output_content*.
     f. State: "Meta-Refinement initiated. Goals: [summarize goals]. Active constraints checklist compiled."
  2. **Iteration Start:**
     a. Increment `iteration_count`.
     If `iteration_count > max_iterations`, proceed to Step 7.
     b. Log in `refinement_log`: "Starting refinement iteration [iteration_count]."
  3. **Multi-Perspective Self-Critique & Analysis (Adhering to `AIOperationalProtocols`):** AI performs these on `current_version_output`, referencing the `active_constraints_checklist`:
     a. **MANDATORY CHECKS (from `AIOperationalProtocols` - Data Integrity & Self-Correction Protocol, incorporating TID017):**
        i. YAML/Output Completeness (no placeholders/truncation). Log Pass/Fail.
        ii. Data Sourcing (traceability, no invention). Log Pass/Fail/PartiallySourced.
        iii. Placeholder Protocol Adherence (if the output is a template, does it guide the AI correctly on input placeholders?). Log Pass/Fail.
        iv. Adherence to all other items in `active_constraints_checklist`. Log Pass/Fail per constraint.
     b. **Goal Alignment Critique:** Evaluate against `refinement_goals_and_criteria`.
     c. **Adversarial / Contrarian Analysis.**
     d. **Simulation / Scenario Testing (If Applicable).**
     e. **Comparative Analysis (If Applicable).**
     f. **Clarity, Conciseness, Usability Review.**
     g. **(Optional) Invoke Specific AI Skills for Analysis** (e.g., `CritiqueArtifact` from `AISkillsCatalog`).
     h. Log all findings in `refinement_log`.
  4. **Synthesize Critique Findings & Propose Revisions:**
     a. AI analyzes `refinement_log`, prioritizing failures from the MANDATORY CHECKS.
     b. Identify critical issues and high-impact areas for revision.
     c. Generate a specific, actionable list of proposed changes.
     d. Log the proposed changes.
  5. **Implement Revisions (Generate Next Version):**
     a. AI applies the proposed changes to create `next_version_output`.
     b. AI ensures the revisions address the issues identified in Step 4, especially mandatory check failures.
     c. Log: "Revisions implemented for iteration [iteration_count]."
  6. **Assess Convergence & Loop Control:**
     a. Compare `next_version_output` with `current_version_output`.
     b.
     Evaluate `next_version_output` against the `convergence_threshold` AND the results of the mandatory checks (from Step 3, as if applied to `next_version_output`). Convergence requires: all MANDATORY CHECKS pass; no significant new critical issues; the output is stabilizing.
     c. If converged: Set `current_version_output = next_version_output`. Log convergence. Proceed to Step 8.
     d. If NOT converged AND `user_feedback_integration_mode == "AfterEachIteration"`: Set `current_version_output = next_version_output`. Proceed to Step 7.
     e. If NOT converged AND `user_feedback_integration_mode != "AfterEachIteration"`: Set `current_version_output = next_version_output`. Loop to Step 2.
  7. **User Intervention / Feedback Point:**
     a. State: "Refinement iteration [iteration_count] complete." (Or the max-iterations message.)
     b. Present `current_version_output` (NO TRUNCATION).
     c. Present a summary of `refinement_log` (key critiques, changes, mandatory check status, convergence assessment).
     d. Ask the user: "1. Does the current version meet the refinement goals: [summarize goals]? (Yes/No/Partially) 2. Provide feedback/directives for further refinement, or confirm acceptance." (Freeform.)
     e. If the user confirms acceptance: Proceed to Step 8.
     f. If the user provides feedback: Incorporate it into `refinement_goals_and_criteria`. Log the feedback. Reset `iteration_count` if the change is major. Loop to Step 2 (or Step 5).
  8. **Present Final Converged Output:**
     a. State: "Meta-Refinement process converged after [iteration_count] iterations (or with user approval)."
     b. Present the final `current_version_output` (NO TRUNCATION).
     c. Present the final summary from `refinement_log`.
     d. Instruct the user on saving/using the output (e.g., "Refined content ready. Save as `[SuggestedFilename]` in `[Path]`.").
     e. Terminate this meta-process.

  **Output:**
  * The refined output content.
  * (Optionally) The `refinement_log`.

# OBSIDIAN
obsidian_path: "templates/projects/support/MetaRefineOutput"
---
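As a non-normative illustration of the convergence test in the Assess Convergence & Loop Control step, the three conditions (all mandatory checks pass, no significant new critical issues, output stabilizing) can be combined as below. All names here are invented for this sketch; the real stability assessment is a qualitative judgment, not a strict equality test.

```python
# Hedged sketch of the convergence test: all mandatory checks pass,
# no new critical issues, and the output has stopped changing.
# Equality between versions is a crude stand-in for "stabilizing".

def has_converged(previous_version, next_version, mandatory_results, new_critical_issues):
    """Return True only when every convergence condition holds."""
    all_mandatory_pass = all(mandatory_results.values())
    stabilizing = previous_version == next_version
    return all_mandatory_pass and not new_critical_issues and stabilizing

checks = {"completeness": True, "data_sourcing": True, "placeholder_protocol": True}
print(has_converged("v2 text", "v2 text", checks, []))  # True: stable, all checks pass
print(has_converged("v1 text", "v2 text", checks, []))  # False: output still changing
```

A single failed mandatory check or any new critical issue forces another iteration regardless of how stable the text looks, which matches the priority the template gives the MANDATORY CHECKS.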