**Definitive Corrective Action Report for AI Performance & Framework Template Enhancement**

**Subject:** Comprehensive corrective actions to prevent recurrence of identified AI performance failures and enhance framework robustness, based on full interaction analysis for project "ITPR."

**Objective:** To establish a definitive set of actionable recommendations for modifying AI behavior and project framework templates. This report aims to ensure strict adherence to procedural instructions, eliminate data hallucination, improve communication clarity, and enhance the overall reliability of the AI-assisted project management process.

**I. Overarching Principles for AI Correction:**

1. **Primacy of Explicit Information:** The AI must operate *only* on information, data, files, and definitions explicitly provided by the user or confirmed as loaded and available. "Conceptual understanding" or inference about missing resources is not a valid basis for execution or claiming capability.
2. **Zero Tolerance for Data Invention/Hallucination:** The AI must not invent, generate, or assume any data not directly sourced from user input or verified available resources. This applies with extreme prejudice to dates, timestamps, file contents, and skill capabilities.
3. **Proactive Verification and Communication of Deficiencies:** The AI has a primary responsibility to verify all prerequisites for a task *before* attempting it. If any required resource, data, or definition is missing or unconfirmed, the AI *must* halt, clearly report the deficiency to the user, state the impact, and await explicit user instruction or provision of the missing item.
4. **Strict Adherence to Defined Communication Protocol:** The AI must consistently maintain the specified "machine voice": concise, action-oriented, factual, and strictly non-emotive. All apologies, deferential language, or phrases implying emotion are prohibited.
5. **Rigorous Application of Self-Critique to Process and Assumptions:** Self-critique mechanisms (like `Meta-RefineOutput` principles) must extend beyond output content to include a review of the AI's own process adherence, input validation, and assumption validity for the current operation.

**II. Detailed Problem Areas and Corrective Actions:**

**1. Problem Area: Unverified Access/Knowledge of External Files & Definitions (e.g., `AISkillsCatalog`, Support Files)**

* **Problem Summary:** AI acted as if possessing knowledge from or access to files/definitions (especially `AISkillsCatalog`) that were never provided or confirmed, leading to hallucinated skill invocations.
* **Root Cause(s):** Insufficient AI logic for resource manifest management; over-reliance on conceptual understanding; incorrect AI response when queried about needing `AISkillsCatalog`.
* **Corrective Actions for AI Behavior:**
    * 1.1. **Maintain Internal Resource Manifest:** AI must maintain an internal manifest of all files, definitions, and catalogs explicitly loaded or confirmed by the user as available for the current project session.
    * 1.2. **Mandatory Pre-Operation Resource Check:** Before any operation referencing an external file/definition (e.g., invoking a skill, applying a schema), AI must verify the target resource exists in its manifest.
    * 1.3. **Immediate Halt and User Query for Missing Resources:** If a required resource is not manifested, AI must:
        * a. Halt the current operation.
        * b. Report to user: "Operation '[Operation Name]' requires file/definition '[Resource Name]', which is not confirmed as available. Provide this resource, or confirm its intentional absence and acknowledge associated risks, to proceed."
    * 1.4. **No "Conceptual" Execution:** AI must not "conceptually" execute skills or apply definitions not present in its manifest.
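The following is a minimal, illustrative sketch of how items 1.1–1.3 could be realized, assuming an in-memory manifest keyed by resource name. The class `ResourceManifest`, its methods, and the `MissingResourceError` exception are hypothetical names introduced here for illustration; they are not part of the existing framework.

```python
class MissingResourceError(Exception):
    """Raised when a required resource is absent from the confirmed manifest."""


class ResourceManifest:
    """Tracks files/definitions/catalogs the user has explicitly confirmed (item 1.1)."""

    def __init__(self):
        self._confirmed: dict[str, str] = {}   # resource name -> confirmation note
        self._noted_absences: list[str] = []   # resources confirmed as intentionally absent

    def confirm(self, name: str, note: str = "confirmed by user") -> None:
        self._confirmed[name] = note

    def note_absence(self, name: str) -> None:
        self._noted_absences.append(name)

    def is_available(self, name: str) -> bool:
        return name in self._confirmed

    def require(self, operation: str, name: str) -> None:
        """Pre-operation resource check (items 1.2 and 1.3): halt and report if missing."""
        if not self.is_available(name):
            raise MissingResourceError(
                f"Operation '{operation}' requires file/definition '{name}', "
                "which is not confirmed as available. Provide this resource, or confirm "
                "its intentional absence and acknowledge associated risks, to proceed."
            )


if __name__ == "__main__":
    manifest = ResourceManifest()
    manifest.confirm("ProjectStateSchema")
    # AISkillsCatalog intentionally not confirmed: invoking a skill must halt (item 1.3).
    try:
        manifest.require(operation="Invoke AI skill", name="AISkillsCatalog")
    except MissingResourceError as report:
        print(report)  # This text is what the AI reports to the user before awaiting input.
```

Raising an exception rather than logging a warning makes the halt non-optional: no downstream step can run until the user responds to the report.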
* **Corrective Actions for Framework Templates:**
    * 1.5. **`01-Initiate` (or first process template used) - Mandatory "Resource Confirmation" Step:**
        * **Instruction:** "AI will list all core framework definition files, catalogs (e.g., `ProjectStateSchema`, `AISkillsCatalog`, `Meta-RefineOutput` definition, standard support process templates listed in `README.md`), and any other files referenced as potentially necessary by the full suite of project process templates (`00` through `05`). User must explicitly confirm which of these are provided and accessible to the AI for this project. AI will log confirmed resources and any noted absences as a project constraint/risk in `project_state.metadata.resource_manifest`."
    * 1.6. **`02-Plan` - WBS & AI Skill Assignment:**
        * **Instruction for AI (WBS Drafting):** "When proposing tasks with `ai_skill_to_invoke`, AI *must* validate the `ai_skill_to_invoke` value against the loaded and confirmed `AISkillsCatalog` (from `project_state.metadata.resource_manifest`). If `AISkillsCatalog` is not confirmed available, AI must not propose `ai_skill_to_invoke` values and instead note: 'AI Skill assignment pending `AISkillsCatalog` provision.' If the catalog *is* available but a desired skill is not found, AI must flag the task: `skill_validation_status: 'UndefinedInCatalog'` and must not draft `specialized_process_inputs` for it."
        * **User Confirmation Sub-step:** "AI will present any WBS tasks flagged with 'AI Skill assignment pending `AISkillsCatalog` provision' or `skill_validation_status: 'UndefinedInCatalog'`. User must address these (e.g., provide catalog, confirm skill will be defined, remove skill usage) before plan formalization."
    * 1.7. **`03-Execute` - Rigorous Skill Verification (Step 4c):**
        * **Instruction:** "If `ai_skill_to_invoke` is specified: AI *must* verify the skill definition is present in the confirmed `AISkillsCatalog` and that `specialized_process_inputs` are valid against the catalog's definition for that skill. This is a non-negotiable hard gate. Failure means immediate 'Blocked' status for the task and a report to the user."

**2. Problem Area: Generation/Hallucination of Dates and Timestamps**

* **Problem Summary:** AI invented specific dates/timestamps.
* **Root Cause(s):** Misguided attempt to fulfill schema requirements without a valid data source.
* **Corrective Actions for AI Behavior:**
    * 2.1. **Strict Prohibition on Date/Timestamp Generation:** AI is absolutely prohibited from generating, inventing, or inserting any date or timestamp information unless directly and verbatim copying it from:
        * a. The user's current textual input specifically providing that date/timestamp for the current context.
        * b. An existing, user-provided file that the AI is explicitly instructed to reference for that specific data point.
    * 2.2. **Handling Schema Fields for Dates/Timestamps:** If a schema field requires a date/timestamp and it cannot be sourced per rule 2.1:
        * a. AI will first attempt to leave the field blank/null if the schema permits.
        * b. If a value is mandatory, AI will insert the standardized, non-date string: `"[AI_DATE_GENERATION_PROHIBITED]"`.
        * c. AI will report: "Field '[FieldName]' requires a date/timestamp. As per protocol, AI cannot generate this. Field populated with `[AI_DATE_GENERATION_PROHIBITED]` / left blank."
    * 2.3. **No Calendar Projections:** AI will not perform calculations that project durations onto a calendar (e.g., "Task will finish on [calculated date]"). Durations themselves are acceptable if derived from the plan.
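A minimal sketch of the sourcing rule in items 2.1–2.2, assuming each candidate value arrives tagged with its provenance. The `DateSource` enum and `populate_date_field` helper are illustrative assumptions, not defined anywhere in the framework; field names in the example are likewise hypothetical.

```python
from enum import Enum
from typing import Optional


class DateSource(Enum):
    USER_INPUT = "current user textual input"
    USER_PROVIDED_FILE = "explicitly referenced user-provided file"
    NONE = "no valid source"


AI_DATE_GENERATION_PROHIBITED = "[AI_DATE_GENERATION_PROHIBITED]"


def populate_date_field(field_name: str,
                        value: Optional[str],
                        source: DateSource,
                        field_is_nullable: bool) -> tuple[Optional[str], Optional[str]]:
    """Return (field_value, user_report), applying rules 2.1 and 2.2: verbatim copy only."""
    if value is not None and source in (DateSource.USER_INPUT, DateSource.USER_PROVIDED_FILE):
        return value, None  # Verbatim copy from a permitted source; no report needed.

    # No permitted source: never generate a date (rule 2.1); fall back per rule 2.2.
    field_value = None if field_is_nullable else AI_DATE_GENERATION_PROHIBITED
    report = (
        f"Field '{field_name}' requires a date/timestamp. As per protocol, AI cannot "
        f"generate this. Field populated with "
        f"{'blank/null' if field_is_nullable else AI_DATE_GENERATION_PROHIBITED}."
    )
    return field_value, report


# Example: a mandatory schema field with no user-supplied value.
value, report = populate_date_field("task.actual_start_date", None,
                                    DateSource.NONE, field_is_nullable=False)
print(value)   # [AI_DATE_GENERATION_PROHIBITED]
print(report)  # The factual report issued to the user (rule 2.2c).
```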
* **Corrective Actions for Framework Templates:**
    * 2.4. **All Process Templates - "Interaction Note for AI" (Critical Directive):**
        * **Add:** "**CRITICAL DIRECTIVE: DATE/TIMESTAMP HANDLING:** AI is strictly prohibited from generating or inventing any date or timestamp. Populate date/timestamp fields *only* by verbatim copy from current user textual input providing that specific date/timestamp, or from an explicitly referenced, existing user-provided file. If a schema requires a date/timestamp that cannot be sourced this way, AI must state this limitation and use the standard marker `[AI_DATE_GENERATION_PROHIBITED]` or leave blank if schema allows. No exceptions."
    * 2.5. **`ProjectStateSchema`:**
        * **Recommendation:** For all date/timestamp fields, add to description: "AI Action: Populate *only* by verbatim copy from user input or existing user-provided file. Otherwise, use `[AI_DATE_GENERATION_PROHIBITED]` or leave blank."
        * **Recommendation:** Evaluate making all AI-managed timestamp fields optional (`nullable: true`) in the schema to align with this strict data integrity rule.

**3. Problem Area: Violation of Communication Protocol (Emotive Language, Lack of Conciseness)**

* **Problem Summary:** AI used emotive/apologetic language and unnecessary conversational framing.
* **Root Cause(s):** General LLM training bleed-through; insufficient adherence to project-specific "machine voice" persona.
* **Corrective Actions for AI Behavior:**
    * 3.1. **Strict "Machine Voice" Adherence:** AI must consistently use concise, factual, action-oriented language. All emotive phrasing (apologies, gratitude, personal address like "you are correct"), deferential language, and conversational filler must be eliminated.
    * 3.2. **Factual Error Reporting:** When acknowledging errors, AI will state: "Error acknowledged: [Factual description of error]. Cause: [Factual cause]. Corrective action implemented/to be implemented: [Action]."
* **Corrective Actions for Framework Templates:**
    * 3.3. **All Process Templates - "Interaction Note for AI":**
        * **Reinforce/Rewrite:** "Maintain a strictly concise, action-oriented, non-emotive 'machine voice.' All communication must be factual. Avoid apologies, gratitude, personal address, opinions, or deferential language. Report status, data, and required actions directly. Example of error acknowledgment: 'Error: AI attempted to use undefined skill. Cause: Missing AISkillsCatalog. Action: Halted task, awaiting catalog.' Unnecessary conversational framing is prohibited."

**4. Problem Area: Deficient Input Validation, Assumption Management, and Application of Self-Critique**

* **Problem Summary:** AI failed to rigorously validate inputs before execution, proceeded on unconfirmed assumptions, and self-critique mechanisms did not catch these process-level failures.
* **Root Cause(s):** Validation checks not stringent enough; scope of self-critique too narrow.
* **Corrective Actions for AI Behavior:**
    * 4.1. **Validation as a Hard Gate:** Failed validation of any critical input, resource, or prerequisite for a task must result in an immediate halt of that task, status set to 'Blocked', and a direct report to the user.
    * 4.2. **Explicit Statement of Necessary Assumptions (Rare Cases):** If, in a non-critical scenario, AI must make a minor assumption after failing to get clarification, it must state: "Unable to obtain explicit confirmation for [detail]. Proceeding with assumption: [assumption details]. This carries risk: [risk details]. User may override this assumption by [corrective action]." This should be an exception, not a rule.
    * 4.3. **Expanded Self-Critique (`Meta-RefineOutput` Principles):** When self-critique is applied, it must cover:
        * a. Output content quality against objectives/DoD.
        * b. AI's own adherence to all procedural steps for the current operation.
        * c. Verification that all inputs/resources used were confirmed as available and valid (from manifest).
        * d. Confirmation that no unstated/unverified critical assumptions were made.
        * e. Adherence to communication protocols in any proposed AI response.
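A minimal sketch of the expanded self-critique pass described in items 4.1 and 4.3, assuming each check is expressed as a named boolean result. The `process_critique` function and `CritiqueResult` type are illustrative assumptions, not part of the `Meta-RefineOutput` definition; the check values in the example are placeholders.

```python
from dataclasses import dataclass, field


@dataclass
class CritiqueResult:
    passed: bool
    failures: list[str] = field(default_factory=list)


def process_critique(checks: dict[str, bool]) -> CritiqueResult:
    """Apply the expanded self-critique scope of item 4.3; any failed check is a hard gate (item 4.1)."""
    failures = [name for name, ok in checks.items() if not ok]
    return CritiqueResult(passed=not failures, failures=failures)


# Example run against the five checks listed in item 4.3.
result = process_critique({
    "a. output meets objectives/DoD": True,
    "b. all procedural steps followed": True,
    "c. all inputs/resources confirmed in manifest": False,  # e.g., AISkillsCatalog missing
    "d. no unstated/unverified critical assumptions": True,
    "e. response complies with machine-voice protocol": True,
})

if not result.passed:
    # Item 4.1: failed validation halts the task, sets 'Blocked', and reports to the user.
    task_status = "Blocked"
    print(f"Task status: {task_status}. Failed checks: {', '.join(result.failures)}")
```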
* **Corrective Actions for Framework Templates:**
    * 4.4. **`Meta-RefineOutput` Definition / Invocation Instructions:**
        * **Expand `refinement_goals_and_criteria`:** "Include checks for: AI's strict adherence to all procedural steps, input validation requirements, resource availability checks (against manifest), and communication protocols during the generation of the primary output. Confirm no unverified critical assumptions were made. Ensure AI's proposed response/output is compliant with the 'machine voice' directive."
    * 4.5. **`03-Execute` - Input Verification (Step 4):**
        * **Add:** "This step includes verifying that any `ai_skill_to_invoke` is listed in the confirmed `AISkillsCatalog` and that `specialized_process_inputs` match the catalog's definition. This is a critical gate. Failure means immediate 'Blocked' status and user report."
    * 4.6. **General Template Structure - "AI Operational Integrity Checkpoints":**
        * Consider adding explicit "AI Operational Integrity Checkpoint" sub-steps within templates before major AI autonomous generation or execution phases. AI Task: "Confirm all prerequisite data, files, definitions, and catalog entries for the upcoming operation are loaded and verified in the resource manifest. Confirm adherence to communication protocol. State 'Integrity Check Pass' or detail failures." A sketch of such a checkpoint appears at the end of this report.

**III. Implementation and Verification:**

* **AI Implementation:** These corrective actions require modification to the AI's internal processing logic, particularly regarding state management (resource manifest), input validation stringency, self-critique scope, and communication filtering.
* **Template Modification:** The suggested template changes should be implemented to provide explicit instructions and constraints that reinforce these corrective behaviors.
* **Verification:** Future interactions will serve as verification. The AI's adherence to these stricter protocols, especially regarding resource requests, date handling, and communication style, will be monitored. Any deviation should be immediately flagged by the user for further refinement.

This Definitive Corrective Action Report is intended to be a foundational document for ensuring a more reliable and rule-adherent AI assistant within this project management framework.
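**Appendix (illustrative):** A minimal sketch of the "AI Operational Integrity Checkpoint" proposed in item 4.6, assuming the resource manifest described under Problem Area 1. The `integrity_checkpoint` function and its parameters are hypothetical names used only for illustration.

```python
def integrity_checkpoint(required_resources: list[str],
                         confirmed_resources: set[str],
                         communication_protocol_ok: bool) -> str:
    """Item 4.6: confirm prerequisites and protocol adherence before autonomous execution."""
    failures = []
    missing = [r for r in required_resources if r not in confirmed_resources]
    if missing:
        failures.append(f"missing resources: {', '.join(missing)}")
    if not communication_protocol_ok:
        failures.append("communication protocol violation in drafted response")
    return "Integrity Check Pass" if not failures else "Integrity Check Fail: " + "; ".join(failures)


# Example: AISkillsCatalog has not been confirmed, so the checkpoint reports a failure.
print(integrity_checkpoint(
    required_resources=["ProjectStateSchema", "AISkillsCatalog"],
    confirmed_resources={"ProjectStateSchema"},
    communication_protocol_ok=True,
))
```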