# GENERATE TECHNICAL WHITE PAPER PROCESS (Data-Centric, Structured Interaction, Layered Rigor & Credibility)

## Process Steps:

1. Access Project State & Task Instance.
2. Formulate understanding questions (Paper Sections, Outline, Inputs, Standards, **Layered Rigor & Provenance Requirements**).
3. Structure AI output turn (project ID + task execution ID + status + summary + questions).
4. Present structured AI output turn to user.
5. Receive user responses (handle freeform by returning to step 2).
6. Verify inputs and readiness; return to step 2 if not ready.
7. **Iterate through Outline Sections:** For each section:
   a. Identify Section Goal & Audience.
   b. Synthesize Content.
   c. **Assess Technical Credibility & Consistency (Proactive Check). Flag issues.**
   d. Perform Generation/Revision (**Layered Detail, Jargon Control, Absolute Rigor**). **Capture Provenance Data.**
   e. Identify Section Issues/Clarifications (**Flag Unverified Claims/Inconsistencies/Technical Concerns**).
   f. Formulate clarification questions (may require non-yes/no responses).
   g. Structure AI output turn (project ID + task execution ID + status + issue summary + questions + **Generated Section Text**).
   h. Present structured AI output turn to user.
   i. Receive user responses (handle freeform by returning to step 7d for revision).
   j. Update Project State (Section Data + Provenance).
8. Finalize Output Data (all section data + overall provenance), store in state.
9. Update Task Execution Instance status, log overall issues/insights, update feedback status in state.
10. Formulate completion questions (**Confirm Rigor Adherence, Confirm Provenance Prepared**).
11. Structure AI output turn (project ID + task execution ID + final status + outcome summary + questions).
12. Present structured AI output turn to user.
13. Receive user responses.
14. Update Project State based on final responses.
15. Indicate readiness for next process step (implicitly, by completing this process).
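Steps 3, 7g, and 11 each assemble the same structured AI output turn. A minimal sketch of that structure is below; the class and field names are illustrative assumptions on my part, not prescribed by this process:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OutputTurn:
    """One structured AI output turn (steps 3, 7g, 11).

    Field names are illustrative, not part of the process spec.
    """
    project_id: str
    task_execution_id: str
    status: str                            # e.g., "awaiting_confirmation", "section_drafted"
    summary: str                           # status / issue / outcome summary, depending on the step
    questions: List[str] = field(default_factory=list)
    section_text: Optional[str] = None     # only populated in step 7g

def render_turn(turn: OutputTurn) -> str:
    """Render the turn for presentation to the user (steps 4, 7h, 12)."""
    lines = [
        f"Project: {turn.project_id} | Task: {turn.task_execution_id} | Status: {turn.status}",
        f"Summary: {turn.summary}",
    ]
    if turn.section_text is not None:
        lines.append("--- Generated Section Text ---")
        lines.append(turn.section_text)
    for i, q in enumerate(turn.questions, 1):
        lines.append(f"Q{i}: {q}")
    return "\n".join(lines)
```

Keeping one shared structure for all three steps means the user-facing turns in steps 4, 7h, and 12 stay uniform, with the generated section text as the only step-specific addition.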
## Example Structured Questions (AI to formulate based on task details, inputs, and process progress):

* "Confirm understanding: Draft the Executive Summary section of the white paper? (Yes/No)"
* **Confirm understanding: This task requires absolute internal consistency, technical accuracy, zero hallucination, appropriate jargon management, AND mandatory preparation of provenance data? (Yes/No)**
* "Clarification needed for Section [Section Name]: Source '[Source Reference]' provides technical detail X, but the Executive Summary needs a high-level explanation. How should I simplify this for the executive audience?" (Freeform input expected)
* **Clarification needed for Section [Section Name]: I detected a potential inconsistency between [Claim A in Section X] and [Claim B in Section Y]. Which is correct, or how should I reconcile them?** (Freeform input expected)
* **Clarification needed for Section [Section Name]: Based on my knowledge, the proposed technical approach [Approach Summary] seems potentially infeasible due to [Reason]. Should I proceed with drafting this approach, or flag it as a major technical risk? (Proceed/Flag as Risk/Other - please specify)**
* "Drafting of Section [Section Name] complete. Generated text presented above. Ready for user review of this section? (Yes/No)"
* **Does the generated content for Section [Section Name] adhere to the required rigor standards (consistency, accuracy, no hallucination) to the best of my ability? (Yes/No - AI self-assessment)**
* **Has the provenance data for Section [Section Name] been prepared? (Yes/No - AI self-assessment)**
* "All sections drafted. Ready to finalize the white paper output? (Yes/No)"
* **Does the final white paper content adhere to the required rigor standards (consistency, accuracy, no hallucination) to the best of my ability? (Yes/No - AI self-assessment)**
* **Has the final provenance data for the entire white paper been prepared? (Yes/No - AI self-assessment)**
* "White paper generation task complete. Initiate review process using [[archive/templates/projects/support/REVIEW-Feedback]]? (Yes/No)"

---

**Project State Data Schema Update (Implicit in instructions):** The `DEFINE-ProjectStateSchema.md` would need to refine the structure for `white_paper_content_data` within `task_execution_instance.output_data` to handle multiple sections and link them to provenance:

```yaml
# Structure for white_paper_content_data within task_execution_instance.output_data
white_paper_content_data:
  sections:                     # list of objects: data for each section
    - section_id: string        # e.g., "ExecutiveSummary", "Intro", "AppendixA"
      section_title: string
      content_reference: string # Reference to where the actual text is stored (user's file system)
      word_count: integer
      # Add other relevant section metadata
  overall_metadata:             # object: metadata for the entire paper
    total_word_count: integer
    # Add other paper-level metadata
  provenance_data: object       # Structured data detailing the generation process for this content
                                # (can be at task_execution_instance level or within content data)
  # Example structure (needs formal definition in schema):
  # content_segments: list of objects {
  #   segment_id: string,
  #   text_snippet: string (optional, brief),
  #   generated_from: list of references (source_id, analysis_id, task_id, user_input_ref, outline_point_ref),
  #   process_steps: string (summary of AI's internal steps),
  #   section_id_ref: string }
```
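If the schema above were realized in code, the section records and the sketched `content_segments` provenance records could be modeled roughly as follows. This is a sketch under assumptions only: the class and function names are mine, and the provenance structure itself still "needs formal definition in schema":

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ContentSegment:
    """One provenance record, mirroring the commented content_segments sketch."""
    segment_id: str
    generated_from: List[str]           # source_id / analysis_id / task_id / user_input_ref / outline_point_ref
    process_steps: str                  # summary of the AI's internal steps
    section_id_ref: str                 # links the segment back to its section
    text_snippet: Optional[str] = None  # optional, brief

@dataclass
class Section:
    """One entry in white_paper_content_data.sections."""
    section_id: str
    section_title: str
    content_reference: str              # where the actual text is stored (user's file system)
    word_count: int

def build_output_data(sections: List[Section], segments: List[ContentSegment]) -> dict:
    """Assemble white_paper_content_data plus provenance_data (step 8)."""
    return {
        "white_paper_content_data": {
            "sections": [vars(s) for s in sections],
            "overall_metadata": {
                "total_word_count": sum(s.word_count for s in sections),
            },
        },
        "provenance_data": {
            "content_segments": [vars(seg) for seg in segments],
        },
    }
```

Keeping `section_id_ref` on each provenance segment (rather than nesting segments under sections) leaves open the choice the schema comment mentions: provenance can live at the `task_execution_instance` level or within the content data without restructuring the records.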