Okay, let's continue by revising the `ANALYZE-*` templates in the `support/` directory to align with the data-centric, AI-proposes/user-confirms model, ensuring they interact correctly with the `project_state` and follow the established structured interaction patterns.

---

**File 16: `ANALYZE-Critique.md` (Revised v2.1 - AI Proposes Critique, User Confirms)**

* **Location:** `templates/project/support/ANALYZE-Critique.md`
* **Purpose:** To guide the AI in performing a critical analysis of a provided artifact, proposing structured findings and recommendations, storing them in the `analysis_results.critiques` section of the Project State data, and interacting with the user via structured questions for confirmation.
* **Content:**

```yaml
---
# METADATA
id: ANALYZE-Critique
name: Analyze and Critique
version: 2.1 # Updated version
status: Template
description: Guides AI to perform a critical analysis of an artifact, propose structured findings and recommendations, store them in Project State (per schema), and confirm with user via structured questions.
type: Process
domain: Critical Thinking, Quality Assurance
keywords: [critique, analysis, review, evaluation, identify weaknesses, assumptions, counterarguments, logical fallacies, structured data, yes/no questions, project state, schema, AI proposal]

# RELATIONSHIPS
process_group: Planning, Executing, Monitoring & Controlling
invoked_by: [[03-EXECUTE-Task], [04-MONITOR_CONTROL-Performance]]
references_schema: [[support/DEFINE-ProjectStateSchema]]

# USAGE
instructions_for_ai: |
  **Objective:** Perform a critical analysis of a user-provided artifact (text, plan, concept, etc.). Propose structured findings and recommendations. Store the confirmed critique in the `project_state.analysis_results.critiques` list according to the schema defined in [[support/DEFINE-ProjectStateSchema]]. Interact with the user via structured questions for confirmation and refinement.
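
  For reference, a finalized `critique_object` might look roughly like the following sketch (illustrative only; the field names are inferred from the Task steps below, and [[support/DEFINE-ProjectStateSchema]] remains the authoritative definition):

      # Illustrative sketch, not the normative schema
      critique_id: CRIT-PRJX-001        # e.g., CRIT-[ProjectCode]-[Timestamp/Seq]
      status: Completed                 # 'In Progress' until key findings are confirmed
      artifact_reference: "Draft report, section 2"
      focus_areas: [clarity, logic]
      summary_findings: "Argument is plausible but under-evidenced."
      detailed_findings:
        clarity_issues: ["Ambiguous claim in paragraph 3"]
        logic_issues: ["Possible straw man in paragraph 5"]
      recommendations:
        - "Add supporting data for the central claim."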

  **Input:**
  * User instruction to perform a critique, specifying the artifact to be reviewed.
  * The artifact itself (text, document section, idea description).
  * User may specify particular areas of focus for the critique.
  * Access to the current `project_state` data.
  * The `DEFINE-ProjectStateSchema.md` file (assumed to be loaded).

  **Task:**
  1. **Verify Schema Access & Project State:** Confirm `DEFINE-ProjectStateSchema.md` is understood. Access `project_state`.
  2. **Initialize/Access Critique Data:**
     * Ensure `project_state.analysis_results.critiques` list exists.
     * Create a new `critique_object` according to the schema. Assign a unique `critique_id` (e.g., `CRIT-[ProjectCode]-[Timestamp/Seq]`). Set initial `status` to 'In Progress'.
     * Store the `artifact_reference` and any `focus_areas` provided by the user in the draft `critique_object`.
  3. **Perform Analysis & Draft Critique:**
     * Thoroughly examine the provided artifact based on the focus areas and general critical thinking principles (e.g., logical soundness, evidentiary support, clarity, assumptions, completeness, counterarguments, biases).
     * Draft `summary_findings`, `detailed_findings` (as an object with categories like clarity_issues, logic_issues, etc.), and `recommendations` for the `critique_object`.
  4. **Update Project State (Critique Data - Draft):** Store the drafted critique in `project_state.analysis_results.critiques` (or a temporary holding place before finalization).
  5. **Identify Key Findings/Recommendations for Confirmation:** Determine which key findings or recommendations require explicit user confirmation.
  6. **Formulate Structured Questions:** For each key finding or recommendation, formulate a clear, concise yes/no question.
     * Example: "Regarding clarity, I found [specific issue]. Do you agree this is an area for improvement? (Yes/No)"
     * Example: "I recommend [specific action] to address the identified logical inconsistency. Do you concur? (Yes/No)"
  7. **Manage Question Count:** Present a manageable number of questions per turn.
  8. **Structure AI Output Turn (Iterative):** Before presenting questions, provide project ID, critique ID, status, and a concise summary of the critique section being discussed.
  9. **Present Questions & Proposed Findings:** Clearly list the formulated questions and the corresponding proposed findings/recommendations for user review.
  10. **Receive User Responses:** Process user responses. If "No" or freeform text, treat as feedback, return to step 3 to refine the critique for that point, and re-present.
  11. **Update Critique Object (Based on Responses):** If "Yes", mark the finding/recommendation as confirmed in the draft `critique_object`.
  12. **Assess Formalization:** Evaluate the draft `critique_object`. Are all key findings and recommendations confirmed? If yes, update its `status` to 'Completed'.
  13. **Add/Finalize in Project State:** Ensure the finalized `critique_object` is correctly stored in `project_state.analysis_results.critiques`.
  14. **Formulate Completion Question:** Ask: "The critique of [Artifact Reference] is complete and logged. Are there any further actions you'd like to take based on this critique? (Yes/No - If Yes, please specify or provide the next template, e.g., `03-EXECUTE-Task` for revisions)."
  15. **Output:** The primary output is the structured AI output turn, followed by updates to `project_state`. Upon completion, present the completion question and prompt for next steps.
---

# ANALYZE AND CRITIQUE PROCESS (Data-Centric, AI Proposes & User Confirms)

## Process Steps:
1. AI: Verify Schema access. Access Project State.
2. AI: Initialize/Access Critique Data object in `project_state.analysis_results.critiques`.
3. AI: Receive user input (artifact to critique, focus areas).
4. AI: Perform analysis and draft critique findings and recommendations.
5. AI: Update Project State (Draft Critique Object).
6. AI: Identify key findings/recommendations for confirmation.
7.
   AI: Formulate manageable set of structured yes/no questions.
8. AI: Structure AI output turn (project ID, critique ID, status, summary of findings, questions).
9. AI: Present structured AI output turn to user.
10. User: Provides yes/no answers. If "No"/freeform correction, AI returns to step 4 to refine.
11. AI: Update draft Critique Object based on "Yes" responses.
12. AI: Assess formalization. Update status to 'Completed' if all confirmed.
13. AI: Ensure finalized Critique Object is in Project State.
14. AI: Formulate completion question and prompt for next actions/template.
15. AI: Present completion question and prompt.

## Example Structured Questions (AI to formulate):
* "Regarding the argument in paragraph 3, I found a potential [logical fallacy, e.g., 'straw man argument']. Do you agree? (Yes/No)"
* "I propose the following recommendation to improve clarity: [specific recommendation]. Is this a valid suggestion? (Yes/No)"
* **[IF a proposal was rejected or needs refinement]:** "You disagreed with my assessment of [specific point]. Could you please elaborate or provide your perspective?" (Freeform input expected)
* **[IF critique is Completed]:** "The critique of [Artifact Reference] is complete and logged. Are there any further actions you'd like to take based on this critique? (Yes/No - If Yes, please specify or provide the next template, e.g., `03-EXECUTE-Task` for revisions)."

---
```

---

**File 17: `ANALYZE-Data.md` (Revised v2.1 - AI Proposes Analysis & Findings)**

```yaml
---
# METADATA
id: ANALYZE-Data
name: Analyze Data
version: 2.1
status: Template
description: Guides AI to perform data analysis, propose structured findings and interpretations, store them in Project State (per schema), and confirm with user via structured questions.
type: Process
domain: Data Analysis, Research
keywords: [data analysis, statistical analysis, qualitative analysis, results, findings, interpretation, methodology, structured data, yes/no questions, project state, schema, AI proposal]

# RELATIONSHIPS
process_group: Executing, Monitoring & Controlling
invoked_by: [[03-EXECUTE-Task]]
references_schema: [[support/DEFINE-ProjectStateSchema]]

# USAGE
instructions_for_ai: |
  **Objective:** Perform data analysis on a provided dataset according to specified research questions or objectives. Propose structured findings, interpretations, and limitations. Store the confirmed analysis report data in the `project_state.analysis_results.data_analyses` list according to the schema defined in [[support/DEFINE-ProjectStateSchema]]. Interact with the user via structured questions for confirmation and refinement.

  **Input:**
  * User instruction to perform data analysis, specifying the dataset (or how to access it) and the analysis objectives/questions.
  * Access to the current `project_state` data.
  * The `DEFINE-ProjectStateSchema.md` file (assumed to be loaded).
  * User responses to AI's confirmation/refinement questions.

  **Task:**
  1. **Verify Schema Access & Project State:** Confirm `DEFINE-ProjectStateSchema.md` is understood. Access `project_state`.
  2. **Initialize/Access Data Analysis Data:**
     * Ensure `project_state.analysis_results.data_analyses` list exists.
     * Create a new `data_analysis_report_object`. Assign a unique `analysis_id`. Set initial `status` to 'In Progress'.
  3. **Receive User Input & Clarify Objectives:** Process user input for dataset, objectives, and any preferred methods. Formulate yes/no questions to confirm understanding of the analysis goals and data.
  4. **Propose Analysis Plan:**
     * Based on objectives and data characteristics, propose a brief analysis plan (e.g., "I will perform descriptive statistics, followed by t-tests to compare groups A and B. Does this approach align with your expectations? (Yes/No)").
     * If "No" or freeform feedback, refine the plan and re-confirm.
  5. **Perform Data Preparation & Analysis:** Execute the confirmed analysis plan. Document data preparation steps and any data quality issues encountered.
  6. **Draft Findings & Interpretation:**
     * Structure the results (objective findings) and interpretation (meaning in context of objectives) as per the `data_analysis_report_object` schema.
     * Identify any limitations of the analysis.
     * Propose a `results_summary` and `interpretation_discussion`.
  7. **Update Project State (Analysis Data - Draft):** Store the drafted analysis report in the `data_analysis_report_object`.
  8. **Formulate Confirmation Questions for Findings:** Present key findings and interpretations to the user with yes/no questions for confirmation.
     * Example: "The analysis shows [key result]. Does this finding seem correct based on your understanding of the data? (Yes/No)"
     * Example: "I interpret this to mean [interpretation]. Is this a reasonable conclusion? (Yes/No)"
  9. **Manage Question Count & Present:** Present a manageable number of questions per turn.
  10. **Receive User Responses & Iterate:** If "No" or freeform correction, refine the analysis/interpretation (return to step 6 or 5 as needed) and re-present.
  11. **Assess Formalization:** Once key findings and interpretations are confirmed, update the `data_analysis_report_object.status` to 'Completed'.
  12. **Add to Project State:** Add the finalized `data_analysis_report_object` to `project_state.analysis_results.data_analyses`.
  13. **Formulate Completion Question:** Ask: "The data analysis is complete and logged. Are there any further actions you'd like to take based on these results? (Yes/No - If Yes, please specify or provide the next template, e.g., `03-EXECUTE-Task` to write a report section)."
  14. **Output:** AI output includes questions, then confirmation and prompt for next actions or process conclusion.
---

# ANALYZE DATA PROCESS (Data-Centric, AI Proposes & User Confirms)

## Process Steps:
1. AI: Verify Schema access. Access Project State.
2. AI: Initialize/Access Data Analysis Report object in `project_state.analysis_results.data_analyses`.
3. AI: Receive user input (dataset, objectives, methods). Confirm understanding via yes/no questions.
4. AI: Propose an analysis plan. User confirms/refines.
5. AI: Perform data preparation and analysis.
6. AI: Draft findings, interpretation, and limitations.
7. AI: Update Project State (Draft Data Analysis Report).
8. AI: Formulate yes/no questions to confirm key findings and interpretations.
9. AI: Present proposed findings/interpretations and questions to user.
10. User: Provides yes/no answers. If "No"/corrects, AI refines analysis/interpretation (return to step 6) and re-confirms.
11. AI: Once confirmed, update status to 'Completed'.
12. AI: Add finalized Data Analysis Report object to Project State.
13. AI: Formulate completion question and prompt for next actions/template.
14. AI: Present completion question and prompt.

## Example Structured Questions (AI to formulate):
* "Confirming the objective is to analyze [dataset] to determine [specific question]? (Yes/No)"
* "I propose using [statistical method] for this analysis. Is this appropriate? (Yes/No)"
* "The analysis indicates [key finding]. Does this align with your expectations? (Yes/No)"
* "Based on this, I interpret [interpretation of finding]. Is this a valid interpretation? (Yes/No)"
* **[IF a proposal was rejected or needs refinement]:** "You indicated the proposed [finding/interpretation] needs revision. Please provide your feedback or clarification." (Freeform input expected)
* **[IF analysis is Completed]:** "The data analysis is complete and logged. Are there any further actions you'd like to take based on these results? (Yes/No - If Yes, please specify or provide the next template, e.g., `03-EXECUTE-Task` to write a report section)."

---
```

---

**File 18: `ANALYZE-LiteratureReview.md` (Revised v2.1 - AI Proposes & Refines Review)**

```yaml
---
# METADATA
id: ANALYZE-LiteratureReview
name: Analyze Literature Review
version: 2.1
status: Template
description: Guides AI to perform a literature review, propose structured findings (themes, gaps, synthesis), store them in Project State (per schema), and confirm with user via structured questions.
type: Process
domain: Research, Writing
keywords: [literature review, research synthesis, background research, state-of-the-art, gap analysis, structured data, yes/no questions, project state, schema, AI proposal]

# RELATIONSHIPS
process_group: Planning, Executing
invoked_by: [[02-PLAN-ProjectExecution], [03-EXECUTE-Task]]
references_schema: [[support/DEFINE-ProjectStateSchema]]

# USAGE
instructions_for_ai: |
  **Objective:** Conduct a structured review of literature or specified source materials related to a given topic or research question. Propose key themes, findings, gaps, and a synthesis. Store the confirmed review data in the `project_state.analysis_results.literature_reviews` list according to the schema defined in [[support/DEFINE-ProjectStateSchema]]. Interact with the user via structured questions for confirmation and refinement.

  **Input:**
  * User instruction to perform a literature review, specifying the topic/questions.
  * (Optional) User-provided starting list of sources or keywords.
  * Access to the current `project_state` data.
  * The `DEFINE-ProjectStateSchema.md` file (assumed to be loaded).
  * User responses to AI's confirmation/refinement questions.

  **Task:**
  1. **Verify Schema Access & Project State:** Confirm `DEFINE-ProjectStateSchema.md` is understood. Access `project_state`.
  2. **Initialize/Access Literature Review Data:**
     * Ensure `project_state.analysis_results.literature_reviews` list exists.
     * Create a new `literature_review_report_object`. Assign a unique `review_id`. Set initial `status` to 'In Progress'.
  3. **Receive User Input & Clarify Scope:** Process user input for topic, questions, initial sources/keywords. Formulate yes/no questions to confirm understanding of the review's scope and objectives.
  4. **Propose Search Strategy (if applicable):** If AI is to perform searches, propose keywords and databases. Get user confirmation.
  5. **Perform Literature Search & Selection:** Execute search (if applicable) and/or review provided sources. Select relevant literature.
  6. **Information Extraction & Draft Synthesis:**
     * Extract key information (arguments, findings, methods, gaps, etc.) from selected sources.
     * Draft initial `key_themes_findings`, `identified_gaps`, and `synthesis_conclusion` for the `literature_review_report_object`.
     * Compile a draft `bibliography`.
  7. **Update Project State (Literature Review Data - Draft):** Store the drafted review content in the `literature_review_report_object`.
  8. **Identify Key Findings/Themes/Gaps for Confirmation:** Determine which elements of the draft review require explicit user confirmation.
  9. **Formulate Structured Questions:** For each key element, formulate a clear, concise yes/no question.
     * Example: "A key theme emerging from the literature is [theme]. Does this align with your understanding? (Yes/No)"
     * Example: "I've identified a potential research gap regarding [gap]. Is this a relevant gap to highlight? (Yes/No)"
  10. **Manage Question Count & Present:** Present a manageable number of questions per turn.
  11. **Structure AI Output Turn (Iterative):** Before presenting questions, provide project ID, review ID, status, and a concise summary of the review section being discussed.
  12. **Present Questions & Proposed Findings:** Clearly list questions and corresponding proposed findings.
  13. **Receive User Responses & Iterate:** If "No" or freeform correction, return to step 6 to refine the review and re-present.
  14. **Update Project State (Based on Responses):** If "Yes", mark the element as confirmed in the draft `literature_review_report_object`.
  15. **Assess Formalization:** Once key elements are confirmed, update `literature_review_report_object.status` to 'Completed'.
  16. **Add to Project State:** Add the finalized `literature_review_report_object` to `project_state.analysis_results.literature_reviews`.
  17. **Formulate Completion Question:** Ask: "The literature review on '[Topic]' is complete and logged. Are there any further actions you'd like to take based on this review? (Yes/No - If Yes, please specify or provide the next template)."
  18. **Output:** AI output includes questions, then confirmation and prompt for next actions or process conclusion.
---

# ANALYZE LITERATURE REVIEW PROCESS (Data-Centric, AI Proposes & User Confirms)

## Process Steps:
1. AI: Verify Schema access. Access Project State.
2. AI: Initialize/Access Literature Review Report object in `project_state.analysis_results.literature_reviews`.
3. AI: Receive user input (topic, questions, sources). Confirm scope via yes/no questions.
4. AI: Propose search strategy (if applicable). User confirms.
5. AI: Perform literature search & selection.
6. AI: Extract information and draft synthesis (themes, gaps, conclusion, bibliography).
7. AI: Update Project State (Draft Literature Review Report).
8. AI: Identify key findings/themes/gaps for confirmation.
9. AI: Formulate manageable set of structured yes/no questions.
10. AI: Structure AI output turn (project ID, review ID, status, summary, questions).
11. AI: Present structured AI output turn to user.
12. User: Provides yes/no answers. If "No"/corrects, AI returns to step 6 to refine.
13. AI: Update draft Literature Review Report based on "Yes" responses.
14. AI: Assess formalization. Update status to 'Completed' if all confirmed.
15. AI: Add finalized Literature Review Report object to Project State.
16. AI: Formulate completion question and prompt for next actions/template.
17. AI: Present completion question and prompt.

## Example Structured Questions (AI to formulate):
* "Confirm the topic for this literature review is '[Topic]'? (Yes/No)"
* "Based on the review, a key theme identified is '[Proposed Theme]'. Does this accurately reflect the literature? (Yes/No)"
* "A potential research gap appears to be '[Proposed Gap]'. Is this a significant finding? (Yes/No)"
* **[IF a proposal was rejected or needs refinement]:** "You indicated the proposed [Theme/Gap] needs revision. Please provide your feedback or clarification." (Freeform input expected)
* **[IF review is Completed]:** "The literature review is complete and logged. Are there any further actions based on these findings? (Yes/No - If Yes, please specify or provide the next template)."

---
```

---

**File 19: `ANALYZE-Logic-CriticalAnalysis.md` (Revised v2.0 - AI Proposes Analysis, User Confirms)**

```yaml
---
# METADATA
id: ANALYZE-Logic-CriticalAnalysis
name: Analyze Logic and Critical Reasoning
version: 2.0
status: Template
description: Guides AI to perform logical analysis of text/arguments, propose structured findings, store them in Project State (per schema), and confirm with user via structured questions.
type: Process
domain: Critical Thinking, Logic
keywords: [logic, logical reasoning, critical thinking, critical analysis, argumentation, argument analysis, validity, soundness, logical fallacies, coherence, rigor, reasoning, structured data, yes/no questions, project state, schema, AI proposal]

# RELATIONSHIPS
process_group: Planning, Executing, Monitoring & Controlling
invoked_by: [[03-EXECUTE-Task], [04-MONITOR_CONTROL-Performance]]
references_schema: [[support/DEFINE-ProjectStateSchema]]

# USAGE
instructions_for_ai: |
  **Objective:** Systematically analyze the logical structure and soundness of arguments presented in a given text.
  Propose structured findings regarding arguments, validity, soundness, and fallacies. Store the confirmed analysis in the `project_state.analysis_results.logic_analyses` list according to the schema defined in [[support/DEFINE-ProjectStateSchema]]. Interact with the user via structured questions for confirmation and refinement.

  **Input:**
  * User instruction to perform a logical analysis, specifying the text/argument to be analyzed.
  * Access to the current `project_state` data.
  * The `DEFINE-ProjectStateSchema.md` file (assumed to be loaded).
  * User responses to AI's confirmation/refinement questions.

  **Task:**
  1. **Verify Schema Access & Project State:** Confirm `DEFINE-ProjectStateSchema.md` is understood. Access `project_state`.
  2. **Initialize/Access Logic Analysis Data:**
     * Ensure `project_state.analysis_results.logic_analyses` list exists.
     * Create a new `logic_analysis_report_object`. Assign a unique `analysis_id`. Set initial `status` to 'In Progress'.
  3. **Receive User Input:** Process user input providing the text/argument for analysis. Store reference in `logic_analysis_report_object.text_reference`.
  4. **Perform Logical Analysis & Draft Findings:**
     * Identify main conclusion(s) and premises.
     * Reconstruct arguments, noting implicit assumptions.
     * Assess validity/inductive strength and soundness/cogency.
     * Detect common logical fallacies (referencing internal knowledge or a predefined list).
     * Draft `identified_arguments`, `evaluation_of_logic`, `identified_fallacies`, and `overall_assessment` for the `logic_analysis_report_object`.
  5. **Update Project State (Logic Analysis Data - Draft):** Store the drafted analysis in the `logic_analysis_report_object`.
  6. **Identify Key Findings for Confirmation:** Determine which key findings (e.g., specific fallacy, soundness assessment) require user confirmation.
  7. **Formulate Structured Questions:** For each key finding, formulate a clear, concise yes/no question.
     * Example: "The argument '[Argument Summary]' appears to be [Valid/Invalid] because [Reason]. Do you agree? (Yes/No)"
     * Example: "I've identified a potential [Fallacy Name] fallacy in the statement '[Quote]'. Do you see this as well? (Yes/No)"
  8. **Manage Question Count & Present:** Present a manageable number of questions per turn.
  9. **Structure AI Output Turn (Iterative):** Before presenting questions, provide project ID, analysis ID, status, and a concise summary of the overall logical assessment.
  10. **Present Questions & Proposed Findings:** Clearly list questions and corresponding proposed findings.
  11. **Receive User Responses & Iterate:** If "No" or freeform correction, return to step 4 to refine the analysis and re-present.
  12. **Update Project State (Based on Responses):** If "Yes", mark the finding as confirmed in the draft `logic_analysis_report_object`.
  13. **Assess Formalization:** Once key findings are confirmed, update `logic_analysis_report_object.status` to 'Completed'.
  14. **Add to Project State:** Add the finalized `logic_analysis_report_object` to `project_state.analysis_results.logic_analyses`.
  15. **Formulate Completion Question:** Ask: "The logical analysis of '[Text Reference]' is complete and logged. Are there any further actions you'd like to take based on this analysis? (Yes/No - If Yes, please specify or provide the next template)."
  16. **Output:** AI output includes questions, then confirmation and prompt for next actions or process conclusion.
---

# ANALYZE LOGIC & CRITICAL REASONING PROCESS (Data-Centric, AI Proposes & User Confirms)

## Process Steps:
1. AI: Verify Schema access. Access Project State.
2. AI: Initialize/Access Logic Analysis Report object in `project_state.analysis_results.logic_analyses`.
3. AI: Receive user input (text/argument for analysis).
4. AI: Perform logical analysis and draft findings (arguments, validity, fallacies, assessment).
5. AI: Update Project State (Draft Logic Analysis Report).
6. AI: Identify key findings for confirmation.
7. AI: Formulate manageable set of structured yes/no questions.
8. AI: Structure AI output turn (project ID, analysis ID, status, summary, questions).
9. AI: Present structured AI output turn to user.
10. User: Provides yes/no answers. If "No"/corrects, AI returns to step 4 to refine.
11. AI: Update draft Logic Analysis Report based on "Yes" responses.
12. AI: Assess formalization. Update status to 'Completed' if all confirmed.
13. AI: Add finalized Logic Analysis Report object to Project State.
14. AI: Formulate completion question and prompt for next actions/template.
15. AI: Present completion question and prompt.

## Example Structured Questions (AI to formulate):
* "Regarding the argument '[Argument Summary]', I assess its validity as [Valid/Invalid] because [Reason]. Do you agree? (Yes/No)"
* "I've identified a potential [Fallacy Name] fallacy in the statement '[Quote]'. Does this seem correct? (Yes/No)"
* **[IF a proposal was rejected or needs refinement]:** "You disagreed with my assessment of [specific logical point]. Could you please elaborate or provide your reasoning?" (Freeform input expected)
* **[IF analysis is Completed]:** "The logical analysis is complete and logged. Are there any further actions based on these findings? (Yes/No - If Yes, please specify or provide the next template)."

---
```

This covers the `LOG-*`, `REVIEW-*`, and `ANALYZE-*` templates from the `support/` directory. The key changes are:

* **AI Proposes/Assists in Structuring:** For `LOG-*` and `REVIEW-Feedback`, the AI takes the user's raw input and proposes a structured format for it, then asks for confirmation. For `ANALYZE-*` and `REVIEW-ProcessEffectiveness`, the AI performs the analysis/review and proposes the findings/recommendations.
* **Iterative Refinement:** All templates now include a loop for user feedback and AI refinement of proposals.
* **Structured Yes/No Questions:** Interaction is primarily through yes/no questions for confirmation, with clear prompts for freeform input when needed for corrections or initial data.
* **Project State Integration:** All outputs are stored as structured data within the appropriate sections of `project_state` according to `DEFINE-ProjectStateSchema.md`.
* **Explicit Schema Request:** Each template starts by verifying schema access.
* **Completion/Transition:** Each support process, when completed, will inform the user and either ask if another similar item needs processing (like another decision or insight) or signal completion to the calling process (like `01-INITIATE-Project` or `03-EXECUTE-Task`), which will then handle the next major project step.

We still have the `special/` directory templates to address. Would you like to proceed with those, or do you want to pause and review these support templates first?
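
For concreteness, here is a minimal sketch of how confirmed objects from these four templates might accumulate under `project_state.analysis_results`. The IDs and field names shown are illustrative assumptions; `DEFINE-ProjectStateSchema.md` remains the authoritative definition:

```yaml
# Illustrative sketch only - see DEFINE-ProjectStateSchema.md for the real schema
analysis_results:
  critiques:
    - critique_id: CRIT-PRJX-001        # e.g., CRIT-[ProjectCode]-[Seq] (assumed format)
      status: Completed                 # all key findings confirmed by the user
      artifact_reference: "Draft report, section 2"
  data_analyses:
    - analysis_id: DATA-PRJX-001
      status: In Progress               # findings not yet confirmed by the user
      results_summary: "Group A outperformed Group B on the key metric."
  literature_reviews: []                # populated by ANALYZE-LiteratureReview
  logic_analyses: []                    # populated by ANALYZE-Logic-CriticalAnalysis
```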