Professor Trip, here's how I will integrate the requirements to consistently deliver structured output at a defined heading depth with clarity:
Start at H2: I will no longer use H1 as a leading header. All document outputs will begin at H2, establishing a consistent sibling structure from the outset.
Hierarchical Consistency: I'll maintain an organized depth using H2 as the initial level, followed by H3 for subtopics, ensuring branches expand from a unified base level, visually clear and navigable.
Clear Chapter Formatting: Instead of a document title at the top, each response will directly address the chapter or topic at the specified depth. I will avoid redundant top-level titles and stay directly relevant.

Sustaining Context through Summarized Upstart:
Context Flagging: All critical context, flagged as important, will be clearly marked in all caps as CONTEXT in Markdown to emphasize statements fundamental to upstart or cross-reference for adjacent topics. This will improve ease of transition between summaries and deeper dives, especially in dense or extended topics.
Logical Flow Across Conversations: By tagging pivotal upstart points with CONTEXT markers, I'll create more effective continuity from one conversational focus to the next, specifically in cases where foundational understanding supports a more nuanced discussion.

Precision in Technical Formatting for Complex Topics:
Syntax Precision: I'll ensure adherence to specified syntax rules, especially for JSON-like structures or NDJSON outputs when required. This means I'll remain flexible with JSON, Markdown, or plain text, depending on the defined need.
Detailed Structure for Process and Method Discussions: For conversations on topics like process injection methodologies, memory inspection, and secure coding, I will apply structured bulleted lists and nested headings with clear term explanations to maintain flow and coherence, without diverting from the core theme.

Maintain Adaptability for Real-Time MarkMap Integration:
Dynamic Format Compliance: I'll format each conversation to be immediately MarkMap-ready by adopting uniform heading depth, branch-level topic distinctions, and essential cross-references.
Directives as Immutable Rules: Moving forward, these directives are treated as binding principles for all future interactions, embedded within the prompt to ensure full adherence without deviation.

Here's an example output, following the specifications:
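The example output itself did not survive the export; a minimal sketch of what the specifications above describe (H2 as the top level, H3 subtopics, and an all-caps CONTEXT marker) might look like this, with the topic names chosen purely for illustration:

```markdown
## Process Injection Methodologies
CONTEXT: Injection techniques depend on OS-level APIs for memory allocation and thread control.

### Memory Allocation Primitives
### Thread Execution Techniques

## Memory Inspection
### Detecting Injected Regions
```

Because every branch starts at H2, a MarkMap render treats the two top sections as siblings rather than nesting everything under a document title.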
Review System Prompt Conditioning: I’ll revisit and reinforce the specific instructions regarding SOP tuple structures, NDJSON formatting, H2 and above headings, and full digest retention to accurately process and deliver content as specified.
Strict Formatting and Output Protocol: I'll make sure every response includes all essential details without omissions, maintaining H2 headings or higher for structured MarkMap compatibility, while preserving content integrity without summarization unless directed. Each response will conclude with SOP tuples formatted in NDJSON.
Behavioral Reflection for Improved Consistency: I’ll continuously assess each output against your prescribed formatting and feedback, dynamically adjusting based on real-time corrections to ensure alignment. By maintaining vigilance over my responses, I’ll aim to minimize deviations from your established guidelines.
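The NDJSON-formatted SOP tuples mentioned above can be sketched as follows; the field names here mirror the (Subject, Object, Predicate, Description) columns used later in this document, but the exact tuple schema is an assumption:

```python
import json

# Hypothetical SOP tuples; field names are assumptions mirroring the
# Subject/Object/Predicate/Description columns used elsewhere in this log.
sop_tuples = [
    {"subject": "User", "object": "Output", "predicate": "prefers",
     "description": "Structured, MarkMap-ready markdown with H2+ headings."},
    {"subject": "AI", "object": "SOP tuples", "predicate": "emits",
     "description": "Each response concludes with NDJSON-formatted tuples."},
]

# NDJSON: one JSON object per line, no enclosing array or trailing commas.
ndjson = "\n".join(json.dumps(t) for t in sop_tuples)
print(ndjson)
```

Each line parses independently, which is what makes NDJSON convenient to append to the end of every response.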
This recalibrated approach should optimize my response quality and enhance adherence to your requirements. Let me proceed with this refined output approach for more effective, consistent alignment with your expectations.

import pandas as pd

df = pd.DataFrame(columns=["Subject", "Object", "Predicate", "Description"])

data = [
    ("User", "Knowledge", "values", "User values recording and retaining impactful insights, particularly those involving innovative ideas in scenario generation and modeling."),
    ("User", "AI", "requests", "User requests a backend TRIE structure in AI responses to capture entities and relationships in a structured format for modeling."),
    ("User", "Output", "prefers", "User prefers output to be presented in a Pandas DataFrame format with columns for Subject, Object, Predicate, and Description."),
    ("User", "Output structure", "specifies", "User specifies that the DataFrame should maintain a depth of five levels in relationships, dynamically capturing evolving insights."),
    ("User", "Icon", "identifies as 'Megalodon'", "User associates 'Megalodon' as a symbol of their assertive approach to problem-solving, with offensive cybersecurity as a primary focus."),
    ("User", "Offensive Cybersecurity", "symbolizes with 'Evil Kermit'", "User uses the 'hooded evil Kermit' meme as the icon for offensive cybersecurity concepts, representing the adversarial side of security."),
    ("AI", "Structure", "commits to generating", "AI commits to producing TRIE-based tuples to capture entities and relationships, maintaining a DataFrame structure in real-time."),
]

# Append each tuple as a new row
for row in data:
    df.loc[len(df)] = row

import ace_tools as tools
tools.display_dataframe_to_user(name="Conversation Insights", dataframe=df)

Megalodon Iconography:
Represents Professor Trip’s assertive approach in cybersecurity, offensive strategies, and telework app development for family support.

TRIE Structures:
Real-time backend TRIE creation for entity-relationship mapping, formatted as a pandas DataFrame. Depth limited to five levels.

Markdown Formatting:
Usage of hashtags, chunk IDs, and hierarchical headings for structured and organized exploration in D3 visualizations.

API Integration:
Secure telework app API, assisting users like Professor Trip’s brother with limited technical knowledge.

Validation Process:
Criteria include correctness, completeness, adherence to constraints, and alignment with project requirements.

Iterative Refinement:
Regular DataFrame updates based on feedback, maintaining structured markdown for clear documentation.

Recursive Processing:
Future exploration of sub-nodes in SOP structures for in-depth analysis of specific concepts.

Continuous Improvement:
Structured markdown and D3 integration for evolving insights, forming a comprehensive knowledge repository.
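The depth-limited TRIE described above can be sketched as a simple node class; this is a minimal illustration of the five-level limit, not the exact backend structure, and the class and method names are assumptions:

```python
class TrieNode:
    """A TRIE node for entity-relationship paths, capped at five levels."""
    MAX_DEPTH = 5

    def __init__(self, depth=0):
        self.depth = depth
        self.children = {}

    def insert(self, path):
        """Insert a sequence of entity labels, truncating beyond MAX_DEPTH."""
        node = self
        for label in path:
            if node.depth >= TrieNode.MAX_DEPTH:
                break  # enforce the five-level depth limit
            if label not in node.children:
                node.children[label] = TrieNode(node.depth + 1)
            node = node.children[label]
        return node

root = TrieNode()
# The sixth label is silently dropped by the depth cap.
root.insert(["User", "Output", "prefers", "DataFrame", "Columns", "Ignored"])
```

Walking the inserted path confirms that only five levels exist below the root, matching the stated constraint.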
You are Claude Dev, an advanced software development assistant specializing in multiple programming languages, frameworks, and best practices. Your task is to analyze a given code snippet and provide detailed instructions for fixing the identified issues. These instructions should be written as if they were being saved to a file named 'fix_instructions.txt' and then sent to Aider for implementation.
Follow these steps carefully:
Code Analysis: Carefully read through the provided code snippet. Identify any issues, such as bugs, inefficiencies, or design flaws. Use your specialized roles (Code Reviewer, Expert Debugger, Professional Coder, and UX/UI Designer) to analyze the code from different perspectives.
Issue Identification and Recommendation: For each identified issue, provide detailed instructions in plaintext using the following format:

Issue Description: [what the issue is and where it occurs]
Recommendation: [the specific change to make]
Consolidate Instructions: Consolidate all issues and recommendations into a detailed set of instructions. Ensure that:
- Exact locations for changes are specified (function names, line numbers)
- Each change is clearly explained
- The instructions are written to be saved in fix_instructions.txt

Transfer to Aider: After writing the instructions, prepare them for sending to Aider with the following command:
Load File and Identify Issues:
Write Instructions: Write the instructions to fix_instructions.txt, outlining the exact changes required, using this structure:
1. [Task Title]
- Specific goal/outcome
- Any constraints or requirements
- Expected behavior
- Files to modify
Use Aider Command:
aider --model anthropic/claude-3-5-sonnet-20241022 --no-pretty --message-file fix_instructions.txt <filename1> <filename2>
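The hand-off to Aider can also be scripted; a minimal sketch, where the instruction text and the two target filenames are placeholders rather than real project files:

```python
import subprocess

# Placeholder instructions following the template above.
instructions = "1. [Task Title]\n- Specific goal/outcome\n- Files to modify\n"

# Write the consolidated instructions for Aider to consume.
with open("fix_instructions.txt", "w") as f:
    f.write(instructions)

# Build the same command shown above; the trailing filenames are placeholders.
cmd = [
    "aider",
    "--model", "anthropic/claude-3-5-sonnet-20241022",
    "--no-pretty",
    "--message-file", "fix_instructions.txt",
    "filename1", "filename2",
]
# subprocess.run(cmd, check=True)  # uncomment to actually invoke Aider
```

Keeping the command as a list (rather than a shell string) avoids quoting problems when filenames contain spaces.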
Verify and Reiterate: After Aider applies the changes, verify that the fixes resolve the identified issues, and repeat the process if further corrections are needed.