ai-notes
PromptFreenotes for software engineers getting up to speed on new AI developments. Serves as datastore for https://latent.space writing, and product brainstorming, but has cleaned up canonical references under the /Resources folder.
Capabilities14 decomposed
llm capability tracking and documentation
Medium confidenceMaintains a structured, continuously-updated knowledge base documenting the evolution, capabilities, and architectural patterns of large language models (GPT-4, Claude, etc.) across multiple markdown files organized by model generation and capability domain. Uses a taxonomy-based organization (TEXT.md, TEXT_CHAT.md, TEXT_SEARCH.md) to map model capabilities to specific use cases, enabling engineers to quickly identify which models support specific features like instruction-tuning, chain-of-thought reasoning, or semantic search.
Organizes LLM capability documentation by both model generation AND functional domain (chat, search, code generation), with explicit tracking of architectural techniques (RLHF, CoT, SFT) that enable capabilities, rather than flat feature lists
More comprehensive than vendor documentation because it cross-references capabilities across competing models and tracks historical evolution, but less authoritative than official model cards
image generation prompt engineering reference library
Medium confidenceCurates a collection of effective prompts and techniques for image generation models (Stable Diffusion, DALL-E, Midjourney) organized in IMAGE_PROMPTS.md with patterns for composition, style, and quality modifiers. Provides both raw prompt examples and meta-analysis of what prompt structures produce desired visual outputs, enabling engineers to understand the relationship between natural language input and image generation model behavior.
Organizes prompts by visual outcome category (style, composition, quality) with explicit documentation of which modifiers affect which aspects of generation, rather than just listing raw prompts
More structured than community prompt databases because it documents the reasoning behind effective prompts, but less interactive than tools like Midjourney's prompt builder
ai information sources and community tracking
Medium confidenceMaintains a curated guide to high-quality AI information sources, research communities, and learning resources, enabling engineers to stay updated on rapid AI developments. Tracks both primary sources (research papers, model releases) and secondary sources (newsletters, blogs, conferences) that synthesize AI developments.
Curates sources across multiple formats (papers, blogs, newsletters, conferences) and explicitly documents which sources are best for different learning styles and expertise levels
More selective than raw search results because it filters for quality and relevance, but less personalized than AI-powered recommendation systems
ai products landscape and use case mapping
Medium confidenceDocuments the landscape of AI products and applications, mapping specific use cases to relevant technologies and models. Provides engineers with a structured view of how different AI capabilities are being applied in production systems, enabling informed decisions about technology selection for new projects.
Maps products to underlying AI technologies and capabilities, enabling engineers to understand both what's possible and how it's being implemented in practice
More technical than general product reviews because it focuses on AI architecture and capabilities, but less detailed than individual product documentation
small models and efficient ai tracking
Medium confidenceDocuments the emerging movement toward smaller, more efficient AI models that can run on edge devices or with reduced computational requirements, tracking model compression techniques, distillation approaches, and quantization methods. Enables engineers to understand tradeoffs between model size, inference speed, and accuracy.
Tracks the full spectrum of model efficiency techniques (quantization, distillation, pruning, architecture search) and their impact on model capabilities, rather than treating efficiency as a single dimension
More comprehensive than individual model documentation because it covers the landscape of efficient models, but less detailed than specialized optimization frameworks
ai security and safety considerations documentation
Medium confidenceDocuments security, safety, and alignment considerations for AI systems in SECURITY.md, covering adversarial robustness, prompt injection attacks, model poisoning, and alignment challenges. Provides engineers with practical guidance on building safer AI systems and understanding potential failure modes.
Treats AI security holistically across model-level risks (adversarial examples, poisoning), system-level risks (prompt injection, jailbreaking), and alignment risks (specification gaming, reward hacking)
More practical than academic safety research because it focuses on implementation guidance, but less detailed than specialized security frameworks
semantic search and rag architecture documentation
Medium confidenceDocuments the architectural patterns and implementation approaches for building semantic search systems and Retrieval-Augmented Generation (RAG) pipelines, including embedding models, vector storage patterns, and integration with LLMs. Covers how to augment LLM context with external knowledge retrieval, enabling engineers to understand the full stack from embedding generation through retrieval ranking to LLM prompt injection.
Explicitly documents the interaction between embedding model choice, vector storage architecture, and LLM prompt injection patterns, treating RAG as an integrated system rather than separate components
More comprehensive than individual vector database documentation because it covers the full RAG pipeline, but less detailed than specialized RAG frameworks like LangChain
code generation model capability tracking
Medium confidenceMaintains documentation of code generation models (GitHub Copilot, Codex, specialized code LLMs) in CODE.md, tracking their capabilities across programming languages, code understanding depth, and integration patterns with IDEs. Documents both model-level capabilities (multi-language support, context window size) and practical integration patterns (VS Code extensions, API usage).
Tracks code generation capabilities at both the model level (language support, context window) and integration level (IDE plugins, API patterns), enabling end-to-end evaluation
Broader than GitHub Copilot documentation because it covers competing models and open-source alternatives, but less detailed than individual model documentation
ai infrastructure and scaling analysis
Medium confidenceDocuments computational requirements, hardware needs, and scaling laws for training and deploying AI models in INFRA.md, including FLOPS calculations, memory requirements, and cost-performance tradeoffs. Provides engineers with the technical foundation to estimate infrastructure needs for specific model sizes and deployment scenarios, covering both training infrastructure and inference optimization patterns.
Connects model architecture decisions (parameter count, sequence length) directly to hardware requirements and cost, enabling end-to-end infrastructure planning from model design through deployment
More practical than academic scaling law papers because it includes real hardware costs and availability, but less detailed than specialized infrastructure frameworks like Ray or vLLM
audio processing and speech-to-text capability reference
Medium confidenceDocuments advancements in speech recognition (Whisper), text-to-speech, and music generation models in AUDIO.md, tracking model capabilities, supported languages, and integration patterns. Covers both transcription accuracy across languages and the architectural approaches used in state-of-the-art audio models.
Organizes audio models by both capability (transcription, generation) and constraint (language support, real-time requirements), enabling targeted model selection
Broader than individual model documentation because it covers competing approaches (Whisper vs commercial APIs), but less detailed than specialized audio ML frameworks
ai agents and agentic systems architecture tracking
Medium confidenceDocuments developments in agentic AI systems in AGENTS.md, covering agent architectures, tool-use patterns, planning approaches, and multi-step reasoning frameworks. Tracks how agents decompose tasks, interact with external tools, and maintain state across multiple reasoning steps, providing engineers with patterns for building autonomous AI systems.
Treats agents as integrated systems combining LLM reasoning, tool orchestration, and state management, rather than treating each component separately
More comprehensive than individual agent framework documentation because it covers architectural patterns across multiple implementations, but less detailed than specialized agent frameworks like AutoGPT or LangChain Agents
instruction tuning and rlhf technique documentation
Medium confidenceDocuments the techniques for adapting base language models to follow instructions through instruction fine-tuning (IFT) and Reinforcement Learning from Human Feedback (RLHF), explaining how these techniques transform raw language models into chat-capable systems. Covers the architectural components (reward models, preference data collection, policy optimization) and their interaction in creating instruction-following models.
Explicitly documents the pipeline from base model → instruction tuning → RLHF → chat model, showing how each stage builds on previous work rather than treating them as isolated techniques
More accessible than academic papers on RLHF because it contextualizes techniques within practical model development, but less detailed than specialized alignment research
ai benchmarks and evaluation metrics reference
Medium confidenceMaintains documentation of AI benchmarks and evaluation metrics for assessing model performance across different domains (language understanding, code generation, image quality, etc.), enabling engineers to understand how models are compared and what metrics matter for specific use cases. Covers both standard benchmarks (MMLU, HumanEval) and domain-specific evaluation approaches.
Organizes benchmarks by both domain (language, code, vision) and evaluation dimension (accuracy, efficiency, robustness), enabling targeted benchmark selection
More comprehensive than individual benchmark papers because it covers the landscape of available benchmarks, but less detailed than specialized evaluation frameworks
ai datasets and training data reference library
Medium confidenceCatalogs key AI datasets used for training and evaluating models across different domains (language, vision, code, audio), documenting dataset characteristics, licensing, and use cases. Enables engineers to understand what training data is available for different tasks and how dataset choices affect model capabilities.
Organizes datasets by both domain and use case (training vs evaluation), with explicit documentation of dataset characteristics that affect model behavior
More curated than raw dataset repositories because it provides context and recommendations, but less detailed than individual dataset papers
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifactssharing capabilities
Artifacts that share capabilities with ai-notes, ranked by overlap. Discovered automatically through the match graph.
Awesome-Prompt-Engineering
This repository contains a hand-curated resources for Prompt Engineering with a focus on Generative Pre-trained Transformer (GPT), ChatGPT, PaLM etc
issue
<a href="https://www.buymeacoffee.com/ikaijuaawesomeaitools" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" height="41" width="174"></a>
Gentrace
Optimize Generative AI Models with...
Align AI
Streamlines AI strategy alignment with business...
mcpm
** ([website](https://mcpm.sh)) - MCP Manager (MCPM) is a Homebrew-like service for managing Model Context Protocol (MCP) servers across clients by **[Pathintegral](https://github.com/pathintegral-institute)**
Prompt Engineering Guide
Guide and resources for prompt engineering.
Best For
- ✓AI engineers evaluating model selection for production systems
- ✓Developers building LLM-powered applications who need capability matrices
- ✓Teams migrating between model providers and need feature parity analysis
- ✓Product teams building image generation features
- ✓Designers prototyping visual concepts with AI
- ✓Developers optimizing prompt templates for production image generation
- ✓Engineers keeping pace with rapid AI developments
- ✓Researchers discovering relevant work in their domain
Known Limitations
- ⚠Documentation is manually curated and may lag behind rapid model releases by weeks
- ⚠No automated capability testing or verification — relies on community contributions and author research
- ⚠Lacks structured machine-readable format (YAML/JSON) for programmatic capability queries
- ⚠Prompts are model-specific and may not transfer between Stable Diffusion and DALL-E without modification
- ⚠No systematic evaluation of prompt effectiveness — relies on subjective quality assessment
- ⚠Lacks quantitative metrics on how prompt variations affect generation time or quality scores
Requirements
Input / Output
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
Repository Details
Last commit: Feb 16, 2026
About
notes for software engineers getting up to speed on new AI developments. Serves as datastore for https://latent.space writing, and product brainstorming, but has cleaned up canonical references under the /Resources folder.
Categories
Alternatives to ai-notes
Are you the builder of ai-notes?
Claim this artifact to get a verified badge, access match analytics, see which intents users search for, and manage your listing.
Get the weekly brief
New tools, rising stars, and what's actually worth your time. No spam.
Data Sources
Looking for something else?
Search →