Symbolic Discovery of Optimization Algorithms (Lion)
Capabilities (5 decomposed)
symbolic-discovery-of-optimization-algorithms
Medium confidence
Discovers novel optimization algorithms through symbolic regression and genetic programming by searching the space of mathematical expressions. The system uses tree-based symbolic representations to compose primitive operations (addition, multiplication, momentum terms, adaptive learning rates) into complete optimizer update rules, then evaluates candidates on benchmark optimization tasks to identify high-performing algorithms. This approach generates human-interpretable optimizer equations rather than black-box neural network policies.
Uses symbolic regression with tree-based genetic programming to compose interpretable optimizer update rules from primitive operations, rather than learning optimizers as black-box neural networks or hand-tuning hyperparameters. Generates human-readable mathematical equations that can be analyzed, modified, and transferred across domains.
Produces interpretable, transferable optimizer equations unlike meta-learning approaches (which generate opaque policies), while discovering task-specific improvements over hand-designed optimizers like Adam without requiring manual hyperparameter search.
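For concreteness, the best-known output of this search is the Lion update rule reported in the paper: a single momentum buffer and a sign-based update with decoupled weight decay. A minimal NumPy sketch (function and variable names are ours):

```python
import numpy as np

def lion_step(w, g, m, lr=1e-4, wd=0.01, beta1=0.9, beta2=0.99):
    """One Lion update as reported in the paper (Chen et al., 2023).

    w: parameters, g: gradient, m: momentum buffer (same shape as w).
    """
    # Update direction: sign of an interpolation between momentum and
    # gradient, plus decoupled weight decay.
    update = np.sign(beta1 * m + (1.0 - beta1) * g) + wd * w
    w = w - lr * update
    # Momentum is refreshed with a *different* interpolation coefficient.
    m = beta2 * m + (1.0 - beta2) * g
    return w, m
```

Because the sign makes every coordinate's step magnitude equal to the learning rate, the paper recommends running Lion with a smaller learning rate and larger weight decay than Adam.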
vision-language-action-model-transfer-to-robotics
Medium confidence
Transfers knowledge from large-scale vision-language models (trained on web data) to robotic control by grounding language understanding in robot action spaces. The system leverages pre-trained multimodal representations to map visual observations and natural language instructions to robot motor commands, enabling robots to execute complex manipulation tasks described in language without task-specific retraining. This bridges the gap between internet-scale language-vision knowledge and embodied robotic control through action-grounded fine-tuning.
Directly grounds vision-language model representations in robot action spaces by learning a mapping from multimodal observations to motor commands, rather than treating robotics as a separate domain. Leverages internet-scale web knowledge (visual concepts, language semantics) to reduce dependence on large robot-specific datasets.
Achieves better generalization and sample efficiency than training robot policies from scratch or using task-specific imitation learning, by bootstrapping from foundation models while maintaining interpretability through language grounding.
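As a rough illustration of the action-grounding step (names, action space, and binning scheme are our assumptions, not the RT-2 API), a vision-language backbone can emit discrete action tokens that are then de-tokenized into continuous motor commands:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Action:
    # 7-DoF command: end-effector deltas plus gripper, a common robot
    # action space (illustrative, not RT-2's exact parameterization).
    xyz: np.ndarray     # (3,) translation delta
    rpy: np.ndarray     # (3,) rotation delta
    gripper: float      # open/close command

def decode_action(tokens: list[int], bins: int = 256) -> Action:
    """Map 7 discretized action tokens back to continuous commands.

    Assumes each dimension was uniformly binned into `bins` buckets
    over [-1, 1], in the style of RT-series action tokenization.
    """
    vals = np.array(tokens, dtype=np.float64) / (bins - 1) * 2.0 - 1.0
    return Action(xyz=vals[:3], rpy=vals[3:6], gripper=float(vals[6]))
```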
learned-optimizer-generalization-across-tasks
Medium confidence
Evaluates and improves the generalization of discovered/learned optimizers by testing them on held-out optimization tasks with different loss landscapes, architectures, and problem structures. The system measures optimizer performance across diverse benchmarks (vision, language, reinforcement learning) to identify which discovered algorithms transfer well versus overfit to discovery-phase tasks. This capability enables filtering of discovered optimizers for real-world applicability and understanding of generalization boundaries.
Systematically evaluates optimizer generalization across diverse task distributions rather than reporting single-benchmark performance, using multi-domain evaluation to expose overfitting and identify robust algorithmic patterns.
Provides empirical generalization evidence that discovered optimizers work beyond their discovery tasks, unlike single-benchmark optimizer papers which may report inflated performance on cherry-picked problems.
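A hypothetical evaluation harness for this kind of study might look like the following sketch, where each held-out benchmark returns a score normalized against a baseline optimizer (all names and the scoring convention are our assumptions):

```python
from statistics import mean
from typing import Callable

# Each benchmark takes an optimizer constructor and returns a normalized
# score, e.g. final accuracy relative to an Adam baseline (hypothetical).
Benchmark = Callable[[Callable], float]

def generalization_report(optimizer_fn: Callable,
                          held_out: dict[str, Benchmark]) -> dict[str, float]:
    """Score a discovered optimizer on held-out tasks it was NOT searched on."""
    scores = {name: bench(optimizer_fn) for name, bench in held_out.items()}
    scores["mean"] = mean(scores.values())  # aggregate across domains
    return scores
```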
multimodal-grounding-of-language-in-action-space
Medium confidence
Maps natural language descriptions to robot action sequences by learning joint embeddings of vision, language, and action modalities. The system encodes visual observations and language instructions into a shared representation space, then decodes this representation into executable robot actions through a learned action decoder. This enables the model to understand semantic relationships between language concepts and their corresponding motor behaviors, supporting compositional generalization to novel language-action combinations.
Learns joint embeddings across vision, language, and action modalities with explicit action grounding, enabling the model to map language semantics directly to motor commands rather than treating action prediction as a separate supervised learning problem.
Achieves better compositional generalization and language understanding than vision-only imitation learning, while being more sample-efficient than training separate language and action models due to shared multimodal representations.
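A toy sketch of the shared-representation idea, not the paper's architecture: project each modality into one space, fuse vision and language, and decode an action. All dimensions and layer choices below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class JointEmbedder(nn.Module):
    """Toy joint embedding: vision, language, and action features are
    projected into one shared space so language concepts sit near the
    motor behaviors they describe (dimensions are placeholders)."""

    def __init__(self, v_dim=768, l_dim=512, a_dim=7, d=256):
        super().__init__()
        self.v_proj = nn.Linear(v_dim, d)   # vision features -> shared space
        self.l_proj = nn.Linear(l_dim, d)   # language features -> shared space
        self.a_proj = nn.Linear(a_dim, d)   # actions -> shared space (for alignment losses)
        self.decoder = nn.Sequential(
            nn.Linear(2 * d, d), nn.ReLU(),
            nn.Linear(d, a_dim))            # fused embedding -> action

    def forward(self, v_feat, l_feat):
        # Fuse vision + language in the shared space, decode an action.
        z = torch.cat([self.v_proj(v_feat), self.l_proj(l_feat)], dim=-1)
        return self.decoder(z)
```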
task-specific-optimizer-discovery-via-benchmark-optimization
Medium confidence
Discovers optimizers specialized for specific optimization problem classes by running symbolic regression on benchmark suites tailored to those domains. The system evaluates candidate optimizer expressions on representative tasks (e.g., training vision transformers, fine-tuning language models, RL policy optimization) and selects expressions that maximize convergence speed and final performance on those specific benchmarks. This produces domain-tuned optimizers that outperform general-purpose algorithms on their target problem class.
Tailors optimizer discovery to specific problem domains by using domain-representative benchmarks during symbolic search, rather than discovering general-purpose optimizers that work across all problem types.
Produces domain-specialized optimizers with better convergence properties than general-purpose algorithms like Adam, while maintaining interpretability and transferability compared to black-box meta-learning approaches.
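A minimal evolutionary-search skeleton under assumed interfaces: a `benchmarks` list whose items expose an `evaluate` method over candidate expression trees, and a caller-supplied `mutate` function. Both interfaces are hypothetical, not the paper's implementation:

```python
import random

def fitness(program, benchmarks):
    """Average convergence score of a candidate update rule on the
    domain benchmarks that steer the search (hypothetical API)."""
    return sum(b.evaluate(program) for b in benchmarks) / len(benchmarks)

def evolve(population, benchmarks, mutate, generations=100, keep=0.2):
    """Minimal truncation-selection loop: score, keep the top fraction,
    refill the population with mutated survivors."""
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, benchmarks), reverse=True)
        survivors = population[: max(1, int(keep * len(population)))]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(len(population) - len(survivors))
        ]
    return population[0]  # best-scoring expression found
```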
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with Symbolic Discovery of Optimization Algorithms (Lion), ranked by overlap. Discovered automatically through the match graph.
RT-2
Google's vision-language-action model for robotics.
Large Language Models as Optimizers (OPRO)
Prompting-based method that uses LLMs to iteratively propose and refine solutions to optimization problems.
RT-1: Robotics Transformer for Real-World Control at Scale (RT-1)
Google's robotics transformer trained on large-scale real-world robot demonstration data.
Agents
Library/framework for building language agents
Neural Networks/Deep Learning - StatQuest

CodeLlama 70B
Meta's 70B specialized code generation model.
Best For
- ✓ ML researchers exploring optimizer design space
- ✓ Teams optimizing for domain-specific loss landscapes (vision, NLP, robotics)
- ✓ Organizations seeking interpretable alternatives to black-box meta-learning approaches
- ✓ Robotics teams building manipulation systems with limited task-specific training data
- ✓ Organizations deploying robots in dynamic environments requiring language-based task specification
- ✓ Research groups exploring transfer learning from foundation models to embodied AI
- ✓ Researchers validating symbolic optimizer discovery methods
- ✓ Teams deploying learned optimizers across heterogeneous ML workloads
Known Limitations
- ⚠ Discovered algorithms may overfit to training task distributions and fail to generalize to unseen problem types
- ⚠ Symbolic search space is exponential; computational cost scales with expression complexity and benchmark diversity
- ⚠ Generated equations may contain numerical instabilities or divergence modes not caught during the discovery phase
- ⚠ Requires careful benchmark selection; poorly representative tasks yield algorithms that exploit evaluation metrics rather than genuinely improving optimization
- ⚠ Requires high-quality action-labeled robot demonstration data to ground language in motor commands
- ⚠ Transfer performance degrades when robot morphology differs significantly from the training distribution
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
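Purely for illustration, a composite score like this is often a weighted blend of normalized signals. The weights and signal names below are hypothetical, not the site's actual formula:

```python
def unfragile_rank(signals, weights=None):
    """Hypothetical weighted blend of signals normalized to [0, 1]."""
    weights = weights or {
        "adoption": 0.3,        # usage and citation signals
        "docs": 0.2,            # documentation quality
        "ecosystem": 0.2,       # connectivity to other artifacts
        "match_feedback": 0.2,  # match graph feedback
        "freshness": 0.1,       # recency of activity
    }
    return sum(w * signals.get(k, 0.0) for k, w in weights.items())
```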
About
07/2023: [RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control (RT-2)](https://arxiv.org/abs/2307.15818)