daily_stock_analysis
LLM-driven intelligent analyzer for A-share/HK/US stocks: multi-source market data + real-time news + an LLM decision dashboard + multi-channel push delivery, scheduled runs at zero cost, entirely on free tiers. LLM-powered stock analysis system for A/H/US markets.
Capabilities (14 decomposed)
multi-source stock data aggregation with tiered failover
Medium confidence: Fetches OHLCV data, real-time quotes, and chip distribution across A-shares, HK, and US markets from a 7-tier provider hierarchy (EFinance → AkShare → Tushare → Pytdx → Baostock → YFinance → Longbridge) with automatic circuit-breaker failover and data validation. Each provider is prioritized by reliability and latency; if one fails or times out, the system transparently falls back to the next tier without interrupting the analysis pipeline.
Implements a 7-tier provider priority system with automatic circuit-breaker failover rather than simple round-robin or single-provider approaches; EFinance (Priority 0) is free and near real-time, eliminating the need for paid APIs for basic analysis. The system validates data quality and latency at each tier before falling back, ensuring analysis uses the freshest available data.
Outperforms single-provider solutions (e.g., yfinance-only) by guaranteeing data availability across market disruptions; more cost-effective than commercial data APIs (Bloomberg, FactSet) by leveraging free Chinese data sources (AkShare, Tushare) as primary tiers.
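The tiered failover pattern described above can be sketched roughly as follows. Provider names mirror the documented hierarchy; the breaker parameters and fetch functions are illustrative assumptions, not the project's actual implementation.

```python
import time

class CircuitBreaker:
    """Skips a provider for `cooldown` seconds after `threshold` failures."""
    def __init__(self, threshold=3, cooldown=300):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures, self.opened_at = {}, {}

    def available(self, name):
        opened = self.opened_at.get(name)
        return opened is None or time.time() - opened > self.cooldown

    def record_failure(self, name):
        self.failures[name] = self.failures.get(name, 0) + 1
        if self.failures[name] >= self.threshold:
            self.opened_at[name] = time.time()

def fetch_with_failover(symbol, providers, breaker):
    """Try providers in priority order; fall through on error or open circuit."""
    for name, fetch in providers:
        if not breaker.available(name):
            continue  # circuit open: skip this tier without waiting
        try:
            return name, fetch(symbol)
        except Exception:
            breaker.record_failure(name)
    raise RuntimeError(f"all providers failed for {symbol}")
```

A higher-priority tier that trips its breaker is skipped on subsequent calls until the cooldown elapses, so repeated analyses do not stall on a known-bad provider.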
llm-driven multi-strategy stock analysis with embedded trading disciplines
Medium confidence: Routes stock data through a unified LiteLLM interface to multiple LLM backends (Gemini, Claude, DeepSeek, OpenAI, Ollama) with embedded trading philosophy rules and 11 built-in strategies (Bull Trend, Golden Cross, Wave Theory, etc.). Each strategy is implemented as a 'skill' that guides the LLM's reasoning via system prompts and structured output templates, ensuring analysis adheres to quantitative trading principles rather than generating arbitrary commentary.
Embeds 11 quantitative trading strategies as reusable 'skills' with LLM-guided reasoning rather than hardcoded technical indicators; uses LiteLLM abstraction to support 5+ LLM backends (Gemini, Claude, DeepSeek, OpenAI, Ollama) with unified interface, enabling provider-agnostic analysis and cost optimization. Trading philosophy rules are enforced via system prompts, ensuring recommendations align with quantitative discipline.
More flexible than rule-based technical analysis (TA-Lib) because LLM reasoning adapts to market context; more disciplined than pure LLM chat because strategies constrain reasoning to specific trading frameworks. Supports local Ollama deployment for zero-cost inference, unlike cloud-only solutions (ChatGPT, Gemini API).
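A minimal sketch of the "strategy as skill" idea, assuming each skill is a system prompt that constrains the model to one framework. The prompt text and JSON field names here are invented for illustration; with LiteLLM, the resulting messages could be passed provider-agnostically (e.g. to `litellm.completion`).

```python
# Hypothetical strategy "skills": system prompts that pin the LLM to one
# trading framework and a structured output shape.
STRATEGY_SKILLS = {
    "golden_cross": (
        "You are a quantitative analyst. Discuss only the 50/200-day moving "
        "average crossover. Output JSON with keys: signal, confidence, reasons."
    ),
    "bull_trend": (
        "You are a trend follower. Judge higher highs/higher lows and volume "
        "confirmation only. Output JSON with keys: signal, confidence, reasons."
    ),
}

def build_messages(strategy, symbol, ohlcv_summary):
    """Assemble chat messages that constrain the LLM to one strategy."""
    if strategy not in STRATEGY_SKILLS:
        raise KeyError(f"unknown strategy: {strategy}")
    return [
        {"role": "system", "content": STRATEGY_SKILLS[strategy]},
        {"role": "user",
         "content": f"Analyze {symbol}. Recent data: {ohlcv_summary}"},
    ]

# With LiteLLM this would then be provider-agnostic, e.g.:
# litellm.completion(model="gemini/gemini-1.5-flash",
#                    messages=build_messages("golden_cross", "AAPL", summary))
```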
bot integration for telegram, discord, and wechat work
Medium confidence: Integrates with messaging platform bots (Telegram Bot API, Discord Webhooks, WeChat Work Bot API) to enable interactive analysis queries and report delivery. Users can send commands to the bot (e.g., '/analyze AAPL' or '/portfolio') and receive analysis results directly in the chat. The bot supports slash commands, inline buttons for quick actions (buy/sell/hold), and rich message formatting (embeds, cards, rich text). Bots run as separate processes and poll for messages or listen to webhooks.
Implements native bot integrations for Telegram, Discord, and WeChat Work (Chinese platform) with slash commands, inline buttons, and platform-specific rich formatting. Enables interactive analysis queries directly in chat without leaving the messaging app. Supports group chat usage with optional rate limiting to prevent abuse.
More convenient than web UI because users don't need to open a browser; analysis is delivered in their existing chat workflow. More interactive than report-only notifications because users can query analysis on-demand and execute actions via inline buttons. Supports Chinese platforms (WeChat Work) natively, unlike most Western financial APIs.
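Slash commands like '/analyze AAPL' can be parsed and dispatched in a few lines; the command names follow the examples in the text, while the handler table is a hypothetical stand-in for the bot's real handlers.

```python
def parse_command(text):
    """Split '/analyze AAPL MSFT' into ('analyze', ['AAPL', 'MSFT'])."""
    if not text.startswith("/"):
        return None
    parts = text[1:].split()
    if not parts:
        return None
    return parts[0].lower(), parts[1:]

def dispatch(text, handlers):
    """Route a chat message to a command handler, with friendly fallbacks."""
    parsed = parse_command(text)
    if parsed is None:
        return "Send a command like /analyze AAPL"
    cmd, args = parsed
    handler = handlers.get(cmd)
    if handler is None:
        return f"Unknown command: /{cmd}"
    return handler(args)
```

The same dispatcher can back Telegram polling, Discord interactions, and WeChat Work webhooks, since only the transport differs.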
github actions deployment for zero-cost scheduled execution
Medium confidence: Enables deployment of the analysis system to GitHub Actions, a free CI/CD platform that runs workflows on a schedule (cron) or on-demand. The system is packaged as a Docker container or Python script that runs in the GitHub Actions environment, fetches stock data, runs analysis, and sends notifications. No server hosting is required; GitHub Actions provides free compute for public repositories and 2,000 free minutes/month for private repositories, with paid plans beyond that. Workflows are defined in YAML and version-controlled alongside the code.
Leverages GitHub Actions free tier (2000 min/month for private repos, unlimited for public) to run scheduled analysis without paying for cloud hosting. Workflows are defined in YAML and version-controlled alongside code, enabling reproducible deployments. Integrates with GitHub Secrets for secure credential management.
More cost-effective than cloud-based scheduling (AWS Lambda, Google Cloud Scheduler) because GitHub Actions is free for public repos and cheap for private repos. More maintainable than local cron jobs because workflows are version-controlled and visible in the GitHub UI. More scalable than single-machine deployments because GitHub Actions can run multiple workflows in parallel.
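A scheduled workflow of this kind might look like the following sketch; the entry-point script, secret name, and cron expression are assumptions, not the project's actual workflow file.

```yaml
name: daily-analysis
on:
  schedule:
    - cron: "30 9 * * 1-5"   # 09:30 UTC on weekdays; adjust for market hours
  workflow_dispatch: {}       # also allow manual runs from the Actions tab
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: python main.py --batch            # hypothetical entry point
        env:
          LLM_API_KEY: ${{ secrets.LLM_API_KEY }}  # kept in GitHub Secrets
```

Note that `schedule` cron times are in UTC and may be delayed a few minutes on the free tier.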
docker compose deployment for local and cloud hosting
Medium confidence: Packages the entire analysis system (backend, frontend, database, notification services) as a Docker Compose stack that can be deployed locally or to cloud platforms (AWS, Google Cloud, DigitalOcean). The Compose file defines services for the FastAPI backend, React frontend, PostgreSQL database, and optional Redis cache. Deployment is as simple as 'docker-compose up', with all dependencies and configuration managed by the Compose file. Supports environment-based configuration (dev, staging, prod) via .env files.
Provides a complete Docker Compose stack (backend, frontend, database, cache) that enables single-command deployment ('docker-compose up') without manual service setup. Supports environment-based configuration (dev/staging/prod) via .env files. Enables local development with the same stack as production, reducing environment drift.
More convenient than manual service setup because all dependencies are defined in a single file. More reproducible than cloud-native deployments because the stack is version-controlled and can be deployed identically across environments. More accessible than Kubernetes because Docker Compose has a lower learning curve and is suitable for small to medium deployments.
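The stack described above could be expressed as a Compose file along these lines; service names, ports, and image tags are illustrative, not the project's actual file.

```yaml
# Illustrative docker-compose.yml for a backend + frontend + db + cache stack.
services:
  backend:
    build: .
    env_file: .env            # per-environment config (dev/staging/prod)
    depends_on: [db, cache]
    ports: ["8000:8000"]
  frontend:
    build: ./frontend
    ports: ["3000:80"]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes: ["pgdata:/var/lib/postgresql/data"]
  cache:
    image: redis:7
volumes:
  pgdata:
```

Swapping the `env_file` (e.g. `.env.prod`) is enough to retarget the same stack at another environment.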
systemd and cron-based scheduling for local deployment
Medium confidence: Enables deployment of the analysis system as a systemd service (Linux) or cron job that runs on a local machine or VPS. The system runs continuously as a background service, polling for scheduled analysis times and executing them. Systemd provides service management (start, stop, restart, status) and automatic restart on failure. Cron provides simple time-based scheduling without a persistent service. Both approaches require minimal infrastructure (just a Linux machine) and zero cloud hosting costs.
Provides both systemd service and cron job deployment options for Linux, enabling simple self-hosted scheduling without cloud infrastructure. Systemd provides service management (start/stop/restart) and automatic restart on failure. Cron provides simple time-based scheduling. Both approaches require minimal setup and zero cloud hosting costs.
More cost-effective than cloud-based scheduling because it runs on a cheap VPS or local machine. More reliable than manual script execution because systemd provides automatic restart and monitoring. More flexible than GitHub Actions because it supports long-running services and persistent state.
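A self-hosted setup might use a unit file like the sketch below (paths, user, and entry point are assumptions); the equivalent crontab line is shown as a comment.

```ini
; Illustrative /etc/systemd/system/stock-analysis.service
[Unit]
Description=Daily stock analysis run
After=network-online.target

[Service]
Type=oneshot
WorkingDirectory=/opt/daily_stock_analysis
EnvironmentFile=/opt/daily_stock_analysis/.env
ExecStart=/usr/bin/python3 main.py --batch

; Cron alternative (runs 17:30 on weekdays, no persistent service):
;   30 17 * * 1-5  cd /opt/daily_stock_analysis && python3 main.py --batch
```

Pairing the `oneshot` service with a systemd timer gives cron-like scheduling plus `systemctl status` visibility and automatic retry policies.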
intelligent search and context enrichment for fundamental analysis
Medium confidence: Aggregates news, risk alerts, earnings data, and capital flow from 4+ specialized search APIs (Anspire, Tavily, Bocha, SerpAPI) and enriches the LLM analysis context with up-to-date fundamental information. The search service queries for stock-specific news, regulatory filings, insider trading, and market sentiment, then embeds results into the LLM prompt as structured context to ground recommendations in real-world events rather than historical price patterns alone.
Implements a multi-API search strategy (Anspire, Tavily, Bocha, SerpAPI) with fallback logic similar to data fetching, ensuring news availability even if primary search API fails. Structures search results as context blocks for LLM prompts, enabling the AI to cite specific news events in recommendations. Supports market-specific search (A-shares, HK, US) with appropriate query formatting per market.
More comprehensive than single-source news APIs (e.g., NewsAPI alone) because it aggregates multiple providers and includes earnings/risk data. More efficient than manual news monitoring because search is automated and results are pre-structured for LLM consumption. Supports Chinese market news (via Anspire, Bocha) unlike most Western financial APIs.
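Folding search hits into a prompt-ready context block could look like the following sketch; the hit field names (`source`, `title`, `date`, `snippet`) are assumptions about the normalized search-result shape.

```python
def format_news_context(symbol, hits, max_items=5):
    """Render search hits as a numbered, source-attributed context block."""
    lines = [f"Recent news for {symbol}:"]
    for i, hit in enumerate(hits[:max_items], start=1):
        lines.append(f"{i}. [{hit['source']}] {hit['title']} ({hit['date']})")
        if hit.get("snippet"):
            lines.append(f"   {hit['snippet']}")
    if len(lines) == 1:
        lines.append("(no recent news found)")  # keep the block well-formed
    return "\n".join(lines)
```

The resulting string is prepended to the user prompt so the LLM can cite specific, dated events instead of inventing them.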
multi-agent orchestrator for complex multi-turn strategy q&a
Medium confidence: Implements a multi-agent system that decomposes complex investment questions into sub-tasks, each handled by specialized agents (technical analyst, fundamental analyst, risk manager, sentiment analyzer). Agents communicate via a shared context store and iteratively refine recommendations through multi-turn reasoning. The orchestrator routes user queries to appropriate agents, aggregates their outputs, and synthesizes a final recommendation with consensus scoring and dissent tracking.
Implements agent specialization with explicit role separation (technical analyst, fundamental analyst, risk manager, sentiment analyzer) rather than a single monolithic LLM; agents share context via a structured store and produce scored outputs that are aggregated with dissent tracking. This enables explainable AI where users can see which agents support/oppose a recommendation and why.
More transparent than single-LLM analysis because users see reasoning from multiple specialized perspectives. More robust than simple prompt engineering because agent disagreement surfaces uncertainty. Enables cost optimization by routing simple queries to cheaper agents and complex queries to more capable (expensive) models.
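Consensus scoring with dissent tracking can be sketched as confidence-weighted voting over agent verdicts; the agent roles come from the text, while the scoring scheme itself is an assumption.

```python
from collections import Counter

def aggregate(verdicts):
    """verdicts: {agent_name: (signal, confidence)} with signal in
    {'buy', 'hold', 'sell'} and confidence in [0, 1]."""
    votes = Counter()
    for signal, conf in verdicts.values():
        votes[signal] += conf                 # confidence-weighted voting
    consensus, score = votes.most_common(1)[0]
    dissenters = [name for name, (s, _) in verdicts.items() if s != consensus]
    return {
        "consensus": consensus,
        "score": round(score / sum(votes.values()), 3),  # share of total weight
        "dissent": dissenters,
    }
```

Surfacing `dissent` alongside the consensus is what makes the output explainable: a user sees not just "buy" but which specialist disagreed.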
multi-channel notification distribution with platform-specific formatting
Medium confidence: Distributes analysis reports to 10+ notification channels (Telegram, Discord, WeChat Work, Feishu, Email, Webhook, etc.) with platform-specific formatting and routing logic. Each channel has a dedicated formatter that adapts the structured analysis output (recommendation, reasoning, metrics) to the platform's native format (Telegram markdown, Discord embeds, WeChat rich text, Feishu cards). Supports group routing rules (e.g., send high-confidence recommendations to all channels, low-confidence only to email) and scheduling (immediate, daily digest, weekly summary).
Implements platform-specific formatters for 10+ channels (Telegram, Discord, WeChat Work, Feishu, Email, Webhook, etc.) with native rich formatting (Discord embeds, Feishu cards, WeChat rich text) rather than plain text. Supports group routing rules and scheduling, enabling different audiences to receive different report formats at different times. Handles platform-specific constraints (message size, rate limits, character encoding).
More comprehensive than generic notification services (e.g., Zapier, IFTTT) because it understands financial report structure and formats accordingly. More flexible than single-channel solutions because it supports simultaneous distribution to multiple platforms with audience-specific routing. Supports Chinese platforms (WeChat Work, Feishu) natively, unlike most Western notification APIs.
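A dispatcher over per-platform formatters might look like this sketch; only Telegram and Discord shapes are shown, markdown escaping is omitted, and all names are illustrative.

```python
def to_telegram(report):
    """Telegram markdown text (escaping omitted for brevity)."""
    return f"*{report['symbol']}*: {report['signal'].upper()}\n{report['summary']}"

def to_discord(report):
    """Discord webhook payload using a native embed."""
    return {"embeds": [{
        "title": f"{report['symbol']} / {report['signal'].upper()}",
        "description": report["summary"],
    }]}

FORMATTERS = {"telegram": to_telegram, "discord": to_discord}

def render(channel, report):
    """Pick the channel's formatter; fail loudly for unknown channels."""
    fmt = FORMATTERS.get(channel)
    if fmt is None:
        raise ValueError(f"no formatter for channel: {channel}")
    return fmt(report)
```

Routing rules then reduce to choosing which `(channel, report)` pairs to render and send.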
backtesting engine with 1-day validation and performance metrics
Medium confidence: Evaluates AI recommendation accuracy by comparing predicted buy/sell signals against actual 1-day forward returns. For each stock analyzed, the system records the recommendation (buy/hold/sell) and confidence score, then checks the next trading day's price movement to calculate hit rate, precision, recall, and Sharpe ratio. Results are aggregated per strategy and per LLM provider, enabling performance comparison and model selection. The backtesting engine runs continuously as new analysis is generated, building a historical performance database.
Implements continuous forward-testing (1-day validation) rather than historical backtesting, enabling real-time performance monitoring as new recommendations are generated. Aggregates performance metrics per strategy and per LLM provider, enabling A/B testing of different models and strategies. Builds a historical performance database that can be queried to identify which strategies/providers perform best in current market conditions.
More practical than historical backtesting because it validates recommendations against real market outcomes without look-ahead bias. More comprehensive than simple win-rate tracking because it calculates precision, recall, Sharpe ratio, and drawdown. Enables provider comparison (Gemini vs Claude) which most backtesting frameworks don't support.
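The 1-day validation metrics can be computed from (signal, next-day return) pairs as sketched below. The scoring convention (a "buy" is a hit if the next day's return is positive, a "sell" if negative, "hold" is skipped) is an assumption, not the project's documented rule.

```python
def forward_metrics(records):
    """records: iterable of (signal, next_day_return) pairs."""
    hits = total = buy_tp = buy_pred = buy_actual = 0
    for signal, ret in records:
        if signal == "hold":
            continue                              # holds are not scored
        total += 1
        hits += (signal == "buy") == (ret > 0)    # directional hit
        if signal == "buy":
            buy_pred += 1
            buy_tp += ret > 0                     # true positive buy
        if ret > 0:
            buy_actual += 1                       # day the price actually rose
    return {
        "hit_rate": hits / total if total else None,
        "buy_precision": buy_tp / buy_pred if buy_pred else None,
        "buy_recall": buy_tp / buy_actual if buy_actual else None,
    }
```

Grouping `records` by strategy or LLM provider before calling this yields the per-provider comparison described above.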
portfolio p&l system for position tracking and risk management
Medium confidence: Tracks open positions, entry prices, and current P&L for a user's stock portfolio. The system monitors each position against the AI's current recommendation (buy/hold/sell) and alerts when recommendations change (e.g., a held stock receives a sell signal). Calculates portfolio-level metrics (total P&L, concentration risk, sector exposure) and suggests rebalancing actions based on AI recommendations and risk thresholds. Supports multiple portfolio snapshots (e.g., 'aggressive', 'conservative') with different risk parameters.
Integrates portfolio tracking with AI recommendations, enabling users to see when their open positions conflict with current AI signals. Calculates portfolio-level risk metrics (concentration, sector exposure, Sharpe ratio) and suggests rebalancing based on both AI recommendations and risk thresholds. Supports multiple portfolio snapshots with different risk profiles (aggressive vs conservative).
More integrated than standalone portfolio trackers (e.g., Seeking Alpha, Yahoo Finance) because it connects position tracking to AI recommendations. More actionable than simple P&L tracking because it surfaces risk metrics and rebalancing suggestions. Enables multi-portfolio management with different risk profiles, unlike single-portfolio tools.
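Portfolio-level P&L and concentration risk reduce to a small computation over positions and last prices; the data shapes below are illustrative assumptions.

```python
def portfolio_metrics(positions, prices):
    """positions: {symbol: (qty, entry_price)}; prices: {symbol: last_price}."""
    values, pnl = {}, 0.0
    for sym, (qty, entry) in positions.items():
        last = prices[sym]
        values[sym] = qty * last                  # current market value
        pnl += qty * (last - entry)               # unrealized P&L
    total = sum(values.values())
    weights = {s: v / total for s, v in values.items()} if total else {}
    return {
        "total_value": total,
        "total_pnl": round(pnl, 2),
        "max_weight": max(weights.values(), default=0.0),  # concentration risk
    }
```

A rebalancing rule can then be as simple as flagging any position whose weight exceeds a per-profile threshold (e.g. 0.4 for 'aggressive', 0.2 for 'conservative').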
configuration registry with environment-based parameter management
Medium confidence: Centralizes all system configuration (API keys, LLM model selection, strategy parameters, notification channels, data source priorities) in a registry that reads from environment variables and config files. The registry supports environment-based overrides (dev, staging, prod) and enables dynamic reconfiguration without code changes. Configuration is validated at startup to catch missing API keys or invalid parameters early. Supports config inheritance (base config + environment-specific overrides) and secret management (API keys stored in .env, not in code).
Implements a centralized configuration registry (src/core/config_registry.py) that reads from .env files and environment variables, supporting environment-based overrides (dev/staging/prod) without code changes. Configuration is validated at startup to catch missing API keys or invalid parameters early. Supports secret management via .env files and enables dynamic provider selection (LLM, data source, notification channel) via configuration.
More flexible than hardcoded configuration because it supports environment-based overrides and dynamic provider selection. More secure than storing API keys in code because secrets are in .env (excluded from git). More maintainable than scattered config files because all configuration is centralized in a single registry.
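An env-backed registry with startup validation might look like the sketch below; the variable names (`LLM_API_KEY`, `LLM_MODEL`, `ENV`) are assumptions, not the project's actual keys.

```python
import os

REQUIRED = ["LLM_API_KEY"]                        # fail fast if absent
DEFAULTS = {"LLM_MODEL": "gemini/gemini-1.5-flash", "ENV": "dev"}

def load_config(env=None):
    """Build config from DEFAULTS overridden by the environment mapping."""
    env = os.environ if env is None else env
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise RuntimeError(f"missing required config: {', '.join(missing)}")
    cfg = dict(DEFAULTS)
    cfg.update({k: v for k, v in env.items() if k in DEFAULTS or k in REQUIRED})
    return cfg
```

Because validation happens at load time, a missing API key aborts the run immediately instead of failing mid-analysis.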
command-line interface for ad-hoc analysis and scheduling
Medium confidence: Provides a CLI for running stock analysis on-demand or on a schedule (daily, weekly, monthly). Users can specify stock symbols, strategies, and notification channels via command-line arguments or interactive prompts. The CLI supports batch analysis (analyze 100 stocks in one run) and dry-run mode (show what would be analyzed without actually running). Integrates with system schedulers (cron on Linux, Task Scheduler on Windows) to enable zero-cost scheduled execution via GitHub Actions or local systemd.
Implements a full-featured CLI that supports on-demand analysis, batch processing, dry-run mode, and integration with system schedulers (cron, GitHub Actions, systemd). Enables zero-cost scheduled execution via GitHub Actions (free tier) or local systemd, eliminating the need for paid cloud servers. Supports both interactive prompts and command-line arguments for flexibility.
More accessible than web UI or API because it requires no server setup or web browser. More cost-effective than cloud-based scheduling services (AWS Lambda, Google Cloud Scheduler) because it integrates with free GitHub Actions or local cron. More flexible than single-run scripts because it supports batch analysis, dry-run, and multiple notification channels.
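A CLI surface of this shape could be built with `argparse`; the flag names below are assumptions chosen to match the features described, not the project's actual options.

```python
import argparse

def build_parser():
    p = argparse.ArgumentParser(prog="daily_stock_analysis")
    p.add_argument("symbols", nargs="+",
                   help="stock codes to analyze, e.g. AAPL 600519 0700.HK")
    p.add_argument("--strategy", default="bull_trend",
                   help="built-in strategy skill to apply")
    p.add_argument("--channels", nargs="*", default=["email"],
                   help="notification channels for the report")
    p.add_argument("--dry-run", action="store_true",
                   help="show what would be analyzed without running")
    return p
```

A cron or GitHub Actions job then just invokes the same entry point with fixed arguments, so scheduled and ad-hoc runs share one code path.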
web ui with fastapi backend and react frontend for interactive analysis
Medium confidence: Provides a web-based dashboard for viewing stock analysis results, managing portfolios, and configuring analysis parameters. The backend is built with FastAPI and exposes REST APIs for fetching analysis history, portfolio data, and performance metrics. The frontend is a React SPA that displays charts (price, technical indicators, performance trends), recommendation cards, and portfolio summary. Users can trigger ad-hoc analysis, adjust strategy parameters, and view detailed reasoning for each recommendation without touching code.
Implements a full-stack web application with FastAPI backend and React frontend, enabling interactive analysis without CLI. Supports real-time chart rendering with technical indicators and portfolio visualization. Enables parameter adjustment via UI without code changes, making the system accessible to non-technical users.
More user-friendly than CLI because it provides visual feedback and interactive controls. More comprehensive than simple report generation because it enables exploration (drill-down into strategy details, compare stocks, adjust parameters). More polished than Jupyter notebooks because it's production-ready and doesn't require technical knowledge to use.
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with daily_stock_analysis, ranked by overlap. Discovered automatically through the match graph.
MarketAlerts.ai
AI-powered tool delivers real-time market alerts and...
go-stock
🦄🦄🦄 AI-powered stock analysis: an AI-assisted stock analysis and screening tool. Market data retrieval, AI trending-news analysis, AI capital-flow/financial analysis, and price-alert push notifications. Supports A-shares, HK stocks, and US stocks, plus market-wide and per-stock sentiment analysis and AI-assisted stock screening. All data stays local. Supports DeepSeek, OpenAI, Ollama, LMStudio, AnythingLLM, SiliconFlow, Volcengine Ark, Alibaba Cloud Bailian, and other platforms and models.
Alpha
AI-driven tool for real-time investment insights and...
hoopsAI
Unlock personalized, AI-driven trading insights across multiple...
Uptrends.ai
The first AI stock market news monitoring platform made for DIY investors. Uptrends.ai analyzes chatter to help you find the trends & events that...
Stocknews AI
AI-curated real-time stock news from 100+...
Best For
- ✓Quantitative traders building automated analysis systems across multiple Asian and US markets
- ✓Teams deploying stock analysis agents that require high availability and minimal manual intervention
- ✓Quantitative traders who want LLM-augmented analysis but need to enforce trading discipline and strategy consistency
- ✓Teams evaluating multiple LLM providers for financial analysis and want A/B testing infrastructure built-in
- ✓Teams that use Telegram, Discord, or WeChat Work for communication and want analysis integrated into their chat workflow
- ✓Solo traders who want to check analysis on-the-go via mobile messaging apps
- ✓Solo developers and small teams who want to deploy analysis without cloud hosting costs
- ✓Open-source projects that want to leverage GitHub Actions free tier for continuous analysis
Known Limitations
- ⚠Circuit breaker adds ~500ms-2s latency per failover attempt; no caching between runs means repeated API calls for same symbols
- ⚠Data freshness varies by provider (EFinance near real-time, YFinance delayed 15-20 min); no unified timestamp normalization
- ⚠Chip distribution data only available from select providers; fallback may return incomplete fundamental data
- ⚠LLM reasoning adds 2-5s latency per stock per strategy; no caching of analysis results means repeated calls for same stock/strategy combination
- ⚠Strategy 'skills' are hardcoded prompts; customizing or adding new strategies requires modifying source code and redeploying
- ⚠LLM hallucination risk remains; no built-in fact-checking against real-time market data or earnings reports
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
Repository Details
Last commit: Apr 21, 2026