federated multi-source query orchestration with parallel execution
Executes a single user query across 100+ heterogeneous data sources simultaneously using Celery workers and asynchronous task distribution, without copying or indexing data. The Search Orchestrator (swirl/models.py Search class) decomposes queries into source-specific formats, dispatches parallel tasks to Celery workers, and aggregates results as they complete. Uses Django ORM to manage Search objects with state tracking (RUNNING, COMPLETED, FAILED) and WebSocket communication for real-time progress updates to the Galaxy UI.
Unique: Unlike traditional metasearch engines that require data indexing, SWIRL queries live data in-place through connector adapters that translate queries to source-native formats (SQL, GraphQL, REST, Elasticsearch DSL). Per-source connector abstraction (swirl/connectors/) parallelizes queries across heterogeneous sources without data movement, while Django ORM state management tracks the full search lifecycle.
vs alternatives: Faster than centralized data warehouse approaches for real-time queries because it eliminates ETL latency and data sync delays; more secure than cloud-based search services because data never leaves on-premises systems.
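The fan-out pattern described above can be sketched with a thread pool standing in for Celery workers; source names, adapters, and result fields here are illustrative, not SWIRL's API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical source adapters: each translates the user query into a
# source-native form and returns normalized result dicts.
SOURCES = {
    "postgres": lambda q: [{"source": "postgres", "title": f"row matching {q}"}],
    "elastic":  lambda q: [{"source": "elastic", "title": f"doc matching {q}"}],
    "rest_api": lambda q: [{"source": "rest_api", "title": f"item matching {q}"}],
}

def federated_search(query, timeout=10):
    """Dispatch the query to every source in parallel and aggregate
    results as each source completes (Celery workers play this role
    in SWIRL, with Search state tracked in the Django ORM)."""
    results, errors = [], {}
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        futures = {pool.submit(fn, query): name for name, fn in SOURCES.items()}
        for fut in as_completed(futures, timeout=timeout):
            name = futures[fut]
            try:
                results.extend(fut.result())
            except Exception as exc:  # one failing source never blocks the rest
                errors[name] = str(exc)
    return results, errors

hits, errs = federated_search("error budget")
```

The key property is per-source isolation: a slow or failing source contributes an error entry instead of stalling the aggregate.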
connector-based data source abstraction with format translation
Provides extensible connector framework (swirl/connectors/connector.py base class) that abstracts 100+ data sources (HTTP APIs, databases, search engines, Microsoft Graph) into a unified interface. Each connector translates SWIRL's normalized query format into source-native syntax (SQL WHERE clauses, Elasticsearch queries, REST API parameters, GraphQL), executes the query, and normalizes results back to SWIRL's unified schema. Supports HTTP connectors for REST/GraphQL APIs, database connectors for SQL/NoSQL, and specialized connectors for Salesforce, Jira, Microsoft 365, Slack, BigQuery, and others.
Unique: The connector base class (swirl/connectors/connector.py) exposes pluggable execute() and normalize_results() methods, so each source defines its own query translation and result mapping logic. Unlike generic API clients, each of the 100+ pre-built connectors understands source-specific pagination, authentication, and result structure.
vs alternatives: More flexible than API aggregation libraries because connectors can implement source-specific optimizations (e.g., Elasticsearch filter context vs. query context); more maintainable than hand-rolled query translation because the connector interface is standardized.
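The connector contract can be sketched roughly as follows; the method names follow the description above, but the class shapes and the SQLConnector example are illustrative, not SWIRL's actual base class:

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """Simplified connector contract: translate the normalized query,
    execute it against the source, then map raw rows to a unified schema."""

    @abstractmethod
    def execute(self, query):
        """Run the query in the source's native format; return raw rows."""

    @abstractmethod
    def normalize_results(self, raw):
        """Map raw rows into the unified result schema."""

    def search(self, query):
        return self.normalize_results(self.execute(query))

class SQLConnector(Connector):
    def __init__(self, table):
        self.table = table

    def execute(self, query):
        # Translate to a source-native WHERE clause; a real connector
        # would run this against the database with parameter binding.
        sql = f"SELECT title, body FROM {self.table} WHERE body LIKE ?"
        return [("Quarterly report", f"... {query} ...", sql)]

    def normalize_results(self, raw):
        return [{"title": t, "snippet": s} for t, s, _ in raw]

results = SQLConnector("documents").search("revenue")
```

A GraphQL or REST connector would override the same two methods, which is what keeps 100+ heterogeneous sources behind one interface.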
galaxy web ui with search interface and result visualization
Provides Galaxy web-based user interface (Django templates, static files, JavaScript) accessible at port 8000 for searching and visualizing results. Implements real-time search progress tracking via WebSocket, progressive result display as sources complete, and result filtering/sorting. Supports both simple keyword search and advanced search with filters, date ranges, and field-specific queries. Includes result preview, source attribution, and relevance scoring visualization. Built with Django templates and vanilla JavaScript for minimal dependencies.
Unique: Ships as part of the Django application itself (templates, static files, vanilla JavaScript) rather than as a separate SPA, with WebSocket integration for real-time search progress and result streaming. The minimal-dependency frontend keeps customization simple.
vs alternatives: More integrated than a separate frontend because it ships with the SWIRL Search application; more responsive than traditional search UIs because results stream over WebSocket as sources complete; more customizable than SaaS search interfaces because the source code is available.
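The progressive-display pattern behind the UI can be sketched with a queue standing in for the WebSocket channel; the message shapes and names are illustrative, not Galaxy's actual protocol:

```python
import json
import queue
import threading

def stream_progress(sources, run_source):
    """Each source posts a message as it completes, and the consumer
    receives messages in completion order, the way the Galaxy UI
    receives WebSocket frames while slower sources are still running."""
    q = queue.Queue()

    def worker(name):
        hits = run_source(name)
        q.put(json.dumps({"source": name, "status": "COMPLETED", "hits": hits}))

    threads = [threading.Thread(target=worker, args=(s,)) for s in sources]
    for t in threads:
        t.start()
    for _ in sources:  # one frame per source, in completion order
        yield json.loads(q.get())
    for t in threads:
        t.join()

frames = list(stream_progress(["jira", "slack"], lambda s: [f"{s}-hit"]))
```

The UI can render each frame immediately, so fast sources appear while slow ones are still in flight.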
asynchronous task execution with celery worker pool and result caching
Implements asynchronous search execution using a Celery task queue (swirl/tasks.py) with a configurable worker pool for parallel query execution across sources. Each source query is dispatched as a separate Celery task, allowing independent execution and failure handling. Results are cached in Redis (configurable TTL) to avoid redundant queries for identical search parameters. Celery workers can be scaled horizontally to handle increased query load. Supports task monitoring, retry logic, and a dead-letter queue for failed tasks.
Unique: Per-source task granularity means a slow or failing source never blocks the rest of the search. Identical queries are served from the Redis cache within the TTL window, and failed tasks flow through retry logic into a dead-letter queue rather than failing the whole search.
vs alternatives: More scalable than synchronous execution because it allows horizontal scaling of workers; more responsive than blocking execution because UI updates are pushed via WebSocket while tasks execute; more resilient than single-threaded execution because task failures don't block other queries.
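The cache-then-retry behavior can be sketched without Celery or Redis: an in-process dict plays Redis's role, and the retry loop mirrors what a task decorator with backoff would do. All names here are illustrative:

```python
import hashlib
import json
import time
from functools import wraps

CACHE, TTL = {}, 300  # Redis plays this role in SWIRL; TTL is illustrative

def cached_search(fn):
    """Cache results keyed by the exact search parameters, and retry
    transient failures with exponential backoff, Celery-task style."""
    @wraps(fn)
    def wrapper(params, max_retries=3):
        key = hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()
        hit = CACHE.get(key)
        if hit and time.monotonic() - hit[0] < TTL:
            return hit[1]  # identical search parameters: serve from cache
        for attempt in range(max_retries):
            try:
                result = fn(params)
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise  # a real setup routes this to a dead-letter queue
                time.sleep(2 ** attempt)  # exponential backoff between retries
        CACHE[key] = (time.monotonic(), result)
        return result
    return wrapper

calls = []

@cached_search
def query_source(params):
    calls.append(params)
    return {"hits": [params["q"]]}

first = query_source({"q": "uptime"})
second = query_source({"q": "uptime"})  # served from cache; fn not re-run
```

Hashing the canonicalized parameters (sort_keys=True) is what makes two identical searches share one cache entry regardless of key order.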
source-specific authentication and credential management
Implements per-source authentication handling (swirl/connectors/) supporting multiple authentication methods: API keys, OAuth 2.0, basic auth, database credentials, and custom authentication schemes. Each connector manages its own authentication logic, allowing sources to use different authentication methods simultaneously. Credentials are stored in Django settings or environment variables (not in code). Supports OAuth token refresh for long-lived sessions. No centralized credential vault; requires external integration for enterprise credential management.
Unique: Authentication is resolved per connector rather than globally, so API-key, OAuth 2.0, basic-auth, and database-credential sources can coexist in one deployment. Credentials live in environment variables or Django settings, never in code.
vs alternatives: More flexible than a single global authentication method because each source can use its own; more secure than hardcoded credentials because secrets live in environment variables; supports OAuth token refresh, unlike basic-auth-only solutions.
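Per-source credential resolution can be sketched as below; the auth methods shown and all variable and field names are illustrative, not SWIRL's configuration schema:

```python
import base64
import os

def build_auth_headers(source):
    """Resolve credentials for one source from environment variables
    (never from code) and build the matching HTTP auth header.
    Each source declares its own method, so different sources can use
    different schemes in the same deployment."""
    method = source["auth"]
    if method == "api_key":
        return {"Authorization": f"Bearer {os.environ[source['key_var']]}"}
    if method == "basic":
        raw = f"{os.environ[source['user_var']]}:{os.environ[source['pass_var']]}"
        return {"Authorization": "Basic " + base64.b64encode(raw.encode()).decode()}
    raise ValueError(f"unsupported auth method: {method}")

os.environ["JIRA_TOKEN"] = "example-token"  # normally set by the deployment
headers = build_auth_headers({"auth": "api_key", "key_var": "JIRA_TOKEN"})
```

A real connector would extend this with OAuth token refresh; the point is that the source config names only *where* the secret lives, not the secret itself.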
admin interface for source configuration and search management
Provides Django admin interface for configuring data sources, managing searches, and monitoring system health. Allows admins to add/edit/delete data sources, configure connector parameters, set authentication credentials, and manage search history. Includes admin guide (docs/Admin-Guide.md) for production deployment and troubleshooting. Supports bulk operations for managing multiple sources. Provides search analytics (query volume, source performance, result quality metrics).
Unique: Source configuration happens entirely in the Django admin, so admins can add, edit, or delete data sources without code changes. The admin guide (docs/Admin-Guide.md) covers production deployment, and search analytics and system health monitoring surface in the same interface.
vs alternatives: More accessible than code-based configuration because non-developers get a UI; more integrated than separate admin tools because it is part of the SWIRL Search application; more transparent than file-only configuration because every setting is visible in the admin interface.
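Registering a source-configuration model with the Django admin is roughly what enables this; a sketch, assuming a hypothetical SearchProvider model whose field names are illustrative, not SWIRL's schema:

```python
# admin.py — sketch only; model and field names are illustrative.
# Registering the model is all it takes to get add/edit/delete forms
# and bulk actions in the Django admin.
from django.contrib import admin
from .models import SearchProvider  # hypothetical source-config model

@admin.register(SearchProvider)
class SearchProviderAdmin(admin.ModelAdmin):
    list_display = ("name", "connector", "active")  # columns in the list view
    list_filter = ("connector", "active")           # sidebar filters
    search_fields = ("name",)
    actions = ["activate"]                          # bulk operation

    @admin.action(description="Activate selected sources")
    def activate(self, request, queryset):
        queryset.update(active=True)
```

Bulk actions like the one above are how "managing multiple sources" stays a one-click operation instead of a script.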
result normalization and relevance re-ranking across heterogeneous sources
Implements result processing pipeline (swirl/processors/) that normalizes results from different sources into unified schema, applies relevance re-ranking algorithms, and deduplicates results. The Mixer component (swirl/mixers/mixer.py) combines results from multiple sources using configurable ranking strategies (BM25, TF-IDF, LLM-based relevance scoring). Processors transform raw connector output into normalized Result objects with standardized fields, handle PII removal (swirl/processors/remove_pii.py), and apply source-specific post-processing. Results are re-ranked based on relevance scores, source credibility, and recency.
Unique: Implements pluggable processor pipeline (swirl/processors/processor.py base class) where each processor transforms results independently, enabling composition of normalization, ranking, and filtering logic. Mixer component (swirl/mixers/mixer.py) applies configurable ranking strategies (BM25, TF-IDF, or custom) to re-rank results from heterogeneous sources. PII removal processor uses pattern matching to detect and redact sensitive data before returning results.
vs alternatives: More flexible than fixed ranking algorithms because mixer strategies are pluggable; more comprehensive than simple result concatenation because it handles deduplication and PII removal in pipeline.
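The composable-pipeline idea can be sketched as follows: each stage takes and returns a result list, so deduplication and re-ranking stack freely. Field names and the ranking key are illustrative, not SWIRL's processors:

```python
def dedupe(results):
    """Drop results whose URL was already seen, as when two sources
    return the same document."""
    seen, out = set(), []
    for r in results:
        if r["url"] not in seen:
            seen.add(r["url"])
            out.append(r)
    return out

def rerank(results):
    # Re-rank by relevance score, most recent first on ties; a BM25 or
    # LLM-based strategy would slot in here the same way.
    return sorted(results, key=lambda r: (-r["score"], -r["date"]))

def run_pipeline(results, processors):
    """Each processor transforms the full result list independently,
    so normalization, dedup, ranking, and PII removal compose freely."""
    for p in processors:
        results = p(results)
    return results

raw = [
    {"url": "a", "score": 0.4, "date": 2},
    {"url": "b", "score": 0.9, "date": 1},
    {"url": "a", "score": 0.4, "date": 2},  # duplicate from a second source
]
ranked = run_pipeline(raw, [dedupe, rerank])
```

Swapping the ranking strategy is just passing a different callable, which is the flexibility the mixer design buys.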
retrieval-augmented generation (rag) with llm-powered answer synthesis
Implements RAG pipeline (swirl/processors/rag.py) that uses LLM APIs (OpenAI, Anthropic, Ollama, Azure OpenAI) to synthesize answers from search results without moving data. The RAG processor takes normalized search results, constructs a prompt with result snippets as context, and calls the configured LLM to generate a natural language answer. Supports streaming responses via WebSocket to Galaxy UI for real-time answer generation. Integrates with search result ranking to prioritize high-relevance results in LLM context window.
Unique: Implements RAG as a processor in the result processing pipeline (swirl/processors/rag.py), allowing it to be composed with other processors (normalization, ranking, PII removal). Supports multiple LLM providers (OpenAI, Anthropic, Ollama, Azure) through pluggable LLM client abstraction. Streams responses via WebSocket to Galaxy UI for real-time answer generation without waiting for full LLM completion.
vs alternatives: More flexible than monolithic RAG systems because RAG is optional and composable with other processors; supports multiple LLM providers unlike single-model solutions; streams responses for better UX compared to batch answer generation.
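The context-packing step of the RAG pipeline can be sketched as below; a character budget stands in for real token counting, and all field names are illustrative rather than taken from swirl/processors/rag.py:

```python
def build_rag_prompt(query, results, max_chars=1000):
    """Pack the highest-ranked snippets into the LLM context window
    (highest relevance first, stopping at the budget), then ask the
    model to answer from that context only. The assembled prompt would
    be sent to whichever LLM provider is configured."""
    context, used = [], 0
    for r in sorted(results, key=lambda r: -r["score"]):
        snippet = f"[{r['source']}] {r['snippet']}"
        if used + len(snippet) > max_chars:
            break  # context window full; lower-ranked results are dropped
        context.append(snippet)
        used += len(snippet)
    return (
        "Answer the question using only the context below.\n\n"
        + "\n".join(context)
        + f"\n\nQuestion: {query}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What changed in Q3?",
    [{"source": "jira", "snippet": "Q3 roadmap shifted.", "score": 0.9},
     {"source": "wiki", "snippet": "Q3 notes.", "score": 0.5}],
)
```

Ranking-aware packing is why re-ranking upstream matters: when the budget runs out, it is the low-relevance snippets that get cut.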
+6 more capabilities