25 capabilities.
Five themes.
One runtime.
Everything you need to build, govern, and scale production AI workflows — in one open-source package.
Design, route, and execute intelligent workflows
From simple chains to complex DAGs — weaveIntel's orchestration engine handles it all.
Loom Engine
Visual DAG-based workflow orchestration. Define complex pipelines with branching, loops, and parallel execution paths.
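To make the DAG idea concrete, here is a minimal sketch of dependency-ordered step execution. The step names, the `steps` mapping, and the `run` helper are invented for illustration; they are not weaveIntel's actual API.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: `summarise` and `classify` branch off `fetch`,
# and `report` joins the two branches. Independent branches could run
# in parallel; here they run in a valid topological order.
steps = {
    "fetch":     lambda ctx: {**ctx, "raw": "document text"},
    "summarise": lambda ctx: {**ctx, "summary": ctx["raw"][:8]},
    "classify":  lambda ctx: {**ctx, "label": "doc"},
    "report":    lambda ctx: {**ctx, "report": f"{ctx['label']}: {ctx['summary']}"},
}

# Edges declare dependencies: node -> set of prerequisite nodes.
dag = {"summarise": {"fetch"}, "classify": {"fetch"}, "report": {"summarise", "classify"}}

def run(dag, steps):
    ctx = {}
    for node in TopologicalSorter(dag).static_order():
        ctx = steps[node](ctx)
    return ctx

result = run(dag, steps)
print(result["report"])  # doc: document
```

The same shape extends naturally to loops and conditional branches by letting a step decide which successors receive its output.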
Agent Runtime
Autonomous AI agent execution with tool calling, memory, and reasoning. Agents collaborate to solve complex problems.
Task Routing
Intelligent routing of work items across agents and humans. Priority queues, load balancing, and skill-based assignment.
Multi-Model Support
Use any LLM — OpenAI, Anthropic, Google, local models. Switch providers without changing a line of workflow code.
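The provider-swap idea can be sketched as a tiny registry behind one interface. The provider classes and `run_step` helper below are illustrative stand-ins, not weaveIntel's real abstraction.

```python
# Workflow code targets one interface; a registry maps names to backends.
# All class and function names here are invented for this sketch.
class EchoProvider:
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class ShoutProvider:
    def complete(self, prompt: str) -> str:
        return prompt.upper()

PROVIDERS = {"echo": EchoProvider(), "shout": ShoutProvider()}

def run_step(prompt: str, provider: str = "echo") -> str:
    # Changing `provider` changes the backend; the workflow code is unchanged.
    return PROVIDERS[provider].complete(prompt)

print(run_step("hello"))           # echo: hello
print(run_step("hello", "shout"))  # HELLO
```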
Human-in-the-Loop
Configurable approval gates for sensitive decisions. Humans review, override, or approve AI actions before execution.
Every decision tracked, every action policy-checked
Enterprise-grade governance built into the runtime, not bolted on after the fact.
Policy Engine
Declarative policy enforcement on every AI interaction. Define rules in YAML — applied consistently across all workflows.
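A policy file might look something like the fragment below. The key names and structure are hypothetical, shown only to illustrate the shape of declarative, per-interaction rules; consult the actual schema docs before writing real policies.

```yaml
# Illustrative policy fragment — field names are invented, not weaveIntel's schema.
policies:
  - name: block-pii-exfiltration
    applies_to: ["*"]              # every workflow and agent
    rules:
      - deny: output.contains_pii
        action: redact
  - name: require-approval-for-writes
    applies_to: ["crm-agent"]
    rules:
      - when: tool.side_effect == "write"
        require: human_approval
```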
Prompt Safety
Prompt injection detection across 11 categories. Content filtering, PII redaction, and harmful output prevention.
Audit Trail
Complete logging of every AI decision and action. Immutable audit records with full provenance chains.
Cost Governance
Token usage tracking per workflow, agent, and team. Budget limits, alerts, and automatic throttling.
Execution Policies
Approval workflows, scope boundaries, and delegation controls. Define what each agent can and cannot do.
Plug into anything, integrate with everything
Universal protocols and composable tools — no more custom adapter code.
Universal MCP
Model Context Protocol for standardised tool integration. One protocol, every tool — databases, APIs, SaaS platforms.
Tool Framework
Composable tool system with typed input/output contracts. Build tools once, reuse across any workflow.
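A typed input/output contract can be sketched with plain dataclasses. The `KeywordSearchTool` and its contract types are invented for illustration, not weaveIntel's tool API.

```python
from dataclasses import dataclass

@dataclass
class SearchInput:
    query: str
    limit: int = 5

@dataclass
class SearchOutput:
    results: list[str]

class KeywordSearchTool:
    """A reusable tool: typed input in, typed output out."""
    input_type = SearchInput
    output_type = SearchOutput

    def __init__(self, corpus: list[str]):
        self.corpus = corpus

    def run(self, inp: SearchInput) -> SearchOutput:
        hits = [doc for doc in self.corpus if inp.query.lower() in doc.lower()]
        return SearchOutput(results=hits[: inp.limit])

tool = KeywordSearchTool(["Loom Engine docs", "Agent Runtime guide", "Loom tutorial"])
out = tool.run(SearchInput(query="loom", limit=2))
print(out.results)  # ['Loom Engine docs', 'Loom tutorial']
```

Because the contract is explicit, any workflow that can produce a `SearchInput` can reuse the tool unchanged.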
Field Registry
Dynamic schema management for cross-system data mapping. Fields are first-class citizens with validation and access control.
Webhook Engine
Event-driven integrations with external systems. Inbound and outbound webhooks with retry logic and dead-letter queues.
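Retry-with-backoff plus a dead-letter queue can be sketched in a few lines. `deliver_with_retries` and the `flaky` endpoint are illustrative stand-ins for a real HTTP delivery path.

```python
import time

# Sketch of outbound-webhook delivery: retry with exponential backoff,
# then park undeliverable events in a dead-letter queue. Names invented.
def deliver_with_retries(event, send, max_attempts=3, base_delay=0.01):
    dead_letter = []
    for attempt in range(max_attempts):
        try:
            send(event)
            return True, dead_letter
        except ConnectionError:
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    dead_letter.append(event)  # exhausted retries: keep for later inspection
    return False, dead_letter

# Simulated endpoint that fails twice, then succeeds.
calls = {"n": 0}
def flaky(event):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("endpoint unavailable")

ok, dlq = deliver_with_retries({"type": "workflow.completed"}, flaky)
print(ok, dlq)  # True []
```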
Data Adapters
Connect to PostgreSQL, Redis, S3, REST APIs, GraphQL, and file systems. Unified interface for all data sources.
From prototype to production, effortlessly
Built for multi-tenant, high-throughput, real-world workloads from day one.
Multi-Tenancy
Isolated tenant environments with shared infrastructure. Data separation, config inheritance, and team boundaries.
Worker Pools
Distributed execution with configurable concurrency. Priority-based scheduling, retry policies, and graceful degradation.
Caching Layer
Intelligent response caching for cost reduction. Semantic deduplication cuts redundant LLM calls by up to 40%.
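The deduplication idea, in miniature: reuse a cached response when a new prompt is close enough to one already answered. Production systems would compare embeddings; plain string similarity stands in here, and the `SemanticCache` class is invented for this sketch.

```python
from difflib import SequenceMatcher

class SemanticCache:
    """Toy semantic-dedup cache: near-duplicate prompts hit the cache."""
    def __init__(self, threshold=0.85):
        self.threshold = threshold
        self.entries = []  # (prompt, response) pairs

    def get(self, prompt):
        for cached_prompt, response in self.entries:
            if SequenceMatcher(None, prompt, cached_prompt).ratio() >= self.threshold:
                return response  # close enough: skip the LLM call
        return None

    def put(self, prompt, response):
        self.entries.append((prompt, response))

cache = SemanticCache()
cache.put("Summarise the Q3 sales report", "Q3 summary ...")
hit = cache.get("Summarize the Q3 sales report")  # near-duplicate wording
print(hit is not None)  # True
```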
Metrics & Analytics
Real-time dashboards for performance, usage, and cost. Per-workflow, per-agent, and per-tenant visibility.
Auto-Scaling
Dynamic resource allocation based on demand. Scale workers, model routing, and queue depth automatically.
Make it yours — plugins, templates, APIs
weaveIntel is designed to be extended. Add capabilities without touching core code.
Plugin Architecture
Extend platform capabilities with custom plugins. Hot-loadable modules with lifecycle hooks and dependency injection.
Custom Evaluators
Build your own quality evaluation pipelines. Score agent outputs on accuracy, safety, relevance — any metric you define.
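A custom evaluator can be as simple as a function that maps an output to named scores. The metrics below (key-term coverage, a length gate) are invented for illustration; a real pipeline would define its own.

```python
# Illustrative evaluator: score an agent's output on metrics you define.
def evaluate(output: str, reference_terms: list[str], max_length: int = 200) -> dict:
    terms_found = sum(1 for t in reference_terms if t.lower() in output.lower())
    return {
        "relevance": terms_found / len(reference_terms),  # key-term coverage
        "concise": len(output) <= max_length,             # simple length gate
    }

scores = evaluate(
    "The Loom Engine routes tasks across agents.",
    reference_terms=["loom", "agents", "routing"],
)
print(scores)  # relevance = 2/3, concise = True
```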
Template System
Reusable workflow and agent templates. Package best-practice patterns as shareable, versionable templates.
API Layer
RESTful APIs for programmatic access. Every capability exposed via typed, documented, versioned endpoints.
Cognitive Extensions
Add custom reasoning capabilities — chain-of-thought, reflection loops, planning strategies — as pluggable modules.
geneWeave: The runtime in action
A production-ready AI chatbot that uses all five themes. Clone it, study it, build on it.
Full-stack chatbot, zero boilerplate
geneWeave is a complete AI chatbot application that demonstrates orchestration, governance, tool integration, multi-tenancy, and extensibility — all powered by weaveIntel.
Three paths to your first workflow
Whether you want to explore the code, deploy a chatbot, or build a custom workflow — start here.
Read the Docs
Architecture overview, API reference, configuration guides, and tutorials. Start understanding the runtime.
Documentation →
Deploy geneWeave
Clone the example chatbot, configure your models, and have a production AI assistant running in minutes.
Clone geneWeave →
Star & Contribute
Join the community. Star the repo, open issues, submit PRs. weaveIntel is built in the open.
Star on GitHub →
Built in the open, for everyone
weaveIntel is Apache 2.0 licensed. No CLA, no contributor agreements, no strings attached. Fork it, extend it, ship it.