The intelligent AI runtime with 25 production-ready capabilities. Orchestrate, govern, connect, scale, and extend — all from a single, MIT-licensed framework.
Declarative workflow engine for multi-step AI pipelines. Define complex agent interactions as composable Looms with automatic retry, branching, and parallel execution.
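A minimal sketch of the Loom idea — ordered steps with automatic per-step retry. The names here are illustrative, not the actual weaveIntel API:

```python
import time

def run_loom(steps, payload, max_retries=2):
    """Run step functions in order, retrying each on failure.

    Each step takes the previous step's output and returns the
    next payload, so steps compose like a pipeline.
    """
    for step in steps:
        for attempt in range(max_retries + 1):
            try:
                payload = step(payload)
                break
            except Exception:
                if attempt == max_retries:
                    raise
                time.sleep(0.01 * (2 ** attempt))  # simple exponential backoff
    return payload

# Example: a two-step pipeline
steps = [lambda x: x.strip(), lambda x: x.upper()]
print(run_loom(steps, "  hello  "))  # HELLO
```

Branching and parallel execution would layer on top of the same composable-step shape.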
Intelligent routing layer that matches incoming requests to the optimal skill based on intent classification, cost constraints, and quality requirements.
Persistent, searchable memory layer for AI agents. Stores conversation history, learned facts, and contextual state across sessions with configurable retention.
Optimises context window usage across model providers. Automatically compresses, prioritises, and manages token allocation for maximum quality at minimum cost.
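The core trimming behaviour can be sketched in a few lines — evict the oldest context until the token budget fits. The word-count tokenizer below is a stand-in for a real one, and none of these names come from the weaveIntel API:

```python
def fit_to_budget(messages, budget, count_tokens=lambda m: len(m.split())):
    """Drop the oldest messages until the estimated token count fits.

    count_tokens approximates tokens as whitespace-separated words;
    a real system would use the provider's tokenizer.
    """
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > budget:
        kept.pop(0)  # evict the oldest message first
    return kept

history = ["first message here", "a longer second message follows", "latest turn"]
print(fit_to_budget(history, budget=7))
```

A production optimiser would also compress and prioritise messages rather than only evicting them.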
Complete lifecycle management for AI agents — from creation and configuration to versioning, deployment, and retirement with full audit history.
Declarative policy framework that enforces rules on every AI interaction. Define cost limits, content policies, access controls, and compliance requirements as code.
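Policy-as-code reduces to evaluating declared rules against each request before it runs. A toy illustration (the `Policy` shape and field names are invented for this sketch, not weaveIntel's):

```python
from dataclasses import dataclass

@dataclass
class Policy:
    max_cost_usd: float
    blocked_terms: tuple

def enforce(policy, prompt, estimated_cost):
    """Return a list of violations; an empty list means the request may proceed."""
    violations = []
    if estimated_cost > policy.max_cost_usd:
        violations.append("cost_limit_exceeded")
    if any(term in prompt.lower() for term in policy.blocked_terms):
        violations.append("content_policy")
    return violations

policy = Policy(max_cost_usd=0.50, blocked_terms=("ssn",))
print(enforce(policy, "Summarise this report", estimated_cost=0.10))  # []
print(enforce(policy, "List every SSN", estimated_cost=0.90))
```

Because policies are plain data, they can be versioned, reviewed, and tested like any other code.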
Multi-layer PII detection using named entity recognition, regex patterns, and ML classifiers. Automatically redacts sensitive data before it reaches third-party APIs.
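The regex layer of such a pipeline might look like this — the NER and ML layers would catch what fixed patterns miss. These patterns are illustrative examples, not weaveIntel's actual rule set:

```python
import re

# Regex layer only; intentionally simple patterns for illustration.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace each detected entity with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane@example.com, SSN 123-45-6789."))
# Contact [EMAIL], SSN [SSN].
```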
Cryptographic audit trails for every AI decision. Track data lineage from input to output with tamper-evident logs and full reproducibility.
Granular access control for AI capabilities. Define who can use which models, skills, and data sources with tenant-aware permission hierarchies.
Multi-model content filtering that detects harmful, biased, or inappropriate content in both inputs and outputs with configurable severity thresholds.
Unified interface to 50+ model providers. Switch between OpenAI, Anthropic, Google, Cohere, and local models without changing application code.
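Provider-agnostic code follows the adapter pattern: call sites depend on a shared interface, and each provider supplies its own implementation. A minimal sketch with stand-in providers (not the weaveIntel adapter API):

```python
from typing import Protocol

class ModelProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class EchoProvider:
    """Stand-in for a real provider adapter (OpenAI, Anthropic, ...)."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class ShoutProvider:
    def complete(self, prompt: str) -> str:
        return prompt.upper()

def ask(provider: ModelProvider, prompt: str) -> str:
    # Application code depends only on the shared interface,
    # so swapping providers needs no call-site changes.
    return provider.complete(prompt)

print(ask(EchoProvider(), "hi"))   # echo: hi
print(ask(ShoutProvider(), "hi"))  # HI
```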
Standardised tool integration layer for function calling, API access, and external system interactions with automatic schema validation and error handling.
Pre-built connectors for databases, file systems, cloud storage, and SaaS platforms. Access structured and unstructured data with unified query interfaces.
Integrated vector search with support for multiple embedding models and vector stores. Semantic retrieval for RAG pipelines with automatic chunking and indexing.
Asynchronous event system for decoupled agent communication. Publish/subscribe patterns with guaranteed delivery and event sourcing support.
Intelligent horizontal and vertical scaling based on request volume, latency targets, and cost budgets. Zero-downtime scaling with predictive capacity planning.
Intelligent response caching that matches semantically similar queries. Reduces redundant API calls by up to 40% while maintaining response freshness.
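The mechanism behind semantic caching: embed each query, and on lookup return a stored response whose embedding is similar enough to the new query's. This sketch uses a toy bag-of-words "embedding" purely for illustration; a real system would use a proper embedding model:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would use a model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class SemanticCache:
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # (embedding, response) pairs

    def get(self, query):
        q = embed(query)
        for vec, response in self.entries:
            if cosine(q, vec) >= self.threshold:
                return response  # near-duplicate query: reuse the answer
        return None

    def put(self, query, response):
        self.entries.append((embed(query), response))

cache = SemanticCache()
cache.put("what is the capital of france", "Paris")
print(cache.get("what is the capital of france ?"))  # Paris (cache hit)
print(cache.get("how do trains work"))               # None (cache miss)
```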
Token-level cost tracking across all providers. Allocate costs to teams, features, and users with budget alerts, forecasting, and optimisation recommendations.
Real-time performance dashboards with latency tracking, throughput analysis, and quality scoring. Identify bottlenecks and optimise AI pipeline performance.
Application-aware load balancing across model providers and agent instances. Weighted routing, health checks, and automatic failover for maximum reliability.
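Weighted routing with health checks can be sketched as weighted random selection over the healthy subset. The tuple shape and function name below are invented for this sketch:

```python
import random

def pick_provider(providers, rng=random.Random(0)):
    """Weighted random choice among healthy providers.

    providers: list of (name, weight, healthy) tuples. Unhealthy
    providers are skipped entirely, which is the failover behaviour.
    """
    healthy = [(name, weight) for name, weight, ok in providers if ok]
    if not healthy:
        raise RuntimeError("no healthy providers")
    total = sum(w for _, w in healthy)
    r = rng.uniform(0, total)
    for name, weight in healthy:
        r -= weight
        if r <= 0:
            return name
    return healthy[-1][0]

providers = [("primary", 3, True), ("secondary", 1, True), ("backup", 5, False)]
print(pick_provider(providers))  # "backup" is never chosen while unhealthy
```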
Build custom skills with the Skill Development Kit. Type-safe interfaces, automatic documentation generation, and integrated testing harness.
Create domain-specific governance policies with the Policy SDK. Define custom validators, transformers, and enforcement actions for your organisation’s requirements.
Pre-built React and Web Component library for AI-powered interfaces. Chat widgets, dashboard panels, and data visualisation components with full theme support.
Flexible webhook system for external integrations. Send notifications, trigger workflows, and synchronise state with third-party systems on any event.
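Webhook deliveries are typically signed so receivers can verify they came from the sender. A standard HMAC-SHA256 sketch of that handshake (the payload fields are illustrative, not a weaveIntel event schema):

```python
import hashlib
import hmac
import json

def sign_payload(secret: bytes, payload: dict) -> str:
    """HMAC-SHA256 signature so receivers can verify webhook authenticity."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(secret: bytes, payload: dict, signature: str) -> bool:
    # compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(sign_payload(secret, payload), signature)

event = {"event": "loom.completed", "id": 42}
sig = sign_payload(b"shared-secret", event)
print(verify(b"shared-secret", event, sig))  # True
print(verify(b"wrong-secret", event, sig))   # False
```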
Discover, install, and manage community extensions. Publish your own skills, policies, and adapters to the weaveIntel marketplace.
Built with weaveIntel
A production reference application demonstrating how weaveIntel powers complex, regulated AI in healthcare genomics.
geneWeave processes VCF files, annotates variants against ClinVar, and generates clinical-ready reports. Every step is governed by weaveIntel’s policy engine with full audit trails for regulatory compliance.
Getting Started
Choose the path that fits your experience level and deployment requirements.
Scaffold a new project and run your first Loom in under 5 minutes.
Comprehensive guides covering every capability, plus a full API reference and deployment patterns.
Read the docs →
Join the community on GitHub Discussions. Ask questions, share patterns, and contribute.
Join discussions →
Community
weaveIntel is MIT licensed and community-driven. Every feature, bug fix, and improvement happens in the open.
Show your support and stay updated with releases, features, and community contributions.
GitHub →
Submit skills, policies, adapters, or documentation improvements. Every contribution counts.
Contributing guide →
Ask questions, share patterns, and connect with other weaveIntel developers in GitHub Discussions.
Discussions →