MIT Licensed · Open Source

weaveIntel

The intelligent AI runtime with 25 production-ready capabilities. Orchestrate, govern, connect, scale, and extend — all from a single, MIT-licensed framework.

terminal
$ npx create-weaveintel my-app
# ✔ Scaffolding weaveIntel project...
# ✔ Installing dependencies...
Ready! cd my-app && npm start
$ curl http://localhost:3500/health
{"status":"healthy","capabilities":25}
🎲
Theme 1 · Orchestrate
Compose, route, and manage AI workflows
01

Loom Engine

Declarative workflow engine for multi-step AI pipelines. Define complex agent interactions as composable Looms with automatic retry, branching, and parallel execution.

  • Declarative YAML/JSON pipeline definitions
  • Conditional branching and parallel execution
  • Automatic retry with exponential backoff
  • Real-time pipeline visualisation
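As a concrete sketch, a declarative Loom definition could look like the following. The keys and step fields here are illustrative assumptions, not weaveIntel's published schema — consult the docs for the real format.

```yaml
# Hypothetical Loom definition — field names are illustrative only.
loom: summarise-and-review
steps:
  - id: draft
    skill: summariser
    retry:
      max_attempts: 3
      backoff: exponential      # e.g. 1s, 2s, 4s between attempts
  - id: fact-check
    skill: reviewer
    depends_on: [draft]
  - id: publish
    skill: publisher
    when: "{{ steps.fact-check.score > 0.8 }}"   # conditional branching
```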
02

Skill Router

Intelligent routing layer that matches incoming requests to the optimal skill based on intent classification, cost constraints, and quality requirements.

  • Intent-based skill matching
  • Cost-aware routing decisions
  • Quality threshold enforcement
  • A/B testing for skill variants
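To make the routing idea concrete, here is a minimal sketch of cost-aware selection under a quality threshold. The Skill Router's actual algorithm is not documented here; the interface and scores below are illustrative assumptions.

```typescript
// Illustrative cost-aware skill routing: pick the cheapest candidate
// that still clears the required quality threshold.
interface SkillCandidate {
  name: string;
  qualityScore: number; // 0..1, e.g. from intent classification
  costPerCall: number;  // USD
}

function routeSkill(
  candidates: SkillCandidate[],
  minQuality: number,
): SkillCandidate | undefined {
  return candidates
    .filter((c) => c.qualityScore >= minQuality)
    .sort((a, b) => a.costPerCall - b.costPerCall)[0];
}
```

With candidates at quality 0.95/$0.03, 0.85/$0.002, and 0.6/$0.0001 and a threshold of 0.8, this picks the 0.85/$0.002 option — good enough, at the lowest cost.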
03

Memory Fabric

Persistent, searchable memory layer for AI agents. Stores conversation history, learned facts, and contextual state across sessions with configurable retention.

  • Short-term and long-term memory stores
  • Semantic search over past interactions
  • Configurable retention and expiry policies
  • Memory sharing between agents
04

Context Window Manager

Optimises context window usage across model providers. Automatically compresses, prioritises, and manages token allocation for maximum quality at minimum cost.

  • Automatic context compression
  • Priority-based token allocation
  • Cross-provider token normalisation
  • Context overflow handling
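The priority-based allocation idea can be sketched as a simple greedy pass: keep high-priority segments first, drop whatever no longer fits the budget. This is an illustration of the technique, not the Context Window Manager's internals.

```typescript
// Greedy priority-based token allocation sketch: segments are kept in
// descending priority order until the token budget is exhausted.
interface Segment {
  id: string;
  priority: number; // higher = keep first
  tokens: number;
}

function allocate(segments: Segment[], budget: number): string[] {
  const kept: string[] = [];
  let used = 0;
  for (const seg of [...segments].sort((a, b) => b.priority - a.priority)) {
    if (used + seg.tokens <= budget) {
      kept.push(seg.id);
      used += seg.tokens;
    }
  }
  return kept;
}
```

A real manager would compress low-priority segments rather than drop them outright, but the budget accounting is the same.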
05

Agent Lifecycle

Complete lifecycle management for AI agents — from creation and configuration to versioning, deployment, and retirement with full audit history.

  • Agent versioning and rollback
  • Blue/green agent deployments
  • Health monitoring and auto-restart
  • Graceful shutdown and state persistence
🛡
Theme 2 · Govern
Policy-first AI with built-in compliance
06

Policy Engine

Declarative policy framework that enforces rules on every AI interaction. Define cost limits, content policies, access controls, and compliance requirements as code.

  • Policy-as-code with YAML definitions
  • Pre-execution and post-execution gates
  • Cascading policy inheritance
  • Real-time policy violation alerts
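A policy-as-code file might look something like this — the key names and structure are hypothetical, shown only to illustrate pre/post-execution gates and inheritance:

```yaml
# Hypothetical policy definition — keys are illustrative assumptions,
# not weaveIntel's actual policy schema.
policy: cost-and-content
inherits: org-baseline          # cascading policy inheritance
gates:
  pre_execution:
    - rule: max_cost_per_request
      limit_usd: 0.50
      action: block
  post_execution:
    - rule: content_safety
      min_safety_score: 0.9
      action: warn
```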
07

PII Detection & Redaction

Multi-layer PII detection using named entity recognition, regex patterns, and ML classifiers. Automatically redacts sensitive data before it reaches third-party APIs.

  • NER + regex + ML hybrid detection
  • Configurable redaction strategies
  • Re-identification prevention
  • Compliance reporting (GDPR, HIPAA)
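The regex layer of a hybrid detector can be sketched in a few lines. Real detection stacks NER and ML classifiers on top of patterns like these, and the patterns below are deliberately simplified illustrations, not production-grade rules.

```typescript
// Minimal regex-only PII redaction sketch. Each match is replaced with a
// labelled placeholder so downstream systems never see the raw value.
const PATTERNS: Record<string, RegExp> = {
  EMAIL: /[\w.+-]+@[\w-]+\.[\w.]+/g,
  SSN: /\b\d{3}-\d{2}-\d{4}\b/g,
};

function redact(text: string): string {
  let out = text;
  for (const [label, re] of Object.entries(PATTERNS)) {
    out = out.replace(re, `[REDACTED:${label}]`);
  }
  return out;
}
```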
08

Audit & Lineage

Cryptographic audit trails for every AI decision. Track data lineage from input to output with tamper-evident logs and full reproducibility.

  • SHA-256 signed audit records
  • Full data lineage graphs
  • Decision replay and debugging
  • Export to SIEM systems
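Tamper evidence usually comes from hash chaining: each record's digest covers its payload plus the previous record's digest, so editing any entry invalidates everything after it. The following is a simplified illustration of that technique, not weaveIntel's actual audit format.

```typescript
import { createHash } from "node:crypto";

// Hash-chained audit log sketch: verify() recomputes every digest and
// fails if any record or link has been altered.
interface AuditRecord {
  payload: string;
  prevHash: string;
  hash: string;
}

function appendRecord(chain: AuditRecord[], payload: string): AuditRecord[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  const hash = createHash("sha256").update(prevHash + payload).digest("hex");
  return [...chain, { payload, prevHash, hash }];
}

function verify(chain: AuditRecord[]): boolean {
  return chain.every((rec, i) => {
    const prevHash = i === 0 ? "GENESIS" : chain[i - 1].hash;
    const expected = createHash("sha256")
      .update(prevHash + rec.payload)
      .digest("hex");
    return rec.prevHash === prevHash && rec.hash === expected;
  });
}
```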
09

Role-Based Access Control

Granular access control for AI capabilities. Define who can use which models, skills, and data sources with tenant-aware permission hierarchies.

  • Hierarchical role definitions
  • Capability-level permissions
  • API key and JWT integration
  • Access pattern anomaly detection
10

Content Safety

Multi-model content filtering that detects harmful, biased, or inappropriate content in both inputs and outputs with configurable severity thresholds.

  • Toxicity and bias detection
  • Hallucination scoring
  • Custom safety classifiers
  • Configurable action policies (block/warn/log)
🔗
Theme 3 · Connect
Integrate any model, tool, or data source
11

Model Gateway

Unified interface to 50+ model providers. Switch between OpenAI, Anthropic, Google, Cohere, and local models without changing application code.

  • Provider-agnostic API surface
  • Automatic fallback chains
  • Cost-optimised model selection
  • Streaming and batch support
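The fallback-chain behaviour can be expressed generically: try each provider in order, return the first success, surface the last error only if all fail. The function below is a sketch of that pattern, not the Model Gateway's actual API.

```typescript
// Provider fallback chain sketch: attempts providers in order and
// returns the first successful response.
type Provider = (prompt: string) => Promise<string>;

async function withFallback(
  providers: Provider[],
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const call of providers) {
    try {
      return await call(prompt);
    } catch (err) {
      lastError = err; // remember the failure, try the next provider
    }
  }
  throw lastError;
}
```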
12

Tool Framework

Standardised tool integration layer for function calling, API access, and external system interactions with automatic schema validation and error handling.

  • OpenAPI/JSON Schema tool definitions
  • Automatic parameter validation
  • Rate limiting and circuit breakers
  • Tool usage analytics
13

Data Connectors

Pre-built connectors for databases, file systems, cloud storage, and SaaS platforms. Access structured and unstructured data with unified query interfaces.

  • SQL, NoSQL, and graph databases
  • S3, Azure Blob, GCS storage
  • REST and GraphQL API adapters
  • Real-time data streaming
14

Vector Search

Integrated vector search with support for multiple embedding models and vector stores. Semantic retrieval for RAG pipelines with automatic chunking and indexing.

  • Multiple embedding provider support
  • Automatic document chunking
  • Hybrid search (vector + keyword)
  • Index lifecycle management
15

Event Bus

Asynchronous event system for decoupled agent communication. Publish/subscribe patterns with guaranteed delivery and event sourcing support.

  • Pub/sub with topic routing
  • Guaranteed at-least-once delivery
  • Event sourcing and replay
  • Dead letter queues
📈
Theme 4 · Scale
From prototype to production in minutes
16

Auto-Scaling

Intelligent horizontal and vertical scaling based on request volume, latency targets, and cost budgets. Zero-downtime scaling with predictive capacity planning.

  • Request-based and schedule-based scaling
  • Predictive capacity planning
  • Cost-aware scaling policies
  • Zero-downtime deployments
17

Semantic Cache

Intelligent response caching that matches semantically similar queries. Reduces redundant API calls by up to 40% while maintaining response freshness.

  • Embedding-based similarity matching
  • Configurable similarity thresholds
  • Cache invalidation rules
  • Hit rate analytics
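The core of a semantic cache is a similarity lookup over embeddings. Here is a sketch using cosine similarity; the 3-dimensional vectors stand in for real model embeddings, and the threshold default is illustrative.

```typescript
// Semantic cache lookup sketch: return a cached response when the query
// embedding is close enough (cosine similarity) to a stored entry.
interface CacheEntry {
  embedding: number[];
  response: string;
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

function lookup(
  cache: CacheEntry[],
  query: number[],
  threshold = 0.9,
): string | undefined {
  let best: { score: number; response: string } | undefined;
  for (const entry of cache) {
    const score = cosine(entry.embedding, query);
    if (score >= threshold && (!best || score > best.score)) {
      best = { score, response: entry.response };
    }
  }
  return best?.response;
}
```

Tuning the threshold trades hit rate against the risk of serving a stale or mismatched answer.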
18

Cost Analytics

Token-level cost tracking across all providers. Allocate costs to teams, features, and users with budget alerts, forecasting, and optimisation recommendations.

  • Per-token cost attribution
  • Team and feature cost allocation
  • Budget alerts and forecasting
  • Cost optimisation recommendations
19

Performance Monitor

Real-time performance dashboards with latency tracking, throughput analysis, and quality scoring. Identify bottlenecks and optimise AI pipeline performance.

  • Latency percentile tracking (p50/p95/p99)
  • Throughput and concurrency metrics
  • Quality score trending
  • Distributed tracing integration
20

Load Balancer

Application-aware load balancing across model providers and agent instances. Weighted routing, health checks, and automatic failover for maximum reliability.

  • Weighted round-robin routing
  • Health check probes
  • Circuit breaker patterns
  • Provider-aware rate limiting
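Weighted round-robin can be sketched with the "smooth" variant popularised by nginx: each tick, every backend gains its weight, the highest current score wins, and the winner is debited by the total weight, so traffic tracks the configured ratios without bursts. This is a generic illustration, not weaveIntel's load balancer implementation.

```typescript
// Smooth weighted round-robin sketch. With weights 5:1, six consecutive
// picks route five requests to the heavier backend and one to the other.
interface Backend {
  name: string;
  weight: number;
  current: number; // running score, starts at 0
}

function nextBackend(backends: Backend[]): string {
  const total = backends.reduce((s, b) => s + b.weight, 0);
  for (const b of backends) b.current += b.weight;
  const best = backends.reduce((a, b) => (b.current > a.current ? b : a));
  best.current -= total;
  return best.name;
}
```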
🛠
Theme 5 · Extend
Plugin architecture for infinite possibilities
21

Skill SDK

Build custom skills with the Skill Development Kit. Type-safe interfaces, automatic documentation generation, and integrated testing harness.

  • TypeScript-first SDK
  • Auto-generated OpenAPI docs
  • Built-in testing framework
  • Hot-reload during development
22

Custom Policies

Create domain-specific governance policies with the Policy SDK. Define custom validators, transformers, and enforcement actions for your organisation’s requirements.

  • Policy authoring framework
  • Custom validator functions
  • Policy testing and simulation
  • Version-controlled policy packages
23

UI Components

Pre-built React and Web Component library for AI-powered interfaces. Chat widgets, dashboard panels, and data visualisation components with full theme support.

  • React and Web Component support
  • Chat, dashboard, and analytics widgets
  • Theme customisation
  • Accessibility (WCAG 2.1 AA)
24

Webhook System

Flexible webhook system for external integrations. Send notifications, trigger workflows, and synchronise state with third-party systems on any event.

  • Event-driven webhook triggers
  • Retry with exponential backoff
  • HMAC signature verification
  • Webhook delivery analytics
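On the receiving side, HMAC verification typically looks like the sketch below: recompute the signature over the raw body and compare in constant time. The header name and hex format are common conventions, assumed here for illustration — check the actual delivery format before relying on this.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// HMAC-SHA256 webhook signature sketch: the sender signs the raw body
// with a shared secret; the receiver recomputes and compares.
function sign(secret: string, body: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

function verifySignature(
  secret: string,
  body: string,
  signature: string,
): boolean {
  const expected = Buffer.from(sign(secret, body), "hex");
  const received = Buffer.from(signature, "hex");
  // timingSafeEqual throws on length mismatch, so guard first.
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```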
25

Extension Registry

Discover, install, and manage community extensions. Publish your own skills, policies, and adapters to the weaveIntel marketplace.

  • Public and private registries
  • Semantic versioning enforcement
  • Security scanning for extensions
  • Usage analytics and ratings

Built with weaveIntel

Meet geneWeave

A production reference application demonstrating how weaveIntel powers complex, regulated AI in healthcare genomics.

geneWeave · Reference App

Clinical-grade genomic intelligence

geneWeave processes VCF files, annotates variants against ClinVar, and generates clinical-ready reports. Every step is governed by weaveIntel’s policy engine with full audit trails for regulatory compliance.

Variant Triage · Clinical Reports · Research Q&A · Pharmacogenomics
geneweave-cli
$ geneweave analyse sample.vcf \
    --policy hipaa-strict \
    --output clinical-report
# Parsing 42,817 variants...
# Running ClinVar annotation...
# Applying HIPAA governance...
3 pathogenic variants flagged
Report: report-2026-01.pdf
Audit: SHA-256 verified

Getting Started

Three paths to production

Choose the path that fits your experience level and deployment requirements.

⚡ Quick Start

Scaffold a new project and run your first Loom in under 5 minutes.

terminal
$ npx create-weaveintel my-app
$ cd my-app && npm start

📚 Documentation

Comprehensive guides covering every capability, API reference, and deployment patterns.

Read the docs →

💬 Community

Join the community on GitHub Discussions. Ask questions, share patterns, and contribute.

Join discussions →

Community

Built in the open

weaveIntel is MIT licensed and community-driven. Every feature, bug fix, and improvement happens in the open.

Star on GitHub

Show your support and stay updated with releases, features, and community contributions.

GitHub →
📣

Contribute

Submit skills, policies, adapters, or documentation improvements. Every contribution counts.

Contributing guide →
💬

Discuss

Ask questions, share patterns, and connect with other weaveIntel developers in GitHub Discussions.

Discussions →