Make agentic workflows traceable, auditable, and reproducible.

Flowcept captures runtime provenance with minimal code changes and low overhead—linking tasks, data lineage, telemetry, and AI‑agent interactions so you can trace outcomes end‑to‑end.

Capture: Adapters + decorators
Stream: Redis · Kafka · RDMA
Query: API · Grafana · LLM agent

Core capabilities

Unify runtime signals into one provenance layer you can query, visualize, and automate.

Low-friction capture

Capture with decorators or adapters—keep services largely unchanged.

Streaming at scale

Stream via Redis, Kafka, or RDMA—decouple capture from analysis.

Store & query

Store + query via API/CLI/dashboards—no single backend required.

Telemetry in context

Attach CPU/GPU/memory + scheduling metadata directly to lineage.
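A minimal sketch of the idea, using only the Python standard library — the `telemetry_snapshot` helper and the record shape are illustrative stand-ins, not Flowcept's actual telemetry capture:

```python
import time
import tracemalloc

def telemetry_snapshot():
    """Hypothetical helper: a stdlib-only telemetry sample (CPU time + memory)."""
    current, peak = tracemalloc.get_traced_memory()
    return {
        "cpu_time_s": time.process_time(),
        "mem_current_b": current,
        "mem_peak_b": peak,
    }

tracemalloc.start()
record = {"task_id": "t-001", "generated": {"loss": 0.42}}
data = [0.0] * 100_000  # some work whose memory footprint the snapshot reflects
# Attach the telemetry sample directly to the lineage record:
record["telemetry"] = telemetry_snapshot()
print(sorted(record["telemetry"]))
```

Keeping telemetry on the record itself is what makes performance-aware lineage queries possible (e.g. "which tasks in this workflow peaked above N bytes?").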

Agents as first-class citizens

Record prompts, tool calls, and responses—linked to downstream tasks.
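The linkage can be pictured with plain dictionaries — the field names (`caused_by`, `tool_calls`, etc.) are illustrative, not Flowcept's schema:

```python
# An agent decision recorded as a provenance step:
agent_step = {
    "step_id": "agent-7",
    "prompt": "Which simulation parameters should we try next?",
    "tool_calls": [{"tool": "submit_job", "args": {"temperature": 300}}],
    "response": "Submitting a run at 300 K.",
}

# A downstream task that carries a lineage link back to the agent decision:
downstream_task = {
    "task_id": "sim-42",
    "caused_by": agent_step["step_id"],
    "used": agent_step["tool_calls"][0]["args"],
}
print(downstream_task["caused_by"])
```

Because the task records which agent step caused it, an unexpected result can be traced back to the prompt and tool call that triggered it.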

Architecture that scales from laptop to leadership-class HPC

Modular design: capture → stream → keep → query. Deploy anywhere on the Edge–Cloud–HPC continuum.

Architecture docs

1) Capture

Decorators, loop instrumentation, and adapters (e.g., MLflow, Dask, TensorBoard, file systems) emit compact provenance messages.
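As a rough picture of what such a message might carry — this dataclass is an illustrative shape, not Flowcept's exact schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class TaskMessage:
    """Illustrative shape of a compact provenance message."""
    task_id: str
    workflow_id: str
    activity_id: str        # e.g. the function or adapter that produced it
    used: dict = field(default_factory=dict)       # inputs / data lineage
    generated: dict = field(default_factory=dict)  # outputs
    telemetry: dict = field(default_factory=dict)  # optional resource snapshot

msg = TaskMessage("t-001", "wf-abc", "train_model",
                  used={"lr": 0.01}, generated={"loss": 0.42})
print(asdict(msg)["generated"])
```

Keeping messages small and uniform is what lets many heterogeneous sources (decorators, MLflow, Dask, file systems) feed the same provenance stream.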

2) Stream

Messages buffer in-memory and flush asynchronously to a publish–subscribe hub (Redis / Kafka / RDMA‑optimized backends).
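The buffer-and-flush pattern can be sketched with a queue and a background thread; `StubHub` stands in for a real backend such as Redis or Kafka, and `BufferedEmitter` is a hypothetical name, not Flowcept's class:

```python
import queue
import threading

class StubHub:
    """Stand-in for a publish-subscribe backend (Redis, Kafka, ...)."""
    def __init__(self):
        self.published = []
    def publish(self, channel, message):
        self.published.append((channel, message))

class BufferedEmitter:
    """Buffer messages in memory; flush them on a background thread."""
    def __init__(self, hub, channel="provenance"):
        self._q = queue.Queue()
        self._hub = hub
        self._channel = channel
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()
    def emit(self, message):
        self._q.put(message)  # the capture path never blocks on the hub
    def _drain(self):
        while True:
            msg = self._q.get()
            if msg is None:   # sentinel: shut down the worker
                break
            self._hub.publish(self._channel, msg)
    def close(self):
        self._q.put(None)
        self._worker.join()

hub = StubHub()
emitter = BufferedEmitter(hub)
emitter.emit({"task_id": "t1", "status": "FINISHED"})
emitter.close()
print(hub.published)
```

The key property is that instrumented code only ever pays the cost of an in-memory enqueue; network latency to the hub is absorbed by the flusher thread.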

3) Keep

One or more “Provenance Keeper” services normalize and persist messages to your chosen storage backend.

4) Query

Access data via API/CLI, dashboards, or an LLM agent—supporting monitoring, analysis, and workflow steering.
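Once records are persisted, queries reduce to filters over linked task records — a toy one-hop lineage query over assumed field names (`caused_by`, `duration_s`), not Flowcept's query API:

```python
records = [
    {"task_id": "a", "caused_by": None, "duration_s": 0.1},
    {"task_id": "b", "caused_by": "a", "duration_s": 2.5},
    {"task_id": "c", "caused_by": "a", "duration_s": 0.3},
]

def downstream_of(task_id, records):
    """Tasks directly caused by `task_id` (a one-hop lineage query)."""
    return [r["task_id"] for r in records if r["caused_by"] == task_id]

# A monitoring-style filter: which tasks ran longer than one second?
slow = [r["task_id"] for r in records if r["duration_s"] > 1.0]
print(downstream_of("a", records), slow)
```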

Why this matters for agentic workflows

When agents make decisions, errors can propagate. Flowcept links prompts, tool calls, telemetry, and downstream impacts—making accountability practical.

Get started

Start locally with no external services. Add streaming and storage when you’re ready to scale.

Full documentation

2‑minute quickstart

pip install flowcept
flowcept --init-settings
python quickstart.py


Instrument plain Python functions with decorators to capture inputs/outputs and timing, then inspect the generated provenance messages.
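The pattern looks roughly like this self-contained sketch — `provenance_task` and the in-memory `RECORDS` sink are illustrative stand-ins, not Flowcept's actual decorator or storage:

```python
import functools
import time

# Illustrative in-memory sink; Flowcept would emit messages to its buffer instead.
RECORDS = []

def provenance_task(func):
    """Hypothetical decorator: record inputs, outputs, and timing of a call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        RECORDS.append({
            "task": func.__name__,
            "args": args,
            "kwargs": kwargs,
            "output": result,
            "duration_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@provenance_task
def train(epochs, lr=0.01):
    return {"loss": 0.1 * epochs * lr}

train(3, lr=0.1)
print(RECORDS[0]["task"], RECORDS[0]["output"])
```

The decorated function behaves exactly as before; provenance capture is a side effect, which is why existing code stays largely unchanged.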

Next steps for production

  • Enable a streaming hub (Redis or Kafka) for asynchronous publish–subscribe provenance ingestion.
  • Deploy a Provenance Keeper service and choose a backend (MongoDB, LMDB, or custom).
  • Turn on telemetry capture and attach it to lineage for performance-aware analysis.
  • Connect Grafana dashboards or use the LLM agent for natural-language provenance exploration.