
The Datamarkin Ecosystem

Datamarkin provides three powerful libraries that work seamlessly together to create a complete computer vision development platform. Each library serves a distinct purpose while integrating naturally with the others.

AgentUI

The Builder: Visual workflow creation and management

Mozo

The Engine: Model serving and execution

PixelFlow

The Foundation: Core CV primitives and visualization

How They Work Together

The three libraries form a complete pipeline for computer vision development:

The Complete Workflow

1. Build in AgentUI

   Create visual workflows by connecting tools with a drag-and-drop interface. Choose from 35+ built-in tools for detection, segmentation, tracking, annotation, and more. Export workflows as JSON for version control.

2. Execute on Mozo

   Deploy your workflows on Mozo’s model server. Access 35+ pre-configured models across 10 frameworks, including Detectron2, YOLOv8, and Florence-2. Mozo handles memory management and lazy loading automatically.

3. Visualize with PixelFlow

   Process results using PixelFlow’s annotation and analysis tools. Draw bounding boxes, add labels, track objects across frames, monitor zones, and export results.

Library Comparison

| Feature | AgentUI | Mozo | PixelFlow |
| --- | --- | --- | --- |
| Primary Purpose | Visual workflow builder | Model server | CV primitives & visualization |
| Key Feature | Drag-and-drop interface | 35+ pre-configured models | 20+ annotators |
| Deployment | Web UI + Python API | HTTP server + Python SDK | Python library |
| Dependencies | PixelFlow, Mozo (optional) | PixelFlow (for output format) | NumPy, OpenCV |
| Best For | Rapid prototyping | Production deployments | Custom CV pipelines |
| Model Support | Via Mozo integration | Detectron2, YOLO, Florence-2, OCR | Framework agnostic |

Integration Patterns

Pattern 1: Full Stack (All Three Libraries)

Use all three libraries for a complete solution from design to deployment to visualization.
from agentui import Workflow
from mozo import ModelManager
from pixelflow import annotate

# Load workflow built in AgentUI
workflow = Workflow.load("my_workflow.json")

# Execute on Mozo's model server
results = workflow.run(image, use_mozo=True)

# Visualize with PixelFlow
annotated = annotate.box(image, results.detections)
annotate.label(annotated, results.detections)

Pattern 2: AgentUI + PixelFlow (Local Execution)

Build workflows visually and run them locally without needing a model server.
from agentui import Workflow
from pixelflow import annotate

# Load and run workflow locally
workflow = Workflow.load("detection_workflow.json")
results = workflow.run(image)

# Annotate results
annotated = annotate.box(image, results.detections)

Pattern 3: Mozo + PixelFlow (API-First)

Use Mozo as a model serving layer with PixelFlow for visualization.
from mozo import predict
from pixelflow import annotate, Detections

# Call Mozo's API
response = predict("detectron2", "mask_rcnn_R_50_FPN", image)

# Convert to PixelFlow format
detections = Detections.from_mozo(response)

# Visualize
annotated = annotate.mask(image, detections)
annotate.box(annotated, detections)

Pattern 4: PixelFlow Standalone

Use PixelFlow independently for custom computer vision pipelines.
from pixelflow import annotate, Detections, Tracker, Zones

# Your custom detection logic
detections = your_model.predict(image)

# Use PixelFlow for tracking and analysis
tracker = Tracker()
tracked = tracker.update(detections)

# Monitor zones
zone = Zones.rectangle((100, 100), (500, 500))
in_zone = detections.filter_by_zone(zone)

# Annotate
annotated = annotate.box(image, tracked)
annotate.label(annotated, tracked)

Data Flow

Understanding how data flows between the libraries:
  • JSON workflow definition (AgentUI → Mozo): AgentUI exports workflows as JSON that specify which models to use, how to connect them, and what parameters to apply. Mozo can parse these workflows and execute them using its model registry (see the sketch after this list).
  • Unified Detections object (Mozo → PixelFlow): Mozo returns results in PixelFlow’s Detections format, which provides a consistent interface regardless of the underlying model framework (Detectron2, YOLO, etc.).
  • NumPy arrays with metadata (PixelFlow output): PixelFlow processes and annotates images as NumPy arrays. All metadata (bounding boxes, labels, masks) is preserved in the Detections object for downstream use.
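
To make the first hop concrete, the sketch below writes a workflow definition out by hand. The schema is an assumption for illustration only; the JSON that AgentUI actually exports is defined by AgentUI itself.
import json

# A minimal sketch of an exported workflow definition. Every key, tool name,
# and parameter below is hypothetical; the real schema comes from AgentUI.
workflow_definition = {
    "name": "person_detection",
    "nodes": [
        {
            "id": "detect",
            "tool": "object_detection",                # hypothetical tool name
            "model": "detectron2/mask_rcnn_R_50_FPN",  # identifier style borrowed from Pattern 3
            "params": {"confidence": 0.5},
        },
        {"id": "draw", "tool": "box_annotator", "inputs": ["detect"]},  # hypothetical tool name
    ],
}

with open("my_workflow.json", "w") as f:
    json.dump(workflow_definition, f, indent=2)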

Architectural Benefits

Loose Coupling

Each library can be used independently. You’re not forced to use all three; choose what fits your needs.

Shared Standards

All libraries use PixelFlow’s Detections format as a common data structure, ensuring seamless interoperability.
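
To picture what that common structure carries, the dataclass below is a rough stand-in for the idea of a Detections object. The field names are assumptions for illustration, not PixelFlow’s actual attributes.
from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class DetectionsSketch:
    """Illustrative stand-in for pixelflow.Detections, not the real class."""
    xyxy: np.ndarray                      # (N, 4) bounding boxes as [x1, y1, x2, y2]
    class_ids: np.ndarray                 # (N,) integer class labels
    confidences: np.ndarray               # (N,) per-detection scores
    masks: Optional[np.ndarray] = None    # optional (N, H, W) segmentation masks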

Incremental Adoption

Start with one library and add others as your needs grow:
  • Begin with PixelFlow for basic CV needs
  • Add Mozo when you need pre-configured models
  • Include AgentUI for visual workflow management

Real-World Use Cases

Surveillance System

Libraries: All three
  • Build detection + tracking workflows in AgentUI
  • Deploy on Mozo for efficient model serving
  • Use PixelFlow for zone monitoring and alerts
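
Building on Pattern 4, zone monitoring with a simple alert might look like the sketch below. The Tracker, Zones, and filter_by_zone calls mirror the earlier example; the notify_security hook and the use of len() on the filtered detections are assumptions.
from pixelflow import annotate, Tracker, Zones

tracker = Tracker()
zone = Zones.rectangle((100, 100), (500, 500))  # restricted area in pixel coordinates

def process_frame(frame, detections, notify_security):
    # Track objects across frames and check which detections fall inside the zone
    tracked = tracker.update(detections)
    in_zone = detections.filter_by_zone(zone)

    # Fire an alert whenever the restricted zone is occupied
    # (calling len() on the filtered detections is an assumption about the API)
    if len(in_zone) > 0:
        notify_security(frame, in_zone)  # hypothetical alerting hook

    # Return an annotated frame for the monitoring feed
    annotated = annotate.box(frame, tracked)
    annotate.label(annotated, tracked)
    return annotated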

Document Processing

Libraries: Mozo + PixelFlow
  • Use Mozo’s OCR models (PaddleOCR, EasyOCR)
  • Extract layout with PP-Structure
  • Visualize results with PixelFlow annotators
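
Following Pattern 3, an OCR pass through Mozo might look like the sketch below. The framework and model identifiers passed to predict are assumptions based on the models listed above; check Mozo’s model registry for the exact names.
import cv2

from mozo import predict
from pixelflow import annotate, Detections

# Load a scanned page (any document image works here)
document_image = cv2.imread("invoice.png")

# Run OCR through Mozo (framework and model names are illustrative)
response = predict("paddleocr", "pp_ocr", document_image)

# Convert to PixelFlow's unified format and draw the recognized text regions
detections = Detections.from_mozo(response)
annotated = annotate.box(document_image, detections)
annotate.label(annotated, detections)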

Quality Inspection

Libraries: AgentUI + Mozo
  • Design inspection workflows in AgentUI
  • Run on Mozo with custom defect detection models
  • Export results for analysis
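
Combining Patterns 1 and 2, an inspection run could load the workflow designed in AgentUI, execute it against Mozo, and write out a small report. The workflow filename and the result fields serialized below are hypothetical.
import json

import cv2
from agentui import Workflow

# Load the inspection workflow built in AgentUI and an image of the part
workflow = Workflow.load("inspection_workflow.json")  # hypothetical filename
part_image = cv2.imread("part_001.png")

# Execute against a running Mozo server
results = workflow.run(part_image, use_mozo=True)

# Export the findings for offline analysis
# (calling len() on results.detections is an assumption about the API)
report = {"image": "part_001.png", "num_defects": len(results.detections)}
with open("inspection_report.json", "w") as f:
    json.dump(report, f, indent=2)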

Custom CV Pipeline

Libraries: PixelFlow standalone
  • Integrate with your existing ML models
  • Use PixelFlow’s annotators and trackers
  • Build custom analysis workflows

Getting Started

Choose your entry point based on your use case:
  • I want to build workflows visually: start with AgentUI
  • I need pre-configured models: start with Mozo
  • I'm building a custom pipeline: start with PixelFlow

To start with AgentUI, install the package and launch the builder:
pip install agentui
agentui serve

Visit http://localhost:8000 to access the visual workflow builder. View AgentUI Quickstart →

Next Steps