Initialize a project
Initialize a project using a pre-built workflow, or a custom workflow you've specified, with Sieve.
Push a video
Submit a video to the Sieve API using a signed URL that points to a storage bucket.
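As a rough sketch, a submission body might be built like the following. The endpoint path and field names here are assumptions for illustration, not Sieve's documented API:

```python
import json

# Hypothetical endpoint -- consult Sieve's API docs for the real path and schema.
SUBMIT_ENDPOINT = "https://api.example.com/v1/push_video"

def build_push_request(project_name: str, signed_url: str) -> dict:
    """Build the JSON body for submitting a video via a signed storage-bucket URL."""
    return {
        "project_name": project_name,
        # Pre-signed URL lets Sieve fetch the file without your bucket credentials.
        "video_url": signed_url,
    }

body = build_push_request("my-project", "https://storage.example.com/clip.mp4?X-Signature=abc")
print(json.dumps(body))
```

The signed URL approach means the video never passes through your own servers; Sieve pulls it directly from storage.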
Query for data
Find intervals of video that match a given query, or retrieve all processed information as JSON.
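The idea of an interval query can be illustrated in plain Python. The data layout below is invented for the example and is not Sieve's actual output format:

```python
# Toy processed output: each record is an object class with the time span it appears in.
detections = [
    {"class": "person", "start": 0.0, "end": 4.2},
    {"class": "car",    "start": 1.5, "end": 9.0},
    {"class": "person", "start": 7.0, "end": 12.3},
]

def find_intervals(data, object_class):
    """Return (start, end) intervals where an object of the given class appears."""
    return [(d["start"], d["end"]) for d in data if d["class"] == object_class]

print(find_intervals(detections, "person"))  # -> [(0.0, 4.2), (7.0, 12.3)]
```

A real query system would support richer predicates (properties, actions, spatial constraints), but the shape of the answer is the same: a set of time intervals.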
How it works
Use a pre-built workflow
Sieve comes with plug-and-play workflows built for your use case. No need to train models, manage datasets, or set up infrastructure. Production-ready out of the gate.
Detect people and track location, speed, size, and more.
Detect vehicles and find make, model, license plates, and more.
Detect brands and logos. Add new logos to search for on the fly.
Track player and ball movement in any sports footage.
Tag inspection and promotional videos with condition, furniture, appliances, and more.
Detect products, demographics, background, and more.
Group content such as advertisements into various topical categories.
Track game statistics and player movements in esports footage.
Track common street landmarks, car accidents, and license plates.
Perform rotoscoping effects by dynamically removing objects.
Detect explicit, hateful, and dangerous content.
Track animals, their joints, breed, actions, and more.
Get visual descriptions of what's happening within a clip.
Remove scene backgrounds without needing a green screen.
Use custom models and workflows
Run your own models and workflows on Sieve's infrastructure. Custom computer vision models can be trained using Sieve building blocks or imported via Sieve's API. Improve accuracy over time by fine-tuning models through simple API calls that provide iterative feedback.
Pick a set of building blocks.
Pick from state-of-the-art ML models for object detection, classification, segmentation, and visual search or import your own.
Build a workflow.
String together building blocks in the logic and order that meet your needs, using a simple JSON specification.
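A workflow specification of this kind might look like the sketch below. The block types, field names, and wiring are hypothetical, meant only to show the shape of chaining building blocks in JSON:

```python
import json

# Hypothetical workflow spec: names and fields are illustrative, not Sieve's real schema.
workflow = {
    "name": "vehicle-tracking",
    "blocks": [
        {"id": "detect",   "type": "object_detector", "config": {"classes": ["car", "truck"]}},
        {"id": "track",    "type": "tracker",         "inputs": ["detect"]},
        {"id": "classify", "type": "classifier",      "inputs": ["track"],
         "config": {"attribute": "make_model"}},
    ],
}

# Each block consumes the outputs of the blocks named in its "inputs" list.
print(json.dumps(workflow, indent=2))
```

The `inputs` references express the ordering: detection feeds tracking, tracking feeds attribute classification.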
You can think of Sieve almost like a database for video data. Push video, and make queries. Below, we introduce key concepts to help you understand the power of Sieve's query system.
Everything in a video is an object.
A person, a car, a dog, and even the frame itself. Videos are just objects defined by various properties. Detecting boxes, classifying frame-level information, or drawing masks is just one step. To count items, detect actions, remove objects dynamically, or categorize videos, these properties must be consolidated over time.
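Consolidation over time is what turns per-frame detections into object-level answers. A minimal sketch, assuming track IDs have already been assigned to detections (the data layout is invented for illustration):

```python
# Per-frame detections with track IDs already assigned; the ID is the consolidation key.
frames = [
    [{"track_id": 1, "class": "person"}, {"track_id": 2, "class": "car"}],
    [{"track_id": 1, "class": "person"}, {"track_id": 3, "class": "person"}],
    [{"track_id": 3, "class": "person"}],
]

def count_objects(frames, object_class):
    """Count distinct objects of a class across the whole video, not per frame."""
    ids = {d["track_id"] for frame in frames for d in frame if d["class"] == object_class}
    return len(ids)

print(count_objects(frames, "person"))  # -> 2, even though 4 person boxes were detected
```

Counting raw boxes would give 4 "person" detections; consolidating by identity gives the correct answer of 2 people.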
Objects have different properties that change over time.
Objects can more specifically be defined by properties that do and don't change over time. For example, every object might have a "class" attribute, such as "person" or "car", that doesn't change. However, it also has properties such as "position", "speed", and "lighting" that do.
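One way to picture this split between static and time-varying properties is a simple data structure (the names here are illustrative, not Sieve's schema):

```python
from dataclasses import dataclass, field

@dataclass
class VideoObject:
    """One tracked object: a fixed class plus properties sampled over time."""
    object_class: str  # static, e.g. "person" or "car"
    # frame index -> (x, y) position; changes from frame to frame
    positions: dict = field(default_factory=dict)

car = VideoObject(object_class="car")
car.positions[0] = (10, 40)
car.positions[30] = (85, 42)  # the car moved; its class did not

print(car.object_class, sorted(car.positions))
```

Queries over static properties ("find all cars") and dynamic ones ("find cars moving faster than X") operate on the two halves of this structure.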
Sieve is a database for video which allows you to query in a way that makes sense.
Traditionally, videos could only be "queried" by a timestamp to find the information in that frame. Sieve instead takes an object-first approach.
Sieve automatically tracks objects across frames in parallel, and exposes these capabilities through an easy-to-use query language.
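The core of tracking objects across frames can be sketched as matching detections between consecutive frames by box overlap (intersection-over-union). This is a deliberately simplified greedy version; production trackers add motion models and appearance features:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match(prev_boxes, curr_boxes, threshold=0.3):
    """Greedily link each current box to the previous box it overlaps most."""
    links = {}
    for j, cb in enumerate(curr_boxes):
        best_i, best = None, threshold
        for i, pb in enumerate(prev_boxes):
            score = iou(pb, cb)
            if score > best:
                best_i, best = i, score
        if best_i is not None:
            links[j] = best_i
    return links

prev = [(0, 0, 10, 10), (50, 50, 60, 60)]
curr = [(1, 1, 11, 11), (80, 80, 90, 90)]
print(match(prev, curr))  # current box 0 continues a track; box 1 starts a new one
```

Chaining these frame-to-frame links is what produces the persistent object identities that interval queries are answered against.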