The Platform Pipeline
The HUMAN Protocol allows job types to be contractually defined and broken down into smaller component tasks ("factored cognition"), with funds placed in escrow and paid out only for accuracy above a threshold. In effect, this enforces guarantees on annotation quality and execution, and enables AI to request human insight in real time.
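As a rough sketch of the escrow rule described above (the `EscrowJob` class, field names, and threshold value are hypothetical illustrations, not the Protocol's actual API or contract logic):

```python
# Minimal sketch of an accuracy-gated escrow payout.
# All names here are hypothetical, not the HUMAN Protocol's real schema.
from dataclasses import dataclass

@dataclass
class EscrowJob:
    worker: str
    escrowed_amount: float      # funds locked when the job is posted
    accuracy_threshold: float   # minimum accuracy required for payout

    def settle(self, measured_accuracy: float) -> float:
        """Release escrowed funds only if accuracy clears the threshold."""
        if measured_accuracy >= self.accuracy_threshold:
            return self.escrowed_amount  # paid out to the worker
        return 0.0                       # below threshold: nothing released

job = EscrowJob(worker="annotator-42", escrowed_amount=5.0, accuracy_threshold=0.95)
print(job.settle(measured_accuracy=0.97))  # 5.0  -> payout released
print(job.settle(measured_accuracy=0.80))  # 0.0  -> no payout
```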

- Over 100 million monthly active users completing tasks
- HUMAN platform technology is already in use on ~15% of the internet via its first application, hCaptcha, which interacts daily with many millions of users across tens of millions of websites.
- HUMANs engaged in task completion represent 247 countries and territories, including one human in the South Sandwich Islands (a small, inhospitable island of 30 people, so by percentage it's still pretty good).
This generates an enormous volume of human annotation capacity for use in machine learning, accessible by API. New services can be launched at web scale with "human-quality" inference from day one, with humans in the loop: as models improve in accuracy, continuous QA can keep result quality high.
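One way to picture that human-in-the-loop pattern (the classifier, the confidence cutoff, and the `request_human_annotation` helper are all placeholders invented for this sketch, not actual Protocol endpoints):

```python
# Hypothetical human-in-the-loop inference: low-confidence model outputs
# are routed to human annotators via an API. Names are illustrative only.
CONFIDENCE_CUTOFF = 0.9

def classify(image_bytes: bytes) -> tuple[str, float]:
    """Stand-in for a real model; returns (label, confidence)."""
    return "dog", 0.72  # pretend the model is unsure here

def request_human_annotation(image_bytes: bytes) -> str:
    """Stand-in for a call out to a human annotation API."""
    return "dog"

def label_with_humans_in_loop(image_bytes: bytes) -> str:
    label, confidence = classify(image_bytes)
    if confidence >= CONFIDENCE_CUTOFF:
        return label  # trust the model when it is confident
    return request_human_annotation(image_bytes)  # otherwise ask a human
```

As the model improves, fewer items fall below the cutoff, so the same loop doubles as continuous QA on the residual hard cases.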
The Protocol is designed for easy pluggability: bring your own interfaces, publish new job types, onboard labor pools, and more, or map your existing workflows onto standardized "job primitives" for the widest possible capacity. Rather than building anew, we aim to integrate with best-of-breed tools in the ecosystem wherever possible, and would welcome referrals not only in language and vision but also in audio and other areas of research.
Examples of job primitives include the following (a sketch of one expressed as a task specification follows the list):
- Multimodal similarity recognition between images, text, audio, and video
  - "Does the video contain a dog?"
  - "Is the human petting the dog?"
- Matching between domains
  - Given an image, can you select the appropriate caption?
  - Given a caption, can you select the appropriate image?
- Sorting tasks
  - Click on the three most similar items
  - Find the products that are most similar to the other products
  - Rank the items in terms of similarity
- Validation vs. generation
  - "Draw a bounding box around the dog" vs. "Is the bounding box tightly enclosing the dog?"
  - "Which summary is more accurate?" vs. "Summarize this text in one sentence"
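To make "job primitive" concrete, here is one way a validation task of the kind listed above might be specified as plain data. The field names and URL are invented for illustration; they are not the Protocol's actual schema:

```python
# Illustrative task specification for a "validation" job primitive.
# Field names are invented for this sketch, not the Protocol's schema.
bounding_box_validation_task = {
    "primitive": "validation",
    "question": "Is the bounding box tightly enclosing the dog?",
    "inputs": {
        "image_url": "https://example.com/image-001.jpg",  # placeholder URL
        "bounding_box": [120, 80, 310, 260],               # x1, y1, x2, y2
    },
    "answers": ["yes", "no"],
    "redundancy": 3,           # independent workers shown the same task
    "agreement_threshold": 2,  # matching answers needed to accept a result
}
```

Expressing tasks as data like this is what lets an existing workflow be mapped onto a standard primitive and routed to any compatible labor pool.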
The underlying technology is now being open-sourced, creating an open ecosystem of labor pools of many types and forming a price-transparent global marketplace that makes labor accessible and fungible in a previously unimaginable way.
The HUMAN Protocol is blockchain-based and supports more transactions than Ethereum, and more than the sum of all MakerDAO transactions over the last five years.
Together, HUMANs and AIs can create not only less brittle, more dynamic conceptual libraries, but also more opportunities for reinforcement learning, for example in determining which translation or which summary is better. Can more explicit user-generated rewards make reinforcement learning easier?
- Suppose a generative text model produces candidate summaries and a user ranks or selects their preferred one; those responses can then be used as rewards to help the model generate better summaries, as sketched below.
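A minimal sketch of that reward loop, assuming placeholder functions for the generative model and the preference collection (real systems would feed these rewards into an RL update such as policy gradient, which is out of scope here):

```python
# Sketch of turning user preferences into rewards for a summarizer.
# generate_summaries and collect_user_preference are placeholders.
def generate_summaries(text: str, n: int = 2) -> list[str]:
    """Stand-in for sampling n candidate summaries from a generative model."""
    return [f"summary variant {i} of: {text[:20]}..." for i in range(n)]

def collect_user_preference(candidates: list[str]) -> int:
    """Stand-in for asking a human which candidate they prefer."""
    return 0  # pretend the user picked the first candidate

def preference_rewards(text: str) -> list[tuple[str, float]]:
    candidates = generate_summaries(text)
    chosen = collect_user_preference(candidates)
    # The preferred summary gets reward 1.0, the others 0.0; these
    # (candidate, reward) pairs can then drive a reinforcement-learning update.
    return [(c, 1.0 if i == chosen else 0.0) for i, c in enumerate(candidates)]
```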