Machine learning is currently applied to perhaps 1% of the problems it could efficiently solve. This is in part due to the underlying friction in current workflows. Machines need human inputs, but human beings should not be the ones manually selecting and curating what is required to reach a particular accuracy target. Machines should be able to ask people and other machines for the data they need in order to improve and maintain inference quality.
This requires a programmatic interface. HUMAN Protocol is an attempt to define that interface. It creates mechanical specifications of work and requirements that may be transmitted alongside escrowed funds, which are released only upon successful completion of tasks. All managerial roles in the system can be performed by software: requesting specific work, evaluating the quality of that work, compensating workers, and delivering results.
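As a rough illustration of the interface described above, the sketch below pairs a machine-readable work specification with an escrow that releases payment only when submitted results meet the stated quality requirement. All names and fields here (`JobSpec`, `Escrow`, `required_accuracy`, and so on) are hypothetical and chosen for illustration; they are not part of any actual HUMAN Protocol API.

```python
from dataclasses import dataclass

@dataclass
class JobSpec:
    """A mechanical specification of work and its requirements (illustrative)."""
    task_type: str            # e.g. "image_label"
    required_accuracy: float  # quality bar the requester demands
    reward: int               # escrowed amount, in smallest payment units

@dataclass
class Escrow:
    """Holds funds that are released only upon successful task completion."""
    spec: JobSpec
    funds: int
    released: bool = False

    def evaluate_and_settle(self, measured_accuracy: float) -> int:
        """Release the escrowed funds if results meet the spec; else pay nothing."""
        if not self.released and measured_accuracy >= self.spec.required_accuracy:
            self.released = True
            payout, self.funds = self.funds, 0
            return payout
        return 0

spec = JobSpec(task_type="image_label", required_accuracy=0.95, reward=100)
escrow = Escrow(spec=spec, funds=spec.reward)
print(escrow.evaluate_and_settle(0.90))  # below the bar: releases 0
print(escrow.evaluate_and_settle(0.97))  # meets the bar: releases 100
```

The point of the sketch is that every managerial step, requesting work, evaluating it, and paying for it, is expressed as a function call rather than a human decision, which is what makes the roles automatable.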
There is an immediate need to improve the state of play by allowing more actors to participate and reducing friction on both sides of the review market. There is also a long-term need to enable the next generation of automated feedback systems for continuous improvement via human review. This is what the HUMAN Protocol is intended to enable.
A substantial portion of dataset value is today captured by Google at very low cost via reCAPTCHA. Creating economic incentives for website owners by providing a drop-in replacement for reCAPTCHA will democratize access to high-volume human evaluation.
This system tests for bots as effectively as reCAPTCHA while paying website owners for their audience, and serves as an implementation testbed in the HUMAN Protocol design-build cycle.