With representation learning (attaching meaning to words, phrases, and visual information), we assume we have achieved effective communication, but often we are still misunderstanding one another.
- How many times have you left a meeting, only to later realize you were not on the ‘same page’ with a colleague?
- Example: “Beverly Hills”:
- Beverly Hills could be correctly classified as sensitive/identifying information and redacted from a document (the algorithm performs well on the task), and yet still be misunderstood by the system:
- Beverly Hills could be either a name or a place, depending on contextual clues
- In this example, the algorithm could return the correct response (high model performance) and yet there is no shared ‘understanding’. Until you pursue clarification and search for silent failures, you cannot be certain you’re talking about the same thing.
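The ambiguity above can be made concrete with a minimal sketch. This is a toy, rule-based disambiguator, not a real NER model: the cue words, function name, and sentences are all illustrative assumptions. It shows how the same string “Beverly Hills” can receive different labels from nearby context, even while a downstream redaction step would treat both labels identically (the correct response without shared understanding).

```python
# Toy sketch of context-dependent entity disambiguation.
# The label for "Beverly Hills" flips based on surrounding words.
# All cue words here are illustrative assumptions, not a trained NER model.

PLACE_CUES = {"in", "to", "near", "visited", "moved"}
PERSON_CUES = {"ms", "mr", "said", "wrote", "met"}

def classify_entity(sentence: str, entity: str) -> str:
    """Label `entity` as PERSON or PLACE using simple context cues."""
    tokens = sentence.lower().split()
    ent = entity.lower().split()
    for i in range(len(tokens) - len(ent) + 1):
        if tokens[i:i + len(ent)] == ent:
            # Look at up to two tokens on either side of the entity.
            context = tokens[max(0, i - 2):i] + tokens[i + len(ent):i + len(ent) + 2]
            words = {w.strip(".,") for w in context}
            if words & PERSON_CUES:
                return "PERSON"
            if words & PLACE_CUES:
                return "PLACE"
    return "UNKNOWN"

print(classify_entity("We drove to Beverly Hills yesterday.", "Beverly Hills"))  # PLACE
print(classify_entity("Ms. Beverly Hills signed the form.", "Beverly Hills"))    # PERSON
```

Both sentences would trigger redaction in a privacy pipeline, so task metrics look identical; only by inspecting the assigned label do you surface the silent disagreement about what the phrase means.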
And until a system can translate reliably from English to Chinese, we can tell that it has not yet achieved comprehension or effective communication.
In many ways, humans and machines are still ‘talking past one another.’ We must make it a priority to ensure that humans and machines not only speak to one another but understand one another, as early as possible (and for all of us who are not yet ready for a brain implant). The HUMAN Foundation Team believes this can be achieved by leveraging human inference to train artificial intelligence and machine learning systems.