Motivating Questions
1. If you're building machine learning models, could the software determine that it doesn't know what it doesn't know? And why couldn't the AI directly ask for clarification?
2. When we see an edge case or error, can we ask, "What logical fallacy or gap led to this error?" and use that information to ask massively distributed networks of people for clarification?
3. Can we build a decentralized, web-scale marketplace that solves these problems?