Building AI Products
Building products around probabilistic models
- Source: Benedict Evans
- Category: AI/ML Product
- Format: Article
- Published: June 1, 2024
Summary
This case study examines the fundamental product challenges of building with generative models, using Benedict Evans' experience with ChatGPT's inaccurate visa information as a starting point. The core problem is that Large Language Models (LLMs) are probabilistic systems that excel at generating plausible-sounding answers rather than precise, factual information, yet many AI products present their outputs with false certainty.
Evans identifies three strategic approaches to building useful products around inherently unreliable models. First, the general-purpose chatbot, which is problematic because it promises to answer anything while its interface neither communicates limitations nor guides users toward appropriate queries. Second, domain-specific AI tools that constrain functionality to narrow use cases, enabling custom UIs that clearly communicate capabilities and limitations; coding assistants and marketing tools are examples. Third, embedded AI, where users never see the model directly; instead, AI powers features within traditional software without being explicitly "AI-branded."
The key insight for product managers is to treat AI limitations as a product design challenge rather than a purely technical problem. Success requires moving the product toward users rather than expecting them to learn prompt engineering, much as consumer computing evolved away from command lines. Evans suggests the most promising approach may be "unbundling" general-purpose AI into single-purpose tools and experiences, comparing it to how electric motors succeeded through specific applications like drills and blenders rather than being sold as general-purpose motors.