ChatPlayground AI grew out of research I ran at UC Berkeley, where I pitted the biggest AI chatbots against each other:
- ChatGPT-4
- Gemini 1.5 Pro
- Claude 3.5 Sonnet
- Perplexity
- Llama 3.2
In my testing, stacking and comparing models boosted the odds of a spot-on answer by 73%.
Here’s the tea on what I found during testing:
- Perplexity: Writes in complete, thoughtful answers. No bullet points—just solid prose.
- Claude 3.5 Sonnet: Cites sources and even drops video recommendations like a pro… but it’s hit or miss 20–30% of the time.
- ChatGPT-4: Loves bullet points. Great for data tables, too—something Gemini 1.5 Pro struggles with.
- Gemini 1.5 Pro: Gets the context better, with smoother, more conversational replies.
- Llama 3.2: Good in theory, but still needs polishing.
- Pro tip: There’s no reason to touch ChatGPT-3.5 or Gemini 1.0 now—unless you really like settling for less.
Compare models, crush your tasks, and get the best response, every time.
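The compare-and-pick workflow above can be sketched in a few lines: fan the same prompt out to every model, then read the replies side by side. This is a minimal illustration only; the model functions below are stand-in stubs I made up for the sketch, not real SDK calls, so swap in actual API clients for each provider.

```python
# Sketch: send one prompt to several models and collect replies side by side.
# NOTE: ask_chatgpt / ask_claude / ask_gemini are hypothetical stubs, not real APIs.

def ask_chatgpt(prompt: str) -> str:
    return f"[ChatGPT-4] bullet-point take on: {prompt}"

def ask_claude(prompt: str) -> str:
    return f"[Claude 3.5 Sonnet] sourced answer to: {prompt}"

def ask_gemini(prompt: str) -> str:
    return f"[Gemini 1.5 Pro] conversational reply to: {prompt}"

# One entry per model you want in the comparison.
MODELS = {
    "ChatGPT-4": ask_chatgpt,
    "Claude 3.5 Sonnet": ask_claude,
    "Gemini 1.5 Pro": ask_gemini,
}

def compare(prompt: str) -> dict[str, str]:
    """Collect one reply per model so they can be judged side by side."""
    return {name: fn(prompt) for name, fn in MODELS.items()}

if __name__ == "__main__":
    for name, reply in compare("Summarize transformers in one line").items():
        print(f"{name}: {reply}")
```

In a real build you would replace each stub with the provider's client call and add error handling, but the structure stays the same: one prompt in, a dictionary of candidate answers out, and you (or a judge model) pick the winner.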