Hedging Answers Experiment
In this experiment, we test how often commonly used models respond with hedging answers: responses that qualify or deflect rather than answer directly (for example, "As an AI language model, I cannot...").
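As a rough illustration of what measuring a hedging rate can look like, here is a minimal Python sketch that flags responses containing common hedging phrases and reports the fraction flagged. The phrase list and helper names are assumptions for illustration, not the actual methodology or code behind this experiment.

```python
import re

# Hypothetical list of hedging phrases; the exact phrases used in the
# experiment are an assumption, not taken from the original post.
HEDGING_PATTERNS = [
    r"as an ai(?: language model)?",
    r"i (?:cannot|can't|am unable to)",
    r"i don't have (?:personal )?(?:opinions|feelings|access)",
]

def is_hedging(response: str) -> bool:
    """Return True if the response contains a known hedging phrase."""
    text = response.lower()
    return any(re.search(pattern, text) for pattern in HEDGING_PATTERNS)

def hedging_rate(responses: list[str]) -> float:
    """Fraction of responses flagged as hedging."""
    if not responses:
        return 0.0
    return sum(is_hedging(r) for r in responses) / len(responses)

# Example: two of the three sample responses hedge.
samples = [
    "As an AI language model, I cannot provide an opinion on that.",
    "The capital of France is Paris.",
    "I'm unable to browse the internet for current prices.",
]
print(f"Hedging rate: {hedging_rate(samples):.0%}")  # Hedging rate: 67%
```

Keyword matching like this is simple and transparent, though it will miss hedges phrased in novel ways; a classifier or LLM-based judge could catch those at the cost of added complexity.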
August 17, 2023
A research initiative evaluating the strengths and weaknesses of large language model offerings from industry leaders such as OpenAI, Anthropic, and Meta, as well as from open-source alternatives.
We'll update this page periodically with our latest findings on the rapidly evolving LLM landscape.
Bench is our solution for helping teams evaluate the available LLM options quickly, easily, and consistently.