🏆 Online-Mind2Web Leaderboard

Online-Mind2Web is a benchmark designed to evaluate the real-world performance of web agents on live websites, featuring 300 tasks across 136 popular sites in diverse domains. Based on the number of steps required by human annotators, tasks are divided into three difficulty levels: Easy (1–5 steps), Medium (6–10 steps), and Hard (11+ steps).
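As a rough illustration, the step-based difficulty split can be expressed as a simple classification rule (the function name and field below are illustrative, not taken from the benchmark's codebase):

```python
def difficulty(num_steps: int) -> str:
    """Map a task's human-annotated step count to its difficulty bucket."""
    if num_steps <= 5:
        return "Easy"    # 1-5 steps
    if num_steps <= 10:
        return "Medium"  # 6-10 steps
    return "Hard"        # 11+ steps

# Example: a task that took annotators 7 steps falls into the Medium bucket.
assert difficulty(7) == "Medium"
```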

Leaderboard

Our goal is to conduct a rigorous assessment of the current state of web agents. We maintain two leaderboards—one for automatic evaluation and another for human evaluation. Please click "Submission Guideline" for details.

(Leaderboard table: entries include Claude Computer Use, OpenAI Computer-Using Agent, Emergence AI, and OSU NLP, reported with their success rates; last updated 2025-03-22. See the live leaderboard for current numbers.)

Visualization

This figure presents a fine-grained heatmap illustrating task-level completion across different agents. Each row corresponds to a specific agent, and each column represents a task (identified by its task ID). Blue bars indicate successful completions, while white spaces denote failures. The "Any agent" row marks a task as successful if at least one agent completed it. (This style of visualization is inspired by HAL.)
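A minimal sketch of how such a task-level heatmap could be produced from a matrix of per-task outcomes; the agent names, random results, and figure styling below are assumptions for illustration, not the leaderboard's own plotting code:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical success matrix: rows = agents, columns = tasks (1 = success, 0 = failure).
agents = ["Agent A", "Agent B", "Agent C"]
results = np.random.default_rng(0).integers(0, 2, size=(len(agents), 300))

# Prepend an "Any agent" row: a task counts as solved if at least one agent solved it.
matrix = np.vstack([results.max(axis=0), results])
labels = ["Any agent"] + agents

fig, ax = plt.subplots(figsize=(12, 2.5))
ax.imshow(matrix, aspect="auto", cmap="Blues", interpolation="nearest")
ax.set_yticks(range(len(labels)), labels)
ax.set_xlabel("Task ID")
plt.tight_layout()
plt.show()
```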

In certain scenarios, testing on the full Online-Mind2Web dataset may not be feasible due to cost, privacy, or legal constraints. To facilitate fair, apples-to-apples comparisons, we release both our human evaluation labels and auto-evaluation details.

  • Human Evaluation: Task-level human evaluation labels are provided in the released file.
  • Auto-Evaluation: The results of WebJudge are available in the released folder; a sketch of how the two releases might be combined follows this list.
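For example, the released labels could be used to check how closely the automatic judge tracks human judgments. The file names, column names, and JSON layout below are placeholders, so adjust them to the actual released artifacts:

```python
import json
import pandas as pd

# Hypothetical paths and field names; substitute the actual released files.
human = pd.read_csv("human_eval_labels.csv")  # assumed columns: task_id, success (0/1)
with open("webjudge_results.json") as f:
    auto = {r["task_id"]: r["judgement"] for r in json.load(f)}  # assumed layout

# Per-task agreement between human labels and WebJudge judgments.
merged = human.assign(auto=human["task_id"].map(auto)).dropna(subset=["auto"])
agreement = (merged["success"] == merged["auto"]).mean()
print(f"Human-WebJudge agreement on {len(merged)} tasks: {agreement:.1%}")
```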