🏆 Online-Mind2Web Leaderboard

Online-Mind2Web is a benchmark designed to evaluate the real-world performance of web agents on live websites. It features 300 tasks across 136 popular websites in diverse domains, with reliable LLM-as-a-Judge (WebJudge) automatic evaluation. Based on the number of steps required by human annotators, tasks are divided into three difficulty levels: Easy (1-5 steps), Medium (6-10 steps), and Hard (11+ steps).
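For illustration, here is a minimal sketch of bucketing tasks by this rule; the field name `reference_length` is an assumption, not the official schema.

```python
# Hypothetical sketch: map each task's human-annotated step count to the
# benchmark's difficulty levels. "reference_length" is an assumed field
# name, not the official schema.
from collections import Counter

def difficulty(num_steps: int) -> str:
    if num_steps <= 5:
        return "Easy"
    if num_steps <= 10:
        return "Medium"
    return "Hard"

tasks = [
    {"task_id": "t1", "reference_length": 3},
    {"task_id": "t2", "reference_length": 8},
    {"task_id": "t3", "reference_length": 14},
]
print(Counter(difficulty(t["reference_length"]) for t in tasks))
# Counter({'Easy': 1, 'Medium': 1, 'Hard': 1})
```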

Leaderboard

Our goal is to conduct a rigorous assessment of the current state of web agents. We maintain two leaderboards—one for automatic evaluation and another for human evaluation.

When using our benchmark or submitting results, please first carefully review the important notes below to ensure proper usage and reliable evaluation results, and follow the "Submission Guideline".

We usually need about one week to review the results. If your results require urgent verification, please let us know in advance. Thank you for your understanding.

⚠ Important Notes for Reliable Evaluation:

  • Start from the specified websites, not Google Search: To enable fair comparisons, please ensure that each task starts from the specified website in our benchmark. Starting from Google Search or alternative websites can lead agents to use different websites to solve the task, resulting in varying difficulty levels and potentially skewed evaluation results.
  • Include only factual actions, not agent outputs: The action history should contain only the factual actions taken by the agent to complete the task (e.g., clicking elements and typing text). Do not include the final response or any other agent output, as it may contain hallucinated content and lead to a high rate of false positives.
  • Use o4-mini for WebJudge: WebJudge powered by o4-mini demonstrates a higher alignment with human judgment, achieving an average agreement rate of 85.7% and maintaining a narrow success rate gap of just 3.8%. Therefore, please use o4-mini as the backbone for automatic evaluation (see the sketch after this list).
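To illustrate the last two notes, below is a minimal, hypothetical sketch of the expected inputs: an action history containing only factual actions, passed to a judge call backed by o4-mini. The record format and the judge prompt are illustrative assumptions, not the official WebJudge prompt.

```python
# Hypothetical sketch: an action history of factual actions only (no agent
# summaries or final responses), judged by an o4-mini-backed call.
# The prompt wording below is an assumption, not the official WebJudge prompt.
from openai import OpenAI

action_history = [
    "Click <button id='search-btn'>Search</button>",
    "Type 'wireless mouse' into <input id='search-box' type='text'>",
    "Click <a href='/item/123'>Wireless Mouse, Black</a>",
]

client = OpenAI()
response = client.chat.completions.create(
    model="o4-mini",  # recommended WebJudge backbone
    messages=[
        {"role": "system",
         "content": "You judge whether a web task was completed successfully."},
        {"role": "user",
         "content": (
             "Task: Buy a wireless mouse on the specified website.\n"
             "Actions taken:\n" + "\n".join(action_history) + "\n"
             "Answer SUCCESS or FAILURE with a brief justification."
         )},
    ],
)
print(response.choices[0].message.content)
```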

To obtain more reliable automatic evaluation results, the action representation should be as detailed as possible, including only factual actions and excluding any agent outputs. Here is an example script that processes the element's HTML into the action representation; it preserves valuable information while filtering out irrelevant attributes.
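The sketch below illustrates the idea: keep a small whitelist of informative attributes on the interacted element and drop the rest. The whitelist is an assumption, not the official script's exact rule set.

```python
# A minimal sketch of the attribute-filtering idea. The whitelist below is
# an assumption, not the official script's exact rule set.
from bs4 import BeautifulSoup

KEEP_ATTRS = {"id", "name", "type", "value", "placeholder", "href",
              "title", "alt", "aria-label", "role"}

def clean_element_html(raw_html: str) -> str:
    """Return the element's HTML with only whitelisted attributes kept."""
    soup = BeautifulSoup(raw_html, "html.parser")
    element = soup.find()  # outermost element in the snippet
    if element is None:
        return raw_html
    element.attrs = {k: v for k, v in element.attrs.items() if k in KEEP_ATTRS}
    return str(element)

raw = ('<button id="submit" class="btn btn-primary x9f2" data-reactid="42" '
       'aria-label="Submit order">Submit</button>')
print(clean_element_html(raw))  # keeps only id and aria-label
```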

Please do not use the benchmark as training data for your agent.

Visualization

This figure presents a fine-grained heatmap illustrating task-level completion across different agents. Each row corresponds to a specific agent, and each column represents a task (identified by its task ID). Blue bars indicate successful completions, while white gaps denote failures. In the "Any agent" row, a task is counted as successful if at least one agent completes it. (This style of visualization is inspired by HAL.)
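As a rough illustration of how such a heatmap can be produced from a binary agents-by-tasks success matrix, here is a hypothetical sketch with made-up data; it is not the leaderboard's actual plotting code.

```python
# Hypothetical sketch: plot a binary agents-by-tasks success matrix as a
# heatmap (blue = success, white = failure), plus an "Any agent" row.
# The data is randomly generated for demonstration only.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

agents = ["Agent A", "Agent B", "Agent C"]
rng = np.random.default_rng(0)
success = rng.integers(0, 2, size=(len(agents), 20))  # 1 = task completed

# "Any agent": a task counts as solved if at least one agent solved it.
any_agent = success.max(axis=0, keepdims=True)
matrix = np.vstack([success, any_agent])

fig, ax = plt.subplots(figsize=(8, 2))
ax.imshow(matrix, cmap=ListedColormap(["white", "tab:blue"]), aspect="auto")
ax.set_yticks(range(len(agents) + 1), labels=agents + ["Any agent"])
ax.set_xlabel("Task")
plt.tight_layout()
plt.show()
```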

In certain scenarios, testing on the full Online-Mind2Web dataset may not be feasible due to cost, privacy, or legal constraints. To facilitate fair, apples-to-apples comparisons, we release both our human evaluation labels and auto-eval details (a usage sketch follows the list below).

  • Human Evaluation: Task-level human evaluation labels are provided in the file.
  • Auto-Evaluation: The results of WebJudge are available in the folder.
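For example, one could compare an agent's results against the released labels on the subset of tasks that was actually run. The file name and data format below are assumptions; adapt them to the released files.

```python
# Hypothetical sketch: compare your agent's outcomes with the released
# task-level labels on the shared subset of tasks. The file name and
# format are assumptions; adapt them to the released files.
import json

with open("human_evaluation_labels.json") as f:  # assumed file name
    human_labels = json.load(f)                  # assumed: {task_id: 0 or 1}

my_results = {"task_001": 1, "task_002": 0}      # your agent's outcomes

shared = sorted(set(human_labels) & set(my_results))
if shared:
    my_rate = sum(my_results[t] for t in shared) / len(shared)
    ref_rate = sum(human_labels[t] for t in shared) / len(shared)
    print(f"Your success rate on {len(shared)} shared tasks: {my_rate:.1%}")
    print(f"Reference success rate on the same tasks: {ref_rate:.1%}")
```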