# Holistic Evaluation of Language Models
Welcome! The `crfm-helm` Python package contains code used in the Holistic Evaluation of Language Models (HELM) project (paper, website) by Stanford CRFM. This package includes the following features:
- Collection of datasets in a standard format (e.g., NaturalQuestions)
- Collection of models accessible via a unified API (e.g., GPT-3, MT-NLG, OPT, BLOOM)
- Collection of metrics beyond accuracy (efficiency, bias, toxicity, etc.)
- Collection of perturbations for evaluating robustness and fairness (e.g., typos, dialect)
- Modular framework for constructing prompts from datasets
- Proxy server for managing accounts and providing a unified interface for accessing models
The code is hosted on GitHub at <https://github.com/stanford-crfm/helm>.
To run the code, refer to the chapters of the User Guide. A minimal quick-start sketch follows.
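The sketch below shows the general shape of a typical workflow (installation, a small evaluation run, summarization, and a local results viewer). It is only an illustration: the run entry (scenario, subject, and model name) and the exact flags are assumptions and may differ between package versions, so consult the User Guide for the authoritative commands.

```bash
# Install the package from PyPI
pip install crfm-helm

# Run a small evaluation; the run entry below (scenario, subject, and model)
# is illustrative only -- see the User Guide for supported run entries
helm-run --run-entries mmlu:subject=philosophy,model=openai/gpt2 \
    --suite my-suite --max-eval-instances 10

# Aggregate the results for the suite
helm-summarize --suite my-suite

# Browse the results in a local web UI
helm-server
```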
To add new models and scenarios, refer to the chapters of the Developer Guide.
# Papers
This repository contains code used to produce results for the following papers:
- Holistic Evaluation of Vision-Language Models (VHELM) - paper, leaderboard, documentation
- Holistic Evaluation of Text-to-Image Models (HEIM) - paper, leaderboard, documentation
- Enterprise Benchmarks for Large Language Model Evaluation - paper, documentation
The HELM Python package can be used to reproduce the published model evaluation results from these papers. To get started, refer to the documentation linked above for the corresponding paper, or to the main Reproducing Leaderboards documentation.
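As a rough illustration, reproducing a leaderboard generally amounts to running the published run entries with `helm-run` and then summarizing the results. The sketch below is not the exact reproduction recipe: the conf file name, suite name, and instance count are placeholders, and the real run entries and flags are given in the Reproducing Leaderboards documentation.

```bash
# Run the published run entries from a conf file (the filename is a placeholder;
# the actual conf files are listed in the Reproducing Leaderboards documentation)
helm-run --conf-paths run_entries.conf --suite my-repro-suite --max-eval-instances 1000

# Aggregate the results so they can be compared against the published leaderboard
helm-summarize --suite my-repro-suite
```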