promptbench — Benchmark 30/100 via “dataset-loader-with-multi-format-support”
PromptBench is a tool for scrutinizing and analyzing how large language models interact with various prompts. It provides convenient infrastructure to simulate **black-box** adversarial **prompt attacks** on models and evaluate their performance.
Unique: Provides a unified DatasetLoader interface that handles both language datasets (GLUE, MMLU, BIG-Bench) and vision datasets (ImageNet, COCO) with automatic preprocessing, caching, and format conversion, rather than requiring separate loaders for each modality.
vs others: More convenient than manual dataset loading because it handles caching, preprocessing, and batching automatically. Supports both LLM and VLM evaluation datasets in one framework, unlike task-specific loaders.
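The unified-loader idea above can be sketched as a single entry point that dispatches on dataset name, normalizes examples across modalities, and caches results. This is a minimal, self-contained illustration of the dispatch-plus-cache design, not PromptBench's actual implementation; the registry contents and example fields here are hypothetical.

```python
from functools import lru_cache

# Hypothetical registry mapping a dataset name to its modality and a loader
# callable. A real loader would fetch and preprocess files; these stubs just
# return tiny in-memory samples to keep the sketch runnable.
_REGISTRY = {
    "sst2": ("language", lambda: [{"content": "a great movie", "label": 1}]),
    "imagenet": ("vision", lambda: [{"image": "img_0001.jpeg", "label": 0}]),
}


class DatasetLoader:
    """One entry point for language and vision datasets (illustrative)."""

    @staticmethod
    @lru_cache(maxsize=None)  # cache: repeated loads return the same object
    def load_dataset(name: str):
        modality, loader = _REGISTRY[name]
        # Normalize every example to a common dict shape regardless of
        # modality, so downstream evaluation code stays uniform.
        return [{"modality": modality, **example} for example in loader()]
```

Usage then looks the same for either modality, e.g. `DatasetLoader.load_dataset("sst2")` or `DatasetLoader.load_dataset("imagenet")`, which is the convenience the entry above contrasts with per-task loaders.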