Amazon's AI Advancements: Human Benchmarking, Model Choice, and a Leap Forward in 2024
Originally Published 2 years ago — by The Verge

Amazon is introducing Model Evaluation on Bedrock, a feature, currently in preview, for testing and comparing AI models. It pairs automated evaluation with human evaluation, letting developers score model output on metrics such as accuracy and toxicity. Customers can use an AWS-managed human evaluation team or supply their own reviewers, and can bring their own data into the benchmarking platform. The goal is to give companies a concrete way to measure how a given model performs on their workloads and to guide model-selection decisions. AWS charges only for the model inference used during the evaluation.
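For a sense of what configuring such an evaluation looks like in practice, here is a minimal sketch of assembling an automated evaluation job request for the boto3 `bedrock` client's `create_evaluation_job` API. The exact field names are an approximation of the request shape, and every ARN, bucket name, and dataset reference below is a hypothetical placeholder, not taken from the article.

```python
# Sketch: building a request for an automated model evaluation job on Bedrock.
# The bedrock client's create_evaluation_job API exists in boto3, but the
# field names here are approximate, and all ARNs/buckets are placeholders.
import json


def build_evaluation_request(job_name: str, model_id: str, s3_output: str) -> dict:
    """Assemble a request payload for an automated evaluation run."""
    return {
        "jobName": job_name,
        # Hypothetical IAM role that grants Bedrock access to the datasets/output bucket.
        "roleArn": "arn:aws:iam::123456789012:role/BedrockEvalRole",
        "evaluationConfig": {
            "automated": {
                "datasetMetricConfigs": [
                    {
                        "taskType": "Summarization",
                        # A built-in dataset; you could point this at your own data in S3 instead.
                        "dataset": {"name": "Builtin.Gigaword"},
                        # Metrics mentioned in the announcement: accuracy and toxicity.
                        "metricNames": ["Builtin.Accuracy", "Builtin.Toxicity"],
                    }
                ]
            }
        },
        "inferenceConfig": {
            "models": [{"bedrockModel": {"modelIdentifier": model_id}}]
        },
        "outputDataConfig": {"s3Uri": s3_output},
    }


request = build_evaluation_request(
    "summarization-eval-1",
    "anthropic.claude-v2",
    "s3://my-eval-results/",  # hypothetical bucket
)
print(json.dumps(request, indent=2))

# To actually submit the job (requires AWS credentials and Bedrock access):
# import boto3
# bedrock = boto3.client("bedrock")
# response = bedrock.create_evaluation_job(**request)
```

Only the model inference consumed while the job runs would be billed, per the announcement; building and inspecting the request itself costs nothing.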