RIME v14 Changelog

AI Firewall

  • Data Imputation - The AI Firewall now automatically corrects erroneous datapoints in real time to prevent model downtime and reduce deviations in model performance.
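
    A minimal illustration of the imputation idea, using a simple median fill for out-of-range values. This is a generic sketch and is not the AI Firewall's actual correction logic.

      import pandas as pd

      def impute_out_of_range(df, column, lower, upper):
          """Replace values outside [lower, upper] with the median of the in-range values."""
          in_range = df[column].between(lower, upper)
          df.loc[~in_range, column] = df.loc[in_range, column].median()
          return df

      # Example: repair an implausible "age" value before the model scores the row.
      frame = pd.DataFrame({"age": [25, 31, -4, 47]})
      frame = impute_out_of_range(frame, "age", lower=0, upper=120)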

  • Firewall Score - Every datapoint that is passed to the AI Firewall is now given a Firewall Score that is a holistic measure of its quality.

AI Continuous Testing

  • Key Insights - The most significant underlying data errors that may be causing a degradation in model performance are now surfaced on the main Continuous Testing results page, making it easier than ever to attribute declines in model behavior.

  • Production SDK - Continuous Testing is ready for production deployment with full API support. This includes functionality to create CT endpoints, incrementally upload data, and query results.
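
    The sketch below shows the general shape of that workflow. The client, method, and parameter names are hypothetical; they illustrate the create-endpoint / upload / query sequence rather than the SDK's exact API.

      # Hypothetical workflow sketch; names below are illustrative, not the exact SDK API.
      from rime_sdk import Client  # assumed entry point

      client = Client("https://rime.example.com", api_key="...")

      # 1. Create a Continuous Testing endpoint for a deployed model.
      ct = client.create_ct_instance(model_id="fraud-model-v3")

      # 2. Incrementally upload a new batch of production data.
      ct.upload_data("s3://my-bucket/prod/2022-01-15.parquet")

      # 3. Query the latest test results.
      results = ct.get_results()
      print(results.summary())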

AI Stress Testing

  • Selectable Performance Metrics - Users now have the flexibility to choose the performance metric used in Tabular Abnormal Input and Subset Performance tests.
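
    A hypothetical configuration fragment showing how a per-test metric choice might be expressed; the keys and accepted values here are illustrative only.

      # Illustrative test configuration; field names are not taken from the RIME docs.
      stress_test_config = {
          "abnormal_input": {"metric": "f1"},        # use F1 instead of the default
          "subset_performance": {"metric": "auc"},   # metric can differ per test
      }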

  • Additional LTR Metric - There is now a new performance metric, NDCG at K, for the LTR use case, in addition to Mean Reciprocal Rank and Rank Correlation. NDCG at K is now the default metric for the LTR use case and is applied across the Abnormal Input and Subset Performance tests.
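
    For reference, NDCG at K can be computed as in this minimal sketch (linear-gain variant; not the RIME implementation).

      import numpy as np

      def dcg_at_k(relevances, k):
          """Discounted cumulative gain over the top-k ranked items."""
          rel = np.asarray(relevances, dtype=float)[:k]
          return np.sum(rel / np.log2(np.arange(2, rel.size + 2)))

      def ndcg_at_k(relevances, k):
          """DCG of the predicted ranking normalized by the ideal (sorted) DCG."""
          ideal = dcg_at_k(sorted(relevances, reverse=True), k)
          return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

      # Relevance labels listed in the order the model ranked the items.
      print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=5))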

  • RIME Images - RIME now supports the Image Classification use case. Reach out to the RI team to schedule a demo.

  • Custom Tests - Users can now create their own custom stress tests and run them on RIME.
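
    A schematic of what a custom test might look like; the interface below is hypothetical and only illustrates the idea of a user-defined check that returns a metric and a pass/fail verdict.

      # Hypothetical interface; the actual custom-test API is documented separately.
      class LowConfidenceRateTest:
          """Flags batches where the model's top-class confidence collapses too often."""

          name = "low_confidence_rate"

          def run(self, model, dataset, threshold=0.55, max_rate=0.10):
              probs = model.predict_proba(dataset.features).max(axis=1)
              low_conf_rate = float((probs < threshold).mean())
              return {"metric": low_conf_rate, "passed": low_conf_rate <= max_rate}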

General

  • RIME Hosted Trial - Trials can now be conducted in a secure hosted environment that removes the requirement to download RIME.

  • Performance Improvements - General optimizations across RIME have led to a large increase in performance for production deployments.

    • RAM Usage Improvements - The RIME Engine has been optimized to use less memory, preventing OOM errors during data upload.

    • Partitioned Dataset Upload - Users can now upload large partitioned datasets to RIME.
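
      A rough sketch of the pattern this enables: iterate over partition files and upload them one at a time so the full dataset never has to be held in memory. The upload call itself is hypothetical.

        from pathlib import Path

        # Hypothetical helper; the real SDK call may differ.
        def upload_partitioned_dataset(ct_instance, dataset_dir):
            """Upload each partition file separately to keep memory usage bounded."""
            for part in sorted(Path(dataset_dir).glob("part-*.parquet")):
                ct_instance.upload_data(str(part))  # one partition per request

        # e.g. upload_partitioned_dataset(ct, "data/prod_2022_01/")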