(MIT: Cambridge, MA) -- Powerful machine-learning models are being used to help people tackle tough problems, such as identifying disease in medical images or detecting road obstacles for autonomous vehicles. But machine-learning models can make mistakes, so it’s critical that humans know when to trust a model’s predictions—especially in high-stakes settings.
Uncertainty quantification is one tool for improving a model’s reliability: alongside each prediction, the model produces a score expressing how confident it is that the prediction is correct. Although uncertainty quantification can be useful, existing methods typically require retraining the entire model to give it this ability. Training involves showing a model millions of examples so it can learn a task, so retraining demands millions of new data inputs, which can be expensive and difficult to obtain, and it also consumes huge amounts of computing resources.
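The article does not name a specific technique, but as a minimal sketch of the general idea, the snippet below treats a classifier's predicted class probabilities as a confidence score, one common baseline for uncertainty quantification. It assumes scikit-learn is available; the dataset, model, and 0.9 threshold are arbitrary choices for illustration.

```python
# Minimal, illustrative sketch of uncertainty quantification: report a
# confidence score alongside each prediction. This is a common baseline,
# not the specific method described in the article.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# predict_proba returns one probability per class; the maximum serves as
# a crude confidence score for the predicted label.
probs = model.predict_proba(X_test)
predictions = probs.argmax(axis=1)
confidence = probs.max(axis=1)

# A downstream user might trust only high-confidence predictions and
# defer the rest to a human reviewer.
threshold = 0.9  # arbitrary cutoff for illustration
trusted = confidence >= threshold
accuracy_on_trusted = (predictions[trusted] == y_test[trusted]).mean()
print(f"Trusted {trusted.mean():.0%} of predictions; "
      f"accuracy on trusted subset: {accuracy_on_trusted:.2%}")
```

Raw predicted probabilities are often over-confident in practice, which is part of why more principled uncertainty methods exist, and why, as the article notes, adding reliable uncertainty estimates usually requires changing or retraining the model itself.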
…