How to find a misbehaving model
Monitoring machine learning models once they are deployed can make the difference between creating a competitive advantage with ML and suffering setbacks that erode trust with your users and customers. But measuring ML model quality in production requires a different perspective and toolbox than monitoring conventional software applications. In this talk, I share practical techniques for identifying decaying models, along with strategies for providing this protection at scale in large organizations.
Tristan Spaulding, Senior Director of Product Management @ DataRobot