- Speaker
Chelsea Troy
- Date
June 2-3
- Description
In the style of Andrew Ng's book Machine Learning Yearning, this workshop introduces participants to the reasoning and architecture surrounding the productionization of a machine learning model. It will NOT delve into the specific architectural details of various types of machine learning models themselves (though I'll be prepared to answer time-boxed questions on this). The point is to understand:
- What does the loop of running, evaluating, and changing a machine learning model look like, and what technical details does the API of this workflow abstract?
- How might we put a model in production, and how will users access it?
- How can we swap out this model behind the API?
- How might we hide the specific language of ML behind good interfaces for consumers? (A minimal sketch of this idea follows the list.)
- What monitoring and testing should we implement during development, deployment, and production?
- What role does our context play in our choices about monitoring and error analysis?
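As a taste of the interface and model-swapping questions above, here is a minimal sketch of one way to hide a model behind a consumer-facing interface so that the model can be replaced without clients changing. All names here (SentimentModel, KeywordModel, SentimentService) are hypothetical illustrations, not the workshop's actual materials.

```python
# Hypothetical sketch: a consumer-facing interface that hides ML vocabulary,
# so the model behind it can be swapped without consumers changing.
from abc import ABC, abstractmethod


class SentimentModel(ABC):
    """The interface consumers depend on: domain terms, no ML jargon."""

    @abstractmethod
    def predict(self, text: str) -> str:
        """Return a domain-level label such as 'positive' or 'negative'."""


class KeywordModel(SentimentModel):
    """A rule-based stand-in; in practice this might wrap a trained model."""

    def predict(self, text: str) -> str:
        return "positive" if "good" in text.lower() else "negative"


class SentimentService:
    """Consumers talk to the service; swapping the model is one constructor change."""

    def __init__(self, model: SentimentModel):
        self._model = model

    def label(self, text: str) -> str:
        prediction = self._model.predict(text)
        # A natural seam for monitoring: record every prediction so that
        # later error analysis has data to work from.
        print(f"prediction={prediction!r} input_length={len(text)}")
        return prediction


if __name__ == "__main__":
    service = SentimentService(KeywordModel())
    print(service.label("This workshop was good!"))
```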
Participants will come away having done their own error analysis and made rudimentary changes to a model, then put that model in production in a manner much like what they'd use on the job. They'll have firsthand experience with useful software engineering patterns more sophisticated than "fling the model file onto S3 and open it in a Flask app," and they'll be able to speak to the engineering challenges specific to working with machine learning models.
- About Chelsea Troy
MLOps at Mozilla; Python and Pedagogy at UChicago; Software Maintenance and AI at O'Reilly.