Using machine learning in production requires a sophisticated set of cooperating technologies. Most of the resources available for learning how to design and operate these platforms focus either on simple examples that don’t scale or on over-engineered technologies built for the massive scale of big tech companies. In this episode Jacopo Tagliabue shares his vision for "ML at reasonable scale" and how you can adopt these patterns for building your own platforms.
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Do you wish you could use artificial intelligence to drive your business the way Big Tech does, but don’t have a money printer? Graft is a cloud-native platform that aims to make the AI of the 1% accessible to the 99%. Wield the most advanced techniques for unlocking the value of data, including text, images, video, audio, and graphs. No machine learning skills required, no team to hire, and no infrastructure to build or maintain. For more information on Graft or to schedule a demo, visit themachinelearningpodcast.com/graft today and tell them Tobias sent you.
- Your host is Tobias Macey and today I’m interviewing Jacopo Tagliabue about building "reasonable scale" ML systems
- How did you get involved in machine learning?
- How would you describe the current state of the ecosystem for ML practitioners? (e.g. tool selection, availability of information/tutorials, etc.)
- What are some of the notable changes that you have seen over the past 2–5 years?
- How have the evolutions in the data engineering space been reflected in/influenced the way that ML is being done?
- What are the challenges/points of friction that ML practitioners have to contend with when trying to get a model that isn’t just a toy into production?
- You wrote a set of tutorials and accompanying code about performing ML at "reasonable scale". What are you aiming to represent with that phrasing?
- There is a paradox of choice for any newcomer to ML. What are some of the key capabilities that practitioners should use in their decision rubric when designing a "reasonable scale" system?
- What are some of the common bottlenecks that crop up when moving from an initial test implementation to a scalable deployment that is serving customer traffic?
- How much of an impact does the type of ML problem being addressed have on the deployment and scalability elements of the system design? (e.g. NLP vs. computer vision vs. recommender system, etc.)
- What are some of the misleading pieces of advice that you have seen from "big tech" tutorials about how to do ML that are unnecessary when running at smaller scales?
- You also spend some time discussing the benefits of a "NoOps" approach to ML deployment. At what point do operations/infrastructure engineers need to get involved?
- What are the operational aspects of ML applications that infrastructure engineers working in product teams might be unprepared for?
- What are the most interesting, innovative, or unexpected system designs that you have seen for moderate scale MLOps?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on ML system design and implementation?
- What are the aspects of ML systems design that you are paying attention to in the current ecosystem?
- What advice do you have for additional references or research that ML practitioners would benefit from when designing their own production systems?
- From your perspective, what is the biggest barrier to adoption of machine learning today?
- Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email email@example.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
- The Post-Modern Stack: ML At Reasonable Scale
- NLP == Natural Language Processing
- Part of speech tagging
- Markov Model
- YDNABB (You Don’t Need A Bigger Boat)
- Information Retrieval
- Modern Data Stack
- Spark SQL
- AWS Athena
- AWS Fargate
- AWS SageMaker
- Recommendations At Reasonable Scale
- KNN == K-Nearest Neighbors
- Pinterest Engineering Blog