Build A Full Stack ML Powered App In An Afternoon With Baseten
June 28th, 2022 · 46 mins 26 secs
About this Episode
Building an ML model is getting easier than ever, but it is still a challenge to get that model in front of the people you built it for. Baseten is a platform that helps you quickly generate a full stack application powered by your model. You can easily create a web interface and APIs powered by the model you created, or by a pre-trained model from their library. In this episode Tuhin Srivastava, co-founder of Baseten, explains how the platform empowers data scientists and ML engineers to get their work into production without having to negotiate for help from their application development colleagues.
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Data powers machine learning, but poor data quality is the largest impediment to effective ML today. Galileo is a collaborative data bench for data scientists building Natural Language Processing (NLP) models to programmatically inspect, fix, and track their data across the ML workflow (pre-training, post-training, and post-production) – no more Excel sheets or ad hoc Python scripts. Get meaningful gains in your model performance fast and dramatically reduce data labeling and procurement costs, while seeing 10x faster ML iterations. Galileo is offering listeners a free 30-day trial and a 30% discount on the product thereafter. This offer is available until Aug 31, so go to themachinelearningpodcast.com/galileo and request a demo today!
- Do you wish you could use artificial intelligence to drive your business the way Big Tech does, but don’t have a money printer? Graft is a cloud-native platform that aims to make the AI of the 1% accessible to the 99%. Wield the most advanced techniques for unlocking the value of data, including text, images, video, audio, and graphs. No machine learning skills required, no team to hire, and no infrastructure to build or maintain. For more information on Graft or to schedule a demo, visit themachinelearningpodcast.com/graft today and tell them Tobias sent you.
- Predibase is a low-code ML platform without low-code limits. Built on top of our open source foundations of Ludwig and Horovod, our platform allows you to train state-of-the-art ML and deep learning models on your datasets at scale. Our platform works on text, images, tabular, audio, and multi-modal data using our novel compositional model architecture. We allow users to operationalize models on top of the modern data stack through REST and PQL – an extension of SQL that puts predictive power in the hands of data practitioners. Go to themachinelearningpodcast.com/predibase today to learn more and try it out!
- Your host is Tobias Macey and today I’m interviewing Tuhin Srivastava about Baseten, an ML Application Builder for data science and machine learning teams
- How did you get involved in machine learning?
- Can you describe what Baseten is and the story behind it?
- Who are the target users for Baseten and what problems are you solving for them?
- What are some of the typical technical requirements for an application that is powered by a machine learning model?
- In the absence of Baseten, what are some of the common utilities/patterns that teams might rely on?
- What kinds of challenges do teams run into when serving a model in the context of an application?
- There are a number of projects that aim to reduce the overhead of turning a model into a usable product (e.g. Streamlit, Hex, etc.). What is your assessment of the current ecosystem for lowering the barrier to product development for ML and data science teams?
- Can you describe how the Baseten platform is designed?
- How have the design and goals of the project changed or evolved since you started working on it?
- How do you handle sandboxing of arbitrary user-managed code to ensure security and stability of the platform?
- How did you approach the system design to allow for mapping application development paradigms into a structure that was accessible to ML professionals?
- Can you describe the workflow for building an ML powered application?
- What types of models do you support? (e.g. NLP, computer vision, timeseries, deep neural nets vs. linear regression, etc.)
- How do the monitoring requirements shift for these different model types?
- What other challenges are presented by these different model types?
- What are the limitations in size/complexity/operational requirements that you have to impose to ensure a stable platform?
- What is the process for deploying model updates?
- For organizations that are relying on Baseten as a prototyping platform, what are the options for taking a successful application and handing it off to a product team for further customization?
- What are the most interesting, innovative, or unexpected ways that you have seen Baseten used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Baseten?
- When is Baseten the wrong choice?
- What do you have planned for the future of Baseten?
- From your perspective, what is the biggest barrier to adoption of machine learning today?
- Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email firstname.lastname@example.org with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
- React Monaco
- Dall-E 2
- Weights and Biases
The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
Support The Machine Learning Podcast
Predibase’s founders saw the pain of getting ML models developed and into production, taking up to a year even at leading tech companies like Uber, so they built internal platforms that drastically lowered the time-to-value and increased access. The key was taking a “declarative approach” to machine learning, which Piero Molino (CEO) introduced with Ludwig, an open source framework for creating deep learning models with 8,400+ GitHub stars, more than 100 contributors, and thousands of monthly downloads. With Ludwig, tasks that took months to years were handed off to teams in thirty minutes, with just six lines of human-readable configuration defining an entire machine learning pipeline.
Now with Predibase, we are bringing the power of declarative machine learning built on top of Ludwig to broader organizations with our enterprise platform. Just as Infrastructure as Code simplified IT, Predibase’s machine learning (ML) platform allows users to focus on the “what” of their ML models rather than the “how”, breaking free of the usual limits in low-code systems and bringing down the time-to-value of ML projects from years to days.
Go to themachinelearningpodcast.com/predibase to learn more and try it for yourself!