Build ML Model Containers. Automatically.
Turn your machine learning models into portable container images that can run just about anywhere using Chassis. For a deeper dive into what Chassis is and how it works, learn more here.
| Easy to Connect | Requires Installation |
| --- | --- |
| Quickest and easiest way to start using Chassis. No DevOps experience required. | More involved installation. Requires a moderate understanding of Kubernetes, Helm, and Docker. |
| Connect to Chassis Service | Install Chassis on Your Machine |
How it Works
After connecting to the Chassis service, your workflow will involve a few simple steps:
Set Up Environment
Create your workspace environment, open a Jupyter Notebook or other Python editor, and install the Chassisml SDK.
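For example, a minimal setup from the command line might look like this (the environment name is illustrative):

```shell
# Create and activate an isolated environment (name is illustrative)
python -m venv chassis-env
source chassis-env/bin/activate

# Install the Chassisml SDK and a notebook environment
pip install chassisml jupyter
```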
Load Your Model
Train your model or load your pre-trained model into memory (.joblib or any other file format - all model types and formats are supported!).
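As an illustration, the snippet below serializes and reloads a toy stand-in model with the standard library's pickle module so it is self-contained; in practice you would load your own trained model (e.g. with joblib or your framework's own loader):

```python
import pickle

# A toy "model" standing in for a real trained estimator.
# In practice you would load your own file, e.g. joblib.load("model.joblib").
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, values):
        return [1 if v >= self.threshold else 0 for v in values]

# Serialize to disk, then reload - mimicking loading a pre-trained model.
with open("model.pkl", "wb") as f:
    pickle.dump(ThresholdModel(threshold=0.5), f)

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

print(model.predict([0.2, 0.7]))  # -> [0, 1]
```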
Write Process Function
Your process function will use your model to perform any required preprocessing and inference execution on the incoming data.
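One hypothetical shape for such a function, assuming the incoming data arrives as JSON-encoded bytes (the stub model and all names here are illustrative, not part of the Chassis API):

```python
import json

# Stand-in for your own loaded model; replace with your real object.
class _StubModel:
    def predict(self, values):
        return [1 if v >= 0.5 else 0 for v in values]

model = _StubModel()

def process(input_bytes):
    # Preprocess: decode the raw request bytes into model inputs.
    payload = json.loads(input_bytes.decode("utf-8"))
    values = payload["values"]
    # Inference: run the model on the decoded inputs.
    predictions = model.predict(values)
    # Postprocess: return a JSON-serializable result.
    return {"predictions": predictions}

print(process(b'{"values": [0.2, 0.7]}'))  # -> {'predictions': [0, 1]}
```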
Initialize Client and Create Chassis Model
NOTE: Depending on how you connect to the service, you will need to identify the URL where the service is running and accessible. If you are connecting to the publicly-hosted version, make sure to sign up to get access to this URL. If you are instead deploying manually and connecting to a locally running instance, your URL will look something like http://localhost:5000.
Once you have this URL, substitute it for <chassis-instance-url> when initializing the client.
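Assuming the v1 Chassisml SDK, initialization typically looks like the following, where process is the function you wrote in the previous step (this sketch requires a reachable Chassis service, so it won't run on its own):

```python
import chassisml

# Point the client at your running Chassis service.
chassis_client = chassisml.ChassisClient("<chassis-instance-url>")

# Wrap your process function into a Chassis model object.
chassis_model = chassis_client.create_model(process_fn=process)
```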
Publish Chassis Model
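A publish call might look like the sketch below, again assuming the v1 Chassisml SDK; the model name, version, and registry credentials are placeholders you would supply yourself:

```python
# Kick off the container build; credentials are for your image registry.
response = chassis_model.publish(
    model_name="My First Model",        # illustrative name
    model_version="0.0.1",              # illustrative version
    registry_user=docker_user,          # your registry username
    registry_pass=docker_pass,          # your registry password
)

# Wait for the remote build job to finish before pulling the image.
job_id = response.get("job_id")
final_status = chassis_client.block_until_complete(job_id)
```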
Run and Query Model
Run your model locally or on your preferred serving platform and begin making inference calls right away.