Build ML Model Containers. Automatically.
Use Chassis to turn your machine learning models into portable container images that can run just about anywhere. For a deeper dive into what Chassis is and how it works, learn more here.
| Easy to Connect | Requires Installation |
| --- | --- |
| Quickest and easiest way to start using Chassis. No DevOps experience required. *Preferred.* Connect to Chassis Service | More involved installation. Requires a moderate understanding of Kubernetes, Helm, and Docker. Install Chassis on Your Machine |
How it Works
After connecting to the Chassis service, your workflow will involve a few simple steps:
Set Up Environment
Create your workspace environment, open a Jupyter Notebook or other Python editor, and install the Chassisml SDK.
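For example, assuming the SDK is published on PyPI as `chassisml`, you can install it with `pip install chassisml` and confirm your environment is ready:

```python
# Assumes the SDK was installed first, e.g. with `pip install chassisml`.
# A successful import means your environment is ready for the steps below.
import chassisml
```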
Load Your Model
Train your model or load your pre-trained model into memory (`.pth`, `.pkl`, `.h5`, `.joblib`, or other file format - all model types and formats are supported!).
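For instance, a pre-trained scikit-learn classifier saved with `pickle` could be loaded like this (the framework and the `model.pkl` path are illustrative assumptions; any framework works):

```python
import pickle

# Load a pre-trained model from disk (hypothetical file path).
with open("model.pkl", "rb") as f:
    model = pickle.load(f)
```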
Write Process Function
The `process` function uses your model to perform any required preprocessing and inference on the incoming `input_bytes` data.
```python
def process(input_bytes):
    # preprocess
    data = preprocess(input_bytes)

    # run inference
    predictions = model.predict(data)

    # post process predictions
    formatted_results = postprocess(predictions)

    return formatted_results
```
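As a concrete illustration, the helpers referenced above might look like the following for a scikit-learn classifier that receives a JSON-encoded list of feature vectors (the helper bodies and the JSON input format are assumptions for this sketch, not requirements of Chassis):

```python
import json

import numpy as np


def preprocess(input_bytes):
    # Decode the raw request bytes into a 2-D feature array.
    return np.array(json.loads(input_bytes))


def postprocess(predictions):
    # Return JSON-serializable results.
    return {"predictions": predictions.tolist()}
```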
Initialize Client and Create Chassis Model
NOTE: Depending on how you connect to the service, you will need the URL at which the service can be accessed. If you are connecting to the publicly-hosted version, make sure to sign up to access this URL. Otherwise, if you are deploying manually and connecting to a locally running instance, your URL will look something like http://localhost:5000.
Once you have this URL, replace `<chassis-instance-url>` in the line below with your URL.
```python
import chassisml

chassis_client = chassisml.ChassisClient("<chassis-instance-url>")
chassis_model = chassis_client.create_model(process_fn=process)
```
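Before publishing, you can sanity-check the model locally. The sketch below assumes the Chassis model object exposes a `test()` helper that runs your `process` function against a sample input file; the file path is hypothetical, so adapt it to your own sample data:

```python
# Hypothetical sample file containing input your process function understands.
sample_filepath = "./sample_input.json"

# Assumes the SDK provides a local test helper on the Chassis model object.
results = chassis_model.test(sample_filepath)
print(results)
```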
Publish Chassis Model
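Publishing pushes the built image to a container registry, so you will need registry credentials. One way to supply the `dockerhub_user` and `dockerhub_pass` variables used below without hard-coding them is to prompt for them (a sketch; any credential source works):

```python
import getpass

# Prompt for Docker Hub (or other registry) credentials at runtime.
dockerhub_user = input("Docker Hub username: ")
dockerhub_pass = getpass.getpass("Docker Hub password: ")
```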
```python
# Build the container image and push it to your registry.
response = chassis_model.publish(
    model_name="Sample ML Model",
    model_version="0.0.1",
    registry_user=dockerhub_user,
    registry_pass=dockerhub_pass,
)
```
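The publish call kicks off a remote build job. The sketch below assumes the response includes a `job_id` and that the client exposes a `block_until_complete` helper; if your SDK version differs, adapt accordingly:

```python
# Assumes the publish response is a dict containing a job identifier
# and that the client can block until the remote build finishes.
job_id = response.get("job_id")
final_status = chassis_client.block_until_complete(job_id)
print(final_status)
```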
Run and Query Model
Run your model locally or on your preferred serving platform and begin making inference calls right away.