Image classification from Amazon SageMaker to your phone with TensorFlowJS

Pierre Lucas
3 min readNov 5, 2020

Amazon SageMaker offers built-in algorithms that let you train machine learning models quickly, shortening your time to market.

The built-in algorithm Amazon SageMaker provides for image classification is based on ResNet, as described in its documentation.

Once trained, these models are saved in the MXNet format. This format works well for many applications, but when it comes to loading a model in a React Native mobile app developed with Expo, the number of possible formats narrows to one: TensorFlowJS.

This article describes the three steps to convert the Amazon SageMaker image classification built-in algorithm from MXNet to TensorFlowJS:

  • From MXNet to ONNX,
  • From ONNX to TensorFlow,
  • From TensorFlow to TensorFlowJS.

To leverage the simplicity of Amazon SageMaker, these steps should be performed in a Jupyter Notebook hosted on an ml.t2.medium instance.

Step 1 — From MXNet to ONNX

Amazon SageMaker saves the trained model in an S3 bucket as a model.tar.gz file. This archive contains two files describing the model: its architecture (.json) and its weights (.params). This is the MXNet format.

Unfortunately, Amazon SageMaker saves the model in an old version of MXNet, thus creating conversion issues if newer MXNet versions are used.

An easy workaround is to do the conversion on a Jupyter Notebook with a conda_mxnet_p27 kernel, which provides the right versions of MXNet and ONNX, respectively 1.6.0 and 1.3.0.

The following code converts your model from MXNet to ONNX.
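A minimal sketch of this conversion is shown below, using MXNet's ONNX export module. The archive and file names are assumptions based on the default artifacts SageMaker's image classification algorithm produces; adjust them, as well as the input shape, to match your own training job.

```python
import tarfile

import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

# Extract the model artifacts downloaded from S3
# (the archive name is the SageMaker default)
with tarfile.open("model.tar.gz") as tar:
    tar.extractall()

# The archive contains the architecture (.json) and the weights (.params);
# these file names are assumptions — check the extracted archive for yours
sym = "image-classification-symbol.json"
params = "image-classification-0010.params"

# Export to ONNX; the input shape must match the training configuration
# (here a batch of 1, 3 channels, 224x224 images)
onnx_mxnet.export_model(
    sym, params, [(1, 3, 224, 224)], np.float32, "model.onnx"
)
```

Note that the input shape passed to export_model must match the image_shape hyperparameter used during training, otherwise the exported graph will reject your inputs at inference time.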

Once the model is successfully converted to ONNX, you can proceed to the second stage.

Step 2 — From ONNX to TensorFlow

Like the first conversion, the conversion from ONNX to TensorFlow requires specific versions of the libraries:

  • TensorFlow: 2.2.0,
  • ONNX: 1.7.0,
  • ONNX-TF: 1.6.0.

However, unlike the first conversion, this conversion should be done on a Jupyter Notebook with a conda_tensorflow2_p36 kernel.
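One way to pin these versions on that kernel is shown below (package names as published on PyPI):

```shell
# Pin the library versions required for the ONNX to TensorFlow conversion
pip install tensorflow==2.2.0 onnx==1.7.0 onnx-tf==1.6.0
```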

The following code converts your model from ONNX to TensorFlow SavedModel format.
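A sketch of this step using the onnx-tf backend is shown below; the input and output paths are assumptions carried over from the previous step.

```python
import onnx
from onnx_tf.backend import prepare

# Load the ONNX model produced in step 1
onnx_model = onnx.load("model.onnx")

# Build a TensorFlow representation of the ONNX graph
tf_rep = prepare(onnx_model)

# Export it as a TensorFlow SavedModel directory
tf_rep.export_graph("saved_model")
```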

Once the model is successfully converted to the TensorFlow SavedModel format, you can proceed to the third stage.

Step 3 — From TensorFlow to TensorFlowJS

The last conversion should be done on the same kernel as the second conversion, namely conda_tensorflow2_p36.

The following code converts your model from TensorFlow to TensorFlowJS.
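This last step can be done with the official tensorflowjs_converter command-line tool, as sketched below. The saved_model and web_model directory names are assumptions; web_model is where the converted files will be written.

```shell
# Install the converter
pip install tensorflowjs

# Convert the SavedModel from step 2 into the TensorFlowJS format
tensorflowjs_converter \
    --input_format=tf_saved_model \
    saved_model \
    web_model
```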

Once successfully completed, you should obtain a folder with:

  • A model.json file containing the model topology and a manifest of the weight files,
  • A set of sharded weight files saved in binary format.

These files are used to load the model on a mobile app with TensorFlowJS.

Conclusion

This article explained how to convert a built-in image classification model trained on Amazon SageMaker from MXNet to TensorFlowJS in three easy steps, enabling easy integration into a React Native mobile app developed with Expo.

An upcoming article will explain how to load this model and run inference in an Expo app.
