TensorFlow.js does just that. Browsers such as Chrome pioneer and support standards such as WebAssembly, WebGL, WebGPU, and JavaScript, upon which frameworks such as TensorFlow.js are built.
Web Machine Learning, or WebML, is the fastest-growing ML ecosystem, and in a few years' time it could become the largest and most popular ecosystem for deploying machine learning on the web.
There are several advantages to integrating machine learning on the client side, including:
Privacy – nothing is sent to a server
Offline – models run on the device without a network connection
Lower latency – there is no server round trip
Reduced cost – no need to rent expensive server-side GPUs, CPUs, and RAM, or pay for the bandwidth of sending data
Zero installation of tools or libraries
Leverage the scale and potential of the web
Decision Forest models available on the client side
You can use state-of-the-art decision forest models, such as random forests and gradient-boosted trees, trained on your own data within the TensorFlow.js ecosystem.
You can train a customized decision forest model on your own dataset in a Python Colab notebook. This model can then be exported from the Python SavedModel format, using the regular TensorFlow.js converter, into a WebML-capable model in the model.json format that can run in the browser.
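As a rough sketch, the export step might look like this from the command line. The paths here are placeholders, and the exact flags depend on your model type, so check the converter documentation for your setup:

```shell
# Install the converter (shipped with the tensorflowjs pip package)
pip install tensorflowjs

# Convert a TensorFlow SavedModel into the browser-friendly model.json format.
# ./saved_model and ./web_model are illustrative paths.
tensorflowjs_converter \
    --input_format=tf_saved_model \
    ./saved_model \
    ./web_model
```

The output directory will contain a model.json file plus binary weight shards that the browser fetches on demand.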
The original decision forest algorithms were written in C++ but have now been compiled to WebAssembly and integrated with TensorFlow.js. This allows TensorFlow.js to support all the latest features while remaining very performant.
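A minimal sketch of loading such a converted decision forest model in the browser, assuming a hypothetical ./web_model/model.json path and the @tensorflow/tfjs-tfdf companion package (the feature name and input values below are placeholders, not part of any real model):

```javascript
import * as tf from '@tensorflow/tfjs';
import * as tfdf from '@tensorflow/tfjs-tfdf';

async function run() {
  // The decision forest inference code runs as WebAssembly under the hood.
  const model = await tfdf.loadTFDFModel('./web_model/model.json');

  // The expected input format depends on how the model was trained;
  // a dictionary of named feature tensors is shown here for illustration.
  const prediction = await model.executeAsync({feature1: tf.tensor1d([1.2])});
  prediction.print();
}

run();
```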
MediaPipe provides customizable on-device solutions, including web-based implementations that are part of the WebML ecosystem. MediaPipe develops models in C++ and then compiles them to WASM so that they can run in the browser. Popular MediaPipe models are ported to TensorFlow.js.
With MediaPipe Studio you have solutions that run entirely in the browser, such as hand gesture recognition, and you can integrate these solutions into your web app with just a few lines of code.
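As an illustrative sketch, hand gesture recognition can be wired up via the @mediapipe/tasks-vision package roughly like this. The model URL and the image element are assumptions for the example:

```javascript
import {FilesetResolver, GestureRecognizer} from '@mediapipe/tasks-vision';

async function setupGestureRecognition() {
  // Fetch the WASM files that back the vision tasks.
  const vision = await FilesetResolver.forVisionTasks(
      'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm');

  // 'gesture_recognizer.task' is a placeholder model asset path.
  const recognizer = await GestureRecognizer.createFromOptions(vision, {
    baseOptions: {modelAssetPath: 'gesture_recognizer.task'},
    runningMode: 'IMAGE',
  });

  // Run recognition on an image already present in the page.
  const image = document.getElementById('inputImage');
  const result = recognizer.recognize(image);
  console.log(result.gestures);
}

setupGestureRecognition();
```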
Visual Blocks for ML
This is a low-code/no-code JS framework that can be used to build ML-powered web apps.
Performance improvements to WebGL
Performance improvements have been made to the WebGL backend at the individual op level for ops like Conv2D, Conv2DTranspose, and ScatterND; you get them automatically when you use the latest version of TensorFlow.js in your projects.
Models such as AR Portrait Depth, the BlazePose detector, and the Universal Sentence Encoder benefit from the above optimizations.
See https://storage.googleapis.com/tfjs-performance/3d_photo/index.html for more details.
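To see whether an individual op benefits in your own project, a rough sketch using tf.time on the WebGL backend might look like this (the tensor shapes are arbitrary):

```javascript
import * as tf from '@tensorflow/tfjs';

async function benchmarkConv2d() {
  // Explicitly select the WebGL backend (the default in most browsers).
  await tf.setBackend('webgl');
  await tf.ready();

  // Time a Conv2D call, one of the ops that received per-op optimizations.
  const x = tf.randomNormal([1, 128, 128, 3]);
  const filter = tf.randomNormal([3, 3, 3, 16]);
  const {kernelMs, wallMs} =
      await tf.time(() => tf.conv2d(x, filter, 1, 'same'));
  console.log(`kernel: ${kernelMs} ms, wall: ${wallMs} ms`);
}

benchmarkConv2d();
```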
Chrome’s support for WebGPU
Chrome is adding support for WebGPU in Chrome stable, bringing the power of over a billion GPUs to the web. This means you can run larger models, such as diffusion models, in the browser via TensorFlow.js. A standard 512×512 image from such a generative model can be created in around 10 seconds, whereas the WebGL backend was about three times slower. As a result, we are going to see larger and more complex models being pushed to the client side, resulting in cost savings.
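A minimal sketch of opting into the WebGPU backend, assuming the separate @tensorflow/tfjs-backend-webgpu package, with a fallback to WebGL for browsers that do not support WebGPU yet:

```javascript
import * as tf from '@tensorflow/tfjs';
// The WebGPU backend ships as a separate package.
import '@tensorflow/tfjs-backend-webgpu';

async function selectBackend() {
  // navigator.gpu is only defined in browsers with WebGPU support.
  if (navigator.gpu) {
    await tf.setBackend('webgpu');
  } else {
    await tf.setBackend('webgl');
  }
  await tf.ready();
  console.log('Active backend:', tf.getBackend());
}

selectBackend();
```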
Backend parity and compatibility with other forms of TensorFlow
Almost 93% op parity has been achieved across all backends (WebGL, WebGPU, WASM, and plain JS), so users of TensorFlow.js can be confident that the code they write will behave consistently on any platform.
TensorFlow.js is the only form of TensorFlow that can digest all other forms of TensorFlow, be it TensorFlow Lite or TensorFlow Python models. This means we can run TensorFlow Lite models and TensorFlow Python models in the browser via TensorFlow.js.
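As a hedged sketch, a TensorFlow Lite model can be run in the browser via the @tensorflow/tfjs-tflite package roughly like this. The model URL and input shape below are placeholders that depend entirely on the model you load:

```javascript
import * as tf from '@tensorflow/tfjs';
import * as tflite from '@tensorflow/tfjs-tflite';

async function runLiteModel() {
  // 'model.tflite' is a placeholder URL for any TFLite model file.
  const model = await tflite.loadTFLiteModel('model.tflite');

  // The input shape must match the loaded model; [1, 224, 224, 3] is
  // a common shape for image classifiers and is used here for illustration.
  const input = tf.zeros([1, 224, 224, 3]);
  const output = model.predict(input);
  output.print();
}

runLiteModel();
```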