This fork (demo)
- Added sd-turbo image-to-image mode using the vae encoder to initialize latents
- Added camera input option for the above (I can get up to 3 fps with an RTX 3070 Ti laptop GPU on Windows)
- Added a helper script that casts instance-normalization nodes to float32 (instance normalization has critical numerical issues in float16)
- Removed sd-turbo safety checks ;)
- Run times of the VAE encoder and UNet are inconsistent and are sometimes up to 15x slower
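The image-to-image initialization in the first bullet can be sketched with plain arrays. This is illustrative only, not this repo's code: `LATENT_SCALE` is the common Stable Diffusion convention, and the linear noise blend stands in for the scheduler-dependent noising a real pipeline performs at a chosen timestep.

```javascript
// Illustrative image-to-image latent initialization with plain arrays.
// LATENT_SCALE and the linear blend are assumptions for this sketch: real
// pipelines scale the VAE encoder output, then add scheduler-dependent
// noise at a chosen timestep rather than blending linearly.
const LATENT_SCALE = 0.18215; // common SD convention; check the model config

function initLatents(vaeEncoded, noise, strength) {
  // strength = 0 keeps the encoded input image; strength = 1 is pure
  // noise, i.e. ordinary text-to-image generation.
  if (vaeEncoded.length !== noise.length) {
    throw new Error("latent and noise tensors must have the same size");
  }
  return vaeEncoded.map(
    (v, i) => (1 - strength) * v * LATENT_SCALE + strength * noise[i]
  );
}
```

In the actual demo, `vaeEncoded` would be the flattened output tensor of the VAE encoder run on the camera frame, and `noise` a Gaussian sample of the same shape.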
Example prompt: "Benjamin Netanyahu holding a white pigeon"
Run ONNX models in the browser with WebNN. The developer preview unlocks interactive ML on the web that benefits from reduced latency, enhanced privacy and security, and GPU acceleration from DirectML.
WebNN Developer Preview website (the original upstream).
NOTE: Currently, the supported platforms are Edge/Chromium (support for other platforms is coming soon).
The website provides four scenarios based on different ONNX pre-trained deep learning models.
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
This Stable Diffusion 1.5 model has been optimized to work with WebNN. This model is licensed under the CreativeML Open RAIL-M license. For terms of use, please visit here. If you comply with the license and terms of use, you have the rights described therein. By using this model, you accept the terms.
This model is meant to be used with the corresponding sample on this repo for educational or testing purposes only.
SD-Turbo is a fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation. In the demo, you can generate an image in 2s on AI PC devices by leveraging WebNN API, a dedicated low-level API for neural network inference hardware acceleration.
This Stable Diffusion Turbo model has been optimized to work with WebNN. This model is licensed under the STABILITY AI NON-COMMERCIAL RESEARCH COMMUNITY LICENSE AGREEMENT. For terms of use, please visit the Acceptable Use Policy. If you comply with the license and terms of use, you have the rights described therein. By using this model, you accept the terms.
This model is meant to be used with the corresponding sample on this repo for educational or testing purposes only.
Segment Anything is a new AI model from Meta AI that can "cut out" any object. In the demo, you can segment any object from your uploaded images.
This Segment Anything Model has been optimized to work with WebNN. This model is licensed under the Apache-2.0 License. For terms of use, please visit the Code of Conduct. If you comply with the license and terms of use, you have the rights described therein. By using this model, you accept the terms.
This model is meant to be used with the corresponding sample on this repo for educational or testing purposes only.
Whisper Base is a pre-trained model for automatic speech recognition (ASR) and speech translation. In the demo, you can experience the speech to text feature by using on-device inference powered by WebNN API and DirectML, especially the NPU acceleration.
This Whisper-base model has been optimized to work with WebNN. This model is licensed under the Apache-2.0 license. For terms of use, please visit the Intended use. If you comply with the license and terms of use, you have the rights described therein. By using this model, you accept the terms.
This model is meant to be used with the corresponding sample on this repo for educational or testing purposes only.
MobileNet and ResNet models perform image classification - they take images as input and classify the major object in the image into a set of pre-defined classes.
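The classification step described above typically works on raw model outputs: one logit per class, converted to probabilities with softmax and matched against a label list. A generic sketch of that postprocessing (not code from this repo; the label names are placeholders):

```javascript
// Turn raw classification logits into probabilities.
function softmax(logits) {
  const max = Math.max(...logits); // subtract the max for numerical stability
  const exps = logits.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Pick the highest-probability class and look up its label.
function topClass(logits, labels) {
  const probs = softmax(logits);
  let best = 0;
  for (let i = 1; i < probs.length; i++) {
    if (probs[i] > probs[best]) best = i;
  }
  return { label: labels[best], probability: probs[best] };
}
```

In the demos, `logits` would be the output tensor of MobileNet or ResNet and `labels` the ImageNet class list.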
Enabling WebNN
- Requires Windows 11 v21H2 (DML 1.6.0) or higher and a GPU, for support of "expand" and "layerNormalization" ops. See WebNN implementation status
- Requires Chromium desktop (e.g. Chrome, Edge) 129 or higher
- Enable `chrome://flags/#web-machine-learning-neural-network` and relaunch the browser
- Enable the graphics/hardware acceleration setting in the browser
- Make sure your system graphics settings for the browser use the high-performance / discrete GPU
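You can confirm the flag took effect from the browser console. A minimal feature-detection sketch (generic, not this repo's code; outside a WebNN-enabled browser it simply reports unavailability):

```javascript
// Minimal WebNN feature detection. navigator.ml is the WebNN entry point
// and only exists in a WebNN-enabled Chromium build with the flag above
// turned on; elsewhere this resolves to an "unavailable" message.
async function checkWebNN() {
  const ml = globalThis.navigator?.ml;
  if (!ml) {
    return "WebNN is not available in this environment";
  }
  try {
    // deviceType is a hint; "gpu" requests GPU-backed (DirectML) execution.
    await ml.createContext({ deviceType: "gpu" });
    return "WebNN GPU context created";
  } catch (e) {
    return `WebNN present, but context creation failed: ${e.message}`;
  }
}

checkWebNN().then((msg) => console.log(msg));
```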
Install dependencies:

```sh
cd webnn-developer-preview
npm install
```

Run the website on localhost:

```sh
npm run dev
```
This starts a dev server that serves the WebNN Developer Preview demos on localhost in your WebNN-enabled browser.

If you are running the demos for the first time, run the following command to download the required models. You can also modify fetch_models.js to add a network proxy configuration if needed:

```sh
npm run fetch-models
```
WebNN is a living specification and still subject to breaking changes, which may impact the samples depending on your browser version. Recent changes:

- 2024-07-24: `MLContextOptions::MLPowerPreference` rename `auto` to `default` - Chromium change
- 2024-07-24: Allow `MLGraphBuilder.build()` to be called only once - spec change, Chromium change, ORT change, sample change
- 2024-07-22: `LSTM`/`GRU` activation enum `MLRecurrentNetworkActivation` - spec change, Chromium change
- 2024-07-22: `argMin`/`argMax` change to take a scalar `axis` parameter - spec change, Chromium change
- 2024-07-15: `argMin`/`argMax` add an `outputDataType` parameter - spec change, Chromium change, sample change
- 2024-06-12: `softmax` `axis` argument - spec change, Chromium change
- 2024-06-07: Remove incompatible `MLActivations` for recurrent ops - spec change, Chromium change, baseline change
| Model | Known compatible Chromium version |
|---|---|
| Segment Anything | 129.0.6617.0 |
| Stable Diffusion Turbo | 129.0.6617.0 |
| Stable Diffusion 1.5 | 129.0.6617.0 |
| Whisper Base | 129.0.6617.0 |
| ResNet50 | 129.0.6617.0 |
| MobileNet V2 | 129.0.6617.0 |
| EfficientNet Lite4 | 129.0.6617.0 |
You can check the version via "about://version" in the address bar. In Chrome, look for the "Google Chrome" entry. In Edge, note the "Chromium version" entry.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.