Pinned
- huggingface/optimum (Public): Accelerate inference and training of Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization tools.
- huggingface/optimum-onnx (Public): Optimum ONNX: Export your model to ONNX and run inference with ONNX Runtime.
- huggingface/optimum-neuron (Public): Easy, fast and very cheap training and inference on AWS Trainium and Inferentia chips.
- Reducing_the_Transformer_Architecture_to_a_Minimum (Public): An implementation of the techniques introduced in the paper "Reducing the Transformer Architecture to a Minimum". Python · 5
- oussamakharouiche/Language-Assisted-RL-Agent- (Public): Python · 1