onnxruntime
@onnxruntime
Cross-platform training and inferencing accelerator for machine learning models.
ID:1041831598415523841
http://onnxruntime.ai 17-09-2018 23:29:44
291 Tweets
1.3K Followers
43 Following
Run PyTorch models in the browser, on mobile and desktop, with #onnxruntime, in your language and development environment of choice 👉 onnxruntime.ai/blogs/pytorch-…
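For a feel of the workflow, here is a minimal sketch (the toy Linear model and file names are illustrative, not the blog's example) of exporting a PyTorch model to ONNX and running it with onnxruntime in Python; the same .onnx file is what the web and mobile packages consume:

```python
import torch
import onnxruntime as ort

# Stand-in for any PyTorch model (illustrative only).
model = torch.nn.Linear(4, 2)
model.eval()
dummy = torch.randn(1, 4)

# Export to ONNX, the format onnxruntime runs on web, mobile, and desktop.
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Run the exported model locally with onnxruntime.
session = ort.InferenceSession("model.onnx")
outputs = session.run(None, {"input": dummy.numpy()})
print(outputs[0].shape)  # (1, 2)
```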
#ONNX Runtime saved the day with our interoperability and ability to run locally on-client and/or in the cloud! Our lightweight solution gave them the performance they needed, with quantization & configuration tooling. Learn how they achieved this in this blog!
cloudblogs.microsoft.com/opensource/202…
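For context, the quantization tooling mentioned above ships with onnxruntime; a minimal sketch (file names are placeholders):

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Dynamic quantization: store the fp32 weights as int8 to shrink the model
# and speed up CPU inference; activations are quantized on the fly at runtime.
quantize_dynamic(
    model_input="model.onnx",         # original fp32 model
    model_output="model.quant.onnx",  # smaller int8 model
    weight_type=QuantType.QInt8,
)
```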
Join us live TODAY! We will be talking to Akhila Vidiyala and Devang Aggarwal on the AI Show with Cassie! We will show how developers can use #huggingface #optimum #Intel to quantize models and then use the #OpenVINO Execution Provider for #ONNXRuntime to accelerate performance.
👉 aka.ms/aishowlive
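If you want to try the OpenVINO path yourself, a minimal sketch (assumes the onnxruntime-openvino package is installed; the model file is a placeholder):

```python
import onnxruntime as ort

# Ask onnxruntime for the OpenVINO Execution Provider, falling back to CPU.
session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # shows which providers were actually loaded
```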
Imagine the frustration of applying optimization tricks, only to find that copying data to the GPU slows down your 'MUST-BE-FAST' inference... 🥵
🤗 Optimum v1.5.0 added onnxruntime IOBinding support to reduce your memory footprint.
👉 github.com/huggingface/op…
More ⬇️
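Under the hood, Optimum builds on onnxruntime's IOBinding API; a sketch of the raw API (tensor names and model file are placeholders, and a CUDA build of onnxruntime is assumed):

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
binding = session.io_binding()

x = np.random.randn(1, 4).astype(np.float32)
binding.bind_cpu_input("input", x)  # bind the input buffer once, up front
binding.bind_output("output")       # let onnxruntime allocate the output buffer

# Run without building a feed dict on every call, avoiding redundant copies.
session.run_with_iobinding(binding)
result = binding.copy_outputs_to_cpu()[0]
```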
Want to use TensorRT as your inference engine for its speedups on GPU, but don't want to go through the compilation hassle? We've got you covered with 🤗 Optimum! With one line, leverage TensorRT through onnxruntime! Check out more at hf.co/docs/optimum/o…
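That one line looks roughly like this sketch (the model id is illustrative and the exact kwargs follow recent Optimum versions; see the docs linked above):

```python
from optimum.onnxruntime import ORTModelForSequenceClassification

# onnxruntime's TensorRT Execution Provider builds the engine for you:
# no manual TensorRT compilation step.
model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",  # illustrative model id
    export=True,                           # convert the transformers weights to ONNX
    provider="TensorrtExecutionProvider",
)
```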
📣 The new version of #ONNXRuntime, v1.13.0, was just released!!!
Check out the release notes and video from the engineering team to learn more about what was in this release!
👉 github.com/microsoft/onnx…
📽️ youtu.be/vo9vlR-TRK4
Finally, tokenization with SentencePiece BPE works as expected in #NodeJS #JavaScript with the tokenizers library 🎉! Now getting 'invalid expand shape' errors when passing the text tokens' encoded ids to the MiniLM onnxruntime-converted Microsoft Research model huggingface.co/microsoft/Mult…
🎭 The hardware optimization floodgates are open! 🔥
Diffusers 0.3.0 supports an experimental ONNX exporter and pipeline for Stable Diffusion 🎨
To find out how to export your own checkpoint and run it with onnxruntime, check the release notes:
github.com/huggingface/di…
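The pipeline from those release notes looks roughly like this (a sketch per diffusers 0.3.0, which named the class StableDiffusionOnnxPipeline; later releases renamed it):

```python
from diffusers import StableDiffusionOnnxPipeline

# Load the ONNX export of Stable Diffusion and run it on onnxruntime's CPU provider.
pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CPUExecutionProvider",
)
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```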
💡 A Senior Research & Development Engineer at Deltatre, @tinux80 is also a #MicrosoftMVP and an Intel Software Innovator.
👉 Don't miss his talk on #AzureML and #Onnx Runtime at #WPC2022!
🎟 Buy your ticket: wpc2022.eventbrite.it
Microsoft Italia
Gerald Versluis: What about a video on the ONNX runtime?
Here is the official documentation: devblogs.microsoft.com/xamarin/machin…
And a MAUI example:
github.com/microsoft/onnx…
The natural language processing library Apache OpenNLP is now integrated with ONNX Runtime! Get the details and a tutorial explaining its use on the blog: msft.it/6013jfemt #OpenSource
In this article, a community member used #ONNXRuntime to try out a GPT-2 model that generates English sentences, from the Ruby language:
dev.to/kojix2/text-ge…
Come join us for the hands-on lab (September 28, 1-3pm) to learn about accelerating your ML models via the ONNX Runtime framework on Intel CPUs and GPUs... some surprise goodies as well! #IntelON #iamintel #intelarc Intel Graphics Intel Software Lisa Pearce
intel.com/content/www/us…