onnxruntime (@onnxruntime)'s Twitter Profile
onnxruntime

@onnxruntime

Cross-platform training and inferencing accelerator for machine learning models.

ID:1041831598415523841

http://onnxruntime.ai · Joined 17-09-2018 23:29:44

291 Tweets

1.3K Followers

43 Following

onnxruntime (@onnxruntime)'s Twitter Profile Photo

Run PyTorch models in the browser, on mobile, and on desktop with ONNX Runtime, in your language and development environment of choice 🚀 onnxruntime.ai/blogs/pytorch-…

Szymon (@Szymon_Lorenz)'s Twitter Profile Photo

Developers, don't overlook the power of Swift Package Manager! It simplifies dependency management and promotes modularity. Plus, exciting news: ONNXRuntime just added support for SPM!

onnxruntime (@onnxruntime)'s Twitter Profile Photo

ONNX Runtime saved the day with its interoperability and its ability to run locally on-client and/or in the cloud! Our lightweight solution gave them the performance they needed with quantization & configuration tooling. Learn how they achieved this in the blog!
cloudblogs.microsoft.com/opensource/202…

onnxruntime (@onnxruntime)'s Twitter Profile Photo

Give yourself a treat (like this adorable 🐶 deserves) and read this blog on how to use #ONNX Runtime on #Android!

devblogs.microsoft.com/surface-duo/on…
onnxruntime (@onnxruntime)'s Twitter Profile Photo

Join us live TODAY! We will be talking to Akhila Vidiyala and Devang Aggarwal on the AI Show with Cassie! We will show how developers can quantize models and accelerate inference performance.
👇
aka.ms/aishowlive

onnxruntime (@onnxruntime)'s Twitter Profile Photo

In this blog, we discuss how to make huge models smaller and faster with the Neural Networks Compression Framework (NNCF) and ONNX Runtime!

👇
cloudblogs.microsoft.com/opensource/202…

ONNX (@onnxai)'s Twitter Profile Photo

We are seeking your input to shape the ONNX roadmap! Proposals are being collected until January 24, 2023, and will be discussed in February.

Submit your ideas at forms.microsoft.com/pages/response…

Jingya Huang (@Jhuaplin)'s Twitter Profile Photo

Imagine the frustration of, after applying optimization tricks, finding that copying data to the GPU slows down your 'MUST-BE-FAST' inference... 🥵

🤗 Optimum v1.5.0 added onnxruntime IOBinding support to reduce your memory footprint.

👀 github.com/huggingface/op…

More ⬇️

efxmarty (@efxmarty)'s Twitter Profile Photo

Want to use TensorRT as your inference engine for its speedups on GPU but don't want to go into the compilation hassle? We've got you covered with 🤗 Optimum! With one line, leverage TensorRT through @onnxruntime! Check out more at hf.co/docs/optimum/o…
onnxruntime (@onnxruntime)'s Twitter Profile Photo

📣 ONNX Runtime v1.13.0 was just released!

Check out the release notes and video from the engineering team to learn more about what's in this release!

📝 github.com/microsoft/onnx…
📽️ youtu.be/vo9vlR-TRK4

Loreto Parisi (@loretoparisi)'s Twitter Profile Photo

Finally, tokenization with SentencePiece BPE now works as expected with the tokenizers library 🚀! Now getting 'invalid expand shape' errors when passing the text tokens' encoded ids to the onnxruntime-converted Microsoft Research MiniLM model huggingface.co/microsoft/Mult…

Anton Lozhkov (@anton_lozhkov)'s Twitter Profile Photo

๐Ÿญ The hardware optimization floodgates are open!๐Ÿ”ฅ

Diffusers 0.3.0 supports an experimental ONNX exporter and pipeline for Stable Diffusion ๐ŸŽจ

To find out how to export your own checkpoint and run it with onnxruntime, check the release notes:

github.com/huggingface/diโ€ฆ

๐Ÿญ The hardware optimization floodgates are open!๐Ÿ”ฅ Diffusers 0.3.0 supports an experimental ONNX exporter and pipeline for Stable Diffusion ๐ŸŽจ To find out how to export your own checkpoint and run it with @onnxruntime, check the release notes: github.com/huggingface/diโ€ฆ
OverNet Education (@OverNetE)'s Twitter Profile Photo

💡 Senior Research & Development Engineer at @deltatre, @tinux80 is also a #MicrosoftMVP and an Intel Software Innovator.
📊 Don't miss his talk on #AzureML and #Onnx Runtime at #WPC2022!
👉 Get your ticket: wpc2022.eventbrite.it
Microsoft Italia
Santosh Dahal (@exendahal)'s Twitter Profile Photo

Gerald Versluis What about a video on ONNX Runtime?
Here is the official documentation: devblogs.microsoft.com/xamarin/machin…
And a MAUI example:
github.com/microsoft/onnx…

Open at Microsoft (@OpenAtMicrosoft)'s Twitter Profile Photo

The natural language processing library Apache OpenNLP is now integrated with ONNX Runtime! Get the details and a tutorial explaining its use on the blog: msft.it/6013jfemt #OpenSource
onnxruntime (@onnxruntime)'s Twitter Profile Photo

In this article, a community member uses ONNX Runtime to try out the GPT-2 model, generating English sentences from Ruby:

dev.to/kojix2/text-ge…

Hisham Chowdhury (@hishamchow)'s Twitter Profile Photo

Come join us for the hands-on lab (September 28, 1-3pm) to learn about accelerating your ML models via the ONNX Runtime framework on Intel CPUs and GPUs. Some surprise goodies as well! Intel Graphics Intel Software Lisa Pearce
intel.com/content/www/us…
