𝕏Jin𝕏 (@Jin_Neuron)

Today I tried running SV3D with an H100 on the AI workstation in our school's entrepreneurship Fab lab. I have only tried the default settings so far, but even for images that have a background, it crops the background out and generates a nice 3D video.
←Input image  Output video→
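
For reference, a minimal sketch of the kind of invocation involved, assuming the SV3D release in Stability AI's generative-models repository and its bundled sampling script; the script path, flags, and file names below are assumptions, not details from the post:

```python
# Minimal sketch (assumed setup, not the author's exact commands):
# run Stability AI's SV3D sampling script on one input image with default settings.
import subprocess

subprocess.run(
    [
        "python", "scripts/sampling/simple_video_sample.py",
        "--input_path", "input.png",  # hypothetical input image with a background
        "--version", "sv3d_u",        # SV3D variant; flag value assumed from the repo
    ],
    check=True,
    cwd="generative-models",          # assumed local checkout of the repo
)
```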

Ripple(66%) (@ycdjuduztc)

老雷 has taken profits on Nvidia; the position opened in February last year is up 2.5x.

1. Right now, AI is the only thing that can keep the world moving forward (it is the deciding factor for the Nasdaq).

2. Nvidia's earnings show decelerating growth, because essentially every large customer in the world that was going to buy H100s has already bought them, while smaller customers have not finished commercializing and are staying on the sidelines.

3. Q3 will most likely head lower; get back in at 770-850.

4. Ultimately, Nvidia will still go above 1,300.

Alex (@TickerSymbolYOU)

Elon Musk’s xAI just raised $6B to develop Grok AI and build a supercomputer with 100,000 $NVDA H100 chips.

Here was Jensen’s reaction (probably):

Collins⚡ (@CollinsDeFipen)

𝟱. Pre-Configured and Custom VMs:

InfraX offers pre-configured virtual machines for quick deployment tailored to specific use cases.

𝟲. H100 Access:

Users can access the high-performance H100 GPU for AI, data analytics, and computationally intensive workloads.

$AIGPU (@aigputoken)

$AIGPU Proof of Compute Thread 📈

In this thread we will dive deep into the performance & optimization metrics of 9 RTX 4090s achieving similar KPIs to 1 H100 for half the cost.

Continue reading as we disclose the proof of compute & KPIs ⚡️
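
For context, a rough back-of-the-envelope sketch of how such a comparison can be framed; the spec and price figures below are approximate public numbers assumed for illustration, not the thread's own KPIs:

```python
# Back-of-the-envelope comparison of 9x RTX 4090 vs 1x H100 (assumed figures).
n_4090 = 9
tflops_fp16_4090 = 165.0   # approx. dense FP16 tensor TFLOPS per RTX 4090
vram_4090_gb = 24
price_4090_usd = 1_800     # assumed street price per card

tflops_fp16_h100 = 990.0   # approx. dense FP16 tensor TFLOPS, H100 SXM
vram_h100_gb = 80
price_h100_usd = 30_000    # assumed list price

agg_tflops = n_4090 * tflops_fp16_4090   # ~1,485 TFLOPS in aggregate
agg_vram = n_4090 * vram_4090_gb         # 216 GB total, but sharded across cards
agg_price = n_4090 * price_4090_usd      # ~$16,200

print(f"9x RTX 4090: {agg_tflops:.0f} TFLOPS, {agg_vram} GB (sharded), ${agg_price:,}")
print(f"1x H100:     {tflops_fp16_h100:.0f} TFLOPS, {vram_h100_gb} GB (unified), ${price_h100_usd:,}")
# Note: aggregate TFLOPS ignore interconnect overhead; consumer 4090s lack NVLink,
# so real multi-GPU scaling depends heavily on the workload.
```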

FHILY👑 (@Oluwaphilemon1)

infraX | $INFRA Alright, let's dive into the fascinating features of infraX | $INFRA!

5️⃣ 𝗙𝗘𝗔𝗧𝗨𝗥𝗘𝗦 𝗢𝗙 𝗜𝗻𝗳𝗿𝗮𝗫:

- GPU Lending
- GPU Rental
- AI Node Rental
- Pay as you go Rental
- Custom VM
- Staking
- H100 access

Let's take a close look at how they function…

Tao Ceτi (@ceti_ai)

They're heeeeeeeere!

The $CETI team is busy setting up racks and getting the machines ready because the H100s are HERE.

Who is ready for the next evolution of decentralized AI?  Let's see those comments ⬇️

FS.com UK (@FScom_UK)

Revolutionize your network management with AmpCon™! Effortlessly deploy and automate PicOS® switches remotely, while seamlessly managing your H100 networks through a unified platform.
Learn More: bit.ly/4bFoAPa

const (@const_reborn)

People underestimate how inefficient things are:

Tensorplex beat Meta's Llama model with a cluster 50x smaller, on a 6-month-old Bittensor market.

That's equivalent to five A100 -> H100 hardware upgrades in half a year.

And the difference came from allocation, not compute.
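
For the arithmetic behind the "five upgrades" framing: if each A100 -> H100 step is taken as roughly a 2.2x training-throughput gain (an assumed, workload-dependent figure), five compounded upgrades land at about the claimed 50x:

```python
# Compounding five assumed ~2.2x A100 -> H100 upgrades.
per_upgrade_speedup = 2.2      # assumed per-generation factor; varies by workload
n_upgrades = 5
print(per_upgrade_speedup ** n_upgrades)   # ~51.5, i.e. roughly 50x
```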

Eternal Sammie | ✍️🛠️🔺💙 (@eternal_sammie)

Other 𝗦𝗲𝗿𝘃𝗶𝗰𝗲𝘀 include:

- Access to AI functionalities through API endpoints.
- Staking $INFRA tokens to earn rewards in Ethereum.
- Pre-configured or custom virtual machines for specific needs.
- Instant access to H100 computing power.

Trelis Research (@TrelisResearch)

🎶Fine-tuning on Wikipedia Datasets🎶

I extracted a dataset from Wikipedia to fine-tune Llama 3 on the Irish language.

Thanks to Daniel Han and Unsloth AI - I was able to fine-tune Llama 3 and get through 50k+ rows of data in just over an hour on an H100.

TIMESTAMPS:
0:00
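
A minimal sketch of this kind of Unsloth + TRL fine-tuning run; the checkpoint name, Wikipedia dataset config, and hyperparameters are assumptions for illustration, not the settings from the video:

```python
# LoRA fine-tuning of Llama 3 on Irish Wikipedia with Unsloth (assumed settings).
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 2048
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed 4-bit Llama 3 checkpoint
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Irish-language Wikipedia dump (language code "ga"); config name is assumed.
dataset = load_dataset("wikimedia/wikipedia", "20231101.ga", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # Wikipedia rows keep the article body here
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,                  # H100 handles bfloat16 natively
        output_dir="llama3-irish",  # hypothetical output directory
    ),
)
trainer.train()
```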

かみるぽ (@KIHA40_101)

Train 4532D running at dusk, on the slot of what used to be the last up-bound local train of the day...
It has been about five years since the last local train was pushed later into the night; evenings have become more convenient.

And today it was the DECMO with the Sekihoku Line wrapping.

 

TheMacroSift.eth (@themacrosift)

🔎 xAI confirms raise of $6B at $24B post-money valuation from Valor, a16z, Sequoia, Fidelity, others

• 100K NVIDIA H100 “gigafactory of compute” planned for the fall of 2025, Elon told investors.

• 4x larger than Meta’s current supercomputer (but not necessarily their 2025

Yuchen Jin (@Yuchenj_UW)

I trained GPT-2 (124M) using Andrej Karpathy's llm.c in just 43 minutes with 8 x H100 GPUs.

This is 2.1x faster than the 90 minutes it took with 8 x A100 GPUs. Currently, the cost of renting an H100 GPU is around $2.50/hr (under 1-year commitment), which reduces the training cost for

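The cost arithmetic, using the $2.50/hr H100 figure from the post (the A100 hourly rate below is an assumption added for comparison):

```python
# Training-cost estimate: 43-minute 8x H100 run vs 90-minute 8x A100 run.
gpus = 8
h100_rate = 2.50   # USD per GPU-hour, from the post (1-year commitment)
a100_rate = 1.50   # USD per GPU-hour, assumed for comparison

h100_cost = gpus * (43 / 60) * h100_rate   # ~$14.3
a100_cost = gpus * (90 / 60) * a100_rate   # ~$18.0
print(f"8x H100: ${h100_cost:.2f}   8x A100: ${a100_cost:.2f}")
```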