RunPod (@runpod_io)'s Twitter Profile
RunPod

@runpod_io

GPU Cloud built for production.

ID: 1506200368010649603

https://www.runpod.io · Joined 22-03-2022 09:25:39

364 Tweets

4.4K Followers

127 Following

Tim Pietrusky (@nerddisco)'s Twitter Profile Photo

You can now run Black Forest Labs FLUX.1 schnell & dev as serverless endpoints on RunPod with ComfyUI via the 3.1.0 release of runpod-worker-comfy.

➡️ Quickstart github.com/blib-la/runpod…
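For context, endpoints deployed this way are called through RunPod's serverless HTTP API. A minimal Python sketch, assuming an already-deployed runpod-worker-comfy endpoint (the endpoint ID is a placeholder, and the exact input schema should be checked against the quickstart):

```python
# Minimal sketch: invoke a RunPod serverless endpoint running ComfyUI.
# ENDPOINT_ID is a placeholder from the RunPod console; the payload shape
# follows runpod-worker-comfy's docs -- verify against the quickstart.
import os
import requests

ENDPOINT_ID = "your-endpoint-id"  # hypothetical
API_KEY = os.environ["RUNPOD_API_KEY"]

# runpod-worker-comfy expects a ComfyUI workflow (exported as API JSON)
# under the "workflow" key; this empty dict is just a stand-in.
payload = {"input": {"workflow": {}}}

resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",  # /run is the async variant
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=600,
)
resp.raise_for_status()
print(resp.json())  # generated images come back in the "output" field
```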
Alpin (@alpindale)'s Twitter Profile Photo

Instructions for finetuning (FFT!) Mistral Large 123B on RunPod's MI300X compute, along with an axolotl config file to get you started. Should cost you about $500 for ~60M tokens. huggingface.co/anthracite-org…
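A quick back-of-the-envelope on that estimate (the node rate is an assumption, not RunPod's actual MI300X pricing):

```python
# Sanity check on "~$500 for ~60M tokens".
tokens = 60e6
total_cost = 500.0  # USD, from the tweet
print(f"${total_cost / (tokens / 1e6):.2f} per million tokens")  # ~$8.33/M

# Working backwards at a hypothetical $20/hr for an 8x MI300X node:
hourly_rate = 20.0                     # USD/hr, assumed
node_hours = total_cost / hourly_rate  # ~25 node-hours
print(f"{tokens / (node_hours * 3600):,.0f} tokens/s sustained")  # ~667 tokens/s
```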

kalomaze (@kalomaze)'s Twitter Profile Photo

For those unaware, MI300X AMD GPUs are by far the most cost-effective way to fully finetune a large (~123B) model (for a group of individuals, at least). We've published the shell setup to get AMD training working, as well as the axolotl config used (same for the datasets).

Andrey Cheptsov (@andrey_cheptsov)'s Twitter Profile Photo

dstack AMD RunPod While testing, we were blown away by the MI300X’s memory capacity—it’s perfect for LLMs! Also, check out the model playground UI, which you automatically get when you run a model via dstack. Tested it with Llama 3.1 70B running on a single node at full precision! 🤯

<a href="/dstackai/">dstack</a> <a href="/AMD/">AMD</a> <a href="/runpod_io/">RunPod</a> While testing, we were blown away by the MI300X’s memory capacity—it’s perfect for LLMs! Also, check out the model playground UI, which you automatically get when you run a model via <a href="/dstackai/">dstack</a>. Tested it with Llama 3.1 70B running on a single node at full precision! 🤯
Geronimo (@geronimo_ai)'s Twitter Profile Photo

RunFlux - a Google Colab Notebook to spin up FLUX1-dev LoRA training on RunPod

- Upload images+captions 
- Choose a GPU 
- Click Run, lean back, and watch LoRAs and sample images get pushed to your Hugging Face repo

Colab Notebook: colab.research.google.com/github/geronim…
Geronimo (@geronimo_ai)'s Twitter Profile Photo

A typical train-on-my-face LoRA run costs ~1 USD

For example, training on 14 images for 3000 steps (rank 16, batch size 1) took
- 4 hrs at 22c/hr on an RTX 3090
- 2 hrs at 44c/hr on an RTX 4090
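The arithmetic behind those figures (rates from the tweet; actual RunPod prices vary by pool and region):

```python
# Cost check for the LoRA runs quoted above.
runs = [
    ("RTX 3090", 4.0, 0.22),  # (GPU, hours, USD per hour)
    ("RTX 4090", 2.0, 0.44),
]
for gpu, hours, rate in runs:
    print(f"{gpu}: {hours:.0f} h x ${rate}/h = ${hours * rate:.2f}")
# Both land at $0.88: the 4090 costs twice as much per hour
# but finishes in half the time, so the total is a wash.
```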
delltechcapital (@delltechcapital)'s Twitter Profile Photo

Thanks, Philadelphia Business Journal, for highlighting RunPod's inclusion on Forbes' Next Billion-Dollar Startups list. Keep reading to see how the company's globally distributed GPU cloud platform for developing and deploying AI enhances developers' daily lives: shorturl.at/QeuXg

delltechcapital (@delltechcapital)'s Twitter Profile Photo

It's great to see RunPod, Pecan AI, MinIO, and Domino Data Lab shortlisted in the inaugural 2024 A.I. Awards! Check out the full list of honorees broken down by categories here: cloud-awards.com/2024-ai-awards…

camenduru (@camenduru)'s Twitter Profile Photo

🦋 CogVideoX-5B: Text-to-Video Diffusion Models with An Expert Transformer
📽 Jupyter Notebook 🥳 + 🥪 🥪 Tost AI + 🍇 RunPod serverless
Thanks to Zhuoyi Yang ❤ Jiayan Teng ❤ Wendi Zheng ❤ Ming Ding ❤ Shiyu Huang ❤ Jiazheng Xu ❤ Yuanming Yang ❤ Wenyi Hong ❤ Xiaohan

The AI Conference (@aiconference)'s Twitter Profile Photo

🌟A Huge Thank You to Our Incredible Sponsors!🌟

We’re grateful to have the support of these amazing companies at The AI Conference 2024. Their contributions are driving the future of AI innovation and making this event possible. From tech giants to innovative startups, our
Rohan Paul (@rohanpaul_ai)'s Twitter Profile Photo

Self-Hosting LLaMA 3.1 70B on RunPod with vLLM Inference Engine

To deploy a 70B-parameter model in 16-bit floating-point precision we'll need ~140GB of memory; for something like 4-bit (INT4), only ~45GB

And you need additional memory for

- Context
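Those numbers fall out of bytes-per-parameter plus a per-token KV-cache term for the context. A sketch using Llama 3.1 70B's architecture constants (80 layers, 8 KV heads of dimension 128, via grouped-query attention); the context length and batch size are illustrative assumptions:

```python
# Serving-memory estimate for a 70B model: weights + KV cache.
PARAMS = 70e9
LAYERS, KV_HEADS, HEAD_DIM = 80, 8, 128  # Llama 3.1 70B (GQA)

def weights_gb(bytes_per_param: float) -> float:
    return PARAMS * bytes_per_param / 1e9

def kv_cache_gb(context_len: int, batch: int, kv_bytes: int = 2) -> float:
    # K and V tensors for every layer, for every token in flight
    per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * kv_bytes
    return per_token * context_len * batch / 1e9

print(f"fp16 weights: ~{weights_gb(2):.0f} GB")    # ~140 GB
print(f"INT4 weights: ~{weights_gb(0.5):.0f} GB")  # ~35 GB + quantization overhead
print(f"KV cache (32k ctx, batch 4): ~{kv_cache_gb(32_768, 4):.0f} GB")  # ~43 GB
```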
Tim Pietrusky (@nerddisco)'s Twitter Profile Photo

turns out Gamma wasn't the right fit for my slides because it couldn't generate a theme and update all slides at once, so I ended up using v0 again ¯\_(ツ)_/¯

here's what I did:

1. created 4 different slides in v0
2. manually recreated them in Google Slides

i