Ruoshi Liu (@ruoshi_liu)'s Twitter Profile
Ruoshi Liu

@ruoshi_liu

Building better 👁️ and 🧠 for 🤖 | PhD Student @Columbia

ID: 1370948127944015880

Link: http://ruoshiliu.github.io · Joined: 14-03-2021 04:01:29

277 Tweets

1.1K Followers

620 Following

Ruoshi Liu (@ruoshi_liu)

Thanks Emad for tweeting our work!! Shout out to the Stable Video Diffusion team for the amazing open-source effort👏 Video models will get better and robots will get smarter with them!

Zeyi Liu (@liu_zeyi_)

🔊 Audio signals contain rich information about daily interactions. Can our robots learn from videos with sound? Introducing ManiWAV, a robotic system that learns contact-rich manipulation skills from in-the-wild audio-visual data. See thread for more details (1/4) 👇

Pascal Mettes (@pascalmettes)

Vision-language models benefit from hyperbolic embeddings for standard tasks, but did you know that hyperbolic vision-language models also have surprising properties? Our new #TMLR paper shows 3 intriguing properties. w/ Sarah Ibrahimi, Mina Ghadimi, Nanne van Noord, and Marcel Worring

Mohit Shridhar (@mohito1905)

Image-generation diffusion models can draw arbitrary visual patterns. What if we finetune Stable Diffusion to 🖌️ draw joint actions 🦾 on RGB observations? Introducing 𝗚𝗘𝗡𝗜𝗠𝗔! Paper, videos, code, ckpts: genima-robot.github.io 🧵Thread⬇️

Yunzhu Li (@yunzhuliyz)

Check out our #RSS2024 paper (also winner of the Best Paper Award at the #ICRA2024 deformable object manipulation workshop) on dynamics modeling of diverse materials for robotic manipulation. 🤖 We considered a diverse set of objects, including ropes, clothes, granular media, and rigid objects.

Huy Ha (@haqhuy)

I’ve been training dogs since middle school. It’s about time I train robot dogs too 😛 Introducing UMI on Legs, an approach for scaling manipulation skills on robot dogs 🐶 It can toss, push heavy weights, and make your ~existing~ visuo-motor policies mobile!

Ruoshi Liu (@ruoshi_liu)

Author-reviewer discussions on OpenReview are like couples in toxic relationships. They seem to be having equal and civil discussions, but the underlying power dynamics are so imbalanced.

Ruoshi Liu (@ruoshi_liu)

I personally respect Graeber a lot, but I have to say “building a robot that could take your laundry down, wash it, and bring it back again” is the perfect example of something that sounds easy but is remarkably hard in practice. I empathize with the disappointment that

Stephen Tian (@stephentian_)

Learned visuomotor robot policies are sensitive to observation viewpoint shifts, which happen all the time. Can visual priors from large-scale data help? Introducing VISTA: using zero-shot novel view synthesis models for view-robust policy learning! #CoRL2024 🧵👇