Zoey Chen (@zoeyc17)'s Twitter Profile
Zoey Chen

@zoeyc17

PhD student at the University of Washington. I blog about computer vision, robotics, and artificial intelligence at: qiuyuchen14.github.io

ID: 908162147351273472

Link: https://qiuyuchen14.github.io/ | Joined: 14-09-2017 02:55:03

107 Tweets

966 Followers

526 Following

Kevin Zakka (@kevin_zakka):

Introducing RoboPianist 🎹🤖, a new benchmark for high-dimensional robot control! Solving it requires mastering the piano with two anthropomorphic hands. This has been one year in the making, and I couldn’t be happier to release it today! Some highlights below:

Wenxuan Zhou (@wenxuan_zhou):

How can robots learn generalizable manipulation skills for diverse objects? Going beyond pick-and-place, our recent work “HACMan” enables complex interactions with unseen objects, such as flipping, pushing, or tilting, using spatial action maps + RL with point clouds. (w/ @MetaAI)
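A rough sketch of the spatial-action-map idea, assuming a per-point critic over the object point cloud (the class and variable names below are illustrative, not HACMan's actual code):

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a spatial action map: a critic scores a
# candidate action at every point of the object point cloud, and the
# policy acts at the highest-scoring contact point.
class PointCritic(nn.Module):
    def __init__(self, point_dim=3, action_dim=3, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(point_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # per-point Q-value
        )

    def forward(self, points, action):
        # points: (N, 3) object point cloud; action: (action_dim,)
        act = action.expand(points.shape[0], -1)
        return self.mlp(torch.cat([points, act], dim=-1)).squeeze(-1)

critic = PointCritic()
points = torch.rand(1024, 3)            # dummy point cloud
action = torch.tensor([0.0, 0.0, 0.1])  # dummy poke/push motion
scores = critic(points, action)         # (1024,) per-point scores
best_point = points[scores.argmax()]    # contact location to act at
```

Tying the action to a point on the object's own geometry is what lets the same policy transfer to unseen shapes.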

Zoey Chen (@zoeyc17):

I put together some slides on "How to train your robot with limited data" for a class at UW, sharing them in case they're useful for anyone who is interested. They cover some aspects of data augmentation, domain adaptation, and sim2real for robotics. tinyurl.com/aytnwp

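As a taste of the data-augmentation part, here is a minimal sketch (the specific augmentations are generic examples, not necessarily those in the slides) of randomizing camera observations so a visual policy trained on little data overfits less:

```python
import torch
import torchvision.transforms as T

# Generic image augmentations for vision-based robot policies with
# limited data; a fresh random augmentation is drawn at each call.
augment = T.Compose([
    T.RandomResizedCrop(128, scale=(0.8, 1.0)),  # random crop + resize
    T.ColorJitter(0.2, 0.2, 0.2, 0.1),           # lighting variation
    T.GaussianBlur(kernel_size=5),               # sensor blur
])

obs = torch.rand(3, 144, 144)  # dummy camera observation (C, H, W)
augmented = augment(obs)       # use in place of obs during training
```
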
Vikash Kumar (@vikashplus):

#RoboAgent -- a universal multi-task agent on a data budget 💪 With 12 non-trivial skills 💪 it can generalize across 38 tasks 💪 and 100s of novel scenarios! 🌐 robopen.github.io w/ Homanga Bharadhwaj, Jay Vakil, Mohit Sharma, Abhinav Gupta, Shubham Tulsiani

Jiafei Duan (@djiafei):

🚨 Is it possible to devise an intuitive approach for crowdsourcing training data for robots without requiring a physical robot 🤖? Can we democratize robot learning for all? 🧑‍🤝‍🧑 Check out our latest #CoRL2023 paper -> AR2-D2: Training a Robot Without a Robot

Chen Wang (@chenwang_j):

How to chain multiple dexterous skills to tackle complex long-horizon manipulation tasks? Imagine retrieving a LEGO block from a pile, rotating it in-hand, and inserting it at the desired location to build a structure. Introducing our new work - Sequential Dexterity 🧵👇

Jiafei Duan (@djiafei):

For large-scale robotic deployment 🤖 in the real world 🌏, robots must adapt to changes in environments and objects. Ever questioned the generalizability of your robot's manipulation policy? Put it to the test with The Colosseum 🏛️. Check out our project: robot-colosseum.github.io

Marcel Torné (@marceltornev):

How can we train robust policies with minimal human effort?🤖 We propose RialTo, a system that robustifies imitation learning policies from 15 real-world demonstrations using on-the-fly reconstructed simulations of the real world. (1/9)🧵 Project website: real-to-sim-to-real.github.io/RialTo/

Chen Wang (@chenwang_j):

Can we use wearable devices to collect robot data without actual robots? Yes! With a pair of gloves 🧤! Introducing DexCap, a portable hand motion capture system that collects 3D data (point cloud + finger motion) for training robots with dexterous hands. Everything is open-sourced!

Zoey Chen (@zoeyc17):

A really diverse robot manipulation dataset collected in the wild, with great effort across many institutions! It was fun to participate, and I'm really excited to see all the tasks this enables!

Abhishek Gupta (@abhishekunique7):

So you want to do robotics tasks requiring dynamics information in the real world, but you don’t want the pain of real-world RL? In our work to be presented as an oral at ICLR 2024, Marius Memmel showed how we can do this via a real-to-sim-to-real policy learning approach. A 🧵 (1/7)

Homanga Bharadhwaj (@mangahomanga):

Track2Act: our latest work on training goal-conditioned policies for diverse manipulation in the real world. We train a model for embodiment-agnostic point track prediction from web videos, combined with embodiment-specific residual policy learning. homangab.github.io/track2act/ 1/n
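A loose sketch of the residual idea (dimensions and names here are assumptions for illustration, not Track2Act's code): a base action derived from the predicted point tracks gets corrected by a small learned, embodiment-specific term:

```python
import torch
import torch.nn as nn

# Hypothetical residual policy: base_action follows the predicted point
# tracks; the network adds an embodiment-specific correction on top.
class ResidualPolicy(nn.Module):
    def __init__(self, obs_dim=32, act_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim),
        )

    def forward(self, obs, base_action):
        return base_action + self.net(torch.cat([obs, base_action], -1))

policy = ResidualPolicy()
obs = torch.rand(32)        # features summarizing the predicted tracks
base = torch.zeros(7)       # track-following base action (placeholder)
action = policy(obs, base)  # corrected, embodiment-specific action
```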

Zoey Chen (@zoeyc17):

Come check out our new work, URDFormer, for cheaply generating interactive simulation content from real-world images! Paper, code, and website: urdformer.github.io. 👇 Detailed thread from Abhishek Gupta
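Since URDFormer outputs standard URDF files, a generated scene can be loaded in any URDF-aware simulator. A minimal sketch with PyBullet ("generated_cabinet.urdf" is a placeholder filename, not a file shipped with URDFormer):

```python
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
p.loadURDF("plane.urdf")

# Load a URDFormer-generated asset (placeholder filename).
obj = p.loadURDF("generated_cabinet.urdf", basePosition=[0, 0, 0])

# Articulated parts (drawers, doors) appear as joints we can actuate.
for j in range(p.getNumJoints(obj)):
    print(p.getJointInfo(obj, j)[1])  # joint names
```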

Wentao Yuan (@tonywentaoyuan):

Humans use pointing to communicate plans intuitively. Compared to language, pointing gives more precise guidance for robot behaviors. Can we teach a robot how to point like humans? Introducing RoboPoint 🤖👉, an open-source VLM instruction-tuned to point. robo-point.github.io
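To act on a predicted point, the pixel has to become a 3D target. A minimal sketch of the standard pinhole deprojection step (not RoboPoint's code; fx, fy, cx, cy are assumed calibration values):

```python
import numpy as np

fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0  # assumed camera intrinsics

def pixel_to_point(u, v, depth):
    """Deproject pixel (u, v) to a 3D point in the camera frame."""
    z = depth[v, u]              # depth in meters at that pixel
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

depth = np.full((480, 640), 0.8)          # dummy depth image
target = pixel_to_point(350, 220, depth)  # pass to a motion planner
```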