Daphne Cornelisse (@daphne_cor) 's Twitter Profile
Daphne Cornelisse

@daphne_cor

PhD student at @nyuniversity

ID: 903960922703659008

Website: http://www.daphne-cornelisse.com · Joined: 02-09-2017 12:40:53

395 Tweets

849 Followers

457 Following

Timon Willi (@timonwilli) 's Twitter Profile Photo

1/13 Excited to share our new paper: "Scaling Opponent Shaping to High Dimensional Games"! We introduce Shaper, a method for scaling opponent shaping to complex, high-dimensional games with long time horizons and temporally extended actions. 🧵

Marcel Hussing (@marcel_hussing) 's Twitter Profile Photo

I might be biased because I think Claas is awesome, but we need more papers that focus on understanding how things work in practice, not just "number goes up". This is a great example of such a paper.

Alexandre Moufarek (@amoufarek) 's Twitter Profile Photo

GPUDrive (Kazemkhani et al., 2024) is an open-source, GPU-accelerated multi-agent driving simulator that can generate over 1 million FPS, unlocking multi-agent planning research that requires billions of experience steps. Code: github.com/Emerge-Lab/gpu… Paper: arxiv.org/abs/2408.01584

Joseph Suarez (e/🐡) (@jsuarez5341) 's Twitter Profile Photo

Train GPUDrive at >250k ASPS, ~5x original baseline. Hyperparams + sweeps coming soon! Star to support!
⭐️pufferai/pufferlib
⭐️Emerge-Lab/gpudrive
📦docker pull pufferai/puffertank:1.0
Daphne Cornelisse (@daphne_cor) 's Twitter Profile Photo

I’m at #RLC2024 for the next few days, message me if you want to chat! 🙂 Here are a few topics I’m interested in:
- few-shot learning in multi-agent settings
- continual learning
- representations for adaptive planning and generalisation

David Abel (@dabelcs) 's Twitter Profile Photo

Excited to present Three Dogmas of RL on the last day of RL_Conference!
> Talk in Room 168, 11.30am-12.30pm
> Poster in Room 162, 12.30pm-2.30pm
Anna Harutyunyan and I will be around, looking forward to your questions + challenges! 😄

Daphne Cornelisse (@daphne_cor) 's Twitter Profile Photo

Since moving from The Netherlands to NYC, I've had numerous phone calls with my family where they are like "It's been gloomy and raining for 5 days straight," and I almost feel bad because it is sunny here almost every day

Daphne Cornelisse (@daphne_cor) 's Twitter Profile Photo

It would be nice if chatbots were integrated with educational videos on YouTube. Imagine having the ability to ask questions or request clarification, and YouTube could quickly generate a short explainer in the style of the original video.

Daphne Cornelisse (@daphne_cor) 's Twitter Profile Photo

The eternal programmer's dilemma: it's hard to stop when you're on a roll, but with probability epsilon (increasing function of fatigue) you will hit a problem -- and it's even worse to quit right then!

Joseph Suarez (e/🐡) (@jsuarez5341) 's Twitter Profile Photo

PufferAI is currently looking for a couple more academic labs to support with reinforcement learning tools and infra. It's all free and open source. Our focus is on high-perf envs and automated experimentation. DM me and let's chat. For industry labs, we sell support packages!

Daphne Cornelisse (@daphne_cor) 's Twitter Profile Photo

What are key papers to read on Transformers with "memory", meaning architectures that can generate outputs based on inputs beyond what's in the current context?

vmoens (@vincentmoens) 's Twitter Profile Photo

Today we're open-sourcing LeanRL, a simple RL library that provides recipes for fast RL training using torch.compile and cudagraphs. Using these, we get >6x speed-ups compared to the original CleanRL implementations. github.com/pytorch-labs/l… A thread ⬇️

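A minimal sketch of the torch.compile(mode="reduce-overhead") pattern mentioned above, which captures CUDA graphs to cut per-step launch overhead in RL training loops. Everything in the snippet (toy policy, shapes, hyperparameters) is an illustrative assumption, not LeanRL's actual recipe:

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy policy and optimiser; sizes are arbitrary placeholders.
policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 4)).to(device)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def update(obs, actions, advantages):
    # Plain policy-gradient-style loss; the point is the compiled step,
    # not the specific algorithm.
    logits = policy(obs)
    logp = torch.log_softmax(logits, dim=-1).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(logp * advantages).mean()
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step()
    return loss

# "reduce-overhead" enables CUDA graph capture when running on a GPU;
# on CPU it falls back to an ordinary compiled function.
compiled_update = torch.compile(update, mode="reduce-overhead")

# Keep input shapes fixed across calls so the captured graph can be replayed
# instead of re-launching every kernel from Python on each step.
obs = torch.randn(1024, 8, device=device)
actions = torch.randint(0, 4, (1024,), device=device)
advantages = torch.randn(1024, device=device)

for _ in range(10):
    loss = compiled_update(obs, actions, advantages)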