Sourav Garg (@sourav_garg_)'s Twitter Profile
Sourav Garg

@sourav_garg_

Research Fellow @TheAIML @UniofAdelaide; previously @QUTRobotics @QUT. Triangulating robotic vision, machine learning, and language! oravus.github.io

ID: 2298327384

Joined: 18-01-2014 18:21:29

432 Tweets

859 Followers

329 Following

Aran Komatsuzaki (@arankomatsuzaki)

Segment Anything without Supervision

Exceeds SAM’s AR by over 6.7% and AP by 3.9% using only 1% of labels

repo: github.com/frank-xwang/Un…
abs: arxiv.org/abs/2406.20081
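
For orientation, the supervised point of comparison here is SAM's promptless "segment everything" mode. A minimal sketch of running that baseline with the official segment-anything package follows (this is the original SAM, not UnSAM's code; the checkpoint filename and image path are illustrative):

```python
# Baseline "segment everything" with the original SAM's automatic mask generator
# (facebookresearch/segment-anything). Not UnSAM's code; paths are illustrative.
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: 'segmentation', 'area', ...
print(f"{len(masks)} masks; largest covers {max(m['area'] for m in masks)} px")
```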
Ted Xiao (@xiao_ted)

Announcing one of the most unique and ambitious robot competitions ever: 🏁 The Earth Rover Challenge at #IROS2024!
✅ Globally distributed in-the-wild evaluation
✅ Real-world navigation task settings
✅ Large training dataset provided
Details below 👇

Michael Milford FTSE 🤖🚘🐀🧠📘🗣️ (@maththrills)

Congrats to ahmad khaliq for his accepted #ECCV2024 paper!!!

VLAD-BuFF: Burst-aware Fast Feature Aggregation for Visual Place Recognition

w. Sourav Garg (lead supervisor) + co-authors Ming Xu, Stephen Hausler
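
For background, VLAD-BuFF builds on classic VLAD aggregation, which pools local image descriptors into one global descriptor by accumulating residuals to cluster centers. A minimal NumPy sketch of plain VLAD (the standard baseline, not the paper's burst-aware variant; shapes and data are illustrative):

```python
# Plain VLAD aggregation: the classic baseline VLAD-BuFF extends, not the
# paper's burst-aware method. Shapes and data below are illustrative.
import numpy as np

def vlad(descriptors: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """Aggregate N local descriptors (N, D) against K cluster centers (K, D)
    into a single (K*D,) global descriptor."""
    # Hard-assign each local descriptor to its nearest cluster center.
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=-1)
    assign = dists.argmin(axis=1)
    K, D = centers.shape
    v = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assign == k]
        if len(members):
            v[k] = (members - centers[k]).sum(axis=0)  # accumulate residuals
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))  # signed square-root (power) norm
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rng = np.random.default_rng(0)
g = vlad(rng.standard_normal((100, 128)), rng.standard_normal((64, 128)))
print(g.shape)  # (8192,)
```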
Alejandro Fontán (@afontanvillcmp)

🚀 We present AnyFeature-VSLAM at #RSS2024 next week!
📝 AnyFeature-VSLAM: Automating the Usage of Any Chosen Feature into VSLAM, Alejandro Fontán, Javier Civera, Michael Milford FTSE 🤖🚘🐀🧠📘🗣️
📅 Session 8: Perception and Navigation
🗓️ 17 Jul ⏰ 10:00-11:00am
Paper: roboticsconference.org/program/papers… [1/3]

Krishan Rana (@krshnrana)

📌 How can we scale imitation learning to long-horizon, multi-object tasks and generalise (spatially and intra-category) from only 10 demos?
✨ Introducing Affordance-Centric Policy Decomposition
💡 Relative task-frame diffusion policies that self-chain for seamless operation
🧵👇
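
A toy illustration of the relative task-frame idea in that tweet, under stated assumptions: each sub-policy outputs actions in the coordinate frame of its target object, and sub-tasks self-chain by handing off to the next object's frame. The learned (diffusion) policy is stubbed with a simple placeholder; nothing here is the paper's actual method:

```python
# Toy illustration of relative task-frame policy chaining: each sub-policy
# acts in its object's SE(2) frame, so it transfers across object poses, and
# finished sub-tasks hand off ("self-chain") to the next frame. The learned
# diffusion policy is stubbed with a placeholder; this is not the paper's code.
import numpy as np

def to_frame(p_world, frame_xy, theta):
    """Express a world-frame 2D point in an object's task frame."""
    c, s = np.cos(-theta), np.sin(-theta)
    d = np.asarray(p_world, dtype=float) - np.asarray(frame_xy, dtype=float)
    return np.array([c * d[0] - s * d[1], s * d[0] + c * d[1]])

def sub_policy(p_local):
    """Placeholder for a learned policy: step toward the frame origin
    (the object's grasp/affordance point)."""
    return -0.2 * p_local

frames = [((2.0, 1.0), 0.5), ((4.0, -1.0), -1.2)]  # two object task frames
gripper = np.zeros(2)
for frame_xy, theta in frames:          # self-chain through the frames
    for _ in range(30):                 # run this sub-policy to convergence
        a = sub_policy(to_frame(gripper, frame_xy, theta))
        c, s = np.cos(theta), np.sin(theta)
        gripper += np.array([c * a[0] - s * a[1], s * a[0] + c * a[1]])
    print("reached", np.round(gripper, 2))  # ~ each object's position
```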

Jon Barron (@jon_barron)

Baffling European Conference on Computer Vision #ECCV2024 policy: Students cannot register their own paper, and must either rely on a non-student co-author's full registration, or must pay more money. Is the idea that we should be taxing students who write papers without faculty/industry support? Seems bad.

Amar Ali-bey (@amaralibey)

🚀 Excited to release OpenVPRLab! 🎉 
An open-source framework for Visual Place Recognition (VPR), featuring extensible, modular, and scalable components, enabling researchers to train/develop deep VPR models with reproducible SOTA performance.

🔗github.com/amaralibey/Ope…

🧵👇
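
The retrieval step at the heart of any VPR model such a framework trains can be sketched in a few lines. This is the generic pattern (cosine-similarity nearest neighbours over global descriptors), not OpenVPRLab's actual API; names and sizes are illustrative:

```python
# Generic VPR retrieval: match a query's global descriptor against a database
# of mapped places by cosine similarity. Not OpenVPRLab's API; sizes illustrative.
import numpy as np

def retrieve(query: np.ndarray, database: np.ndarray, top_k: int = 5) -> np.ndarray:
    """query: (D,) descriptor; database: (M, D). Returns top_k place indices."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    return np.argsort(-(db @ q))[:top_k]

rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 256))          # e.g., 1000 mapped places
noisy_query = db[42] + 0.1 * rng.standard_normal(256)
print(retrieve(noisy_query, db))               # index 42 should rank first
```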
Tobias Fischer (@tobiasrobotics)

Super proud of Somayeh for giving a fantastic final #PhD seminar on #spiking #networks 🧠for #place #recognition 🗺️🧭. Read about her great work 📝 QUTRobotics with myself and Michael Milford FTSE 🤖🚘🐀🧠📘🗣️: scholar.google.com.au/citations?hl=e… 🤖

Andrej Karpathy (@karpathy)

To help explain the weirdness of LLM Tokenization I thought it could be amusing to translate every token to a unique emoji. This is a lot closer to truth - each token is basically its own little hieroglyph and the LLM has to learn (from scratch) what it all means based on

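A minimal sketch of the emoji idea, assuming the open-source tiktoken library with the GPT-2 encoding: each distinct token id gets its own emoji, so the otherwise invisible token boundaries become plain to see:

```python
# Visualize LLM tokenization by rendering each distinct token id as its own
# emoji. Assumes the tiktoken library (pip install tiktoken); any encoding works.
import tiktoken

enc = tiktoken.get_encoding("gpt2")

def tokens_as_emoji(text: str) -> str:
    """Encode text, then map each distinct token id to a unique emoji."""
    emoji_of, out = {}, []
    for tid in enc.encode(text):
        if tid not in emoji_of:
            # Assign emojis in order of first appearance, starting at U+1F400.
            emoji_of[tid] = chr(0x1F400 + len(emoji_of))
        out.append(emoji_of[tid])
    return "".join(out)

text = "Tokenization is weird: the same word can split very differently."
print(enc.encode(text))       # the raw token ids
print(tokens_as_emoji(text))  # one hieroglyph per token
```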
François Chollet (@fchollet)

The question of whether LLMs can reason is, in many ways, the wrong question. The more interesting question is whether they are limited to memorization / interpolative retrieval, or whether they can adapt to novelty beyond what they know. (They can't, at least until you start

Eric Brachmann (@eric_brachmann)

We created some visualisations of MASt3R on the #MapFreeReloc dataset. The numbers do not lie: the results are amazing. The public version is a bit worse than the private one that tops the leaderboard, with 0.751 AUC instead of 0.817. Examples in 🧵, including public vs private.

Christian Wolf (@chriswolfvision)

Left: Hierarchical model-based RL with a large-scale pre-trained world model, auxiliary tasks, skill discovery, and a model for inverse kinematics.

Right: PID
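
For anyone who hasn't met the right-hand side of the joke: a PID controller really is this small. A minimal sketch (gains, setpoint, and the toy plant are illustrative):

```python
# A complete PID controller: the entire right-hand column of the joke.
class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint: float, measurement: float, dt: float) -> float:
        """Return the control output for one timestep."""
        error = setpoint - measurement
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy plant: an integrator driven toward setpoint 1.0 (gains illustrative).
pid, x = PID(kp=2.0, ki=0.5, kd=0.1), 0.0
for _ in range(50):
    x += 0.1 * pid.step(setpoint=1.0, measurement=x, dt=0.1)
print(round(x, 3))  # converges near 1.0
```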
Rohit Jayanti (@_rjayanti)

7/8 Finally, had to try Transformers (wink-wink)! Several point prompts, but all before the transition through the underpass! Roughly: point prompts for Bumblebee on frame 050, on the Decepticon on frame 089 (before it disintegrates!), and a couple on the bridge on frame 175.