Adrien Gaidon, PhD
Dr. Adrien Gaidon is an experienced Machine Learning researcher, engineer, manager, executive, advisor, and investor. He is a Partner at Calibrate Ventures, an Adjunct Professor of Computer Science at Stanford, and a technical advisor and former head of ML at TRI.
Since 2007, Dr. Gaidon has been working on Principle-centric ML: learning algorithms that effectively combine concise prior knowledge and large-scale data. In particular, Dr. Gaidon has deep expertise in Embodied Intelligence, including Computer Vision, Autonomous Driving, and Robotics, with more than 100 patents and 100 publications at top AI venues like CVPR and NeurIPS.
Besides scientific expertise, Dr. Gaidon has experience in building and leading world-class teams of ML researchers and engineers that work together to invent and develop new solutions to hard real-world problems like safe and automated driving.
More info: CV, linkedin, Google Scholar, DBLP, arxiv.
News
- End of 2024: early-stage investments that we made at Calibrate Ventures in some amazing deep tech startups (more news soon!).
- December 2024: presenting Streaming Detection of Queried Event Start at NeurIPS’24 with Cristobal Eyzaguirre and Stanford colleagues.
- October 2024: presenting Linearizing Large Language Models at CoLM’24 (a small glimpse of the large body of LLM work that happened behind the scenes at TRI!).
- September 2024: two papers leveraging MAE (Masked Auto-Encoders) for 3D vision presented at ECCV’24: NeRF-MAE and OctMAE.
- July 2024: if you want to learn about the intersection of CV for robotics and VC, I gave an unusual talk at the excellent ICRA’24 RoboNeRF workshop titled “Robot Ventures in Neural Fields”.
- June 2024: VTCD: Understanding Video Transformers via Universal Concept Discovery presented at CVPR’24 (as a highlight!).
- June 2024: ReFiNe: Recursive Field Networks for Cross-Modal Multi-Scene Representation presented at SIGGRAPH’24.
- February 2024: Big personal news: after 7 wonderful years leading ML at TRI, I’m thrilled to be joining the early-stage deep tech VC firm Calibrate Ventures as a Partner to invest in the explosion of Embodied Intelligence startups! If you are curious about why and how, then check out my personal blog post and the Calibrate announcement.
2023 Updates
- July 2023: Robust Self-Supervised Extrinsic Self-Calibration accepted at IROS’23, and 3 papers accepted at ICCV’23: ZeroDepth (scale-aware monocular depth that generalizes zero-shot across domains, including indoors and outdoors!), DeLiRa (self-supervised depth, light, and radiance fields), and NeO 360 (outdoor NeRFs from sparse views).
- June 2023: we received the best paper award at L4DC’23 for iDBF: Self-Supervised Policy Filters that Avoid Out-of-Distribution States! Congrats Fernando and Haruki!
- May 2023: 3 papers published at CVPR’23: a new 3D detector using view-point equivariance (VEDet), a new video object segmentation benchmark (VOST), a new video model for object discovery from motion-guided tokens (MoTok), and more from the team as mentioned in our TRI @ CVPR 2023 blog.
- January 2023: Neural Groundplans published at ICLR’23, Depth Is All You Need for 3D Detection at ICRA’23, and Active Sampling to reduce Causal Confusion at CLEAR’23 and the NeurIPS’22 CML4Impact workshop.
2022 Updates
- September 2022: 3 papers accepted at CoRL’22: ROAD: Learning an Implicit Recursive Octree Auto-Decoder to Efficiently Encode 3D Shapes, Representation Learning for Object Detection from Unlabeled Point Cloud Sequences, and RAP: Risk-Aware Prediction for Robust Planning (oral).
- July 2022: 4 papers accepted at ECCV’22: Depth Field Networks (DeFiNe), implicit representations of shape and appearance (ShAPO), Photo-realistic Neural Domain Randomization (PNDR), Stochastic Sequential Pointcloud Forecasting (S2Net).
- June 2022: I am proposing a new paradigm for Embodied Intelligence: Principle-centric AI. See an intro in my TWiMLAI podcast with Sam Charrington and a technical presentation at ICRA’22.
- June 2022: one paper accepted at IROS’22 on uncertainty modeling for trajectory forecasting (HAICU).
- May 2022: new paper accepted at ICML’22: Object Permanence Emerges in a Random Walk along Memory.
- March 2022: 3 papers accepted at CVPR’22: Revisiting the “Video” in Video-Language Understanding (oral), Multi-Frame Self-Supervised Depth with Transformers, Discovering Objects that Can Move.
- February 2022: 4 papers accepted at ICRA’22: Full Surround Monodepth from Multiple Cameras (FSM), Self-Supervised Camera Self-Calibration from Video, Learning Optical Flow, Depth, and Scene Flow without Real-World Labels, and Control-Aware Prediction Objectives for Autonomous Driving.
- January 2022: 2 papers accepted as spotlights at ICLR’22: Self-supervised Learning is More Robust to Dataset Imbalance and Dynamics-Aware Comparison of Learned Reward Functions (DARD).
2021 Updates
- Fall 2021: co-teaching CS131: Computer Vision: Foundations and Applications at Stanford with Juan Carlos Niebles.
- October 2021: presenting our papers at ICCV’21, including work on Self-supervised sim-to-real transfer (GUDA), Learning to Track with Object Permanence (PermaTrack), self-supervised pre-training for monocular 3D object detection (DD3D), and video auto-labeling (Warp-Refine Propagation). I also gave 3 workshop talks on 3D detection at 3DODI, scene understanding at ROAD, and trajectory forecasting at BTFM. See our medium post for an overview.
- September 2021: one oral at NeurIPS’21 (top 1% of ~10k submissions!) on Provable Guarantees for Self-Supervised Learning and one paper accepted at CoRL’21 on Single Shot Scene Reconstruction.
- August 2021: my team is growing! We are hiring 4 new Research Scientists in areas including Perception, Reconstruction / Inverse Graphics, Computer Vision Safety, and RL Safety. Please apply if you are interested in joining a talented team on a mission to discover the learning principles for safe robot autonomy and human amplification at scale!
- July 2021: I gave a talk on Bridging the Perception-Control gap with Prediction at RSS’21.
- July 2021: I did a fun interview with the founder & CEO of Parallel Domain, Kevin McNamara, on how we use Synthetic Data to advance the state of the art in Computer Vision. PD also wrote a cool blog post covering our recent PermaTrack work.
- June 2021: we are organizing the CVPR’21 DDAD depth estimation challenge! Try your ideas on public data from our TRI fleet! Winners will receive prizes and present, along with prestigious invited speakers, at our Mono3D CVPR’21 workshop on the Frontiers of Monocular 3D Perception.
- May 2021: got an outstanding reviewer award at CVPR!
- May 2021: together with Vitor and Rares, we wrote a blog post summarizing a lot of our research results in self-supervised learning for depth estimation.
- May 2021: I did a fun interview about ML for Autonomous Driving with Lukas Biewald for the Gradient Dissent podcast. Check it out!
- April 2021: new preprints on Full Surround Monodepth (FSM) and trajectory forecasting (HAICU).
- March 2021: 2 multi-task learning papers accepted at CVPR’21, one on joint depth prediction and completion (PackNet-SAN) and another on Hierarchical Lovász Embeddings for panoptic segmentation.
- March 2021: Blake Wulfe and I won the NeurIPS 2020 ProcGen RL Competition! Find the full report here.
- February 2021: 1 paper at RoboSoft in collaboration with the brilliant TRI Robotics team on monocular depth estimation inside visuotactile sensors.
- January 2021: 1 paper on distributionally robust control accepted at RA-L/ICRA and another at ICLR on regularization for heteroskedastic and imbalanced Deep Learning. Both papers focus on robustness, with great Stanford collaborators from Mac Schwager’s and Tengyu Ma’s labs.
2020 Updates
- November 2020: 1 paper (oral) at 3DV on Neural Ray Surfaces in collaboration with Greg Shakhnarovich at TTIC.
- October 2020: got a “top 10% reviewer award” at NeurIPS!
- October 2020: 2 papers accepted at CoRL 2020, including one oral on self-supervised 3D keypoints and a poster on interpretable trajectory forecasting in collaboration with Marco Pavone’s lab at Stanford.
- October 2020: I gave an invited talk at IPAM covering a lot of our recent results across the full AV stack.
- August 2020: we are organizing the ECCV 2020 workshop on Perception for Autonomous Driving (PAD).
- July 2020: we are organizing the ICML 2020 workshop on AI for Autonomous Driving (AIAD).
- July 2020: 2 papers accepted at ECCV 2020, including an oral on trajectory prediction and a poster on differentiable rendering.
- July 2020: 5 papers accepted at IROS 2020 and 1 at ITSC on scene flow, imitation learning, game-theoretic planning, risk sensitive control, traffic simulation, and planner testing.
- July 2020: 1 paper with Stanford on hierarchical RL and imitation in near-accidents accepted at RSS 2020.
- June 2020: Together with colleagues from PFN and TRI-AD, we have released a survey on Differentiable Rendering.
- June 2020: 4 papers (3 orals!) accepted at CVPR 2020 (PackNet pseudo-lidar, real-time panoptic segmentation, auto-labeling via Differentiable Rendering, spatio-temporal graph distillation). See also our new DDAD dataset for depth estimation!
- February 2020: 1 paper with Stanford on Pedestrian Intent Prediction accepted at RA-L/ICRA 2020. See also our new STIP dataset!
- January 2020: I got promoted to Senior Manager! Super grateful to my team and excited for our next steps together!
2019 Updates
- December 2019: 1 paper accepted at ICLR 2020 and 1 (oral) at WACV 2020.
- October 2019: 1 paper accepted at the International Journal of Computer Vision (IJCV).
- September 2019: 1 paper accepted at NeurIPS 2019 (also oral at BayLearn 2019) and 2 papers (spotlights) accepted at CoRL 2019.
- July 2019: 1 paper accepted (oral) at ICCV 2019.
- May 2019: I did an interview with the wonderful Sam Charrington for the TWIML AI podcast!
- Finally started a personal website!