At Toyota Research Institute (TRI), we're on a mission to improve the quality of human life. We're developing new tools and capabilities to amplify the human experience. To lead this transformative shift in mobility, we've built a world-class team in Automated Driving, Energy & Materials, Human-Centered AI, Human Interactive Driving, Large Behavior Models, and Robotics.
The Mission
Make general-purpose robots a reality.
The Challenge
We envision a future where robots assist with household chores and cooking, aid the elderly in maintaining their independence, and enable people to spend more time on the activities they enjoy most. To achieve this, robots must be able to operate reliably in complex, unstructured environments. Our mission is to answer the question "What will it take to create truly general-purpose robots that can accomplish a wide variety of tasks in settings like human homes with minimal human supervision?" We believe the answer lies in cultivating large-scale datasets of physical interaction from a variety of sources and building on the latest advances in machine learning to learn general-purpose robot behaviors from this data.
The Team
The Learning From Videos (LFV) team in the Robotics division develops foundation models that leverage large-scale multi-modal data (RGB, depth, flow, semantics, bounding boxes, tactile, audio, etc.) from multiple domains (driving, robotics, indoors, outdoors, etc.) to improve performance on downstream tasks. This paradigm targets training scalability, since data from every modality can be leveraged equally to learn useful data-driven priors (3D geometry, physics, dynamics, etc.) for world understanding. Our topics of interest include, but are not limited to, Video Generation, World Models, 4D Reconstruction, Multi-Modal Models, Multi-View Geometry, Data Augmentation, and Video-Language-Action models, with a primary focus on foundation models for embodied applications. We aim to make progress on some of the hardest scientific challenges in spatio-temporal reasoning and on how it can enable the deployment of autonomous agents in real-world, unstructured environments.
The Opportunity
Our Learning From Videos (LFV) team is looking for a Computer Vision Research Scientist with expertise in Video Generation, Spatio-Temporal Representation Learning, World Models, Foundation Models, Multi-Modal Learning, Vision-as-Inverse-Graphics (including Differentiable Rendering), or related fields to improve dynamic scene understanding for robots. We are tackling hard scientific challenges around the safe and effective use of large robotic fleets, simulation, and prior knowledge (geometry, physics, domain knowledge, behavioral science), not only for automation but also for human augmentation.
As a Research Scientist, you will work with a team proposing, conducting, and transferring innovative research. You will use large amounts of sensory data (real and synthetic) to address open problems, train models at scale, publish at top academic venues, and test your ideas in the real world (including on our robots). You will also work closely with other teams at TRI to transfer and ship our most successful algorithms and models in support of world-scale, long-term autonomy and advanced assistance systems.