Robotic Environmental State Recognition with Vision-Language Models (Advanced Robotics 2024)
JSK Tendon Group / Kento Kawaharazuka

Published on Sep 30, 2024

Title: Robotic Environmental State Recognition with Pre-Trained Vision-Language Models and Black-Box Optimization
Authors: K. Kawaharazuka, Y. Obinata, N. Kanazawa, K. Okada, M. Inaba
Accepted at Advanced Robotics
arxiv - https://arxiv.org/abs/2409.17519
website - https://haraduka.github.io/vlm-bbo

For robots to autonomously navigate and operate in diverse environments, they must be able to recognize the state of their surroundings. Traditionally, however, environmental state recognition has relied on distinct methods tailored to each state to be recognized. In this study, we perform unified environmental state recognition for robots through spoken language with pre-trained large-scale vision-language models, applying Visual Question Answering (VQA) and Image-to-Text Retrieval (ITR). We show that with our method it is possible to recognize not only whether a room door is open or closed, but also whether a transparent door is open or closed and whether water is running in a sink, without training neural networks or writing state-specific programs. In addition, recognition accuracy can be improved by selecting appropriate texts from a prepared text set based on black-box optimization. For each state recognition task, only the text set and its weighting need to be changed, eliminating the need to maintain multiple different models and programs and simplifying the management of source code and computing resources. We experimentally demonstrate the effectiveness of our method and apply it to recognition behaviors on the mobile robot Fetch.
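The weighted-text idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes CLIP-style image and text embeddings (here passed in as plain NumPy vectors), scores a binary state by a weighted sum of image-text cosine similarities, and notes where black-box optimization would tune the weights. All function and variable names are hypothetical.

```python
import numpy as np

def recognize_state(image_emb, text_embs, weights, threshold=0.0):
    """Binary state recognition (e.g. door open vs. closed) from
    image-text similarities, in the spirit of Image-to-Text Retrieval.

    image_emb : (d,) image embedding from a vision-language model
    text_embs : (n, d) embeddings of the prepared text set
    weights   : (n,) per-text weights; in the paper's setting these
                would be selected/tuned by black-box optimization
                (here they are just given constants).
    """
    # Cosine similarity between the image and each candidate text.
    sims = text_embs @ image_emb / (
        np.linalg.norm(text_embs, axis=1) * np.linalg.norm(image_emb))
    # Weighted vote: positive weights support the "true" state
    # (e.g. "the door is open"), negative weights oppose it.
    score = float(np.dot(weights, sims))
    return score > threshold

# Toy usage with stand-in 2-D embeddings (real embeddings would come
# from a pre-trained VLM such as CLIP).
open_text = np.array([1.0, 0.0])    # stands in for "the door is open"
closed_text = np.array([0.0, 1.0])  # stands in for "the door is closed"
text_set = np.stack([open_text, closed_text])
weights = np.array([1.0, -1.0])

print(recognize_state(np.array([0.9, 0.1]), text_set, weights))  # image resembles "open"
print(recognize_state(np.array([0.1, 0.9]), text_set, weights))  # image resembles "closed"
```

Swapping in a different state (e.g. water running in a sink) changes only the text set and its weights, not the model or the recognition code, which is the management benefit the abstract describes.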
