The advancement of artificial intelligence (AI) algorithms has opened new possibilities for the development of robots that ...
Abstract: Assistive robots should be able to wash, fold or iron clothes. However, due to the variety, deformability and self-occlusions of clothes, creating robot systems for cloth manipulation is ...
Vision-language-action models (VLAs) trained on large-scale robotic datasets have demonstrated strong performance on manipulation tasks, including bimanual tasks. However, because most public datasets ...
Vision-Language-Action (VLA) models have shown remarkable potential in visuomotor control and instruction comprehension through end-to-end learning processes. However, current VLA models face ...
MemoryVLA is a Cognition-Memory-Action framework for robotic manipulation inspired by human memory systems. It builds a hippocampal-like perceptual-cognitive memory to capture the temporal ...
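The snippet above mentions a perceptual-cognitive memory that accumulates temporal context for action prediction. As a rough illustration only (not MemoryVLA's actual architecture or API; the class name, shapes, and retrieval rule are all assumptions), a minimal sketch of such a per-timestep feature memory with similarity-based retrieval might look like this:

```python
# Illustrative sketch only: a toy perceptual memory buffer for a VLA-style policy.
# All names, dimensions, and the cosine-similarity retrieval rule are assumptions,
# not details taken from the MemoryVLA paper.
from collections import deque
import numpy as np

class PerceptualMemory:
    """Fixed-capacity store of past perceptual features with similarity-based retrieval."""

    def __init__(self, capacity: int = 128, feature_dim: int = 512):
        self.feature_dim = feature_dim
        self.buffer = deque(maxlen=capacity)  # oldest entries are evicted first

    def write(self, feature: np.ndarray) -> None:
        """Store one per-timestep feature vector (e.g. a pooled visual embedding)."""
        assert feature.shape == (self.feature_dim,)
        self.buffer.append(feature)

    def read(self, query: np.ndarray, k: int = 4) -> np.ndarray:
        """Return the k stored features most similar to the query (cosine similarity)."""
        if not self.buffer:
            return np.zeros((0, self.feature_dim))
        mem = np.stack(self.buffer)  # (n, d)
        sims = mem @ query / (
            np.linalg.norm(mem, axis=1) * np.linalg.norm(query) + 1e-8
        )
        top = np.argsort(-sims)[:k]
        return mem[top]

# Usage: write the current observation embedding each step, then retrieve temporal context.
memory = PerceptualMemory()
for t in range(10):
    memory.write(np.random.randn(512).astype(np.float32))
context = memory.read(np.random.randn(512).astype(np.float32), k=4)
print(context.shape)  # (4, 512)
```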
ASL Citizen is the first crowdsourced isolated sign language video dataset. The dataset contains about 84k video recordings of 2.7k isolated signs from American Sign Language (ASL), and is about four ...