Vision foundation models (VFMs), such as the Segment Anything Model (SAM), enable zero-shot or interactive segmentation of visual content; as a result, they are being rapidly adopted across a wide variety of visual scenes.
Abstract: In this paper, we propose a method that utilizes clinical knowledge to bridge the domain gap when applying the Segment Anything Model (SAM) to medical images. Many recent methods employ ...