SAM (Segment Anything Model) 2D & 3D

Meta introduced the Segment Anything project, aiming to democratize segmentation with a new task, dataset, and model for image segmentation, described in their research paper. They released both the general-purpose Segment Anything Model (SAM) and the Segment Anything 1-Billion mask dataset (SA-1B), the largest segmentation dataset to date, to enable a broad set of applications and to foster further research into foundation models for computer vision. The SA-1B dataset is available for research purposes, and the Segment Anything Model is released under a permissive open license (Apache 2.0).

SAM's capabilities:

(1) SAM allows users to segment objects with just a click, or by interactively clicking points to include in or exclude from the object. The model can also be prompted with a bounding box (see the SamPredictor sketch after this list).

(2) SAM can output multiple valid masks when faced with ambiguity about the object being segmented, a capability necessary for solving segmentation in the real world.

(3) SAM can automatically find and mask all objects in an image (see the automatic mask generation sketch below).

(4) After precomputing the image embedding once, SAM can generate a segmentation mask for any prompt in real time, enabling interactive use of the model.
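
The official segment-anything Python package exposes these prompting modes through the SamPredictor class. The sketch below is a minimal, non-authoritative example: the checkpoint filename comes from the GitHub repo's model zoo, while "example.jpg" and the prompt coordinates are placeholder assumptions. It shows point and box prompts, the multi-mask output used for ambiguous prompts, and how set_image precomputes the embedding once so each subsequent prompt runs in real time.

    import numpy as np
    import cv2
    from segment_anything import sam_model_registry, SamPredictor

    # Load a ViT-H SAM checkpoint (downloadable from the GitHub repo).
    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
    predictor = SamPredictor(sam)

    # "example.jpg" is a placeholder for any RGB image.
    image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
    predictor.set_image(image)  # precomputes the image embedding once

    # (1) Point prompts: label 1 = include, label 0 = exclude.
    point_coords = np.array([[500, 375]])
    point_labels = np.array([1])

    # (2) multimask_output=True returns three candidate masks plus quality
    # scores, covering ambiguity about which object the click refers to.
    masks, scores, logits = predictor.predict(
        point_coords=point_coords,
        point_labels=point_labels,
        multimask_output=True,
    )
    best_mask = masks[np.argmax(scores)]  # keep the highest-scoring mask

    # A bounding-box prompt (XYXY pixel coordinates); after set_image,
    # each predict call reuses the cached embedding, so it runs in real time.
    box = np.array([425, 600, 700, 875])
    box_masks, box_scores, _ = predictor.predict(box=box, multimask_output=False)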

 
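For capability (3), the same package provides SamAutomaticMaskGenerator, which prompts SAM with a grid of points over the image and filters the resulting masks. A minimal sketch under the same assumptions (ViT-H checkpoint, placeholder image path):

    import cv2
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
    mask_generator = SamAutomaticMaskGenerator(sam)

    image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)

    # Returns one dict per detected object, with keys such as
    # 'segmentation' (boolean HxW array), 'area', 'bbox', and 'stability_score'.
    masks = mask_generator.generate(image)

    masks.sort(key=lambda m: m["area"], reverse=True)
    print(f"Found {len(masks)} masks; largest covers {masks[0]['area']} px")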

More info:

GitHub: https://github.com/facebookresearch/segment-anything

Blog: https://ai.meta.com/blog/segment-anything-foundation-model-image-segmentation/

 

Contact us with your foundation model usage requirements.
