Imaging Lunch: AI-Assisted Segmentation of Images using Foundation Models (SAM)
Our in-house expert Daniel Sage will give a workshop on AI-Assisted Segmentation of Images using Foundation Models (SAM) during our next imaging lunch. Open to all EPFL PhD students and postdocs!
The recently released Segment Anything Model (SAM, Meta AI Research, 2023) is framed as a universal image segmentation tool capable of zero-shot generalization to unseen objects in natural images. Much like ChatGPT, SAM is strikingly versatile, delineating objects from simple user prompts. Numerous SAM variants, of varying efficiency, have been integrated into user-friendly software, some tailored to specific domains such as GIS, medical imaging, and microscopy. However, its application in science requires careful consideration.
After introducing the encoding mechanisms of SAM’s Vision Transformer architecture, we will present its use cases, including segmentation, dataset creation, and user-assisted annotation. We will also discuss the risks of relying on SAM to extract reliable and reproducible visual information, as well as its limitations when processing large images. Finally, we will question the dependence on unnecessarily large, energy-hungry tools for image segmentation.
Audience: Scientists, engineers, and members of the general public with an interest in imaging. No programming or mathematical knowledge is required.
About the Imaging Lunches: Once a month, the EPFL Center for Imaging organises an event dedicated to all PhD students and postdocs working with or in imaging. Discuss the latest advances in imaging. Connect with imaging peers. Learn about popular imaging tools!