Featured Application

This perspective review outlines how Large Language Models (LLMs) can be deployed as intelligent interfaces and orchestration layers for advanced optical microscopy platforms. A representative future application is the development of conversational, LLM-driven microscope assistants capable of translating high-level experimental goals, such as optimizing live-cell imaging conditions or autonomously exploring heterogeneous samples, into validated acquisition workflows. By integrating instrument control, real-time analysis, and facility-level data management, such systems have the potential to lower barriers to advanced microscopy, improve reproducibility, and enable adaptive, closed-loop experiments in both research laboratories and shared imaging facilities.

Abstract

Optical microscopy is a fundamental tool in the physical, chemical, and life sciences, enabling direct investigation of structure, dynamics, and function across multiple spatial and temporal scales. Advances in optical design, detectors, and computational techniques have greatly enhanced performance, but have also increased the complexity of modern microscopes, which are now software-driven and embedded in data-intensive workflows. Artificial intelligence has become an important component of this landscape, particularly through task-specific machine learning approaches for image analysis, optimization, and limited instrument control. While effective, these solutions are often fragmented and lack the ability to integrate experimental intent, contextual knowledge, and multi-step reasoning. Recent progress in large language models (LLMs) offers a new paradigm for intelligent microscopy. As foundation models trained on large-scale text and code, LLMs exhibit emergent capabilities in reasoning, abstraction, and tool coordination, allowing them to act as natural interfaces between users and complex experimental systems.
This perspective highlights how LLMs can function as cognitive and orchestration layers that connect experiment design, instrument control, data analysis, and knowledge integration. Emerging applications include conversational microscope control, workflow supervision, and scientific assistance for data exploration and hypothesis generation, alongside important technical, ethical, and governance challenges.
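As a purely illustrative sketch of the orchestration pattern the abstract describes — an LLM translating a high-level experimental goal into validated instrument actions — the following Python snippet shows one possible safety-gated dispatch loop. All names here (the `MockMicroscope` class, the action registry, the hard-coded plan) are hypothetical illustrations, not an API from the reviewed paper:

```python
# Hypothetical sketch: an LLM proposes named actions with parameters;
# only actions present in a validated registry, with range-checked
# parameters, are executed on the (mock) instrument.

from dataclasses import dataclass, field

@dataclass
class MockMicroscope:
    """Stand-in for a real instrument-control API."""
    log: list = field(default_factory=list)

    def set_exposure(self, ms: float) -> None:
        self.log.append(("set_exposure", ms))

    def acquire_stack(self, z_slices: int) -> str:
        self.log.append(("acquire_stack", z_slices))
        return f"stack_{z_slices}_slices"

def make_registry(scope: MockMicroscope) -> dict:
    """Map action names to (callable, parameter validator) pairs."""
    return {
        "set_exposure": (scope.set_exposure, lambda p: 0 < p["ms"] <= 1000),
        "acquire_stack": (scope.acquire_stack, lambda p: 1 <= p["z_slices"] <= 200),
    }

def run_plan(scope: MockMicroscope, plan: list) -> list:
    """Execute an LLM-proposed plan, rejecting unknown or out-of-range steps."""
    registry = make_registry(scope)
    results = []
    for step in plan:
        name, params = step["action"], step["params"]
        if name not in registry:
            raise ValueError(f"unknown action: {name}")
        fn, validate = registry[name]
        if not validate(params):
            raise ValueError(f"invalid parameters for {name}: {params}")
        results.append(fn(**params))
    return results

# In a real system this plan would be produced by an LLM given a goal
# such as "optimize live-cell imaging conditions"; here it is hard-coded.
plan = [
    {"action": "set_exposure", "params": {"ms": 50.0}},
    {"action": "acquire_stack", "params": {"z_slices": 30}},
]
scope = MockMicroscope()
results = run_plan(scope, plan)
```

The validation layer is the point of the sketch: it is one way to keep a conversational agent from issuing arbitrary hardware commands, addressing the governance concerns the abstract raises.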
Sancataldo, G. (2026). ChatMicroscopy: A Perspective Review of Large Language Models for Next-Generation Optical Microscopy. Applied Sciences, 16(5). https://doi.org/10.3390/app16052502
ChatMicroscopy: A Perspective Review of Large Language Models for Next-Generation Optical Microscopy
Sancataldo G.
2026-03-05
| File | Size | Format |
|---|---|---|
| applsci-16-02502 (1).pdf (open access; article, published version) | 774.7 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


