Introduction

Interventional endoscopy maximises the benefits of minimally invasive clinical procedures, allowing clinicians to reach internal human anatomy through the body's natural pathways or minimal incisions. This translates into minimal impact on patients, avoiding complex open-surgery interventions, reducing risk, and shortening recovery time. Nevertheless, endoscopic procedures are not without challenges.

Endoscopes are long, flexible devices with many degrees of freedom, which makes them difficult to operate and control. Like other soft mechanisms, they are also under-sensed, and clinicians typically need to guess the location and orientation of an endoscope from minimal available information. This problem is exacerbated in procedures such as colonoscopy, where the only information is the inserted length of the endoscope and a local view of the body's narrow passage through a camera mounted on the endoscope tip. Navigating the body's anatomy and mapping the covered anatomy thus become complex tasks for the clinician, with little technological support. In the case of colonoscopy, the challenges of endoscope navigation can lead to severe injuries, such as colon perforation due to excessive forces, or to misdiagnosis, such as cancerous tissue left undetected due to looping of the endoscope.

Endoscopy presents a tremendous opportunity for autonomous robotics and human-robot interaction in complex, changing environments. The combined endoscope-body system exhibits many classical challenges in robotics, including mechanism design, system identification, mechanical simulation, and manipulation control. Furthermore, it opens challenges for modern areas of robotics tied to emerging digital technologies and to new enabling material and sensor technologies, including soft robot design, model-based sensing, learning-based navigation, sensory augmentation, human-robot cooperation, and robot-based training.

The emergence of intelligent robotic endoscopy can drastically advance the possibilities for in-vivo endoscope navigation, endoscope design, and clinical operator training. Altogether, given the ageing European population and the massive European effort on beating cancer (BECA), intelligent robotic endoscopy stands out as a key approach to ensuring safety and effectiveness amid the expected increase in endoscopic procedures.

Intelligent robotic endoscopy

The Intelligent Robotic Endoscopes (IRE) for improved healthcare services project will address multiple scientific and technical challenges across intelligent robotic endoscopy. Because endoscopes are underactuated and under-sensed, the project will leverage a digital value chain based on simulation models and learning-based techniques to reconstruct missing information and guide intelligent algorithms. The development of simulation models of soft endoscopes and soft anatomy will be rooted in modern methods of statistical modelling for anatomy and geometry, and in model-order reduction and differentiable simulation for mechanics. Furthermore, learning-based techniques will drive the design of control policies, navigation methods, and navigation feedback based on sensory augmentation. The proposed simulation models and learning-based control methods will be applied to problems of endoscope design, intelligent navigation assistance, virtual-reality-based training, and soft-robot phantom-based testing and training.
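To give a flavour of the differentiable-simulation idea behind this digital value chain, the toy sketch below fits the stiffness of a one-dimensional damped mass-spring model to an observed trajectory by gradient descent through the simulator. This is a minimal illustration, not the project's method: finite differences stand in for true automatic differentiation, and all parameter values are invented for the example.

```python
# Toy "digital twin" calibration: recover the stiffness of a damped
# mass-spring model so its simulated trajectory matches observed data.
# Gradients are taken through the simulator via central finite
# differences (a stand-in for automatic differentiation).

def simulate(k, steps=200, dt=0.01, m=1.0, c=0.5, x0=1.0, v0=0.0):
    """Explicit-Euler integration of m*x'' = -k*x - c*x'. Returns positions."""
    x, v = x0, v0
    traj = []
    for _ in range(steps):
        a = (-k * x - c * v) / m
        v += a * dt
        x += v * dt
        traj.append(x)
    return traj

def loss(k, target):
    """Mean squared error between simulated and observed trajectories."""
    traj = simulate(k)
    return sum((a - b) ** 2 for a, b in zip(traj, target)) / len(traj)

def grad(k, target, eps=1e-4):
    """Central finite-difference derivative of the loss w.r.t. stiffness."""
    return (loss(k + eps, target) - loss(k - eps, target)) / (2 * eps)

# "Observed" data generated with a ground-truth stiffness of 4.0.
observed = simulate(4.0)

# Gradient descent on the stiffness parameter, starting from a bad guess.
k = 1.0
for _ in range(300):
    k -= 2.0 * grad(k, observed)

print(f"recovered stiffness: {k:.2f}")  # converges towards the true value 4.0
```

In the project's setting the scalar stiffness would be replaced by the many parameters of a reduced-order soft-tissue and endoscope model, but the calibration loop has the same shape: simulate, compare with observations, and follow the gradient.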

Novel methods will be tested on phantoms, as a validation step toward clinical deployment. Intelligent endoscopes are not meant to replace the human operator but will empower the operator with enriched information to make informed decisions. The project results will be demonstrated on colonoscopy, as a particularly impactful case of endoscopy.

Objectives

The IRE project aims to empower endoscopy technology by developing robotic functionalities with the next step in autonomy, creating a step change in how non-repetitive colonoscopy cancer-screening procedures are undertaken in realistic laboratory settings. This includes safety-critical human-robot interactions when operating in the complex and dynamic working environments inside the human body.

The results will be demonstrated in three real-world scenarios within colonoscopy: endoscope design (through both novel digital methodologies and novel technical robotic functionalities), endoscope operator training (using both virtual-reality-based and phantom-based training), and endoscope operator assistance (through intelligent navigation, human-robot interaction, and augmented sensory feedback). These results will be made possible by advancing the state of the art in many areas of cognitive robotics and digital supply chain technology.

The IRE project will work on five specific objectives, which can be summarised as:

1) Soft robotic endoscopes. Create the first AI-powered soft robotic endoscopes, with an increased level of autonomy through fine manipulation control and sensory augmentation for operator feedback, optimised through model-based design.

2) Robotic test and train phantoms. Build a soft robot phantom of human anatomy for continuous development and clinical validation of intelligent robotic endoscopes, measuring functional performance against clinical data.

3) AI-enabled digital endoscope twins. Provide fast and highly accurate digital twins of the robotic endoscope and the complex dynamic environment of the soft tissue, suitable for data-driven learning and optimisation.

4) AI-powered navigation. Develop learning-based methods to design navigation policies for operator assistance in complex endoscopy tasks and increase the level of autonomy to improve human-robot collaboration.

5) OPEN/FAIR models and data. Develop OPEN and FAIR models and datasets that will facilitate further research on intelligent robotic endoscopes, applied to colonoscopy.

Work Packages

The IRE project is structured into six scientific work packages, one exploitation/dissemination work package, and one management work package. The scientific work packages cover the following research themes: anatomical modelling, simulation models, test/train phantoms, soft robot endoscopes, AI-based navigation, and clinical verification.

List of work packages

WP1: Biomechanical Population Modelling. Work package lead: University of Copenhagen

WP2: Creating Digital Endoscope Twins. Work package lead: Universidad Rey Juan Carlos

WP3: Creating Software for Virtual Training and Real-Life Phantom Simulators. Work package lead: INRIA

WP4: Creating Robotic Endoscopes. Work package lead: Universiteit Twente

WP5: Learning Intelligent Navigation. Work package lead: University of Copenhagen

WP6: Process Integration and Clinical Verification. Work package lead: CAMES

WP7: Dissemination, Exploitation, Communication. Work package lead: University of Copenhagen

WP8: Management. Work package lead: University of Copenhagen