Home
Welcome to the SAIfer Surgery wiki. This wiki will hold information about the functionality developed as part of the SAIfer Surgery Turing project, with an emphasis on the software tools built in support of this project.

Broadly speaking, this project aims to robustly close the loop between specification inference and synthesis. In line with this, we will attempt to learn models and specifications at scale from cross-modal human activity data, build codes of practice and suitable representations thereof, and synthesise control policies that are safety-guaranteed and robust to variations and uncertainty.
At the same time, we seek to address a number of challenges in the surgical domain, including automated suction, tool handover and active lighting/viewpoint selection, in addition to joint tissue manipulation. Each of these applications will contribute towards the development of a customisable surgical IDE that aims to reduce the cognitive burden on surgeons and scrub nurses working in operating theatres.
The figure below shows the functional block diagram for the software tools. Each of the functional units is briefly discussed below.

The SAIfer Surgery stack is built around two UR10 manipulators mounted on the ceiling above a mock-up surgery scenario as shown below.
H1 and H2 denote the UR10 physical manipulators, while FU1 is the functional unit running the manipulator drivers. Commands sent to the drivers are filtered by a safety layer, FU2. Initially FU2 will constrain motions to avoid collisions with objects in the planning scene (FU8) using the MoveIt! motion planning libraries. A whitelisting system will be put in place to indicate objects in the planning scene with which contact is allowed.
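The whitelisting idea in FU2 can be sketched in plain Python. This is an illustrative toy, not the actual MoveIt! API; the object names and function signatures are assumptions for the sake of the example:

```python
# Illustrative sketch of FU2's whitelisting filter (not the MoveIt! API).
# A motion command is rejected if the planned motion would contact any
# planning-scene object that is not on the whitelist.

def contact_violations(predicted_contacts, whitelist):
    """Return the predicted contacts that are not whitelisted.

    predicted_contacts: names of scene objects the planned motion would touch.
    whitelist: names of objects with which contact is allowed.
    """
    return [obj for obj in predicted_contacts if obj not in whitelist]

def is_safe(predicted_contacts, whitelist):
    """A command is safe if it produces no non-whitelisted contacts."""
    return not contact_violations(predicted_contacts, whitelist)
```

In the real stack, the predicted contacts would come from collision checking against the planning scene (FU8) rather than being passed in directly.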
The planning scene (FU8) will initially be populated using semantic recognition from the perception layer (FU5), which will process information from cameras (H4) and RGB-D sensors (H3). In later work, the planning scene will be extended to hold behavioural constraints, learned from human-robot interaction.
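Populating the scene from semantic recognition might look like the following minimal sketch. The record format, field names and confidence threshold are assumptions, not the project's actual interface:

```python
# Hypothetical sketch: populating the planning scene (FU8) from semantic
# detections produced by the perception layer (FU5). Field names are assumed.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    position: tuple          # (x, y, z) in metres; frame assumed
    allowed_contact: bool = False  # whitelisting flag consumed by FU2

def update_planning_scene(scene, detections, min_confidence=0.5):
    """Insert sufficiently confident detections into the scene dict."""
    for det in detections:
        if det["confidence"] >= min_confidence:
            scene[det["label"]] = SceneObject(det["label"], det["position"])
    return scene
```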
FU6 will provide data logging and cleaning necessary for use in the learning layer (FU7). The learning layer will extract behavioural and demonstration information for the planning scene, in addition to controller policies learned from demonstration. Reinforcement learning policies will also be learned in FU7, relying on the controller and safety layers to facilitate safe robot learning.
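A toy sketch of FU6's cleaning step, under an assumed record format, might simply drop incomplete log entries before they reach the learning layer:

```python
# Toy sketch of FU6's cleaning step (record fields are assumptions):
# drop log entries with missing fields before they reach FU7.

REQUIRED_FIELDS = ("timestamp", "joint_positions", "source")

def clean_logs(records):
    """Keep only records in which every required field is present and set."""
    return [
        rec for rec in records
        if all(rec.get(field) is not None for field in REQUIRED_FIELDS)
    ]
```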
Commands to the safety layer will be passed through the controller layer (FU3), which accepts plans from the planning layer (FU4). Planning will be accomplished at a local and global level, with higher-level objectives passed to the planners by the system arbiter (FU9). The system arbiter will also provide application-specific functionality, with applications integrated into the UI layer (FU10).
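The command flow described above can be sketched as a simple pipeline. All names and data shapes here are assumptions made to illustrate the layering, not the project's real interfaces:

```python
# Minimal sketch of the command flow: arbiter (FU9) -> planner (FU4)
# -> controller (FU3) -> safety layer (FU2) -> driver (FU1).
# All function names and data shapes are illustrative assumptions.

def plan(objective):
    # FU4: turn a high-level objective into waypoints (toy version).
    return objective["waypoints"]

def control(waypoints):
    # FU3: convert waypoints into low-level motion commands.
    return [("move_to", wp) for wp in waypoints]

def safety_filter(commands, forbidden):
    # FU2: drop commands that would enter a forbidden region (toy check).
    return [cmd for cmd in commands if cmd[1] not in forbidden]

def arbiter(objective, forbidden):
    # FU9: dispatch an objective down through the stack.
    return safety_filter(control(plan(objective)), forbidden)
```

In the real system each layer is a separate component with its own interface; the point here is only the order in which commands traverse the stack.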
A number of applications are envisaged as part of the SAIfer Surgery project. These include:
- Point-to-point motion control using MoveIt!
- Waypoint control using MoveIt!
- Active viewpoint selection
- Swabbing with safety constraints
- Suction with safety constraints
- Tool handover
- Automatic system calibration
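As a flavour of the waypoint-control application, the sketch below linearly interpolates between Cartesian waypoints. A real implementation would use MoveIt!'s Cartesian path planning; this toy version, with assumed units and step counts, only shows the interpolation idea:

```python
# Hedged sketch of waypoint control: linearly interpolate between Cartesian
# waypoints. A real implementation would delegate to MoveIt!; this toy only
# illustrates the interpolation step. Coordinates are assumed to be metres.

def interpolate(p0, p1, steps):
    """Yield `steps` points from p0 towards p1 (excluding p0, including p1)."""
    for i in range(1, steps + 1):
        t = i / steps
        yield tuple(a + t * (b - a) for a, b in zip(p0, p1))

def waypoint_path(waypoints, steps=4):
    """Build a dense path visiting each waypoint in order."""
    path = [waypoints[0]]
    for p0, p1 in zip(waypoints, waypoints[1:]):
        path.extend(interpolate(p0, p1, steps))
    return path
```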