ICIP 2020 Challenge Session

Real-time distortion classification in laparoscopic videos

Final Results Leaderboard

Team/Participant   F1-score (Single+Multi)   F1-score (Single only)   Accuracy   Avg. time/frame (s)
Deep Heros         0.949                     0.947                    0.830      0.05
LION Team          0.941                     0.933                    0.815      0.104
INSA-INTTIC        0.933                     0.907                    0.780      0.05
FEJ                0.915                     0.880                    0.765      0.015
alchet             0.854                     0.987                    0.580      0.04
BUET_ENDGAME       0.832                     0.893                    0.570      0.007

Team Details

Team Name       Team Members              Affiliation
Deep Heros      1. Nouar AlDahoul         Multimedia University, Malaysia
                2. Mhd Adel Momou
LION Team       1. Dounia Hammou          National Institute of Telecom & ICT, Algeria
                2. Sid Ahmed Fezza
INSA-INTTIC     1. Zoubida Ameur          [1,3,4] Univ. Rennes, INSA Rennes, France
                2. Sid Ahmed Fezza        [2] National Institute of Telecom & ICT, Algeria
                3. Wassim Hamidouche
                4. Olivier Deforges
FEJ             1. Firas Laakom           [1,3] Tampere University, Finland
                2. Emmi Antikainen        [2] VTT Technical Research Center, Finland
                3. Junaid Malik
alchet          Aladine Chetouani         Université d'Orléans, France
BUET_ENDGAME    1. Md. Tariqul Islam      Bangladesh University of Engineering & Tech, Bangladesh
                2. Sheikh Asif Imran

Laparoscopic videos may be affected by different kinds of distortions during surgery, resulting in a loss of visual quality. To prevent disruptions to surgery caused by video quality issues, there is a great need for automated video enhancement systems. In any such system, the feedback loop plays an important part, whereby a change in video quality is handled by applying the correct enhancement approach [1]. One of the most critical steps in this feedback loop is identifying, in real time, the distortion [2] affecting the video, so that enhancement can be applied in a timely manner. The purpose of this challenge is to target this problem by developing a fast, unified and effective algorithm for real-time classification of distortions within a laparoscopic video. For this challenge, we will provide our own dataset of short-duration laparoscopic videos, the Laparoscopic Video Quality (LVQ) database [7]. These videos have been carefully selected from an existing public dataset and distorted with either a single distortion or multiple distortions simultaneously, at different levels. In total, 800 such videos will be provided, of which a sample of 200 is already publicly available.

Good video quality is an essential requirement for laparoscopic surgery. Distortions in a laparoscopic video not only affect a surgeon's visibility but also degrade the results of subsequent computational tasks in robot-assisted surgery and image-guided navigation systems, such as segmentation, instrument tracking [3] and augmented or mixed reality [4]. These distortions appear either because of technical problems in the equipment [5] or as side-effects of the instruments being used (e.g. smoke with diathermy). Most existing solutions to such problems rely on making changes to the technical equipment using one of the many available troubleshooting options. However, these solutions are time-consuming and may not solve the problem at hand, eventually requiring a specialist technician or a change of equipment. To handle these problems more effectively, automated video enhancement systems need to be employed.

The LVQ dataset will be made available to the participants once the challenge opens. Participants will be required to use this database to develop a single classification algorithm that can classify the distortions in all the videos in real time. Participants must submit easily readable, commented code for their algorithm (preferably in Matlab or Python), together with a document briefly summarizing the method and its steps. Each solution must also include a demo script that runs the submitted solution on a test video. The classification results should be displayed in real time on the tested video (or on a console window/terminal alongside) while the video is being played; in the case of multiple distortions, all detected classes should be displayed. Participants must also report the speed of their code and the specifications of the system on which it was run.
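To make the demo requirement concrete, the following is a minimal Python/OpenCV sketch of such a demo script. It reads a video frame by frame, overlays the predicted classes on the playing video, and reports the average inference time per frame. The classifier call predict_distortions, the label set and the video path are placeholders for a participant's own model, not part of the challenge kit.

import time
import cv2  # OpenCV, for video decoding and on-screen display

DISTORTION_CLASSES = ["defocus blur", "motion blur", "noise", "smoke", "uneven illumination"]

def predict_distortions(frame):
    """Placeholder for a participant's classifier.
    Should return the list of distortion classes detected in the frame."""
    return []  # replace with an actual model inference call

def run_demo(video_path):
    cap = cv2.VideoCapture(video_path)
    total_time, n_frames = 0.0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        start = time.perf_counter()
        labels = predict_distortions(frame)   # timing covers inference only
        total_time += time.perf_counter() - start
        n_frames += 1
        # Overlay all predicted classes on the frame while the video plays
        text = ", ".join(labels) if labels else "no distortion detected"
        cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("LVQ demo", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
    print(f"Average time per frame: {total_time / max(n_frames, 1):.4f} s")

if __name__ == "__main__":
    run_demo("test_video.mp4")  # hypothetical path to a test video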

The submissions will be judged on the following two criteria:

1. Speed of the algorithm: The submissions will be run under Windows on an Intel Core i7 system with 32 GB of RAM and an NVIDIA GeForce GTX 1050. A shorter running time will be given a higher score, provided that the algorithm also scores well on the second criterion.

2. Classification performance: The submissions will be judged using a classification score based on a weighted combination of classification accuracy and F1-score, with equal weight given to both. The algorithms will also be tested on a different set of laparoscopic videos than the one provided. Moreover, performance will be judged separately for videos with a single distortion and for those with multiple distortions, with more weight given to methods that perform well on the multi-distorted videos.
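For illustration, the sketch below computes such a combined score for multi-label predictions using scikit-learn. The equal 0.5/0.5 weighting follows the criterion above; the choice of exact-match accuracy and 'samples'-averaged F1 is an assumption, as the organizers' exact metric implementation is not specified here.

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Multi-hot ground truth and predictions over the five distortion classes
# (columns: defocus blur, motion blur, noise, smoke, uneven illumination)
y_true = np.array([[1, 0, 0, 1, 0],
                   [0, 1, 0, 0, 0],
                   [0, 0, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 1, 0],
                   [0, 1, 1, 0, 0],
                   [0, 0, 1, 0, 1]])

acc = accuracy_score(y_true, y_pred)              # exact-match (subset) accuracy
f1 = f1_score(y_true, y_pred, average="samples")  # per-sample F1, averaged

score = 0.5 * acc + 0.5 * f1  # equal weights, per the criterion above
print(f"accuracy={acc:.3f}, F1={f1:.3f}, score={score:.3f}")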

The LVQ database provided for this challenge consists of 800 distorted laparoscopic videos, each 10 seconds long. The videos have been carefully extracted from the public Cholec80 dataset [6] and then distorted using one or more of five distortions: defocus blur, motion blur, noise, smoke and uneven illumination.
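For concreteness, the distortions present in a video can be encoded as a multi-hot vector over these five classes, as in the minimal sketch below; the class ordering shown is illustrative, not an official convention of the LVQ database.

DISTORTION_CLASSES = ["defocus blur", "motion blur", "noise", "smoke", "uneven illumination"]

def encode_labels(distortions):
    """Map a set of distortion names to a multi-hot vector over the five classes."""
    return [1 if c in distortions else 0 for c in DISTORTION_CLASSES]

# A video distorted by both smoke and noise:
print(encode_labels({"smoke", "noise"}))  # -> [0, 0, 1, 1, 0]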

Organizers

Prof. Azeddine Beghdadi (Université Sorbonne Paris Nord, France)
Dr. Mounir Kaaniche (Université Sorbonne Paris Nord, France)
Zohaib Amjad Khan (Université Sorbonne Paris Nord, France)
Prof. Faouzi Alaya Cheikh (Norwegian University of Science and Technology (NTNU), Norway)
Prof. Bjørn Edwin (Oslo University Hospital, Norway)
Prof. Ole Jakob Elle (Oslo University Hospital, Norway)
Dr. Rafael Palomar (Oslo University Hospital, Norway)
Egidijus Pelanis (Oslo University Hospital, Norway)
Dr. Åsmund Avdem Fretland (Oslo University Hospital, Norway)

Important Dates

• Registration opening: April 1, 2020
• Training data available: April 30, 2020
• Testing data available: May 30, 2020
• Result (solution) submission deadline: June 14, 2020 (extended from June 7, 2020)
• Announcement of the results: August 10, 2020 (extended from July 31, 2020)

References

1. Sdiri, B., Cheikh, F.A., Dragusha, K. and Beghdadi, A., 2015. "Comparative study of endoscopic image enhancement techniques". In 2015 Colour and Visual Computing Symposium (CVCS) (pp. 1-5). IEEE.
2. Wang, C., Sharma, V., Fan, Y., Cheikh, F.A., Beghdadi, A., Elle, O.J. and Stiefelhagen, R., 2018. "Can Image Enhancement be Beneficial to Find Smoke Images in Laparoscopic Surgery?". In Color and Imaging Conference (Vol. 2018, No. 1, pp. 163-168). Society for Imaging Science and Technology.
3. Zhou, J. and Payandeh, S., 2014. "Visual tracking of laparoscopic instruments". Journal of Automation and Control Engineering, 2(3).
4. Pelanis, E., Kumar, R.P., Aghayan, D.L., Palomar, R., Fretland, Å.A., Brun, H., Elle, O.J. and Edwin, B., 2019. "Use of mixed reality for improved spatial understanding of liver anatomy". Minimally Invasive Therapy & Allied Technologies, pp. 1-7.
5. Siddaiah-Subramanya, M., Nyandowe, M. and Tiang, K.W., 2017. "Technical problems during laparoscopy: a systematic method of troubleshooting for surgeons". Innovative Surgical Sciences, 2(4), pp. 233-237.
6. Twinanda, A.P., Shehata, S., Mutter, D., Marescaux, J., De Mathelin, M. and Padoy, N., 2017. "EndoNet: a deep architecture for recognition tasks on laparoscopic videos". IEEE Transactions on Medical Imaging, 36(1), pp. 86-97.
7. Khan, Z.A., Beghdadi, A., Cheikh, F.A., Kaaniche, M., Pelanis, E., Palomar, R., Fretland, Å.A., Edwin, B. and Elle, O.J., 2020. "Towards a video quality assessment based framework for enhancement of laparoscopic videos". In Medical Imaging 2020: Image Perception, Observer Performance, and Technology Assessment (Vol. 11316, p. 113160P). International Society for Optics and Photonics.