Presentation No.: P-104
Title: Deep learning-based automatic detection of pharyngeal phase in video fluoroscopic swallowing study
Affiliation: Korea University Anam Hospital, Department of Rehabilitation Medicine1; Korea University Ansan Hospital, Department of Clinical Dentistry2; Korea University, Brain Convergence Research Center3
Authors: Eunyoung Lee1,3*, Ki-sun Lee2†, Sung-Bom Pyun1,3†
Objective
Videofluoroscopic swallowing study (VFSS) is a standard diagnostic tool for dysphagia. Evaluating dysphagia with VFSS requires manual analysis of every frame of the raw video files. In this study, we present a deep learning-based approach that uses transfer-learned deep convolutional neural networks (CNNs) to identify pharyngeal-phase frames in VFSS videos without manual searching.

Materials and Methods
In this study, 432 video clips, each containing one swallow cycle, were collected from 144 patients who underwent VFSS for swallowing difficulty. To train a deep CNN model that can classify pharyngeal-phase frames among all frames in a video clip, only the frames corresponding to the middle part of the pharyngeal phase were extracted from each clip. To comparatively evaluate the ability to classify pharyngeal-phase frames in VFSS video clips, we examined several pretrained deep CNNs with a fine-tuning technique, as sketched below.
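As an illustration of the transfer-learning setup described above, the following is a minimal Keras/TensorFlow sketch of fine-tuning a pretrained VGG16 with two convolutional blocks unfrozen (the configuration reported in the Results). The input size, the choice of block4 and block5 as the unfrozen blocks, the classifier head, and the optimizer settings are assumptions for illustration, not the authors' implementation.

# Minimal sketch (not the authors' code): VGG16 transfer learning with two
# convolutional blocks unfrozen for fine-tuning. Input size, classifier head,
# and optimizer settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

# Load VGG16 pretrained on ImageNet, without its fully connected head.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))

# Unfreeze only block4 and block5; keep the earlier blocks frozen.
base.trainable = True
for layer in base.layers:
    layer.trainable = layer.name.startswith(("block4", "block5"))

# Binary classifier: pharyngeal-phase frame vs. any other frame.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer=optimizers.Adam(learning_rate=1e-5),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
# model.fit(train_frames, train_labels, validation_data=(val_frames, val_labels))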

Results
Among all tested deep CNN models, the VGG16 model with fine-tuning of two blocks achieved the highest classification performance: accuracy 85.6%, sensitivity 89.8%, specificity 77.8%, and AUC 89.4%.
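For reference, the reported metrics can be computed from frame-level predictions as in the following scikit-learn sketch; the arrays y_true and y_score are hypothetical, and the 0.5 decision threshold is an assumption.

# Illustrative only: y_true and y_score are hypothetical frame-level labels and
# model scores; the 0.5 decision threshold is an assumption.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # 1 = pharyngeal-phase frame
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])   # model output probabilities
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)           # true-positive rate for pharyngeal-phase frames
specificity = tn / (tn + fp)           # true-negative rate
auc = roc_auc_score(y_true, y_score)   # area under the ROC curve
print(accuracy, sensitivity, specificity, auc)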

Conclusion
The results of this experiment show that an optimally fine-tuned deep CNN model can achieve strong discriminative performance for the pharyngeal phase in VFSS videos, even under limited data-set conditions.