Label Aware Denoising Pretraining

Authors

  • Dean Ninalga, University of Toronto

Abstract

Most large-scale pre-trained image models are not designed with segmentation or medical imaging in mind. Hence, practitioners often use specialized augmentation techniques such as CarveMix, along with denoising pretraining objectives, to initialize and train their models. However, because these methodologies do not incorporate label information, they may misappropriate model capacity for learning task-irrelevant information. We propose Label Aware Denoising Pretraining (LADP), a deep learning model pretraining technique for segmenting lesions caused by hypoxic-ischemic encephalopathy, a condition that causes severe motor and cognitive disability and high mortality in neonates. LADP uses the region-of-interest extraction method from CarveMix to impart increasing levels of noise to regions surrounding lesion contours. In this way, models efficiently learn better representations for the few key areas most relevant to the downstream task.
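The abstract's core idea, adding noise whose intensity grows with distance from the lesion contour, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the noise schedule, and the `base_sigma`/`max_sigma` parameters are assumptions, and a distance transform stands in for CarveMix's region-of-interest extraction.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def label_aware_noise(image, lesion_mask, base_sigma=0.05, max_sigma=0.5, seed=0):
    """Hypothetical sketch of label-aware noising for denoising pretraining.

    Adds Gaussian noise whose standard deviation increases with distance
    from the lesion, so regions near lesion contours stay relatively clean
    and the model can focus its denoising capacity there.
    """
    # Distance from each background voxel to the nearest lesion voxel.
    dist = distance_transform_edt(lesion_mask == 0)
    if dist.max() > 0:
        dist = dist / dist.max()  # normalize to [0, 1]

    # Per-voxel noise scale: base_sigma at the lesion, max_sigma far away.
    sigma = base_sigma + (max_sigma - base_sigma) * dist

    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, size=image.shape) * sigma
    return image + noise
```

For example, with a small lesion in the center of a blank image, the noise magnitude near the lesion is much smaller than in distant corners, which is the intended label-aware behavior.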

Published

2024-06-26

How to Cite

[1] D. Ninalga, “Label Aware Denoising Pretraining”, CMBES Proc., vol. 46, Jun. 2024.
