Fractional Fourier Time-Frequency Representation for Heart Sound Classification

Authors

Keywords:

Fourier transform, MFCC, Spectrogram, Fractional Fourier Transform, Heart Sound

Abstract

Early detection of abnormal heart sounds can significantly reduce mortality rates by allowing physicians to intervene in time. However, manual heart sound analysis is subjective and relies heavily on the skill and experience of the physician. Fortunately, deep learning has emerged as a promising method for heart sound classification. Time-frequency representations (TFRs) such as spectrograms and continuous wavelet transforms (CWT), as well as Mel-frequency cepstral coefficients (MFCC), are widely accepted input representations for heart sounds. This study proposes a combination of a fractional Fourier time-frequency representation (FrFT_TFR) and a deep learning model for heart sound classification, and uses a public dataset to demonstrate the efficacy of the proposed representation. Classification using the deep learning model with FrFT_TFR as input outperforms that obtained with spectrograms and CWT as inputs by approximately 4%, and with MFCC as input by 13%. The results underscore the effectiveness of FrFT_TFR for initial heart sound diagnosis.
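The abstract does not include an implementation, but the idea of an FrFT-based time-frequency representation can be sketched. The snippet below is an illustrative sketch, not the authors' method: it uses one common discrete FrFT definition (fractional powers of the unitary DFT matrix's eigenvalues) and computes a sliding-window FrFT magnitude as a TFR image; the function names, window length, hop size, and fractional order 0.5 are all assumed for illustration.

```python
import numpy as np

def dfrft_matrix(N, a):
    """Discrete fractional Fourier transform matrix of order a,
    obtained by raising the eigenvalues of the unitary DFT matrix
    to the power a. (One of several DFrFT definitions; illustrative,
    not necessarily the paper's formulation.)"""
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # unitary DFT
    w, V = np.linalg.eig(F)
    return V @ np.diag(w ** a) @ np.linalg.inv(V)

def frft_tfr(x, order=0.5, win_len=64, hop=16):
    """Sliding-window |FrFT| magnitudes stacked into a 2-D
    time-frequency image of shape (n_frames, win_len)."""
    Fa = dfrft_matrix(win_len, order)
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.stack(frames) @ Fa.T)

# Toy chirp standing in for a heart-sound segment (hypothetical data)
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * (20 + 80 * t) * t)
tfr = frft_tfr(x, order=0.5)  # 2-D array usable as deep-learning input
```

At order a = 1 this matrix reduces to the ordinary DFT and at a = 0 to the identity, so the fractional order interpolates between the time and frequency domains; the resulting image can then be fed to a CNN in the same way as a spectrogram.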

Published

2024-06-26

How to Cite

[1]
E. A. Nehary and S. Rajan, “Fractional Fourier Time-Frequency Representation for Heart Sound Classification”, CMBES Proc., vol. 46, Jun. 2024.

Section

Academic