Sound visualization for deaf assistance using mobile computing

Joint Authors

al-Habbash, Mahmud S.
Abu Samrah, Ayman Ahmad

Source

Journal of Engineering Research and Technology

Issue

Vol. 2, Issue 2 (30 Jun. 2015), pp. 159-166, 8 p.

Publisher

The Islamic University-Gaza Deanship of Research and Graduate Affairs

Publication Date

2015-06-30

Country of Publication

Palestine (Gaza Strip)

No. of Pages

8

Main Subjects

Engineering & Technology Sciences (Multidisciplinary)

Abstract EN

This paper presents a new approach to the visualization of sound for deaf assistance that simultaneously illustrates important dynamic sound properties and recognized sound icons in an easily readable view.

In order to visualize general sounds efficiently, MFCC sound features were utilized to represent robust, discriminative properties of the sound.

The problem of visualizing the 39-dimensional MFCC vector was simplified by visualizing a one-dimensional value, obtained by comparing a single reference MFCC vector with the input MFCC vector.

A new similarity measure for comparing MFCC feature vectors was proposed; it outperforms existing local similarity measures, whose one-to-one attribute-value comparison leads to incorrect similarity decisions.
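As a rough illustration only (the abstract does not detail the proposed measure), the following Java sketch shows how a 39-dimensional MFCC frame can be collapsed into a single visualization value by comparing it against a reference MFCC vector; Euclidean distance stands in for the paper's measure, and the class and method names are hypothetical.

    // Minimal sketch, not the paper's exact similarity measure: one scalar per
    // frame, obtained by comparing the input MFCC vector with a reference vector.
    public final class MfccComparison {

        // Number of MFCC coefficients per frame, as stated in the abstract.
        static final int MFCC_DIM = 39;

        // Returns the distance between the reference and input MFCC vectors.
        static double visualizationValue(double[] referenceMfcc, double[] inputMfcc) {
            if (referenceMfcc.length != MFCC_DIM || inputMfcc.length != MFCC_DIM) {
                throw new IllegalArgumentException("expected " + MFCC_DIM + "-dimensional MFCC vectors");
            }
            double sum = 0.0;
            for (int i = 0; i < MFCC_DIM; i++) {
                double d = referenceMfcc[i] - inputMfcc[i];
                sum += d * d;
            }
            return Math.sqrt(sum);
        }
    }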

Classification of the input sound was performed and attached to the visualization system to make it more usable.

Each sound time frame is passed to a k-NN classification algorithm to detect short sound events.
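A minimal Java sketch of such per-frame k-NN classification is given below; the class name, the label set, and the use of squared Euclidean distance are assumptions, since the abstract only states that k-NN is applied to each frame.

    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative k-NN frame classifier: each incoming MFCC frame is labeled
    // by majority vote among its k nearest training frames.
    public final class FrameKnnClassifier {

        private final double[][] trainFrames; // training MFCC frames
        private final String[] trainLabels;   // label of each training frame
        private final int k;

        FrameKnnClassifier(double[][] trainFrames, String[] trainLabels, int k) {
            this.trainFrames = trainFrames;
            this.trainLabels = trainLabels;
            this.k = k;
        }

        String classify(double[] frame) {
            // Sort training indices by distance to the query frame.
            Integer[] idx = new Integer[trainFrames.length];
            for (int i = 0; i < idx.length; i++) idx[i] = i;
            Arrays.sort(idx, Comparator.comparingDouble(i -> distance(trainFrames[i], frame)));

            // Majority vote among the k closest frames.
            Map<String, Integer> votes = new HashMap<>();
            for (int i = 0; i < Math.min(k, idx.length); i++) {
                votes.merge(trainLabels[idx[i]], 1, Integer::sum);
            }
            return votes.entrySet().stream()
                    .max(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .orElse("unknown");
        }

        private static double distance(double[] a, double[] b) {
            double sum = 0.0;
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                sum += d * d;
            }
            return sum; // squared distance is sufficient for ranking neighbors
        }
    }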

In addition, the input sound is buffered every second and forwarded to a Dynamic Time Warping (DTW) classification algorithm, which is designed for dynamic time-series classification.
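The sketch below shows a textbook DTW cost computation in Java that could be used to compare a buffered one-second MFCC sequence against stored template sequences; the abstract only states that DTW is used, so the surrounding class and frame-distance choice are illustrative assumptions.

    // Illustrative DTW cost between two MFCC sequences (frames x coefficients).
    public final class DtwClassifier {

        static double dtwCost(double[][] seqA, double[][] seqB) {
            int n = seqA.length, m = seqB.length;
            double[][] cost = new double[n + 1][m + 1];
            for (double[] row : cost) java.util.Arrays.fill(row, Double.POSITIVE_INFINITY);
            cost[0][0] = 0.0;
            for (int i = 1; i <= n; i++) {
                for (int j = 1; j <= m; j++) {
                    double d = frameDistance(seqA[i - 1], seqB[j - 1]);
                    cost[i][j] = d + Math.min(cost[i - 1][j],
                                   Math.min(cost[i][j - 1], cost[i - 1][j - 1]));
                }
            }
            return cost[n][m]; // smaller cost means a closer match to the template
        }

        private static double frameDistance(double[] a, double[] b) {
            double sum = 0.0;
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                sum += d * d;
            }
            return Math.sqrt(sum);
        }
    }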

Both classifiers run at the same time and deliver their classification results to the visualization model.
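One way to run the two classifiers concurrently is sketched below with a standard Java thread pool; this is a hypothetical arrangement that reuses the class names from the sketches above, not the paper's actual threading design.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Hypothetical sketch: submit the per-frame k-NN task and the one-second
    // DTW task in parallel, then hand both labels to the visualization layer.
    public final class ConcurrentClassification {

        private final ExecutorService pool = Executors.newFixedThreadPool(2);

        void classifyAndVisualize(double[] currentFrame, double[][] oneSecondBuffer,
                                  FrameKnnClassifier knn, DtwTemplateStore templates) throws Exception {
            Future<String> shortEvent = pool.submit(() -> knn.classify(currentFrame));
            Future<String> longEvent  = pool.submit(() -> templates.nearestLabel(oneSecondBuffer));
            render(shortEvent.get(), longEvent.get());
        }

        private void render(String shortLabel, String longLabel) {
            // Placeholder for forwarding results to the visualization model.
            System.out.println("short event: " + shortLabel + ", buffered event: " + longLabel);
        }

        // Minimal placeholder for the DTW template lookup used above.
        interface DtwTemplateStore {
            String nearestLabel(double[][] sequence);
        }
    }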

The application was implemented in the Java programming language to run on smartphones with the Android OS, so many considerations related to algorithmic complexity were taken into account.

The system was implemented to utilize the smartphone's GPU to guarantee smooth and fast rendering.
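On Android, GPU-backed rendering of this kind is commonly done with a GLSurfaceView and an OpenGL ES renderer; the skeleton below is a hedged sketch of that standard pattern, with a hypothetical class name and an empty draw step, since the paper's actual rendering code is not described in the abstract.

    import android.opengl.GLES20;
    import android.opengl.GLSurfaceView;
    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;

    // Hypothetical OpenGL ES 2.0 renderer skeleton for the visualization view.
    public class SoundVisualizationRenderer implements GLSurfaceView.Renderer {

        private volatile float latestValue; // most recent one-dimensional sound value

        public void setLatestValue(float value) {
            this.latestValue = value;
        }

        @Override
        public void onSurfaceCreated(GL10 unused, EGLConfig config) {
            GLES20.glClearColor(0f, 0f, 0f, 1f);
        }

        @Override
        public void onSurfaceChanged(GL10 unused, int width, int height) {
            GLES20.glViewport(0, 0, width, height);
        }

        @Override
        public void onDrawFrame(GL10 unused) {
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
            // Drawing of the sound value and recognized icons would go here.
        }
    }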

The system design was based on interviews with five deaf persons, taking their preferences for the visualization system into account.

In addition, the same deaf persons tested the system, and its evaluation was carried out based on their interaction with it.

Our approach yields illustrations of sound that are more accessible and better suited to casual and less expert users.

American Psychological Association (APA)

Abu Samrah, Ayman Ahmad & al-Habbash, Mahmud S. 2015. Sound visualization for deaf assistance using mobile computing. Journal of Engineering Research and Technology, Vol. 2, no. 2, pp. 159-166.
https://search.emarefa.net/detail/BIM-717480

Modern Language Association (MLA)

Abu Samrah, Ayman Ahmad & al-Habbash, Mahmud S. Sound visualization for deaf assistance using mobile computing. Journal of Engineering Research and Technology, Vol. 2, no. 2 (Jun. 2015), pp. 159-166.
https://search.emarefa.net/detail/BIM-717480

American Medical Association (AMA)

Abu Samrah, Ayman Ahmad & al-Habbash, Mahmud S. Sound visualization for deaf assistance using mobile computing. Journal of Engineering Research and Technology. 2015. Vol. 2, no. 2, pp. 159-166.
https://search.emarefa.net/detail/BIM-717480

Data Type

Journal Articles

Language

English

Notes

Includes bibliographical references: p. 165-166

Record ID

BIM-717480