Emotional Video to Audio Transformation Using Deep Recurrent Neural Networks and a Neuro-Fuzzy System
Co-authors
Cunha Sergio, Gwenaelle
Lee, Minho
Source
Mathematical Problems in Engineering
Issue
Volume 2020, Issue 2020 (31 December 2020), pp. 1-15, 15 p.
Publisher
Hindawi Publishing Corporation
Publication Date
2020-02-24
Country of Publication
Egypt
Number of Pages
15
Main Subjects
Abstract (EN)
Generating music whose emotion matches that of an input video is a highly relevant problem nowadays.
Video content creators and automatic movie directors benefit from maintaining their viewers engaged, which can be facilitated by producing novel material eliciting stronger emotions in them.
Moreover, there is currently a demand for more empathetic computers to aid humans in applications such as augmenting the perception ability of visually- and/or hearing-impaired people.
Current approaches overlook the video’s emotional characteristics in the music generation step, only consider static images instead of videos, are unable to generate novel music, and require a high level of human effort and skills.
In this study, we propose a novel hybrid deep neural network that uses an Adaptive Neuro-Fuzzy Inference System (ANFIS) to predict a video's emotion from its visual features and a deep Long Short-Term Memory (LSTM) Recurrent Neural Network to generate corresponding audio signals with a similar emotional character.
The former is able to appropriately model emotions due to its fuzzy properties, and the latter is able to model data with dynamic time properties well due to the availability of the previous hidden state information.
The novelty of our proposed method lies in the extraction of visual emotional features in order to transform them into audio signals with corresponding emotional aspects for users.
Quantitative experiments show low mean absolute errors of 0.217 and 0.255 on the Lindsey and DEAP datasets, respectively, and similar global features in the spectrograms.
This indicates that our model is able to appropriately perform domain transformation between visual and audio features.
Based on the experimental results, our model can effectively generate audio that matches the scene and elicits a similar emotion from the viewer in both datasets, and music generated by our model is also chosen more often by users (code available online at https://github.com/gcunhase/Emotional-Video-to-Audio-with-ANFIS-DeepRNN).
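The abstract describes a recurrent generator that maps per-frame visual emotion features to audio features, exploiting the LSTM's hidden state to carry temporal context. The sketch below is a minimal, illustrative NumPy version of that idea only; the layer sizes, the two-dimensional valence/arousal input, and the 64-dimensional audio-feature output are assumptions for demonstration, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Single-layer LSTM cell (sketch): one weight matrix per gate."""
    def __init__(self, in_dim, hid_dim):
        k = in_dim + hid_dim
        # Gates: i = input, f = forget, o = output, c = candidate state.
        self.W = {g: rng.standard_normal((hid_dim, k)) * 0.1 for g in "ifoc"}
        self.b = {g: np.zeros(hid_dim) for g in "ifoc"}
        self.hid_dim = hid_dim

    def step(self, x, h, c):
        z = np.concatenate([x, h])            # current input + previous hidden state
        i = sigmoid(self.W["i"] @ z + self.b["i"])
        f = sigmoid(self.W["f"] @ z + self.b["f"])
        o = sigmoid(self.W["o"] @ z + self.b["o"])
        g = np.tanh(self.W["c"] @ z + self.b["c"])
        c = f * c + i * g                     # cell state carries long-term context
        h = o * np.tanh(c)                    # hidden state is exposed per time step
        return h, c

def generate_audio_features(emotion_seq, cell, out_proj):
    """Run the LSTM over per-frame emotion features and project each
    hidden state to an audio feature vector (e.g. a spectrogram frame)."""
    h = np.zeros(cell.hid_dim)
    c = np.zeros(cell.hid_dim)
    outputs = []
    for x in emotion_seq:
        h, c = cell.step(x, h, c)
        outputs.append(out_proj @ h)
    return np.stack(outputs)

# Toy run: 10 frames of 2-D emotion features -> 10 frames of 64-D audio features.
cell = LSTMCell(in_dim=2, hid_dim=32)
out_proj = rng.standard_normal((64, 32)) * 0.1
emotions = rng.uniform(-1, 1, size=(10, 2))   # assumed valence/arousal per frame
audio = generate_audio_features(emotions, cell, out_proj)
print(audio.shape)  # (10, 64)
```

The per-step recurrence is what lets the generated audio depend on the video's emotional trajectory rather than on each frame in isolation, which is the property the abstract attributes to the LSTM.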
APA citation style
Cunha Sergio, Gwenaelle, & Lee, Minho. (2020). Emotional Video to Audio Transformation Using Deep Recurrent Neural Networks and a Neuro-Fuzzy System. Mathematical Problems in Engineering, Vol. 2020, no. 2020, pp. 1-15.
https://search.emarefa.net/detail/BIM-1201159
MLA citation style
Cunha Sergio, Gwenaelle, and Lee, Minho. "Emotional Video to Audio Transformation Using Deep Recurrent Neural Networks and a Neuro-Fuzzy System." Mathematical Problems in Engineering, no. 2020 (2020), pp. 1-15.
https://search.emarefa.net/detail/BIM-1201159
AMA citation style
Cunha Sergio, Gwenaelle, Lee, Minho. Emotional Video to Audio Transformation Using Deep Recurrent Neural Networks and a Neuro-Fuzzy System. Mathematical Problems in Engineering. 2020;Vol. 2020, no. 2020, pp. 1-15.
https://search.emarefa.net/detail/BIM-1201159
Data Type
Articles
Text Language
English
Notes
Includes bibliographical references
Record Number
BIM-1201159