On approximation of multidimensional functions by using feed forward neural networks

Other titles

حول تقريب الدوال متعددة الأبعاد باستخدام الشبكات العصبية ذات التغذية التقدمية

Thesis author

al-Khafaji, Najla Muhammad Husayn

Thesis supervisor

Naum, Riyad Shakir

Committee members

Husayn, Shawqi Shakir
Jasim, Mahmud Khalid
Muhammad, Lama Naji
Majid, Abd al-Rahman Hamid
Mansur, Nadir Jurj

University

University of Baghdad

College

College of Science

Academic department

Department of Mathematics

University country

Iraq

Degree

Doctorate

Degree date

2006

English abstract

In this thesis we consider the problem of approximating real-valued multidimensional functions f ∈ C(R^s) with the aid of feedforward neural networks (FFNNs).

It is well known that as the dimension s of the input space increases, the approximation of f becomes severely difficult; this is the curse of dimensionality.

We overcome this dimensionality problem by using the Radon transform. With the aid of line projections at fixed angles, the Radon transform can be used to reduce the dimension of the input space. We also introduce the inverse Radon transform in order to return to the original dimension of the input space.
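The dimension-reduction idea can be illustrated with a minimal sketch (not the thesis code): projecting each s-dimensional input point onto a fixed unit direction theta replaces it with the single scalar t = theta · x, which is the line-projection step underlying the Radon transform. The function and variable names here are illustrative only.

```python
import numpy as np

def project(points, theta):
    """Project s-dimensional points onto a fixed direction (angle)."""
    theta = np.asarray(theta, dtype=float)
    theta = theta / np.linalg.norm(theta)   # normalize to a unit direction
    return points @ theta                   # one scalar per s-dimensional point

# Three points in R^2 collapse to three scalars along the direction (1, 1).
points = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
t = project(points, [1.0, 1.0])
```

Inverting the reduction (recovering f on R^s from its projections) is the role played by the inverse Radon transform in the thesis.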

Combining these two concepts, the Radon transform and its inverse, with feedforward neural networks (FFNNs), we introduce two methods for approximating real-valued functions f ∈ C(R^s):

• Radon ridge function neural networks (RRGFNNs).

• Radon radial basis function neural networks (RRBFNNs).

In the RRGFNN method, we introduce a new type of feedforward neural network, namely ridge function neural networks (RGFNNs).
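A one-hidden-layer ridge-function network can be sketched as follows; this is a hypothetical illustration, not the thesis implementation. Each hidden unit depends on the input x only through the one-dimensional projection w_i · x (a "ridge" direction), and the output is the weighted sum f(x) ≈ Σ a_i σ(w_i · x + b_i) with a sigmoidal activation σ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ridge_network(x, W, b, a):
    """Output of a ridge-function network: sum_i a_i * sigmoid(w_i . x + b_i)."""
    return sigmoid(W @ x + b) @ a

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))   # 5 hidden neurons, input dimension s = 3
b = rng.standard_normal(5)        # one bias per hidden neuron
a = rng.standard_normal(5)        # output-layer coefficients
y = ridge_network(np.ones(3), W, b, a)   # scalar network output
```

Radial basis function networks differ only in the hidden unit: each depends on the distance ‖x − c_i‖ to a center rather than on a projection.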

For these two methods we develop and modify the greedy algorithm to train the neural network subprocedures.

This choice of the greedy algorithm follows the strategy of avoiding the use of derivatives of f and of the activation function, which is usually a sigmoidal function.
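A derivative-free greedy scheme can be sketched roughly as follows; this is a hedged illustration of the general idea, not the thesis's modified algorithm. At each stage a new ridge term σ(w · x + b) is chosen by random candidate search against the current residual, and its coefficient comes from a one-dimensional least-squares fit, so no derivatives of f or of σ are needed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def greedy_fit(X, y, n_terms=5, n_candidates=200, rng=None):
    """Greedily add ridge terms, each fitted to the current residual."""
    if rng is None:
        rng = np.random.default_rng(0)
    residual = y.copy()
    for _ in range(n_terms):
        best = None
        for _ in range(n_candidates):
            w = rng.standard_normal(X.shape[1])   # candidate ridge direction
            b = rng.standard_normal()             # candidate bias
            phi = sigmoid(X @ w + b)
            a = (phi @ residual) / (phi @ phi)    # 1-D least-squares coefficient
            err = np.linalg.norm(residual - a * phi)
            if best is None or err < best[0]:
                best = (err, a, phi)
        _, a, phi = best
        residual = residual - a * phi             # keep the best term, update residual
    return residual

X = np.random.default_rng(1).standard_normal((50, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2
r = greedy_fit(X, y)   # residual after 5 greedy terms
```

Because each accepted term is a least-squares projection of the residual, the residual norm never increases from one stage to the next.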

However, if we use the derivatives of f and of the sigmoidal function, where the derivative f′ cannot be expressed in terms of f, then the Cheney convergence results [8] cannot be used, due to the lack of completeness of the input space.

Thus, to bound the error ‖f_exact − f_approximate‖, we have to use a Sobolev space.

We discuss different algorithms for training the feedforward neural networks (FFNNs) used to approximate the given multidimensional functions. These algorithms use the gradient of the error function, so that the error function is minimized and the weights receive their optimal adjustment.
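The gradient-based weight adjustment can be sketched in its simplest form; this is an illustrative example of plain gradient descent on a squared-error function for a linear model, not one of the thesis's training algorithms. The weights move along the negative gradient until the error is minimized.

```python
import numpy as np

def train_linear(X, y, lr=0.1, steps=500):
    """Minimize mean squared error by moving weights down the gradient."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # gradient of the error function
        w -= lr * grad                            # step opposite the gradient
    return w

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X @ np.array([2.0, -1.0])   # targets generated by known weights
w = train_linear(X, y)          # recovers the generating weights
```

For a network with a sigmoidal hidden layer the same idea applies, with the gradient obtained by the chain rule (backpropagation).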

The output of the proposed neural networks is the linear system B a = d, where B ∈ R^{m×n}, a ∈ R^{n×1}, and d ∈ R^{m×1}. When m = n we use the LU algorithm. The linear system may be non-degenerate, i.e. rank(B) = n, or degenerate, i.e. rank(B) < n.

For a non-degenerate neural linear system we use the least-squares QR algorithm to find a_opt. However, for a degenerate neural linear system we use the singular value decomposition (SVD) algorithm to find a_opt.
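The two cases can be illustrated with NumPy's standard solvers; the matrices here are toy examples, not outputs of the thesis's networks. For a full-rank B, `np.linalg.lstsq` gives the least-squares solution; for a rank-deficient (degenerate) B, the SVD-based pseudoinverse gives the minimum-norm least-squares solution.

```python
import numpy as np

d = np.array([1.0, 2.0, 2.0])

# Non-degenerate case: rank(B) = n = 2, least squares finds a_opt.
B_full = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
a_qr, *_ = np.linalg.lstsq(B_full, d, rcond=None)

# Degenerate case: rank(B) = 1 < n, so the SVD pseudoinverse picks the
# minimum-norm solution among all least-squares solutions.
B_deg = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
a_svd = np.linalg.pinv(B_deg) @ d
```

The SVD route is the robust choice for the degenerate case because ordinary least squares has no unique solution when the columns of B are linearly dependent.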

Also, we prove a theorem relating the condition number of the above neural system to the number of neurons in the hidden layer of the neural networks, which is the number of basis functions in the series f ≈ Σ_{i=1}^{n} a_i φ_i, where the a_i are unknown coefficients and the φ_i are the basis functions.

From this we can decide how many neurons in the hidden layer, i.e. basis functions, are sufficient to produce an accurate approximation to f ∈ C(R^s).
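The condition-number/neuron-count connection can be observed empirically; this sketch is an assumption-laden illustration, not the theorem itself. Each hidden neuron contributes one column (basis function) to the design matrix B, and one can simply measure cond(B) as columns are added.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))   # 100 sample points in R^2

conds = []
for n in (2, 8, 32):                # try hidden layers of increasing width
    W = rng.standard_normal((n, 2))
    b = rng.standard_normal(n)
    B = sigmoid(X @ W.T + b)        # 100 x n matrix: one column per neuron
    conds.append(np.linalg.cond(B))
```

As sigmoidal columns accumulate they tend to become correlated, so cond(B) typically grows with n; a theorem quantifying this trade-off tells us when adding neurons stops improving the approximation.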

To make this research accessible to the reader, we have included or modified some known results from linear algebra and functional analysis, giving detailed proofs in some cases and supplying proofs for results stated in the literature without proof, while avoiding very long proofs.

Finally, we believe this thesis shows that much further research can be done in this new branch: for example, developing a robust and reliable neural network method for approximating a real-valued multidimensional function f ∈ C(R^s).

Main subject areas

Mathematics

Topics

Number of pages

210

Table of contents

Table of contents.

Abstract.

Abstract in Arabic.

Introduction.

Chapter One : Artificial neural networks.

Chapter Two : Approximation of multidimensional functions using ridge functions and radial basis functions.

Chapter Three : The training algorithms for neural networks.

Chapter Four : Linear algebra and neural networks.

Chapter Five : Approximation of multidimensional functions using the Radon transform and neural networks.

References.

APA citation style (American Psychological Association)

al-Khafaji, Najla Muhammad Husayn. (2006). On approximation of multidimensional functions by using feed forward neural networks (Doctoral dissertation). University of Baghdad, Iraq.
https://search.emarefa.net/detail/BIM-603045

MLA citation style (Modern Language Association)

al-Khafaji, Najla Muhammad Husayn. On approximation of multidimensional functions by using feed forward neural networks. Doctoral dissertation, University of Baghdad, 2006.
https://search.emarefa.net/detail/BIM-603045

AMA citation style (American Medical Association)

al-Khafaji, Najla Muhammad Husayn. On approximation of multidimensional functions by using feed forward neural networks [Doctoral dissertation]. University of Baghdad; 2006.
https://search.emarefa.net/detail/BIM-603045

Text language

English

Data type

University theses

Record number

BIM-603045