On approximation of multidimensional functions by using feed forward neural networks

Other Title(s)

حول تقريب الدوال متعددة الأبعاد باستخدام الشبكات العصبية ذات التغذية التقدمية

Dissertant

al-Khafaji, Najla Muhammad Husayn

Thesis advisor

Naum, Riyad Shakir

Committee Members

Husayn, Shawqi Shakir
Jasim, Mahmud Khalid
Muhammad, Lama Naji
Majid, Abd al-Rahman Hamid
Mansur, Nadir Jurj

University

University of Baghdad

Faculty

College of Science

Department

Mathematics Department

University Country

Iraq

Degree

Ph.D.

Degree Date

2006

English Abstract

In this thesis we consider the problem of approximating real-valued multidimensional functions f ∈ C(R^s) with the aid of feedforward neural networks (FFNNs).

It is well known that as the dimension s of the input space increases, the approximation of f becomes severely difficult; this is the curse of dimensionality.

We overcome the above problem of dimensionality by using the concept of the Radon transform.

Thus, with the aid of line projections at fixed angles, the Radon transform can be used to reduce the dimension of the input space.
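The dimension-reduction step can be sketched in a few lines; this is a generic illustration of projecting s-dimensional inputs onto a fixed set of unit directions (the Radon transform makes this precise), not the thesis's exact construction, and the function name here is our own:

```python
import numpy as np

def project_onto_directions(X, directions):
    """Project s-dimensional samples onto fixed unit directions.

    X          : (N, s) array of input points.
    directions : (k, s) array of unit vectors (the fixed angles).
    Returns an (N, k) array of one-dimensional projections, one column
    per direction -- a reduced representation of the input space.
    """
    return X @ directions.T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                        # 100 points in R^5
dirs = rng.normal(size=(3, 5))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # normalize to unit vectors

P = project_onto_directions(X, dirs)                 # (100, 3): reduced inputs
```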

We also introduce the inverse Radon transform in order to return to the original dimension of the input space.

These two concepts, the Radon transform and its inverse, together with feedforward neural networks (FFNNs), lead to two methods for approximating real-valued functions f ∈ C(R^s), which we call:

• Radon ridge function neural networks (RRGFNNs).

• Radon radial basis function neural networks (RRBFNNs).

In the RRGFNNs method, we introduce a new type of feedforward neural network (FFNN), namely ridge function neural networks (RGFNNs).
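The two families of hidden units can be sketched generically; this is an illustration of the standard ridge-function and radial-basis-function network forms under our own choices (Gaussian radial basis, sigmoidal ridge activation), not the thesis's specific networks:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def ridge_network(X, W, b, a):
    """Ridge function network: f(x) ~ sum_i a_i * sigma(w_i . x + b_i).
    Each hidden unit acts on the one-dimensional projection w_i . x,
    i.e. it is constant along a ridge of the input space."""
    return sigmoid(X @ W.T + b) @ a

def rbf_network(X, C, a, width=1.0):
    """Radial basis function network: f(x) ~ sum_i a_i * phi(|x - c_i|),
    here with a Gaussian phi centered at the c_i."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width**2)) @ a

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))                          # 50 points in R^4
W, b, a = rng.normal(size=(6, 4)), rng.normal(size=6), rng.normal(size=6)
C = rng.normal(size=(6, 4))                           # 6 RBF centers

y_ridge = ridge_network(X, W, b, a)                   # shape (50,)
y_rbf = rbf_network(X, C, a)                          # shape (50,)
```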

For the above two methods we develop and modify the greedy algorithm to train the neural network subprocedures.

This choice of the greedy algorithm rests on the strategy of avoiding the use of derivatives of f and of the activation function, which is usually a sigmoidal function.
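A derivative-free greedy step of this flavor might look as follows. This is a generic sketch under our own assumptions (random candidate search for each new unit), not the modified greedy algorithm developed in the thesis; note that only inner products with the data are used, never derivatives of f or of the sigmoid:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def greedy_fit(X, y, n_terms=5, n_candidates=200, rng=None):
    """Derivative-free greedy approximation: at each step, try random
    candidate units sigma(w . x + b) and keep the one whose best scalar
    coefficient most reduces the squared residual."""
    rng = rng or np.random.default_rng(0)
    residual = y.copy()
    units, coeffs = [], []
    for _ in range(n_terms):
        best = None
        for _ in range(n_candidates):
            w = rng.normal(size=X.shape[1])
            b = rng.normal()
            g = sigmoid(X @ w + b)
            a = (g @ residual) / (g @ g)   # best coefficient for this unit
            gain = a * (g @ residual)      # reduction in squared error
            if best is None or gain > best[0]:
                best = (gain, w, b, a, g)
        _, w, b, a, g = best
        units.append((w, b))
        coeffs.append(a)
        residual = residual - a * g        # each step never increases the error
    return units, coeffs, residual

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
units, coeffs, res = greedy_fit(X, y, n_terms=8, rng=rng)
```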

However, if we use the derivatives of f and of the sigmoidal function, where the derivative f′ cannot be expressed in terms of f, then the Cheney convergence results [8] cannot be applied, owing to the lack of completeness of the input space.

Thus, to bound the error ‖f_exact − f_approximate‖, we have to work in a Sobolev space.

We discuss different algorithms for training the feedforward neural networks (FFNNs) that we use to approximate the given multidimensional functions.

These algorithms use the gradient of the error function, so that minimizing the error function yields the optimal adjustment of the weights.
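As a generic illustration of gradient-based weight adjustment (not the thesis's specific training algorithms), one can descend the squared-error surface of a one-hidden-layer sigmoidal network; all names and hyperparameters below are our own:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def gradient_train(X, y, n_hidden=4, lr=0.1, steps=500, rng=None):
    """Minimize E(W, b, a) = 0.5 * |f_hat - y|^2 by gradient descent
    on all weights of a one-hidden-layer network."""
    rng = rng or np.random.default_rng(0)
    W = rng.normal(size=(n_hidden, X.shape[1]))
    b = rng.normal(size=n_hidden)
    a = rng.normal(size=n_hidden)
    for _ in range(steps):
        H = sigmoid(X @ W.T + b)                       # hidden activations
        err = H @ a - y                                # network residual
        grad_a = H.T @ err
        dH = err[:, None] * a[None, :] * H * (1 - H)   # chain rule through sigmoid
        grad_W = dH.T @ X
        grad_b = dH.sum(axis=0)
        W -= lr * grad_W / len(y)
        b -= lr * grad_b / len(y)
        a -= lr * grad_a / len(y)
    return W, b, a

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
y = np.tanh(X[:, 0] - X[:, 1])
W, b, a = gradient_train(X, y, rng=rng)
final_err = np.mean((sigmoid(X @ W.T + b) @ a - y) ** 2)
```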

The output of the proposed neural networks is the linear system Ba = d, where B ∈ R^(m×n), a ∈ R^(n×1) and d ∈ R^(m×1). When m = n we use the LU algorithm. Such a linear system can be nondegenerate, that is rank(B) = n, or degenerate, that is rank(B) < n.

For a nondegenerate neural linear system we use the least-squares QR algorithm to find a_opt.

However, for a degenerate neural linear system we use the singular value decomposition (SVD) algorithm to find a_opt.
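The three cases can be illustrated with standard linear-algebra routines; a minimal sketch assuming NumPy (the matrices here are random placeholders, not an actual neural system):

```python
import numpy as np

rng = np.random.default_rng(4)

# Square, full-rank system (m = n): LU factorization via np.linalg.solve.
B_sq = rng.normal(size=(5, 5))
d_sq = rng.normal(size=5)
a_lu = np.linalg.solve(B_sq, d_sq)

# Overdetermined nondegenerate system (m > n, rank(B) = n): least-squares QR.
B_tall = rng.normal(size=(8, 5))
d_tall = rng.normal(size=8)
Q, R = np.linalg.qr(B_tall)
a_qr = np.linalg.solve(R, Q.T @ d_tall)      # solves R a = Q^T d

# Degenerate system (rank(B) < n): SVD-based pseudoinverse gives the
# minimum-norm least-squares solution.
B_deg = B_tall.copy()
B_deg[:, 4] = B_deg[:, 3]                    # duplicate column -> rank 4
a_svd = np.linalg.pinv(B_deg) @ d_tall
```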

Also, we prove a theorem that relates the condition number of the above neural system to the number of neurons in the hidden layer of the neural network, which is the number of basis functions in the expansion f ≈ Σ_{i=1}^{n} a_i φ_i, where the a_i are unknown coefficients and the φ_i are the basis functions.
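The qualitative phenomenon behind such a relation can be illustrated numerically: as the number of hidden neurons (basis functions) grows, the columns of B typically become more nearly dependent, so the condition number of the neural system grows. This sketch is illustrative only and does not reproduce the thesis's theorem:

```python
import numpy as np

def rbf_design_matrix(x, centers, width=0.3):
    """B[j, i] = phi(|x_j - c_i|) with Gaussian phi: the matrix of the
    neural linear system B a = d for a one-dimensional RBF network."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

x = np.linspace(0.0, 1.0, 50)          # fixed sample points
conds = {}
for n in (5, 10, 20):                  # n = neurons in the hidden layer
    centers = np.linspace(0.0, 1.0, n)
    B = rbf_design_matrix(x, centers)
    conds[n] = np.linalg.cond(B)
# cond(B) grows rapidly with n: more basis functions means more nearly
# dependent columns, hence a more ill-conditioned linear system.
```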

From the above we can decide how many neurons in the hidden layer, i.e. basis functions, are sufficient to produce an accurate approximation to f ∈ C(R^s).

To make this research work accessible to the reader, we have included or modified some known results from linear algebra and functional analysis, giving detailed proofs in some cases and supplying proofs for results stated in the literature without proof, while avoiding very long proofs.

Finally, we believe this thesis shows that much research work can still be done in this new branch.

For example, developing a robust and reliable neural network method for approximating real-valued multidimensional functions f ∈ C(R^s).

Main Subjects

Mathematics

No. of Pages

210

Table of Contents

Table of contents.

Abstract.

Abstract in Arabic.

Introduction.

Chapter One : Artificial neural networks.

Chapter Two : Approximation of multidimensional functions using ridge functions and radial basis functions.

Chapter Three : The training algorithms for neural networks.

Chapter Four : Linear algebra and neural networks.

Chapter Five : Approximation of multidimensional functions using radon transform and neural networks.

References.

American Psychological Association (APA)

al-Khafaji, Najla Muhammad Husayn. (2006). On approximation of multidimensional functions by using feed forward neural networks. (Doctoral dissertation). University of Baghdad, Iraq.
https://search.emarefa.net/detail/BIM-603045

Modern Language Association (MLA)

al-Khafaji, Najla Muhammad Husayn. On approximation of multidimensional functions by using feed forward neural networks. (Doctoral dissertation). University of Baghdad. (2006).
https://search.emarefa.net/detail/BIM-603045

American Medical Association (AMA)

al-Khafaji, Najla Muhammad Husayn. (2006). On approximation of multidimensional functions by using feed forward neural networks. (Doctoral dissertation). University of Baghdad, Iraq.
https://search.emarefa.net/detail/BIM-603045

Language

English

Data Type

Arab Theses

Record ID

BIM-603045