Trofimov V.V.
Russia, St. Petersburg, St. Petersburg State University, tvv@mail.axon.ru
Gluhov A.O.
Belarus, Novopolotsk, Polotsk State University, agld@psu.unibel.by
SIMULATION AND APPROXIMATION ON MULTIAGENT STRUCTURES
Abstract: This article presents the results of research into multiagent, self-developing structures implemented on the basis of an object-oriented, distributed algorithm. They demonstrate high efficiency in simulation and approximation, and the efficiency of the proposed algorithms is much higher than that of neural network structures. The algorithms are simple to implement, since they do not require a large number of elements.
The fundamental property of neural networks is the ability to modify their own structure. This property is successfully used for simulation and approximation.
The basic elements of such networks are neurons. A formal neuron consists of the following elements: an inhomogeneous adaptive adder, a nonlinear converter, a branch point, and linear connections (synapses). All problem-solving algorithms that use neural structures include a preliminary learning stage, whose aim is to tune the connections between neurons. The best-known networks are those trained by back-propagation of error; in such networks learning is described as the optimisation of a functional. This approach gives a universal method for building neural networks.
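The formal neuron described above can be sketched in a few lines. This is a hedged illustration, not the DELTA implementation: the adder computes a bias plus a weighted sum, and the hyperbolic tangent (the activation function used in the experiments below) serves as the nonlinear converter.

```python
import math

def formal_neuron(inputs, weights, bias):
    """A formal neuron: an adaptive adder followed by a nonlinear converter.

    Illustrative sketch: the inhomogeneous adder computes
    bias + sum(w_i * x_i), and tanh plays the role of the
    nonlinear converter.
    """
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return math.tanh(s)
```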
The object-oriented formalism allows neural, dynamically developing structures to be described successfully as systems of objects (agents), because agent structures have the following properties:
1. They have definite behaviour.
2. They increase efficiency by using parallel calculations, competing search over objects, and learning of objects during the search.
3. Many programming languages simplify the development and implementation of these algorithms.
A multiagent structure S can be described as the union of a set of data types T, an alphabet of events X, a set of object identifiers I, a set of existing classes (object models) C, and a set of existing objects O: S = (T, X, I, C, O).
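The tuple S = (T, X, I, C, O) can be rendered directly as a data structure. The class and field names below are hypothetical, chosen only to mirror the five components of the definition.

```python
from dataclasses import dataclass

# A minimal sketch of the multiagent structure S = (T, X, I, C, O).
# All names are illustrative; the paper does not fix a representation.
@dataclass
class MultiagentStructure:
    types: set        # T - set of data types
    events: set       # X - alphabet of events
    identifiers: set  # I - set of object identifiers
    classes: dict     # C - existing classes (object models)
    objects: dict     # O - existing objects (agents), keyed by identifier

S = MultiagentStructure(
    types={"float"},
    events={"calc", "fback", "fsetup", "ftype"},
    identifiers={"o1", "o2"},
    classes={"Adder": object},
    objects={},
)
```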
We propose the following learning algorithm for a multiagent approximation structure:
1. Run Calc() and obtain the value of the approximation-quality functional for all agents.
2. Select the current output and the useful structure according to the minimum value of the approximation-quality functional.
3. For the current output ocout, execute the error back-propagation procedure ocout.fback(y - xcout).
4. Find the agent oerr with the greatest error (desired value yerr), or one of the agents for which the error is "essential" (the agent may be chosen at random).
5. Minimise the error in the current structure with the procedure oerr.fsetup(yerr).
6. Repeat the calculation Calc(), and then the procedure ocout.fback(y - xcout).
7. Select the agent oerr, define its desired value yw, and select ow, the agent calling oerr.
8. Try to minimise the chosen error with the procedure oi.ftype(yw) for all agents oi ∈ O* that were not included in the useful structure.
9. If some agent onew reduces the error when included, switch the connection of the agent ow from oerr to onew.
10. Check that no cycle appears after such switching, using oi.ftype(yw).
11. Repeat from step 1 until the stopping condition is satisfied. The stopping condition can be a threshold on the change of approximation quality or a limit on the number of iterations.
This algorithm was implemented and tested on various structures and on various learning samples with known associations. All calculations were duplicated by a solution using a neural network algorithm. The following operations, satisfying the convergence condition, were taken as the set of base transformations (operations):
fw(x) = a*x
add(x1, x2) = x1 + a*x2
mul(x1, x2) = a*x1*x2
pow(x1, x2) = x1^x2
exp(x) = exp(a*x)
ln(x) = log(a*x)
powa(x) = x^a
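The base transformation set can be written down directly; in this sketch a is each agent's tunable coefficient, and the names follow the list above (with trailing underscores where Python reserves the word).

```python
import math

# The seven base transformations; `a` is the agent's tunable coefficient.
def fw(x, a):        return a * x            # scaling
def add(x1, x2, a):  return x1 + a * x2      # addition with scaled second input
def mul(x1, x2, a):  return a * x1 * x2      # scaled product
def pow_(x1, x2):    return x1 ** x2         # power of two inputs
def exp_(x, a):      return math.exp(a * x)  # scaled exponent
def ln(x, a):        return math.log(a * x)  # scaled logarithm
def powa(x, a):      return x ** a           # power with tunable exponent
```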
At the beginning, the agent network was brought to an initial state that allowed the first calculation to run without errors. When the learning procedure starts, a procedure checking the correctness of the agent network also runs. Two multiagent structures were taken for the research: structure A, consisting of nine agents, and structure B, consisting of three agents (fig. 1).
The learning samples were formed automatically according to known analytical associations connecting the output and the inputs. All experiments were conducted with the following functions of three variables:
1. y = x1 + x2 + x3
2. y = x1 + x2*x3
3. y = x1*x2*x3
4. y = 10*x1*x2*x3
5. y = sqrt(x1^2 + x2^2 + x3^2)
6. y = x1 + sqrt(x2^2 + x3^2)
7. y = x1 + x2 + sqrt(x3)
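Automatic sample formation from a known association can be sketched as below, taking function 5 as the example. The generator, its name, and the uniform [0, 1] input range are assumptions for illustration; the paper does not specify them.

```python
import math
import random

# Hypothetical generator of learning samples for function 5 above:
# y = sqrt(x1^2 + x2^2 + x3^2), inputs drawn uniformly from [0, 1].
def make_samples(n, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        x = [rng.random() for _ in range(3)]
        y = math.sqrt(sum(v * v for v in x))
        samples.append((x, y))
    return samples
```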
As a basis for comparison we used simulation results obtained with the DELTA program (from the book by Yoh-Han Pao, "Adaptive Pattern Recognition and Neural Networks", Appendix A, program from example C). The neural network was trained by the error back-propagation algorithm; the neuron activation function was the hyperbolic tangent. The network consists of three hidden layers with up to five neurons in each. The learning results for the multiagent network are shown in fig. 2 and in table 1.
The quality of approximations obtained by the algorithm we propose was much higher than that of the neural network (10 times, and in some cases 100 times).
Table 1. Efficiency of the learning process for approximation on multiagent and neural network structures.

No.  Structure A           Structure B           Neural network
     Quality  Iterations   Quality  Iterations   Quality    Iterations
1    0.0      16           0.0      14           354.3      100
2    0.0      100          0.3      100          34.52      100
3    0.0      12           0.0      13           113.4      100
4    0.0      12           0.0      17           2894548.3  100
5    0.1      100          0.4      100          48.5       100
6    3.2      100          1302.3   100          139.6      100
7    4.1      100          17.5     100          7082.2     100
Table 1 shows that the quality of approximation depends on the learning time and on the number of agents in the network. Approximation requires only part of the network structure to work, and redundancy of agents improves quality. However, the more agents there are, the more time learning requires.
Another important factor raising the quality of approximation is the set of base transformations (operations). The influence of this factor was measured for various sets of operations, and the results show stable differences. The quality of a neural network is comparable with the quality of a multiagent structure only for the base set consisting of the addition and scaling transformations, add(x1, x2) and fw(x); this holds despite the large differences in the learning algorithms.
Noise in the learning sample influences the learning process. A large noise amplitude makes the search significantly more difficult; a small noise amplitude (10% of the level of the inputs and output) did not influence learning.
Conclusions
Problems of approximation are solved by multiagent, self-developing structures much better than by neural networks.
These structures do not require a large number of elements and are simple to implement.
The obtained results show the high efficiency of the proposed object-oriented, distributed algorithm.
The increase in efficiency (in comparison with neural networks) is a result of: