Journal of Systems Engineering and Electronics, 2010, Vol. 32, Issue 2: 386-391.


Training algorithm for neural networks based on distributed parallel calculation

ZHANG Dai-yuan1,2   

  1. Coll. of Computer, Nanjing Univ. of Posts and Telecommunications, Nanjing 210003, China;
  2. Inst. of Computer Technology, Nanjing Univ. of Posts and Telecommunications, Nanjing 210003, China

  Online: 2010-02-03    Published: 2010-01-03

Abstract:

To improve computing performance (speed and scalability), an innovative parallel computation architecture and a training algorithm for neural networks are proposed. Each weight function is a composite of a generalized Chebyshev polynomial and a linear function, so only algebraic calculation is required; no steepest-descent-like iteration or matrix computation is involved. Because the weight functions are determined independently of one another, they can be computed by a parallel algorithm on a parallel system. The algorithm finds the global minimum, and a useful expression is obtained for the approximation error of the networks. The algorithm is scalable in the sense that the speedup remains linearly proportional to the number of processors available, up to the total number of weight functions. The results show that the computing performance of the proposed method is much better than that of traditional methods.
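As a rough illustration of the parallel structure the abstract describes, and not the paper's actual algorithm: below is a minimal Python sketch in which each weight function is fitted independently by a purely algebraic Chebyshev least-squares fit (standing in for the paper's generalized Chebyshev construction) and the independent fits are distributed across processors. All names (fit_weight_function, the sine targets, the degree) are illustrative assumptions.

```python
# Sketch only: independent, algebraic fits distributed across workers.
# This uses an ordinary Chebyshev least-squares fit as a stand-in for
# the paper's generalized Chebyshev polynomial composed with a linear
# function; the point is that each weight function is found without
# gradient descent and independently of the others.
from multiprocessing import Pool

import numpy as np
from numpy.polynomial import chebyshev as C

def fit_weight_function(args):
    """Fit one weight function by Chebyshev least squares (algebraic only)."""
    x, y, degree = args
    # chebfit solves a linear least-squares problem; no iterative
    # steepest-descent-like training is involved.
    return C.chebfit(x, y, degree)

if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 200)
    # Hypothetical per-weight-function targets; in the paper these would
    # be derived from the network's training data.
    targets = [np.sin((k + 1) * np.pi * x) for k in range(8)]
    tasks = [(x, y, 10) for y in targets]
    # The fits are independent, so speedup scales with the number of
    # workers, up to the total number of weight functions.
    with Pool() as pool:
        coeffs = pool.map(fit_weight_function, tasks)
    for k, c in enumerate(coeffs):
        err = np.max(np.abs(C.chebval(x, c) - targets[k]))
        print(f"weight function {k}: max fit error {err:.2e}")
```

Because the per-function fits share no state, the same structure maps directly onto a distributed system: each processor receives its subset of weight functions and returns coefficients, which mirrors the linear-speedup claim in the abstract.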
