Journal of Systems Engineering and Electronics, 2010, Vol. 32, Issue 2: 386-391.

• Software, Algorithms and Simulation •


Training algorithm for neural networks based on distributed parallel calculation

ZHANG Dai-yuan 1,2

  1. Coll. of Computer, Nanjing Univ. of Posts and Telecommunications, Nanjing 210003, China;
  2. Inst. of Computer Technology, Nanjing Univ. of Posts and Telecommunications, Nanjing 210003, China
  • Online: 2010-02-03  Published: 2010-01-03


Abstract:

To improve computing performance (speed and scalability), a novel parallel computation architecture and a training algorithm for neural networks are proposed. Each weight function is a composite of a generalized Chebyshev polynomial and a linear function, and it is obtained by purely algebraic calculation: no steepest-descent-like iteration or matrix computation is required. Because the weight functions are determined independently of one another, they can be computed by a parallel algorithm on a parallel system. The algorithm finds the global minimum, and a useful expression is derived for the approximation error of the network. The algorithm is scalable in the sense that the speedup stays linearly proportional to the number of processors available in the parallel system, up to the total number of weight functions. Simulation results show that the computing performance of the proposed algorithm is far better than that of traditional methods.
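A minimal illustrative sketch, not the paper's algorithm: the abstract's key structural claim is that each weight function, a composite of a generalized Chebyshev polynomial and a linear input map, is determined independently of the others, so the training work is embarrassingly parallel up to the number of weight functions. The sketch below assumes Python with numpy and multiprocessing; fit_weight_function, the toy targets, and the least-squares Chebyshev fit (numpy's chebfit) are hypothetical stand-ins, and chebfit in particular uses matrix computation, whereas the paper's method is claimed to be purely algebraic.

```python
# Minimal sketch, NOT the paper's algorithm: it only illustrates why
# independently determined weight functions give speedup that scales
# linearly with processor count, up to the number of weight functions.
import numpy as np
from numpy.polynomial import chebyshev as C
from multiprocessing import Pool

def fit_weight_function(task):
    """Determine one weight function independently of all the others.

    Composite structure assumed from the abstract: a linear map of the
    input onto [-1, 1], followed by a Chebyshev expansion. chebfit is a
    least-squares stand-in for the paper's algebraic solution.
    """
    x, y, degree = task
    t = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0  # linear function
    return C.chebfit(t, y, degree)                       # Chebyshev coefficients

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 64)
    # Toy targets, one per weight function (hypothetical data).
    tasks = [(x, np.sin(2.0 * np.pi * k * x), 10) for k in range(1, 9)]
    # No task depends on another, so a process pool distributes them with
    # no synchronization; speedup is linear until processors > len(tasks).
    with Pool() as pool:
        coeff_sets = pool.map(fit_weight_function, tasks)
    print(f"{len(coeff_sets)} weight functions computed in parallel")
```

On this reading, the scalability limit stated in the abstract falls out directly: once the pool has more processors than there are weight functions, the extra processors have no independent work left to take.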