Systems Engineering and Electronics ›› 2019, Vol. 41 ›› Issue (5): 1021-1027.doi: 10.3969/j.issn.1001-506X.2019.05.13


Distributed deep networks based on Bagging-Down SGD algorithm

QIN Chao1, GAO Xiaoguang1, CHEN Daqing2   

  1. School of Electronics and Information, Northwestern Polytechnical University,
    Xi’an 710100, China; 2. London South Bank University, London SE1 0AA, England
  • Online: 2019-04-30  Published: 2019-04-28

Abstract:

As a cutting-edge disruptive technology, deep learning and unsupervised learning have attracted significant research attention, and it is widely acknowledged that training on big data with a distributed deep learning algorithm can learn better structures. However, traditional distributed deep learning algorithms suffer from two main problems: slow training speed and low training accuracy. The Bootstrap aggregating-down stochastic gradient descent (Bagging-Down SGD) algorithm is proposed mainly to address the speed problem. A speed controller is added to update the parameters of each single machine statistically, and model training is decoupled from parameter updating, which improves the training speed while maintaining the same accuracy. Experiments show that the algorithm generalizes to learning the structures of different kinds of data.
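The abstract describes two ingredients: bootstrap-resampled (bagging-style) gradient computation across workers, and a speed controller that decouples local training from global parameter updates. The sketch below illustrates that idea on a toy linear-regression problem; the function names, the `update_every` controller, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def make_data(n=200, d=5, seed=0):
    # Synthetic linear-regression data for the demonstration.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y, w_true

def worker_gradient(w, X, y, rng, batch=32):
    # Bootstrap a mini-batch (sampling with replacement) -- the "Bagging" part.
    idx = rng.integers(0, len(y), size=batch)
    Xb, yb = X[idx], y[idx]
    # Gradient of the mean-squared error of a linear model.
    return 2.0 * Xb.T @ (Xb @ w - yb) / batch

def bagging_down_sgd(X, y, n_workers=4, steps=300, lr=0.05,
                     update_every=5, seed=1):
    # `update_every` plays the role of the speed controller: workers keep
    # accumulating gradients locally, and the shared parameters are only
    # refreshed periodically, decoupling training from parameter updating.
    # (This fixed-interval rule is an assumed simplification.)
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    buffer = np.zeros_like(w)
    for t in range(steps):
        # Each worker contributes one gradient from its bootstrap batch.
        grads = [worker_gradient(w, X, y, rng) for _ in range(n_workers)]
        buffer += np.mean(grads, axis=0)
        if (t + 1) % update_every == 0:
            w -= lr * buffer   # apply the aggregated update
            buffer[:] = 0.0    # reset the accumulator
    return w

X, y, w_true = make_data()
w_hat = bagging_down_sgd(X, y)
print(np.linalg.norm(w_hat - w_true))
```

In a real distributed setting the per-worker gradient computations would run in parallel on separate machines, and only the periodic aggregated update would cross the network, which is where the claimed speed gain comes from.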

Key words: deep network, distributed, Bootstrap aggregating-down stochastic gradient descent (Bagging-Down SGD), speed controller
