Systems Engineering and Electronics ›› 2019, Vol. 41 ›› Issue (7): 1652-1657.doi: 10.3969/j.issn.1001-506X.2019.07.29


eNB selection for LTE-V using deep reinforcement learning

XIE Hao1, GUO Aihuang1,2, SONG Chunlin1, JIAO Runze1   

  1. School of Electronics and Information Engineering, Tongji University, Shanghai 201804, China;
    2. State Key Laboratory of Millimeter Waves, Southeast University, Nanjing 210092, China
  • Online:2019-06-28 Published:2019-07-09

Abstract: The resource allocation scheme for long term evolution-vehicle (LTE-V) is based on random selection, which easily causes serious network congestion. Based on deep reinforcement learning (DRL), an optimal access evolved node B (eNB) selection algorithm for vehicle-type communication in the LTE-V network is proposed. To reduce both the blocking probability and the communication delay of the LTE-V network, the mobility management entity (MME) is used as the agent, and both the receiving rate at the user side and the load at the network side are taken into consideration. Meanwhile, a dueling double deep Q-network (D-DDQN) is adopted to fit the target action-value function (AVF); the D-DDQN maps high-dimensional state inputs to low-dimensional action outputs. Simulations show that after the DQN's parameters converge, the blocking probability of the LTE-V network is reduced significantly and the performance of the entire network is greatly improved.
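The two D-DDQN ideas the abstract names can be sketched briefly. The following is a minimal illustration, not the paper's implementation: it uses a linear function approximator in place of a deep network, and all names (`dueling_q`, `Wv`, `Wa`, the state layout) are assumptions for illustration. It shows the dueling aggregation Q(s,a) = V(s) + A(s,a) - mean_a A(s,a), and the double-DQN target, where the online network selects the next action and the target network evaluates it.

```python
import numpy as np

def dueling_q(state, Wv, Wa):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).

    state: feature vector (e.g. user-side receiving rate and per-eNB load),
    Wv:    weights of the state-value stream V(s),
    Wa:    weights of the advantage stream A(s,a), one column per candidate eNB.
    Returns one Q-value per candidate eNB.
    """
    v = state @ Wv               # shape (1,): scalar state value
    a = state @ Wa               # shape (n_enb,): per-eNB advantages
    return v + a - a.mean()      # mean-centering makes V and A identifiable

def double_dqn_target(reward, next_state, gamma, online, target):
    """Double-DQN target: the online net picks the next eNB, the target
    net evaluates it, reducing the overestimation bias of plain DQN."""
    Wv_on, Wa_on = online
    Wv_tg, Wa_tg = target
    a_star = np.argmax(dueling_q(next_state, Wv_on, Wa_on))
    return reward + gamma * dueling_q(next_state, Wv_tg, Wa_tg)[a_star]
```

In a training loop the MME (acting as the agent) would pick an eNB via an epsilon-greedy rule over `dueling_q`, and regress the online weights toward `double_dqn_target`; those steps are omitted here.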

Key words: long term evolution-vehicle (LTE-V), deep reinforcement learning (DRL), evolved node B (eNB) selection, network blocking probability, load balance
