Systems Engineering and Electronics ›› 2023, Vol. 45 ›› Issue (4): 1200-1206. doi: 10.12305/j.issn.1001-506X.2023.04.29

• Communications and Networks •

A signal modulation identification algorithm based on self-supervised contrastive learning

Yang CHEN1,2,*, Canhui LIAO2, Kun ZHANG2, Jian LIU2, Pengju WANG2

  1. Strategic Support Force Information Engineering University, Zhengzhou 450001, China
    2. National Key Laboratory of Science and Technology on Blind Signal Processing, Unit 32076 of the PLA, Chengdu 610041, China
  • Received: 2021-11-10; Online: 2023-03-29; Published: 2023-03-28
  • Corresponding author: Yang CHEN
  • About the authors: CHEN Yang (1992—), male, master's student; main research interest: intelligent signal processing
    LIAO Canhui (1982—), male, associate research fellow, Ph.D.; main research interests: deep learning, blind signal processing
    ZHANG Kun (1990—), male, assistant research fellow, M.S.; main research interests: deep learning, blind signal processing
    LIU Jian (1987—), male, assistant research fellow, M.S.; main research interests: deep learning, blind signal processing
    WANG Pengju (1993—), male, assistant research fellow, M.S.; main research interests: deep learning, blind signal processing

A signal modulation identification algorithm based on self-supervised contrastive learning

Yang CHEN1,2,*, Canhui LIAO2, Kun ZHANG2, Jian LIU2, Pengju WANG2   

  1. Information Engineering University, Zhengzhou 450001, China
    2. National Key Laboratory of Science and Technology on Blind Signal Processing, Unit 32076 of the PLA, Chengdu 610041, China
  • Received:2021-11-10 Online:2023-03-29 Published:2023-03-28
  • Contact: Yang CHEN

Abstract:

In recent years, signal modulation identification based on deep learning has developed rapidly, but most solutions rely on supervised learning and therefore require large numbers of labeled samples. Since analyzing and annotating signal data samples is difficult and costly, a learning method is proposed that pre-trains a model on a large number of unlabeled samples via self-supervised contrastive learning, and then trains a modulation identification network on the pre-trained feature extractor using only a small number of labeled samples, greatly reducing the amount of labeled data required for training. Experiments on RadioML2018.01A show that the proposed method needs only 1% of the labeled data to match the performance of a supervised model trained on the full dataset, and that even with labeled data reduced to 0.1%, identification accuracy on 24 modulation types still exceeds 93% at signal-to-noise ratios of 8 dB and above.

Key words: deep learning, modulation identification, self-supervised contrastive learning, model pre-training

Abstract:

Signal modulation identification based on deep learning has made significant progress in recent years, but most solutions rely on supervised learning methods that require a large number of labeled samples, and labeling signal data samples is difficult and costly. Therefore, a semi-supervised learning method is proposed: a feature extraction network is pre-trained by self-supervised contrastive learning on a large number of unlabeled samples, and a modulation identification network built on this pre-trained feature extractor is then trained to convergence with only a small number of labeled samples, significantly reducing the dependence on labeled data. Experiments on RadioML2018.01A show that the proposed algorithm, using only 1% of the labeled samples, achieves nearly the same identification performance as a supervised learning algorithm trained on all of the labeled samples. Moreover, when trained on only 0.1% of the labeled samples, the model still achieves an accuracy above 93% on 24 modulation types when the signal-to-noise ratio is 8 dB or higher.
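The pre-training stage the abstract describes rests on a contrastive objective computed over augmented views of unlabeled signals. As a minimal illustration, the sketch below implements a SimCLR-style NT-Xent contrastive loss in NumPy, with a simple phase-rotation-plus-noise augmentation for complex I/Q samples. The function names (`augment`, `nt_xent_loss`), the temperature `tau`, and the choice of augmentations are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def augment(iq, rng):
    """Produce one label-preserving view of a complex I/Q signal:
    a random carrier-phase rotation plus light Gaussian noise
    (illustrative augmentations, not necessarily the paper's)."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    rotated = iq * np.exp(1j * theta)
    noise = rng.standard_normal(iq.shape) + 1j * rng.standard_normal(iq.shape)
    return rotated + 0.01 * noise

def nt_xent_loss(z1, z2, tau=0.5):
    """NT-Xent (normalized temperature-scaled cross entropy), the
    standard contrastive objective: embeddings of the two views of
    the same sample are pulled together, while all other pairs in
    the batch are pushed apart.

    z1, z2: (batch, dim) embeddings of two augmented views."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # Positive for row i is its counterpart view in the other half.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), targets]))
```

In the paper's setting, `z1` and `z2` would be embeddings produced by a deep encoder from two augmented views of each unlabeled frame; after pre-training, that encoder is reused and a small classifier head is trained on the few labeled samples.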

Key words: deep learning, modulation identification, self-supervised contrastive learning, model pre-training

CLC number: