Must-Read Papers, Courses, and Terminology for Natural Language Processing (NLP)
This post collects side-by-side full translations of must-read NLP papers and blog posts explaining related terminology, compiled during my studies.
Continuously being updated… ⏰
Table of Contents

- NLP Index
- Part I: Must-Read Papers
  1. Seq2Seq
  2. Encoder-Decoder
  3. Attention
  4. Transformer
  5. BERT
  6. GPT
  7. Efficient Transformer
- Part II: Related Terminology
  0. RNN
  1. Seq2Seq
  2. Embedding
  3. Word2Vec
  4. Markov Chain
  5. Beam Search
  6. Attention Mechanisms
  7. Transformer

NLP Index

https://index.quantumstat.com/
Part I: Must-Read Papers

1. Seq2Seq
- Sequence to Sequence Learning with Neural Networks

2. Encoder-Decoder
- Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

3. Attention
- Neural Machine Translation by Jointly Learning to Align and Translate
- Effective Approaches to Attention-based Neural Machine Translation

4. Transformer
- Attention Is All You Need

5. BERT
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
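The attention and Transformer papers listed above all build on scaled dot-product attention. As a quick companion to those readings, here is a minimal NumPy sketch of that operation; the function name and toy shapes are illustrative, not taken from any paper's official code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, as described in "Attention Is All You Need".

    Q, K: (seq_len, d_k); V: (seq_len, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of value rows

# Toy self-attention example: 3 tokens, d_k = d_v = 4, so Q = K = V
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4)
```

Each output row is a convex combination of the value rows, weighted by how similar the corresponding query is to every key.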
6. GPT
- GPT v1: Improving Language Understanding by Generative Pre-Training
- GPT v2: Language Models are Unsupervised Multitask Learners
- GPT v3: Language Models are Few-Shot Learners
7. Efficient Transformer
- GLU Variants Improve Transformer
- Efficient Transformers: A Survey

Part II: Related Terminology

0. RNN
- How LSTMs Work?
- Illustrated Guide to RNN, LSTM, and GRU

1. Seq2Seq
- A Brief Introduction to the Seq2Seq Model

2. Embedding
- Neural Network Embeddings Explained

3. Word2Vec
- An Introduction to Word2Vec
- The Illustrated Word2Vec

4. Markov Chain
- An Introduction to Markov Chains and Hidden Markov Models
- An Introduction to Hidden Markov Models (HMM)

5. Beam Search
- An Intuitive Explanation of the Beam Search Algorithm

6. Attention Mechanisms
- Understanding Attention Mechanisms
- An Intuitive Understanding of Attention Mechanisms
- An Overview of Attention Mechanisms
- Illustrated Attention (in Chinese)
- Illustrated Attention

7. Transformer
- The Illustrated Transformer (in Chinese)
- How Transformers Work
- Illustrated Self-Attention
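Of the terms in Part II, beam search lends itself to a compact illustration: instead of greedily taking the single best next token, it keeps the top-k highest-scoring partial sequences at each step. A minimal sketch, assuming a toy next-token distribution (`toy_step` and all names here are hypothetical, not from any of the linked posts):

```python
import math

def beam_search(start, step_fn, beam_width, max_len):
    """Generic beam search over partial sequences.

    step_fn(seq) -> list of (token, prob) continuations for a partial sequence.
    Sequences are scored by summed log-probabilities.
    """
    beams = [([start], 0.0)]  # (sequence, log-prob score)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, prob in step_fn(seq):
                candidates.append((seq + [tok], score + math.log(prob)))
        # prune: keep only the `beam_width` best-scoring candidates
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams

# Toy "language model": the same next-token distribution at every step.
def toy_step(seq):
    return [("a", 0.5), ("b", 0.3), ("c", 0.2)]

best = beam_search("<s>", toy_step, beam_width=2, max_len=3)
print(best[0][0])  # highest-scoring sequence: ['<s>', 'a', 'a', 'a']
```

With `beam_width=1` this degenerates to greedy decoding; wider beams trade compute for a better chance of finding high-probability sequences that a greedy first step would miss.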