This post reviews 197 classic SOTA models from popular research areas such as computer vision and natural language processing, covering 13 subfields including image classification, image generation, text classification, reinforcement learning, object detection, recommender systems, and speech recognition. Bookmark it and work through it at your own pace; your next top-conference idea may well start here.
Since there are quite a few models, each one only gets a brief mention; see the end of the post for the full papers and project source code.
I. Image Classification (15 SOTA models)
1. Model: AlexNet
Paper: ImageNet Classification with Deep Convolutional Neural Networks
2. Model: VGG
Paper: Very Deep Convolutional Networks for Large-Scale Image Recognition
3. Model: GoogLeNet
Paper: Going Deeper with Convolutions
4. Model: ResNet
Paper: Deep Residual Learning for Image Recognition
5. Model: ResNeXt
Paper: Aggregated Residual Transformations for Deep Neural Networks
6. Model: DenseNet
Paper: Densely Connected Convolutional Networks
7. Model: MobileNet
Paper: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
8. Model: SENet
Paper: Squeeze-and-Excitation Networks
9. Model: DPN
Paper: Dual Path Networks
10. Model: IGC V1
Paper: Interleaved Group Convolutions for Deep Neural Networks
11. Model: Residual Attention Network
Paper: Residual Attention Network for Image Classification
12. Model: ShuffleNet
Paper: ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
13. Model: MnasNet
Paper: MnasNet: Platform-Aware Neural Architecture Search for Mobile
14. Model: EfficientNet
Paper: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
15. Model: NFNet
Paper: High-Performance Large-Scale Image Recognition Without Normalization
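Most of the classifiers above, from ResNet onward, build on the residual connection introduced in "Deep Residual Learning for Image Recognition". A minimal NumPy sketch of the idea, with a toy two-layer linear map standing in for the paper's conv-BN-ReLU stack (all names here are illustrative, not from any of the papers):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = ReLU(x + F(x)), where F is a small two-layer transform.

    The identity shortcut lets gradients flow around F, which is what
    makes very deep networks (ResNet, ResNeXt, DenseNet, ...) trainable.
    """
    f = relu(x @ w1) @ w2   # F(x): toy stand-in for conv-BN-ReLU-conv
    return relu(x + f)      # identity shortcut + learned residual

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
```

Note that with F zeroed out the block reduces to the identity (up to the ReLU), which is exactly why adding more residual blocks cannot easily hurt.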
II. Text Classification (12 SOTA models)
1. Model: RAE
Paper: Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions
2. Model: DAN
Paper: Deep Unordered Composition Rivals Syntactic Methods for Text Classification
3. Model: TextRCNN
Paper: Recurrent Convolutional Neural Networks for Text Classification
4. Model: Multi-task
Paper: Recurrent Neural Network for Text Classification with Multi-Task Learning
5. Model: DeepMoji
Paper: Using Millions of Emoji Occurrences to Learn Any-Domain Representations for Detecting Sentiment, Emotion and Sarcasm
6. Model: RNN-Capsule
Paper: Investigating Capsule Networks with Dynamic Routing for Text Classification
7. Model: TextCNN
Paper: Convolutional Neural Networks for Sentence Classification
8. Model: DCNN
Paper: A Convolutional Neural Network for Modelling Sentences
9. Model: XML-CNN
Paper: Deep Learning for Extreme Multi-Label Text Classification
10. Model: TextCapsule
Paper: Investigating Capsule Networks with Dynamic Routing for Text Classification
11. Model: Bao et al.
Paper: Few-shot Text Classification with Distributional Signatures
12. Model: AttentionXML
Paper: AttentionXML: Label Tree-based Attention-Aware Deep Model for High-Performance Extreme Multi-Label Text Classification
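TextCNN (model 7 above) is worth knowing as the simplest strong baseline in this list: slide narrow convolution filters over the word embeddings and keep only the maximum response of each filter. A hedged NumPy sketch of just that feature extractor (the embeddings and kernels here are random toy values, not from the paper):

```python
import numpy as np

def textcnn_features(embeds, filters):
    """Max-over-time pooling of 1-D convolutions over word embeddings.

    embeds:  (seq_len, emb_dim) embedded sentence
    filters: (n_filters, window, emb_dim) convolution kernels
    returns: (n_filters,) feature vector fed to a softmax classifier
    """
    n_filters, window, _ = filters.shape
    seq_len = embeds.shape[0]
    feats = np.empty(n_filters)
    for k in range(n_filters):
        # slide the window over the sentence, one dot product per position
        scores = [np.sum(embeds[i:i + window] * filters[k])
                  for i in range(seq_len - window + 1)]
        feats[k] = max(scores)   # max-over-time pooling
    return feats

rng = np.random.default_rng(1)
sentence = rng.standard_normal((10, 4))   # 10 words, 4-dim embeddings
kernels = rng.standard_normal((3, 2, 4))  # 3 filters with window size 2
features = textcnn_features(sentence, kernels)
```

The max-over-time pooling makes the feature vector independent of sentence length, which is what lets one linear classifier sit on top.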
III. Text Summarization (17 SOTA models)
1. Model: CopyNet
Paper: Incorporating Copying Mechanism in Sequence-to-Sequence Learning
2. Model: SummaRuNNer
Paper: SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents
3. Model: SeqGAN
Paper: SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
4. Model: Latent Extractive
Paper: Neural Latent Extractive Document Summarization
5. Model: NEUSUM
Paper: Neural Document Summarization by Jointly Learning to Score and Select Sentences
6. Model: BERTSUM
Paper: Text Summarization with Pretrained Encoders
7. Model: BRIO
Paper: BRIO: Bringing Order to Abstractive Summarization
8. Model: NAM
Paper: A Neural Attention Model for Abstractive Sentence Summarization
9. Model: RAS
Paper: Abstractive Sentence Summarization with Attentive Recurrent Neural Networks
10. Model: PGN
Paper: Get To The Point: Summarization with Pointer-Generator Networks
11. Model: Re3Sum
Paper: Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization
12. Model: MTLSum
Paper: Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation
13. Model: KGSum
Paper: Mind The Facts: Knowledge-Boosted Coherent Abstractive Text Summarization
14. Model: PEGASUS
Paper: PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization
15. Model: FASum
Paper: Enhancing Factual Consistency of Abstractive Summarization
16. Model: rnn-ext + abs + RL + rerank
Paper: Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting
17. Model: BottleSum
Paper: BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle
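The pointer-generator network (PGN, model 10 above) combines generation and copying: the final output distribution is a mixture p_gen · P_vocab + (1 − p_gen) · copy distribution, where the copy distribution scatters the attention weights onto the source tokens' vocabulary ids. A minimal sketch of just that mixing step, with made-up toy values for the vocabulary, attention, and p_gen:

```python
import numpy as np

def final_distribution(p_gen, vocab_dist, attn, src_ids, vocab_size):
    """Mix the generator's vocab distribution with a copy distribution.

    p_gen:      scalar in [0, 1], probability of generating vs. copying
    vocab_dist: (vocab_size,) softmax over the output vocabulary
    attn:       (src_len,) attention weights over source tokens
    src_ids:    (src_len,) vocabulary id of each source token
    """
    copy_dist = np.zeros(vocab_size)
    # scatter-add attention mass onto the ids of the source tokens
    np.add.at(copy_dist, src_ids, attn)
    return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist

vocab = np.array([0.7, 0.1, 0.1, 0.1])   # generator favors token 0
attn = np.array([0.9, 0.1])              # attention favors 1st source word
dist = final_distribution(0.5, vocab, attn, np.array([2, 3]), 4)
```

Because both inputs are probability distributions and the mixing weights sum to one, the output is again a valid distribution; the copy path is what lets PGN emit rare or out-of-vocabulary source words.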
IV. Image Generation (16 SOTA models)
1. Progressive Growing of GANs for Improved Quality, Stability, and Variation
2. A Style-Based Generator Architecture for Generative Adversarial Networks
3. Analyzing and Improving the Image Quality of StyleGAN
4. Alias-Free Generative Adversarial Networks
5. Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images
6. A Contrastive Learning Approach for Training Variational Autoencoder Priors
7. StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets
8. Diffusion-GAN: Training GANs with Diffusion
9. Improved Training of Wasserstein GANs
10. Self-Attention Generative Adversarial Networks
11. Large Scale GAN Training for High Fidelity Natural Image Synthesis
12. CSGAN: Cyclic-Synthesized Generative Adversarial Networks for Image-to-Image Transformation
13. LOGAN: Latent Optimisation for Generative Adversarial Networks
14. A U-Net Based Discriminator for Generative Adversarial Networks
15. Instance-Conditioned GAN
16. Conditional GANs with Auxiliary Discriminative Classifier
V. Video Generation (15 SOTA models)
1. Temporal Generative Adversarial Nets with Singular Value Clipping
2. Generating Videos with Scene Dynamics
3. MoCoGAN: Decomposing Motion and Content for Video Generation
4. Stochastic Video Generation with a Learned Prior
5. Video-to-Video Synthesis
6. Probabilistic Video Generation using Holistic Attribute Control
7. Adversarial Video Generation on Complex Datasets
8. Sliced Wasserstein Generative Models
9. Train Sparsely, Generate Densely: Memory-efficient Unsupervised Training of High-resolution Temporal GAN
10. Latent Neural Differential Equations for Video Generation
11. VideoGPT: Video Generation using VQ-VAE and Transformers
12. Diverse Video Generation using a Gaussian Process Trigger
13. NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion
14. StyleGAN-V: A Continuous Video Generator with the Price, Image Quality and Perks of StyleGAN2
15. Video Diffusion Models
VI. Reinforcement Learning (13 SOTA models)
1. Playing Atari with Deep Reinforcement Learning
2. Deep Reinforcement Learning with Double Q-learning
3. Continuous Control with Deep Reinforcement Learning
4. Asynchronous Methods for Deep Reinforcement Learning
5. Proximal Policy Optimization Algorithms
6. Hindsight Experience Replay
7. Emergence of Locomotion Behaviours in Rich Environments
8. Implicit Quantile Networks for Distributional Reinforcement Learning
9. Imagination-Augmented Agents for Deep Reinforcement Learning
10. Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning
11. Model-Based Value Estimation for Efficient Model-Free Reinforcement Learning
12. Model-Ensemble Trust-Region Policy Optimization
13. Dynamic Horizon Value Estimation for Model-based Reinforcement Learning
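The first two RL papers above fit in one line of math: DQN bootstraps a TD target from the next state, and Double DQN splits action selection (online network) from action evaluation (target network) to reduce overestimation. A hedged sketch of just that target computation, with toy Q-values:

```python
import numpy as np

def double_q_target(reward, next_q_online, next_q_target, gamma, done):
    """TD target used by Double DQN.

    The online network picks the next action, the target network scores it;
    plain DQN would instead take max(next_q_target), which overestimates.
    """
    if done:
        return reward
    best_action = int(np.argmax(next_q_online))         # selection: online net
    return reward + gamma * next_q_target[best_action]  # evaluation: target net

q_online = np.array([1.0, 3.0, 2.0])   # online net's Q-values at s'
q_target = np.array([0.5, 2.0, 4.0])   # target net's Q-values at s'
target = double_q_target(1.0, q_online, q_target, 0.99, done=False)
```

Here the online net picks action 1, so the target uses q_target[1] = 2.0 rather than the target net's own (higher) maximum of 4.0.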
VII. Speech Synthesis (19 SOTA models)
1. TTS Synthesis with Bidirectional LSTM Based Recurrent Neural Networks
2. WaveNet: A Generative Model for Raw Audio
3. SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
4. Char2Wav: End-to-End Speech Synthesis
5. Deep Voice: Real-time Neural Text-to-Speech
6. Parallel WaveNet: Fast High-Fidelity Speech Synthesis
7. Statistical Parametric Speech Synthesis Using Generative Adversarial Networks Under a Multi-task Learning Framework
8. Tacotron: Towards End-to-End Speech Synthesis
9. VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop
10. Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions
11. Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis
12. Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning
13. ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech
14. LPCNet: Improving Neural Speech Synthesis Through Linear Prediction
15. Neural Speech Synthesis with Transformer Network
16. Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search
17. Flow-TTS: A Non-Autoregressive Network for Text to Speech Based on Flow
18. Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
19. PnG BERT: Augmented BERT on Phonemes and Graphemes for Neural TTS
VIII. Machine Translation (18 SOTA models)
1. Neural Machine Translation by Jointly Learning to Align and Translate
2. Multi-task Learning for Multiple Language Translation
3. Effective Approaches to Attention-based Neural Machine Translation
4. A Convolutional Encoder Model for Neural Machine Translation
5. Attention Is All You Need
6. Decoding with Value Networks for Neural Machine Translation
7. Unsupervised Neural Machine Translation
8. Phrase-Based & Neural Unsupervised Machine Translation
9. Addressing the Under-translation Problem from the Entropy Perspective
10. Modeling Coherence for Discourse Neural Machine Translation
11. Cross-lingual Language Model Pretraining
12. MASS: Masked Sequence to Sequence Pre-training for Language Generation
13. FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow
14. Multilingual Denoising Pre-training for Neural Machine Translation
15. Incorporating BERT into Neural Machine Translation
16. Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information
17. Contrastive Learning for Many-to-many Multilingual Neural Machine Translation
18. Universal Conditional Masked Language Pre-training for Neural Machine Translation
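"Attention Is All You Need" (paper 5 above) underpins roughly half of this section and the next two. Its core operation, scaled dot-product attention softmax(QKᵀ/√d_k)V, fits in a few lines of NumPy; this is a single-head sketch without masking or batching:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- the core of the Transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity logits
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # weighted sum of values

rng = np.random.default_rng(2)
Q = rng.standard_normal((3, 4))   # 3 queries
K = rng.standard_normal((5, 4))   # 5 keys
V = rng.standard_normal((5, 4))   # 5 values
out = scaled_dot_product_attention(Q, K, V)
```

The 1/√d_k scaling keeps the logits in a range where the softmax still has useful gradients as the key dimension grows.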
IX. Text Generation (10 SOTA models)
1. Sequence to Sequence Learning with Neural Networks
2. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
3. Neural Machine Translation by Jointly Learning to Align and Translate
4. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
5. Attention Is All You Need
6. Improving Language Understanding by Generative Pre-Training
7. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
8. Cross-lingual Language Model Pretraining
9. Language Models are Unsupervised Multitask Learners
10. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
X. Speech Recognition (12 SOTA models)
1. A Neural Probabilistic Language Model
2. Recurrent Neural Network Based Language Model
3. LSTM Neural Networks for Language Modeling
4. Hybrid Speech Recognition with Deep Bidirectional LSTM
5. Attention Is All You Need
6. Improving Language Understanding by Generative Pre-Training
7. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
8. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
9. LSTM Neural Networks for Language Modeling
10. Feedforward Sequential Memory Networks: A New Structure to Learn Long-term Dependency
11. Convolutional, Long Short-Term Memory, Fully Connected Deep Neural Networks
12. Highway Long Short-Term Memory RNNs for Distant Speech Recognition
XI. Object Detection (16 SOTA models)
1. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation
2. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
3. Fast R-CNN
4. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
5. Training Region-based Object Detectors with Online Hard Example Mining
6. R-FCN: Object Detection via Region-based Fully Convolutional Networks
7. Mask R-CNN
8. You Only Look Once: Unified, Real-Time Object Detection
9. SSD: Single Shot MultiBox Detector
10. Feature Pyramid Networks for Object Detection
11. Focal Loss for Dense Object Detection
12. Accurate Single Stage Detector Using Recurrent Rolling Convolution
13. CornerNet: Detecting Objects as Paired Keypoints
14. M2Det: A Single-Shot Object Detector Based on Multi-Level Feature Pyramid Network
15. FCOS: Fully Convolutional One-Stage Object Detection
16. ObjectBox: From Centers to Boxes for Anchor-Free Object Detection
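One computation ties this whole section together: intersection-over-union, the box-matching criterion shared by the R-CNN family, YOLO, SSD, FCOS, and the rest. A minimal sketch for axis-aligned (x1, y1, x2, y2) boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes.

    Used both to assign ground-truth boxes to predictions during training
    and to decide true/false positives in mAP evaluation.
    """
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # clamp: no overlap -> 0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 2, 2), (1, 1, 3, 3))   # two 2x2 boxes overlapping in a 1x1 patch
```

For the example, the intersection is 1 and the union 4 + 4 − 1 = 7, giving an IoU of 1/7.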
XII. Recommender Systems (18 SOTA models)
1. Learning Deep Structured Semantic Models for Web Search using Clickthrough Data
2. Deep Neural Networks for YouTube Recommendations
3. Self-Attentive Sequential Recommendation
4. Graph Convolutional Neural Networks for Web-Scale Recommender Systems
5. Learning Tree-based Deep Model for Recommender Systems
6. Multi-Interest Network with Dynamic Routing for Recommendation at Tmall
7. PinnerSage: Multi-Modal User Embedding Framework for Recommendations at Pinterest
8. Efficient Non-Sampling Factorization Machines for Optimal Context-Aware Recommendation
9. Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation
10. Field-aware Factorization Machines for CTR Prediction
11. Deep Learning over Multi-field Categorical Data – A Case Study on User Response Prediction
12. Product-based Neural Networks for User Response Prediction
13. Wide & Deep Learning for Recommender Systems
14. Deep & Cross Network for Ad Click Predictions
15. xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems
16. Deep Interest Network for Click-Through Rate Prediction
17. GateNet: Gating-Enhanced Deep Network for Click-Through Rate Prediction
18. Package Recommendation with Intra- and Inter-Package Attention Networks
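Several of the CTR models above (FFM, xDeepFM, and relatives) extend the second-order factorization machine, which scores pairwise feature interactions through low-rank factor vectors. A sketch of the scoring function using the standard O(k·n) identity, with toy weights:

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Second-order factorization machine score.

    Uses the identity
      sum_{i<j} <v_i, v_j> x_i x_j
        = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ]
    so the pairwise term costs O(k*n) instead of O(n^2).
    """
    linear = w0 + w @ x
    s1 = (V.T @ x) ** 2            # (sum_i v_if x_i)^2 per factor f
    s2 = (V ** 2).T @ (x ** 2)     # sum_i v_if^2 x_i^2 per factor f
    return linear + 0.5 * np.sum(s1 - s2)

x = np.array([1.0, 0.0, 2.0])      # 3 features, one inactive
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])         # k = 2 factors per feature
y = fm_score(x, 0.1, np.array([0.5, 0.5, 0.5]), V)
```

With these values the linear part is 1.6 and the only active pair contributes ⟨v1, v3⟩·x1·x3 = 2, so the score is 3.6.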
XIII. Super-Resolution (16 SOTA models)
1. Image Super-Resolution Using Deep Convolutional Networks
2. Deeply-Recursive Convolutional Network for Image Super-Resolution
3. Accelerating the Super-Resolution Convolutional Neural Network
4. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
5. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
6. Image Restoration Using Convolutional Auto-encoders with Symmetric Skip Connections
7. Accurate Image Super-Resolution Using Very Deep Convolutional Networks
8. Image Super-Resolution via Deep Recursive Residual Network
9. Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution
10. Image Super-Resolution Using Very Deep Residual Channel Attention Networks
11. Image Super-Resolution via Dual-State Recurrent Networks
12. Recovering Realistic Texture in Image Super-resolution by Deep Spatial Feature Transform
13. Cascade Convolutional Neural Network for Image Super-Resolution
14. Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining
15. Single Image Super-Resolution via a Holistic Attention Network
16. One-to-many Approach for Improving Super-Resolution
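The sub-pixel convolution from ESPCN (paper 4 above) became the standard upscaling layer in most later SR networks: instead of interpolating, the network predicts r² channels per output channel and rearranges them into an r× larger spatial grid ("pixel shuffle"). A NumPy sketch of that rearrangement, following the usual channels-first layout:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) tensor into (C, H*r, W*r).

    output[c, h*r + i, w*r + j] = x[c*r*r + i*r + j, h, w]:
    each group of r^2 channels becomes an r x r spatial block.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16.0).reshape(4, 2, 2)   # 4 channels of a 2x2 map, r = 2
y = pixel_shuffle(x, 2)                # 1 channel of a 4x4 map
```

Because the upscaling is a pure reshape, all convolutions run at the low input resolution, which is where ESPCN's "real-time" speed comes from.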
Follow 《学姐带你玩AI》 below and reply "SOTA模型" to get the full collection of papers and code.
Writing all this up took real effort; likes, comments, and bookmarks are much appreciated!