Paper Title

EVALUATING THE PERFORMANCE OF TRANSFORMER ARCHITECTURES AGAINST CLASSICAL DEEP LEARNING MODELS IN SEMI-SUPERVISED LEARNING CONTEXTS

Authors

Keywords

  • transformers
  • semi-supervised learning
  • deep learning
  • CNNs
  • RNNs
  • machine learning
  • representation learning

Article Type

Research Article

Issue

Volume: 6 | Issue: 1 | Pages: 9-16

Published On

April 2025

Abstract

Transformer architectures have recently expanded beyond their initial dominance in natural language processing (NLP) tasks. Meanwhile, semi-supervised learning (SSL) remains a crucial strategy for leveraging limited labeled data alongside abundant unlabeled data. This paper compares the performance of transformer models against classical convolutional neural networks (CNNs) and recurrent neural networks (RNNs) in semi-supervised settings, focusing on general machine learning benchmarks. The study highlights the growing efficacy of transformer-based models in SSL scenarios and discusses their challenges and limitations.
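
To make the SSL setting described above concrete, the following is a minimal pseudo-labeling sketch in PyTorch, not the paper's actual protocol: the TinyCNN and TinyTransformer models, the synthetic data, the training schedule, and the 0.9 confidence threshold are all illustrative assumptions. Each model is fit on the small labeled set, pseudo-labels the unlabeled pool where its predictions are confident, and is then retrained on the combined set.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic stand-in data: 200 labeled and 2,000 unlabeled 16x16 "images", 10 classes.
x_lab = torch.randn(200, 1, 16, 16)
y_lab = torch.randint(0, 10, (200,))
x_unl = torch.randn(2000, 1, 16, 16)

class TinyCNN(nn.Module):  # stand-in for the classical CNN baseline
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3, padding=1)
        self.head = nn.Linear(8 * 16 * 16, 10)
    def forward(self, x):
        return self.head(F.relu(self.conv(x)).flatten(1))

class TinyTransformer(nn.Module):  # stand-in for the transformer contender
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(16, 32)  # treat each image row as a token
        layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(32, 10)
    def forward(self, x):
        tokens = self.embed(x.squeeze(1))  # (N, 16, 32)
        return self.head(self.encoder(tokens).mean(dim=1))

def train(model, x, y, epochs=5):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model

def pseudo_label_round(model, threshold=0.9):   # threshold is an assumed value
    model = train(model, x_lab, y_lab)          # 1) fit on the small labeled set
    with torch.no_grad():                       # 2) predict on the unlabeled pool
        conf, pseudo = F.softmax(model(x_unl), dim=1).max(dim=1)
    keep = conf >= threshold                    # 3) keep only confident predictions
    x_all = torch.cat([x_lab, x_unl[keep]])     # 4) retrain on labeled + pseudo-labels
    y_all = torch.cat([y_lab, pseudo[keep]])
    return train(model, x_all, y_all), keep.sum().item()

for name, m in [("CNN", TinyCNN()), ("Transformer", TinyTransformer())]:
    _, n = pseudo_label_round(m)
    print(f"{name}: {n} pseudo-labeled samples added")

In practice, pseudo-labeling is usually iterated over several rounds, and confidence thresholding can be swapped for consistency-regularization methods; the point of the sketch is only that both architectures face identical labeled/unlabeled splits, which is the kind of controlled comparison the abstract describes.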
