In recent years, advances in convolutional neural networks (CNNs) have substantially propelled image super-resolution (SR) research. Nonetheless, many current SR techniques struggle to handle real-world degradation, particularly in blind scenarios where the degradation is multi-modal, spatially variant, and of unknown distribution. To address this issue, we propose a degradation-aware Swin Transformer with sparse attention for blind SR. The model is built from degradation-aware residual Swin Transformer sparse attention blocks, each combining a Swin Transformer layer, non-local sparse attention (NLSA), and degradation-aware convolution (DA Conv). The Swin Transformer layer, acting as a local attention mechanism, overcomes CNN limitations by processing large images and capturing long-range dependencies. NLSA, acting as a global attention mechanism, mitigates the shortcomings of standard non-local attention: by partitioning the deep-feature pixels into groups, it prevents the model from attending to noisy and less informative locations. DA Conv integrates the estimated degradation kernel with the extracted features. Our model delivers superior visual quality and reconstruction accuracy with an efficient number of parameters and Mult-Adds. For example, on the Set5 dataset with a kernel size of 0.06 and a scaling factor of x4, our model achieves a 0.1 dB PSNR improvement over DRAN.
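The grouping idea behind NLSA can be illustrated with a toy NumPy sketch. This is not the authors' implementation: the hash projection, group count, and feature shapes below are illustrative assumptions. It shows the core mechanism the abstract describes: pixels are bucketed by a random-projection hash (similar features tend to share a bucket), and softmax attention is computed only within each bucket, so a pixel never attends to unrelated, potentially noisy locations.

```python
import numpy as np

def sparse_nonlocal_attention(feats, num_groups=4, seed=0):
    """Toy sketch of NLSA-style grouped attention (illustrative only).

    feats: (n, c) array of n flattened deep-feature pixels with c channels.
    Pixels are partitioned into num_groups buckets via a random-projection
    hash; scaled dot-product self-attention runs inside each bucket.
    """
    n, c = feats.shape
    rng = np.random.default_rng(seed)
    # Random projections act as a locality-sensitive hash:
    # similar feature vectors tend to land in the same bucket.
    proj = rng.standard_normal((c, num_groups))
    buckets = np.argmax(feats @ proj, axis=1)
    out = np.zeros_like(feats)
    for g in range(num_groups):
        idx = np.where(buckets == g)[0]
        if idx.size == 0:
            continue
        q = k = v = feats[idx]                 # self-attention within the bucket
        scores = q @ k.T / np.sqrt(c)          # scaled dot-product scores
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)      # softmax over bucket members only
        out[idx] = w @ v                       # each pixel attends inside its group
    return out
```

Because attention is restricted to buckets, the cost drops from O(n^2) for full non-local attention to roughly O(n^2 / num_groups) when buckets are balanced, which is where the efficiency gain comes from.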