University of Twente Student Theses


Transformer Offline Reinforcement Learning for Downlink Link Adaptation

Mo, Alexander (2023) Transformer Offline Reinforcement Learning for Downlink Link Adaptation.

Full text not available from this repository.

Full Text Status: Access to this publication is restricted
Embargo date: 24 January 2026
Abstract: Recent advancements in Transformers have unlocked a new relational analysis technique for Reinforcement Learning (RL). This thesis investigates such models for DownLink Link Adaptation (DLLA). Radio resource management methods such as DLLA form a critical facet of radio-access networks, where intricate optimization problems are continuously resolved under strict latency constraints on the order of milliseconds. Although previous work has showcased improved downlink throughput with an online RL approach, the time dependence of DLLA obstructs its wider adoption. Consequently, this thesis ventures into uncharted territory by extending the DLLA framework with sequence modelling to fit the Transformer architecture. The objective of this thesis is to assess the efficacy of an autoregressive, sequence-modelling-based offline RL Transformer model for DLLA, using a Decision Transformer. Experimentally, the thesis demonstrates that the attention mechanism models the environment dynamics effectively. However, the Decision Transformer framework falls short of the baseline in performance, calling for a different Transformer model.
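
To make the method named in the abstract concrete, the sketch below illustrates the general Decision Transformer idea in PyTorch: return-to-go, state, and action tokens are interleaved into one sequence and passed through a causally masked Transformer, which predicts the next action from each state token. This is a minimal illustration of the published Decision Transformer formulation, not the thesis implementation; all class names, dimensions, and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

class DecisionTransformerSketch(nn.Module):
    # Illustrative sketch only; dimensions and layer sizes are assumptions.
    def __init__(self, state_dim, act_dim, d_model=128, n_heads=4, n_layers=3, max_len=64):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)            # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)  # state token
        self.embed_action = nn.Linear(act_dim, d_model)   # action token
        self.embed_timestep = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim), timesteps: (B, T)
        B, T = states.shape[0], states.shape[1]
        t_emb = self.embed_timestep(timesteps)
        r = self.embed_rtg(rtg) + t_emb
        s = self.embed_state(states) + t_emb
        a = self.embed_action(actions) + t_emb
        # Interleave tokens per step as (R_t, s_t, a_t).
        tokens = torch.stack((r, s, a), dim=2).reshape(B, 3 * T, -1)
        # Causal mask: each position may only attend to earlier positions.
        mask = torch.triu(torch.full((3 * T, 3 * T), float('-inf')), diagonal=1)
        h = self.transformer(tokens, mask=mask)
        # Predict the action from the hidden state of each state token.
        return self.predict_action(h[:, 1::3])

model = DecisionTransformerSketch(state_dim=10, act_dim=4)
rtg = torch.randn(2, 5, 1)
states = torch.randn(2, 5, 10)
actions = torch.randn(2, 5, 4)
timesteps = torch.arange(5).repeat(2, 1)
print(model(rtg, states, actions, timesteps).shape)  # torch.Size([2, 5, 4])

At inference time, such a model is conditioned on a target return-to-go (for example, a desired throughput level) and generates actions autoregressively; offline training reduces to supervised next-action prediction on logged trajectories.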
Item Type: Essay (Master)
Clients:
Ericsson, Stockholm, Sweden
KTH, Stockholm, Sweden
Faculty: EEMCS: Electrical Engineering, Mathematics and Computer Science
Subject: 54 computer science
Programme: Computer Science MSc (60300)
Link to this item: https://purl.utwente.nl/essays/97177

 
