
dc.contributor.advisor: Krunz, Marwan M.
dc.contributor.author: Aykin, Irmak
dc.creator: Aykin, Irmak
dc.date.accessioned: 2020-05-22T21:56:39Z
dc.date.available: 2020-05-22T21:56:39Z
dc.date.issued: 2020
dc.identifier.uri: http://hdl.handle.net/10150/641360
dc.description.abstract: Millimeter-wave (mmWave) communications have recently attracted considerable interest as a key element of next-generation wireless systems, e.g., 5G New Radio (NR) cellular systems and WiGig. The abundant spectrum available in the mmWave bands enables many users to be served by a base station (BS), with significantly higher data rates than what is possible at sub-6 GHz bands. However, wireless transmissions at mmWave frequencies suffer from high propagation losses and poor penetration. At the same time, the small wavelengths allow many antenna elements to be packed into a single portable device without increasing its form factor. With proper processing of signals fed into these antennas, transmissions can be beamed along a desired direction, resulting in high beamforming gain that compensates for the severe signal attenuation. However, beamforming and directional communications raise various challenges related to beam management. In 4G LTE systems, the initial access (IA) procedure, i.e., the process by which a user equipment (UE) establishes a connection with a BS, is performed in an omnidirectional fashion, alleviating the complexity of beam alignment. In contrast, the IA procedure in 5G NR is done directionally to reach more users and support subsequent directional data transmissions. Specifically, the BS needs to sequentially sweep its beams so that the UEs can measure their qualities, determine the best beam, and report it back to the BS. This directional procedure incurs significant delay, which lowers the link throughput and spectral efficiency. Methods that focus on reducing this link establishment delay often suffer from an unacceptable probability of missing a UE in range, motivating the need for fast and efficient link establishment techniques. In addition to the IA problem, directionally tracking a mobile UE efficiently and reliably is also a research challenge.
After establishing the initial directional link between a BS and a UE, beam misalignments occur frequently due to UE mobility, environmental changes, or wind. These misalignments incur a large beamforming loss, resulting in a reduced data rate or even link outage. Consequently, tracking UEs and maintaining the quality of their directional links are quite critical. This can be done via frequent channel probing, at the expense of extra control overhead, which lowers the spectral efficiency and increases latency. In this dissertation, we design efficient and reliable initial access and beam tracking protocols for directional mmWave systems. In designing these protocols, we aim to minimize the UE finding/tracking overhead while maintaining a low probability of missing previously undiscovered UEs. We first design an IA protocol called FastLink, which allows the BS and the UE to transmit and receive using the narrowest possible beams, hence providing the highest possible beamforming gain. At the same time, beam sweeping in FastLink is done in such a way that the discovery process is much faster than the beam scanning mechanisms used in both the 802.11ad standard and 5G NR. FastLink exploits compressive sensing (CS) theory to determine the number of measurements (beam probes) needed to identify the 'dominant' channel cluster. Inspired by this theory, we design a search algorithm called 3-dimensional peak finding (3DPF) to find the best beam. 3DPF divides the set of beam directions into equally spaced subsets and then finds the beam that achieves the maximum received power in each subset. The number of subsets is a design parameter, determined using CS results. We integrate 3DPF into the FastLink protocol and specify the control messages that need to be exchanged between the BS and the UE in support of this protocol.
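The coarse-to-fine flavor of such a peak search can be illustrated with a simplified sketch. This is not the dissertation's exact 3DPF algorithm: the function name, the one-dimensional beam index, and the unimodal-power assumption are all illustrative. The idea shown is that probing a small number of equally spaced beams and then refining locally around the strongest probe needs far fewer measurements than an exhaustive sweep.

```python
import numpy as np

def peak_search(measure, num_beams, num_probes):
    """Coarse-to-fine beam search sketch (illustrative, not the exact 3DPF).

    measure(b) returns the received power when probing beam index b."""
    # Coarse stage: probe equally spaced beam indices across the codebook.
    coarse = np.linspace(0, num_beams - 1, num_probes, dtype=int)
    powers = [measure(int(b)) for b in coarse]
    center = int(coarse[int(np.argmax(powers))])
    # Fine stage: refine exhaustively within one coarse step of the winner.
    step = max(1, (num_beams - 1) // max(1, num_probes - 1))
    lo, hi = max(0, center - step), min(num_beams - 1, center + step)
    return max(range(lo, hi + 1), key=measure)
```

With 64 beams and 8 coarse probes, this sketch measures roughly 8 + 19 beams instead of 64; the actual number of probes in FastLink is chosen from compressive-sensing results rather than fixed by hand.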
Next, we develop a communication protocol called SmartLink that utilizes multiple channel clusters between the BS and the UE to provide an effective mechanism for maintaining communications under random blockage. SmartLink is built on a unique beam scanning technique called multi-lobe beam search (MLBS). Built on shortest-depth decision trees, MLBS uses beam patterns with multiple lobes to simultaneously discover multiple channel clusters. Through rigorous analysis, we show that MLBS reduces the search time from linear to logarithmic with respect to the total number of beam directions. Based on the discovered channel clusters and their relative gains, we then virtually divide the Tx and the Rx antenna arrays into several sub-arrays. Each sub-array is assigned to a beam that points towards one of the inferred channel clusters. The goal of such antenna partitioning is to maximize the average data rate in a blockage-prone environment. Besides incorporating MLBS and antenna partitioning, the specification of SmartLink also includes the required message exchange between the BS and the UE to establish the multi-directional link. Finally, for beam tracking in mmWave systems, we propose a restless multi-armed bandit framework, called MAMBA. In MAMBA, each beam is modeled as an arm of a multi-armed bandit problem. The BS acts as the agent, interacting with these arms to learn the underlying system dynamics, i.e., changes in beam qualities over time. The quality of a beam is quantified by the best modulation and coding scheme (MCS) that can be supported. We develop a reinforcement learning (RL) algorithm called adaptive Thompson sampling (ATS) to determine the optimal beam/MCS pair in MAMBA for each downlink transmission. ATS aims to maximize the expected transmission rate, taking into account the estimated reward distributions associated with each beam.
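Why a multi-lobe search can be logarithmic is easy to see with a toy sketch. Assume, purely for illustration, a single dominant cluster, ideal lobe detection, and a bit-wise encoding of beam indices (none of which is claimed to be the dissertation's exact MLBS design): round k activates one lobe on every beam whose k-th index bit is set, so the UE's yes/no responses over ceil(log2 N) rounds spell out the best beam's index.

```python
import math

def multi_lobe_search(respond, num_beams):
    """Bit-encoded multi-lobe search sketch (illustrative, not exact MLBS).

    respond(lobes) returns True if the UE detects energy from any of the
    simultaneously activated lobe directions in `lobes`."""
    rounds = math.ceil(math.log2(num_beams))
    index = 0
    for k in range(rounds):
        # Multi-lobe pattern: one lobe per beam whose k-th bit is set.
        lobes = [b for b in range(num_beams) if (b >> k) & 1]
        if respond(lobes):
            index |= 1 << k
    return index
```

For 32 beam directions this sketch needs only 5 multi-lobe probes instead of 32 single-lobe ones, which is the linear-to-logarithmic reduction stated above; handling multiple clusters at once requires the richer decision-tree construction of the actual MLBS.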
However, due to the time-varying nature of the environment, keeping track of these reward distributions is nontrivial. To address this issue, ATS uses a priori beam-quality information collected through IA, and updates this information at each iteration based on the feedback obtained from the UE. The beam and MCS for the next downlink transmission are then selected based on the updated posterior distributions of the rewards, i.e., achievable rates of various beams. Due to its model-free nature, ATS can accurately estimate the best beam/MCS pair without making unrealistic assumptions regarding channel dynamics and/or the UE mobility pattern.
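A minimal Thompson-sampling loop over beam/MCS arms can make the select-then-update cycle concrete. This sketch assumes a simple Bernoulli ACK/NACK success model with Beta posteriors; the function names, the dictionary layout, and the stationarity assumption are illustrative, and the dissertation's ATS extends this idea to time-varying (restless) beam qualities.

```python
import random

def select_beam_mcs(beams, mcs_rates, alpha, beta):
    """Thompson-sampling arm selection sketch (illustrative, not exact ATS).

    alpha[b][m], beta[b][m] are Beta-posterior parameters for the ACK
    probability of beam b with MCS index m; mcs_rates[m] is its data rate."""
    best, best_score = None, float("-inf")
    for b in beams:
        for m, rate in enumerate(mcs_rates):
            # Sample a plausible success probability from the posterior.
            p = random.betavariate(alpha[b][m], beta[b][m])
            if p * rate > best_score:
                best, best_score = (b, m), p * rate
    return best

def update_posterior(alpha, beta, choice, ack):
    """ACK/NACK feedback from the UE updates the chosen pair's posterior."""
    b, m = choice
    if ack:
        alpha[b][m] += 1.0
    else:
        beta[b][m] += 1.0
```

Because selection draws from posteriors rather than point estimates, occasionally a weaker beam is probed, which is exactly the exploration needed to notice when beam qualities drift after IA.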
dc.language.iso: en
dc.publisher: The University of Arizona.
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
dc.subject: 5G
dc.subject: beamforming
dc.subject: machine learning
dc.subject: millimeter-wave
dc.subject: wireless communications
dc.title: Design and Implementation of Machine-Learning-Based Beam Management Protocols for 5G Millimeter-Wave Networks
dc.type: text
dc.type: Electronic Dissertation
thesis.degree.grantor: University of Arizona
thesis.degree.level: doctoral
dc.contributor.committeemember: Lazos, Loukas
dc.contributor.committeemember: Li, Ming
thesis.degree.discipline: Graduate College
thesis.degree.discipline: Electrical & Computer Engineering
thesis.degree.name: Ph.D.
refterms.dateFOA: 2020-05-22T21:56:39Z


Files in this item

Name: azu_etd_17799_sip1_m.pdf
Size: 7.240 MB
Format: PDF
