
dc.contributor.advisor  Hartman, John
dc.contributor.author  Choi, Illyoung
dc.creator  Choi, Illyoung
dc.date.accessioned  2021-02-16T21:16:27Z
dc.date.available  2021-02-16T21:16:27Z
dc.date.issued  2020
dc.identifier.citation  Choi, Illyoung. (2020). Remote Data Access in Scientific Computing (Doctoral dissertation, University of Arizona, Tucson, USA).
dc.identifier.uri  http://hdl.handle.net/10150/656767
dc.description.abstract  Remote data access is becoming increasingly common in scientific computing. The trends toward collaborative research, big data, and cloud computing have increased the need to transfer large-scale datasets between geographically separated systems. The net result is that scientific computing is increasingly reliant on data transfers over wide-area networks (WANs), which are often high-cost and low-bandwidth. Furthermore, cluster computing has become the dominant platform for scientific computing, and remote data access is even more complicated when the systems are clusters of computers.

This dissertation presents two file systems, Syndicate and Stargate, that provide remote data access over a WAN for scientific computing. Syndicate uses on-demand block-based data transfer augmented with prefetching and multi-tier caching to address the challenges of large-scale data transfer over a low-bandwidth WAN. However, Syndicate is not designed to run on a cluster and therefore has limitations in network bandwidth use and caching. Stargate, its successor, implements a content-addressable protocol and intra-cluster data caching to avoid redundant data transfers. Stargate performs cluster-to-cluster concurrent data transfers that make more efficient use of network bandwidth. In addition, Stargate uses a novel approach that co-locates computations and transfers to achieve efficient data access in cluster computing, addressing the limitations of Syndicate.

As an example, Syndicate was 62% slower than local HDFS with warm caches, whereas Stargate’s performance was comparable to local HDFS. On benchmarks with heavy workloads, Stargate was 7% faster than WebHDFS and only 8% slower than local HDFS, and it outperformed staging using DistCp. In addition, Stargate’s caches effectively trade high-cost WAN traffic for low-cost LAN traffic. This dissertation explores the viability of on-demand data transfer in scientific computing on clusters. Stargate’s performance and reduction in WAN traffic show that the on-demand data transfer approach can be an acceptable solution for remote data access in scientific computing.
dc.language.iso  en
dc.publisher  The University of Arizona.
dc.rights  Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
dc.subject  cluster computing
dc.subject  cluster-to-cluster data transfer
dc.subject  file system
dc.subject  on-demand data transfer
dc.subject  remote data access
dc.subject  wide-area network
dc.title  Remote Data Access in Scientific Computing
dc.type  text
dc.type  Electronic Dissertation
thesis.degree.grantor  University of Arizona
thesis.degree.level  doctoral
dc.contributor.committeemember  Strout, Michelle
dc.contributor.committeemember  Zhang, Beichuan
dc.contributor.committeemember  Peterson, Larry
thesis.degree.discipline  Graduate College
thesis.degree.discipline  Computer Science
thesis.degree.name  Ph.D.
refterms.dateFOA  2021-02-16T21:16:27Z
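
The content-addressable protocol and intra-cluster caching described in the abstract can be illustrated with a minimal sketch. This is not Stargate’s actual code or API; the class name, the SHA-256 digest choice, and the cache and fetch interfaces below are assumptions made for illustration only. The idea shown is that a block is identified by a digest of its contents, so it is pulled over the high-cost WAN only when no node in the local cluster already caches data with that digest.

    import hashlib


    class ContentAddressableClient:
        """Illustrative sketch only (not Stargate's real API): blocks are looked
        up by a digest of their contents; the WAN is used only on a cache miss."""

        def __init__(self, cluster_cache, wan_fetch):
            # cluster_cache: dict-like cache shared within the cluster,
            #                mapping digest -> bytes (hypothetical interface)
            # wan_fetch: callable digest -> bytes performing the high-cost WAN transfer
            self.cluster_cache = cluster_cache
            self.wan_fetch = wan_fetch

        def get_block(self, digest):
            # Cache hit: serve the block over the low-cost LAN, no WAN traffic.
            data = self.cluster_cache.get(digest)
            if data is not None:
                return data
            # Cache miss: pay the WAN cost once, verify integrity against the
            # digest, and cache the block so other nodes in the cluster can reuse it.
            data = self.wan_fetch(digest)
            if hashlib.sha256(data).hexdigest() != digest:
                raise IOError("fetched block does not match its content digest")
            self.cluster_cache[digest] = data
            return data

Because identical content always maps to the same digest, repeated reads of the same data anywhere in the cluster are served from the shared cache, which is one way the trade of high-cost WAN traffic for low-cost LAN traffic described in the abstract can come about.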


Files in this item

Name: azu_etd_18525_sip1_m.pdf
Size: 1.448 MB
Format: PDF

