Publication details

  • Analyzing Data Properties using Statistical Sampling – Illustrated on Scientific File Formats (Julian Kunkel), In Supercomputing Frontiers and Innovations, Volume 3, Number 3, pp. 19–33, (Editors: Jack Dongarra, Vladimir Voevodin), Publishing Center of South Ural State University (454080, Lenin prospekt, 76, Chelyabinsk, Russia), October 2016

Abstract

Understanding the characteristics of data stored in data centers helps computer scientists identify the most suitable storage infrastructure to deal with these workloads. For example, knowing the relevance of file formats allows optimizing the relevant formats, but also helps during procurement to define benchmarks that cover these formats. Existing studies that investigate performance improvements and techniques for data reduction such as deduplication and compression operate on a subset of data. Some of these studies claim the selected data is representative and extrapolate their results to the full data center. One hurdle of running novel schemes on the complete data is the vast amount of data stored and, thus, the resources required to analyze the complete data set. Even if this were feasible, the costs of running many such experiments must be justified. This paper investigates stochastic sampling methods to compute and analyze quantities of interest both in terms of file counts and of occupied storage space. It is demonstrated that on our production system, scanning 1% of files and data volume is sufficient to draw conclusions. This speeds up the analysis process and reduces the costs of such studies significantly.
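
The core idea can be sketched in a few lines of code. The snippet below is a minimal illustration only, not the tooling used in the paper: it draws a simple random sample of file paths and then estimates the share of a given file format both by file count and by occupied storage space (size-weighted). The file list, the suffix-based format check, and the 1% sampling fraction are illustrative assumptions.

import os
import random

def sample_files(all_files, fraction=0.01, seed=42):
    # Draw a simple random sample covering roughly `fraction` of all files.
    rng = random.Random(seed)
    k = max(1, int(len(all_files) * fraction))
    return rng.sample(all_files, k)

def estimate_format_share(sampled_files, suffix=".nc"):
    # Estimate the share of one file format within the sample,
    # both by file count and by occupied storage space (size-weighted).
    sizes = {f: os.path.getsize(f) for f in sampled_files}
    matching = [f for f in sampled_files if f.endswith(suffix)]
    count_share = len(matching) / len(sampled_files)
    total_size = sum(sizes.values())
    space_share = sum(sizes[f] for f in matching) / total_size if total_size else 0.0
    return count_share, space_share

In use, the full list of file paths would come from a metadata scan of the file system; the paper's finding is that sampling about 1% of files and data volume already suffices to draw conclusions on the production system studied.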

BibTeX

@article{ADPUSSIOSF16,
	author	 = {Julian Kunkel},
	title	 = {{Analyzing Data Properties using Statistical Sampling -- Illustrated on Scientific File Formats}},
	year	 = {2016},
	month	 = {10},
	editor	 = {Jack Dongarra and Vladimir Voevodin},
	publisher	 = {Publishing Center of South Ural State University},
	address	 = {454080, Lenin prospekt, 76, Chelyabinsk, Russia},
	journal	 = {Supercomputing Frontiers and Innovations},
	volume	 = {3},
	number	 = {3},
	pages	 = {19--33},
	doi	 = {10.14529/jsfi1603},
	abstract	 = {Understanding the characteristics of data stored in data centers helps computer scientists in identifying the most suitable storage infrastructure to deal with these workloads. For example, knowing the relevance of file formats allows optimizing the relevant formats but also helps in a procurement to define benchmarks that cover these formats. Existing studies that investigate performance improvements and techniques for data reduction such as deduplication and compression operate on a subset of data. Some of those studies claim the selected data is representative and scale their result to the scale of the data center. One hurdle of running novel schemes on the complete data is the vast amount of data stored and, thus, the resources required to analyze the complete data set. Even if this would be feasible, the costs for running many of those experiments must be justified. This paper investigates stochastic sampling methods to compute and analyze quantities of interest on file numbers but also on the occupied storage space. It will be demonstrated that on our production system, scanning 1\% of files and data volume is sufficient to deduct conclusions. This speeds up the analysis process and reduces costs of such studies significantly.},
	url	 = {http://superfri.org/superfri/article/view/106},
}
