Producing Wrong Data Without Doing Anything Obviously Wrong!
This paper presents a surprising result: changing a seemingly innocuous aspect of an experimental setup can cause a systems researcher to draw wrong conclusions from an experiment. What appears to be an innocuous aspect in the experimental setup may in fact introduce a significant bias in an evaluation. This phenomenon is called measurement bias in the natural and social sciences.

Our results demonstrate that measurement bias is significant and commonplace in computer system evaluation. By significant we mean that measurement bias can lead to a performance analysis that either over-states an effect or even yields an incorrect conclusion. By commonplace we mean that measurement bias occurs in all architectures that we tried (Pentium 4, Core 2, and m5 O3CPU), both compilers that we tried (gcc and Intel’s C compiler), and most of the SPEC CPU2006 C programs. Thus, we cannot ignore measurement bias. Nevertheless, in a literature survey of 133 recent papers from ASPLOS, PACT, PLDI, and CGO, we determined that none of the papers with experimental results adequately consider measurement bias.

Inspired by similar problems and their solutions in other sciences, we describe and demonstrate two methods, one for detecting (causal analysis) and one for avoiding (setup randomization) measurement bias.
paper  filetype:pdf  experiment  bias  measurement 
september 2017
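The setup-randomization idea from the abstract can be approximated with a small script: repeat each measurement under many randomized experimental setups and summarize across trials instead of trusting a single configuration. A minimal Python sketch, assuming GNU time is installed and varying only the UNIX environment size (one of the bias sources the paper studies); the benchmark binaries bench_baseline and bench_optimized are hypothetical placeholders:

import random
import statistics
import string
import subprocess

def run_with_random_setup(cmd, trials=30):
    """Run `cmd` repeatedly under randomized setups and return
    (mean elapsed seconds, standard deviation)."""
    times = []
    for _ in range(trials):
        env = {
            "PATH": "/usr/bin:/bin",
            # Random-length padding changes the environment size, which
            # shifts initial stack alignment and can bias measurements.
            "PADDING": "".join(random.choices(string.ascii_letters,
                                              k=random.randint(0, 4096))),
        }
        # GNU time with -f "%e" prints elapsed seconds to stderr; this
        # assumes the benchmark itself writes nothing to stderr.
        out = subprocess.run(["/usr/bin/time", "-f", "%e"] + cmd,
                             env=env, capture_output=True, text=True)
        times.append(float(out.stderr.strip()))
    return statistics.mean(times), statistics.stdev(times)

# Example: compare two hypothetical builds of the same benchmark.
# mean_a, sd_a = run_with_random_setup(["./bench_baseline"])
# mean_b, sd_b = run_with_random_setup(["./bench_optimized"])

The paper's own evaluation randomizes more factors (e.g. link order) and applies proper statistics over the resulting distribution; this sketch only illustrates the shape of the approach.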
Browser Security White Paper
This white paper provides a technical comparison of the security features and attack surface of Google Chrome, Microsoft Edge, and Internet Explorer. We aim to identify which browser provides the highest level of security in common enterprise usage scenarios, and show how differences in design and implementation of various security technologies in modern web browsers might affect their security.

Comparisons are done using a qualitative approach since many issues regarding browser security cannot easily be quantified. We focus on the weaknesses of different mitigations and hardening features and take an attacker’s point of view. This should give the reader an impression about how easy or hard it is to attack a certain browser.

The analysis has been sponsored by Google. X41 D-Sec GmbH accepted this sponsorship on the condition that Google would not interfere with our testing methodology or control the content of our paper. We are aware that we could unconsciously be biased to produce results favorable to our sponsor, and have attempted to eliminate this by being as transparent as possible about our decision-making processes and testing methodologies.
browser  edge  chrome  ie  web  security  paper  infosec  filetype:pdf 
september 2017
Energy Efficiency across Programming Languages
This paper presents a study of the runtime, memory usage, and energy consumption of twenty-seven well-known programming languages. We monitor the performance of these languages using ten different programming problems, expressed in each of the languages. Our results show interesting findings, such as slower/faster languages consuming less/more energy, and how memory usage influences energy consumption. We show how our results can help software engineers decide which language to use when energy efficiency is a concern.
efficiency  language  energy  green  paper  filetype:pdf 
september 2017
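The kind of measurement the abstract describes can be reproduced in miniature on Linux by reading the Intel RAPL package-energy counter around a benchmark run. A minimal sketch, assuming the powercap interface is present and readable (often root-only) and using hypothetical benchmark commands; the paper's actual measurement framework is more elaborate:

import subprocess
import time

# Package-0 energy counter exposed by the Linux powercap/intel_rapl driver.
RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def measure(cmd):
    """Return (wall-clock seconds, joules) consumed while running `cmd`.

    Reads the RAPL counter before and after the run; counter wrap-around
    and other domains (additional packages, DRAM) are ignored here.
    """
    with open(RAPL) as f:
        e0 = int(f.read())
    t0 = time.monotonic()
    subprocess.run(cmd, check=True)
    elapsed = time.monotonic() - t0
    with open(RAPL) as f:
        e1 = int(f.read())
    return elapsed, (e1 - e0) / 1e6  # microjoules -> joules

# Example: the same problem implemented in two languages (hypothetical files).
# print(measure(["./binary_trees_c"]))
# print(measure(["python3", "binary_trees.py"]))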