Programming Satan’s Computer
Cryptographic protocols are used in distributed systems to identify users and authenticate transactions. They may involve the exchange of about 2–5 messages, and one might think that a program of this size would be fairly easy to get right. However, this is absolutely not the case: bugs are routinely found in well-known protocols years after they were first published. The problem is the presence of a hostile opponent, who can alter messages at will. In effect, our task is to program a computer which gives answers which are subtly and maliciously wrong at the most inconvenient possible moment. This is a fascinating problem; and we hope that the lessons learned from programming Satan’s computer may be helpful in tackling the more common problem of programming Murphy’s.
paper  security  infosec  comp-sci  crypto  protocol  design 
13 days ago
Overcoming the challenges to feedback-directed optimization (Keynote Talk)
Feedback-directed optimization (FDO) is a general term used to describe any technique that alters a program's execution based on tendencies observed in its present or past runs. This paper reviews the current state of affairs in FDO and discusses the challenges inhibiting further acceptance of these techniques. It also argues that current trends in hardware and software technology have resulted in an execution environment where immutable executables and traditional static optimizations are no longer sufficient. It explains how we can improve the effectiveness of our optimizers by increasing our understanding of program behavior, and it provides examples of temporal behavior that we can (or could in the future) exploit during optimization.
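To make the "feedback-directed" idea concrete, here is a minimal C sketch with illustrative names (hits, compute, fast_even) that are not from the talk: the program records which input class dominates at run time and steers later calls down a path specialized for it. A real FDO system would gather such counts in a profiling run and let the compiler or JIT act on them, rather than hand-writing the counters.

```c
#include <stdio.h>

static unsigned long hits[2];   /* observed "tendencies": per-class call counters */

static int slow_generic(int x) { return x * x + 1; }            /* handles any input */
static int fast_even(int x)    { return (x >> 1) * x * 2 + 1; } /* equivalent, but specialized for even x */

static int compute(int x)
{
    int cls = (x % 2 == 0);     /* classify the input: 1 = even, 0 = odd */
    hits[cls]++;

    /* Feedback-directed choice: once the observed profile shows even inputs
     * dominating, prefer the specialized path and fall back otherwise. */
    if (cls && hits[1] > 4 * hits[0])
        return fast_even(x);
    return slow_generic(x);
}

int main(void)
{
    long sum = 0;
    for (int i = 0; i < 1000; i++)
        sum += compute(i * 2);  /* this workload is heavily even-valued */
    printf("sum=%ld even=%lu odd=%lu\n", sum, hits[1], hits[0]);
    return 0;
}
```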
paper  comp-sci  compilers  optimisation  performance 
26 days ago
Reasoning about the Node.js Event Loop using Async Graphs
With the popularity of Node.js, asynchronous, event-driven programming has become widespread in server-side applications. While conceptually simple, event-based programming can be tedious and error-prone. The complex semantics of the Node.js event loop, coupled with the different flavors of asynchronous execution in JavaScript, easily leads to bugs. This paper introduces a new model called Async Graph to reason about the runtime behavior of applications and their interactions with the Node.js event loop. Based on the model, we have developed AsyncG, a tool to automatically build and analyze the Async Graph of a running application, and to identify bugs related to all sources of asynchronous execution in Node.js. AsyncG is compatible with the latest ECMAScript language features and can be (de)activated at runtime. In our evaluation, we show how AsyncG can be used to identify bugs in real-world Node.js applications.
paper  node  node.js  asynchronous  debugging  tools 
5 weeks ago
A fork() in the road
The received wisdom suggests that Unix’s unusual combination of fork() and exec() for process creation was an inspired design. In this paper, we argue that fork was a clever hack for machines and programs of the 1970s that has long outlived its usefulness and is now a liability. We catalog the ways in which fork is a terrible abstraction for the modern programmer to use, describe how it compromises OS implementations, and propose alternatives.

As the designers and implementers of operating systems, we should acknowledge that fork’s continued existence as a first-class OS primitive holds back systems research, and deprecate it. As educators, we should teach fork as a historical artifact, and not the first process creation mechanism students encounter.
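One of the alternatives usually contrasted with fork()+exec() is a single-step spawn call such as POSIX posix_spawn(). The C sketch below (error handling trimmed) shows the two styles side by side; it is an illustration of the contrast, not code from the paper.

```c
#include <spawn.h>
#include <sys/wait.h>
#include <unistd.h>

extern char **environ;

static int run_with_fork(char *const argv[])
{
    pid_t pid = fork();             /* copy the whole parent process...        */
    if (pid < 0)
        return -1;
    if (pid == 0) {
        execvp(argv[0], argv);      /* ...then immediately replace that copy   */
        _exit(127);                 /* only reached if exec failed             */
    }
    int status;
    waitpid(pid, &status, 0);
    return status;
}

static int run_with_spawn(char *const argv[])
{
    pid_t pid;
    int status;
    /* Create the child and start the new program in one call; no copy of the
     * parent's state is made or mutated in between. */
    if (posix_spawnp(&pid, argv[0], NULL, NULL, argv, environ) != 0)
        return -1;
    waitpid(pid, &status, 0);
    return status;
}

int main(void)
{
    char *const argv[] = { "echo", "hello from the child", NULL };
    run_with_fork(argv);
    run_with_spawn(argv);
    return 0;
}
```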
filetype:pdf  unix  os  design  fork  memory  safety  performance 
9 weeks ago
Fast Serializable Multi-Version Concurrency Control for Main-Memory Database Systems
Multi-Version Concurrency Control (MVCC) is a widely employed concurrency control mechanism, as it allows for execution modes where readers never block writers. However, most systems implement only snapshot isolation (SI) instead of full serializability. Adding serializability guarantees to existing SI implementations tends to be prohibitively expensive.

We present a novel MVCC implementation for main-memory database systems that has very little overhead compared to serial execution with single-version concurrency control, even when maintaining serializability guarantees. Updating data in-place and storing versions as before-image deltas in undo buffers not only allows us to retain the high scan performance of single-version systems but also forms the basis of our cheap and fine-grained serializability validation mechanism. The novel idea is based on an adaptation of precision locking and verifies that the (extensional) writes of recently committed transactions do not intersect with the (intensional) read predicate space of a committing transaction. We experimentally show that our MVCC model allows very fast processing of transactions with point accesses as well as read-heavy transactions and that there is little need to prefer SI over full serializability any longer.
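A rough C sketch of the validation idea, with made-up types (RangePredicate, CommittedWrite, Txn) rather than the paper's actual data structures: the committing transaction keeps the predicates it read with, and aborts if any write committed after its start timestamp falls inside that predicate space.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { long lo, hi; } RangePredicate;                 /* "key BETWEEN lo AND hi" */

typedef struct { long key; unsigned long commit_ts; } CommittedWrite;

typedef struct {
    unsigned long  start_ts;    /* snapshot the transaction read from   */
    RangePredicate reads[8];    /* predicates evaluated by its scans    */
    int            nreads;
} Txn;

/* Returns true if the transaction can commit serializably: no write that
 * committed after start_ts intersects any predicate the transaction read. */
static bool validate(const Txn *t, const CommittedWrite *log, int nlog)
{
    for (int i = 0; i < nlog; i++) {
        if (log[i].commit_ts <= t->start_ts)
            continue;                                   /* already visible to this txn */
        for (int j = 0; j < t->nreads; j++)
            if (log[i].key >= t->reads[j].lo && log[i].key <= t->reads[j].hi)
                return false;                           /* read predicate space violated */
    }
    return true;
}

int main(void)
{
    CommittedWrite log[] = { { .key = 42, .commit_ts = 110 } }; /* committed after us */
    Txn t = { .start_ts = 100,
              .reads = { { .lo = 40, .hi = 50 } },              /* we scanned keys 40..50 */
              .nreads = 1 };
    printf(validate(&t, log, 1) ? "commit\n" : "abort: a rescan would see different rows\n");
    return 0;
}
```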
comp-sci  database  mvcc  performance  design  architecture  filetype:pdf  paper  serialisability  linearisability 
11 weeks ago
EIO: Error Handling is Occasionally Correct
The reliability of file systems depends in part on how well they propagate errors. We develop a static analysis technique, EDP, that analyzes how file systems and storage device drivers propagate error codes. Running our EDP analysis on all file systems and 3 major storage device drivers in Linux 2.6, we find that errors are often incorrectly propagated; 1153 calls (13%) drop an error code without handling it.

We perform a set of analyses to rank the robustness of each subsystem based on the completeness of its error propagation; we find that many popular file systems are less robust than other available choices. We confirm that write errors are neglected more often than read errors. We also find that many violations are not corner-case mistakes, but perhaps intentional choices. Finally, we show that inter-module calls play a part in incorrect error propagation, but that chained propagations do not. In conclusion, error propagation appears complex and hard to perform correctly in modern systems.
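The kind of defect the analysis counts can be shown in a few lines of C; the function names here are illustrative, not taken from the Linux sources. The broken caller drops the callee's return value, so the -EIO never reaches anyone who could handle it.

```c
#include <errno.h>
#include <stdio.h>

static int flush_block(int dirty)
{
    if (dirty)
        return -EIO;        /* the device reported a write error */
    return 0;
}

static void sync_file_broken(void)
{
    flush_block(1);         /* BUG: return value dropped; the -EIO vanishes here */
}

static int sync_file_fixed(void)
{
    int err = flush_block(1);
    if (err)
        return err;         /* propagate the error code to our caller */
    return 0;
}

int main(void)
{
    sync_file_broken();
    int err = sync_file_fixed();
    if (err)
        fprintf(stderr, "sync failed: %d\n", err);
    return 0;
}
```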
filetype:pdf  paper  comp-sci  filesystem  errors  correctness  error-handling 
11 weeks ago