
jabley : design   843

[no title]
This article is intended to serve as an introduction to the newest Linux IO interface, io_uring, and to
compare it to the existing offerings. We'll go over the reasons for its existence, its inner workings, and
the user-visible interface. The article will not go into details about specific commands and the like, as
that would just duplicate the information available in the associated man pages. Rather, it will attempt
to provide an introduction to io_uring and how it works, with the goal that the reader gains a deeper
understanding of how it all ties together. That said, there will be some overlap between this article and
the man pages; it's impossible to describe io_uring without including some of those details.
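The "inner workings" the abstract alludes to centre on two shared rings, a submission queue (SQ) and a completion queue (CQ), indexed by free-running head/tail counters masked against a power-of-two size. As a minimal sketch (illustrative only; the real structures are memory-mapped, shared with the kernel, and updated with memory barriers), the index arithmetic looks like this:

```python
# Toy model of an io_uring-style ring: a power-of-two-sized buffer with
# free-running head/tail counters, where slots are addressed by masking.
# Illustrative only -- not the real kernel ABI.

class Ring:
    def __init__(self, entries=8):
        assert entries & (entries - 1) == 0, "size must be a power of two"
        self.mask = entries - 1
        self.slots = [None] * entries
        self.head = 0  # consumer advances head
        self.tail = 0  # producer advances tail

    def push(self, item):
        if self.tail - self.head > self.mask:
            raise RuntimeError("ring full")
        self.slots[self.tail & self.mask] = item
        self.tail += 1  # publish only after the slot is written

    def pop(self):
        if self.head == self.tail:
            return None  # ring empty
        item = self.slots[self.head & self.mask]
        self.head += 1
        return item

sq = Ring(8)
sq.push("read:fd=3")
sq.push("write:fd=4")
print(sq.pop())  # → read:fd=3
```

Because the counters only ever increase, producer and consumer can run without comparing wrapped indices: the ring is empty when `head == tail` and full when `tail - head` equals the ring size.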
linux  kernel  io  async  io_uring  data-structures  performance  overview  design  filetype:pdf 
4 weeks ago by jabley
Programming Satan’s Computer
Cryptographic protocols are used in distributed systems to
identify users and authenticate transactions. They may involve the exchange of about 2–5 messages, and one might think that a program of
this size would be fairly easy to get right. However, this is absolutely not
the case: bugs are routinely found in well-known protocols years
after they were first published. The problem is the presence of a hostile
opponent, who can alter messages at will. In effect, our task is to program a computer which gives answers which are subtly and maliciously
wrong at the most inconvenient possible moment. This is a fascinating
problem; and we hope that the lessons learned from programming Satan’s computer may be helpful in tackling the more common problem of
programming Murphy’s.
paper  security  infosec  comp-sci  crypto  protocol  design 
june 2019 by jabley
A fork() in the road
The received wisdom suggests that Unix’s unusual combination of fork() and exec() for process creation was an
inspired design. In this paper, we argue that fork was a clever
hack for machines and programs of the 1970s that has long
outlived its usefulness and is now a liability. We catalog the
ways in which fork is a terrible abstraction for the modern programmer to use, describe how it compromises OS
implementations, and propose alternatives.
As the designers and implementers of operating systems,
we should acknowledge that fork’s continued existence as
a first-class OS primitive holds back systems research, and
deprecate it. As educators, we should teach fork as a historical artifact, and not the first process creation mechanism
students encounter.
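Among the alternatives the paper points to are spawn-style APIs, which create the child and start the new program in one step, with no intermediate copy of the parent's address space. A minimal sketch using Python's binding to `posix_spawn(3)` (requires Python 3.8+ on a POSIX system):

```python
# Spawn-style process creation as an alternative to fork()+exec():
# os.posix_spawn creates the child and runs the new program in one call.
import os
import sys

pid = os.posix_spawn(
    sys.executable,                                  # program to run
    [sys.executable, "-c", "print('child ran')"],    # argv for the child
    os.environ,                                      # child environment
)
_, status = os.waitpid(pid, 0)                       # reap the child
print("child exit status:", os.waitstatus_to_exitcode(status))
```

The contrast with `fork()` is the point: there is no window in which the child holds a copy of the parent's threads, locks, and file state, which is exactly the class of hazards the paper catalogues.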
filetype:pdf  unix  os  design  fork  memory  safety  performance 
april 2019 by jabley
Fast Serializable Multi-Version Concurrency Control for Main-Memory Database Systems
Multi-Version Concurrency Control (MVCC) is a widely employed concurrency control mechanism, as it allows for execution modes where readers never block writers. However,
most systems implement only snapshot isolation (SI) instead
of full serializability. Adding serializability guarantees to existing SI implementations tends to be prohibitively expensive.
We present a novel MVCC implementation for main-memory database systems that has very little overhead compared
to serial execution with single-version concurrency control,
even when maintaining serializability guarantees. Updating
data in-place and storing versions as before-image deltas in
undo buffers not only allows us to retain the high scan performance of single-version systems but also forms the basis of our cheap and fine-grained serializability validation
mechanism. The novel idea is based on an adaptation of
precision locking and verifies that the (extensional) writes
of recently committed transactions do not intersect with the
(intensional) read predicate space of a committing transaction. We experimentally show that our MVCC model allows
very fast processing of transactions with point accesses as
well as read-heavy transactions and that there is little need
to prefer SI over full serializability any longer.
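The precision-locking idea in the abstract can be sketched directly: a committing transaction records its read *predicates*, and validation checks whether any record written by a concurrently committed transaction satisfies one of them. This is a simplified model under assumed names (the paper also checks before-image deltas in undo buffers, omitted here):

```python
# Sketch of precision-locking-style serializability validation: the
# (extensional) writes of recently committed transactions must not
# intersect the (intensional) read predicate space of the committer.

def validate(read_predicates, committed_writes):
    """Return True if the committing transaction is serializable."""
    for record in committed_writes:      # extensional writes
        for pred in read_predicates:     # intensional read set
            if pred(record):             # intersection -> conflict
                return False
    return True

# Transaction T read "all accounts with balance < 100".
preds = [lambda row: row["balance"] < 100]

print(validate(preds, [{"balance": 500}]))  # → True  (no overlap: commit)
print(validate(preds, [{"balance": 50}]))   # → False (phantom: abort)
```

Because validation only touches the predicates of the committing transaction and the write sets of transactions that committed during its lifetime, it avoids tracking individual read records, which is what keeps the mechanism cheap relative to classical validation.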
comp-sci  database  mvcc  performance  design  architecture  filetype:pdf  paper  serialisability  linearisability 
march 2019 by jabley
[no title]
The FoundationDB Record Layer is an open source library
that provides a record-oriented datastore with semantics
similar to a relational database, implemented on top of FoundationDB, an ordered, transactional key-value store. The
Record Layer provides a lightweight, highly extensible way
to store structured data. It offers schema management and a
rich set of query and indexing facilities, some of which are
not usually found in traditional relational databases, such
as nested record types, indexes on commit versions, and indexes that span multiple record types. The Record Layer is
stateless and built for massive multi-tenancy, encapsulating
and isolating all of a tenant’s state, including indexes, into a
separate logical database. We demonstrate how the Record
Layer is used by CloudKit, Apple’s cloud backend service, to
provide powerful abstractions to applications serving hundreds of millions of users. CloudKit uses the Record Layer
to host billions of independent databases, many with a common schema. Features provided by the Record Layer enable
CloudKit to provide richer APIs and stronger semantics, with
reduced maintenance overhead and improved scalability.
filetype:pdf  foundationdb  apple  paper  comp-sci  database  scale  design  experience 
january 2019 by jabley
[no title]
The fastest plans in MPP databases are usually those with
the least amount of data movement across nodes, as data
is not processed while in transit. The network switches
that connect MPP nodes are hard-wired to perform packet-forwarding logic only. However, in a recent paradigm shift,
network devices are becoming “programmable.” The quotes
here are cautionary. Switches are not becoming general purpose computers (just yet). But now the set of tasks they can
perform can be encoded in software.
In this paper we explore this programmability to accelerate OLAP queries. We determined that we can offload
onto the switch some very common and expensive query
patterns. Thus, for the first time, moving data through
networking equipment can contribute to query execution.
Our preliminary results show that we can improve response
times on even the best agreed upon plans by more than 2x
using 25 Gbps networks. We also see the promise of linear
performance improvement with faster speeds. The use of
programmable switches can open new possibilities of architecting rack- and datacenter-sized database systems, with
implications across the stack.
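The abstract does not name the offloaded query patterns; in-network partial aggregation is a commonly cited candidate for this kind of offload, so the sketch below uses it purely as an assumption. The shape of the win is that each hop combines partial group-by results, so less data crosses the network toward the coordinating node:

```python
# Hypothetical illustration of in-network partial aggregation for a
# distributed SUM ... GROUP BY. The specific offloaded pattern is an
# assumption, not taken from the paper.

def partial_aggregate(rows):
    """Per-node partial aggregation over (key, value) rows."""
    acc = {}
    for key, value in rows:
        acc[key] = acc.get(key, 0) + value
    return acc

def merge(partials):
    """What an aggregating device on the path (or the coordinator) does."""
    out = {}
    for p in partials:
        for key, value in p.items():
            out[key] = out.get(key, 0) + value
    return out

node_a = partial_aggregate([("x", 1), ("y", 2)])
node_b = partial_aggregate([("x", 3)])
print(merge([node_a, node_b]))  # → {'x': 4, 'y': 2}
```

Whatever the pattern, the principle matches the abstract's framing: data is normally inert while in transit, so any work a programmable switch performs on it is work the endpoints no longer pay for.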
filetype:pdf  paper  comp-sci  database  networking  hardware  optimisation  datacenter  design 
january 2019 by jabley