jabley : hardware (116)

The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design
The past decade has seen a remarkable series of advances in machine learning, and in particular deep learning approaches based on artificial neural networks, that have improved our ability to build more accurate systems across a broad range of areas, including computer vision, speech recognition, language translation, and natural language understanding tasks. This paper is a companion to a keynote talk at the 2020 International Solid-State Circuits Conference (ISSCC) discussing some of the advances in machine learning and their implications for the kinds of computational devices we need to build, especially in the post-Moore's Law era. It also discusses some of the ways that machine learning may be able to help with aspects of the circuit design process. Finally, it provides a sketch of at least one interesting direction towards much larger-scale multi-task models that are sparsely activated and employ much more dynamic, example- and task-based routing than the machine learning models of today.
machine-learning  future  hardware  design  cpu 
8 weeks ago by jabley
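
The sparsely activated, dynamically routed models sketched at the end of that abstract are in the spirit of mixture-of-experts layers. Below is a minimal, hypothetical NumPy sketch (my own illustration, not code from the paper): a per-example gate scores a set of expert weight matrices and only the top-k experts run for each input, so most parameters stay idle per example.

```python
# Minimal mixture-of-experts-style sketch of sparse, per-example routing.
# Hypothetical illustration only; names and sizes are made up.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 16, 8, 2
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_layer(x):
    """Route each example to its top-k experts; the other experts stay idle."""
    scores = x @ gate_w                                # (batch, n_experts) gating logits
    chosen = np.argsort(-scores, axis=1)[:, :top_k]    # per-example expert choice
    out = np.zeros_like(x)
    for i, row in enumerate(x):
        sel = chosen[i]
        w = np.exp(scores[i, sel])
        w /= w.sum()                                   # softmax over the selected experts
        out[i] = sum(wi * (row @ experts[e]) for wi, e in zip(w, sel))
    return out

batch = rng.standard_normal((4, d_model))
print(moe_layer(batch).shape)   # (4, 16): only 2 of 8 experts ran for each example
```

The point of the sketch is the compute pattern, not the model quality: capacity grows with the number of experts while per-example cost grows only with top_k, which is what makes such models interesting for future hardware.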
[no title]
The fastest plans in MPP databases are usually those with the least amount of data movement across nodes, as data is not processed while in transit. The network switches that connect MPP nodes are hard-wired to perform packet-forwarding logic only. However, in a recent paradigm shift, network devices are becoming "programmable." The quotes here are cautionary. Switches are not becoming general-purpose computers (just yet). But now the set of tasks they can perform can be encoded in software.

In this paper we explore this programmability to accelerate OLAP queries. We determined that we can offload onto the switch some very common and expensive query patterns. Thus, for the first time, moving data through networking equipment can contribute to query execution. Our preliminary results show that we can improve response times on even the best agreed-upon plans by more than 2x using 25 Gbps networks. We also see the promise of linear performance improvement with faster speeds. The use of programmable switches can open new possibilities for architecting rack- and datacenter-sized database systems, with implications across the stack.
filetype:pdf  paper  comp-sci  database  networking  hardware  optimisation  datacenter  design 
january 2019 by jabley
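
The abstract's idea of offloading common query patterns into the switch can be pictured as partial aggregation performed on data in flight. Here is a toy Python sketch of that idea (my own illustration under that assumption, not the paper's design; real deployments program switch ASICs, typically in P4): a "switch" folds per-group partial sums from worker streams so the coordinator receives one row per group instead of every tuple.

```python
# Toy model of in-network partial aggregation for a GROUP BY / SUM query.
# Hypothetical illustration of the idea only; not the paper's implementation.
from collections import defaultdict

def worker_stream(rows):
    """A worker node emits (group_key, value) tuples toward the coordinator."""
    yield from rows

def switch_aggregate(streams):
    """The 'switch': folds tuples in transit, forwarding one partial sum per group."""
    partials = defaultdict(int)
    for stream in streams:
        for key, value in stream:
            partials[key] += value
    return partials.items()          # far fewer tuples than the workers sent

def coordinator(partial_rows):
    """Final merge is cheap because the network already did most of the reduction."""
    totals = defaultdict(int)
    for key, value in partial_rows:
        totals[key] += value
    return dict(totals)

workers = [
    worker_stream([("eu", 3), ("us", 5), ("eu", 2)]),
    worker_stream([("us", 1), ("ap", 4), ("eu", 7)]),
]
print(coordinator(switch_aggregate(workers)))   # {'eu': 12, 'us': 6, 'ap': 4}
```

The data-movement saving is the whole argument: the coordinator sees a handful of partial sums rather than every tuple crossing the network, which is consistent with the abstract's claim that reducing in-transit data is what makes MPP plans fast.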