Fastpass: A Centralized "Zero-Queue" Datacenter Network

Fastpass is a datacenter network framework that aims for high utilization with zero queueing. It provides low median and tail latencies for packets, high data rates between machines, and flexible network resource allocation policies. The key idea in Fastpass is fine-grained control over packet transmission times and network paths.

A logically centralized arbiter controls and orchestrates all network transfers.


Note (August 2017): Flowtune is developing network monitoring and scheduling based on principles derived from this work. See the website for more details.

Zero network queues

Because the arbiter has knowledge of all current and scheduled transfers, it can choose slots and paths that yield the "zero-queue" property: the arbiter arranges for each packet to arrive at a switch on the path just as the next link to the destination becomes available.
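This timing idea can be sketched in a few lines. The function below is a hypothetical illustration (not the arbiter's actual code): given the earliest time each link along a chosen path becomes free, it computes a departure time such that the packet reaches every switch exactly when, or after, the next link is available, so it never waits in a queue.

```python
def earliest_conflict_free_start(link_free_at, hop_delay):
    """Sketch of zero-queue timing (hypothetical helper, uniform hop delay).

    link_free_at[i]: earliest time link i on the path is free.
    hop_delay: time for a packet to traverse one hop.
    Returns the departure time that avoids queueing at every switch."""
    start = 0
    for i, free_at in enumerate(link_free_at):
        # The packet reaches link i at start + i * hop_delay; delay the
        # departure if that link would still be busy at that moment.
        start = max(start, free_at - i * hop_delay)
    return start

# Link 1 is busy until t=5, so departure is pushed back to t=4:
# the packet then arrives at link 1 at t=5, just as it frees up.
print(earliest_conflict_free_start([0, 5, 2], 1))  # 4
```

Because the departure time already accounts for every link's availability, no per-switch buffering is needed along the path.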

Linux endpoint support

A loadable kernel module implements a Linux qdisc (queueing discipline) that intercepts outgoing packets just before they are passed to the NIC. The qdisc sends requests to the arbiter over specialized control sockets, also implemented in the kernel module, which speak a Fastpass transport protocol.
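One consequence of this design is that the endpoint need not contact the arbiter once per packet: it can batch outstanding demand per destination into a single request. The class below is an illustrative user-space sketch of that batching (the names and the per-destination byte counter are hypothetical, not the kernel module's data structures).

```python
from collections import defaultdict

MTU = 1500  # bytes carried per timeslot-sized packet (assumed)

class RequestAggregator:
    """Hypothetical sketch of batching qdisc demand into arbiter requests."""

    def __init__(self):
        self.pending_bytes = defaultdict(int)

    def enqueue(self, dst, nbytes):
        # Called when the qdisc intercepts an outgoing packet.
        self.pending_bytes[dst] += nbytes

    def make_request(self):
        # Drain pending demand into one request: (dst, timeslots needed),
        # rounding bytes up to whole MTU-sized timeslots.
        req = [(dst, -(-b // MTU)) for dst, b in self.pending_bytes.items()]
        self.pending_bytes.clear()
        return req

agg = RequestAggregator()
agg.enqueue("10.0.0.2", 4500)   # three full packets
agg.enqueue("10.0.0.2", 100)    # plus a small trailer -> 4 timeslots
agg.enqueue("10.0.0.3", 1500)
print(agg.make_request())       # [('10.0.0.2', 4), ('10.0.0.3', 1)]
```

Batching keeps the control traffic to the arbiter small relative to the data traffic being scheduled.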


Centralized arbiter

The arbiter is implemented using the DPDK framework. For each request, the arbiter performs two functions:

  1. Timeslot allocation: Assign the requester a set of timeslots in which to transmit this data. The granularity of a timeslot is the time taken to transmit a single MTU-sized packet over the fastest link connecting an endpoint to the network. The arbiter keeps track of the source-destination pairs assigned to each timeslot.
  2. Path selection: Choose a path through the network for each packet and communicate it to the requesting source.
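The core constraint behind timeslot allocation is that, per timeslot, each endpoint can send at most one MTU-sized packet and receive at most one. The sketch below illustrates that constraint with a simple greedy allocator (the paper's allocator is a pipelined matching algorithm; this simplified version is for illustration only).

```python
def allocate_timeslots(demands, horizon):
    """Greedy sketch of timeslot allocation (illustrative, not Fastpass's).

    demands: list of (src, dst) pairs, one per requested packet.
    Returns a list of (src, dst, slot) assignments within `horizon` slots."""
    src_busy = [set() for _ in range(horizon)]  # senders used per timeslot
    dst_busy = [set() for _ in range(horizon)]  # receivers used per timeslot
    assignments = []
    for src, dst in demands:
        for t in range(horizon):
            # A timeslot is usable only if both endpoints are still free in it.
            if src not in src_busy[t] and dst not in dst_busy[t]:
                src_busy[t].add(src)
                dst_busy[t].add(dst)
                assignments.append((src, dst, t))
                break
    return assignments

# Two packets contend for receiver B, so one is pushed to the next timeslot;
# A's second packet also moves to slot 1 because A already sends in slot 0.
print(allocate_timeslots([("A", "B"), ("C", "B"), ("A", "D")], horizon=4))
```

Tracking which sources and destinations are committed in each timeslot is what lets the arbiter guarantee that admitted packets never contend for an endpoint link.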

Improved fairness

Five rack servers each send a bulk TCP flow to a sixth receiving server. The experiment begins with one bulk flow; every 30 seconds a new flow arrives until all five are active for 30 seconds, and then one flow terminates every 30 seconds. Even with 1-second averaging intervals, baseline TCP flows achieve widely varying rates. In contrast, with Fastpass (bottom), the throughput curves for 3, 4, and 5 concurrent connections lie on top of one another: the Fastpass max-min fair timeslot allocator maintains fairness at fine granularity.
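The fairness objective above is the classic max-min criterion. The following sketch shows the underlying water-filling computation at the level of rates (the real allocator enforces this at timeslot granularity; this code is only an illustration of the principle).

```python
def max_min_fair(demands, capacity):
    """Water-filling sketch of max-min fairness (illustrative).

    Splits `capacity` among flows so that no flow receives more than it
    demands, and flows with unmet demand share the remainder equally."""
    alloc = {f: 0.0 for f in demands}
    remaining = dict(demands)          # unmet demand per flow
    cap = capacity
    while remaining and cap > 1e-9:
        share = cap / len(remaining)   # equal share of what's left
        for f in list(remaining):
            give = min(share, remaining[f])
            alloc[f] += give
            cap -= give
            remaining[f] -= give
            if remaining[f] <= 1e-9:   # flow fully satisfied: drop it
                del remaining[f]
    return alloc

# The small flow gets its full demand; the two large flows split the rest.
print(max_min_fair({"f1": 1.0, "f2": 10.0, "f3": 10.0}, capacity=9.0))
# {'f1': 1.0, 'f2': 4.0, 'f3': 4.0}
```

Applying this criterion per timeslot, rather than over long averaging windows, is what keeps the throughput curves of concurrent flows so tightly matched.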

Reduced latency

Histogram of ping RTTs with background load using Fastpass (blue) and baseline (red). Fastpass’s RTT is 15.5× smaller, even with the added overhead of contacting the arbiter.

Clone the Fastpass git repository:

git clone
export FASTPASS_DIR=`pwd`/fastpass

Contact the Fastpass developers.