Ring buffers in Python/Numpy

UPDATE: see Michel Pelletier's comment below on using numpy.roll.
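A minimal sketch of the roll-based idea, assuming new blocks are never longer than the buffer (note that np.roll copies the whole buffer on every write, trading speed for simplicity):

import numpy as np

def roll_extend(buf, x):
    "Shift the buffer left by len(x) and overwrite the tail with x."
    buf = np.roll(buf, -x.size)  # oldest samples wrap around to the end...
    buf[-x.size:] = x            # ...and are overwritten by the new block
    return buf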

Recently I’ve been developing a scrolling oscilloscope for real-time data. An important first step is to store the data in a non-real-time buffer, which will be input to the plotting functions.

Requirements

For my purposes, the length is around 10,000 to 100,000 single-precision floats, with blocks of 1,000 to 10,000 being added 20 to 40 times a second. The buffer will be read about 10 times a second as a numpy array.

Deque

The first place I looked for a ring buffer was in the standard library. Within collections there’s a promising data structure called deque which by default is a first-in-first-out queue, with an optional max length argument. Though a queue isn’t a ring buffer, it can have the same behavior, making my code base simpler and smaller.

from collections import deque
import numpy as np

def ringbuff_deque_test():
    ringlen = 100000
    ringbuff = deque(np.zeros(ringlen, dtype='f'), maxlen=ringlen)
    for i in range(40):
        ringbuff.extend(np.zeros(10000, dtype='f'))  # write
        np.array(ringbuff)  # read

Is it fast enough? Using IPython's %timeit:

In [32]: timeit ringbuff_deque_test()
 1 loops, best of 3: 19.7 s per loop

Sadly, no. Looking a little deeper, it turns out that reading a deque object into a numpy array is painfully slow. Converting the deque to a list first

np.array(list(ringbuff))

reduces the time to 448 milliseconds, still too slow. Removing the read command entirely brings our test to only 30 milliseconds! I’m not going to re-invent the plotting library, so no deque for this project. However, this does look like a viable option for non-numpy projects.

Writing a numpy class

The next step is to implement a ring buffer in numpy. Because I’m always adding arrays of length greater than 1, I only wrote an extend method.

class RingBuffer():
    "A 1D ring buffer using numpy arrays"
    def __init__(self, length):
        self.data = np.zeros(length, dtype='f')
        self.index = 0

    def extend(self, x):
        "adds array x to ring buffer"
        x_index = (self.index + np.arange(x.size)) % self.data.size
        self.data[x_index] = x
        self.index = x_index[-1] + 1

    def get(self):
        "Returns the first-in-first-out data in the ring buffer"
        idx = (self.index + np.arange(self.data.size)) % self.data.size
        return self.data[idx]

def ringbuff_numpy_test():
    ringlen = 100000
    ringbuff = RingBuffer(ringlen)
    for i in range(40):
        ringbuff.extend(np.zeros(10000, dtype='f')) # write
        ringbuff.get() #read

Is it fast enough?

In [33]: timeit ringbuff_numpy_test()
 100 loops, best of 3: 105 ms per loop

Yes: about 105 milliseconds of computation over a typical second in the application.

Not fast enough for your project? Look into Cython or Numba to implement the writes as an explicit loop instead of creating index arrays.
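For instance, a loop-based extend compiled with Numba might look like the sketch below; extend_loop and its arguments are my own names for illustration, not part of the class above:

import numpy as np
from numba import njit

@njit
def extend_loop(data, index, x):
    "Write x into ring buffer data starting at index; return the new index."
    n = data.size
    for i in range(x.size):
        data[(index + i) % n] = x[i]
    return (index + x.size) % n

The compiled loop writes elements in place, avoiding the temporary index arrays that extend and get allocate on every call.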

Next step: efficient scrolling plots in python (pyqtgraph?, galry?).

Making raster plots in python with matplotlib

Using matplotlib’s vlines function, raster plots are essentially a one-liner. However, I create these plots just occasionally enough that remembering how, exactly, to write that one-liner takes a few minutes. Therefore I’ve wrapped vlines in a simple function called raster. I hope you find it useful. If you have your own raster plot function I’d love to hear about it.
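For reference, a wrapper along these lines does the job; this is a sketch, not necessarily identical to the function in my code:

import matplotlib.pyplot as plt

def raster(event_times_list, color='k'):
    "Draw one row of vertical ticks per spike train."
    fig, ax = plt.subplots()
    for trial, times in enumerate(event_times_list):
        ax.vlines(times, trial + 0.5, trial + 1.5, colors=color)
    ax.set_ylim(0.5, len(event_times_list) + 0.5)
    ax.set_xlabel('time')
    ax.set_ylabel('trial')
    return ax

Calling raster([[1.0, 2.3, 3.1], [0.5, 2.8]]) followed by plt.show() produces one row of ticks per trial.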

[Figure: an example of the output of the raster function]

Interact with a simple neuron model

At the heart of neural processing is action potential generation, a spike in membrane voltage that travels out to initiate communication with other neurons. Even if we reduce all the chattering neural input to a single variable, the equations that govern the action potential, called the Hodgkin-Huxley model, are quite complex: a system of anywhere from four to over sixty coupled differential equations. This makes the model's dynamics difficult to analyze and computationally expensive to simulate.

Thus, theorists often use a simplified neuron model governed by a single differential equation called the Leaky Integrate and Fire model (LIF). In this model, the cell membrane slowly accumulates charge until a threshold point where an action potential is created and the voltage drops to a reset point.
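In code, the whole model is a few lines of Euler integration. Here is a minimal sketch; the parameter values are illustrative and not those used in the demo below:

import numpy as np

def lif(I, dt=0.1, tau=10.0, v_rest=-65.0, v_reset=-70.0,
        v_thresh=-50.0, r_m=10.0):
    "Euler-integrate the LIF membrane equation for input current I."
    v = np.full(len(I), v_rest)
    spikes = []
    for t in range(1, len(I)):
        dv = (-(v[t - 1] - v_rest) + r_m * I[t - 1]) / tau  # leaky integration
        v[t] = v[t - 1] + dt * dv
        if v[t] >= v_thresh:  # threshold crossed: spike, then reset
            spikes.append(t)
            v[t] = v_reset
    return v, spikes

# a constant input current drives regular spiking
v, spikes = lif(np.full(5000, 2.0))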

Because it is easy to gain an intuition for spiking neurons by playing with a simple LIF model, I built one for the browser using Processing and Processing.js.

The top graph represents the voltage of the cell over time, while the bottom graph displays the input current. Four types of input are available, including a 'mouse' mode that sets the input current from the horizontal mouse position.

Screenshot:

[Screenshot of the LIF simulation]

Explore for yourself the spiking dynamics of the model neuron here.

Code available here.

Sunday Project: One Dimensional Cellular Automata

I found Stephen Wolfram's A New Kind of Science at a used book store. It's a beautiful book, full of evolving patterns and shapes. As the title suggests, Wolfram intended the book to be a revolution for many, perhaps all, scientific fields. It has been ten years since publication, and science, it seems, has ignored Wolfram's revolution, and rightly so. The book is magnificent but misguided, a meticulously constructed temple with empty halls, having failed to convert the scientific masses.

As I flip through this tome I see both the chilling emptiness of a scientist who has lost touch with science and the delicate elegance of the work, the former giving the latter a frosty edge. The thesis of the book is that cellular automata, very simple programs, can recreate the complexities of the natural world. This argument is very similar to that of fractals, and like fractals, cellular automata have a strange beauty.

I wanted one of my own. Therefore, I spent a Sunday morning coding a one-dimensional automaton in python, using numpy and matplotlib. The code is available on github. For the final print, I chose "rule 30", one of the simplest chaotic automata. My roommates decided 50 iterations was the most aesthetic; here is the result:

[Figure: rule 30 after 50 iterations]
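For the curious, here is a minimal rule-30 sketch in numpy; this is a reconstruction, not the code from the github repo:

import numpy as np
import matplotlib.pyplot as plt

def automaton(rule=30, steps=50, width=101):
    "Evolve a 1D elementary cellular automaton from a single seed cell."
    rule_bits = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    grid = np.zeros((steps, width), dtype=np.uint8)
    grid[0, width // 2] = 1  # single live cell in the center
    for t in range(1, steps):
        left = np.roll(grid[t - 1], 1)    # periodic boundaries, for simplicity
        right = np.roll(grid[t - 1], -1)
        idx = 4 * left + 2 * grid[t - 1] + right  # 3-bit neighborhood code
        grid[t] = rule_bits[idx]  # look up each cell's next state
    return grid

plt.imshow(automaton(), cmap='binary', interpolation='nearest')
plt.show()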

Philip Cherny, a friend and student of art history, had the following to say,

This image kind of draws my attention to the very human desire to determine order or make sense of objects, the way the brain processes information. It is largely due to the fact that there does seem to be some complex order at work. If it were just "noise," random scribbles or an organic form, I think I wouldn't try to read it like a puzzle. It arouses my curiosity, somewhat analogous to an alien looking at an earthling's alphabet for the first time, noticing repeating symbols and trying to determine their pattern.

Finally, I ordered a print for the house. It's a little too simple, too mathematical to really be seen as art, but I think it looks great on the wall and is a good conversation piece.

LaTeX for Students: Times New Roman, 1 inch margins, double spaced

I love LaTeX. I'm addicted to its (almost) clean separation of content and layout, excellent math typesetting, and automatic numbering and citation generation. Unfortunately, professors often require 12pt Times New Roman font, 1" margins, and double spacing. It takes some work to beat the default article class into this drab form, so here is a simple "student" template I've created. Apacite is great for in-line citations; see the included example.


\documentclass[12pt]{article}
\usepackage[margin=1in]{geometry} %one inch margins
\renewcommand{\baselinestretch}{2} %double space, safe for fancy headers
\usepackage{pslatex} %Times font
\usepackage{apacite} %apa citation style
\bibliographystyle{apacite}
%\usepackage[pdfborder={0 0 0}]{hyperref}%for hyperlinks without ugly boxes
\usepackage{graphicx} %for figures
\usepackage{enumerate} %for lists
\usepackage{fancyhdr} %header
\pagestyle{fancy}
\usepackage[font={small,sf},format=plain,labelfont=bf,up]{caption}
\fancyhf{}
\fancyhead[l,lo]{YOUR NAME \textit{ SHORT TITLE}} %left top header
\fancyhead[r,ro]{\thepage} %right top header
\begin{document}
\title{The Very Interesting Title}
\author{Your Name}
\date{\today}
\maketitle
\thispagestyle{empty}
\bigskip
%\tableofcontents
\pagebreak
\setcounter{page}{1}
\section{Introduction}
APAcite examples:

\cite{Sample2011}

\shortcite{Sample2011} %et al.

\citeA{Sample2011} %in-line

\pagebreak
\bibliography{yourbibliography} %rename to your .bib file
\end{document}
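One note on compiling: apacite citations show up as question marks until BibTeX has run, so build with the usual four-pass sequence:

pdflatex paper
bibtex paper
pdflatex paper
pdflatex paper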

The final product looks like this:

[Figure: a rendered page from the template]

Eyeborg documentary

THEY CALL ME EYEBORG. This short documentary/advertisement is led by Rob Spence, a man who lost his eye in a shotgun accident, and provides a heartfelt and awesome review of modern prostheses. On the subject of neural devices, he says, "we are only just beginning to experiment with neural prosthetics." While this is generally true, cochlear implants are a widely successful neural prosthesis, yet they received no mention.

Beyond the technology itself, the determination of those interviewed is deeply inspiring.

1988 Cognition Docudrama

A 1988 Dutch docudrama about the ideas of Douglas Hofstadter? Yes please.

Douglas Hofstadter is a cognitive scientist best known for his beautiful book Gödel, Escher, Bach: an Eternal Golden Braid. Later, Hofstadter co-edited a volume of essays with philosophy heavyweight Daniel Dennett called The Mind's I, and clearly someone in Holland was very excited about it, as they turned the collection into a "docudrama" which has been uploaded in its entirety.

Aside from the funky music and fashion, this video is a wonderful mix of Good Old-Fashioned Artificial Intelligence and materialist philosophy. Someone with a fondness for neurons might be a little put off by Hofstadter's silly 'careenium' and the vagueness of the physical foundations of symbols, but it's otherwise hard to understand what cognitive science was like before the neuroscience and Bayesian revolutions. I find thinking about high-level cognition with Hofstadter and Dennett refreshing and fun, and it outlines the neuroscientific terra incognita ahead.

My favorite optical illusion

[Image: Akiyoshi's snake illusion]

As you scan the image, "snakes" in your peripheral vision appear to coil. The illusion also works in gray-scale, with illusory motion travelling from black→dark gray→white→light gray→black. Here, blue and yellow replace dark gray and light gray respectively.

So how does it work? A few competing theories have been proposed, but according to Conway et al. 2005, motion-sensitive neurons respond more slowly to low-contrast areas than to high-contrast areas. This difference in response time creates the illusion of motion… or that's one theory. Complex perceptual phenomena are pretty difficult to explain using even the most advanced experimental techniques so… you know… be skeptical. For more information, see the article in the Journal of Neuroscience.

This image was created by Japanese psychologist Akiyoshi Kitaoka. You can find this illusion and many others on his website.