Wednesday, August 28, 2013

unit circle under different norms

Using the p-norm defined as
\[||x||_{p} = \left(\sum_{i=1}^{n} |x_{i}|^{p}\right)^{1/p}\]
for \(p>0\), we can look at what the unit circle looks like under each norm. Note that for \(0<p<1\) the formula above is not actually a norm, since it violates the triangle inequality. Regardless, let's look at what the unit circle looks like in 2D for a few values of \(p\):
Isn't it cool how the corners of the unit circle get pushed out as the order of the norm increases! 
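As a rough sketch (not the original plotting code), unit circles like these can be drawn by rescaling points on the Euclidean circle so each has unit p-norm:

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 400)
for p in [0.5, 1, 2, 4, np.inf]:
    x, y = np.cos(theta), np.sin(theta)
    if np.isinf(p):
        # infinity norm: the largest coordinate magnitude
        norm = np.maximum(np.abs(x), np.abs(y))
    else:
        norm = (np.abs(x) ** p + np.abs(y) ** p) ** (1 / p)
    # dividing each point by its p-norm places it on the unit "circle"
    plt.plot(x / norm, y / norm, label=f"p = {p}")

plt.gca().set_aspect("equal")
plt.legend()
plt.show()
```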


Friday, August 16, 2013

The importance of high spike rates


My lab uses the Neural Engineering Framework (NEF) as a principled method of implementing computations on our neuromorphic hardware. The NEF uses the spike rates of neurons to represent values and to calculate the weights connecting neurons. A fundamental question, then, is how to estimate the spike rate of a neuron. Real neurons can't just tell you their spike rate (although simulated ones certainly can), so you have to estimate it from the spikes the neurons output.

A simple, reasonable first approach to estimating a neural spike rate would be to count spikes over a time window and divide the count by the window's length. This works in the simple case where you can observe for as long as you like and the spike rate is unchanging. As you make the observation window longer and longer, the estimate converges to the actual spike rate.

Unfortunately, we don't have the luxury of time, and our systems work in dynamic environments where variables are constantly changing. Using a box window is also limited by quantization: a spike is either in the window or not, so your spike rate estimate is quantized to multiples of \(1/T\), where \(T\) is the window length. As \(T\to\infty\), the estimate could take on any value, but in operation we typically limit \(T\) to 100ms, so with a box window our spike rate estimates would be quantized to multiples of 10Hz, which is not very precise.
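To make the quantization concrete, here is a small sketch (with assumed numbers, not our actual system) showing that a 100ms box window can only report multiples of 10Hz, even when the true rate is 47Hz:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 0.1                          # 100 ms observation window
true_rate = 47.0                 # Hz; deliberately not a multiple of 10
# spike counts in the window drawn from a Poisson process
counts = rng.poisson(true_rate * T, size=10)
estimates = counts / T           # every estimate is a multiple of 1/T = 10 Hz
print(estimates)
```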

A better way to estimate spike rates for calculating the weights to connect neurons in our network would be to see spike rates in a manner similar to how the neurons will see them. 

How does a neuron see a spike rate? Spikes interact with a neuron through its synapse, which in our neuron model is a low-pass filter. The synapse converts the spikes (impulses) into a current that is injected into the neuron's soma, and it is this current that can be interpreted as the spike rate the neuron sees.

Therefore, to put ourselves in the perspective of the neuron when collecting spike rate data to training the network, we can use a low pass filter, with time constant matched to the neural synapses, to estimate the spike rate instead of counting spikes over a time window. For Poisson and uniform spike inputs, this method looks something like this:
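A minimal sketch of this idea (assumed parameters, not our hardware implementation): treat each spike as a unit-area impulse and run the spike train through a first-order low-pass filter with \(\tau\) = 100ms, so that the filter output's steady-state mean equals the spike rate:

```python
import numpy as np

def lowpass_rate_estimate(spike_times, tau=0.1, dt=0.001, t_end=1.0):
    """Spike rate as seen through a first-order low-pass synapse."""
    n_steps = int(t_end / dt)
    spikes = np.zeros(n_steps)
    idx = np.round(np.asarray(spike_times) / dt).astype(int)
    spikes[idx[idx < n_steps]] = 1.0
    decay = np.exp(-dt / tau)
    r = np.zeros(n_steps)
    r[0] = spikes[0] / tau
    for i in range(1, n_steps):
        # each unit-area spike bumps the filter output by 1/tau,
        # and the output decays exponentially between spikes
        r[i] = r[i - 1] * decay + spikes[i] / tau
    return r

# uniform 100 Hz spike train: the estimate settles near 100 after ~5 tau
r = lowpass_rate_estimate(np.arange(0.0, 1.0, 0.01))
print(r[-1])
```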
Spike rate estimation for various rates over time. Blue traces are filter estimate when input spikes are generated uniformly. Red traces are filter estimate when input spikes are generated from a Poisson process. Filter time constant is 100ms. Spike rate estimates are normalized to the actual firing rate, so the correct estimate on the y-axis is 1. Vertical dotted lines denote the first 5 time constants of time. Horizontal dotted lines denote exponential progression towards steady state value.
A few things to note from the plot:
  • As spike rates increase, our estimate gets tighter to the actual spike rate
  • A uniform spike input is understandably less noisy than a Poisson spike input.
  • It takes time for the synapse to reach its steady state spike rate estimate. Note how the traces follow the saturating exponential function indicated by the dotted line. 
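The saturating exponential in the plot is just the synapse's step response: if spikes at rate \(f\) begin at \(t=0\), the filtered estimate rises (on average) as
\[r(t)=f\left(1-e^{-t/\tau}\right),\]
reaching about 63% of the steady-state value after one time constant and about 99% after five, which is why the first \(5\tau\) are marked with dotted lines.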
Why does it make sense that as the spike rate increases, the estimate gets tighter to the actual spike rate? Consider the Poisson spike process, where the number of spikes seen in a window of time follows a Poisson distribution, whose mean equals its variance, \(\mu=\sigma^{2}\). As the rate increases, the Poisson distribution looks more and more like a Gaussian, for which about 68% of the probability lies within one standard deviation of the mean. Looking at the ratio between the standard deviation and the mean,
\[\frac{\sqrt{\sigma^{2}}}{\mu}=\frac{\sqrt{\mu}}{\mu}=\frac{1}{\sqrt{\mu}},\]
we see that as the mean rate increases, the spread around the mean drops with the square root of the mean, which explains why the estimate gets tighter to the actual spike rate with higher rates.
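This scaling is easy to check numerically; a quick sketch (assumed sample sizes) comparing the ratio \(\sigma/\mu\) of Poisson counts against \(1/\sqrt{\mu}\):

```python
import numpy as np

rng = np.random.default_rng(1)
for mean_count in [10, 100, 1000]:
    samples = rng.poisson(mean_count, size=100_000)
    ratio = samples.std() / samples.mean()
    # relative spread shrinks like 1/sqrt(mean)
    print(mean_count, ratio, 1 / np.sqrt(mean_count))
```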

It's intuitive that a Poisson spike input would be noisier than a uniform input. But how much noisier? Looking at the distribution of spike rate estimates collected over time after allowing the system to settle (i.e., run for 5 time constants) gives us an idea of what the synapse estimates as the rate:
Distribution (i.e. histogram) of spike rate estimates with a uniform spike input. x-axis is spike rate estimate, and y-axis is histogram counts so area of histogram is 1.
Distribution (i.e. histogram) of spike rate estimates with a Poisson spike input. x-axis is spike rate estimate, and y-axis is histogram counts so area of histogram is 1.
Of note from these plots:
  • The estimate of the Poisson spike rate begins to look Gaussian as the rate increases.
  • The estimate of the uniform spike rate begins to look uniform as the rate increases. Surprise surprise!
  • The x-axis scaling gets tighter with high firing rates, reflecting that our estimate gets better and better.
Code for this post can be found here.