Sunday, June 24, 2012

Adding a printer in Ubuntu

The problem is that nobody remembers the root password, which the system prompts for when you select "add printer" in either the printers GUI or localhost:631.

The way around this was to run sudo system-config-printer from the command line...

Synapse Calibration procedure

on ng-thalamu

There are 4 steps to the calibration:
  1. gleak
  2. Erev
  3. txmt
  4. tau_lpf
For gleak (Erev)
  1. Arrange the board's xml file so the chip and synapse numbers you are looking to calibrate are first in the file
  2. Match the chip and synapse numbers in syn_f_vs_g_sleak.py (f_vs_erev_sleak.py) to the desired chip and synapse numbers AND BE SURE TO SAVE.
  3. In spring, run syn_f_vs_g_sleak.py (f_vs_erev_sleak.py)
  4. Set the chip and synapse numbers in calib_syn_f_vs_g_sleak.py (calib_f_vs_erev_sleak.py) to the desired chip and synapse numbers AND BE SURE TO SAVE.
  5. Run calib_syn_f_vs_g_sleak.py (calib_f_vs_erev_sleak.py) in ipython --pylab
  6. Execute extract_syn_param_g_lksoma.py (extract_syn_param_erev_lksoma.py) in ipython --pylab, and then call run_default with the appropriate bif file, board name, chip number, and synapse number.
  7. Use median values in histogram as parameters in xml file
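Step 7 boils down to taking the median of the extracted per-neuron parameters. A minimal sketch with made-up values (the real numbers come out of the extract script's histogram):

```python
import numpy as np

# Hypothetical per-neuron gleak values; in practice these are the values
# histogrammed by extract_syn_param_g_lksoma.py.
gleak_values = np.array([0.98, 1.02, 1.05, 0.95, 1.01, 1.10, 0.99])

# Use the median rather than the mean so outlier neurons don't skew the
# parameter written back into the board's xml file.
gleak_param = np.median(gleak_values)
```

The median is what gets entered as the parameter in the xml file.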
For txmt (tau_lpf)
  1. In neuro-boa/apps/calibrate_neuron/calibrate_synapse/calibrate_pe/calibrate_pe.cpp (calibrate_vleakpf/calibrate_vleakpf.cpp):
    1.  make sure the appropriate chip calibration is pushed back
      • CALIBRATION_DAC_FILE.push_back(<check this>);
      • CALIBRATION_ADC_FILE.push_back(<check this>);
    2. make sure the appropriate chip number is pushed back
      • selected_chips.push_back(<chip number>);
    3. select the appropriate chip in the for loop around ~line 621
  2. run make
  3. run calibrate_pe (calibrate_vleakpf)
  4. For txmt:
    1. change pw_values.csv to pw_values_<board>_<chip>.csv
    2. In fit_2d_1coeff.py
      • change filename to match data
      • select synapse
    3. run fit_2d_1coeff.py
    4. set C1 to median value
    5. set C3 to 0
  5. For tau_lpf:
    1. change the data/ folder name to data<chip num>/
    2. In fit_tau.py
      1. change data folder to match chip number
    3. run fit_tau.py
    4. set tau_lpf to median value
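For context, here is a guess at what the one-coefficient fit in fit_2d_1coeff.py amounts to, assuming the pulse-width data are modeled as \(y \approx C_1 x\) per synapse (the script's actual model may differ; the data values here are fabricated):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.1, 1.0, 50)

# Hypothetical per-synapse measurements, each roughly y = C1 * x plus noise;
# the real data would come from pw_values_<board>_<chip>.csv.
c1_fits = []
for true_c1 in (2.0, 2.1, 1.9):
    y = true_c1 * x + 0.01 * rng.standard_normal(x.size)
    (c1,), *_ = np.linalg.lstsq(x[:, None], y, rcond=None)
    c1_fits.append(c1)

C1 = np.median(c1_fits)  # step 4: set C1 to the median fitted value
C3 = 0.0                 # step 5: set C3 to 0
```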

Saturday, June 23, 2012

Synaptic Pulse extender

On Neurogrid, the synaptic conductance, \(x\), is governed by
\[\tau \dot{x} = -x + g_{max}\sum_ip(t-t_i)\]
where \(p(t)\) is a square pulse of length \(t_{xmt}\) resulting from spikes arriving at time \(t_i\), \(g_{max}\) is the maximum synaptic conductance, and \(\tau\) is the synaptic time constant.

Let's analyze the steady state conductance induced by a Poisson spike train where the interarrival times of spikes are distributed exponentially with pdf \(f(t) = \lambda e^{-\lambda t}\).

In steady state, \(x\) fluctuates around a fixed average, so \(\langle \dot{x} \rangle = 0\). Averaging the dynamics gives \(0 = -\langle x \rangle + g_{max}\langle p(t)\rangle\), and therefore \(\langle x \rangle = g_{max}\langle p(t)\rangle\).

There is one wrinkle in our analysis: when a spike arrives within \(t_{xmt}\) of the previous spike, the two resulting pulses do not add linearly. The second pulse merely extends the previous pulse by the time between the spikes.


\[\langle p(t)\rangle=\langle \mathrm{spike\ rate}\rangle \langle \mathrm{average\ pulse\ area}\rangle\]

For a Poisson process, the rate is simply \(\lambda\).

For a full pulse, the area is simply \(t_{xmt}\). When the next spike arrives within \(t_{xmt}\), the pulse contributes area only up to that spike, i.e. \(t=\Delta t\). So

\[\langle \mathrm{average\ pulse\ area}\rangle = \int_0^{t_{xmt}} t \lambda e^{-\lambda t} dt + \int_{t_{xmt}}^\infty t_{xmt}\lambda e^{-\lambda t} dt\]

\[ = \left. t e^{-\lambda t}\right|_{t_{xmt}}^0 + \int_0^{t_{xmt}} e^{-\lambda t} dt + \left. t_{xmt} e^{-\lambda t} \right|_\infty^{t_{xmt}}\]
\[ = -t_{xmt} e^{-\lambda t_{xmt}} + \left. \frac{1}{\lambda} e^{-\lambda t} \right|_{t_{xmt}}^0 + t_{xmt} e^{-\lambda t_{xmt}} \]

\[ = \frac{1}{\lambda} (1-e^{-\lambda t_{xmt}})\]

\[\langle p(t)\rangle = \lambda \frac{1}{\lambda} (1-e^{-\lambda t_{xmt}})\]
\[\langle p(t)\rangle = (1-e^{-\lambda t_{xmt}})\]

The average synaptic conductance is then
\[\langle x \rangle = g_{max}(1-e^{-\lambda t_{xmt}})\]
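As a sanity check on this result, a quick Monte Carlo sketch (parameter values are arbitrary): generate Poisson spikes, merge the overlapping \(t_{xmt}\) pulses, and compare the fraction of time the pulse is high against \(1-e^{-\lambda t_{xmt}}\).

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t_xmt, T = 50.0, 0.01, 2000.0   # rate (Hz), pulse length (s), sim time (s)

spikes = np.sort(rng.uniform(0.0, T, rng.poisson(lam * T)))

# Each spike opens (or extends) a pulse of length t_xmt; overlapping pulses
# merge, so sum the lengths of the merged "high" intervals.
high_time = 0.0
start = end = None
for t in spikes:
    if end is None or t > end:        # previous pulse already ended
        if end is not None:
            high_time += end - start
        start, end = t, t + t_xmt
    else:                             # collision: extend the current pulse
        end = t + t_xmt
if end is not None:
    high_time += min(end, T) - start

occupancy = high_time / T              # empirical <p(t)>
predicted = 1.0 - np.exp(-lam * t_xmt) # 1 - e^{-lambda * t_xmt}
```

With these numbers the empirical occupancy lands right on the predicted \(1-e^{-0.5}\approx 0.39\).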

Tuesday, June 19, 2012

Useful Laplace techniques

Final Value Theorem:
\[\lim_{t \to \infty} x(t) = \lim_{s \to 0} sX(s)\]
Don't forget the extra \(s\) on the right-hand side.
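A quick numerical illustration (my own example): take \(x(t) = 1 - e^{-t}\), whose transform is \(X(s) = \frac{1}{s(s+1)}\). Both sides of the theorem come out to 1, and without the extra \(s\) the right side would blow up as \(s \to 0\).

```python
import math

# x(t) = 1 - e^{-t}  <-->  X(s) = 1 / (s (s + 1))
def X(s):
    return 1.0 / (s * (s + 1.0))

s = 1e-8
final_value = s * X(s)                # lim_{s->0} s X(s) = 1/(s+1) -> 1
time_domain = 1.0 - math.exp(-50.0)   # x(t) at large t, also -> 1
```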


Monday, June 18, 2012

Linear Systems and DiffEqs

I came across the most wonderful diagram on Richard Prager's Cambridge engineering mathematics course site relating differential equations to linear systems analysis. If only I had seen this when I was taking signals and systems!
differential equation --> solve --> compute step response --> differentiate --> voilà! You have the impulse response and can now calculate the response to any input.
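That pipeline, sketched for a toy first-order system \(\tau\dot{y} + y = u\) (my own example, not from the course site):

```python
import numpy as np

tau, dt = 0.5, 1e-3
t = np.arange(0.0, 5.0, dt)

# 1. Solve the differential equation tau*y' + y = u for a unit step input:
step_response = 1.0 - np.exp(-t / tau)

# 2. Differentiate the step response to get the impulse response:
h = np.gradient(step_response, dt)

# 3. Convolve with any input to get that input's response:
u = np.sin(2.0 * np.pi * t)
y = np.convolve(h, u)[: t.size] * dt

# Check against the analytic impulse response (1/tau) e^{-t/tau}:
h_exact = np.exp(-t / tau) / tau
```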

I've been having trouble focusing recently. I think it's because I have a lab presentation in a couple of weeks but feel like I have few results to share with the group.
There is hope though. I should talk to Nick about his synaptic gain modulation again and see what conclusions we drew from his presentation.

Things I can present:
  • Kalman filter update (i.e., integrator update)
  • Synaptic gain modulation
    • does the QIF neuron support it?
  • Robot plan
    • simulation results would be great

Friday, June 15, 2012

Remote desktop with x11vnc

x11vnc allows remote desktop access to a machine's existing X session.

It's a simple system to use:

  • ssh into the computer you would like to remotely access.
  • run x11vnc on the remote computer
    • it should print out something like: "The VNC desktop is:      <remote_host>:<display number>"
  • on your local computer, run vncviewer <remote_host>:<display number>

This guy developed x11vnc.

Tuesday, June 12, 2012

Setting up user accounts on ubuntu


Creating an account:
useradd -m <username>

Don't forget the -m! Without it the user won't have a home folder (and hence no desktop), which is very confusing.

Change shell to bash
chsh -s /bin/bash <username>

Man, there are so many little details under the hood of Ubuntu that you can miss! Yes, it gives you a lot of control, but the learning curve is super steep!

Monday, June 11, 2012

Linearization

Linearization is the idea of approximating a differentiable function around a point with a line.
For a continuous function \(f(x)\), we linearize \(f(x)\) around point \(a\) as
\[f(x)\approx f(a) + f'(a)(x-a).\]

More interesting things happen in higher dimensions:
\[f(\mathbf{x}) \approx f(\mathbf{a}) + \left. \frac{\partial f(\mathbf{x})}{\partial x_1}\right|_{\mathbf{a}}(x_1 - a_1) + \left. \frac{\partial f(\mathbf{x})}{\partial x_2}\right|_{\mathbf{a}}(x_2 - a_2) + \ldots \]
or
\[f(\mathbf{x}) \approx f(\mathbf{a}) + \left. \nabla f \right|_{\mathbf{a}}  \cdot ({\mathbf{x}} - {\mathbf{a}}) .\]
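A quick numerical check of the multivariate formula (my own example): linearize \(f(x, y) = e^x \sin y\) around \(\mathbf{a} = (0, \pi/2)\).

```python
import math

def f(x, y):
    return math.exp(x) * math.sin(y)

a = (0.0, math.pi / 2.0)

# Gradient of f: (e^x sin y, e^x cos y), which at (0, pi/2) is (1, 0)
df_dx = math.exp(a[0]) * math.sin(a[1])
df_dy = math.exp(a[0]) * math.cos(a[1])

# Evaluate the linear approximation a small step away from a
x, y = 0.1, math.pi / 2.0 + 0.1
approx = f(*a) + df_dx * (x - a[0]) + df_dy * (y - a[1])
exact = f(x, y)
# Close to a, the error of the approximation is second order in the step.
```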

Saturday, June 9, 2012

You My Friend

My Russian coworker explained to me why Russians say "You my friend".
English sentences have subjects and verbs.
"You are my friend"
subject: you
verb: are

But here, the words "you" and "are" are redundant.  In Russian, these two words would simply be lumped together into "you", so "you are my friend" becomes (with thick accent) "you my friend".