---
layout: notes
title: 16.stochastic process
date: 2016-08-05
author: ErbB4
summary: stochastic process
weight: 16
---

# Noise types motivated in single neuron models

## Why noise?

During simulation, simplified neuron models such as the SRM (spike response model) or the leaky integrate-and-fire model produce regular firing. In biological reality, however, inter-spike intervals are approximately exponentially distributed, as expected from a point process of spike generation.

## Escape noise

Focus on the firing threshold: replace the fixed firing threshold with a firing probability that depends on the difference between the membrane potential and the threshold, $$ \rho = f(u-\theta) = \frac{1}{\Delta}\int_{-\infty}^{(u-\theta)} \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{x^2}{2\sigma^2}}\mathrm{d}x $$
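This integral is just a Gaussian cumulative distribution evaluated at $u-\theta$, so the escape rate is easy to evaluate numerically. A minimal sketch; the values of $\theta$, $\sigma$, and $\Delta$ are illustrative assumptions, not taken from the notes:

```python
import math

def escape_rate(u, theta=1.0, sigma=0.2, delta=1.0):
    """Escape rate rho = f(u - theta): Gaussian integral up to (u - theta),
    i.e. a normal CDF evaluated at (u - theta) / sigma, scaled by 1/delta."""
    z = (u - theta) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))) / delta

# rho rises smoothly as u approaches and crosses the threshold
for u in (0.5, 0.9, 1.0, 1.1):
    print(f"u = {u:.1f} -> rho = {escape_rate(u):.3f}")
```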

## Slow noise

Focus on the refractory period. $\eta$ is an exponential refractory kernel,

$\eta(s)=\eta_0 e^{\frac{-s}{\tau}}$

But the amplitude $\eta_0$ depends on a stochastic variable $r$:

$\eta_0(r)=\tilde \eta_0 e^{\frac{r}{\tau}}$
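A small sketch of how this noisy reset amplitude could be sampled; the distribution of $r$ and the numerical values are illustrative assumptions:

```python
import math
import random

tau = 20.0          # refractory time constant (ms), assumed value
eta0_tilde = -5.0   # baseline reset amplitude (mV), assumed value

def refractory_kernel(s, r):
    """eta(s) after a spike, with amplitude eta_0(r) = eta0_tilde * exp(r / tau)."""
    eta_0 = eta0_tilde * math.exp(r / tau)
    return eta_0 * math.exp(-s / tau)

# draw a fresh r for each spike, e.g. Gaussian around zero (assumed distribution)
r = random.gauss(0.0, 2.0)
print(refractory_kernel(5.0, r))
```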

## Diffusive noise

Focus on stochastic synaptic inputs to the neuron, which lead to fluctuations of the membrane potential. Whether the neuron fires or not is determined by the distribution of the membrane potential.

Let's look at an integrate-and-fire model, $$ \frac{\mathrm{d}}{\mathrm{d}t}u(t)=-\frac{u(t)}{\tau_m}+\frac{1}{C}I^{ext}(t)+\sum_j\sum_{t_j^{(f)}>\tilde t}w_j\delta(t-t_j^{(f)})+\sum_k\sum_{t_k^{(f)}>\tilde t} w_k\delta(t-t_k^{(f)}) $$ The $\delta(t-t_k^{(f)})$ terms represent the superposition of all pre-synaptic inputs, weighted by $w_k$. These input spike trains are generated by a Poisson process.
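A minimal simulation sketch of this idea, with Euler integration and a single pool of Poisson inputs; all parameter values and the 5-second duration are illustrative assumptions:

```python
import numpy as np

dt, tau_m = 0.1, 10.0                 # time step and membrane time constant (ms)
theta, u_reset = 1.0, 0.0             # firing threshold and reset value (a.u.)
n_inputs, rate, w = 100, 0.01, 0.1    # inputs, rate per input (spikes/ms), weight

rng = np.random.default_rng(0)
u, spikes = 0.0, []
for step in range(int(5000.0 / dt)):  # simulate 5 s
    # total presynaptic spike count in this bin: Poisson with mean n_inputs*rate*dt
    n_in = rng.poisson(n_inputs * rate * dt)
    u += (-u / tau_m) * dt + w * n_in
    if u >= theta:                    # threshold crossing: record a spike and reset
        spikes.append(step * dt)
        u = u_reset

isis = np.diff(spikes)
print(len(spikes), "spikes; CV of inter-spike intervals =", isis.std() / isis.mean())
```

With these assumed parameters the mean drive only just reaches the threshold, so spikes are triggered by the fluctuations and the inter-spike intervals come out irregular rather than clock-like.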

## From diffusive noise to escape noise

For a model with diffusive noise, the membrane potential is approximately Gaussian distributed around the noise-free trajectory $u_0(t)$. The probability of finding it near the threshold (given that the neuron has not fired yet) is $$ P(u\sim \theta) \propto \Delta t\, e^{-\frac{[u_0(t)-\theta]^2}{2\langle\Delta u^2(t)\rangle}} $$ From the theory of stochastic processes, the variance $\langle\Delta u^2(t)\rangle$ approaches a constant value $\sigma^2/2$.

So the probability density at $\theta$ can be rewritten as $$ f(u_0-\theta)=\frac{c_1}{\tau_m}e^{-\frac{[u_0(t)-\theta]^2}{\sigma^2}} $$ which has the same units of one over time as $P(u\sim\theta)/\Delta t$ and is known as an escape rate formula.

To capture how the density shifts when $u$ crosses $\theta$, the escape rate is extended with a term proportional to the rectified slope $[\dot u_0]_+$: $$ f(u_0,\dot u_0) = \left(\frac{c_1}{\tau_m} + \frac{c_2}{\sigma}[\dot u_0]_+\right) e^{-\frac{[u_0(t)-\theta]^2}{\sigma^2}}, $$ with the normalized distance $x(t)=\frac{u_0(t)-\theta}{\sigma}$.
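A short sketch of this extended escape rate; the constants $c_1$, $c_2$ and the other parameter values are illustrative assumptions:

```python
import math

def escape_rate_diffusive(u0, u0_dot, theta=1.0, sigma=0.2,
                          tau_m=10.0, c1=1.0, c2=1.0):
    """Escape rate f(u0, u0_dot) from the diffusive-noise picture: a Gaussian
    factor in the distance to threshold, plus a term growing with the
    rectified slope [u0_dot]_+ of the membrane potential."""
    x = (u0 - theta) / sigma                    # normalized distance to threshold
    slope_term = c2 / sigma * max(u0_dot, 0.0)  # [u0_dot]_+ rectification
    return (c1 / tau_m + slope_term) * math.exp(-x * x)

print(escape_rate_diffusive(0.9, 0.0))  # subthreshold, flat trajectory
print(escape_rate_diffusive(0.9, 0.5))  # subthreshold, rising toward threshold
```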

So the similarity between diffusive noise and escape noise is that both care about the difference between the current membrane potential and the threshold, $u-\theta$, and assign a firing probability based on this difference.

# Stochastic resonance

Note: noise can improve the signal-transmission properties of a neuronal system, especially in the sub-threshold regime.

Noise makes the neuron fire. The relevant quantity is the normalized distance to threshold, $$ |x|=|(u-\theta)/\sigma|. $$ If this normalized distance is small, the neuron fires with high probability (the dependence is exponential in $x^2$). If the noise $\sigma$ is very large, $x^2 \approx 0$ regardless of the input, so the output spikes no longer carry information about the signal.

Finding the optimal noise level: maximize the signal-to-noise ratio (SNR).

$$ \sigma^{opt} \approx \frac{2}{3}(\theta-u_\infty) $$ Since $\sigma^2=2\langle\Delta u^2\rangle$, the transmission is optimal if the stochastic fluctuations of the membrane potential have an amplitude: $$ \frac{3}{\sqrt 2}\sqrt{\langle\Delta u^2\rangle} \approx \theta-u_\infty $$
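Spelling out the algebra that connects the two statements: $$ \theta-u_\infty \approx \frac{3}{2}\,\sigma^{opt} = \frac{3}{2}\sqrt{2\langle\Delta u^2\rangle} = \frac{3}{\sqrt 2}\sqrt{\langle\Delta u^2\rangle}. $$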

# Stochastic firing and rate models

## Three rate models

### Analog neurons (averaging over time)

$$ v=\frac{n_{sp}(T)}{T} $$ $T$: time period

$n_{sp}(T)$: number of spikes counted over the window $T$

For constant current input $I_0$, the firing rate is a function of $I_0$:

$v=g(I_0)$, which is called the gain function of the neuron.
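A small sketch of how such a gain function could be measured numerically for a leaky integrate-and-fire neuron, by counting spikes over a window $T$ for several constant currents; all parameter values are illustrative assumptions:

```python
def firing_rate_lif(I0, T=1000.0, dt=0.1, tau_m=10.0, theta=1.0, u_reset=0.0, C=1.0):
    """Estimate v = n_sp(T) / T for a leaky integrate-and-fire neuron
    driven by a constant current I0 (Euler integration)."""
    u, n_sp = 0.0, 0
    for _ in range(int(T / dt)):
        u += (-u / tau_m + I0 / C) * dt
        if u >= theta:          # threshold crossing: count a spike and reset
            n_sp += 1
            u = u_reset
    return n_sp / T             # rate in spikes per ms

# The gain function v = g(I0), sampled at a few input currents:
# below the rheobase (tau_m * I0 / C < theta) the rate stays at zero.
for I0 in (0.05, 0.12, 0.16, 0.2):
    print(f"I0 = {I0:.2f} -> rate = {1000 * firing_rate_lif(I0):.1f} Hz")
```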

### Stochastic rate model

Definition: the generation of each spike is stochastic; the rate is the rate of the underlying Poisson process that generates the spikes.

inhomogeneous Poisson model:

$v_i=g(h_i)$,

where $h_i(t)=\sum_j \sum_f w_{ij} \epsilon_0(t-t_j^{(f)})$
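A minimal sketch of sampling spikes from such an inhomogeneous Poisson model, using a per-time-step firing probability $v_i(t)\,\Delta t$. For simplicity the filtered input potential $h_i(t)$ is replaced by a given sinusoidal drive, and the threshold-linear gain function is an illustrative assumption:

```python
import numpy as np

dt, T = 1.0, 1000.0                 # time step and duration (ms)
t = np.arange(0.0, T, dt)

def g(h):
    """Illustrative threshold-linear gain function (spikes per ms)."""
    return 0.05 * np.maximum(h, 0.0)

h = 1.0 + np.sin(2 * np.pi * t / 200.0)   # stand-in for h_i(t)
rate = g(h)                                # v_i(t) = g(h_i(t))

rng = np.random.default_rng(1)
spikes = rng.random(t.size) < rate * dt    # Bernoulli approximation of Poisson
print("generated", int(spikes.sum()), "spikes in", T, "ms")
```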

stochastic model in discrete time(?)

### Population rate model

The population activity is the average activity of a population of equivalent neurons:

$$ A(t)=\lim_{\Delta t \to 0}\, \lim_{N\to\infty} \frac{1}{\Delta t} \frac{n_{act}(t;t+\Delta t)}{N} $$

The interaction between two groups of neurons (group $l$ and group $k$) can be represented as: $$ A_k=g\left(\sum_l J_{kl} A_l\right), $$

where $g$ is again the gain function, and $J_{kl}$ is not a connection weight but the effective interaction strength between the two groups.
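A small sketch of finding stationary activities for two coupled populations by iterating this self-consistent equation; the sigmoidal gain function and the coupling matrix $J$ are illustrative assumptions:

```python
import numpy as np

def g(x):
    """Illustrative sigmoidal gain function (population activity in a.u.)."""
    return 1.0 / (1.0 + np.exp(-x))

# Effective interaction strengths between the two groups (assumed values):
# group 0 excites both groups, group 1 inhibits both.
J = np.array([[0.5, -1.0],
              [1.2, -0.3]])

A = np.zeros(2)                 # initial population activities
for _ in range(200):            # fixed-point iteration of A_k = g(sum_l J_kl A_l)
    A = g(J @ A)

print("stationary population activities:", A)
```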