Sensory Systems/Computer Models/Simulations of Retinal Function

The following section describes a realistic simulation of the activity of retinal ganglion cells, using the Python programming language and the image- and video-processing package OpenCV. After a summary of the main features of the retina that are important for the simulation, the installation of the required software packages is described. The rest of this section introduces the simulation and the corresponding parameters.

Physiology of Retina
When light reaches the back of the eye, it enters the cellular layers of the retina. The cells of the retina that detect and respond to light, the photoreceptors, are located at the very back of the retina. There are two types of photoreceptors: rods and cones. Rods allow us to see in dim light but do not allow the perception of color; cones, on the other hand, allow us to perceive color under normal lighting conditions. In most of the retina, rods outnumber cones. The area of the retina that provides the highest-acuity vision lies at the center of our gaze. When light hits a photoreceptor, it interacts with a molecule called a photopigment, which begins the chain reaction that propagates the visual signal. The signal is transmitted to bipolar cells, which connect the photoreceptors to ganglion cells. The axons of the ganglion cells leave the eye in a large bundle at an area called the optic disc; after leaving the retina, this bundle is called the optic nerve, and it carries the visual information towards the brain to be processed. Two other types of retinal cells need to be mentioned: horizontal and amacrine cells. Horizontal cells receive input from multiple photoreceptor cells, while amacrine cells receive signals from bipolar cells and are responsible for the regulation and integration of activity in bipolar and ganglion cells. The retinal cells form two synaptic layers: the Outer Plexiform Layer (OPL) and the Inner Plexiform Layer (IPL). Each layer is modeled with specific filters. At the IPL level, which constitutes the retina's output, different channels of information can be identified. We focus here on the two best-known: the Parvocellular channel (Parvo), dedicated to detail extraction, and the Magnocellular channel (Magno), dedicated to motion information extraction.
In the human retina, the Parvocellular channel is most present at the fovea level (central vision) and the Magnocellular channel is most important outside of the fovea (peripheral vision), because of the relative densities of the specialized cells. Interestingly, the Parvocellular signals arrive later than the Magnocellular signals, as shown in the image on the left.

To understand the functions performed by Magno and Parvo cells, scientists lesioned the pathways that carry this information to the brain and measured the resulting changes in the animals' performance. When cells in the Parvocellular layers of a monkey's lateral geniculate nucleus are destroyed, performance deteriorates on a variety of tasks, such as color discrimination and pattern detection. The most informative result is that when neurons in the Magnocellular layers are destroyed, the animal becomes less sensitive to rapidly flickering, low-spatial-frequency targets. This loss of sensitivity shows that the Magnocellular pathway carries the information that supports tasks requiring high temporal frequencies: the neurons of the Magnocellular pathway convey the high-temporal- and low-spatial-frequency components of the image, and performance on motion tasks and other tasks that require this information is better when the Magnocellular signal is available. The signals are not absolutely necessary to perform the task, however. Performance deficits on motion tasks can be compensated for simply by increasing the stimulus contrast; that is, one can compensate for the loss of information in the Magnocellular pathway by improving the quality of the information in the Parvocellular pathway. Hence, the Magnocellular pathway contains information that is particularly useful for visual tasks such as motion perception. The Parvo and Magno pathways are also known to be affected in neurological diseases such as Alzheimer's and Parkinson's disease.

The simulations below were done using the OpenCV library in Python. The outputs of the famous Lena image from the Magno- and Parvocellular layers are shown in Fig 1. We can see that the output from the Parvo cells contains color and pattern information, while that from the Magno cells contains contours with low spatial frequencies. Capturing the physiology of the retina in a mathematical model is important for advancing retinal implants and vision research; interesting articles on retinal implants can be found, for example, at Webvision.

The following sections describe the package, its parameter settings, and demonstrations in detail.

Installation Guide and Source Code
This package is currently hosted on GitHub, where the source code can be inspected. It supports both Python 2.x and 3.x. For installing and running it successfully, we strongly recommend Anaconda, which avoids many problems in installing and updating third-party packages.

Requirements

 * OpenCV3: for image and video processing. There are two ways of installing OpenCV. First, you can follow the online documentation on OpenCV's website and build OpenCV from source (in this case, make sure you turn on the Python option and build the extra modules). Alternatively, Anaconda users can install the package directly from a prebuilt binary by typing the corresponding conda command in a Linux terminal or Windows console; Anaconda will then install the package automatically.
 * PyQtGraph: for designing and managing the GUI. We recommend installing this package from Anaconda's build; you might not want to install it directly from PyPI, since there are a few tricky points that need to be taken care of.
 * Other required packages, such as numpy, can be installed from PyPI and will be checked during package installation.

From PyPI (Recommended)
If you have installed the packages mentioned above, you can grab the latest stable version of the package from PyPI with pip.

From GitHub
You should make sure Git is available on your machine. Windows users should also make sure the Git executable is on their path.

After you have installed Git and the required packages, you can install the retina simulation package itself; this grabs the bleeding-edge version of the package from GitHub and installs it automatically.

Start Retina Viewer
Retina Viewer is the central component of the entire package. It allows you to play with the retina model under different parameter settings. Running the viewer also allows you to validate your installation.

Assuming you have installed the package successfully following the above instructions, you can start a terminal/console and type the viewer command. ''Note that this is a command, not a file name; once the package is installed, the command can be found by your terminal/console.''

''For Windows users, the system will either start the viewer right away or ask for the program to open the file with; in that case, find and choose the Python executable from your Anaconda installation. If it is not responding, you may want to open a new console and type the command again.''

Note that FFMPEG will be downloaded automatically the first time, if it is not detected by the package.

Software GUI Explained
The GUI (as shown on the left) is currently divided into five panels. At the top there are three displays, showing, from left to right, the original image/video, the Parvocellular pathway output, and the Magnocellular pathway output. At the bottom left are functions that let you drive the viewer in different modes. At the bottom middle are the parameters that define the Parvocellular pathway at the Inner Plexiform Layer (IPL) and Outer Plexiform Layer (OPL). At the bottom right are the parameters that define the Magnocellular pathway at the IPL.

You can change the parameter settings during the simulation. However, since the retina model also has a temporal dimension, a short adaptation period is needed after a change of parameters. The details of the parameter settings can be found in the next section.

From the bottom-left panel, you first need to select the operation mode; the viewer provides five modes.

Brief Mathematical Description of the Retina Model
In this section, we briefly review the retina model on which the simulation is based; a conceptual illustration is shown on the right. First, the illumination of the input frame is normalised by the photoreceptors; it is then processed by the Outer Plexiform Layer (OPL) and the Inner Plexiform Layer (IPL). The output of the IPL is divided into two channels: the Parvo channel, which extracts details, and the Magno channel, for motion analysis.

Illumination Variation Normalisation Using Photoreceptors
The following equations adjust the input luminance $R(p)$ to an adapted luminance $C(p)$ in the range $[0, V_{\max}]$, where $V_{\max}$ represents the maximum allowed pixel value in the image (255 for 8-bit images; the value differs for other coding schemes). They model the fact that photoreceptors adjust their sensitivity with respect to the luminance of their neighbourhood:$$C(p)=\frac{R(p)}{R(p)+R_{0}(p)}\cdot \left(V_{\max}+R_{0}(p)\right)$$

$$R_{0}(p)=V_{0}\cdot L(p)+V_{\max}(1-V_{0})$$

where $V_{0}\in[0,1]$ is a static compression parameter and $R_{0}(p)$ is a compression parameter linearly linked to the local luminance $L(p)$ of the photoreceptor's neighbourhood. $L(p)$ is computed by applying a spatial low-pass filter to the image, which is implemented by the horizontal cell network.

This model enhances contrast visibility in dark areas while maintaining it in bright areas.
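A minimal numpy sketch of this adaptation law is given below. It uses a simple box filter as a stand-in for the horizontal-cell low-pass filter that computes $L(p)$, and the kernel size and $V_0$ values are illustrative choices, not the model defaults:

```python
import numpy as np

def photoreceptor_adaptation(R, V0=0.5, V_max=255.0, kernel_size=15):
    """Local luminance adaptation of the photoreceptors (sketch)."""
    R = np.asarray(R, dtype=np.float64)
    # L(p): local luminance via a box low-pass filter
    # (stands in for the horizontal-cell network)
    pad = kernel_size // 2
    padded = np.pad(R, pad, mode='edge')
    L = np.empty_like(R)
    for i in range(R.shape[0]):
        for j in range(R.shape[1]):
            L[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()
    # compression parameter, linearly linked to the local luminance
    R0 = V0 * L + V_max * (1.0 - V0)
    # Naka-Rushton-style compression: maps [0, V_max] back into [0, V_max],
    # boosting dark regions while leaving bright regions nearly unchanged
    return R / (R + R0) * (V_max + R0)
```

Applying this to a uniformly dark image brightens it noticeably, while a saturated image maps back onto itself, illustrating the contrast-enhancement behaviour described above.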

Outer Plexiform Layer
A model of the OPL describes the effect of the horizontal cells on the signal originating in the photoreceptors. The OPL can be modelled with a non-separable spatio-temporal filter; with $f_{s}$ the spatial frequency and $f_{t}$ the temporal frequency, this filter is characterised by:

$$F_{OPL}(f_{s}, f_{t})=F_{ph}(f_{s}, f_{t})\cdot[1-F_{h}(f_{s}, f_{t})]$$

where

$$F_{ph}(f_{s}, f_{t})=\frac{1}{1+\beta_{ph}+2\alpha_{ph}\cdot(1-\cos(2\pi f_{s}))+j2\pi\tau_{ph}f_{t}}$$

$$F_{h}(f_{s}, f_{t})=\frac{1}{1+\beta_{h}+2\alpha_{h}\cdot(1-\cos(2\pi f_{s}))+j2\pi\tau_{h}f_{t}}$$

The above two equations can be viewed as two low-pass spatio-temporal filters that model the photoreceptor network $ph$ and the horizontal cell network $h$. The output of network $h$ contains only the very low spatial frequencies of the image; it is then used as the local luminance $L(p)$. $\beta_{ph}$ is the gain of the filter $F_{ph}$ and $\beta_{h}$ is the gain of the filter $F_{h}$. $\tau_{ph}$ and $\tau_{h}$ are temporal constants allowing the temporal noise to be minimised. $\alpha_{ph}$ and $\alpha_{h}$ are spatial filtering constants, where $\alpha_{ph}$ sets the high cut-off frequency and $\alpha_{h}$ sets the low cut-off frequency.

The difference between $F_{ph}$ and $F_{h}$ can be represented by two operators, BipON and BipOFF, giving respectively the positive and negative parts of the difference between the $ph$ and $h$ images. This models the action of the bipolar cells, which divide the OPL output into two channels, ON and OFF. The OPL filter can remove spatio-temporal noise and enhance contours.
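The two transfer functions above can be evaluated numerically. The sketch below (the parameter values are illustrative, not the model defaults) shows that $F_{OPL}$ vanishes at zero spatio-temporal frequency, which is why the filter removes the DC component of the image and enhances contours:

```python
import numpy as np

def lowpass_response(fs, ft, beta, alpha, tau):
    """Low-pass spatio-temporal transfer function of one cell network."""
    return 1.0 / (1.0 + beta + 2.0 * alpha * (1.0 - np.cos(2.0 * np.pi * fs))
                  + 1j * 2.0 * np.pi * tau * ft)

def opl_response(fs, ft, beta_ph=0.0, alpha_ph=1.0, tau_ph=1.0,
                 beta_h=0.0, alpha_h=7.0, tau_h=1.0):
    """F_OPL = F_ph * (1 - F_h): band-pass in space, removes the DC term."""
    F_ph = lowpass_response(fs, ft, beta_ph, alpha_ph, tau_ph)
    F_h = lowpass_response(fs, ft, beta_h, alpha_h, tau_h)
    return F_ph * (1.0 - F_h)
```

With zero gains ($\beta_{ph}=\beta_{h}=0$), both filters pass DC with unit magnitude, so their combination `F_ph * (1 - F_h)` cancels exactly at $f_s=f_t=0$ while remaining non-zero at intermediate spatial frequencies.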

Inner Plexiform Layer and Parvo Channel
The ganglion cells (midget cells) of the Parvo channel receive the contour information coming from the BipON and BipOFF outputs of the OPL.

Here we can apply the same local adaptation law to the BipON and BipOFF outputs as we did for the photoreceptors, thereby further enhancing the contour information. These adapted outputs are finally combined and sent out as the Parvocellular pathway output.

Inner Plexiform Layer and Magno Channel
On the Magnocellular channel of the IPL, amacrine cells act as high pass temporal filters.

$$A(z)=b\cdot \frac{1-z^{-1}}{1-b\cdot z^{-1}}\quad \text{with } b=e^{-\Delta t/\tau_{A}}$$

where $\Delta t=1$ is the discrete time step and $\tau_{A}$ is the time constant of the filter (2 time steps in the default configuration). This filter enhances areas where changes occur in space and time.

The amacrine cells ($A$) are connected to the bipolar cells (BipON and BipOFF) and to the "parasol" ganglion cells. As in the Parvo channel, the ganglion cells perform local contrast compression, but they also act as a spatial low-pass filter. The result is a high-pass temporal filtering of the contour information, which is smoothed and enhanced (by the low-pass filter and the local contrast compression). As a consequence, only low-spatial-frequency moving contours are extracted and enhanced (especially contours perpendicular to the motion direction).
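The transfer function $A(z)$ corresponds to the difference equation $y[t]=b\,(x[t]-x[t-1])+b\,y[t-1]$ with $b=e^{-\Delta t/\tau_{A}}$; a direct sketch in numpy:

```python
import numpy as np

def amacrine_highpass(x, tau_A=2.0, dt=1.0):
    """High-pass temporal filter A(z) = b(1 - z^-1)/(1 - b z^-1).

    Implemented as the recursion y[t] = b*(x[t] - x[t-1]) + b*y[t-1],
    applied to a 1-D temporal signal x (one pixel over time).
    """
    b = np.exp(-dt / tau_A)
    y = np.zeros(len(x), dtype=np.float64)
    prev_x, prev_y = 0.0, 0.0
    for t, xt in enumerate(x):
        y[t] = b * (xt - prev_x) + b * prev_y
        prev_x, prev_y = xt, y[t]
    return y
```

A constant input produces a transient that decays geometrically to zero, confirming that only temporal changes pass through the filter.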

Summary of Parameter and Settings
The following parameter settings are mainly taken from the original retina model as implemented in OpenCV. Note that the parameters here are quite different from the settings used in the original paper; however, fine-tuned parameter settings can produce better visualisations than the paper claims.

How the Viewer is Written
The central simulation component is the Retina Simulation Viewer. The viewer is written in a straightforward way: we wrote a static GUI interface and force the entire window to update whenever there is a change in the parameters or frames.

The GUI is written entirely with PyQtGraph and Qt's Python bindings. If you are familiar with the Qt framework, you can instead design the GUI you want, export it to an XML-annotated description file, and convert that description into a Python class with Qt's UI compiler (pyuic). Here, we hand-coded the entire GUI; therefore, if you look at the code, you will find that a large portion of it is configuring graphics modules.

The most important part of the code is the update function in the script; it hooks into the GUI window and forces it to update whenever necessary. At each step, the update function checks whether the configuration has changed by comparing it with the config dictionary from the previous time step, and then decides whether to reinitialise the data source and retina model. The update function always produces a frame that fits the current configuration; this frame is then processed by the given retina model and displayed.
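This update logic can be sketched as follows (`make_updater`, `init_model`, and `process` are hypothetical names for illustration; the actual script organises this differently and also redraws the GUI):

```python
def make_updater(init_model, process):
    """Sketch of the viewer's update step.

    The model is reinitialised only when the configuration dictionary
    differs from the one seen at the previous time step.
    """
    state = {"config": None, "model": None}

    def update(config, frame):
        if config != state["config"]:
            # configuration changed: rebuild the data source / retina model
            state["model"] = init_model(config)
            state["config"] = dict(config)
        # process the current frame with the up-to-date model
        return process(state["model"], frame)

    return update
```

Keeping the previous config in a closure avoids rebuilding the (comparatively expensive) retina model on every frame, while still guaranteeing that every displayed frame matches the current parameter settings.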