Current Biology

Dispatches

Insect Neurobiology: Optical Illusions at the Cellular Level

Jamie Theobald
Department of Biological Sciences, Florida International University, Miami, FL 33199, USA
Correspondence: [email protected]
https://doi.org/10.1016/j.cub.2018.10.023

The reverse-phi illusion has been one key to understanding the use of correlation for motion perception in humans and many other animals. A new study finds the source of this illusion at the cellular level in fruit flies.

An electric fan sits in my office under old fluorescent lights, and when I switch it on and the blades circle faster, they appear briefly to move backwards, just before becoming an indistinct blur. Like many people, I noticed this illusion as a fascinated child, then stopped noticing it as a boring adult. But studying vision renewed my interest as I learned how tremendously important optical illusions — such as the stroboscopic wagon-wheel illusion of my fan — have been to understanding visual processing in the brain. Determining how our senses are tricked often reveals the precise computations that generate ordinary sensory experiences. In this issue of Current Biology, Salazar-Gatzimas et al. [1] go a step further and report the neural origin of an illusion in fruit flies. Reverse-phi motion advances an image while simultaneously reversing its contrast — dark features become bright, bright features become dark — and this creates the perception of backwards motion in a host of animals [2]. Salazar-Gatzimas et al. [1] imaged fluorescence responses of the neurons that perform motion detection in the fly brain to study the illusion as it was created.

Motion detection is one of the most ubiquitous operations that visual systems perform [3]. Most animals, true to their name, are fairly animate, traversing through their environments, and motion detection allows them to do this accurately. When you walk, images of the world drift over your eye and tell you whether you are moving as intended. And when animals around you move, their images give away their location and heading, revealing opportunities such as prey if you are hunting, or dangers such as predators if you are being hunted. In each case, physical motion causes patterns, colors, and boundaries to slide onto new locations on your retinas. As time goes by (Δt), moving image features displace to new locations (Δx), while static ones do not (Figure 1A).

But this trove of useful information does not just fall out of the light signal. The number of dynamic scenes our eyes can in principle transduce is staggering [4] — it is part of what makes eyes so useful — and with the exception of pitch darkness, you will probably go through life without ever seeing the exact same scene twice. This unending variety of visual scenes means motion detection cannot be accomplished by some lock-and-key sensory fit, but requires a robust, general algorithm.

The possible computational underpinnings of visual motion analysis were put forward over 60 years ago with a remarkable study in which walking weevils attempted to turn as they watched stripes changing intensity [5,6]. Hassenstein and Reichardt found that if a bright stripe was followed, after a slight delay, by an offset bright stripe, weevils turned in the direction of the offset. This also worked if the stripes both darkened. But if one brightened and the other darkened, weevils steered in the opposite direction. The model they proposed to explain this, known as the Hassenstein–Reichardt Correlator, or HRC, used a temporal filter to bring a feature detected at one facet into register with another one detected later (Figure 1B). For a moving object, the temporal filter synchronizes the facet that detects it first with the facet that detects it second. How will the brain know when these signals agree? Hassenstein and Reichardt used signed multiplication, a nonlinear step that multiplied the two positive co-occurring brightness increments (or the two negative brightness drops) to yield a positive response — steering with the offset. When the brightness signs do not match, the multiplication of a negative and a positive produces a negative result — and steering in the other direction. Correlations from many local detectors could be integrated to form wide fields of motion sensitivity, like neurons found in the blowfly brain [7,8]. Although some motion responses are more complex [9], with simple elaborations and modifications [10–12] the correlator model accounts for a vast range of animal behavior.

But it was difficult to imagine individual neurons capable of signed multiplication handling all combinations of positive and negative luminance changes [13,14]. Instead, careful studies showed that motion processing pathways are segregated into parallel channels, called ON, that respond to brightness increments, and OFF, that respond to decrements, which are realized in the T4 and T5 cells of the optic lobes of the fly brain [3]. These are the first directionally sensitive neurons in the visual pathway. T4 and T5 are further divided into four subgroups each, which respond to progressive, regressive, upward, and downward motion, for a total of eight types. But while parallel pathways resolved one set of problems, they introduced another — how are signals from the split pathways processed for natural scenes, which contain many light and dark contrasts that must be perceived coherently?

To explore this question, Salazar-Gatzimas et al. [1] took advantage of the reverse-phi optical illusion. When an image moves, as in Figure 1A, there is some correlation between its luminance at one time and location and at a later time

Current Biology 28, R1335–R1355, December 3, 2018 © 2018 Elsevier Ltd. R1335


Figure 1. Motion is extracted from dynamic images. (A) A continuous dynamic scene shown as stacked snapshots (top), and as an equivalent space-time plot (right) with the y axis collapsed. The scene contains two motionless features, a dark gray object static in space (vertical line), and a temporally brief light flash (horizontal line). The scene also contains two moving features, a red object moving slowly to the right and a blue object moving quickly to the right, characterized by a luminance correlation after an offset in space (Δx) and delay in time (Δt). (B) The HRC detects movement by sampling light from two inputs that are spatially offset (Δx), one temporally filtered (τ), and combining them with a nonlinear operation (multiplication) that responds to the now-synchronized signals from each arm. The half correlator here would respond to rightward-moving features, but not the motionless features in (A). The half correlator combined with its mirror counterpart, shown in lighter gray, constitutes the full correlator. Motion in the other direction produces a negative response. (C) Apparent motion includes phi and reverse-phi sequences. The contrast reversal of reverse-phi disrupts the correlation between arms of the HRC. Even ordinary phi motion can be considered to have reverse-phi elements oriented in the other direction, in this case where the dark contrast lightens to the left. (D) Apparent motion stimulates different subtypes of T4 and T5 neurons. When the advancing edge stays the same color, it stimulates the ON or OFF cell corresponding to its brightness and direction. When the brightness reverses, it stimulates the cell that is complementary to the phi response, with each cell type thus responding to one phi and one reverse-phi stimulus.
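The correlator of panel (B) can be sketched numerically in a few lines. This is an illustrative discrete-time sketch: the exponential low-pass filter, the time constant `tau`, and the pulse timings are arbitrary assumptions, not measured properties of the fly circuit.

```python
import numpy as np

def lowpass(sig, tau=5.0):
    """First-order low-pass filter: the temporal delay arm (tau)."""
    out = np.zeros(len(sig))
    for t in range(1, len(sig)):
        out[t] = out[t - 1] + (sig[t] - out[t - 1]) / tau
    return out

def hrc(left, right, tau=5.0):
    """Full Hassenstein-Reichardt correlator: each arm multiplies the
    delayed signal from one input with the undelayed signal from the
    other; subtracting the mirror arm gives direction opponency."""
    return float(np.sum(lowpass(left, tau) * right - left * lowpass(right, tau)))

# A bright feature crosses the left input first, then the spatially
# offset right input a few time steps later: rightward motion.
T = 40
left, right = np.zeros(T), np.zeros(T)
left[10:13] = 1.0
right[15:18] = 1.0

print(hrc(left, right) > 0)   # True: preferred (rightward) motion
print(hrc(right, left) < 0)   # True: anti-preferred motion, negative response
```

Feeding both inputs an identical signal (no motion) returns exactly zero, because the two mirror arms cancel in the opponent subtraction, matching the full correlator's insensitivity to the motionless features in panel (A).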

and offset location. We can simulate this effect by rapidly displaying discrete images with a physical offset and correlated luminances, giving the sensation of continuous motion (Figure 1C). This is called phi motion (or beta [15]), and it is the basis of movies, television, and the mouse pointer moving over your computer screen — discrete images in series deliver a sensation of motion. But if instead of correlating the luminances we reverse them, flickering between dark and light in subsequent frames, the resulting scene still produces a sensation of motion, but in the opposite direction (Figure 1C). This is reverse-phi, and searching for examples of these illusions yields scenes that produce a powerful sensation of backwards motion. It is also the stimulus that caused weevils to change direction in Hassenstein and Reichardt's original experiment. Researchers have noticed this stimulus is not just an artificial construct, but a component of regular moving images [16] (Figure 1C).

In flies, Salazar-Gatzimas et al. [1] began by silencing T4 and T5 and confirming they are required for reverse-phi detection. Walking flies normally follow a pattern as it moves, but move opposite to a reverse-phi pattern. Just as for phi motion, flies lacking T4 and T5 did not respond to reverse-phi, indicating the same cells process both the regular patterns and the illusion. They then imaged the neurons during visual stimulation and found that T4 and T5 responded in a way consistent with behavioral responses. For example, a T4 cell that responds to progressive phi motion also responds to regressive reverse-phi motion (Figure 1D). This confirmed that the motion illusion is present at the earliest possible stages of motion detection, at the first cells that show direction selectivity.

Because reverse-phi stimuli combine both ON and OFF edges, this implies that, despite the description of T4 as ON and T5 as OFF, these cells each get input from both ON and OFF. Both ON and OFF contrasts additionally appear together in natural moving images, and each T4 and T5 cell type responds to one phi and one reverse-phi type of motion. By having a single cell type respond to complementary sets of stimuli, the signals are effectively decorrelated, especially when stimulated by natural signals. Decorrelation is an important aspect of efficient information processing and a force in nervous system evolution [17]. This could itself account for the emergence of the reverse-phi illusion.

As a large animal, my inability to dispatch a tiny annoying fly with roughly one million-fold fewer neurons than me [18] can sometimes feel like a cosmic practical joke. But by loading such impressive behaviors into a small, accessible nervous system, insects have enabled tremendous progress towards understanding how animals turn sensory inputs into coherent behaviors. Motion detection in flies has rapidly become one of the best understood examples of complex information processing in all of neuroscience. By pinpointing the cellular origin of an optical illusion, this new work reveals details about the implementation of motion correlation in the insect brain, and sheds light on the processing strategies that so effectively gather information from natural scenes.
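The sign flip that the imaging experiments localize to T4 and T5 is already implicit in the correlation algorithm itself. As a toy demonstration (the filter constant and frame timings below are arbitrary assumptions, and a two-input correlator is a deliberate simplification of the real circuit), reversing the contrast of the second frame of an apparent-motion sequence exactly negates a Hassenstein–Reichardt correlator's output, turning a rightward phi signal into a leftward reverse-phi signal:

```python
import numpy as np

def lowpass(sig, tau=5.0):
    """First-order low-pass filter: the correlator's temporal delay arm."""
    out = np.zeros(len(sig))
    for t in range(1, len(sig)):
        out[t] = out[t - 1] + (sig[t] - out[t - 1]) / tau
    return out

def hrc(left, right, tau=5.0):
    """Opponent Hassenstein-Reichardt correlator over two inputs."""
    return float(np.sum(lowpass(left, tau) * right - left * lowpass(right, tau)))

T = 40
def two_frame(c1, c2):
    """Apparent motion: contrast c1 at the left input, then contrast c2
    at the spatially offset right input a few time steps later."""
    left, right = np.zeros(T), np.zeros(T)
    left[10:13] = c1
    right[15:18] = c2
    return left, right

phi = hrc(*two_frame(+1.0, +1.0))          # same contrast in both frames
reverse_phi = hrc(*two_frame(+1.0, -1.0))  # second frame contrast-reversed

print(phi > 0)          # True: rightward percept
print(reverse_phi < 0)  # True: perceived motion reverses direction
```

Because the filter is linear and only one arm is active for this stimulus, the reverse-phi response is the exact negative of the phi response, which is the signature the weevil-steering experiment and the T4/T5 recordings both report.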

REFERENCES

1. Salazar-Gatzimas, E., Agrochao, M., Fitzgerald, J., and Clark, D. (2018). Decorrelation of parallel motion pathways explains the neuronal basis of an illusory motion percept. Curr. Biol. 28, 3748–3762.

2. Anstis, S.M. (1970). Phi movement as a subtraction process. Vision Res. 10, 1411–IN5.

3. Borst, A., and Helmstaedter, M. (2015). Common circuit design in fly and mammalian motion vision. Nat. Neurosci. 18, 1067.

4. Snyder, A.W., Laughlin, S.B., and Stavenga, D.G. (1977). Information capacity of eyes. Vision Res. 17, 1163.

5. Hassenstein, B., and Reichardt, W. (1956). Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus. Z. Naturforsch. B 11, 513–524.

6. Reichardt, W. (1961). Autocorrelation, a principle for evaluation of sensory information by the central nervous system. In Principles of Sensory Communication, W.A. Rosenblith, ed. (New York: Wiley), pp. 303–317.

7. Krapp, H.G., and Hengstenberg, R. (1996). Estimation of self-motion by optic flow processing in single visual interneurons. Nature 384, 463–466.

8. Longden, K., Wicklein, M., Hardcastle, B., Huston, S., and Krapp, H.G. (2017). Spike burst coding of translatory optic flow and depth from motion in the fly visual system. Curr. Biol. 21, 3225–3236.

9. Theobald, J.C., Duistermars, B.J., Ringach, D.L., and Frye, M.A. (2008). Flies see second-order motion. Curr. Biol. 18, R464–R465.

10. Barlow, H.B., Hill, R.M., and Levick, W.R. (1964). Retinal ganglion cells responding selectively to direction and speed of image motion in the rabbit. J. Physiol. 173, 377–407.

11. van Santen, J.P., and Sperling, G. (1985). Elaborated Reichardt detectors. J. Opt. Soc. Am. A 2, 300–321.

12. Adelson, E.H., and Bergen, J.R. (1985). Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. A 2, 284–299.

13. Gabbiani, F., Krapp, H.G., Koch, C., and Laurent, G. (2002). Multiplicative computation in a visual neuron sensitive to looming. Nature 420, 320–324.

14. Koch, C. (2004). Biophysics of Computation: Information Processing in Single Neurons (Oxford: Oxford University Press).

15. Steinman, R.M., Pizlo, Z., and Pizlo, F.J. (2000). Phi is not beta, and why Wertheimer's discovery launched the Gestalt revolution. Vision Res. 40, 2257–2264.

16. Clark, D.A., Bursztyn, L., Horowitz, M.A., Schnitzer, M.J., and Clandinin, T.R. (2011). Defining the computational structure of the motion detector in Drosophila. Neuron 70, 1165–1177.

17. Pitkow, X., and Meister, M. (2012). Decorrelation and efficient coding by retinal ganglion cells. Nat. Neurosci. 15, 628.

18. Safi, K., Seid, M.A., and Dechmann, D.K.N. (2005). Bigger is not always better: when brains get smaller. Biol. Lett. 1, 283–286.

Evolution: Flip-Flopping Flower Color Defies Dollo's Law

Robin Hopkins
1 Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA
2 The Arnold Arboretum of Harvard University, 1300 Centre St, Boston, MA 02131, USA
Correspondence: [email protected]
https://doi.org/10.1016/j.cub.2018.09.058

Even complex traits can re-evolve after being lost. A new study details the molecular mechanisms causing the regain of floral color pigment in a lineage that evolved white flowers.

Despite our great efforts to predict the path of evolution, there are very few rules that evolution follows. One classic rule, Dollo's law of irreversibility, argues that once a complex trait is lost it is unlikely, if not impossible, to re-evolve that trait — at least in its original form [1,2]. With ever more detailed descriptions of biodiversity and advancements in phylogenetic techniques, it is becoming increasingly clear that evolution breaks Dollo's law. Complicated traits such as wings [3] and coiled shells [4] have re-evolved after ancestral loss. The intriguing question now becomes: how does evolution reverse? A new study by Esfeld and colleagues reported in this issue of Current Biology beautifully details the molecular basis of gain-of-function mutations that resurrect the production of color pigments in Petunia flower petals [5].

The incredible diversity of flower colors makes for splendid bouquets, but this variation of form did not evolve for our enjoyment. Rather, flower color is an important signal that plants use to

advertise resources (such as food) to pollinators [6] with the aim of exchanging their gametes with other plants. Due to variable selection pressures across space and time, flower color varies extensively both within and between species of flowering plants.

The evolution of color pigment production in flowers was long thought to follow Dollo's law: once pigment is lost, it should not re-evolve. It has been argued [7], and in some cases demonstrated [8], that changes of pigment production in the flowers result in relaxed purifying selection on the genes necessary for pigment production. These shifting selection regimes allow for sequence degradation over time through random mutations in unused flower pigment genes. Yet more and more evidence indicates that floral anthocyanin pigment production can re-evolve in plant lineages; indeed, the gain of pigments may actually be more likely than the loss of pigment in some clades [9]. But how does this happen? Are function-degrading mutations reversed? Are different genes co-opted to perform a new function? Now, for one case, we finally have an answer.

Detailed study of a small clade of long-tube Petunia species reveals a fascinating story of flower color (re-)evolution. The players are P. axillaris, which has lost floral pigment and produces white flowers that are predominantly pollinated by hawkmoths; P. exserta, which has red flowers and is predominantly pollinated by hummingbirds; and the purple-flowered P. secreta, which is pollinated by small bees. Other, more distantly related Petunia species are pigmented, indicating that pigment production is the ancestral floral condition in this group. The loss of pigment production in P. axillaris is due to mutations in the R2R3-MYB transcription factor AN2 [10,11]. The outstanding question is why and how the other two species produce pigment. The most likely hypothesis is that the loss of pigment is due to a lineage-specific mutation in P. axillaris and that P. secreta and P. exserta are expressing the ancestral trait. But Esfeld et al. offer an alternative hypothesis — that pigment
