Results: 70

(searched for: doi:10.1364/josa.73.001674)
Published: 28 March 2021
Biological Cybernetics pp 1-4; https://doi.org/10.1007/s00422-021-00870-0

The publisher has not yet granted permission to display this abstract.
Journal of Vision, Volume 20, pp 8-8; https://doi.org/10.1167/jov.20.10.8

Abstract:
During self-motion, an independently moving object generates retinal motion that is the vector sum of its world-relative motion and the optic flow caused by the observer's self-motion. A hypothesized mechanism for the computation of an object's world-relative motion is flow parsing, in which the optic flow field due to self-motion is globally subtracted from the retinal flow field. This subtraction generates a bias in perceived object direction (in retinal coordinates) away from the optic flow vector at the object's location. Despite psychophysical evidence for flow parsing in humans, the neural mechanisms underlying the process are unknown. To build the framework for investigation of the neural basis of flow parsing, we trained macaque monkeys to discriminate the direction of a moving object in the presence of optic flow simulating self-motion. Like humans, monkeys showed biases in object direction perception consistent with subtraction of background optic flow attributable to self-motion. The size of perceptual biases generally depended on the magnitude of the expected optic flow vector at the location of the object, which was contingent on object position and self-motion velocity. There was a modest effect of an object's depth on flow-parsing biases, which reached significance in only one of two subjects. Adding vestibular self-motion signals to optic flow facilitated flow parsing, increasing biases in direction perception. Our findings indicate that monkeys exhibit perceptual hallmarks of flow parsing, setting the stage for the examination of the neural mechanisms underlying this phenomenon.
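The vector subtraction at the heart of flow parsing can be sketched in a few lines. The vectors, the `gain` parameter (modelling partial subtraction), and all names below are illustrative assumptions, not values or code from the study.

```python
import numpy as np

def retinal_motion(world_motion, optic_flow_at_object):
    """Retinal motion = object's world-relative motion + optic flow
    generated by self-motion at the object's location (2-D vectors)."""
    return np.asarray(world_motion) + np.asarray(optic_flow_at_object)

def flow_parse(retinal, estimated_flow, gain=1.0):
    """Recover world-relative motion by subtracting the estimated optic
    flow; gain < 1 models incomplete (partial) subtraction."""
    return np.asarray(retinal) - gain * np.asarray(estimated_flow)

world = np.array([1.0, 0.0])   # object moves rightward in the world
flow = np.array([0.0, -2.0])   # downward flow at the object's location
retinal = retinal_motion(world, flow)          # what the retina sees
recovered = flow_parse(retinal, flow, gain=1.0)  # full subtraction
```

With `gain=1.0` the subtraction is complete and the world-relative direction is recovered exactly; intermediate gains would produce the partial biases that the psychophysical results describe.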
Naïg Aurelia Ludmilla Chenais, Marta Jole Ildelfonsa Airaghi Leccardi,
Published: 24 August 2020
Abstract:
Retinal prostheses hold the promise of restoring artificial vision in profoundly and totally blind people. However, a decade of clinical trials highlighted quantitative limitations hampering the possibility to reach this goal. A key obstacle to suitable retinal stimulation is the ability to independently activate retinal neurons over a large portion of the subject’s visual field. Reaching such a goal would significantly improve the perception accuracy in the users of retinal implants, along with their spatial cognition, attention, ambient mapping and interaction with the environment. Here we show a wide-field, high-density and high-resolution photovoltaic epiretinal prosthesis for artificial vision. The prosthesis embeds 10,498 physically and functionally independent photovoltaic pixels allowing for both wide retinal coverage and high-resolution stimulation. Single-pixel illumination reproducibly induced network-mediated responses from retinal ganglion cells at safe irradiance levels. Furthermore, the prosthesis enables a sub-receptive field response resolution for retinal ganglion cells having a dendritic tree larger than the pixel’s pitch. This approach could allow the restoration of mid-peripheric artificial vision in patients with retinitis pigmentosa.
, Simon D. Lilburn
Psychonomic Bulletin & Review, Volume 27, pp 882-910; https://doi.org/10.3758/s13423-020-01742-7

Abstract:
Evidence accumulation models like the diffusion model are increasingly used by researchers to identify the contributions of sensory and decisional factors to the speed and accuracy of decision-making. Drift rates, decision criteria, and nondecision times estimated from such models provide meaningful estimates of the quality of evidence in the stimulus, the bias and caution in the decision process, and the duration of nondecision processes. Recently, Dutilh et al. (Psychonomic Bulletin & Review 26, 1051–1069, 2019) carried out a large-scale, blinded validation study of decision models using the random dot motion (RDM) task. They found that the parameters of the diffusion model were generally well recovered, but there was a pervasive failure of selective influence, such that manipulations of evidence quality, decision bias, and caution also affected estimated nondecision times. This failure casts doubt on the psychometric validity of such estimates. Here we argue that the RDM task has unusual perceptual characteristics that may be better described by a model in which drift and diffusion rates increase over time rather than turn on abruptly. We reanalyze the Dutilh et al. data using models with abrupt and continuous-onset drift and diffusion rates and find that the continuous-onset model provides a better overall fit and more meaningful parameter estimates, which accord with the known psychophysical properties of the RDM task. We argue that further selective influence studies that fail to take into account the visual properties of the evidence entering the decision process are likely to be unproductive.
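The contrast between abrupt and continuous drift onset can be illustrated with a toy single-trial simulation. The exponential ramp, parameter values, and scaling of the diffusion term are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def simulate_ddm(drift, noise, threshold, dt=0.001, t_ndt=0.3,
                 onset_tau=None, rng=None, max_t=5.0):
    """One diffusion-model trial; returns (response time, choice).
    onset_tau=None gives abrupt onset; otherwise drift and diffusion
    ramp up with gain 1 - exp(-t/onset_tau) (variance scales with gain,
    hence sqrt(gain) on the noise term)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        gain = 1.0 if onset_tau is None else 1.0 - np.exp(-t / onset_tau)
        x += gain * drift * dt \
             + np.sqrt(gain) * noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    # choice 1 = upper boundary; t_ndt is the nondecision time
    return t + t_ndt, (1 if x >= threshold else 0)

rt_abrupt, _ = simulate_ddm(2.0, 1.0, 1.0, rng=np.random.default_rng(0))
rt_ramped, _ = simulate_ddm(2.0, 1.0, 1.0, onset_tau=0.3,
                            rng=np.random.default_rng(0))
```

In a model of this shape, time spent waiting for the ramped drift to build tends to be absorbed into the estimated nondecision time when an abrupt-onset model is fit instead, which is one way to read the selective-influence failure discussed above.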
, Paul R. MacNeilage
Published: 19 January 2018
Abstract:
Optic flow patterns generated by self-motion relative to the stationary environment result in congruent visual-vestibular self-motion signals. Incongruent signals can arise due to object motion, vestibular dysfunction, or artificial stimulation, which are less common. Hence, we are predominantly exposed to congruent rather than incongruent visual-vestibular stimulation. If the brain takes advantage of this probabilistic association, we expect observers to be more sensitive to visual optic flow that is congruent with ongoing vestibular stimulation. We tested this expectation by measuring the motion coherence threshold: the percentage of signal versus noise dots necessary to detect an optic flow pattern. Observers seated on a hexapod motion platform in front of a screen experienced two sequential intervals. One interval contained optic flow with a given motion coherence and the other contained noise dots only. Observers had to indicate which interval contained the optic flow pattern. The motion coherence threshold was measured for detection of laminar and radial optic flow during leftward/rightward and fore/aft linear self-motion, respectively. We observed no dependence of coherence thresholds on vestibular congruency for either radial or laminar optic flow. Prior studies using similar methods reported both decreases and increases in coherence thresholds in response to congruent vestibular stimulation; our results do not confirm either of these prior reports. While methodological differences may explain the diversity of results, another possibility is that motion coherence thresholds are mediated by neural populations that are either not modulated by vestibular stimulation or that are modulated in a manner that does not depend on congruency.
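A random-dot stimulus at a given motion coherence, of the kind used in such threshold measurements, can be sketched as follows. Dot counts, speed, field size, and the per-frame resampling of signal dots are illustrative choices, not the study's actual stimulus parameters.

```python
import numpy as np

def step_dots(xy, coherence, direction, speed=1.0, rng=None):
    """Advance dot positions (n x 2 array) by one frame: a `coherence`
    fraction of dots step in the common `direction` (radians); the rest
    step in random directions."""
    rng = rng or np.random.default_rng()
    n = len(xy)
    n_signal = int(round(coherence * n))
    signal = rng.choice(n, size=n_signal, replace=False)
    angles = rng.uniform(0, 2 * np.pi, n)  # noise dots: random headings
    angles[signal] = direction             # signal dots: common heading
    step = speed * np.column_stack([np.cos(angles), np.sin(angles)])
    return xy + step

rng = np.random.default_rng(0)
dots = rng.uniform(0, 100, size=(200, 2))             # 100 x 100 deg field
new = step_dots(dots, coherence=0.3, direction=0.0, rng=rng)
```

At 30% coherence the mean displacement points in the signal direction while individual noise dots cancel out on average, which is what makes the coherence level a clean handle on evidence strength.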
Teresa Tannazzo, , Farhan Bukhari
Published: 1 October 2014
Vision Research, Volume 103, pp 101-108; https://doi.org/10.1016/j.visres.2014.08.011

The publisher has not yet granted permission to display this abstract.
, Shigeru Ichihara
Published: 1 June 2012
Vision Research, Volume 62, pp 201-208; https://doi.org/10.1016/j.visres.2012.04.008

The publisher has not yet granted permission to display this abstract.
Jake Hayward, Grace Truong, ,
Published: 15 October 2011
Vision Research, Volume 51, pp 2216-2223; https://doi.org/10.1016/j.visres.2011.08.023

The publisher has not yet granted permission to display this abstract.
ACM Transactions on Applied Perception, Volume 7, pp 1-18; https://doi.org/10.1145/1773965.1773969

Abstract:
We examined the eye movements of pilots as they carried out simulated aircraft landings under day and night lighting conditions. Our five students and five certified pilots were instructed to quickly achieve and then maintain a constant 3-degree glideslope relative to the runway. However, both groups of pilots were found to make significant glideslope control errors, especially during simulated night approaches. We found that pilot gaze was directed most often toward the runway and to the ground region located immediately in front of the runway, compared to other visual scene features. In general, their gaze was skewed toward the near half of the runway and tended to follow the runway threshold as it moved on the screen. Contrary to expectations, pilot gaze was not consistently directed at the aircraft's simulated aimpoint (i.e., its predicted future touchdown point based on scene motion). However, pilots did tend to fly the aircraft so that this point was aligned with the runway threshold. We conclude that the supplementary out-of-cockpit visual cues available during day landing conditions facilitated glideslope control performance. The available evidence suggests that these supplementary visual cues are acquired through peripheral vision, without the need for active fixation.
Santosh G. Mysore, Rufin Vogels, Steve E. Raiguel,
Journal of Neurophysiology, Volume 95, pp 1864-1880; https://doi.org/10.1152/jn.00627.2005

Abstract:
We used gratings and shapes defined by relative motion to study selectivity for static kinetic boundaries in macaque V4 neurons. Kinetic gratings were generated by random pixels moving in opposite directions in the neighboring bars, either parallel to the orientation of the boundary (parallel kinetic grating) or perpendicular to the boundary (orthogonal kinetic grating). Neurons were also tested with static, luminance defined gratings to establish cue invariance. In addition, we used eight shapes defined either by relative motion or by luminance contrast, as used previously to test cue invariance in the infero-temporal (IT) cortex. A sizeable fraction (10–20%) of the V4 neurons responded selectively to kinetic patterns. Most neurons selective for kinetic contours had receptive fields (RFs) within the central 10° of the visual field. Neurons selective for the orientation of kinetic gratings were defined as having similar orientation preferences for the two types of kinetic gratings, and the vast majority of these neurons also retained the same orientation preference for luminance defined gratings. Also, kinetic shape selective neurons had similar shape preferences when the shape was defined by relative motion or by luminance contrast, showing a cue-invariant form processing in V4. Although shape selectivity was weaker in V4 than what has been reported in the IT cortex, cue invariance was similar in the two areas, suggesting that invariance for luminance and motion cues of IT originates in V4. The neurons selective for kinetic patterns tended to be clustered within dorsal V4.
Published: 31 January 2006
Progress in Neurobiology, Volume 78, pp 38-60; https://doi.org/10.1016/j.pneurobio.2005.11.006

The publisher has not yet granted permission to display this abstract.
E. Poljac, B. Neggers,
Published: 3 December 2005
Experimental Brain Research, Volume 171, pp 35-46; https://doi.org/10.1007/s00221-005-0257-x

The publisher has not yet granted permission to display this abstract.
E. Peterhans, B. Heider, R. Baumann
European Journal of Neuroscience, Volume 21, pp 1091-1100; https://doi.org/10.1111/j.1460-9568.2005.03919.x

The publisher has not yet granted permission to display this abstract.
K. M. Yemelyanov, M. A. Lo, E. N. Pugh, Nader Engheta
Published: 30 June 2003
Optics Express, Volume 11, pp 1577-1584; https://doi.org/10.1364/oe.11.001577

Abstract:
It is known that human eyes are effectively polarization-blind. Therefore, in order to display the polarization information in an image, one may need to exhibit such information using other visual cues that are compatible with the human visual system and can be easily detected by a human observer. Here, we present a technique for displaying polarization information in an image using coherently moving dots that are superimposed on the image. Our examples show that this technique allows image segments with polarization signals to "pop out" easily, leading to better target feature detection and visibility enhancement.

, Christian Casanova, Jocelyn Faubert
Published: 14 November 2002
Vision Research, Volume 42, pp 2843-2852; https://doi.org/10.1016/s0042-6989(02)00355-3

The publisher has not yet granted permission to display this abstract.
, Risto Näsänen, Jyrki Rovamo, Dean Melmoth
Published: 20 February 2001
Vision Research, Volume 41, pp 599-610; https://doi.org/10.1016/s0042-6989(00)00259-5

The publisher has not yet granted permission to display this abstract.
K. Lam, Y. Kaneoke, , H. Yamasaki, E. Matsumoto, T. Naito, R. Kakigi
Published: 13 April 2000
Neuroscience, Volume 97, pp 1-10; https://doi.org/10.1016/s0306-4522(00)00037-3

The publisher has not yet granted permission to display this abstract.
William R Uttal, , Frank Stürzel, Allison B Sekuler
Published: 9 December 1999
Vision Research, Volume 40, pp 301-310; https://doi.org/10.1016/s0042-6989(99)00177-7

The publisher has not yet granted permission to display this abstract.
, Jan J Koenderink, Andrea J Van Doorn
Published: 8 December 1999
Vision Research, Volume 40, pp 187-199; https://doi.org/10.1016/s0042-6989(99)00167-4

The publisher has not yet granted permission to display this abstract.
Colin W.G. Clifford, ,
Published: 1 June 1999
Vision Research, Volume 39, pp 2213-2227; https://doi.org/10.1016/s0042-6989(98)00314-9

The publisher has not yet granted permission to display this abstract.
, Robert Edmunds
Published: 3 March 1999
Vision Research, Volume 39, pp 1813-1822; https://doi.org/10.1016/s0042-6989(98)00201-6

The publisher has not yet granted permission to display this abstract.
Published: 1 September 1998
Perception, Volume 27, pp 1041-1054; https://doi.org/10.1068/p271041

Abstract:
Identical visual targets moving across each other with equal and constant speed can be perceived either to bounce off or to stream through each other. This bistable motion perception has been studied mostly in the context of motion integration. Since the perception of most ambiguous motion is affected by attention, there is the possibility of attentional modulation occurring in this case as well. We investigated whether distraction of attention from the moving targets would alter the relative frequency of each percept. During the observation of the streaming/bouncing motion event in the peripheral visual field, visual attention was disrupted by an abrupt presentation of a visual distractor at various timings and locations (experiment 1; exogenous distraction of attention) or by the demand of an additional discrimination task (experiments 2 and 3; endogenous distraction of attention). Both types of distractions of attention increased the frequency of the bouncing percept and decreased that of the streaming percept. These results suggest that attention may facilitate the perception of object motion as continuing in the same direction as in the past.
Published: 1 July 1998
Perception, Volume 27, pp 817-825; https://doi.org/10.1068/p270817

Abstract:
The grain of the retina becomes progressively coarser from the fovea to the periphery. This is caused by the decreasing number of retinal receptive fields and decreasing amount of cortex devoted to each degree of visual field (= cortical magnification factor) as one goes into the periphery. We simulate this with a picture that is progressively blurred towards its edges; when strictly fixated at its centre it looks equally sharp all over.
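The progressive peripheral blur described here can be approximated with a filter whose size grows with distance from fixation. The box-average filter and scaling constant below are crude illustrative stand-ins for the paper's actual blurring method.

```python
import numpy as np

def eccentricity_blur(img, fixation, blur_per_pixel=0.02):
    """Blur a grayscale image (2-D array) with a box average whose radius
    grows linearly with eccentricity, i.e. distance from the fixation
    point (row, col). Mimics coarser peripheral sampling."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(ys - fixation[0], xs - fixation[1])
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(ecc[y, x] * blur_per_pixel) + 1  # radius grows with ecc
            patch = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = patch.mean()
    return out
```

When such an image is strictly fixated at the point used as `fixation`, the imposed blur roughly matches the eye's own falloff in resolution, so the picture can look uniformly sharp, which is the demonstration the abstract describes.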
, James T. Todd
Published: 1 June 1998
Perception & Psychophysics, Volume 60, pp 558-574; https://doi.org/10.3758/bf03206046

The publisher has not yet granted permission to display this abstract.
Published: 1 November 1997
Perception, Volume 26, pp 1341-1352; https://doi.org/10.1068/p261341

Abstract:
This overview takes the reader from the classical contrast and assimilation studies of the past to today's colour research, in a broad sense, with its renewed emphasis on the phenomenological qualities of visual perception. It shows how the shift in paradigm from local to global effects in single-unit recordings prompted a reappraisal of appearance in visual experiments, not just in colour, but in the perception of motion, texture, and depth as well. Gestalt ideas placed in the context of modern concepts are shown to inspire psychophysicists, neurophysiologists, and computational vision scientists alike. Feedforward, horizontal interactions, and feedback are discussed as potential neuronal mechanisms to account for phenomena such as uniform surfaces, filling-in, and grouping arising from processes beyond the classical receptive field. A look forward towards future developments in the field of figure–ground segregation (Gestalt formation) concludes the article.
Published: 1 August 1997
Perception, Volume 26, pp 995-1010; https://doi.org/10.1068/p260995

Abstract:
Human subjects can perceive global motion or motions in displays containing diverse local motions, implying representation of velocity at multiple scales. The phenomena of flexible global direction judgments, and especially of motion transparency, also raise the issue of whether the representation of velocity at any one scale is single-valued or multi-valued. A new performance-based measure of transparency confirms that the visual system represents directional information for each component of a transparent display. However, results with the locally paired random-dot display introduced by Qian et al. show that representations of multiple velocities do not coexist at the finest spatial scale of motion analysis. Functionally distinct scales of motion processing may be associated with (i) local motion detectors which show a strong winner-take-all interaction; (ii) spatial integration of local signals to disambiguate velocity; (iii) selection of reliable velocity signals as proposed in the model of Nowlan and Sejnowski; (iv) object-based or surface-based representations that are not necessarily organised in a fixed spatial matrix. These possibilities are discussed in relation to the neurobiological organisation of the visual motion pathway.
R.Eric Fredericksen, Frans A.J. Verstraten, Wim A. Van De Grind
Published: 31 January 1997
Vision Research, Volume 37, pp 99-119; https://doi.org/10.1016/s0042-6989(96)00074-0

The publisher has not yet granted permission to display this abstract.
, , Jane E. Raymond
Published: 31 August 1996
Vision Research, Volume 36, pp 2579-2586; https://doi.org/10.1016/0042-6989(95)00325-8

The publisher has not yet granted permission to display this abstract.
, R.Eric Fredericksen, Richard J.A Van Wezel, Jane C Boulton, Wim A Van De Grind
Published: 31 August 1996
Vision Research, Volume 36, pp 2333-2336; https://doi.org/10.1016/0042-6989(95)00297-9

The publisher has not yet granted permission to display this abstract.
, Astrid M. L. Kappers, Jan J. Koenderink
Published: 1 January 1996
Perception & Psychophysics, Volume 58, pp 401-408; https://doi.org/10.3758/bf03206816

The publisher has not yet granted permission to display this abstract.
, Aleksander Pulver
Journal of the Optical Society of America A, Volume 12, pp 1185-1197; https://doi.org/10.1364/josaa.12.001185

Abstract:
The ability to identify the direction of apparent motion in a sequence of two short light pulses of different amplitudes at separate spatial locations was studied. The product of pulse amplitudes is a very poor predictor of such performance when one of the two signals is much higher in amplitude than the other: above a certain amplitude the probability of correct identification becomes virtually independent of the amplitude of the larger pulse. There was no noticeable difference in performance between low–high and high–low contrast sequences. Both the direction identification and the simple contrast-detection probabilities can be represented by the same psychometric function of the luminance increment ΔL, provided that ΔL is normalized by the nth power of the background luminance level, Lb. These results suggest that the general Reichardt-type scheme of movement encoding should be modified in the manner proposed for the fly's visual system [J. Opt. Soc. Am. A 6, 116 (1989)]: (1) the mean luminance is subtracted from the input signal before the signal is subjected to a nonlinear compression and (2) saturation characteristics are inserted into both branches of the two mirror-symmetric motion-detection subunits before multiplication of the input signals. The identical metric of the contrast response suggests that movement discrimination and luminance detection are two different special-purpose computations performed on the output of the same encoding network.
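The modified Reichardt-type scheme this abstract proposes (mean-luminance subtraction before compression, and saturation in both branches of the mirror-symmetric subunits before multiplication) can be illustrated as follows. The tanh nonlinearity, discrete delay, and sinusoidal test signals are assumptions for the sketch, not details from the paper or the fly-vision model it cites.

```python
import numpy as np

def saturate(x, s=1.0):
    """Saturating compression applied in each branch before
    multiplication (step 2 in the abstract)."""
    return np.tanh(x / s)

def reichardt(left, right, delay=1):
    """Opponent motion signal for two luminance time series sampled at
    neighbouring locations. Positive output indicates left-to-right
    motion; negative indicates right-to-left."""
    l = left - left.mean()        # step 1: remove the mean luminance
    r = right - right.mean()
    l, r = saturate(l), saturate(r)
    sub1 = l[:-delay] * r[delay:]  # delayed left x current right
    sub2 = r[:-delay] * l[delay:]  # delayed right x current left
    return float(np.mean(sub1 - sub2))  # opponent subtraction

t = np.linspace(0, 4 * np.pi, 200)
# right location lags the left by 0.5 rad: pattern drifts rightward
rightward = reichardt(np.sin(t), np.sin(t - 0.5), delay=5)
leftward = reichardt(np.sin(t - 0.5), np.sin(t), delay=5)
```

Because the saturation precedes multiplication, a very large pulse in one branch contributes no more than a moderate one, reproducing the abstract's observation that performance becomes independent of the larger pulse's amplitude.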
R.E. Fredericksen, F.A.J. Verstraten, W.A. Van De Grind
Published: 31 December 1994
Vision Research, Volume 34, pp 3171-3188; https://doi.org/10.1016/0042-6989(94)90082-5

The publisher has not yet granted permission to display this abstract.
Wim A Van De Grind, , Karin M Zwamborn
Published: 1 October 1994
Perception, Volume 23, pp 1171-1179; https://doi.org/10.1068/p231171

Abstract:
Moving random-pixel arrays (RPAs) were used to study the movement aftereffect (MAE) for translational texture motion and to quantify the contribution of RPA-sensitive motion sensors to the MAE as a function of eccentricity. Size-scaled patterns were used to make a fair comparison across eccentricities. At the upper end of the velocity range it was found, for all eccentricities, that motion sensors tuned to velocities exceeding about 10–20 deg s⁻¹ do not contribute to the translational MAE, even though they do contribute to motion perception. As a consequence the subpopulation of local motion sensors that contributes to the MAE shrinks with eccentricity, because there are fewer low-velocity-tuned and more high-velocity-tuned motion sensors for increasing eccentricity. Thus there is a quantitative, but not a qualitative, difference between the MAEs generated at different eccentricities.
Ehtibar N. Dzhafarov, Robert Sekuler,
Published: 1 November 1993
Perception & Psychophysics, Volume 54, pp 733-750; https://doi.org/10.3758/bf03211798

The publisher has not yet granted permission to display this abstract.
Ikuya Murakami, Shinsuke Shimojo
Published: 31 October 1993
Vision Research, Volume 33, pp 2091-2107; https://doi.org/10.1016/0042-6989(93)90008-k

The publisher has not yet granted permission to display this abstract.
R.E. Fredericksen, F.A.J. Verstraten, W.A. Van De Grind
Published: 30 June 1993
Vision Research, Volume 33, pp 1193-1205; https://doi.org/10.1016/0042-6989(93)90208-e

The publisher has not yet granted permission to display this abstract.
W.A. Van De Grind, J.J. Koenderink, A.J. Van Doorn, M.V. Milders, H. Voerman
Published: 31 May 1993
Vision Research, Volume 33, pp 1089-1107; https://doi.org/10.1016/0042-6989(93)90242-o

The publisher has not yet granted permission to display this abstract.
Bennett I Bertenthal, Tom Banton, Anne Bradbury
Published: 1 February 1993
Perception, Volume 22, pp 193-207; https://doi.org/10.1068/p220193

Abstract:
Recent findings suggest that the visual system is biased by its past stimulation to detect one direction of motion over others. Three experiments were designed to investigate whether this bias is mediated by the direction or by the velocity of the past stimulation, and whether this bias is offset by contradictory pattern or depth information. Observers were presented with two solid or random-dot patterns that moved across a display screen in antiphase. As the two patterns reached the center of the screen, they became superimposed in such a way that their subsequent directions were ambiguous. Results from experiment 1 showed that the probability of perceiving these patterns as continuing to move in the same directions was significantly greater when they moved at a constant velocity than when they moved at a variable velocity. Results from experiments 2 and 3 revealed that this directional bias was reversed only gradually as an increasing amount of contradictory pattern information was introduced, but that this reversal was quite abrupt when a relatively small amount of contradictory depth information was introduced. Collectively, these results suggest that a directional bias in the perception of moving patterns is mediated not only by the direction of the previous stimulation, but also by the velocity of that stimulation. Moreover, the analyses of pattern and motion information appear relatively independent during the early stages of visual processing, but the analyses of depth and motion information appear considerably more interdependent.
, A. M. L. Kappers, Jan J. Koenderink
Published: 1 November 1992
Perception & Psychophysics, Volume 51, pp 569-579; https://doi.org/10.3758/bf03211654

The publisher has not yet granted permission to display this abstract.
William H. Warren, Kenneth J. Kurtz
Published: 1 September 1992
Perception & Psychophysics, Volume 51, pp 443-454; https://doi.org/10.3758/bf03211640

The publisher has not yet granted permission to display this abstract.