
Saturday, July 20, 2019

Attention, Perception, & Psychophysics

Multisensory feature integration in (and out) of the focus of spatial attention

Abstract

Anne Treisman transformed the way in which we think about visual feature integration. However, that does not mean that she was necessarily right, nor that she looked much beyond vision when considering how features might be bound together into perceptual objects. While such a narrow focus undoubtedly makes sense, given the complexity of human multisensory information processing, it is nevertheless somewhat surprising to find that Treisman herself never extended her feature integration theory outside of the visual modality. After all, she first cut her ‘attentional teeth’ thinking about problems of auditory and audiovisual selective attention. In this article, we review the literature concerning feature integration beyond the visual modality, concentrating, in particular, on the integration of features from different sensory modalities. We highlight a number of the challenges as far as any straightforward attempt to extend feature integration to the non-visual (i.e. auditory and tactile) and cross-modal (or multisensory) cases is concerned. These challenges include the problem of how basic features should be defined, the question of whether it even makes sense to talk of objects of perception in the auditory and olfactory modalities, the possibility of integration outside of the focus of spatial attention, and the integration of features from different sensory modalities in the control of action. Nevertheless, despite such limitations, Treisman’s feature integration theory still stands as the standard approach against which alternatives are assessed, be it in the visual case or, increasingly, beyond.

A theoretical attempt to revive the serial/parallel-search dichotomy

Abstract

A core distinction in Anne Treisman’s feature-integration theory (FIT) is that between parallel and serial search. We outline this dichotomy and selectively review the reasons why it has largely been abandoned in the visual-search community—namely, its theoretical dispensability, failure to find reliable yardsticks for differentiating parallel and serial search, and falsification of core predictions. We then go on to introduce a new theoretical framework that, we argue, clears up some of the theoretical confusion by merging FIT with various competing theories. This framework’s core feature is the distinction between and characterization of two fundamentally different search modes: one in which attention is guided to a single item via a priority map (priority guidance), and one in which clumps of multiple items are scanned in parallel in a spatially systematic order (clump scanning). Finally, we elaborate how this new theoretical framework can resolve current controversies in the literature and how it relates to other existing theories. We (somewhat optimistically) believe that the outcome of this theoretical exercise is a unification of theories of visual search that can explain, or at least is consistent with, all phenomena reported in the visual-search literature that have previously been accounted for by various conflicting theories.

Using the flicker task to estimate visual working memory storage capacity

Abstract

Studies of visual working memory (VWM) typically have used a “one-shot” change detection task to arrive at a capacity estimate of three to four objects, with additional limits imposed by the precision of the information needed for each object. Unlike the one-shot task, the flicker change detection task permits measurement of VWM capacity over time and with larger numbers of objects present in the scene, but it has rarely been used to assess the capacity of VWM. We used the flicker task to examine (a) whether capacity is close to the typical three to four items when using subtly different stimuli; (b) which dependent measure provides the most meaningful estimate of the capacity of VWM in the flicker task (response time or number of changes viewed); (c) whether capacity remains fixed at three to four items for displays containing many more objects; and (d) how VWM operates over time, with repeated opportunities to encode, retain, and compare elements in a display. Four experiments using grids of simple items varying only in luminance or color revealed a range for VWM capacity limits that was largely impervious to changes in display duration, interstimulus intervals, and array size. This estimate of VWM capacity was correlated with an estimate from the more typical one-shot task, further validating the flicker task as a tool for measuring the capacity of VWM.
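The abstract does not say which estimator was used to convert change-detection accuracy into a capacity figure; a standard choice for single-probe one-shot tasks is Cowan's K, which scales the hit/false-alarm difference by set size. A minimal sketch, assuming that estimator (the function name and example rates are illustrative, not from the paper):

```python
def cowans_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """Estimate VWM capacity via Cowan's K: K = N * (H - FA).

    set_size: number of items (N) in the memory array.
    hit_rate: proportion of changes correctly detected (H).
    false_alarm_rate: proportion of no-change trials reported as changed (FA).
    """
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative numbers: 8 items, 80% hits, 30% false alarms
# gives K = 8 * 0.5 = 4 items, inside the typical 3-4 item range.
print(cowans_k(8, 0.80, 0.30))
```

Under this formula, capacity estimates plateau around 3-4 even as the array grows, which is the pattern the one-shot literature reports and the flicker experiments here re-examine with larger displays.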

In search of exogenous feature-based attention

Abstract

Visual attention prioritizes the processing of sensory information at specific spatial locations (spatial attention; SA) or with specific feature values (feature-based attention; FBA). SA is well characterized in terms of behavior, brain activity, and temporal dynamics—for both top-down (endogenous) and bottom-up (exogenous) spatial orienting. FBA has been thoroughly studied in terms of top-down endogenous orienting, but much less is known about potential bottom-up, exogenous influences on FBA. Here, in four experiments, we adapted a procedure used in two previous studies that reported exogenous FBA effects, with the goal of replicating and expanding on these findings, especially regarding their temporal dynamics. Unlike the two previous studies, we did not find significant effects of exogenous FBA. This was true (1) whether accuracy or RT was prioritized as the main measure, (2) with precues presented peripherally or centrally, (3) with cue-to-stimulus ISIs of varying durations, (4) with four or eight possible target locations, (5) at different meridians, (6) with either brief or long stimulus presentations, and (7) with either fixation-contingent or noncontingent stimulus displays. In the last experiment, a postexperiment participant questionnaire indicated that only a small subset of participants, who mistakenly believed the irrelevant color of the precue indicated which stimulus was the target, exhibited benefits for valid exogenous FBA precues. Overall, we conclude that with the protocol used in the studies reporting exogenous FBA, the exogenous stimulus-driven influence of FBA is elusive at best, and that FBA is primarily a top-down, goal-driven process.

Conjunction search: Can we simultaneously bias attention to features and relations?

Abstract

Attention allows selection of sought-after objects when it is tuned in a top-down manner to task-relevant features. Among other possible search modes, attention can be tuned to the exact feature values of a target (e.g., red, large), or to the relative target feature (e.g., reddest, largest item), in which case selection is context dependent. The present study tested whether we can tune attention simultaneously to a specific feature value (e.g., specific size) and a relative target feature (e.g., relative color) of a conjunction target, using a variant of the spatial cueing paradigm. Tuning to the specific feature of the target was encouraged by randomly presenting the conjunction target in a varying context of nontarget items, and feature-specific versus relational tuning was assessed by briefly presenting conjunction cues that either matched or mismatched the relative versus physical features of the target. The results showed that attention could be biased to the specific size and the relative color of the conjunction target, or vice versa. These results suggest the existence of local and relatively low-level attentional control mechanisms that operate independently of each other in separate feature dimensions (color, size) to choose the best search strategy in line with current top-down goals.

Can unconscious sequential integration of semantic information occur when the prime Chinese characters are displayed from left to right?

Abstract

Recent studies have investigated whether conscious awareness is necessary for semantic integration. Although results have varied, simultaneous presentation of words has consistently led to greater semantic integration than sequential presentation in a single location. The current studies were designed to investigate whether the disadvantage of sequential presentation for unconscious semantic integration is specific to unfamiliar word-by-word presentation in one location or extends to the more natural reading conditions of viewing items sequentially from left to right. In Experiment 1, when the first three characters of Chinese idioms were presented simultaneously under masked conditions, performance on a separate two-alternative forced-choice recognition task was at chance level. Despite being unaware of the identity of prime characters, participants were faster to indicate that a subsequent item was a Chinese character when it was congruent with the beginning of the idiom, thus providing evidence of semantic integration. In contrast, when the three (Experiment 2) or two (Experiment 3) prime characters were presented sequentially in time from left to right, there was no evidence of semantic integration. These results indicate that unconscious semantic integration is more limited than previously reported, and may require simultaneous visual presentation.

Selection history in context: Evidence for the role of reinforcement learning in biasing attention

Abstract

Attention is biased towards learned predictors of reward. The influence of reward history on attentional capture has been shown to be context-specific: When particular stimulus features are associated with reward, these features only capture attention when viewed in the context in which they were rewarded. Selection history can also bias attention, such that prior target features gain priority independently of reward history. The contextual specificity of this influence of selection history on attention has not been examined. In the present study, we demonstrate that the consequences of repetitive selection on attention robustly generalize across context, such that prior target features capture attention even in contexts in which they were never seen previously. Our findings suggest that the learning underlying attention driven by outcome-independent selection history differs qualitatively from the learning underlying value-driven attention, consistent with a distinction between associative and reinforcement learning mechanisms.

Seeing eye-to-eye: Social gaze interactions influence gaze direction identification

Abstract

We tested whether gaze direction identification of individual faces can be modulated by prior social gaze encounters. In two experiments, participants first completed a joint-gaze learning task using a saccade/antisaccade paradigm. Participants would encounter some ‘joint-gaze faces’ that would consistently look at the participants’ saccade goal before participants looked there (Experiment 1) or would follow the participants’ gaze to the target (Experiment 2). ‘Non-joint-gaze faces’ would consistently look in the opposite direction. Participants then completed a second task in which they judged the gaze direction of the faces they had previously encountered. Participants were less likely to erroneously report faces with slightly deviated gaze as looking directly at them if the face had previously never engaged in joint gaze with them. However, this bias was only present when those faces had looked first (Experiment 1) and not when the faces looked after participants (Experiment 2). Comparing these data with gaze identification responses of a control group that did not complete any joint-gaze learning phase revealed that the difference in gaze identification in Experiment 1 is likely driven by a lowering of direct gaze bias in response to non-joint-gaze faces. Thus, previous joint-gaze experiences can affect gaze direction judgements at an identity-specific level. However, this modulation may rely on the socio-cognitive information available from viewing others’ initiation behaviours, especially when they fail to engage in social contact.

Can you have multiple attentional templates? Large-scale replications of Van Moorselaar, Theeuwes, and Olivers (2014) and Hollingworth and Beck (2016)

Abstract

Stimuli that resemble the content of visual working memory (VWM) capture attention. However, theories disagree on how many VWM items can bias attention simultaneously. According to some theories, there is a distinction between active and passive states in VWM, such that only items held in an active state can bias attention. The single-item-template hypothesis holds that only one item can be in an active state and thus can bias attention. In contrast, the multiple-item-template hypothesis posits that multiple VWM items can be in an active state simultaneously, and thus can bias attention. Recently, Van Moorselaar, Theeuwes, and Olivers (Journal of Experimental Psychology: Human Perception and Performance, 40(4), 1450, 2014) and Hollingworth and Beck (Journal of Experimental Psychology: Human Perception and Performance, 42(7), 911–917, 2016) tested these accounts, but obtained seemingly contradictory results. Van Moorselaar et al. (2014) found that a distractor in a visual-search task captured attention more when it matched the content of VWM (memory-driven capture). Crucially, memory-driven capture disappeared when more than one item was held in VWM, in line with the single-item-template hypothesis. In contrast, Hollingworth and Beck (2016) found memory-driven capture even when multiple items were kept in VWM, in line with the multiple-item-template hypothesis. Considering these mixed results, we replicated both studies with a larger sample, and found that all key results are reliable. It is unclear to what extent these divergent results are due to paradigm differences between the studies. We conclude that it is crucial to our understanding of VWM to determine the boundary conditions under which memory-driven capture occurs.

The transfer of location-based control requires location-based conflict

Abstract

It is well established that stimulus-driven control of attention varies depending on the degree of conflict previously encountered in a given location. Previous research has further shown that control settings established in conflict-biased locations can transfer to nearby unbiased items. However, these spatial transfer effects have only been shown using incompatible flanking arrows (i.e., stimuli that trigger spatial information) to elicit conflict in a flanker task. Here we examine the generalizability of transfer of control by examining whether it can occur across a range of tasks. We employ a classic Stroop task (Experiment 1), a spatially segregated Stroop task (Experiment 2), and a spatial Stroop task (Experiment 3). Location-specific proportion compatibility effects were observed in all variations of the Stroop task tested; however, transfer to unbiased items occurred only in the spatial Stroop task in Experiment 3. This suggests that the transfer of cognitive control settings within spatial categories may occur only in tasks where the source of conflict is spatial, as arises in tasks with arrow and direction word stimuli.
