
Wednesday, 5 February 2020

Attention, Perception, & Psychophysics

Correction to: Desirable and undesirable difficulties: Influences of variability, training schedule, and aptitude on nonnative phonetic learning
Due to a production error, some IPA symbols were not included. The original article has been corrected.

The attractiveness of salient distractors to reaching movements is task dependent

Abstract

Previous studies in visual attention and oculomotor research showed that a physically salient distractor does not always capture attention or the eyes. Under certain top-down task sets, a salient distractor can be actively suppressed, avoiding capture. Even though previous studies showed that reaching movements are also influenced by salient distractors, it is unclear if and how a mechanism of active suppression of distractors would affect reaching movements. Active suppression might also explain why some studies find reaching movements to curve towards a distractor, while others find reaching movements to curve away. In this study, we varied the top-down task set in two separate experiments by manipulating the certainty about the target location. Participants had to reach for a diamond presented among three circles. In Experiments 1 and 3, participants had to search for the reach targets; hence, the target’s location certainty was low. In Experiments 2 and 3, the target’s location was cued before the reach; hence, the target’s location certainty was high. We found that reaches curved towards the physically salient, color-singleton distractor in the search-to-reach task (Experiments 1 and 3), but not in the cued reach task (Experiments 2 and 3). Thus, the saliency of the distractor only attracted reaching movements when the certainty of the target’s location was low. Our findings suggest that the attractiveness of physically salient distractors to reaching movements depends on the top-down task set. The results can be explained by the effect of active attentional suppression on the competition between movement plans.

A target contrast signal theory of parallel processing in goal-directed search

Abstract

Feature Integration Theory (FIT) laid the groundwork for much of the work in visual cognition since its publication. One of the most important legacies of this theory has been the emphasis on feature-specific processing. Nowadays, visual features are thought of as a sort of currency of visual attention (e.g., features can be attended, processing of attended features is enhanced), and attended features are thought to guide attention towards likely targets in a scene. Here we propose an alternative theory, the Target Contrast Signal Theory, based on the idea that when we search for a specific target, it is not the target-specific features that guide our attention towards the target; rather, what determines behavior is the result of an active comparison between the target template in mind and every element present in the scene. This comparison occurs in parallel and is aimed at rejecting from consideration items that peripheral vision can confidently reject as being non-targets. The speed at which each item is evaluated is determined by the overall contrast between that item and the target template. We present computational simulations to demonstrate the workings of the theory as well as eye-movement data that support core predictions of the theory. The theory is discussed in the context of FIT and other important theories of visual search.
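The core computational idea (parallel rejection, with rejection speed set by item-template contrast) can be sketched in a few lines. This is a minimal illustration with a hypothetical parameterisation, not the authors' actual simulation code: items are represented as feature vectors, and the time to reject a non-target falls as its contrast with the target template rises.

```python
import numpy as np

def rejection_times(items, template, base=50.0, scale=400.0):
    """Per-item rejection time: low target contrast -> slow rejection.

    `base` and `scale` are arbitrary illustrative constants (ms).
    """
    items = np.asarray(items, dtype=float)
    template = np.asarray(template, dtype=float)
    # Overall contrast = distance between each item and the target template
    contrast = np.linalg.norm(items - template, axis=1)
    # Evaluation speed is inversely related to contrast
    return base + scale / (contrast + 1e-6)

template = [1.0, 0.0]                # target feature vector (hypothetical)
items = [[0.9, 0.1], [0.2, 0.8]]     # target-similar vs. dissimilar distractor
t = rejection_times(items, template)
# The target-similar item (low contrast) takes longer to reject
# than the dissimilar one, as the theory predicts.
```

All items are scored in a single vectorised pass, mirroring the theory's claim that the comparison occurs in parallel across the scene.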

Slow and fast beat sequences are represented differently through space

Abstract

The Spatial-Numerical Association of Response Codes (SNARC) suggests the existence of an association between number magnitude and response position, with faster left-hand responses to small numbers and faster right-hand responses to large numbers. Recent studies have revealed similar spatial association effects for non-numerical magnitudes, such as temporal durations and musical stimuli. In the present study we investigated whether a spatial association effect exists between music tempo, expressed in beats per minute (bpm), and response position. In particular, we were interested in whether this effect is consistent across different bpm ranges. We asked participants to judge whether a target beat sequence was faster or slower than a reference sequence. Three groups of participants judged beat sequences from three different bpm ranges, a wide range (40, 80, 160, 200 bpm) and two narrow ranges (“slow” tempo, 40, 56, 88, 104 bpm; “fast” tempo 133, 150, 184, 201 bpm). Results showed a clear SNARC-like effect for music tempo only in the narrow “fast” tempo range, with faster left-hand responses to 133 and 150 bpm and faster right-hand responses to 184 and 201 bpm. Conversely, no such association emerged in either the wide or the narrow “slow” tempo range. This evidence suggests that music tempo is spatially represented like other continuous quantities, but its representation might be narrowed to a particular range of tempos. Moreover, music tempo and temporal duration might be represented across space in opposite directions.
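SNARC-like effects of this kind are conventionally quantified by regressing the right-minus-left hand reaction-time difference (dRT) on stimulus magnitude; a negative slope indicates a growing right-hand advantage for larger magnitudes. A sketch of that standard analysis, using made-up mean reaction times rather than the study's data:

```python
import numpy as np

# Tempi from the "fast" range reported in the abstract
bpm = np.array([133.0, 150.0, 184.0, 201.0])

# Hypothetical mean reaction times (ms) per hand, invented for illustration
rt_left = np.array([480.0, 485.0, 510.0, 515.0])
rt_right = np.array([505.0, 500.0, 490.0, 480.0])

# dRT = right-hand RT minus left-hand RT for each tempo
drt = rt_right - rt_left

# Linear fit dRT ~ bpm; a negative slope means right-hand responses
# speed up relative to left-hand responses as tempo increases,
# i.e. a SNARC-like pattern
slope, intercept = np.polyfit(bpm, drt, 1)
```

In group analyses, one slope is typically estimated per participant and the slopes are then tested against zero.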

Loads of unconscious processing: The role of perceptual load in processing unattended stimuli during inattentional blindness

Abstract

Inattentional blindness describes the failure to detect an unexpected but clearly visible object when our attention is engaged elsewhere. While the factors that determine the occurrence of inattentional blindness are already well understood, there is still a lot to learn about whether and how we process unexpected objects that go unnoticed. Only recently was it shown that, even without conscious awareness, characteristics of these stimuli can interfere with a primary task: Classification of to-be-attended stimuli was slower when the content of the task-irrelevant, undetected stimulus contradicted that of the attended, to-be-judged stimuli. According to Lavie’s perceptual load model, irrelevant stimuli are likely to reach awareness under conditions of low perceptual load, while they remain undetected under high load, as attentional resources are restricted to the content of focused attention. In the present study, we investigated the applicability of Lavie’s predictions for the processing of stimuli that remain unconscious due to inattentional blindness. In two experiments, we replicated that unconsciously processed stimuli can interfere with intended responses, and our manipulation of perceptual load did have an effect on primary-task performance. However, contrary to our hypothesis, these effects did not interact. Thus, our results suggest that high perceptual load cannot prevent task-irrelevant stimuli that remain undetected from being processed to an extent that enables them to affect performance in a primary task.

Understanding the visual perception of awkward body movements: How interactions go awry

Abstract

Dyadic interactions can sometimes elicit a disconcerting response from viewers, generating a sense of “awkwardness.” Despite the ubiquity of awkward social interactions in daily life, it remains unknown what visual cues signal the oddity of human interactions and yield the subjective impression of awkwardness. In the present experiments, we focused on a range of greeting behaviors (handshake, fist bump, high five) to examine both the inherent objectivity and impact of contextual and kinematic information in the social evaluation of awkwardness. In Experiment 1, participants were asked to discriminate whether greeting behaviors presented in raw videos were awkward or natural, and if judged as awkward, participants provided verbal descriptions regarding the awkward greeting behaviors. Participants showed consensus in judging awkwardness from raw videos, with a high proportion of congruent responses across a range of awkward greeting behaviors. We also found that people used social-related and motor-related words in their descriptions of awkward interactions. Experiment 2 employed advanced computer vision techniques to present the same greeting behaviors in three different display types. All display types preserved kinematic information, but varied contextual information: (1) patch displays presented blurred scenes composed of patches; (2) body displays presented human body figures on a black background; and (3) skeleton displays presented skeletal figures of moving bodies. Participants rated the degree of awkwardness of greeting behaviors. Across display types, participants consistently discriminated awkward and natural greetings, indicating that the kinematics of body movements plays an important role in guiding awkwardness judgments. Multidimensional scaling analysis based on the similarity of awkwardness ratings revealed two primary cues: motor coordination (which accounted for most of the variability in awkwardness judgments) and social coordination. We conclude that the perception of awkwardness, while primarily inferred on the basis of kinematic information, is additionally affected by the perceived social coordination underlying human greeting behaviors.

Spatial filtering restricts the attentional window during both singleton and feature-based visual search

Abstract

We investigated whether spatial filtering can restrict attentional selectivity during visual search to a currently task-relevant attentional window. While effective filtering has been demonstrated during singleton search, feature-based attention is believed to operate spatially globally across the entire visual field. To test whether spatial filtering depends on search mode, we assessed its efficiency both during feature-guided search with colour-defined targets and during singleton search tasks. Search displays were preceded by spatial cues. Participants responded to target objects at cued/relevant locations, and ignored them when they appeared on the uncued/irrelevant side. In four experiments, electrophysiological markers of attentional selection and distractor suppression (N2pc and PD components) were measured for relevant and irrelevant target-matching objects. During singleton search, N2pc components were triggered by relevant target singletons, but were entirely absent for singletons on the irrelevant side, demonstrating effective spatial filtering. Critically, similar results were found for feature-based search. N2pcs to irrelevant target-colour objects were either absent or strongly attenuated (when these objects were salient), indicating that the feature-based guidance of visual search can be restricted to relevant locations. The presence of PD components to salient objects on the irrelevant side during feature-based and singleton search suggests that spatial filtering involves active distractor suppression. These results challenge the assumption that feature-based attentional guidance is always spatially global. They suggest instead that when advance information about target locations becomes available, effective spatial filtering processes are activated transiently not only in singleton search, but also during search for feature-defined targets.

Task-related gaze control in human crowd navigation

Abstract

Human crowds provide an interesting case for research on the perception of people. In this study, we investigate how visual information is acquired for (1) navigating human crowds and (2) seeking out social affordances in crowds by studying gaze behavior during human crowd navigation under different task instructions. Observers (n = 11) wore head-mounted eye-tracking glasses and walked two rounds through hallways containing walking crowds (n = 38) and static objects. In round one, observers were instructed to avoid collisions. In round two, observers additionally had to indicate with a button press whether oncoming people made eye contact. Task performance (walking speed, absence of collisions) was similar across rounds. Fixation durations indicated that heads, bodies, objects, and walls held gaze for comparably long durations; only crowds in the distance held gaze relatively longer. We find no compelling evidence that human bodies and heads hold one’s gaze more than objects while navigating crowds. When eye contact was assessed, heads were fixated more often and for a longer total duration, which came at the cost of looking at bodies. We conclude that gaze behavior in crowd navigation is task-dependent, and that not every fixation is strictly necessary for navigating crowds. When explicitly tasked with seeking out potential social affordances, gaze is modulated as a result. We discuss our findings in the light of current theories and models of gaze behavior. Furthermore, we show that in a head-mounted eye-tracking study, a large degree of experimental control can be maintained while many degrees of freedom on the side of the observer remain.

Training attenuates the influence of sensory uncertainty on confidence estimation

Abstract

Confidence is typically correlated with perceptual sensitivity, but these measures are dissociable. For example, confidence judgements are disproportionately affected by the variability of sensory signals. Here, in a preregistered study we investigate whether this signal variability effect on confidence can be attenuated with training. Participants completed five sessions where they viewed pairs of motion kinematograms and performed comparison judgements on global motion direction, followed by confidence ratings. In pre- and post-training sessions, the range of direction signals within each stimulus was manipulated across four levels. Participants were assigned to one of three training groups, differing as to whether signal range was varied or fixed during training, and whether or not trial-by-trial accuracy feedback was provided. The effect of signal range on confidence was reduced following training, but this result was invariant across the training groups, and did not translate to improved metacognitive insight. These results suggest that the influence of suboptimal heuristics on confidence can be mitigated through experience, but this shift in confidence estimation remains coarse, without improving insight into confidence estimation at the level of individual decisions.

Dyadic and triadic search: Benefits, costs, and predictors of group performance

Abstract

In daily life, humans often perform visual tasks, such as solving puzzles or searching for a friend in a crowd. Performing these visual searches jointly with a partner can be beneficial: The two task partners can devise effective division of labor strategies and thereby outperform individuals who search alone. To date, it is unknown whether these group benefits scale up to triads or whether the cost of coordinating with others offsets any potential benefit for group sizes above two. To address this question, we compare participants’ performance in a visual search task that they perform either alone, in dyads, or in triads. When the search task is performed jointly, co-actors receive information about each other’s gaze location. After controlling for speed–accuracy trade-offs, we found that triads searched faster than dyads, suggesting that group benefits do scale up to triads. Moreover, we found that triads divided the search space in accordance with the co-actors’ individual search performances but searched less efficiently than dyads. We also present a linear model to predict group benefits, which accounts for 70% of the variance. The model includes our experimental factors and a set of non-redundant predictors, quantifying the similarities in the individual performances, the collaboration between co-actors, and the estimated benefits that co-actors would attain without collaborating. Overall, the present study demonstrates that group benefits scale up to larger group sizes, but the additional gains are attenuated by the increased costs associated with devising effective division of labor strategies.
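A linear model of this kind, where a group-level outcome is regressed on a handful of predictors and evaluated by the proportion of variance explained, can be sketched with ordinary least squares. The predictors and data below are simulated for illustration only; they are not the study's variables or values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical number of groups

# Three standardised predictors, e.g. performance similarity,
# collaboration, and estimated no-collaboration benefit (all invented)
X = rng.normal(size=(n, 3))
true_beta = np.array([0.8, 0.5, 0.3])

# Simulated group benefit: linear signal plus noise
y = X @ true_beta + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)

# Proportion of variance explained by the fitted model
resid = y - Xd @ beta
r2 = 1 - resid.var() / y.var()
```

With strong simulated effects and moderate noise, the fit explains most of the variance, analogous to the 70% reported in the abstract (though the resemblance is by construction here, not evidence).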
