Monday, August 5, 2019

Response-level processing during visual feature search: Effects of frontoparietal activation and adult age

Abstract

Previous research suggests that feature search performance is relatively resistant to age-related decline. However, little is known regarding the neural mechanisms underlying the age-related constancy of feature search. In this experiment, we used a diffusion decision model of reaction time (RT) and event-related functional magnetic resonance imaging (fMRI) to investigate age-related differences in response-level processing during visual feature search. Participants were 80 healthy, right-handed, community-dwelling individuals, 19–79 years of age. Analyses of search performance indicated that targets accompanied by response-incompatible distractors were associated with a significant increase in the nondecision-time (t0) model parameter, possibly reflecting the additional time required for response execution. Nondecision time increased significantly with increasing age, but no age-related effects were evident in drift rate, cautiousness (boundary separation, a), or in the specific effects of response compatibility. Nondecision time was also associated with a pattern of activation and deactivation in frontoparietal regions. The relation of age to nondecision time was indirect, mediated by this pattern of frontoparietal activation and deactivation. Response-compatible and -incompatible trials were associated with specific patterns of activation in the medial and superior parietal cortex, and frontal eye field, but these activation effects did not mediate the relation between age and search performance. These findings suggest that, in the context of a highly efficient feature search task, the age-related influence of frontoparietal activation is operative at a relatively general level, which is common to the task conditions, rather than at the response level specifically.
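The diffusion decision model named in this abstract decomposes RT into evidence accumulation (drift rate v) between two decision boundaries (separation a) plus a nondecision component (t0) for encoding and response execution. A minimal single-trial simulation of that generative process is sketched below; this is a generic textbook formulation, not the authors' fitting procedure, and the parameter values are hypothetical.

```python
import random

def simulate_ddm_trial(v, a, t0, dt=0.001, sigma=1.0, seed=None):
    """Simulate one diffusion-decision-model trial.

    v  : drift rate (speed of evidence accumulation)
    a  : boundary separation (cautiousness)
    t0 : nondecision time in seconds (encoding + response execution)
    Returns (response, rt): response is 1 if the upper boundary was hit,
    0 for the lower boundary; rt is decision time plus t0.
    """
    rng = random.Random(seed)
    x = a / 2.0          # unbiased starting point midway between boundaries
    t = 0.0
    while 0.0 < x < a:   # accumulate noisy evidence until a boundary is hit
        x += v * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= a else 0), t + t0
```

In this framework, the age effect reported above corresponds to a larger t0, shifting the whole RT distribution later without changing v or a.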

The role of foreign accent and short-term exposure in speech-in-speech recognition

Abstract

Daily speech communication often takes place in suboptimal listening conditions, in which interlocutors typically need to segregate the target signal from the background sounds. The present study investigated the influence on speech recognition of a relatively familiar foreign accent in background speech (Exp. 1) and whether short-term immediate exposure to the target talker’s voice (Exp. 2) or the background babble (Exp. 3) would either help or hinder the segregation of target from background. A total of 72 native Dutch participants were asked to listen to Dutch target sentences in the presence of Dutch or German-accented Dutch babble without (Exp. 1) or with (Exps. 2 and 3) an exposure phase. Their task was to write down what they heard. The results of Experiment 1 revealed that listeners gained a release from masking when the background speech was accented, indicating that dissimilar and less familiar signals are easier to segregate effectively. Experiment 2 demonstrated that short-term immediate exposure to the target talker had no effect on speech-in-speech recognition, whereas exposure to the background babble could hinder separating the target voice from the background speech (Exp. 3). However, this reduced release from masking only appeared in the more difficult and more familiar babble condition (Dutch in Dutch), in which the speech recognition system may have remained attuned to the babble as a potential source of communicatively relevant information. Overall, this research provides evidence that both short-term adaptation and the degrees of target–background similarity and familiarity are of importance for speech-in-speech recognition.

Invalidly cued targets are well localized when detected

Abstract

Considerable attention has been devoted to understanding how objects are localized when there is ample time and attention to detect them. However, in the real world, we often must react to, or act upon, objects that we have glimpsed only briefly and are not directly at the focus of our attention. This paper describes two experiments examining the role of attentional constraints on 2-D (directional) localization, particularly in cases in which targets have been detected but are not within the spatial focus of attention. Targets were asterisks presented briefly (34–150 ms) above or below a central fixation point. Just prior to the target’s appearance, a cue directed attention toward, or away from, the target. Participants indicated whether or not they saw the target, and then used a mouse to indicate the target’s location. The impact of guessing was mitigated by removing trials that participants had flagged as not detected. Longer glimpses generally benefitted localization; by contrast, cue validity had very little effect on response sensitivity, bias or precision. At very brief durations, invalid cueing did result in a small increase in foveal bias. These results indicate that the directional location of objects can be extracted reasonably well from brief glimpses even with reduced attention. This directional information provides an important basis for 3-D localization of objects on the ground, via their angular declination. The current studies suggest that egocentric distance perception might be similarly robust to reduced attention when localization is based primarily on a target’s angular declination.

Letter migration errors reflect spatial pooling of orthographic information

Abstract

Prior research has shown that readers may misread words by switching letters across words (e.g., the word sand in sand lane being recognized as land). These so-called letter migration errors have been observed using a divided attention paradigm whereby two words are briefly presented simultaneously, and one is postcued for identification. Letter migrations might therefore be due to a task-induced division of attention across the two words. Here, we show that a similar rate of migration errors is obtained in a flanker paradigm in which a central target word is flanked to the left and to the right by task-irrelevant flanking words. Three words were simultaneously presented for the same brief duration. Asked to type the target word postoffset, participants produced more migration errors when the migrating letter occupied the same position in the flanker and target words, with significantly fewer migrations occurring across adjacent positions, and the effect disappearing across nonadjacent positions. Our results provide further support for the hypothesis that orthographic information spanning multiple words is processed in parallel and spatially integrated (pooled) within a single channel. It is the spatial pooling of sublexical orthographic information that is thought to drive letter migration errors.

Investigating the role of verbal templates in contingent capture by color

Abstract

To investigate if top-down contingent capture by color cues relies on verbal or semantic templates, we combined different stimuli representing colors physically or semantically in six contingent-capture experiments. In contingent capture, only cues that match the top-down search templates lead to validity effects (shorter search times and fewer errors for validly than for invalidly cued targets) resulting from attentional capture by the cue. We compared validity effects of color cues and color-word cues in top-down search for color targets (Experiment 1a) and color-word targets (Experiment 2). We also compared validity effects of color cues and color-associated symbolic cues during search for color targets (Experiment 1b) and of color-word cues during search for both color and color-word targets (Experiment 3). Only cues of the same stimulus category as the target (either color or color-word cues) captured attention. This makes it unlikely that color search is based on verbal or semantic search templates. Additionally, the validity effect of matching color-word cues during search for color-word targets was neither changed by cue-target graphic (font) similarity versus dissimilarity (Experiment 4) nor by articulatory suppression (Experiment 5). These results suggested either a phonological long-term memory template or an orthographically mediated effect of the color-word cues during search for color-words. Altogether, our findings are in line with a pronounced role of color-based templates during contingent capture by color and do not support semantic or verbal influences in this situation.

“Paying” attention to audiovisual speech: Do incongruent stimuli incur greater costs?

Abstract

The McGurk effect is a multisensory phenomenon in which discrepant auditory and visual speech signals typically result in an illusory percept. McGurk stimuli are often used in studies assessing the attentional requirements of audiovisual integration, but no study has directly compared the costs associated with integrating congruent versus incongruent audiovisual speech. Some evidence suggests that the McGurk effect may not be representative of naturalistic audiovisual speech processing – susceptibility to the McGurk effect is not associated with the ability to derive benefit from the addition of the visual signal, and distinct cortical regions are recruited when processing congruent versus incongruent speech. In two experiments, one using response times to identify congruent and incongruent syllables and one using a dual-task paradigm, we assessed whether congruent and incongruent audiovisual speech incur different attentional costs. We demonstrated that response times to both the speech task (Experiment 1) and a secondary vibrotactile task (Experiment 2) were indistinguishable for congruent compared to incongruent syllables, but McGurk fusions were responded to more quickly than McGurk non-fusions. These results suggest that despite documented differences in how congruent and incongruent stimuli are processed, they do not appear to differ in terms of processing time or effort, at least in the open-set speech task used here. However, responses that result in McGurk fusions are processed more quickly than those that result in non-fusions, though attentional cost is comparable for the two response types.

Holistic word context does not influence holistic processing of artificial objects in an interleaved composite task

Abstract

Holistic processing, a hallmark of expert processing, has been shown for written words, signaled by the word composite effect, similar to the face composite effect: fluent readers find it difficult to focus on just one half of a written word while ignoring the other half, especially when the two word halves are aligned rather than misaligned. This effect is signaled by a significant interaction between alignment and congruency of the two word parts. Face and visual word recognition, however, involve different neural mechanisms with an opposite hemispheric lateralization. It is then possible that faces and words can both involve holistic processing in their own separate face and word processing systems, but by using different mechanisms. In the present study, we replicated with words a previous study done with faces (Richler, Bukach, & Gauthier, 2009, Experiment 3). In a first experiment we showed that in a composite task with aligned artificial objects, no congruency effects are found. In a second experiment, using an interleaved task, a congruency effect for Ziggerins was induced in trials in which a word was first encoded, but more strongly when it was aligned. However, in a stricter test, we found no differences between the congruency effect for Ziggerins induced by aligned words versus pseudowords. Our results demonstrate that different mechanisms can underlie holistic processing in different expertise domains.

Common or distinct attention mechanisms for contrast and assimilation?

Abstract

The ability to inhibit distractors while focusing on specific targets is crucial. In most tasks, like Stroop or priming, the to-be-ignored distractors affect the response to be more like the distractors. We call this assimilation. Yet, in some tasks, the opposite holds. Contrast occurs when the response is caused to be least like the distractors. Contrast and assimilation are opposing behavioral effects, but they both occur when to-be-ignored information affects judgments. We ask here whether inhibition across contrastive and assimilative tasks is common or distinct. Assimilation and contrast are often thought to have different underlying psychological mechanisms, and we use a correlational analysis with hierarchical Bayesian models as a test of this hypothesis. We designed tasks with large assimilation or contrast effects. The stimuli are morphed letters, and whether there is contrast or assimilation depends on whether the surrounding information is a letter field (contrast) or a word field (assimilation). Critically, a positive correlation was found—individuals who better inhibited contrast-inducing contexts also better inhibited assimilation-inducing contexts. These results indicate that inhibition is common, at least in part, across contrast and assimilation tasks.

What makes a prototype a prototype? Averaging visual features in a sequence

Abstract

After viewing a series of sequentially presented visual stimuli, subjects can readily generate mean representations of various visual features. Although these average representations seem to be formed effortlessly and obligatorily, it is unclear how such averages are actually computed. According to conventional prototype models, the computation entails an equally weighted average taken over all the stimuli. To test this hypothesis, we had subjects estimate the running averages of some feature in a series of sequentially presented stimuli. Part way through the series, we perturbed the distribution from which stimuli were drawn, which allowed us to test alternative models of the computations behind subjects’ estimates. In both explanatory and predictive tests, a model in which the most recent items had disproportionately high weight outperformed a model in which all items carried equal weight. Such recency-weighted behavior was shown consistently in multiple experiments in which subjects estimated running averages of the lengths of vertical lines. However, the degree to which recent items were prioritized varied with the type of stimulus, such that when estimating the running averages of a series of numerals, subjects showed less recency prioritization. We conclude that previous evaluations of prototype models have made unrealistic assumptions about the nature of a prototype, and that a reassessment of prototype models of visual memory and perceptual categorization may be in order.
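The two competing models in this abstract can be captured in a few lines: an equal-weight mean (the conventional prototype account) versus an exponentially weighted running average, a standard way to give recent items disproportionate weight. This is an illustrative sketch, not the authors' fitted model, and the learning-rate value is hypothetical.

```python
def equal_weight_average(xs):
    """Conventional prototype model: every item carries equal weight."""
    return sum(xs) / len(xs)

def recency_weighted_average(xs, alpha=0.3):
    """Recency-weighted running average: each new item pulls the estimate
    toward itself by a fraction alpha, so recent items dominate.
    alpha near 0 approaches equal weighting over long series;
    alpha near 1 tracks only the latest item."""
    estimate = xs[0]
    for x in xs[1:]:
        estimate += alpha * (x - estimate)
    return estimate
```

After a mid-series perturbation of the stimulus distribution (say, line lengths jumping from around 10 units to around 20), the recency-weighted estimate moves toward the new distribution much faster than the equal-weight mean, which is the signature the explanatory and predictive tests distinguish.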

The nature of visual awareness at stimulus energy and feature levels: A backward masking study

Abstract

The level of processing (LoP) hypothesis proposes that low-level stimulus perception (i.e., stimulus energy and features) is a graded process whereas high-level (i.e., letters, words, meaning) stimulus perception is all-or-none. In the present study, we set up a visual masking design in order to examine the nature of visual awareness at stimulus energy (i.e., detection task) and feature levels (identification task) at specific individual target durations (13, 27, 40, 53, and 80 ms). We manipulated the strength of the masking to produce different visibility conditions and gathered participants’ subjective (across a 4-point awareness scale) and objective (accuracy levels) awareness performances. We found that intermediate ratings (i.e., ratings 2 and 3, which index graded awareness experiences) were used in more than 50% of the trials for target presentations of 27, 40, 53, and 80 ms. In addition, objective accuracy performances for target presentations of 27 and 80 ms produced linearly increasing detection and identification accuracies across the awareness scale categories, respectively. Overall, our results suggest that visual awareness at energy and feature levels of stimulus perception may be graded. Furthermore, we found a divergence in detection and identification performance results, which emphasizes the need for an adequate selection of target durations when studying different perceptual processes such as detection versus more complex stimulus identification processes. Finally, “clarity” in the perceptual awareness scale should be exhaustively defined depending on the level of processing of the stimulus, as participants may recalibrate the meaning of the different awareness categories depending on task demands.
