Publication Title

PLoS Computational Biology

Document Type

Article

Department or Program

Neuroscience

Publication Date

7-1-2018

Abstract

Visual scene category representations emerge very rapidly, yet the computational transformations that enable such invariant categorizations remain elusive. Deep convolutional neural networks (CNNs) perform visual categorization at near human-level accuracy using a feedforward architecture, providing neuroscientists with the opportunity to assess one successful series of representational transformations that enable categorization in silico. The goal of the current study is to assess the extent to which sequential scene category representations built by a CNN map onto those built in the human brain as assessed by high-density, time-resolved event-related potentials (ERPs). We found correspondence both over time and across the scalp: earlier (0–200 ms) ERP activity was best explained by early CNN layers at all electrodes. Although later activity at most electrode sites corresponded to earlier CNN layers, activity in right occipito-temporal electrodes was best explained by the later, fully-connected layers of the CNN around 225 ms post-stimulus, along with similar patterns in frontal electrodes. Taken together, these results suggest that scene category representations emerge through a dynamic interplay between early activity over occipital electrodes and later activity over temporal and frontal electrodes.

PubMed ID

30040821

Copyright Note

This is the publisher's version of the work. This publication appears in Bates College's institutional repository by permission of the copyright owner for personal use, not for redistribution.

Required Publisher's Statement

Original version is available from the publisher at: https://doi.org/10.1371/journal.pcbi.1006327
