Eye Tracking & Social Attention Research
We use eye tracking in screen-based and real-world environments to explore how observers visually attend to their surroundings. We are particularly interested in how observers attend to social information, such as other people's faces and eyes, and how they use this information to construct social perceptions.
Real-world studies
Using a portable eye tracker (SMI ETG2) and high-definition video recordings, this line of research examines attention in real-world settings. For example, in a current project we are examining social attention (looking at the face) within a face-to-face interaction between children with autism spectrum disorder (ASD) and an adult experimenter. Children in the study started by looking at a display of toys (see below) and, after choosing their preferred toy, sat down with the experimenter and engaged in a conversation about the toy they chose.
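To give a rough sense of the kind of analysis this involves, the sketch below computes the proportion of gaze samples that fall on a face area of interest (AOI) during the interaction. The file format, column names, and AOI labels are hypothetical placeholders, not our actual pipeline; real SMI ETG2 exports are typically coded frame by frame against the scene video.

```python
# Minimal sketch: proportion of looking time on the face AOI.
# Assumes a CSV with one row per gaze sample and a hypothetical
# 'aoi' column holding labels such as 'face', 'toy', or 'other'.
import csv
from collections import Counter

def face_dwell_proportion(path):
    """Return the fraction of gaze samples landing on the 'face' AOI."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["aoi"]] += 1  # tally samples per AOI label
    total = sum(counts.values())
    return counts["face"] / total if total else 0.0

# Example (hypothetical file name):
# print(face_dwell_proportion("child01_interaction.csv"))
```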
Screen-based studies
We measure observers' eye movements while they view images (e.g., natural scenes) on a computer screen to understand the attentional mechanisms underlying social perception. In addition to measuring gaze selection (looking at the faces and eyes of others), we examine observers' tendency to follow where others are looking (gaze following) versus where nonsocial directional cues (e.g., arrows) are pointing. We are also exploring the conditions under which the seemingly automatic bias to select and follow gaze is overridden.
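As an illustration of how gaze following might be contrasted with arrow following in this kind of task, the sketch below computes, for each cue type, the proportion of trials on which the first saccade after cue onset went toward the cued side. The trial structure and field names are assumptions for illustration only.

```python
# Hypothetical sketch: compare following rates for gaze vs. arrow cues.
from dataclasses import dataclass

@dataclass
class Trial:
    cue_type: str        # 'gaze' or 'arrow'
    cue_direction: str   # 'left' or 'right'
    first_saccade: str   # direction of the first saccade after cue onset

def follow_rate(trials, cue_type):
    """Proportion of trials whose first saccade matched the cue direction."""
    relevant = [t for t in trials if t.cue_type == cue_type]
    if not relevant:
        return 0.0
    followed = sum(t.first_saccade == t.cue_direction for t in relevant)
    return followed / len(relevant)

# Toy data: observers follow the gaze cue on 1 of 2 trials here,
# and the arrow cue on 2 of 2 trials.
trials = [
    Trial("gaze", "left", "left"),
    Trial("gaze", "right", "left"),
    Trial("arrow", "left", "left"),
    Trial("arrow", "right", "right"),
]
print(follow_rate(trials, "gaze"), follow_rate(trials, "arrow"))
```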
Sample publications:
Pereira, E., Birmingham, E., & Ristic, J. (2019). The eyes don't have it after all? Attention is not automatically biased towards faces and eyes. Psychological Research. Advance online publication, January 2, 2019. doi: 10.1007/s00426-018-1130-4
Birmingham, E., Johnston, K. H. S., & Iarocci, G. (2017). Spontaneous gaze following during naturalistic social interactions in school-aged children and adolescents with Autism Spectrum Disorder. Canadian Journal of Experimental Psychology, 71(3), 243-257. doi: 10.1037/cep0000131
Birmingham, E., Bischof, W. F., & Kingstone, A. (2009). Saliency does not account for fixations to eyes within social scenes. Vision Research, 49, 2992-3000. doi: 10.1016/j.visres.2009.09.014