Matching Face with Voice Identity Using Static and Dynamic Stimuli in Accordance with Ethnicity
DOI: https://doi.org/10.36317/kaj/2023/v1.i57.12112

Keywords: Face-voice matching, Static stimuli, Dynamic stimuli, Cross-modal, Ethnicity

Abstract
The current study examines humans' ability to match faces with voices using static and dynamic stimuli in accordance with ethnicity. Its first aim is to test whether a face can be matched with a voice using static stimuli together with ethnicity; its second aim is to test whether a face can be matched with a voice using dynamic stimuli together with ethnicity, and to determine which of the two yields more accurate matching. A cross-modal matching design was adopted. The results show that the participants successfully selected the true speakers for most of the items shown to them, and that dynamic stimuli with ethnicity yielded more accurate matches than static ones.
License
Copyright (c) 2023 Mustafa Rafid Abdul-Ally, Prof. Balqis Issa Gatta

This work is licensed under a Creative Commons Attribution 4.0 International License.