The 1990s saw an explosion of ideas in merging traditional signal processing techniques with personal communication and entertainment supported by World Wide Web technologies. We are presently experiencing yet another paradigm shift in human interaction and communication, for example through social media and online information sharing. Notably, there has been a significant movement toward employing information and communications technologies to transform how people access and participate in their own health and well-being.
This exciting convergence of signal processing, multimedia, and speech applications, centered on novel processing of signals from, to, and for humans, is the focus of my research. This effort entails a range of challenges in the sensing, recognition, interpretation, and contextual exploitation of complex human behavior, at both the explicit and implicit levels. Importantly, it includes the creation of algorithms and models that are inspired by, and emulate, how humans use behavioral signal information in specific, societally meaningful application settings.
In this talk, I will focus on one aspect of my work, Behavioral Signal Processing: technology and algorithms for quantitatively and objectively understanding typical, atypical, and distressed human behavior in support of mental health care, especially in the domain of Family Studies but also in that of Addiction. I will discuss how we both exploit existing data and pursue new multimodal data acquisition approaches.
Panayiotis G. Georgiou received his B.A. and M.Eng. degrees with Honors from Cambridge University (Pembroke College), U.K., in 1996. He received his M.Sc. and Ph.D. degrees from the University of Southern California in 1998 and 2002, respectively. During the period 1992-96 he held a Commonwealth Scholarship from the Cambridge Commonwealth Trust.
Since 2003 he has been a member of the Speech Analysis and Interpretation Lab, first as a Research Associate and currently as a Research Assistant Professor. His interests span the fields of Human Social and Cognitive Signal Processing. He has worked on and published over 90 papers in the areas of statistical signal processing, alpha-stable distributions, speech and multimodal signal processing and interfaces, speech translation, language modeling, immersive sound processing, sound source localization, and speaker identification. He has been an investigator on several federally funded projects, notably as PI of the NSF SHB Medium project "Quantitative Observational Practice in Family Studies: The Case of Reactivity," and as co-PI of the DARPA Transtac project "SpeechLinks" and the NSF Large project "An Integrated Approach to Creating Enriched Speech Translation Systems." He is currently serving as a guest editor of the Computer Speech and Language journal and is a member of the Speech and Language Processing Technical Committee (SLTC). He has received best paper awards for his pioneering work on analyzing the multimodal behaviors of users in speech-to-speech translation and on the automatic classification of married couples' behavior using audio features.
His current focus is on multimodal environments, behavioral signal processing, and speech-to-speech translation. His work has been supported by NSF, NIH, and DARPA.