Search Results
- Video
The human brain stores a tremendous amount of knowledge about the world, which is the foundation of object recognition, language, thought, and reasoning. What are the neural codes of semantic knowledge representation? Is the knowledge “roses are red” simply the memory trace of perceiving the color of roses, stored in brain circuits within color-sensitive neural systems? What about knowledge that is not directly perceived by the senses, such as “freedom” or “rationality”? I will present a set of studies from my lab that address this issue, including object color (and other visual) knowledge in several populations (congenitally blind humans, color-blind humans, and typically developed macaques), and semantic neural representation in individuals deprived of early language experience. The findings point to the existence of two different types of knowledge coding in different regions of the human brain: one conservative, based on sensory experience, and one based on language-derived machinery that supports fully nonsensory information. The relationship between these two types of knowledge coding will be discussed.
Event date: 09/04/2025
Speaker: Professor Yanchao BI (Peking University)
Hosted by: Faculty of Humanities
- Subjects:
- Language and Languages
- Keywords:
- Neurobiology; Semantics; Brain
- Resource Type:
- Video
- Video
People now regularly interact with voice assistants (VAs): conversational agents that allow users to complete tasks on a machine through spoken language. The widespread adoption and daily use of VAs by millions of people, and their increasing use in financial, healthcare, and educational applications, raise important questions about the linguistic and social factors that affect spoken language interactions with machines.
We are exploring issues of linguistic and social bias that affect speech communication in human-computer interaction, particularly during cross-language transfer, learning, or adaptation of some kind. In this talk, I will present two case studies illustrating some of our most recent work in this area. The first study examines a case of cross-language ASR transfer: we find systematic linguistic and phonetic disparities when machines trained on a source language are transferred to speech recognition of a novel, low-resource target language. The second study examines a case of social bias in word learning by humans using voice-enabled apps: we find that word learning is inhibited when the social cues presented by the voice and the linguistic information are mismatched.
Together with highlights from other ongoing work in my lab, the aim of this talk is to underscore that human-computer linguistic communication is a rich testing ground for investigating issues in speech and language variation. Examining linguistic variation during HCI can enrich and elaborate linguistic theory, and it presents opportunities for linguists to provide insights for improving both the function and fairness of these technologies.
Event date: 25/03/2025
Speaker: Professor Georgia ZELLOU (University of California, Davis)
Hosted by: Faculty of Humanities
- Subjects:
- Communication; Language and Languages
- Keywords:
- Linguistics; English language -- Variation; Speech processing systems; English language -- Spoken English; Human-computer interaction
- Resource Type:
- Video