Until recently, there were two main ways of obtaining information about words and expressions. The first was to analyze large text data sets (corpora) and calculate how frequently words and phrases occur, as well as the typical contexts in which they appear. The second was to ask participants for subjective judgments about words and phrases, such as the familiarity of the stimuli or the age at which they are typically acquired. The development of large language models has given us a third option: instead of asking participants, we can query the models themselves. The results show that the information obtained from these models is at least as good as, and often better than, the information obtained from people, especially when the model is fine-tuned on a few thousand stimuli.

Event date: 30/05/2025
Speaker: Prof. Marc BRYSBAERT
Hosted by: Faculty of Humanities