Talk at Humboldt University of Berlin

I presented our work on Tracing Phonetic Features in Automatic Speech Recognition Models in the joint colloquium of the phonetics and corpus linguistics groups at Humboldt University of Berlin. Thank you for hosting me, Tine Mooshammer, Malte Belz, and Sarah Wesolek!

The talk was based on our Interspeech 2024 paper, where we used ASR to examine coarticulation, and our recent Computer Speech & Language paper, where we investigated how ASR models handle locally perturbed speech signals. 👀

Photo

If you are planning a trip to Berlin one of these days, there is surely no shortage of activities. I can recommend booking a free tour of the Reichstag for a behind-the-scenes perspective on German politics 🏛, taking the bus to Dahlem to enjoy expressionist art at the Brücke Museum 🎨, and strolling through the city with open eyes to discover gems like the Yellow Man mural by Brazilian street artists Os Gêmeos. 💚 💛 💙

ASRU 2025 (December 6-10, 2025) 🌺

Under the theme Towards the New Era of Speech Understanding, this year's IEEE Automatic Speech Recognition and Understanding (ASRU) Workshop brought researchers together in Honolulu, Hawaiʻi. 🏝️

Photo

Suyoun Kim (Amazon, USA) and I had the exciting role of serving as Student and Volunteer Chairs for the workshop. We had the pleasure of working with a group of highly motivated students from the University of Hawaiʻi at Mānoa, who helped us make sure the workshop ran smoothly. On top of that, we organized a mentoring program, bringing together early-career researchers and experienced mentors. Mentees shared that the meet-ups gave them valuable clarity on research directions and career paths, while mentors enjoyed fresh perspectives from the next generation of researchers. Some impressions from these meet-ups are shared below.

Photo

It was fantastic working alongside the tireless General Chairs, Bowon Lee (Inha University, Korea), Kyu Han (Oracle, USA), and Chanwoo Kim (Korea University), whose dedication made the workshop a success! Mahalo! 💛 💛 💛

New study! 🎉 How visuals shape the way we perceive voice assistant gender

Most voice assistants still sound female – even when designed to be neutral? 🤔

In our new study, The Influence of Visual Context on the Perception of Voice Assistant Gender, we explored how people perceive Apple Siri's gender-neutral voice Quinn. We found that listeners tended to rate Quinn as more female-sounding – especially when a female portrait was shown at the same time (see Figure 1a). This confirms that what we see 👀 can strongly influence what we hear 👂, even if it is unrelated to the task at hand.

Photo

Designing truly gender-neutral voice assistants isn't just about the sound itself – our expectations and the visual context play a powerful role too.

We had the pleasure of presenting this work at P&P 2024 in Halle, Germany. Read the full paper in the proceedings (pp. 55–63). #openaccess 🔓

P&P 2025 (October 6-7, 2025)

This year's P&P conference took place at the beautiful Leipzig University Library, Bibliotheca Albertina. It was a great opportunity to discuss our work on Question Intonation in Bilingual Speakers of Bulgarian and Judeo-Spanish with the vibrant community of phoneticians and phonologists from the DACH+ region.

Photo

New study! 🎉 How personality shapes speech adaptation

I am delighted to share that our article Exploring the Relationship Between Mental Boundary Strength and Phonetic Accommodation is now published in Language and Speech. 🥂 🍾 The article is available online. #openaccess 🔓

📖 In this study, we explore whether individuals with thinner mental boundaries are more likely to phonetically adapt to their conversation partners. Our results suggest that speakers may accommodate to different types of phonetic features depending on their personality structure.

👀 We look forward to further research investigating how individual personality differences influence accommodation behaviour – for example, using the German version of the Boundary Questionnaire we provided.