04 Feb 2026
I presented our work on Tracing Phonetic Features in Automatic Speech Recognition Models
in the joint colloquium of the phonetics and corpus linguistics groups at Humboldt University of Berlin.
Thank you for hosting me, Tine Mooshammer, Malte Belz,
and Sarah Wesolek!
The talk was based on our Interspeech 2024 paper, where we used ASR to examine coarticulation, and our recent Computer Speech & Language paper, where we investigated how ASR models handle locally perturbed speech signals.

If you are planning a trip to Berlin one of these days, there is surely no shortage of activities.
I can recommend booking a free tour of the Reichstag for a behind-the-scenes
perspective on German politics, taking the bus to Dahlem to enjoy expressionist art at
the Brücke Museum, and strolling through the city with open eyes to discover gems like the Yellow Man mural by Brazilian street artists Os Gêmeos.
11 Dec 2025
Under the motto Towards the New Era of Speech Understanding, this year's
IEEE Automatic Speech Recognition and Understanding (ASRU) Workshop brought researchers together in Honolulu, Hawaiʻi.

Suyoun Kim (Amazon, USA) and I had the exciting role of Student and Volunteer Chairs for the workshop.
We had the pleasure of working with a group of highly motivated students from the University of Hawaiʻi at Mānoa, who helped us make sure the workshop ran seamlessly.
On top of that, we organized a mentoring program, bringing together early-career researchers and experienced mentors. Mentees shared that the meet-ups gave them valuable clarity on research directions and career paths, while mentors enjoyed fresh perspectives from the next generation of researchers. Some impressions from these meet-ups are shared below.

It was fantastic working alongside the tireless General Chairs, Bowon Lee (Inha University, Korea),
Kyu Han (Oracle, USA), and Chanwoo Kim (Korea University), whose dedication made the workshop a success. Mahalo!
20 Oct 2025
Most voice assistants still sound female, even when they are designed to be neutral.
In our new study, The Influence of Visual Context on the Perception of Voice Assistant Gender, we explored how people perceive Apple Siri's gender-neutral voice Quinn.
We found that listeners tended to rate Quinn as more female-sounding, especially when a female portrait was shown at the same time (see Figure 1a). This confirms that what we see can strongly influence what we hear, even if it is unrelated to the task at hand.

Designing truly gender-neutral voice assistants isn't just about the sound itself: our expectations and the visual context play a powerful role too.
We had the pleasure of presenting this work at P&P 2024 in Halle, Germany. Read the full paper in the proceedings (pp. 55-63). #openaccess
08 Oct 2025
This year's P&P conference took place at the beautiful Leipzig University Library, Bibliotheca Albertina.
A great opportunity to discuss our work on Question Intonation in Bilingual Speakers of Bulgarian and Judeo-Spanish
with the vibrant community of phoneticians and phonologists from the DACH+ region.

28 Jun 2025
I am delighted to share that our article Exploring the Relationship Between Mental Boundary Strength
and Phonetic Accommodation is now published in Language and Speech.
The article is available online. #openaccess
In this study, we explore whether individuals with thinner mental boundaries are more likely to phonetically adapt to their conversation partners. Our results suggest that speakers may accommodate to different types of phonetic features depending on their personality structure.
We look forward to further research investigating how individual personality differences influence accommodation behaviour, for example using the German Boundary Questionnaire version we provided.