Hi!
I'm a sound and hearing researcher interested in deciphering the neurocomputational bases of audition. My research combines advanced mathematical modeling of sound signals with statistical learning, behavioral testing, and neuro-inspired techniques in order to understand how these processes guide human communication and behaviour.

I'm currently a postdoc shared between the Perception, Representation, Image, Sound, Music lab (PRISM) and the Laboratoire d'Informatique & Systèmes (LIS) in Marseille, through the Institute of Language, Communication & the Brain (ILCB) of Aix-Marseille University. I'm advised by Richard Kronland-Martinet (PRISM) and Valentin Emiya & Stéphane Ayache (LIS).

I'm grateful to have been advised by Daniel Pressnitzer & Christian Lorenzi at the École Normale Supérieure de Paris, Stephen McAdams & Philippe Depalle at McGill University in Montreal, and by Sølvi Ystad & Mitsuko Aramaki at the CNRS Mechanics and Acoustics Lab in Marseille.

A few hints about recent work
___________
Haven't slept enough over the last few days?
Do you think we can detect this just by asking you to read a text?

The publication is in preparation, but if you want to try our machine learning probing method,
read this preprint and try our Python scripts.

___________

What makes the sound of music?
We revisited 17 previous experiments with a neuro-inspired machine learning approach.

Recently published in Nature Human Behaviour