I work with languages: natural, logical, formal, musical.

Until 2020, I was a researcher and lecturer in linguistics. My last work in the field was a paper on the biological origins of language [⇣pdf], in which I drew on Chomsky, Peirce and Wittgenstein to unify theories of animal communication with theories of natural language inference. I also laid down the beginnings of a research programme to explore the consequences for cognitive science, but I left academia soon after.

Since 2020, I've been working in AI research as an NLP engineer. In that capacity, I've been chiefly concerned with how we should model reasoning in intelligent systems, specifically so that we can leverage recent advances in LLMs while overcoming their inherent limitations.

If there's one idea I think the industry should grok better, it's that human reasoning doesn't happen in language; language is only an impression of non-linguistic cognitive processes, with about as much fidelity as a Monet painting has to a real pond of water lilies.

Because of this, AI systems that rely on natural language data to model intelligence will always fall short of full generalisation, and the hope that reasoning will simply 'emerge' from such models as a consequence of scale is as baseless as hoping that gold will emerge from the alchemist's cauldron. Arguments over system architecture (statistical versus symbolic) are beside the point; the problem is the ingredients.

For research correspondence, please email callumjhackett@gmail.com.