Welcome to the Center for Language, Intelligence, & Computation

It is an exciting time for the science of human language. We now have large language models that produce and comprehend language at nearly human levels, demonstrating surprising abilities to reason intelligently. We have gigantic datasets of language use, careful documentation of thousands of languages, new experimental paradigms for understanding language processing, and new computational paradigms for modeling that data. The time is ripe to reconsider classic and foundational questions about language, such as: Why is human language the way it is? How does it fit into the bigger picture of human intelligence? What is the relationship between language and thought? What function does language serve, how did it evolve, and where is it going, in a world where machines that can use language like us are becoming ubiquitous?

The Center for Language, Intelligence, and Computation brings together researchers from across disciplinary boundaries, including language science, cognitive science, computer science, philosophy, and other fields, who bring modern computational models and ideas to bear on these core scientific questions about language.

Our Philosophy

We champion a pluralistic approach to the science of language, without allegiance to any particular set of tools or theoretical frameworks, but with a strong commitment to rigorous, quantitative, formal understanding enabled by computational models. Indeed, there are many computational approaches to language in which UCI has particular strength. Symbolic formalisms for understanding language offer insight into grammatical structure. Bayesian approaches enable rigorous estimation of model parameters, principled integration of prior knowledge, and effective modeling of humans’ remarkably rapid language acquisition, which remains far more efficient than what is possible using large language models. Neural-network architectures, such as Transformers, recurrent neural networks (RNNs), and their modern relatives, achieve human-competitive performance on prediction tasks, and their internal representations can be probed with modern methods to discover how they represent the compositional structure of language and thought. Information-theoretic frameworks can predict fine-grained behavioral measures like reading times and eye-tracking patterns. Corpus-driven methods reveal typological universals and cross-linguistic variability at scale. Only by bringing together these diverse paradigms within a single, collaborative environment can we synthesize their strengths, reconcile their differences, and move toward a comprehensive science of language.