sqIRL

Interpretable Representation Learning

The Interpretable Representation Learning lab (sqIRL) at the University of Antwerp pursues fundamental research at the intersection of machine learning and interpretability/explainability. Our research focuses on the inner workings of AI systems and the learning processes that produce them. We aim to develop AI systems that are interpretable/explainable and more efficient in their use of data and computational resources.

News

Saja's paper proposing a taxonomy of interpretation and explanation methods for Capsule Networks was accepted in the journal Neurocomputing.

Nov 2025

One paper accepted in Neurocomputing on Interpretable HDC Classifiers.

Oct 2025

sqIRL welcomes Renata to the lab.

Sep 2025

Two papers accepted at ECML-PKDD'25 on Smooth-InfoMax and Interpretability of SNNs.

Jun 2025

Two papers accepted at ICLR'25 on Twin Network Augmentation and Interpretability via Bilinear MLPs.

Jan 2025