My main research area is Natural Language Inference (NLI). In particular, I am interested in exploring how current advances in deep learning can be combined with more traditional approaches. To this end, I work on Hy-NLI, a hybrid NLI system that combines the precision of traditional symbolic approaches with the power and robustness of state-of-the-art deep models such as BERT and XLNet. On the symbolic side, I build GKR4NLI, a symbolic inference engine that computes inference based on Natural Logic and GKR. Within the field of NLI, I also focus on the theoretical notion of the task and on how it can be refined to better capture the different nuances that human reasoning encompasses. To this end, I closely investigate the SICK corpus, correct portions of it, and conduct suitable annotation experiments. These computational and theoretical aspects are the main focus of my dissertation. Check out the relevant publications for more details.
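As a toy illustration of the NLI task itself: given a premise and a hypothesis, a system predicts one of three labels. The sentence pairs below are invented examples in the style of SICK, not taken from the corpus, and the snippet only illustrates the label scheme, not how Hy-NLI computes it.

```python
# NLI assigns one of three labels to a premise-hypothesis pair.
# These example pairs are illustrative inventions, not SICK data.

LABELS = ("entailment", "contradiction", "neutral")

examples = [
    ("A man is playing a guitar.", "A person is playing an instrument.", "entailment"),
    ("A man is playing a guitar.", "Nobody is playing music.", "contradiction"),
    ("A man is playing a guitar.", "The man is a professional musician.", "neutral"),
]

for premise, hypothesis, gold in examples:
    assert gold in LABELS
    print(f"P: {premise}\nH: {hypothesis}\n-> {gold}\n")
```

The third pair is the kind of nuanced case that motivates refining the task: the hypothesis is compatible with the premise but not guaranteed by it.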
When semantic tasks such as NLI are pursued in a hybrid direction, the symbolic components on which they rely must be efficient and high-performing. When these components depend on semantic parsing, it also becomes essential to have an expressive semantic representation on which inference can be computed. To this end, I work on a novel semantic representation, GKR, which represents a sentence as a layered graph, with different kinds of information encoded in different layers (subgraphs) for increased expressivity and modularity. Such a representation is suitable not only for symbolic approaches to semantic tasks such as NLI (see GKR4NLI), but also for hybrid approaches (see Kalouli et al., 2018 and Hy-NLI). Check out the relevant publications for more details.
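A minimal sketch of the layered-graph idea: each layer is a separate set of edges over shared node identifiers, so phenomena like negation can live in their own subgraph. The layer names, edge labels, and node ids below are simplified assumptions for illustration, not the actual GKR format.

```python
# Sketch of a layered sentence graph in the spirit of GKR: one graph,
# several layers (subgraphs), each holding a different kind of information.
# Layer/edge names here are illustrative assumptions, not real GKR output.

from dataclasses import dataclass, field


@dataclass
class LayeredGraph:
    nodes: set = field(default_factory=set)
    layers: dict = field(default_factory=dict)  # layer name -> list of edges

    def add_edge(self, layer, src, label, dst):
        self.nodes.update({src, dst})
        self.layers.setdefault(layer, []).append((src, label, dst))


# "The dog did not bark."
g = LayeredGraph()
g.add_edge("concept", "bark_1", "sem_subj", "dog_1")        # predicate-argument core
g.add_edge("context", "top", "not", "ctx(bark_1)")          # negation kept in its own layer
g.add_edge("properties", "dog_1", "specifier", "definite")  # grammatical properties

for layer, edges in g.layers.items():
    print(layer, edges)
```

Keeping negation out of the core predicate-argument layer is what makes such a representation modular: a consumer that only needs "who did what to whom" can ignore the other layers entirely.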
I am currently a member of the Research Unit 2111 Questions at the Interfaces. The Research Unit is dedicated to investigating question formation, with a particular emphasis on non-canonical questions (e.g., rhetorical, echo, self-addressed, suggestive). The Unit combines expertise from theoretical, computational, and experimental linguistics, as well as visual analytics. More precisely, I am a member of the P8 subproject, which focuses on the automatic classification of questions into their types, i.e., non-canonical (non-information-seeking) vs. canonical (information-seeking) questions. To this end, we develop an active learning system that enables faster and more reliable annotation of the large amounts of data on which further training can be performed. The system currently targets English, but our research has also investigated question classification in multilingual settings. Check out the relevant publications for more details.
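The core loop behind such a setup can be sketched as pool-based active learning with uncertainty sampling: the model scores the unlabeled pool, and the examples it is least sure about are sent to human annotators first. The scoring function below is a hypothetical stand-in (a fixed probability table), not the actual classifier of the P8 subproject.

```python
# Sketch of pool-based active learning with uncertainty sampling.
# predict_proba is a hypothetical stand-in for a trained question
# classifier; in a real system it would be a model's confidence that
# a question is information-seeking (canonical).

def predict_proba(question):
    fake_scores = {
        "What time is it?": 0.95,               # clearly canonical
        "Who could possibly know that?": 0.20,  # likely rhetorical
        "You did WHAT?": 0.48,                  # model is unsure: echo question?
    }
    return fake_scores[question]


def most_uncertain(pool, k=1):
    # Uncertainty = closeness to the 0.5 decision boundary; the least
    # confident examples are routed to annotators first.
    return sorted(pool, key=lambda q: abs(predict_proba(q) - 0.5))[:k]


pool = ["What time is it?", "Who could possibly know that?", "You did WHAT?"]
print(most_uncertain(pool))  # -> ['You did WHAT?']
```

Annotating the borderline cases first is what makes the annotation loop faster: each new label resolves an example the model could not confidently classify on its own.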