Homepage of Dr Pasquale Minervini
Researcher at the University of Edinburgh, School of Informatics, and ELLIS


PhD Projects

As mentioned here, in September 2022 I joined the Institute for Language, Cognition and Communication (ILCC) at the School of Informatics, University of Edinburgh (one of the world's best schools for NLP and related areas) as a faculty member in NLP! If you are interested in working with me, I have funding for multiple PhD students: make sure to apply either to the UKRI CDT in Natural Language Processing or to the ILCC 3-year PhD program!

Some more details on the deadlines: both the ILCC PhD program and the NLP CDT program have two application rounds, with the first round closing on 25th November 2022 and the second on 27th January 2023. For both programs, I strongly recommend that non-UK applicants submit their applications in the first round, to maximise their chances of funding.

If you are interested in working with me, you can apply via the ILCC PhD program's and the NLP CDT program's application portals. You will be asked to submit a research proposal: this is mostly used for assessing candidate PhD students and matching them with potential faculty supervisors, and you can decide to work on different problems during your PhD. If you would like some feedback on your research proposal, get in touch!

Below is a (non-exhaustive but fairly up-to-date) list of PhD topics we may decide to work on – this list is also available on the Possible PhD topics in ILCC page. An older list of possible research topics is also available at this link, and feel free to propose new project topics that interest you! I'm always happy to explore new directions!

Open-Domain Complex Question Answering at Scale

Open-Domain Question Answering (ODQA) is the task of answering general-domain questions without the supporting evidence being given as input to the system. A core limitation of modern ODQA models (and, more generally, of models for knowledge-intensive tasks) is that they remain limited to answering simple, factoid questions, where the answer is explicit in a single piece of evidence. In contrast, complex questions involve aggregating information from multiple documents, requiring some form of logical reasoning and sequential, multi-hop processing to generate the answer. Projects in this area involve proposing new ODQA models for answering complex questions, for example by taking inspiration from models for answering complex queries over Knowledge Graphs (Arakelyan et al., 2021; Minervini et al., 2022a) and Neural Theorem Provers (Minervini et al., 2020a; Minervini et al., 2020b), and proposing methods by which neural ODQA models can learn to search massive text corpora, such as the entire Web.
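
To make the multi-hop idea concrete, here is a minimal, purely illustrative sketch of the control flow: the two-document corpus, the word-overlap retriever, and the cue-based reader are toy stand-ins (a real system would use a learned dense retriever and a neural reader over millions of passages), but the loop shows how the answer from one hop becomes the query for the next.

```python
import re

# Toy two-document corpus; a real system would search millions of
# passages (or the entire Web) with a learned retriever.
CORPUS = [
    "The Eiffel Tower was designed by Gustave Eiffel.",
    "Gustave Eiffel was born in Dijon.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str) -> str:
    """Return the passage with the largest word overlap with the query."""
    return max(CORPUS, key=lambda p: len(tokens(query) & tokens(p)))

def read(passage: str, cue: str) -> str:
    """Extract the span after a cue phrase (a stand-in for a neural reader)."""
    return passage.split(cue, 1)[1].strip(" .")

# Hop 1: no single document answers the question, so we first resolve
# the bridge entity ("the designer of the Eiffel Tower").
bridge = read(retrieve("Who designed the Eiffel Tower?"), "designed by")

# Hop 2: the bridge entity forms the next query.
answer = read(retrieve(f"Where was {bridge} born?"), "born in")

print(answer)  # -> Dijon
```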

Neuro-Symbolic and Hybrid Discrete-Continuous Natural Language Processing Models

Incorporating discrete components, such as discrete decision steps and symbolic reasoning algorithms, in neural models can significantly improve their interpretability, data efficiency, and predictive properties – for example, see (Niepert et al., 2021; Minervini et al., 2022b; Minervini et al., 2020a; Minervini et al., 2020b). However, approaches in this space rely either on ad-hoc continuous relaxations (e.g., Minervini et al., 2020a; Minervini et al., 2020b) or on gradient estimation techniques that require some assumptions on the distributions of the discrete variables (Niepert et al., 2021; Minervini et al., 2022b). Projects in this area involve devising neuro-symbolic approaches for solving NLP tasks that require some degree of reasoning and compositionality, and identifying gradient estimation techniques (for back-propagating through discrete decision steps) that are data-efficient, hyperparameter-free, and accurate, while requiring fewer assumptions on the distribution of the discrete variables.
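
As a concrete illustration of what "back-propagating through discrete decision steps" involves, here is a minimal PyTorch sketch of the straight-through estimator – one of the simplest gradient-estimation tricks in this family, and not the specific method of any paper cited above. The forward pass makes a hard argmax decision, while the backward pass routes gradients through the underlying continuous scores as if the operation were the identity.

```python
import torch

def straight_through_argmax(scores: torch.Tensor) -> torch.Tensor:
    """One-hot argmax in the forward pass; identity gradient in the backward pass."""
    hard = torch.nn.functional.one_hot(
        scores.argmax(dim=-1), num_classes=scores.shape[-1]
    ).to(scores.dtype)
    # (hard - scores).detach() + scores equals `hard` in value, but its
    # gradient flows through `scores` as if the op were the identity.
    return (hard - scores).detach() + scores

scores = torch.tensor([[0.1, 0.7, 0.2]], requires_grad=True)
decision = straight_through_argmax(scores)            # [[0., 1., 0.]]
loss = (decision * torch.tensor([[1.0, 2.0, 3.0]])).sum()
loss.backward()
print(scores.grad)  # [[1., 2., 3.]]: gradients pass through the discrete step
```

The trick is cheap and hyperparameter-free, but its gradient is biased – exactly the kind of trade-off (bias, variance, assumptions on the discrete distribution) that projects in this area would investigate.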

Learning from Graph-Structured Data

Graph-structured data is everywhere – consider, e.g., Knowledge Graphs, social networks, protein and drug interaction networks, and molecular profiles. In this area, we aim to improve both models for learning from graph-structured data and their evaluation protocols. Projects involve incorporating invariances and constraints in graph machine learning models (e.g., see Minervini et al., 2017); proposing methods for transferring knowledge between graph representations; automatically identifying suitable inductive biases for learning from graphs in a given domain (such as Knowledge Graphs – for example, see our NeurIPS 2022 paper on incorporating the inductive biases used by factorisation-based models into GNNs); and proposing techniques for explaining the outputs of black-box graph machine learning methods (such as graph embeddings).
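
For readers unfamiliar with factorisation-based models, here is a minimal PyTorch sketch of one representative scoring function, DistMult, which scores a triple (s, r, o) as the tri-linear product of the subject, relation, and object embeddings; the entity counts and dimensions are illustrative, and this is a generic example rather than the specific model studied in the paper mentioned above.

```python
import torch

num_entities, num_relations, dim = 100, 10, 32
entity_emb = torch.nn.Embedding(num_entities, dim)
relation_emb = torch.nn.Embedding(num_relations, dim)

def distmult_score(s: torch.Tensor, r: torch.Tensor, o: torch.Tensor) -> torch.Tensor:
    """Tri-linear dot product over subject, relation, and object embeddings."""
    return (entity_emb(s) * relation_emb(r) * entity_emb(o)).sum(dim=-1)

# Score a batch of (subject, relation, object) triples; a higher score
# means the model considers the triple more plausible.
s = torch.tensor([0, 1]); r = torch.tensor([2, 2]); o = torch.tensor([3, 4])
print(distmult_score(s, r, o))
```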
