

Homepage of Dr Pasquale Minervini
Researcher/Faculty at the University of Edinburgh, School of Informatics
Co-Founder and CTO at Miniml.AI
ELLIS Scholar, Edinburgh Unit


Real-World Impact of Our Research

Academic research sometimes risks being disconnected from real-world applications. Research from our group has demonstrated significant real-world impact across multiple domains: improving the efficiency of LLM inference and training, establishing new evaluation protocols, and contributing to several industry products.

Industry Adoption

KV Cache Compression

Our work on KV cache compression, presented at EMNLP 2024 as an oral presentation (top 9% of papers), has been directly adopted by NVIDIA's KV Press library, which is now widely used across the industry to reduce the inference footprint of LLMs.
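
To illustrate the general idea, here is a minimal sketch of eviction-based KV cache compression: each cached position is scored, the top fraction is kept, and the rest is dropped. The scoring rule and function names below are illustrative choices of mine, not the paper's exact method or the KV Press API.

```python
import torch

def compress_kv_cache(keys, values, keep_ratio=0.5):
    """Evict low-importance entries from a KV cache.

    keys, values: [batch, heads, seq_len, head_dim] tensors.
    The scoring rule below (negative L2 norm of each key) is one
    heuristic from the compression literature, used here purely
    for illustration.
    """
    seq_len = keys.size(2)
    k = max(1, int(seq_len * keep_ratio))

    # Score each cached position: lower key norm -> more important.
    scores = -keys.norm(dim=-1)  # [batch, heads, seq_len]
    # Keep the top-k positions per head, restoring temporal order.
    idx = scores.topk(k, dim=-1).indices.sort(dim=-1).values
    idx = idx.unsqueeze(-1).expand(-1, -1, -1, keys.size(-1))
    return keys.gather(2, idx), values.gather(2, idx)

# Halve a toy cache of 16 positions.
k, v = torch.randn(1, 8, 16, 64), torch.randn(1, 8, 16, 64)
ck, cv = compress_kv_cache(k, v)
print(ck.shape)  # torch.Size([1, 8, 8, 64])
```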

MMLU-Redux

Our MMLU-Redux benchmark, presented at NAACL 2025, has been widely adopted by frontier labs for evaluating LLMs. After identifying a concerning 57% error rate in the Virology subset of the original MMLU benchmark, we created a manually curated, expert-verified dataset of 5,700 questions. Industry adopters include DeepSeek, Alibaba Qwen, Tencent Hunyuan, Moonshot KIMI, NVIDIA NeMo Skills, and LG AI Research EXAONE.
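
As a usage sketch, the benchmark can be pulled from the Hugging Face Hub. Note that the repository id, subset name, and column names below are assumptions from memory and should be checked against the paper before use.

```python
from datasets import load_dataset

# Assumed Hub location and schema; verify against the paper / Hub page.
dataset = load_dataset("edinburgh-dawg/mmlu-redux", "virology", split="test")

# Each question carries an expert annotation (e.g. "ok" vs. an error
# category), so flawed items can be filtered out before evaluation.
clean = dataset.filter(lambda row: row["error_type"] == "ok")
print(f"{len(clean)}/{len(dataset)} questions pass the manual review")
```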

Frontier Model Pre-Training

One of the pre-training techniques from our paper “Analysing The Impact of Sequence Composition on Language Model Pre-Training” (ACL 2024), intra-document causal masking, was adopted by Meta in training Llama 3, their flagship line of LLMs.
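
The mechanism itself fits in a few lines. In a generic sketch (not Meta's or the paper's implementation), each token in a packed sequence carries a document id, and the attention mask is the intersection of the usual causal mask with a same-document mask, so tokens never attend across document boundaries.

```python
import torch

def intra_document_causal_mask(doc_ids):
    """Boolean attention mask for a packed sequence of documents.

    doc_ids: [seq_len] tensor mapping each token to its source document,
    e.g. [0, 0, 0, 1, 1, 2] for three packed documents. Position i may
    attend to position j iff j <= i and both tokens share a document.
    """
    seq_len = doc_ids.size(0)
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    same_doc = doc_ids.unsqueeze(0) == doc_ids.unsqueeze(1)
    return causal & same_doc

print(intra_document_causal_mask(torch.tensor([0, 0, 0, 1, 1, 2])).int())
# Rows 3-4 (the second document) attend only within columns 3-4, etc.
```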

Complex Query Answering

Our work on answering complex queries over large and incomplete Knowledge Graphs (“Complex Query Answering with Neural Link Predictors”, which received an Outstanding Paper Award at ICLR 2021) is also having an impact beyond academia. Based on personal communications, our complex query answering techniques are being adopted by several tech companies to develop their AI products (I am happy to discuss this verbally). We are continuing this line of research; see, for example, our NeurIPS 2023 and ICML 2025 papers on the topic.
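
The core idea of the ICLR 2021 paper is that a conjunctive query can be answered by scoring its atoms with a pretrained neural link predictor, combining the scores with a t-norm, and searching over the existential variable. Below is a toy sketch of that recipe for a 2-hop query; the score_fn interface and all names are hypothetical, not the paper's code.

```python
import torch

def answer_2hop_query(anchor, rel1, rel2, score_fn, num_entities, beam_size=8):
    """Score answers to ?Y : exists X . rel1(anchor, X) AND rel2(X, Y).

    score_fn(h, r, t) -> plausibility scores in [0, 1] from a pretrained
    link predictor (hypothetical interface). Atoms are combined with the
    product t-norm; a beam over X approximates the existential quantifier.
    """
    beam_size = min(beam_size, num_entities)
    ents = torch.arange(num_entities)

    # Hop 1: score rel1(anchor, X) for every X, keep the best candidates.
    s1 = score_fn(torch.full_like(ents, anchor), torch.full_like(ents, rel1), ents)
    top1, x_cands = s1.topk(beam_size)

    # Hop 2: combine with rel2(X, Y) via the product t-norm, max over X.
    best = torch.zeros(num_entities)
    for s_x, x in zip(top1, x_cands):
        s2 = score_fn(torch.full_like(ents, int(x)), torch.full_like(ents, rel2), ents)
        best = torch.maximum(best, s_x * s2)
    return best  # best[y] ~ confidence that entity y answers the query

# Toy usage with a random tensor standing in for a trained link predictor.
torch.manual_seed(0)
fake = torch.rand(5, 3, 5)  # indexed as [head, relation, tail]
print(answer_2hop_query(0, 1, 2, lambda h, r, t: fake[h, r, t], num_entities=5))
```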

Media Coverage

Our research has also received media coverage. For example, our work “Lost in Time: Clock and Calendar Understanding Challenges in Multimodal LLMs”, led by the amazing Rohit Saxena, was covered extensively, including by The New York Times, VICE, Gizmodo, The Engineer, and Yahoo! News. Our project “Inverse Scaling in Test-Time Compute”, led by Aryo Gema in collaboration with Anthropic, also attracted media attention, with coverage in VentureBeat, YourStory, AI Times, TechZine, The Interview Times, AI News, and others.
