The academic research landscape is evolving rapidly, with generative AI tools built on large language models emerging as game-changers for scholars. In research environments, AI tools are typically used either as general-purpose tools (e.g. Microsoft Copilot) or as task-specific tools (e.g. to deduplicate papers in literature reviews).
General-purpose tools such as large language model (LLM) chatbots are best applied as research assistants (e.g. through prompting for ideation or data analysis), or as an engine for analysing one's own data (such as running sentiment analysis on a dataset supplied to the model). The latter is accomplished computationally through LLM application programming interfaces (APIs), or by creating Custom GPTs (generative pretrained transformers) with domain-specific datasets. Both of these methods require a paid subscription to an LLM chatbot service.
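As a minimal sketch of the API route, the snippet below sends each record in a small dataset to an LLM for sentiment classification. It assumes the OpenAI Python SDK and an example model name ("gpt-4o-mini"); adapt the client, model and prompt to whichever LLM service you subscribe to.

```python
# Minimal, illustrative sketch of LLM-based sentiment analysis via an API.
# Assumes the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment
# variable; the model name is an example and may differ for your subscription.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

responses = [
    "The new policy greatly improved our lab's turnaround time.",
    "Funding cuts have made fieldwork almost impossible this year.",
]

for text in responses:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; substitute your own
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the text as positive, negative or neutral."},
            {"role": "user", "content": text},
        ],
    )
    print(text, "->", reply.choices[0].message.content.strip())
```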
In this blog entry we focus our attention on task-specific AI tools within the context of research tasks in search and discovery, topic comparison, text summarisation and writing. We do so by looking at the three main application areas of AI tools in research environments, namely:
- Reviewing prior studies
- Identifying gaps in knowledge
- Generating new research hypotheses for testing
Let’s briefly discuss each of these separately, with reference to two tools in each application area that are available to SU researchers:
1. Reviewing prior studies
AI tools can help automate systematic reviews by scanning the abstracts and full texts of documents to extract key terms, and then using clustering algorithms to group similar studies and identify trends. The benefits of using AI to review prior studies include the speed of processing thousands of papers in hours, the discovery of hidden patterns across studies, and the capacity to handle growing volumes of research, although some niche domains still lack sufficient training data.
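To illustrate the underlying technique, here is a small sketch of how abstracts can be grouped with a clustering algorithm. It uses scikit-learn's TF-IDF vectoriser and k-means purely as an example; production systematic-review tools use far more sophisticated pipelines.

```python
# Illustrative sketch only: cluster paper abstracts by TF-IDF similarity.
# Assumes scikit-learn is installed; real review tools use richer features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "Machine learning methods for screening randomized controlled trials...",
    "A qualitative study of teacher wellbeing in rural schools...",
    "Deep learning for automated data extraction in systematic reviews...",
    "Interviews exploring burnout among primary school educators...",
]

# Convert abstracts into TF-IDF term vectors, ignoring common English stop words.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)

# Group the abstracts into two clusters of similar studies.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for abstract, label in zip(abstracts, labels):
    print(label, abstract[:60])
```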
For example, the EPPI-Reviewer systematic review software package uses machine learning to screen and categorize research papers for systematic reviews. Developed by the EPPI-Centre at University College London, EPPI-Reviewer is a web-based tool originally developed for Cochrane authors to support the development of systematic reviews from study screening through to data collection, analysis and synthesis. It manages references, stores PDF files, and facilitates qualitative and quantitative analyses such as meta-analysis and thematic synthesis. It also contains text mining technology that promises to make systematic reviewing more efficient. It works with modern browsers and web-enabled devices, and one can sign up for a free one-month trial before considering the paid version (https://eppi.ioe.ac.uk/eppireviewer-web).
AI tools can also enhance one’s understanding of the semantics of scientific literature, recommend relevant papers, and highlight key findings by mapping relationships between studies using word embeddings and graph-based algorithms. This is typically accomplished on the back of large language models that summarise findings across papers. Semantic Scholar (https://www.semanticscholar.org) is a free AI-powered search, discovery and evidence synthesis platform that combines machine learning, natural language processing and machine vision to add a layer of semantic analysis to traditional citation analysis and to extract relevant figures, tables and entities from papers. It allows you to search across approximately 200 million papers from all fields of science, for free.
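Semantic Scholar also exposes a public Graph API that can be scripted against. Below is a minimal sketch of a keyword search; the endpoint and field names reflect the public documentation at the time of writing and should be verified against the current API reference.

```python
# Minimal sketch: query the Semantic Scholar Graph API for papers on a topic.
# Endpoint and field names are taken from the public docs and may change;
# an API key is optional for low request volumes.
import requests

resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={
        "query": "systematic review automation",
        "fields": "title,year,abstract,citationCount",
        "limit": 5,
    },
    timeout=30,
)
resp.raise_for_status()

for paper in resp.json().get("data", []):
    print(paper.get("year"), paper.get("citationCount"), paper.get("title"))
```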
2. Identifying gaps in knowledge
AI can also assist in identifying gaps in human knowledge across different domains of study, from niche fields to broad interdisciplinary research. In doing so it allows researchers to process large amounts of data, detect patterns, and highlight what’s missing.
Various GPTs accessible through a paid LLM chatbot service – such as ChatGPT – are useful in accomplishing this. Notably, the Wolfram Alpha GPT allows a researcher to uncover connections between disparate fields by analysing structured data to highlight unexplored correlations. VOSviewer is a software tool for constructing and visualizing bibliometric networks, and can visualize citation networks to show declining interest in older theories versus emerging clusters. Although not an AI tool per se, VOSviewer offers text mining functionality similar to that deployed in AI models, which can be used to construct and visualize co-occurrence networks of important terms extracted from a body of scientific literature. The software can be used freely for any purpose (https://www.vosviewer.com/).
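To make the idea of a term co-occurrence network concrete, here is a small illustrative sketch using networkx. It is not how VOSviewer works internally; it simply shows the general principle of linking terms that appear together in the same abstract, with edge weights counting co-occurrences.

```python
# Illustrative sketch of a term co-occurrence network (not VOSviewer's own method).
# Terms appearing in the same abstract are linked; edge weights count co-occurrences.
from itertools import combinations
import networkx as nx

abstracts_terms = [
    {"machine learning", "screening", "systematic review"},
    {"systematic review", "meta-analysis", "screening"},
    {"machine learning", "text mining", "systematic review"},
]

G = nx.Graph()
for terms in abstracts_terms:
    for a, b in combinations(sorted(terms), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Print the strongest links, i.e. term pairs that co-occur most often.
for a, b, data in sorted(G.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(f"{a} -- {b}: {data['weight']}")
```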
Although SU does not have a subscription to it, Scopus AI combines generative artificial intelligence with Scopus’ trusted content and data to help researchers accelerate their research. It also assists in mapping new research areas and finding opportunities for interdisciplinary cooperation. Built in close collaboration with the academic community, it provides a unique window into humanity’s accumulated knowledge through Scopus, the world’s largest multidisciplinary and trusted abstract and citation database.
The key benefits of using these tools are the ability to analyse millions of papers and patents in hours, and to link gaps in one field to solutions in another.
3. Generating new research hypotheses for testing
Hypothesis generation involves analysing existing data, finding patterns, and suggesting new areas to explore. AI tools work by finding statistical anomalies, under-explored correlations or conflicting results in data and literature. They also merge ideas from disparate fields using embeddings or graph networks and, in the process, can generate large numbers of candidate hypotheses in minutes.
A practical AI tool to assist in such literature-driven hypothesis generation is Elicit (https://elicit.com/), which uses language models to help researchers quickly find relevant papers and summarize critical findings. Instead of sifting through hundreds of articles manually, researchers can rely on Elicit to scan abstracts, identify noteworthy points, and even suggest potential methods for study. Another valuable platform is Scite.ai (https://scite.ai/) which helps users see how an article has been cited – whether supportively, neutrally, or even in contradiction.
Cross-disciplinary tools built on LLMs such as GPT-4 or Claude can also be prompted to brainstorm hypotheses, using prompt engineering to merge concepts from unrelated fields.
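A minimal sketch of such a prompt is shown below, again assuming the OpenAI Python SDK and an example model name; any LLM chatbot API could be substituted, and the generated hypotheses are only starting points that still require expert evaluation.

```python
# Illustrative prompt-engineering sketch for cross-disciplinary hypothesis brainstorming.
# Assumes the OpenAI Python SDK and an example model name; adapt to your provider.
from openai import OpenAI

client = OpenAI()

field_a = "soil microbiology"
field_b = "network epidemiology"

prompt = (
    f"Combine concepts from {field_a} and {field_b}. "
    "Propose three testable research hypotheses, each with a one-sentence rationale "
    "and a suggestion for the data needed to test it."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```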
Summary
While AI isn’t a replacement for human expertise, it’s a powerful ally. By integrating tools like Elicit or Scite into workflows, researchers can tackle complex projects with greater speed and confidence. As these technologies advance, they’ll continue to democratize access to knowledge and push the boundaries of academic inquiry.
AI tools such as Grammarly can also assist in the writing process, but that applies to academic work beyond research environments alone and is not discussed separately here.
Author: Wouter Klapwijk