An interview with the co-founder of the world’s first Artificial Intelligence science assistant

Editor-in-chief Olivia Gavoyannis talks to the developers about the software, and its implications for the future of research and development.

Photo: Wikimedia Commons

Have you ever spent hours sifting through journal papers? Ever got frustrated at your inability to find relevant research? Ever wished that there was an easier way to filter the seemingly endless stream of information on the web?

The team behind the software certainly did, which is why they have created an AI-powered science assistant to help anyone who wants to find papers related to an original research question.

The software can be used to build a precise reading list of research documents, and the company claims that it can solve your research problems 78 per cent faster than carrying out the same tasks manually, without compromising quality.

The concept was first established three years ago at NASA Ames Research Centre. The team was taking part in a summer programme run by Singularity University (SU) when they were set the task of creating a concept that would positively affect the lives of a billion people. This exercise got the team thinking about the current state of scientific research – more specifically, about the restrictions created by paywalls, and the inability of human intelligence alone to process the three thousand or so research papers that are published around the world every single day.

Maria Ritola, the company’s co-founder, talked to The Saint about the software and the targets that the team are currently working towards. “Our motivation is really to build tools that help researchers to leverage the existing scientific knowledge in the research process much more efficiently,” she explained.

At the moment the company does that through two specific tools incorporated within the same framework. The first is the exploration tool, which allows anyone to input a research question and then explore and export the related research and science around it.

The second tool – the focus tool – semi-automates the literature review process. This means that if a researcher or student starts with hundreds or thousands of papers and needs to narrow them down to the few that are really relevant, the software can condense the available material to the set of key documents for a specific topic.
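The condensing step can be sketched with a toy relevance score (an invented illustration, not the tool’s actual method – the `jaccard` overlap measure, the `focus` helper, and the sample papers are all hypothetical):

```python
# Toy sketch of semi-automated literature review: score every paper
# against the research question and keep only the top few.
# (Invented illustration - word overlap stands in for the real model.)

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two texts (0.0 to 1.0)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def focus(question: str, papers: list, keep: int = 2) -> list:
    """Condense a long paper list down to the `keep` most relevant."""
    return sorted(papers, key=lambda p: jaccard(question, p), reverse=True)[:keep]

papers = [
    "solar cell efficiency under low light",
    "gothic cathedral restoration methods",
    "perovskite solar cell efficiency gains",
]
shortlist = focus("improving solar cell efficiency", papers)
print(shortlist)  # the two solar-cell papers; the cathedral paper is dropped
```

A real system ranks with a learned model rather than raw word overlap, but the shape of the pipeline – score, sort, truncate – is the same.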

When asked how the software compares to existing research technology, Ms Ritola explained that it differs from systems such as Google Scholar in that it can function without keyword searches. The team envision that this will make the beginning of the research process (when you might not yet know the right keywords) much easier, as well as aiding multidisciplinary research.

Ms Ritola explained that the software is able to function without keywords because it “has contextual understanding, which we have built into the tool by using contextual synonyms, as well as topic models generated based on training the AI with more than 17 million documents.”
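A contextual-synonym lookup of this kind can be illustrated in miniature (a hypothetical sketch – the `SYNONYMS` table and the bag-of-words scoring below are invented for this example, not the tool’s internals):

```python
import math
from collections import Counter

# Invented toy synonym table: query terms are expanded before matching,
# so a document can score well even with zero literal keyword overlap.
SYNONYMS = {
    "tumour": {"tumor", "neoplasm", "cancer"},
}

def expand(terms):
    """Return the terms plus any contextual synonyms we know about."""
    out = set(terms)
    for t in terms:
        out |= SYNONYMS.get(t, set())
    return out

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def score(query, doc):
    q = Counter(expand(query.lower().split()))
    return cosine(q, Counter(doc.lower().split()))

# "neoplasm growth rates" matches a "tumour" query despite sharing
# no literal keyword with it.
print(score("tumour growth", "neoplasm growth rates in mice"))
print(score("tumour growth", "deep learning for chess"))  # no overlap: 0.0
```

Production systems use learned topic models over millions of documents rather than a hand-written table, but the effect – matching on meaning instead of exact words – is the one described above.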

“The semi-automation process actually helps you to do the drudgery part of the research process much faster, which is nice for researchers because then they can actually focus on reading the relevant documents and really figuring out the core hypothesis and combining knowledge, rather than going through thousands and thousands of papers, which is a very strenuous process,” Ms Ritola added.

As is the case with most artificial intelligence software, it will continue to get smarter the more it is used, and the team are making the most of this by training it using two different techniques.

The first approach – supervised machine learning – involves 10,000 people training the software directly: they read the abstracts of research articles and tell the system what they think is the most relevant part of each. The second method is unsupervised machine learning, where the software learns directly from the research articles themselves, without human labelling.
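The two training styles can be contrasted in a toy sketch (everything here – the labelled snippets, the per-word weights, the co-occurrence counts – is invented for illustration and is far simpler than a production pipeline):

```python
from collections import Counter, defaultdict

# Supervised: humans label text as relevant (1) or not (0), and the
# model learns per-word relevance weights from those labels.
labelled = [
    ("graphene battery anode capacity", 1),
    ("graphene electrode performance", 1),
    ("medieval poetry archive", 0),
]
weights = defaultdict(float)
for text, label in labelled:
    for word in text.split():
        weights[word] += 1.0 if label else -1.0

def relevance(text):
    """Sum of learned word weights - higher means more relevant."""
    return sum(weights[w] for w in text.split())

# Unsupervised: no labels - the model extracts structure (here, simple
# word co-occurrence counts) directly from raw article text.
corpus = ["graphene anode capacity", "anode capacity cycling"]
cooccur = Counter()
for doc in corpus:
    words = doc.split()
    for i, w in enumerate(words):
        for v in words[i + 1:]:
            cooccur[(w, v)] += 1

print(relevance("graphene anode study"))   # learned from labels
print(cooccur[("anode", "capacity")])      # learned from raw text
```

The supervised half needs people to provide the labels; the unsupervised half scales to millions of documents because it needs none.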

When asked about challenges that the team have experienced so far, Ms Ritola was quick to point out the issue of paywalls. She explained that the system is connected to about 130 million open access papers – almost all those available to the public – but that many useful documents are still hidden behind systems that require users to pay for access.

However, rather than accepting this situation, the team have devised a scheme to solve the problem – Project Aiur – an initiative that aims to revolutionise the current workings of the research world.

“What we’re trying to do is to build a community, which is not owned by us, but by a community of researchers, a community of coders, anyone who wants to contribute to building a new economic model for science that works around a community governed AI-based Knowledge Validation Engine and an open, validated repository of science. Over time, the goal is to give access to all the research articles that are in this world”, Ms Ritola told The Saint.

This is not a straightforward task, as the team are faced with the challenge of encouraging researchers to publish and carry out their investigations using Aiur rather than the current systems – something that will take a fair amount of research and incentivisation. The team have started a pledge, offering students and researchers the chance to be an “advocate for validated, reproducible, open-access scientific research.” At the time of the interview, Ms Ritola informed The Saint that more than 5,000 people had signed the pledge.

As with many emerging technologies, the concept has been met with some scepticism. To convince those who don’t believe a machine capable of building the right context around a research paper, the team have organised scithons, or science hackathons – events where researchers are invited to spend the day solving a research challenge defined by a university or corporate partner.

Ms Ritola said: “Based on a research article presented at WOSP18, we discuss the results of six scithons. We found that when people actually use the tools, they’re able to build an overview much faster compared to the other teams, and then they can come up with a more balanced summary by the end of the day.”

So far the team have been testing the tool on the harder sciences, but Ms Ritola sees potential for its scope to expand to the humanities: “the challenge is that the concepts in, say, social sciences are more ambiguous, and they’re more dynamic as well, than say mathematics or natural sciences.”

Another development the team are working towards is software capable of forming and testing its own hypotheses – something that would make it a researcher in its own right.

As part of their immediate plans, the team hope to expand their collaboration with universities. So far they are working with a number of Scandinavian and other continental European research partners, including Aalto University, the University of Helsinki, and Chalmers University, but Ms Ritola told The Saint that the team are looking to expand to universities elsewhere in Europe, so St Andrews could feature further down the roadmap.
