Call for Research Partnerships with Logically
Introducing our first funding call for our global academic research partnership program, established to collaborate with leading academic institutions and researchers on innovative interdisciplinary efforts to combat fake news, misinformation and disinformation.
The call for applications is now closed.
Logically Research Partners
Logically is proud to work with a number of prestigious research partners, pursuing cutting-edge solutions to complex problems in the misinformation space. Our work with the Indraprastha Institute of Information Technology - Delhi (IIIT-D) explores the motivations and psychology behind misinformation shared online, with the aim of better predicting which social media posts will elicit a hateful reaction. Our recent KTN partnership with the University of Sheffield aims to improve the detection of false and misleading information in multimedia content such as video.


Become a Research Partner with Logically
We aim to expand our range of research partners to increase the impact we can have on this problem space. We are therefore delighted to announce the launch of our first funding call for our global academic research partnership program, which will see us collaborate with leading academic institutions and researchers on innovative interdisciplinary efforts to combat fake news, misinformation and disinformation.
Program participants will be able to collaborate closely with Logically’s technology and product staff, as well as the subject matter experts who lead Logically’s investigative and research efforts on disinformation monitoring and fact-checking. As concerns rise globally about the impact and influence of mis/disinformation, propaganda and conspiratorial narratives, collective interdisciplinary efforts are needed to produce innovative and effective technologies that can counter them.

Specific goals of the research program include:
- Stimulate new, high-impact research to effectively combat multimodal and multilingual disinformation and fake news
- Design theoretical research frameworks, taxonomies, interdisciplinary approaches, and methodologies to advance computational models and improve their effectiveness in combating misinformation and disinformation
- Improve information attribution and segmentation methods to accurately detect misinformation narratives, influencers and campaigns
- Develop practices and tools that can mitigate online harms and digital threats

We are particularly interested in applications that focus on:
- Regional, cultural, and linguistic nuances
- Impact-focused social science and humanities research to develop computable theoretical frameworks that enable sophisticated large-scale analyses of online misinformation
- Development of data sets and knowledge bases that can offer unique insights into the problem characteristics and evolution patterns pertaining to different geographies, communities and topics
- Metrics and frameworks for threat life-cycle modelling and impact assessment of threats resulting from problematic online activities
- Initiatives to define and identify novel threat vectors and other adversarial attack types that are increasingly used to proliferate disinformation and online threats
- Research into countermeasures to help determine which interventions to specific mis/disinformation are likely to be effective, and frameworks within which to determine the proportionality of such measures
- Computational approaches to simulate user behaviour patterns online and in social networks in order to detect adversarial tactics to promote disinformation campaigns
- System and process innovation for scalable and efficient OSINT and fact-checking operations
Qualifications and eligibility
- We welcome applications from research consultants, research bodies and labs, NGOs and academic institutions that have a strong background in world-class computational science or social science research
- Prior research experience in the problem space or related areas is essential
- We strongly encourage multidisciplinary and collaborative proposals
- Applications are requested in English, but we welcome applications from candidates anywhere in the world


Resources
- Access to Logically’s leading team of data scientists, engineers, product owners and other expert stakeholders to conduct advanced research on the problem areas of fake news and misinformation
- Access to cutting-edge compute infrastructure to experiment with advanced computational methods and to conduct novel social science experiments
- Access to budgets in the range of $10,000 to $150,000, for project durations of three months to two years
How to respond
Please fill in the application form and submit it to applications@logically.ai by midnight anywhere in the world on Friday 7th January 2022.
Process
First-round applications will be assessed and shortlisted for a second round, comprising a presentation of your proposal.
Presentations and second-round applications: February and March
About us
At Logically we are on a mission to reduce and eventually eliminate the harm caused by the spread of misinformation and targeted disinformation campaigns. In recent years, we have been experiencing an information crisis: a breakdown in civic discourse, polarisation of our media and our politics, and real-world harm and violence caused by mis- and disinformation. We believe that, although complex and far-reaching, these problems are not insurmountable.
Logically combines advanced AI and machine learning technologies with OSINT analysts, researchers and fact-checkers to detect, classify and debunk damaging misinformation and disinformation at scale. Through our work with governments and platforms around the world, we help to identify problematic content, track disinformation campaigns, identify bad actors and spot threats early. In addition, our semi-automated fact-checking service and published investigations help both organisations and the public gain swift access to reliable information.