Why we do it
Logically is proud to work with a number of prestigious research partners, pursuing cutting-edge solutions to complex problems in the misinformation space.
The misinformation and disinformation problem space is evolving rapidly, with advances in the tactics, techniques, and procedures used to harm information integrity, digital safety, and trust in democracy.
As concerns rise globally about the impact and influence of misinformation and disinformation, collective interdisciplinary efforts are essential to developing innovative and effective technologies that can counter these threats.
Our research goals
- Stimulate new, high-impact research to effectively combat multimodal and multilingual disinformation and fake news
- Design theoretical research frameworks, taxonomies, interdisciplinary approaches, and methodologies to advance computational models and improve their effectiveness in combating misinformation and disinformation
- Improve information attribution and segmentation methods to accurately detect misinformation narratives, influencers, and campaigns
- Develop practices and tools that can mitigate online harms and digital threats
Our research partners
Research is in progress on developing advanced technologies to counter hate speech and online mis- and disinformation.
The partnership will also enable us to build multilingual models that understand regional nuances of the hate speech and misinformation problems in India.
SIGKDD, ICDM 2021, PNAS, ACL 2023
The University of Sheffield addresses multimedia-based misinformation and hate speech detection, tackling problematic multimedia content that poses harm in real-world use cases such as multimedia content trust and integrity.
Submitted to ACMMM 2022
CTEC is creating a first-of-its-kind cross-platform social media database with a focus on accelerationism.
This database will provide the global research community with an invaluable resource to better understand and respond to an emerging dominant extremist ideology.
The University of Waterloo is experimenting with ways that figleaves, an understudied variety of rhetorical deception, aid in the dissemination and uptake of misinformative messages online.
Logically will benefit from a unique corpus to develop large-scale detection and analysis of rhetorically sophisticated misinformation.
CSDR is developing an interdisciplinary framework to measure the harm potential of multilingual public online posts in India.
The framework will be developed in conjunction with linguists, social anthropologists, legal experts, law enforcement agents, online activists, and victims of hate speech and disinformation campaigns.
Our research partnerships
Our research partnerships focus on the following key areas.
- System and process innovation for scalable and efficient OSINT and fact-checking operations
- Regional, cultural, and linguistic nuances
- Impact-focused social science and humanities research to develop computable theoretical frameworks that enable sophisticated large-scale analyses of online misinformation
- Development of data sets and knowledge bases that offer unique insights into the problem characteristics and evolution patterns pertaining to different geographies, communities and topics
- Metrics and frameworks for threat life cycle modelling and impact assessment of threats resulting from problematic online activities
- Initiatives to define and identify novel threat vectors and other adversarial attack types that are increasingly used to proliferate disinformation and online threats
- Research into countermeasures to help determine which interventions to specific mis/disinformation are likely to be effective, and frameworks within which to determine the proportionality of such measures
- Computational approaches to simulate user behaviour patterns online and in social networks in order to detect adversarial tactics used to promote disinformation campaigns