We build AI software and digital forensics tools to analyse misinformation. Our team of dedicated fact-checkers work in tandem with our tech to help the public separate facts from falsehoods.
Misinformation, from fake news to state propaganda, has never been more prominent, more impactful or more dangerous than it is today. Our services and tools are built to help our users, partners and community take back the power to navigate the fractured information ecosystem.
Logically employs the world's largest dedicated fact-checking team, working together with our in-house AI to provide rigorous, evidence-based fact checks.
Our image verification APIs are built to highlight manipulated imagery and extract text from images, enabling both manual and fully automated analysis of visual media and content.
Logically's ensemble machine learning models are constantly and iteratively evaluated to improve their efficacy. We benchmark the performance of our AI against our diverse, expert analysts, to eliminate the typical challenges associated with crowdsourcing data.
Our AI analyses an article's content, its metadata (such as its authors and publishers), and its patterns of propagation on social media, then uses this rich understanding of the content and its source to determine its credibility.
We assess a source based on the content it produces, its metadata, and its direct and indirect associations with known agents of disinformation.
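To make the idea concrete, here is a minimal sketch of combining content, metadata and association signals into a single credibility score. The weights, the 0-to-1 signal scales and the `SourceSignals` structure are illustrative assumptions for this example; the actual model, features and weighting are not described in this document.

```python
from dataclasses import dataclass

# Illustrative weights -- assumptions for this sketch, not the real model.
WEIGHTS = {"content": 0.5, "metadata": 0.3, "associations": 0.2}

@dataclass
class SourceSignals:
    content_score: float      # 0..1, e.g. from a text-credibility classifier
    metadata_score: float     # 0..1, e.g. author/publisher track record
    association_score: float  # 0..1, distance from known disinformation agents

def credibility(signals: SourceSignals) -> float:
    """Combine per-signal scores into a single 0..1 credibility rating."""
    return (WEIGHTS["content"] * signals.content_score
            + WEIGHTS["metadata"] * signals.metadata_score
            + WEIGHTS["associations"] * signals.association_score)

# A source with strong content, solid metadata and no suspect associations
# receives a high rating.
rating = credibility(SourceSignals(0.9, 0.8, 1.0))
```

In practice a weighted sum like this would be replaced by a trained ensemble, but the principle of fusing independent evidence streams into one rating is the same.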
The anonymous nature of internet-based platforms allows malicious actors to spread disinformation at an unprecedented scale and to devastating effect. Recent disinformation campaigns, widely attributed to state actors, have shaken democracies, provoked public health crises and incited lethal violence. Our services are designed to detect, monitor and deploy countermeasures to mitigate the risks posed by disinformation.
OSINT - Our skilled investigators use a suite of advanced Open Source Intelligence tools to monitor and map malicious information flows through and between social media platforms and across the wider Internet.
Attribution - We trace and attribute viral and memetic disinformation campaigns to malicious actors - both nation-state and for-profit.
We use a variety of Social Media Intelligence tools to detect and monitor automated (bot) accounts, cyborg accounts, and other coordinated user disinformation strategies.
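The kind of behavioural signals used to flag automated accounts can be sketched as follows. The `Account` fields, thresholds and score increments below are hypothetical, chosen only to illustrate the approach; production systems combine many more behavioural and network features in trained models.

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    account_age_days: int
    duplicate_post_ratio: float  # share of posts repeating earlier content

def bot_likelihood(acct: Account) -> float:
    """Toy 0..1 score from three hand-picked signals (illustrative only)."""
    score = 0.0
    if acct.posts_per_day > 50:          # sustained high-volume posting
        score += 0.4
    if acct.account_age_days < 30:       # newly created account
        score += 0.3
    if acct.duplicate_post_ratio > 0.5:  # mostly copy-pasted content
        score += 0.3
    return score

# A brand-new account posting 120 near-identical messages a day scores
# far higher than a long-lived account with organic behaviour.
suspect = bot_likelihood(Account(120, 5, 0.8))
organic = bot_likelihood(Account(2, 900, 0.0))
```

Coordinated campaigns are then surfaced by looking for clusters of high-scoring accounts acting in concert, rather than judging any single account in isolation.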
We monitor known information threats 24/7 to identify potential operations by nation-state and politically motivated actors. Our analysts and investigators also support our Information Integrity War Rooms set up to monitor elections and crisis events.
It has never been easier to generate manipulated, doctored and entirely fake media. From Instagram filters that dramatically change the look of a landscape, to journalism and research written by AI, to deepfake videos that can convincingly show anything, the threat posed by synthetic content is clear. We build technology that helps you identify, analyse and understand synthetic content.
Logically builds machine learning models capable of detecting the increasingly widespread use of deepfake technologies.
Our AI can determine whether a piece of text has been written by a human or using Natural Language Generation AI.
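As a simple illustration of machine-generated-text detection, the toy heuristic below flags highly repetitive text using lexical diversity. This is purely a teaching sketch under our own assumptions; a real detector would use a trained classifier over model-likelihood and stylistic features, not a single hand-set threshold.

```python
import re

def type_token_ratio(text: str) -> float:
    """Ratio of unique words to total words: a crude diversity measure."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def looks_machine_generated(text: str, threshold: float = 0.5) -> bool:
    """Toy heuristic: flag text whose vocabulary is unusually repetitive.
    The 0.5 threshold is an arbitrary assumption for this sketch."""
    return type_token_ratio(text) < threshold
```

Lexical diversity is only one weak signal among many; the point of the sketch is that detection reduces to scoring measurable properties of the text and thresholding or classifying over them.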
We use Natural Language Generation to contextualise verified information and remove the barriers to navigating today's convoluted information ecosystem.
It's never been easier to find information, but never been harder to determine what information is useful, accurate or credible. Our tools are built to help businesses and individuals find the right information, from the right source, for the right purpose.
Our AI reads and analyses all articles related to a story and, guided by human analysts, provides our users with a concise summary based on the facts, without spin, slant or bias.
We place each story and recent development in historical context so that our users can understand how a situation has evolved from its very beginning.
Our team of dedicated fact-checkers, journalists and OSINT researchers use Logically's own tools, together with expertise in data science, logic and the social sciences, to produce original intelligence reports and journalistic content, providing unparalleled insight at unprecedented scale.
Exposure to diverse perspectives is vital for fostering critical thinking and making informed decisions, but today algorithm-based platforms have created echo chambers and filter bubbles which have proven virtually impossible to escape. At Logically, we build tools to improve civic discourse and facilitate critical thinking.
Our AI picks out divergent viewpoints from reliable publishers across the political spectrum.
Our machine learning algorithms are capable of determining political bias within text.
Our advanced AI uses a combination of domain, metadata and content analysis to provide an accurate credibility rating for articles and sources.
Logically launches misinformation-fighting app on Google Play Store
Logically conducts 'Fake news in the digital age' workshop at School of Broadcasting and Communications, Mumbai
Logically detects over 40,000 unreliable articles during Indian General Election
Logically named finalist for Mayor of London’s Tech Challenge
Improve civic discourse and facilitate critical thinking by:
- Providing insight into and analysis of the news.
- Guiding people away from echo chambers and filter bubbles, and exposing them to different beliefs, ideas and perspectives.
- Placing credibility and context at the heart of digital news approaches.
Foster web literacy by providing future generations with an awareness of:
- Ownership of and right to control their data.
- The economics of content online.
- How to access rigorous, trustworthy information.
Provide individuals and professionals with the tools to face the challenges of the misinformation ecosystem by:
- Detecting various degrees of misinformation.
- Classifying these instances.
- Recording them so as to create a long-term picture of their evolution.
- Monitoring the forces that shape news coverage, and educating people about these dynamics.
Recognise the constantly-evolving nature of the misinformation ecosystem, and adapt to the challenges this poses.
- We are politically conscious, but scrupulously nonpartisan
- We believe in the necessity of a free and independent media
- We are transparent about our interests, and declare potential conflicts
- Our commercial interests will never undermine our editorial independence
- We do things because we should, not just because we can
- We will not let our technology develop beyond our understanding of its ethical implications
- We are collegial, respectful, inclusive, and collaborative in our work
- We are truthful with our users and partners, and respect their autonomy, their intelligence and their good faith
- We are clear, we are precise, we say what we mean and are prepared to stand by it
- We adhere at all times to the highest standards of evidence-based reasoning, of rational argument and of intellectual honesty
- We respect expertise, and support the efforts of experts to effectively communicate their work
- We always welcome constructive criticism, from any source, on any subject
- We question our assumptions, we question our values, we question our friends and we question ourselves
- We do not pretend that we have no individual biases; instead, we work to challenge them, change them and mitigate their effects
- We are pragmatists; we work on the basis that an idea, a tool or a technology is only valuable to the extent that it is useful
- We create impact by empowering people with technology, not by replacing them with it