Logically’s media intelligence, credibility assessment, veracity assessment, and social network intelligence capabilities are all made possible by combining cutting-edge AI and human expertise. These capabilities power our products for consumers, businesses, and our public sector partners.
Analyse trends, mitigate risks, seize opportunities.
Our AI is built on a set of modular processes which can be customised to analyse data at any scale.
Platform Agnostic with Limitless Possibilities.
We continually collect data from every corner of the internet. We actively monitor over a million domains and social media platforms in real-time, and can add further sources, including niche platforms, within hours. This enables Logically to comprehensively analyse all relevant data for any project and any partner, whether broad and extensive or meticulously targeted.
Automatically Extract and Link Entities, Topics and Concepts.
We use advanced Natural Language Processing (NLP) and Knowledge Engineering techniques to identify and disambiguate entities, topics and concepts.
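For illustration only, here is a toy gazetteer-based entity tagger. Logically's production NER and disambiguation pipeline is far more sophisticated; the entity list, labels and function names below are invented for the example.

```python
# Toy gazetteer lookup: find known entities in text and report
# their label and character offset. Illustrative only.
import re

GAZETTEER = {
    "London Capital & Finance": "ORG",
    "The Guardian": "ORG",
    "Kalyeena Makortoff": "PERSON",
}

def tag_entities(text):
    """Return (surface form, label, start offset) for each known entity."""
    hits = []
    for name, label in GAZETTEER.items():
        for m in re.finditer(re.escape(name), text):
            hits.append((name, label, m.start()))
    return sorted(hits, key=lambda h: h[2])

print(tag_entities("London Capital & Finance was covered by The Guardian."))
```

Real disambiguation additionally resolves each surface form to a unique knowledge-base entry, which a lookup table alone cannot do.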
The sheer volume of information online can make it difficult for any individual or organisation to sort through the noise and identify the most salient signals, trends and insights. Our advanced contextualisation algorithms group structured and unstructured information to show you undiscovered patterns so you can make informed decisions based on sound reasoning and evidence.
Appreciate nuance and empathise with alternate views.
We compare and contrast aggregated information and identify agreement and disagreement between pieces of content, their authors and their audiences.
Navigate dynamic subjects and understand their history to see the whole picture.
We update our analysis in near real-time and link related events together to provide a chronology of developments that have resulted in emergent situations and threats.
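The idea of linking related events into a chronology can be sketched as follows; the event records and the shared-entity linking rule here are invented purely for illustration.

```python
# Hedged sketch: build a chronology of events that mention a given
# entity, ordered oldest to newest. Data is invented for the example.
from datetime import date

events = [
    {"date": date(2019, 1, 30), "entities": {"LCF", "FCA"},
     "headline": "FCA orders LCF to withdraw marketing"},
    {"date": date(2020, 1, 9), "entities": {"LCF", "FSCS"},
     "headline": "Only 159 LCF customers to receive compensation"},
    {"date": date(2019, 12, 1), "entities": {"Acme Corp"},
     "headline": "Unrelated story"},
]

def chronology(events, entity):
    """Events mentioning `entity`, ordered oldest to newest."""
    related = [e for e in events if entity in e["entities"]]
    return sorted(related, key=lambda e: e["date"])

for e in chronology(events, "LCF"):
    print(e["date"], e["headline"])
```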
The Précis - Multi Document Summarisation
Insight made easy.
We identify the most salient points from across multiple documents to produce controllable summaries that get you up to speed in seconds. The Précis can be guided by specific goals such as informational value, objectivity, diversity and actionability.
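As a rough intuition for extractive multi-document summarisation, here is a classical word-frequency baseline: sentences whose words are frequent across the document set are taken as most salient. This is not the Précis model, only a minimal sketch.

```python
# Frequency-based extractive summarisation across multiple documents.
# Classical baseline for illustration; production summarisers are learned.
import re
from collections import Counter

def summarise(docs, n_sentences=2):
    # Split every document into sentences.
    sentences = [s.strip() for d in docs
                 for s in re.split(r"(?<=[.!?])\s+", d) if s.strip()]
    # Count word frequencies across the whole collection.
    words = Counter(w.lower() for s in sentences
                    for w in re.findall(r"[a-z']+", s, re.I))
    def score(s):
        toks = re.findall(r"[a-z']+", s, re.I)
        return sum(words[w.lower()] for w in toks) / max(len(toks), 1)
    return sorted(sentences, key=score, reverse=True)[:n_sentences]

docs = [
    "Investors face losses after the collapse. The collapse hit savers hard.",
    "Regulators reviewed the collapse. Weather was mild today.",
]
for s in summarise(docs, 1):
    print(s)
```

Goal-guided summarisation, as described above, would additionally re-weight sentence scores by objectives such as objectivity or actionability.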
Structure & Analyse
Types of Data.
We support the analysis of long-form content such as news articles as well as short-form content such as social posts. These are processed through fine-tuned models to identify:
Lenses of Analysis.
Our primary lenses of analysis for phrases, sentences and longer form content include:
- Patient Zero
“More than 11,400 investors are likely to lose more than £230m in savings due to the collapse of London Capital & Finance after it was announced that only 159 affected mini-bond customers would receive compensation.”
Publisher: The Guardian
Date Published: Thu 9 Jan 2020 16.19 GMT
Last Modified: Thu 9 Jan 2020 16.44 GMT
Author: Kalyeena Makortoff
Title: Investors face £230m loss in London Capital & Finance collapse
More than (Modifier) 11,400 (number) investors (units) are likely to lose more than £230m (number | currency | gbp) in savings due to the collapse of London Capital & Finance (entity) after it was announced that only 159 (number) affected mini-bond customers (units) would receive compensation.
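Annotations like the number and currency tags above can be represented as labelled spans. Below is a toy regex-based tagger for numbers and GBP amounts; the label strings mirror the example, but the patterns are invented for illustration.

```python
# Toy span tagger: find GBP currency amounts and bare numbers in text.
# Illustrative only; production extraction uses trained models.
import re

def tag_quantities(text):
    spans = []
    # Currency: £ followed by digits, optionally with a magnitude suffix.
    for m in re.finditer(r"£[\d,.]+[mbk]?", text):
        spans.append((m.group(), "number|currency|gbp"))
    # Bare numbers with optional thousands separators (not preceded by £).
    for m in re.finditer(r"(?<!£)\b\d{1,3}(?:,\d{3})*\b", text):
        spans.append((m.group(), "number"))
    return spans

print(tag_quantities("More than 11,400 investors may lose £230m; "
                     "only 159 were compensated."))
```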
“I appreciate that the initial decisions and outlook we are announcing today are likely to be disappointing to many LC&F customers. (NEGATIVE SENTIMENT) We are, however, working as quickly as we can to establish a suitable process for determining customers’ claims, and expect to be in a position to start this process in the next few weeks.”
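The sentiment tag in the quote above can be illustrated with a minimal lexicon-based scorer. The word lists are invented for this sketch; Logically's sentiment models are learned rather than rule-based.

```python
# Toy lexicon-based sentiment classifier. Illustrative only.
NEGATIVE = {"disappointing", "collapse", "loss", "lose"}
POSITIVE = {"compensation", "suitable", "recover"}

def sentiment(text):
    # Strip trailing punctuation and compare against the tiny lexicons.
    words = {w.strip(".,;").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "NEGATIVE" if score < 0 else "POSITIVE" if score > 0 else "NEUTRAL"

print(sentiment("The outlook is likely to be disappointing to many customers."))
```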
Logically’s cutting-edge technology and expert investigations team work together to perform in-depth social network analysis of users, their communities, the content they produce and their engagement patterns to identify signals for automated and coordinated campaigns.
Our state-of-the-art forensics and attribution techniques are able to identify the involvement of nation-state actors such as Russia, China and Iran, as well as known for-profit actors, by correlating indicators of compromise (IOCs) and attack signatures. This allows us to identify signals for sock-puppeting and brigading attack vectors, and also provides an early detection mechanism for narratives proliferating from fringe networks and sources with a high propensity to disinform.
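One simple coordination signal is many distinct accounts posting identical text within a short window. The heuristic below sketches that single signal; the thresholds and data are invented, and real detection combines many such signals with human review.

```python
# Toy coordination heuristic: flag texts posted by at least
# `min_accounts` distinct accounts within `window_secs` seconds.
from collections import defaultdict

def coordinated_groups(posts, window_secs=300, min_accounts=3):
    """posts: list of (account, timestamp_secs, text)."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))
    flagged = []
    for text, items in by_text.items():
        items.sort()
        accounts = {a for _, a in items}
        span = items[-1][0] - items[0][0]
        if len(accounts) >= min_accounts and span <= window_secs:
            flagged.append((text, sorted(accounts)))
    return flagged

posts = [
    ("bot_a", 0, "Share this now!"), ("bot_b", 60, "Share this now!"),
    ("bot_c", 120, "Share this now!"), ("user_x", 50, "Nice weather."),
]
print(coordinated_groups(posts))
```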
Automated Credibility Assessment
Our three-pronged approach to verification:
We use propagation analysis to model the virality of content, the authenticity of engagement, and the incentives of the actors who promote it.
We examine associated data: in the case of article assessment, we model authors, ad networks, domain ownership and track records to evaluate content.
We identify semantic and psycho-linguistic indicators of misinformation within text to accurately assess its credibility and produce interpretable inferences.
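To give a flavour of psycho-linguistic indicators, the sketch below counts a few surface features sometimes used in misinformation classifiers (exclamations, shouting, hedging phrases). The feature set is invented for illustration and far simpler than a production model.

```python
# Toy psycho-linguistic feature extractor. Illustrative only.
import re

HEDGES = {"allegedly", "reportedly", "some say", "many believe"}

def features(text):
    lower = text.lower()
    return {
        "exclamations": text.count("!"),
        "all_caps_words": len(re.findall(r"\b[A-Z]{3,}\b", text)),
        "hedges": sum(lower.count(h) for h in HEDGES),
        "question_marks": text.count("?"),
    }

print(features("SHOCKING!!! Many believe the cure was allegedly hidden!"))
```

In practice such features would feed a trained classifier rather than be interpreted directly.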
We assess a source based on the content it produces, its metadata, and its direct and indirect associations with known agents of disinformation. Our automated source assessment systems will soon be paired with dedicated expert analysts to validate and verify our automated assessments. Our ensemble models are trained using an extensive database of over 30k known sources of disinformation.
In line with our three-pronged approach, our article credibility models extend our source credibility models: they identify the particular kinds of reliable or unreliable information present on a web page and predict its credibility, labelled as High Credibility, Medium Credibility or Low Credibility.
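As a rough sketch of how source-level and article-level signals might combine into the three credibility bands, consider a weighted score mapped onto cut-offs. The weights, thresholds and function name are entirely invented; the real ensemble is learned from data.

```python
# Hedged sketch: map a weighted blend of source and content scores
# onto coarse credibility labels. All constants are invented.
def credibility_label(source_score, content_score,
                      w_source=0.4, w_content=0.6):
    """Scores in [0, 1]; returns a coarse credibility band."""
    combined = w_source * source_score + w_content * content_score
    if combined >= 0.66:
        return "High Credibility"
    if combined >= 0.33:
        return "Medium Credibility"
    return "Low Credibility"

print(credibility_label(0.9, 0.8))
```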
Although geopolitically motivated misinformation and disinformation have been in prominent focus over the last few years, for-profit misinformation and disinformation have been equally prominent and impactful in poisoning our information feeds. Advertisers risk associating their brand with problematic content unless they deploy mechanisms to mitigate the risks of advertising on websites and webpages with a propensity to misinform.
Claim Extraction & Matching
Logically employs advanced AI to identify claims within a text and rank them based on plausibility and fact-check worthiness. Once a claim has been extracted or submitted, we match it against trusted sources and fact checks using state-of-the-art natural language understanding technologies that capture the inherent syntactic and semantic details.
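A minimal baseline for claim matching is cosine similarity over bag-of-words vectors against a store of prior fact checks. Production systems use learned semantic encoders; this sketch, with invented example data, only illustrates the matching step.

```python
# Toy claim matcher: bag-of-words cosine similarity against a small
# fact-check store. Illustrative baseline only.
import math
import re
from collections import Counter

def vec(text):
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

fact_checks = [
    "LCF investors will lose more than 230m pounds in savings",
    "Drinking bleach does not cure viral infections",
]

def best_match(claim):
    return max(fact_checks, key=lambda fc: cosine(vec(claim), vec(fc)))

print(best_match("Investors are likely to lose 230m pounds after LCF collapse"))
```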
Our powerful workflow maximises fact-checking efficiency by cross-referencing data related to a claim before it is manually verified, bringing the most questionable and pressing claims to fact-checkers and allowing them to work alongside bots dedicated to tasks such as retrieving evidence and creating visual assets.
Logically is also developing Multimedia Forensics tools to identify manipulated media, including deepfakes.