
Responsibility, Explainability and Ethics in AI

Using AI responsibly

The AI software industry is growing rapidly, evolving faster than legislators can keep up with. Because of the power of AI and its potential to have a profound impact on people’s lives, the technology needs to be developed with care. It’s essential for companies to ensure they use AI responsibly and ethically.

 

What is responsible AI?

Responsible AI is about creating and adhering to a set of principles to ensure AI is being used ethically. This area of AI is evolving, and it’s clear that without clarity of purpose, an ethical commitment with full buy-in from all stakeholders, and accountability, the technology has the potential to cause real harm.

To guard against this at Logically, we strive to continuously improve the transparency standards behind our AI models. Part of this is turning black-box models into more explainable services that actually offer insights, such as "Why is this piece of information classified as misleading?" or "Why is this particular article flagged as containing toxicity?" We've worked internally to validate different approaches to explainable AI and how they can be implemented on top of the deep learning models we've developed.

 

What is explainable AI? 

Explainable AI is a broad area, and there are many ways you can develop explainable AI algorithms. One way to think of it is: if a machine learning model is serving insights at scale on collections of data, how can we improve end-to-end transparency for that model? It's mostly about offering layers of transparency to the end user. As the name suggests – explainability – we want models to output not only predictions, but also some kind of reasoning or explanation behind the decisions they've made. A machine learning system has different stages of learning, so there are multiple points where transparency can be inserted. Depending on the use case, we can develop additional mechanisms through which we're able to improve these transparency standards across different stages.

There are many ways this transparency can happen. You can have explicit rules running alongside a machine learning pipeline that coordinate with the pipeline and then output certain explanations. You can also embed rules into the pipeline itself, so it focuses not just on learning certain patterns, but also on developing the knowledge needed to explain why. For example, in language sentiment analysis, "good" might be associated with positive sentiment, and "hate" might be associated with negative sentiment. These are labels that can be added to a model to make its outputs explainable.
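As a rough illustration of the lexicon idea above, the sketch below pairs a stand-in sentiment classifier with an explicit word list, so the output includes not just a label but the terms that support it. The lexicon entries and the classifier stub are assumptions for demonstration only, not Logically's production pipeline.

```python
# Minimal sketch: pairing a model prediction with a rule-based explanation.
# The lexicon and the classify() stub are illustrative assumptions, not a
# production pipeline.

# Explicit sentiment lexicon that runs alongside the ML model.
SENTIMENT_LEXICON = {
    "good": "positive",
    "great": "positive",
    "hate": "negative",
    "terrible": "negative",
}

def classify(text: str) -> str:
    """Stand-in for a trained sentiment model (assumed, for the example)."""
    score = sum(
        1 if SENTIMENT_LEXICON.get(w) == "positive"
        else -1 if SENTIMENT_LEXICON.get(w) == "negative"
        else 0
        for w in text.lower().split()
    )
    return "positive" if score >= 0 else "negative"

def classify_with_explanation(text: str) -> dict:
    """Return the label plus the lexicon terms that support it."""
    label = classify(text)
    evidence = [w for w in text.lower().split()
                if SENTIMENT_LEXICON.get(w) == label]
    return {"label": label, "evidence": evidence}

if __name__ == "__main__":
    print(classify_with_explanation("I hate how terrible this argument is"))
    # e.g. {'label': 'negative', 'evidence': ['hate', 'terrible']}
```

The point of the sketch is the shape of the output: the end user sees which explicit rules agreed with the model's decision, rather than just the decision itself.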

 

How do you ensure privacy and anonymity?  

Models should not be fed any personally identifiable information that could later be used to target individuals. This means they don't learn specific associations between individuals, communities, or organizations that could be exploited or used to compromise user privacy.

You can anonymize data using different transformation techniques. For example, before the model is given data, a person's name might be replaced with basic characters like “xx.” That way the model never learns the identity, which prevents it from being misused in ways that compromise privacy and security. There will also be use cases where user privacy and security need particular attention, so our work is guided by vigilance to ensure we never design a feature that compromises them.
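As a minimal sketch of the transformation described above, the snippet below replaces person names with a placeholder before text is handed to a model. The fixed name list and the "xx" token are assumptions for illustration; a real pipeline would typically rely on a named-entity recognizer or dedicated PII-scrubbing tooling rather than a hard-coded list.

```python
import re

# Minimal sketch: masking personally identifiable information before training.
# The name list and placeholder token are illustrative assumptions.
KNOWN_NAMES = ["Alice Smith", "Bob Jones"]  # assumed, for demonstration only
PLACEHOLDER = "xx"

def anonymize(text: str) -> str:
    """Replace each known name with a neutral placeholder."""
    for name in KNOWN_NAMES:
        text = re.sub(re.escape(name), PLACEHOLDER, text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    print(anonymize("Alice Smith shared this article with Bob Jones."))
    # -> "xx shared this article with xx."
```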

 

Who is responsible for recognizing and solving bias?

It's a shared responsibility. Bias occurs because the humans who create data consciously or unconsciously embed their own biased thinking in it. When that data is then used to train a model on a certain task, the bias transfers into the model's knowledge. When it comes to data analysis at Logically, we have analysts who focus on checking whether the data complies with standards that have been defined to mitigate bias. These stakeholders investigate as thoroughly as possible to make sure we are countering bias.

Once the data has been fed in, there are also metrics through which you can tell whether the model is skewed or biased towards predicting certain types of outputs. Handling bias requires many specific mechanisms to be implemented, and it needs to be owned by different stakeholders at different stages so that we can track, trace, and mitigate it.
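To make the metrics point concrete, here is a hedged sketch of one such check: comparing how often a model predicts a given outcome for different groups, in the spirit of a demographic-parity comparison. The sample records and the disparity threshold are assumptions for illustration only, not a prescribed standard.

```python
from collections import defaultdict

# Minimal sketch of a skew check: compare positive-prediction rates per group.
# The sample records and the 0.1 threshold are illustrative assumptions.
predictions = [
    {"group": "A", "predicted_positive": True},
    {"group": "A", "predicted_positive": False},
    {"group": "B", "predicted_positive": True},
    {"group": "B", "predicted_positive": True},
]

def positive_rates(records):
    """Share of positive predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(r["predicted_positive"])
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(predictions)
disparity = max(rates.values()) - min(rates.values())
print(rates, "disparity:", round(disparity, 2))
if disparity > 0.1:  # assumed threshold for flagging a skewed model
    print("Model output looks skewed across groups; investigate further.")
```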
