London-based startup Checkstep raises £1.3M funding to develop its AI-based content moderation solution

A gargantuan amount of data is posted online every day, which makes it very difficult to ensure that none of it is toxic or harmful. To make this process easier, companies such as Facebook and YouTube hire teams of moderators whose sole purpose is to go through and vet comments and posted data. One way to speed up the process is Artificial Intelligence (AI), which is exactly what the London-based startup Checkstep offers.

Checkstep offers an AI-based solution that delivers contextual moderation. Being software, it works far faster than human moderators. The company has now announced a £1.3 million funding round, which will help it further develop its offering. In addition, it has hired Kyle Dent as its Head of AI Ethics.

Bigwigs participated in the funding 

The latest seed round for Checkstep was led by Shutterstock founder Jon Oringer, Uber's former Chief Business Officer (CBO) Emil Michael, and Microsoft's former Head of Corporate Strategy Charles Songhurst. VCs, angel and private investors also supported the raise. The company will use the new funding to advance its scale-to-market plans and hire more people.

In a conversation with UKTN, the company's CEO and founder Guillaume Bouchard notes, "Most of the funding will go into R&D to scale the software and policy coverage to deliver more functionality required to build a full 'end-to-end' content moderation process, from multi-policy definition to management of the appeal process."

On the hiring of Kyle Dent as Head of AI Ethics, Bouchard says, "With Kyle's focus on the intersection of people and technology, we aim to humanise our content moderation process and AI tools. His expertise will definitely help us ask the right questions during product development to mitigate the potentially adverse effects of AI deployments through serious consideration of ethical concerns."

Leveraging AI to simplify tasks

Artificial Intelligence is usually trained on curated data sets. As it improves, it can be introduced to fresh data, which it can process and make quick work of. Checkstep works along similar lines, as its software "enables users to set and create their own policies, back-test them on historical data, run automatic content flagging (potentially using their own internal AI categoriser), and manage the appeal process for users who disagree with the moderation decisions," Bouchard notes.
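
To make that workflow concrete, here is a minimal, purely illustrative sketch in Python of a policy-plus-flagging loop of this kind. All class and function names are hypothetical and do not come from Checkstep's actual product or API.

```python
# Purely illustrative sketch of a policy-driven flagging flow.
# Nothing here reflects Checkstep's actual API; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Policy:
    name: str          # e.g. "hate_speech"
    threshold: float   # flag when the classifier score reaches this value

@dataclass
class Decision:
    content_id: str
    flagged_by: list = field(default_factory=list)  # policies that fired
    appealed: bool = False                          # set when a user appeals

def moderate(content_id, scores, policies):
    """Apply every policy to one item's classifier scores (multi-policy flagging)."""
    decision = Decision(content_id)
    for policy in policies:
        if scores.get(policy.name, 0.0) >= policy.threshold:
            decision.flagged_by.append(policy.name)
    return decision

# "Back-testing" here simply means replaying the policies over historical scores
# before enabling them on live content.
policies = [Policy("hate_speech", 0.8), Policy("misinformation", 0.9)]
historical_scores = {
    "post-1": {"hate_speech": 0.92},
    "post-2": {"misinformation": 0.45},
}
for cid, scores in historical_scores.items():
    print(moderate(cid, scores, policies))
```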

Checkstep does in-house AI training and offers curated models to its users. "After we have trained the model, we issue a detailed report visible in the Checkstep UI with the summary of the model, i.e., performance metrics on an evaluation set, which can be chosen by the client, summary of the data used, possible biases and other important model details into a 'model card'," Bouchard reveals. Their system also gathers moderator feedback, which is used in model updates.
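
As a rough illustration of what such a report might contain, the sketch below shows a hypothetical "model card" structure in Python. The fields and values are invented for the example and are not taken from Checkstep's UI.

```python
# Hypothetical "model card" summary; field names and values are invented
# for illustration and are not Checkstep's actual report format.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    evaluation_set: str       # the evaluation data can be chosen by the client
    precision: float          # performance metrics on that evaluation set
    recall: float
    training_data_summary: str
    known_biases: list        # possible biases surfaced during training

card = ModelCard(
    model_name="toxicity-classifier-v2",
    evaluation_set="client-sample-2024",
    precision=0.91,
    recall=0.87,
    training_data_summary="1.2M labelled comments, English and French",
    known_biases=["over-flags quoted slurs", "under-flags coded language"],
)
print(card)
```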

Speaking on challenges, Bouchard notes that the main one is addressing the diversity of types of harm. In content moderation this is especially difficult since, for example, hate speech alone can be divided into ten different categories. The company tackles such issues while trying to ensure there is no AI bias, as training a system to understand language nuances and context can be quite challenging. "At the same time, we need to train an AI system that actively promotes healthy conversations without censoring or limiting individuals' free expression," Bouchard adds.

The competition, future and more

Checkstep faces direct competition from companies that have been developing in-house trust and safety systems. However, those rivals usually run into problems doing it all by themselves, and that's where the startup comes in. Checkstep is touted to aid such platforms by developing specialised models and enabling them to deploy their own models within its system.

"Other moderation companies are also focused on subsets of harms, for example, misinformation or terrorist content, while others provide child safety," notes Bouchard. "Checkstep offers a full set of solutions to tackle the whole range of online harms, even enabling multi-policy flagging."

Talking about the future of content moderation, Bouchard says that while it is currently focused on removing toxic content and other types of harm, it needs to evolve further. He adds that it needs to become "one of the essential tools protecting democracy." He notes that content moderation can be seen as censoring free speech, but it is really about removing the voices of bad-faith actors who purposely disrupt conversations, which in turn increases freedom of speech for the wider population.

“Still, it’s critically important to balance content actioning, i.e., banning without biases but also doing it in a way that cannot be confused with any form of censorship. For example, making sure people have the right to appeal and simplifying the process for everyone,” Bouchard concludes.

Checkstep currently has 20 people working day to day, with locations in London, Sofia and, soon, the USA. The company is currently hiring in engineering and machine learning operations, and plans to grow its sales and marketing team by the end of the year.

