The rise of mis- and disinformation is being countered by a mix of automated technology, third-party fact-checking and human review. But is this blend of human and machine working? What can we do to stop false narratives and bad actors earlier in the information cycle? And what are the effects if these processes become — as many would like — completely free of human oversight?
Questions for discussion could include:
What are the biggest risks in combining humans and AI, on both sides?
What tasks can AI do on its own, and what will always need human intervention (e.g. building training datasets, rating the potential harm of social media posts, triaging reports of abusive behaviour)?
What potential does greater use of structured data hold for combating false narratives?
Is fact-checking working to the extent that we'd like, or is it a sticking plaster?
Organised in association with Kinzen.