Sir Robert Buckland KC is a former Secretary of State for Justice, Lord Chancellor, and Secretary of State for Wales.
The first of six laws outlined by technological historian Melvin Kranzberg in a seminal 1986 article was that “technology is neither good nor bad; nor is it neutral.”
Kranzberg was writing in an age of technological change, with the mass production of personal computers and the World Wide Web only a few short years away, when questions were being posed about where people would fit into an economy and society increasingly dominated by machines. With the rapid development of AI in recent years, policymakers, businesses, and the public are once again asking themselves those same questions.
One of the most profound effects of AI is its impact on public trust. From audio of opposition leaders appearing to discuss rigging the Slovakian election to the current occupant of the Oval Office seeming to encourage New Hampshire voters to stay at home, the potential electoral impact of deepfakes is plain for all to see.
Many such fakes may be detectable by the observant bystander. However, the proliferation of open-source AI models and the rapid advancement of generative AI mean that the deepfakes of tomorrow will not be. As Sam Altman, the CEO of OpenAI, warned at a Congressional hearing, the technology is becoming so advanced that it will soon be able to spread highly targeted disinformation.
When one considers potential foreign interference in recent referenda in the UK, the 2016 US presidential election, or China’s repeated efforts to discredit the UN’s reports on human rights in Xinjiang, the prospect of interference from hostile states and organisations is a sobering one. Soon, they will have capabilities that, only a decade ago, they could not even have dreamt of.
Thankfully, many Western governments and tech companies are taking action. Last summer, the key AI players – Meta, Google, and OpenAI – all committed to allowing independent security experts to test their new systems. Moreover, they continue to develop tools to alert the public to AI-generated content. Google’s new tool, SynthID, is just one such promising example: it embeds a manipulation-resistant digital ‘watermark’ into images. At the same time, the Adobe-led ‘Content Authenticity Initiative’ allows media consumers to verify whether content is AI-generated.
From a legislative perspective, the 2022 Elections Act gave the UK Electoral Commission the power to ensure online election advertising has digital imprints, meaning that information regarding the advert’s funding or the identity of its creator will be easily accessible. The 2023 National Security Act has also made it illegal to act on behalf of a foreign power in devising or disseminating AI deepfakes to influence an election.
Despite these positive steps taken by the Government and tech companies, we should remain vigilant. Apps such as HeyGen have made it simple for anyone with a few images and a brief sample of a person’s voice to turn a short script into a convincing audio-visual deepfake. Proving that content has been manipulated, particularly audio content, often remains challenging. Fact-checkers are not used to assessing potentially AI-generated material, and training AI to detect AI-generated content will leave us in a perpetual race between AI’s ability to generate lifelike content and its ability to detect it.
A partnership between the Government and the private companies operating in this space will be key to tackling deepfakes, and to regulating AI fairly and effectively more generally. But even that still leaves a great deal of power in the hands of some unreliable actors. Recent changes to X’s algorithm allow state-backed disinformation campaigns, such as those run by Iran, to go unlabelled, and have led to widespread false information and ill-informed comment spreading on social media.
For all the sincere attempts by governments to tackle deepfakes, cynical political opportunists can undermine any amount of good work. Indeed, malicious actors will seek to benefit from the ‘liar’s dividend’: once disinformation and deepfakes become commonplace enough, nothing – including legitimate content – will seem believable.
To tackle such opportunism and prevent wider public cynicism, the Government must partner with businesses and wider society to ensure transparency and buttress trust in responsible information sources.
To achieve this, the Government should work with Ofcom to accelerate the formation of its advisory committee on misinformation and disinformation, as set out in the Online Safety Act. In addition, the UK Government should follow the lead of existing practice in Scotland and expand the digital imprints scheme, requiring all election-related content circulated online to carry an imprint, unless it is an individual expressing a personal opinion.
In our divisive political age, those in positions of power must not seek to benefit from the liar’s dividend: objective truth still exists online, and programmes such as the Content Authenticity Initiative and the similar Microsoft- and BBC-led ‘Project Origin’ are examples of non-governmental organisations working to protect it.
We should all be questioning the content we see online – though not simply disbelieving it by default, since conspiratorial thinking is never far from indiscriminate doubt. Rather, we should take the time to pause when viewing a video or looking at an image, especially in times of heightened tension or insecurity, consider the content’s source, and verify it through a trusted, fully transparent alternative when necessary.
Kranzberg’s sixth and final law was that “technology is a very human activity” – an activity by its very nature incapable of being a neutral influence on our lives. If the AI revolution, like the technological revolutions before it, cannot be neutral in human hands, we must work together to ensure it is used for good rather than ill.
This article appears in Bright Blue’s Centre Write Magazine.
The post Robert Buckland: On AI, governments and businesses must work together to maintain trust and accountability appeared first on Conservative Home.