While we are witnessing a great deal of enthusiasm about Artificial Intelligence (AI) technology, we are also seeing widespread trepidation from all corners of society, especially about the far-reaching adverse side-effects of fast-paced AI development. Civil society, minority groups, worker unions, academics, policy makers, and even voices from within the tech industry itself have expressed worries about the direction of AI development, its pace, and its real and potential risks. As people from all walks of life are already directly and indirectly experiencing some of these adverse effects, policy makers have rushed to pass regulation to ensure people’s safety while also trying to ensure that AI’s benefits are harnessed.
The European AI Act stands out as a clear illustration of how the EU has moved to regulate AI with the aim to “ensure that AI works for people and is a force for good in society”. We have already written an extensive article on the topic, covering the details and what the Act means for organizations that want to provide GenAI applications and models, which you can find here.
Not just within the EU, but worldwide, governments are moving to regulate AI: the Canadian Artificial Intelligence and Data Act (AIDA), Australia’s AI Balancing Act, and the Chinese government’s rules for generative AI are among the first examples that come to mind. Even the US government has recently taken steps towards regulating AI, albeit with a voluntary-commitment approach that is likely to be toughened in the years ahead, given emerging criticism of its lack of safety and accountability guarantees.
So, what are the specific risks driving such regulatory developments around AI?
To answer that, we will unpack and elaborate on some of the specifics, most notably around so-called generative AI technology. This whitepaper largely focuses on the security implications, from both a private and a public perspective, to illustrate some of the risks. Notwithstanding this narrow focus, it is important to note that security considerations are far from the only risks of this type of technology, or of AI in general, but they are a good entry point into the AI-risk landscape.