Africa’s push to regulate AI starts now

AI is expanding across the continent and new policies are taking shape. But poor digital infrastructure and regulatory bottlenecks could slow adoption.

In the Zanzibar archipelago of Tanzania, rural farmers are using an AI-assisted app called Nuru that works in their native language of Swahili to detect a devastating cassava disease before it spreads. In South Africa, computer scientists have built machine learning models to analyze the impact of racial segregation in housing. And in Nairobi, Kenya, AI classifies images from thousands of surveillance cameras perched on lampposts in the bustling city’s center.

The projected benefit of AI adoption on Africa’s economy is tantalizing. Estimates suggest that four African countries alone—Nigeria, Ghana, Kenya, and South Africa—could rake in up to $136 billion worth of economic benefits by 2030 if businesses there begin using more AI tools.

Now, the African Union—made up of 55 member nations—is preparing an ambitious AI policy that envisions an Africa-centric path for the development and regulation of this emerging technology. But debates on when AI regulation is warranted and concerns about stifling innovation could pose a roadblock, while a lack of AI infrastructure could hold back the technology’s adoption.

“We’re seeing a growth of AI in the continent; it’s really important there be set rules in place to govern these technologies,” says Chinasa T. Okolo, a fellow in the Center for Technology Innovation at Brookings, whose research focuses on AI governance and policy development in Africa.

Some African countries have already begun to formulate their own legal and policy frameworks for AI. Seven have developed national AI policies and strategies, which are currently at different stages of implementation.

On February 29, the African Union Development Agency published a policy draft that lays out a blueprint of AI regulations for African nations. The draft includes recommendations for industry-specific codes and practices, standards and certification bodies to assess and benchmark AI systems, regulatory sandboxes for safe testing of AI, and the establishment of national AI councils to oversee and monitor responsible deployment of AI.

The heads of African governments are expected to eventually endorse the continental AI strategy, but not until February 2025, when they meet next at the AU’s annual summit in Addis Ababa, Ethiopia. Countries with no existing AI policies or regulations would then use this framework to develop their own national strategies, while those that already have them will be encouraged to review and align their policies with the AU’s.

Elsewhere, major AI laws and policies are also taking shape. This week, the European Union passed the AI Act, set to become the world’s first comprehensive AI law. In October, the United States issued an executive order on AI. And the Chinese government is eyeing a sweeping AI law similar to the EU’s, while also setting rules that target specific AI products as they’re developed.

If African countries don’t develop their own regulatory frameworks that protect citizens from the technology’s misuse, some experts worry that Africans will face social harms, including bias that could exacerbate inequalities. And if these countries don’t also find a way to harness AI’s benefits, others fear these economies could be left behind.

“We want to be standard makers”

Some African researchers think it’s too early to be thinking about AI regulation. The industry is still nascent there due to the high cost of building data infrastructure, limited internet access, a lack of funding, and a dearth of powerful computers needed to train AI models. Access to quality training data is also a problem, since African data is largely concentrated in the hands of companies outside the continent.

In February, just before the AU’s AI policy draft came out, Shikoh Gitau, a computer scientist who started the Nairobi-based AI research lab Qubit Hub, published a paper arguing that Africa should prioritize the development of an AI industry before trying to regulate the technology.

“If we start by regulating, we’re not going to figure out the innovations and opportunities that exist for Africa,” says David Lemayian, a software engineer and one of the paper’s co-authors.

Okolo, who consulted on the AU-AI draft policy, disagrees. Africa should be proactive in developing regulations, Okolo says. She suggests that African countries reform existing laws, such as data privacy and digital governance policies, to address AI.

But Gitau is concerned that a hasty approach to regulating AI could hinder adoption of the technology. And she says it’s critical to build homegrown AI, with applications tailored for Africans, to harness the technology for economic growth.

“Before we put regulations [in place], we need to do the hard work of understanding the full spectrum of the technology and invest in building the African AI ecosystem,” she says.

More than 50 countries and the EU have AI strategies in place, and more than 700 AI policy initiatives have been implemented since 2017, according to the Organisation for Economic Co-operation and Development’s AI Policy Observatory. But only five of those initiatives are from Africa and none of the OECD’s 38 member countries are African.

Africa’s voices and perspectives have largely been absent from global discussions on AI governance and regulation, says Melody Musoni, a policy and digital governance expert at ECDPM, an independent policy think tank in Brussels.

“We must contribute our perspectives and own our regulatory frameworks,” says Musoni. “We want to be standard makers, not standard takers.”

Nyalleng Moorosi, a specialist in ethics and fairness in machine learning who is based in Hlotse, Lesotho, and works at the Distributed AI Research Institute, says that some African countries are already seeing labor exploitation by AI companies. This includes poor wages and a lack of psychological support for data labelers, who are largely from low-income countries but work for big tech companies. She argues regulation is needed to prevent that, and to protect communities against misuse by both large corporations and authoritarian governments.

In Libya, autonomous lethal weapons systems have already been used in fighting, and in Zimbabwe, a controversial, military-driven national facial-recognition scheme has raised concerns over the technology’s alleged use as a surveillance tool by the government. The draft AU-AI policy didn’t explicitly address the use of AI by African governments for national security interests, but it acknowledges that AI could pose serious risks.

Barbara Glover, program officer for an African Union group that works on policies for emerging technologies, points out that the policy draft recommends that African countries invest in digital and data infrastructure, and collaborate with the private sector to build investment funds to support AI startups and innovation hubs on the continent.

Unlike the EU, the AU lacks the power to enforce sweeping policies and laws across its member states. Even if the draft AI strategy wins the endorsement of parliamentarians at the AU’s assembly next February, African nations must then implement the continental strategy through national AI policies and laws.

Meanwhile, tools powered by machine learning will continue to be deployed, raising ethical questions and regulatory needs and posing a challenge for policymakers across the continent.

Moorosi says Africa must develop a model for local AI regulation and governance that balances localized risks and rewards. “If it works with people and works for people, then it has to be regulated,” she says.
