
A brief overview of AI use in West Asia and North Africa

In general, using Artificial Intelligence technologies almost always allows for greater surveillance of individuals, and therefore, is especially concerning in authoritarian regimes.

Originally published on Global Voices

Image courtesy Ameya Nagarajan

This piece was first published by SMEX on May 26, 2023, and was written by Sarah Culper. An abridged version is republished here under a content-sharing agreement.

Governments and the private sector in West Asia and North Africa (WANA) are calling for the employment of AI to strengthen their economies and streamline their services. The frameworks for using AI vary by country, but, overall, most countries have either no regulations or only soft, non-binding recommendations.

Almost every Arabic-speaking country in WANA has created an AI strategy with very little thought given to regulation. Instead, most strategies emphasize the concern that regulation could stifle innovation. This has been a common argument in AI regulation around the world, and it has led to dangerous uses of data and harm to individuals.

In general, most countries in the region are in the preliminary phase of AI use and are hoping to encourage its use to stimulate the economy.

AI is a huge focus in the Gulf region, largely in the business sector rather than the public sector, though many governments aim to eventually deploy it in almost every sector. The main reason behind investing in AI is to diversify away from oil-dependent economies. The UAE is the leader in terms of implementation, including public sector use.

The use of AI in the public sector generally causes the most concern, as public sector bodies often deal with very vulnerable populations and offer essential services.

At the crux of AI is data, and, as with other technologies, a huge problem is how data is used and the lack of effective privacy protections. It is difficult to assess these programs’ privacy protections because their proprietary nature makes it hard to see how data is being used.

Furthermore, accountability is harder to determine with AI. Liability for AI-driven decisions needs to be stated clearly in the regulatory framework developed by each country, but it generally goes unmentioned.

Concerns about AI development and data protection in MENA

No country in the region has developed a safe and secure AI framework, and any moves toward one always defer to developers’ desire to innovate without restrictions.

Although these countries do have data protection laws, those laws usually contain stipulations that allow the government to access data it requests, making the protections much easier to bypass when the state is the one asking. If the data becomes necessary for a contract or business deal, for example, it can then be processed.

The partnerships that Saudi Arabia and the UAE have formed with various Chinese organizations are a concern, as it is unclear whether data sharing is taking place, and they signal a move towards developing technologies to monitor populations extensively.

The UAE is using AI in multiple sectors in ways that can have a negative impact on people’s liberties. The use of predictive policing and cameras is especially concerning. This technology could be used to harm LGBTQ+ people, a persecuted community in the region, as well as activists. AI systems used by governments can lead to discrimination and harmful outcomes when law enforcement relies on them to profile, target, or predict future law violations.

Healthcare seems to be the main focus for implementing AI in this region. As of now, in hospitals at least, AI is mostly used to help speed up processes and assessments (for example with MRI scans), rather than relying on autonomous devices that are not supervised by human experts. 

With healthcare, the main concern is the security of the data and who has access to it. Can the government look at data from government hospitals beyond what is needed for diagnosis?

A significant concern is that as the technology develops and starts making recommendations to doctors, it can lead to compliance and automation bias, which can significantly impact people’s health. At a bare minimum, false positive and false negative rates must be published, and doctors must be trained to second-guess the technology.

Smart cities are being developed in several Gulf countries, and their development is raising many issues. Firstly, smart cities are highly susceptible to hacking that could render an area unusable, so one way to increase engagement is to determine what cybersecurity measures are in place and how secure they are.

If an entire city is controlled by software, that software needs to be extremely robust, regularly updated, and extensively monitored. Data breaches can become an even more significant issue than they already are if people's whole lives are embedded within a single system.

Furthermore, many proposed technologies to “streamline” and “anticipate your every need” require extensive data monitoring, meaning every resident will be under excessive surveillance. 

In general, using these technologies almost always allows for greater surveillance of individuals, and therefore, is especially concerning in authoritarian regimes.

The use of AI in the private sector gives rise to surveillance capitalism, whereby privacy is traded for “convenience.” In reality, people's behavior is highly influenced and modified by outside forces, making it predictable and easily anticipated by AI.

In the private sector, the use of driverless taxis and human-free shops and grocery stores (with no contact and no checkouts) reduces possibilities for employment. Furthermore, these kinds of shops all require apps to function and are equipped with extensive surveillance capabilities. If these become the norm, people will have no choice but to opt in to these measures. 

Engagement with AI is difficult in any region because of the ability to invoke trade secrecy and governments’ lack of AI-focused frameworks. But the absence of even minor accountability frameworks makes accessing information much more difficult in the WANA region. Although each country has made statements aspiring to ethical and accountable AI, there is no concrete legislation to achieve this.

