Is Huawei Safe City Safe for African Cities?

Racing towards global dominance

African countries are expected to play a role in determining the winner of the current race for global dominance in artificial intelligence (AI) between American and Chinese companies, according to industry analysts.

If those forecasts are correct, American businesses are currently lagging behind their Chinese counterparts. The United States has been slow to explore Africa's artificial intelligence potential. There have been some exciting developments, such as the opening of a Google AI lab in Ghana and of IBM Research offices in Kenya and South Africa, but large-scale American AI projects on the continent have been rare, giving Chinese companies a significant competitive advantage.

Safe City, the flagship public safety solution of the Chinese technology company Huawei, was developed in 2015. Safe City is, in a sense, Big Brother-as-a-Service: it provides local authorities with law enforcement tools such as video artificial intelligence and facial recognition. Since its inception, Safe City has grown at a rapid pace; by 2019, there were 12 programs spread across the continent, according to the Center for Strategic and International Studies (CSIS). Some of those Safe Cities have reportedly been successful. Huawei claims that its deployment in Nairobi, Kenya, reduced crime by 46 percent from the previous year.

Safe City, however, has its detractors, and not everyone is impressed with the program. Critics have raised concerns about state surveillance, privacy, and digital authoritarianism, among other things. Beyond that, there is little information available about the actual efficacy of Safe City and the similar surveillance solutions currently deployed in Africa, partly because there is not much to document to begin with. One significant difference between the artificial intelligence communities of the United States and China is that neither the Chinese government nor Chinese companies are transparent about the error rates of their facial recognition systems. That is unquestionably a source of concern.


A history of cross-race identification bias

In January 2020, Robert Julian-Borchak Williams was arrested in Michigan, USA, for a crime he did not commit, and was released only after posting bond. At first, Williams thought the phone call from officers of the Detroit Police Department inviting him to the station for questioning was a prank. He had no idea he was about to earn an unenviable place in the history of facial recognition-enabled law enforcement.

In 2018, timepieces valued at $3,800 were allegedly stolen from Shinola, an upscale boutique in Detroit. Grainy surveillance footage showed the perpetrator running away: a lanky man, apparently Black, who bore a resemblance to Williams. Police officers detained Williams because they believed he was the person in the footage. When asked directly if he was the one, Williams responded emphatically: "No, this is not me. Do you believe that all Black men have the same appearance?"

Williams was referring to the phenomenon known as cross-race identification bias, which occurs when people of one race have difficulty distinguishing the facial features of individuals of another race. It is not a bias against any particular race, but in the United States it disproportionately affects minorities. According to a 2017 study by the National Registry of Exonerations, African Americans accounted for the majority of innocent defendants exonerated in the 28 years prior to the study. The study also found that eyewitness misidentification in cross-racial crimes was a significant contributing factor to wrongful arrests.

Some of the racial biases that have plagued law enforcement over the years have unfortunately found their way into facial recognition technology, and Robert Julian-Borchak Williams was the first known person to fall victim to it. This time, however, cross-race identification bias by a human witness was not to blame, but rather a faulty system that had incorrectly matched images of the shoplifter to the photo on Williams's driver's license. Williams was eventually released to his family after it was determined that he had been mistaken for someone else.


Technology Inherits Racial Bias in Law Enforcement

Facial recognition technology (FRT) has been around since the mid-1960s, and the American computer scientist Woodrow Wilson Bledsoe is widely regarded as its founding father. Early versions of facial recognition were useful in only a limited number of situations, but advances in machine learning have since hastened its adoption in a variety of fields, including law enforcement.

However, facial recognition technology is still a work in progress. Studies conducted by the United States government as recently as 2019 found that even the best-performing facial recognition systems misidentified Black people at rates five to ten times higher than white people. Findings like these, combined with rising tensions between the African-American community and the police following the death of George Floyd in 2020, prompted a number of Western technology companies, including IBM, Microsoft, and Amazon, to halt their facial recognition work for law enforcement and public safety purposes. Although the field has advanced significantly, the margin for error when identifying Black faces remains far too large. The quality of facial recognition-assisted law enforcement already suffers significantly in Western countries, where Black people are a minority of the population. In Africa, a continent where Black people constitute 60-70 percent of the population, the potential for harm is even greater.


Identifying and addressing biases in artificial intelligence systems

Biases can infiltrate AI systems in a variety of ways, the most common of which is through the training data. An AI algorithm learns to make decisions by analyzing training data, which frequently reflects historical or social inequities. Facial recognition algorithms, for example, exhibit cross-race bias just as humans do. In one experiment comparing Western and East Asian algorithms, the Western algorithms recognized Caucasian faces more accurately than East Asian faces, while the East Asian algorithms recognized East Asian faces more accurately than Caucasian faces.

Furthermore, facial recognition algorithms need access to large amounts of data to make accurate decisions, and the web is the most convenient place to harvest large quantities of photos of people's faces in a "harmless" manner. Black people, however, are significantly underrepresented in the global internet economy and contribute a comparatively small share of that imagery. As a result of this underrepresentation in the training data, facial recognition systems exhibit comparatively higher error rates on Black faces, as the sketch below illustrates.
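As a rough illustration of this mechanism, consider the following sketch on synthetic "face embeddings." A model trained mostly on one group tends to map the underrepresented group into a tighter region of embedding space, so different people from that group look alike to it. Everything here, the group prototypes, the spreads, and the 0.1 percent operating point, is an illustrative assumption, not a measurement of any real system.

    # A minimal sketch (Python + NumPy) of group-dependent error rates.
    # All distributions and numbers below are hypothetical.
    import numpy as np

    rng = np.random.default_rng(42)
    DIM = 64

    def group_embeddings(n_people, id_spread):
        # One unit-norm "face embedding" per person, clustered around a
        # shared group prototype; id_spread controls how separable the
        # individuals in the group are to the model.
        prototype = rng.standard_normal(DIM)
        embs = prototype + id_spread * rng.standard_normal((n_people, DIM))
        return embs / np.linalg.norm(embs, axis=1, keepdims=True)

    def impostor_scores(embs):
        # Cosine similarities of all different-person pairs.
        sims = embs @ embs.T
        upper = np.triu_indices(len(embs), k=1)
        return sims[upper]

    # Group A (well represented): identities end up spread far apart.
    # Group B (underrepresented): identities collapse toward the prototype.
    group_a = group_embeddings(300, id_spread=1.0)
    group_b = group_embeddings(300, id_spread=0.7)

    # Calibrate one global threshold for a ~0.1% false match rate, but do
    # the calibration on group A only -- as benchmarking on majority data
    # effectively does.
    threshold = np.quantile(impostor_scores(group_a), 0.999)

    for name, embs in (("A", group_a), ("B", group_b)):
        fmr = (impostor_scores(embs) >= threshold).mean()
        print(f"group {name}: false match rate at shared threshold = {fmr:.2%}")

A false match here means the system declares two different people to be the same person, which is precisely the failure mode behind wrongful arrests like Williams's: the group the threshold was never calibrated for bears a far higher false match rate.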

One other factor contributing to the high error rates is a phenomenon that I like to refer to as the "Black photogenicity deficit." Photographic technology is optimized for lighter skin tones, and the digital photography we use today is based on the same principles that guided the development of early film technology. AI systems therefore have difficulty recognizing Black faces in part because modern photography was not designed with the facial characteristics of Black people in mind.

Given these biases, it is difficult to imagine that the error rates of Chinese artificial intelligence systems would be significantly different from those of US systems. Yet the effectiveness of their solutions is treated as a non-issue. Chinese AI companies operating on the continent are under no obligation to disclose their error rates or to halt their surveillance operations, and they continue to press facial recognition-assisted law enforcement on a continent where the technology is more likely than anywhere else to lead to wrongful arrests and convictions. That is not reassuring, and it raises the question of how many Robert Julian-Borchak Williamses existed in Africa before there was a Robert Julian-Borchak Williams in the United States.
