Will SEPTA’s new artificial intelligence security system racially profile riders?

Some of the analytical features used by the ZeroEyes algorithm could be inherently related to race and socioeconomic status, including skin color and clothing style.

SEPTA riders wait for a subway train at the City Hall station in September 2021. Next month, SEPTA will roll out an artificial-intelligence gun detection program on the Market-Frankford and Broad Street lines. (JOSE F. MORENO / Staff Photographer)

SEPTA’s security system is getting a makeover within the next two months. The agency recently announced that starting in January 2023, artificial intelligence software called ZeroEyes will begin scanning surveillance footage at 300 Philadelphia transit stops to detect the presence of guns. If a firearm is detected, ZeroEyes will trigger an alert to trained security specialists, who then request police dispatch.

With gun-related incidents rising in Philadelphia, ZeroEyes is a promising tool for preventing gun crime through early police intervention. However, as a research group that studies bias in AI, we know there is a sordid history of image recognition AI perpetuating racial bias in criminal justice. ZeroEyes carries these same risks. Before implementing the program, SEPTA and other Philadelphia agencies can take proactive, proven steps to minimize the risk of racial bias.

ZeroEyes recognizes patterns associated with guns rather than the gun itself, scanning for cues such as the shape and color of an object. However, some of the features it relies on could be inherently related to race and socioeconomic status, such as skin color, clothing style, and the zip code of the station.

Unfortunately, it’s nearly impossible for developers to remove all potentially biased features from an AI model. A lack of racial or ethnic diversity in the populations used to train the algorithm could leave it unable to distinguish between guns, cell phones, and other small objects held in the hands of people of color, “baking” this presumption into the algorithm.

AI algorithms like these have perpetuated racial bias before. In 2016, journalists at ProPublica reported that COMPAS, an algorithm widely used by judges to predict a defendant’s likelihood of recidivism, incorrectly labeled Black defendants as “high-risk” at nearly twice the rate of white defendants. When digital cameras became widely available, some image-recognition features flagged Asian people as perpetually blinking. Algorithmic bias is common in other fields, too, including mortgage lending, speech recognition, and our lab’s field, health care.

Algorithmic bias in criminal justice has far-reaching consequences. Prejudiced algorithms can overdirect police into neighborhoods where Black and brown people live, which has been associated with reduced health equity and adverse health outcomes for people of color. In 2020, police in Detroit used an AI facial recognition algorithm that prompted the arrest of an innocent Black Detroit resident, Robert Williams, for a 2018 shoplifting case. When he was brought in for questioning, police laid his driver’s license photo next to stills from the security cameras, thinking they had caught him red-handed. He responded, “That’s not me … I hope you don’t think all Black people look alike.”

How can we prevent ZeroEyes from introducing algorithmic bias against SEPTA riders?

First, before deployment, SEPTA should run ZeroEyes “silently,” without tying alerts to police notification, to determine whether it perpetuates bias. This would let SEPTA officials see whom ZeroEyes flags and how often those flags are false alarms. SEPTA could then revise the algorithm if it disproportionately misidentifies Black riders.
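To make that audit concrete, here is a minimal sketch, in Python, of how alerts from a silent run could be summarized by demographic group. ZeroEyes does not expose any such interface; the log fields below (group, flagged, gun_present) are hypothetical stand-ins for whatever reviewers would actually record during a silent run.

```python
# Illustrative sketch only: the audit-log format is an assumption, not part
# of ZeroEyes' actual product or API.
from collections import defaultdict

def false_positive_rates(audit_log):
    """Share of flags that were false alarms, per demographic group.

    audit_log: iterable of dicts like
        {"group": "Black", "flagged": True, "gun_present": False}
    where "gun_present" reflects a human reviewer's judgment.
    """
    flags = defaultdict(int)
    false_alarms = defaultdict(int)
    for record in audit_log:
        if record["flagged"]:
            flags[record["group"]] += 1
            if not record["gun_present"]:
                false_alarms[record["group"]] += 1
    return {g: false_alarms[g] / flags[g] for g in flags if flags[g] > 0}

# Toy example of a silent-run log. If one group's false-alarm rate is
# substantially higher than the others', that is evidence of the kind of
# bias SEPTA would need to fix before tying alerts to police dispatch.
log = [
    {"group": "Black", "flagged": True, "gun_present": False},
    {"group": "Black", "flagged": True, "gun_present": True},
    {"group": "white", "flagged": True, "gun_present": True},
]
print(false_positive_rates(log))  # {'Black': 0.5, 'white': 0.0}
```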

Second, a trained team should carefully review all flagged images before requesting police dispatch. The racial and ethnic makeup of this team should reflect Philadelphia’s population. The extra seconds it takes to verify an image could help ensure that police do not respond to a “false positive.”

Third, an independent team should evaluate the population that was used to train the ZeroEyes algorithm to ensure it adequately represents people of color. Oversight bodies such as the FDA have released standards for adequate representation in the data used to train medical AI.

Finally, SEPTA should measure ZeroEyes’ success by whether crime rates fall, not by the number of guns it identifies. False positives, such as toy guns or legally carried firearms, would only perpetuate more bias. ZeroEyes’ success should be judged by whether it proactively prevents crime.

With the recent uptick in fatal shootings, there is an urgent need for strategies to reduce gun violence. AI algorithms like ZeroEyes may help. But we must implement new technology carefully so that it does not deepen racial inequality in our city.

Helayne Drell is a master’s student in bioethics and a research coordinator at the Human-Algorithm Collaboration (HAC) Lab at the University of Pennsylvania. Ravi B. Parikh is an oncologist and the director of the HAC Lab. Also contributing to this op-ed were Likhitha Kolla, an M.D.-Ph.D. student at the University of Pennsylvania, and Caleb Hearn, a research coordinator at the HAC Lab.