In the Wild West that is the ongoing artificial intelligence revolution, Cynthia Rudin sees both opportunity and peril.
It’s the peril she wants people to take particularly seriously.
Rudin is a Duke computer scientist and engineering professor who runs the university’s Interpretable Machine Learning Lab. She has studied AI for many years and, with the rise of ChatGPT and other AI-fueled tools, has become a prominent voice warning of the dangers of unregulated technologies.
In a 2023 commentary in The Hill, Rudin likened the rise of AI to a runaway train.
“With little incentive to do good, technology companies don’t appear to care about how their products impact — or even wreck — society,” she wrote. “It seems they make too much money to truly care, so we, the citizens, need to step in and demand regulation. If not, we’re very likely in for a dangerous avalanche of misinformation.”
Rudin has a doctorate from Princeton and held positions at MIT, Columbia and New York University before coming to Duke. Her work has broad practical utility. One project built a system to identify New York City manholes at risk of exploding because of degraded and overloaded electrical circuitry; another developed algorithms to diagnose breast cancer.
In 2022 she received a Guggenheim Fellowship in support of her research advocating for transparent and socially responsible AI.
Rudin has advocated for immediate federal oversight of AI, with particular attention to facial recognition technology and the enforcement of existing antitrust laws.
And she has pushed for transparency and the sharing of data used to develop AI-driven technologies, like the smartwatch on your wrist that tracks your heart rate and can detect an irregular heart rhythm.