How Baylor is leading out when it comes to AI and ethics
“FAT” isn’t always a good thing — but in the world of artificial intelligence (AI), it’s crucial.
The acronym stands for Fairness, Accountability, Transparency — three words at the heart of Baylor’s approach to AI, as BU seeks to lead the way in addressing major ethical questions surrounding AI’s rapid growth and its impact on humanity.
Dr. Pablo Rivas, assistant professor of computer science, is a leader among Baylor professors in examining the ethical questions surrounding emerging technologies. As site director for the Center on Responsible Artificial Intelligence and Governance (CRAIG), a partnership with Ohio State, Rutgers and Northeastern universities, Rivas has a seat at the table in national conversations about AI ethics.
“In many ways, I believe that here at Baylor, the virtues of honesty and justice are part of the educational experience,” says Rivas. “And that’s where Baylor’s faith-based mission really aligns with the future of AI research and development. I see Baylor working with faculty across different disciplines, not only engineering and computer science, but philosophy, theology, psychology, biology, and many, many others, tackling questions that are important on how technology can enhance rather than diminish human dignity.”
The “FAT” acronym, Rivas explains, provides a starting point for examining the complexities of ethical questions about technology. Broad topics such as fairness, accountability and transparency provide pathways for governance and the inclusion of ethical principles.
To that end, Rivas’ work in CRAIG is only part of Baylor’s impact. He also helps lead BU’s efforts in the Baylor Ethics Initiative, a community of scholars on campus who study how Christian beliefs and practices relate to broader cultural, social, economic and political systems. Rivas and Dr. Neil Messer of Baylor’s religion department co-lead the AI and Data Ethics research group within the Initiative, focusing on how data-driven AI technologies affect society at large, particularly in the areas of privacy, fairness and the common good.
“I think the world needs a Baylor that cares about responsible use of technology and works collaboratively to address it,” Rivas says. “As I work with other faculty, it helps that we have shared values and that the institution backs us up in issues like social justice, privacy, or serving people who don’t have the means or understanding to defend themselves amidst growing technologies.”
Sic ’em, Baylor AI researchers!