Artificial Intelligence - NCICU Ethics Bowl
I had the opportunity to speak at the 2024 NCICU Ethics Bowl last Friday. For the past 13 years, the North Carolina Independent Colleges & Universities have been bringing students together from across the state to consider complex ethical issues. This year 18 colleges gathered in Raleigh at the state legislative offices, where teams were judged on their well-formed arguments and quick thinking.
This year’s theme was Ethics in AI and Cybersecurity, two topics that have been dangerously circling each other for the past several years. Every week we’re confronted with dramatic stories of the risks and the potential that advances in technology bring. In addressing these students, I wanted to capture some of those concerns while also offering some paths forward. I’ve split my remarks across two posts; here is the first part, focusing on Generative AI.
We can’t trust our eyes or ears
Last week, news broke of a first-of-its-kind deepfake heist in Hong Kong. An employee joined a video call where he was greeted by the CFO and other co-workers. The email leading up to the meeting and the requests it contained were a bit suspicious, but after some convincing, the employee transferred $25 million to 5 banks. He was the only real person on the call; everyone else was a deepfake, created using images and audio found online.
Impersonation scams such as this one defrauded Americans of $11 million in 2022, using AI to simulate people’s voices. Are we ready for phone calls that sound like loved ones in dire need of help? In one case, a couple sent $15,000 through a bitcoin terminal after an AI-generated voice that sounded like their son “told them that he needed legal fees after being involved in a car accident that killed a US diplomat.”
Education at the forefront
Education is often at the forefront of rapid technological shifts. According to a 2023 ACT survey, 46% of high school students have used an AI tool for school assignments. That number is only going to grow as these tools become more pervasive across our devices and in our lives. Last year ChatGPT became the fastest-growing app in history, reaching 100 million users in 2 months, faster than TikTok (9 months) and Instagram (2.5 years).
I like to look to the past in periods of rapid change. Google was founded in 1998 as I entered middle school, and Wikipedia launched in 2001 as I entered high school. Instant access to all of the world’s knowledge made us pioneers, and classrooms faced a predicament much like the one teachers face today with Generative AI tools. There were valid concerns about students copying and pasting answers, low-quality or incorrect information, and the predicted death of education.
Similar things were said about calculators in the 1960s and ’70s, with predictions of the death of math. Even when I was in school, teachers would encourage memorization by asking, “Do you think you’re going to carry a calculator around in your pocket for the rest of your life?” It’s now called a smartphone.
Today, college professors teach courses on how to submit updates to Wikipedia. In every case we have found a way to work alongside the machines. In one forward-thinking announcement last month, North Carolina’s superintendent of public instruction, Catherine Truitt, published AI Guidelines for how Generative AI could be embraced by teachers. This shows a recognition that AI literacy is as important for today’s students as computer, Google, and calculator literacy was for previous generations.
Policies, Processes, and Review
As we consider ethics in Artificial Intelligence, we should look to the rules and policies that have already been developed; we just need to connect the dots. The University of Wisconsin has published policies for Generative AI by reaffirming existing rules surrounding privacy, personally identifiable information, security, intellectual property, harassment, discrimination, and more. The state of Ohio has declared that transparent testing and formal review processes are critical for any Generative AI deployment. Many other states and institutions are taking similar steps to make sure we approach these new technologies with the proper framework.
In software development we aim to ship quality code. One way we do that is through code reviews and testing: everything we send to a production environment goes through a code review and passes a series of test cases, ensuring all code has another pair of eyes on it. Nobody is above review: not interns, not senior developers, and especially not the robots.
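As a small illustration of that principle, here is a minimal Python sketch (a hypothetical example, not from the talk; `calculate_discount` is a made-up helper): a function and the pytest-style tests it must pass before merging, regardless of whether a person or a model wrote it.

```python
# test_discount.py -- a hypothetical example. The same test suite runs
# whether calculate_discount was written by an intern, a senior
# developer, or a code-generation model: nobody is above review.
import pytest


def calculate_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_applies_discount():
    # 25% off $100.00 should come to $75.00.
    assert calculate_discount(100.0, 25.0) == 75.0


def test_rejects_invalid_percent():
    # Out-of-range inputs must fail loudly, not pass through silently.
    with pytest.raises(ValueError):
        calculate_discount(100.0, 150.0)
```

Running `pytest test_discount.py` gates the change the same way for every author, human or machine.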
AI is a lot like gasoline: if you have open flames (bad processes) burning inside your organization, it will accelerate those flames and you may get an explosion. Given a well-oiled internal combustion engine, you can put it to use and go a long way. As with any new technology, we are finding the places it fits. We can look to history and be mindful as we take each step forward.