Editorials featured in the Forum section are solely the opinions of their individual authors.

Courtesy of Wikimedia Commons via user OpenAI
ChatGPT, a large language model developed by OpenAI, has spurred a new conversation about the ethics of generative AI tools.

The ethics of science is a topic that deserves to be treated with the utmost seriousness; it provides a backbone for our research and goals.

It’s also the only thing we talk about. I’m serious: look at the talks given at Carnegie Mellon. The number of times we discuss the ethics of whatever new flavor of AI has been rolled out this week is staggering. We’re in an endless cycle of “new tech, ethics, new tech, ethics,” and it means that so much science communication these days is just, “Is it okay to do this?”

It’s important to think about whether or not we should do these things, because we can do a lot and we shouldn’t do most of it. However, that conversation sometimes distracts from the actual discussion and advancement of science, not because ethics talks stop people from doing science, but because they take up so much of the airtime.

Imagine you’re someone just getting into science (a kid, let’s say) trying to figure out what exactly the weird “S” in their STEM is. You look to the news and see plenty of articles about the ethics of some new discovery, and very few talking about the design and creation of the discovery itself. We’re not talking about OpenAI’s Sora for its potential and what we can do with it; the opening conversation has always been, “Man, this is terrifying,” “This is ‘Black Mirror,’” and the like.

So what does that mean for science communication? Well, it’s hard to explain science to people. It’s especially hard to explain science to a layperson when you’re deep in the weeds of some complex jargon and realize the average person still thinks “effect” and “affect” are the same word.

It’s this barrier — and the massive gap between what we can say to the public about discoveries and the conversations that need to be happening — that hurts people’s ability to critically engage with new science. Right now, everyone wants to hop on the AI victory tour, and while that’s great, it’s essentially turning many fields into prodding ChatGPT with a stick until it does something cool. In fact, a ton of AI research at this point is going into, well, just more large language model (LLM) research, because that’s the only thing in the news.

Here’s the cycle: an LLM thing happens, people start paying attention, people talk about the ethics, and then sponsors and monied interests (read: venture capitalists) start thinking, “Hey, there’s a lot of interest in this whole AI thing, we should throw some money at it,” and the cycle of science communication continues.

This isn’t new. The dot-com bubble operated on the same principles, as did the telecom bubble. Both bubbles were fueled by people giving money to small companies because everyone else was doing so, not because the fundamental work those companies were doing was particularly groundbreaking.

This problem, and this disconnect, becomes worse when the discussion goes from “Look how much money I can make with this tech” to “Let’s have a philosophical discussion on the nature of ethics and AI,” because it completely changes the narrative while still pushing the actual science to the side. Nobody actually cares about OpenAI; they only care about the moral and ethical implications of that new deepfake tech, because that’s the only thing experts seem to talk about.

Science communication isn’t easy, and it’s definitely not easy to talk about ethics. However, we cannot let ethics be the only lens through which scientific communication reaches the average person. If we do, we not only risk pushing people out of science, but we also risk giving the layperson a flawed understanding that could be devastating in the future.
