
Case Study: Artificial Intelligence

  • alexanderrpreston7
  • Sep 20, 2024
  • 5 min read

By Al Preston

 

From the moment ChatGPT was introduced, there have been a lot of questions about Artificial Intelligence (AI). OpenAI, the company that made ChatGPT and numerous other free-to-use AI programs, has promoted its programs both as a step towards true artificial intelligence and as the newest digital tool. ChatGPT learns from the internet and from every user who interacts with it, while art AIs create new images from image searches on websites like Google.

Since OpenAI released their products, the internet and the world at large have had to come to terms with what AI means for us. Programs like ChatGPT have existed for a long time. Some of us may remember silly little chat games where you ‘talked to a robot,’ or even Akinator, the genie who can guess who you’re thinking of with just a few questions.

Back then, they seemed like silly little games, but they were the basis of AIs like ChatGPT. Deepfakes, videos made by AIs with a little human helping hand, have been a troubling development as well. People can make eerily lifelike videos of just about anyone with just images from the internet. They can also sound nearly identical to the person they’re impersonating.

With free AI programs, anyone can make anything. On the internet, where laws and rules are already pretty skewed, what does the ability to use AI mean? Some people have used it to create thoughtless art pieces to sell, the AI systems they used producing eerie images where people have multiple limbs or share the same face. Books have been written entirely by AI, with little input from humans beyond prompting it for more material.

There are a lot of questions to be raised about using AI as it is, without human intervention. However, what is there to say about AI as a tool? Can educators, like historians and museum professionals, utilize AI to make their lives just a little easier?

The fact of the matter is, AI is here, and it will only get better. It is something all of us must now come to terms with. So how can we use it to our advantage and gain not only a grasp of how it works, but also some control over the situation?

To start, we have to interact with AIs like ChatGPT to see what they do when we ask very specific questions. In this article from the Oral History Review, a professor of mine and her previous Oral History class asked ChatGPT to provide them with an oral history, effectively asking it to create something for them. It’s a fascinating read and shows that ChatGPT eventually will just start making things up and throwing random things together, like lyrics from an Earth, Wind & Fire song in an oral history about Hurricane Katrina.

What if we use something like Canva’s AI image generator? I asked it for images of “Pittsburgh bridges,” which seems simple, right? Any of us can type that into Google and get results that are actual pictures of Pittsburgh and its bridges. This is what the AI gave me:

[Image: Canva’s AI-generated “Pittsburgh bridges”]

I couldn’t tell you what’s happening here. Some of the supports of those bridges definitely turned into roads, the reflections on the water aren’t right, and apparently there are three Point Parks and one of them has the remains of a bridge piled up on it.

But this is what AI does. In both of these examples, AI has fabricated something that’s just a little wrong (or a lot). Part of this comes down to the rules that AIs have to follow. OpenAI and Canva didn’t just unleash AIs onto the world without some rules attached.

Among those rules are stipulations against simply grabbing an image from Google and presenting it to the user. Because of copyright law, AI has been made so that it has to create something new from the information it’s given. And that includes giving completely false information about a historic event. AIs are not nearly smart enough right now to sort out what is and is not copyrighted. We as people struggle with that at times!

There are other things it cannot do as well. There are hard stops on AIs that prevent you from asking Canva’s image generator for images of Adolf Hitler. Even typing in “World War Two German leader” won’t give you anything related to Hitler or the Nazis (although it does produce images of a white, blond-haired, blue-eyed woman in a military uniform).

So, what if you don’t ask AI to create anything? What if, instead, you ask it to edit your paper or transcribe an audio file? Well, it actually does those tasks really well.

Here’s an example. I put the first thirty seconds of an oral history with Scott Noxon (more information coming soon) into Whisper, OpenAI’s audio-transcription AI. First is my manual transcription of the audio; after it is Whisper’s, left unedited from how it gave me the transcription:

Manual:

Noxon: At the very end there, it was the police that more or less shut it down. Police were not allowed to go in and watch a bar for traffic coming in and out of it. So, they knew that they--down on to Beaver Avenue. They could sit there, and they were counting cars. They were counting my cars.

Switzer: Sigh.

Noxon: There was two other bars up the street and they weren't worried about them. They were just trying to get in with me and I said;

"Listen. " I said, "I've done everything right here. I don't call you guys for trouble. We don't beat people up under the bridge. We do our own self security kind of thing."

I said, "If there's something here, my off-duty cop will call you."

But I said, "I don't know why you're down here making my bar go to pieces."

"Oh, we're just watching cars."

“No, you're not.”

 Everybody knew they were after the gay bar.

Whisper:

At the very end there, it was the police that just more or less shut it down. The police were not allowed to go in and watch a bar for traffic coming in and out of it. So they knew that they went down onto Beaver Avenue. They could sit there. They were counting cars, but they were counting my cars. There was two other bars up the street, and they weren't worried about them. They were just trying to get in with me, and I said, listen. I said, I've done everything right here i don't call you guys for trouble we don't beat people up and you know under the bridge we don't do our own self uh self-security kind of thing so if there's something here my off-duty cop will call you but i said i don't know why you're down here making my bar go to pieces. Oh, we're just watching cars. No, you're not. Nobody knew they were after the gay bar.

Looks pretty similar, right? ChatGPT can also find grammatical mistakes in your papers, though it won’t fully understand everything you write. Even if you go through and double-check everything AI gives you, that’s a lot less work than doing all of the steps yourself.
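If you do want to double-check an AI transcript against your own, you don’t have to eyeball it word by word. Here’s a small sketch using Python’s standard difflib module; the two one-line strings are stand-ins for the full transcripts above:

```python
import difflib

# Stand-in samples: one line from the manual transcript and Whisper's version.
manual = "Everybody knew they were after the gay bar."
whisper = "Nobody knew they were after the gay bar."

# SequenceMatcher scores how much of the two word sequences line up (0 to 1).
matcher = difflib.SequenceMatcher(None, manual.split(), whisper.split())
print(f"similarity: {matcher.ratio():.0%}")

# unified_diff pinpoints the exact words that differ,
# so you know where to listen back to the audio.
for line in difflib.unified_diff(manual.split(), whisper.split(), lineterm=""):
    print(line)
```

On these samples it flags the one disagreement (“Everybody” vs. “Nobody”), which is exactly the kind of small-but-important error worth catching before an AI transcript goes into an archive.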

When it comes down to it, that’s the important thing to learn from all of this. AI is, as it stands, a helpful tool with human guidance. We can’t rely solely on AI to make things for us; it just isn’t good at that yet. But we can ask it to help us with our own work. Using Whisper lets me turn the time I would spend manually transcribing an oral history into time spent refining and editing what it gives me. It takes eight hours off of a twenty-hour project.

AI can be a good thing, as long as we use caution and put a little bit of our human touch into it.
