A Chat About AI Tech
From chatbots to security identification software, generative AI (artificial intelligence) technology is enjoying its 15 minutes of fame. Headlines range from early adopters hailing the services as revolutionary and life-changing to cautionary tales of the times AI really, really didn’t get it right.
Andrew Hagen, CEL Integrated Communications Coordinator, and Ashley Winter, CEL Content Marketing Coordinator, sat down to discuss using AI in a communications role and the pros and cons of the ever-evolving technology.
Quick Links:
- Gut reactions
- What’s exciting about AI?
- What are your hesitancies about AI?
- Do you use AI programs like ChatGPT?
- Is using AI ethical?
- Tips for establishing AI use in your organization
- Where to start with AI use
Let’s talk AI technology. Gut reactions?
Andrew: Ugh. Do we have to?
Ashley: I love it! My gut reaction? I welcome the technology—the potential is so exciting.
Andrew: It’s exciting, but it’s early technology. It’s fraught with problems.
Ashley: Agreed. There’s a lot of room to grow—but I’m still excited about the technology.
Andrew: It’s not that I hate technology. It’s that it’s just not there yet. I’m not on the leading edge of the innovation adoption curve.
Ashley, what excites you about the technology?
Generative AI is fascinating! Let’s talk about what everybody is talking about: ChatGPT. It’s an artificial intelligence chatbot/language model trained to be conversational. People use it to help develop outlines, pen speeches, write jokes and identify problems in their work. So many use cases! It’s a great brainstorming tool. In some ways, it’s a better Google search—the AI can compile lists, provide ideas and help you develop talking points, all things you can do with an internet search engine—but the AI does it faster.
AI can (and already does) support many industries. Imagine healthcare diagnoses that are faster and more accurate because doctors have AI helping them to quickly crawl a patient’s history. Natural disaster technology has the potential to warn us of danger earlier so we can communicate faster. And I love the fun parts of AI, too—smart home technology that knows what I want before I know I want it? Bring it on.
Andrew, what are your hesitancies about the technology?
Slow your roll, Ash.
First, I think it’s important to acknowledge that AI is impressive. The engineering behind the technology is next level. Think about the fact that even 50 years ago, computers were these unattainable machines that filled entire rooms (see, for example, this ABC interview from 1974). Now, in 2023, the idea that a computer can “paint” a picture of a panda wearing sunglasses, write a sonnet, or answer questions I have for my bank at 2 am is utterly jaw-dropping.
That said, the technology is in its infancy. Even though its roots date back to the Turing Test of the 1950s, AI as we know it today is still new.
There’s also the fear that we let it go too far before we establish regulations around what AI can and cannot do. We need innovation. It’s how we advance as a society. But we can’t let innovation run rampant without an understanding of the risks involved.
We have to be actively involved in its creation and not be afraid to call out when it’s taking a dangerous turn, whether it be ethics, inaccuracies, or intentional misuse.
Do you use AI programs like ChatGPT in your roles as communications and marketing professionals?
Ashley: I think I can speak for both of us when I say that we use it as a tool in the same way we use Google Docs, Microsoft Excel, PowerPoint Designer or Canva. It’s important for us to explore technology tools and understand their pros and cons. All of these tools can help us do some of our standard work faster, but none are replacing our work.
Andrew: Right. ChatGPT writes in the 5-paragraph format: introduction, three main ideas, and a conclusion. Perhaps it’s just the prompts I’m feeding it, but it hasn’t learned how to expand on an idea outside of this format. It’s great for brainstorming and compiling data, but that’s all.
Ashley: That’s a great point. Many of these AI programs are dependent on the quality of the prompt you feed them. You can ask the program to rewrite something in a more heartwarming way or mirror a certain style of writing. ChatGPT can help us be a little faster in wireframing and brainstorming.
However, ChatGPT is also known to make things up. And it’s quite literal. One day I asked it to compile some quotations for me, and it very happily generated and attributed a bunch of entirely made-up quotes. Not so useful, but very amusing. But it’s only going to get better as we learn to use it more thoroughly and thoughtfully. Right now, accuracy is a big drawback, but the tool has a lot of potential.
Let’s get into a big question for a moment: Is using generative AI ethical?
Andrew: It entirely depends on the industry and how it’s being used. As with any tool, it can be used for the betterment of society, but it can just as easily be used maliciously. And even the best intentions can produce dangerous results. As the saying goes, “with great power comes great responsibility.”
Ashley: Absolutely. There are a lot of questions surrounding ethics, and some of them are very industry specific. Using AI to enhance a medical scan is inherently different (and riskier) than using it to enhance an old photograph.
So the answer to the question is really this: it depends. Using AI isn’t inherently unethical, but it’s important to know the full implications of each technology: how it’s built, and what and who it’s learning from. AI learns from people, which means it picks up all of our quirks, our biases, our incorrect assumptions and our inconsistencies, too. Each industry needs to determine the ethical use of AI.
For example, can students use ChatGPT to create their schoolwork? Is it plagiarism to use generative art AI to create a graphic, knowing that it trains on the work of others? There’s a lot of nuance. Collectively, we’re already using AI to support students. Tools like Grammarly are very helpful for students who are learning grammar and for those learning English as a second language.
How can we use additional technology tools to improve the educational experience? EdWeek ran a short series of articles on ChatGPT that are worth the read for teachers and educators — including how to outsmart kids with the work you assign and what to do when kids cheat. Every industry should be thinking ahead like this and debating pros and cons, opening the discussion at large.
Andrew: Here’s another point on which we agree. Is an AI creation a form of plagiarism, or is it creation based on inspiration? Any Renthead knows that Rent was inspired by La Bohème. Shakespeare drew upon The Tragicall Historye of Romeus and Juliet when he wrote what is often called one of the greatest love stories of all time. President Barack Obama quoted Martin Luther King, Jr., who in turn drew on Theodore Parker, saying, “The arc of the moral universe is long, but it bends toward justice.”
Any tips for establishing guidelines on how to use AI within my organization?
Andrew: Ashley, you taught at the college level. What was it like for you when your students started using Wikipedia as a source? I can remember errors were so pervasive that it wasn’t deemed credible. But now it’s the go-to source for everything from the migratory patterns of waterfowl to the personal lives of politicians.
Ashley: Yes, when I was teaching English in the mid-2000s, Wikipedia was generally still a distrusted source of information. Faculty had many discussions about allowing students to cite it as a source, and the overall feeling was that no, it should be banned for students. But our counterargument was that we couldn’t ignore this new, popular tool. We should teach students how to use it responsibly. I believe that’s where generative AI is now. We can’t ignore it. So, we need to be thinking ahead and teach people to use it responsibly and ethically.
Andrew: Are you sure we can’t ignore it?
Ashley: Nice try. Though I’m sure a lot of organizations aren’t ready to dive into creating policies around AI use. But it’s everywhere. It’s already sorting and categorizing our photos (and who doesn’t love the curated memories and videos it creates?). AI is routing our destinations and even driving our cars. It’s building a profile of you and using that information to feed you curated content and advertisements. It really behooves us all to understand AI and the ways we can interact with it, and use it to improve the things we already do.
Andrew: So if you’re not going to let me ignore AI, how do we make sure we’re using it responsibly? How do we limit our use of it so that we don’t end up like the humans in WALL-E?
Ashley: Would that be such a bad thing? …okay, I’m kidding. AI is just a tool. We can use it to help us do our work, help us brainstorm ideas and think about things in new ways. If you’re too busy to toss around ideas with me, I can jump into ChatGPT and ask for some ideas to kickstart my work. Ultimately, the work, the responsibility for the work and the consequences are my own. We’re all ultimately responsible for the content we generate. That’s why we need to understand the tool and explore how to use it both effectively and ethically.
Andrew: Right. There’s a time and a place for using AI-generated content and a time when it’s very inappropriate.
Ashley: We already see plenty of small businesses using technology tools to help them do things they couldn’t afford to do otherwise. For example, a small business can now create some pretty great-looking marketing materials in Canva. They can use ChatGPT to develop branding and marketing slogans. Chatbots can help answer questions on their website or Facebook account when they’re not available to help people. These are all excellent tools.
But they can’t do it all. You need to make sure ChatGPT isn’t outright stealing or making up the content it feeds you. Ask it to source the information or quote an expert; then verify the source. Canva has very functional designs and a helpful AI program (Magic Write), but the art can be very generic and won’t feed the heart and soul of your brand. Chatbots range from ridiculous to offensive to extremely helpful.
Chatbots, ChatGPT, Canva… where should someone start?
Ashley: In this day and age, people expect to find the information they need at their fingertips, or they’re going to move on. For organizations, communications tactics and content should help customers find the information they need quickly — accurate information from an official source. AI can help us create that information more quickly. And the quality of that information, the customer experience, is our responsibility as marketers.
Andrew: For example, website and social media chatbots can be great tools if they run on a limited scope. They need very specific inputs to work and are limited by the information available to them. Often, it’s multiple choice: I need to know what hours the business is open or who to contact with questions about a service, etc. It’s not meant to be a creative process; it’s meant to be a means of answering simple questions. When it’s simple, it works.
We see this all the time with websites. As we audit websites with clients, we find ways to reduce content and streamline information. Why make someone click through five pages when they should be able to find the information on the first click? It takes planning and forethought to set things up correctly, but the payoff is huge. Similarly, AI shouldn’t be adding to your workload and making things more complicated. It should be integrated into your marketing in a way that saves you and the end-user time and improves accuracy.
Ashley: Yes! Listen, I’m a millennial. Do NOT make me pick up the phone to call someone. Give me the ability to schedule my appointment online. Let me ask a chatbot how I get a replacement part. Add your hours to Google, so I know when you’re open at a glance. Use analytics to know what your customers are viewing and using on your website. Design with them in mind. Use all of the technology tools to improve upon what you have.
Andrew: Once again, we agree. But maybe not all of the technology tools?
For those familiar with the personality typing program Insights Discovery, Ashley leads with Red. According to the website, Reds are “action-oriented and always in motion.” Andrew leads with Blue. Blues prefer to “maintain clarity and precision, radiating a desire for analysis.” If it weren’t obvious from their discussion, both communicators value accuracy and authenticity, and each appreciates technology from a different angle.
If you’re interested in brainstorming all the ways AI can support your organization, reach out to Ashley. If you’d like to talk about formulating policies and understanding the limitations of AI technology, Andrew is available to chat.
Published on: March 1, 2023