4 Ethical Considerations When Using Gen AI for Marketing

Machine Learning has been around since the 1950s, which has given it plenty of time to grow into what we now know and love as Generative AI. As Generative AI has become more affordable and accessible, it is no wonder it is being integrated into work processes in every industry, especially Marketing. In-house marketing teams are non-billable, so they tend to be smaller and work with smaller budgets; who wouldn’t want to streamline things with a free AI assistant?

However, as Generative AI gains momentum, it’s crucial to pause and consider its limitations and ethical implications. Today, I explore these challenges in the Marketing space to help avoid legal risks and protect professional integrity.

What Generative AI Is and What It Isn’t

According to IBM, “Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.” Tools like ChatGPT are built with Deep Learning algorithms that let machines learn from large data sets (think essays, books, art, photography, etc.) and create content based on what they have learned.
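
If you’re curious what “generating content” actually looks like in practice, here is a minimal sketch of a marketing-style prompt sent to a text model. It assumes the official OpenAI Python SDK (openai v1+) and an OPENAI_API_KEY in your environment; the model name and prompt are just placeholders for illustration.

```python
# Minimal sketch: asking a generative text model for marketing copy.
# Assumes `pip install openai` (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Draft a two-sentence product blurb for a reusable water bottle."}
    ],
)

print(response.choices[0].message.content)
```

Behind that one short call sits a model trained on the enormous data sets described above, and that is exactly where the ethical questions begin.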

The key ethical concern in Generative AI revolves around its reliance on training data. These AI tools don’t create entirely new content; instead, they generate material by combining elements from vast amounts of copyrighted content, raising significant ethical dilemmas.

The primary caution when using Generative AI is recognizing that its training data often includes copyrighted content. Using such content without permission isn’t just embarrassing; it can be copyright infringement. For example, one of my favourite photographers, Jingna Zhang, is taking Google to court for training its AI Image Generator Imagen on her photos.

While artists are working hard on tools that keep their art out of AI training, anything that has already been used can be difficult to remove.

Machine Learning and Bias: Racism, Sexism, and more

Another concern arises when we consider that Generative AI tools create from their training material: bias in that material carries straight through to the output. TechTarget explains it this way:

Machine learning bias generally stems from problems introduced by the individuals who design and train the machine learning systems. These people could either create algorithms that reflect unintended cognitive biases or real-life prejudices. Or they could introduce biases because they use incomplete, faulty or prejudicial data sets to train and validate the machine learning systems.

TechTarget

This isn’t merely an assumption; recent reports and studies have uncovered clear evidence of bias in Generative AI. For example, a UNESCO report highlighted gender bias, while tools like DALL-E and Stable Diffusion have demonstrated racial bias even with neutral prompts.
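
To see how directly skewed data becomes skewed output, here is a toy illustration — nothing like how production models are actually built — of a naive “predictor” that simply echoes whichever pronoun most often followed a job title in its deliberately lopsided training data:

```python
from collections import Counter

# Toy, deliberately skewed "training data": job titles paired with the
# pronoun that followed them in some imaginary source text.
training_pairs = [
    ("engineer", "he"), ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

def most_likely_pronoun(job: str) -> str:
    """Return the pronoun seen most often after `job` in the training data."""
    counts = Counter(pronoun for title, pronoun in training_pairs if title == job)
    return counts.most_common(1)[0][0]

print(most_likely_pronoun("engineer"))  # -> "he"
print(most_likely_pronoun("nurse"))     # -> "she"
```

The toy model isn’t malicious; it is simply faithful to lopsided data, which is the same problem TechTarget describes at a far larger scale.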

Keeping Company Secrets

Even if you avoid the concerns above, you may want to contact your IT team or CTO before putting company information into an external Generative AI tool. For instance, when I worked at the IT services company Levio, we handled highly confidential content for high-profile clients. Even for something as private as an internal newsletter, we were prohibited from using external AI tools. But why?

It’s crucial to remember that content entered into AI tools is often retained and reused as training data, compromising its privacy. This means that private company information could inadvertently become part of the tool’s learning process.

At Levio, where we had AI expertise in-house, the solution was to train our own private Generative AI tool. However, this approach isn’t feasible for every company, so it’s essential to understand your company’s policies and guidelines before using external AI tools.
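
If your team is cleared to use an external tool, a lightweight pre-submission check can at least catch obvious slips before text leaves the building. The sketch below is hypothetical — the flagged terms are invented for illustration, and a real policy would go much further (client names, financials, anything under NDA):

```python
import re

# Hypothetical terms a company policy might flag; purely illustrative.
CONFIDENTIAL_TERMS = ["Project Falcon", "Acme Corp", "Q3 revenue"]

def redact(text: str) -> str:
    """Swap flagged terms for a placeholder before the text leaves the company."""
    for term in CONFIDENTIAL_TERMS:
        text = re.sub(re.escape(term), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

draft = "Acme Corp signed on for Project Falcon, lifting Q3 revenue by 12%."
print(redact(draft))
# -> "[REDACTED] signed on for [REDACTED], lifting [REDACTED] by 12%."
```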

Generative AI Replacing Real People and Jobs?

A growing concern in the Marketing industry is the potential for job replacement due to Generative AI. This fear gained traction when companies like Lattice announced the addition of “AI Workers” to their organizational charts, though they quickly reversed this decision.

While the long-term implications of Generative AI on the workforce are still unclear, I believe we’re not yet ready for such a transition. However, these AI tools can certainly help alleviate the workload for current employees who may be overwhelmed.

Ethical Generative AI in Marketing: A Human Touch

After reading all of this, you might think it’s impossible to integrate AI tools into your team and processes, but that isn’t the goal of this piece. All of these cautions simply confirm a strong need for a human touch before and after content is created.

As someone who is used to working on tiny teams with big workloads, I find that Generative AI tools help me create things faster and to a higher standard. Whether it’s reviewing, translating, or polishing off bigger ideas, tools like ChatGPT have been a great help. I firmly believe that the quality and creativity of human intervention will always trump carbon-copy content, at least given the current state of AI.

Still want to integrate Generative AI tools into your company’s toolbox? I love this article from Helios HR on “How to Create an AI Policy for your Company”.

Get in Contact

What has been your experience with AI Tools? What are your concerns and what makes you excited? Feel free to leave a comment or reach out to me on LinkedIn.

