
Responsible AI in communication starts with the right foundation (part 1 of 3)

This is Part 1 of a three-part series exploring what responsible and ethical AI use looks like for communication professionals. Over this series, we’ll examine why AI adoption without proper foundations creates problems, what baseline skills are needed before AI can truly help, and how to build a practical framework for using AI in ways that create understanding, build bridges, and break barriers.

As an inclusive leadership and communication consultant, I believe in “communication for good”: communication that creates understanding, builds bridges, and breaks barriers. Using AI responsibly in our communication work means ensuring these powerful tools support that mission rather than undermine it.

Yet right now, we’re seeing concerning patterns emerge. You’ve probably encountered the term “workslop”, which describes the generic, low-effort AI-generated content flooding professional spaces. It’s one visible symptom of a larger challenge: we’re adopting AI tools faster than we’re developing the skills to use them well.

Recent research from BetterUp and Stanford’s Social Media Lab found that 40% of full-time U.S. workers received workslop. According to the same report, employees spend an average of nearly two hours dealing with each instance, translating to roughly $186 per month in lost productivity per worker.

But workslop is just the most visible problem. The deeper issue is that we’re not approaching AI adoption with the intentionality and responsibility that communication professionals need to maintain quality, trust, and effectiveness.

While the conversation has shifted from “I don’t use AI” to “AI produces garbage,” both narratives miss the actual story. We’re using AI without the support systems, training, and frameworks that communication professionals need to use it responsibly.

Workslop is exposing a skills gap among professionals in the face of new technology.

We've been here before

Remember desktop publishing in the 1990s? Suddenly everyone with Microsoft Publisher could create newsletters and flyers. The result? An explosion of terrible design. Clashing fonts, clipart overload, unreadable layouts. Desktop publishing “acquired a bad reputation from untrained users who created chaotically organized ransom note effect layouts”. Graphic designers were critical of how desktop publishing lowered standards, as anyone with the software could produce materials “without having the appropriate knowledge of what constitutes good design”.

We didn’t blame the software. We recognized that access to tools doesn’t automatically grant taste or technique.

The same thing happened with early websites, PowerPoint presentations, and digital photography. Every democratizing technology goes through this awkward adolescence where bad examples are everywhere and critics declare the tool itself is the problem.

AI is just the latest chapter in this story. The difference is that the cultural anxiety around AI is amplifying the backlash and stretching out the maturity curve. Despite AI use at work nearly doubling since 2023, 95% of organizations see no measurable return on their investment in these technologies. Workslop is likely a major contributor to this paradox.

What responsible AI use requires

I’ve been experimenting with AI since 2023, before the current wave of concern about synthetic content took hold. I’ve also completed coursework in Ethical AI at the London School of Economics and Political Science, because I believe that if we’re going to use these tools in communication work, we need to understand both their capabilities and their ethical implications.

Here’s what I’ve learned: using AI well is actual work. And using AI responsibly, in ways that create understanding, build bridges, and break barriers rather than the opposite, requires specific competencies.

It requires:

  • Prompt crafting – Learning how to frame questions, provide context, and iterate on prompts to get useful outputs rather than generic mush.
  • Critical evaluation – Developing the ability to assess what AI gives you, recognize its limitations, and know what to keep, refine, or throw out entirely.
  • Strategic integration – Understanding where AI fits in your workflow. It’s not about replacing your process but augmenting specific parts of it.
  • Knowing when NOT to use it – Perhaps most importantly, recognizing tasks where AI adds no value or where human judgment is non-negotiable.

This upskilling takes time, experimentation, and genuine effort. It’s not something you get from a single webinar or a “10 ChatGPT prompts for marketers” article.

The organizational responsibility gap

Here’s what many discussions about AI miss: this isn’t just about individual competence. It’s about organizational responsibility. The London School of Economics (2024) identifies a critical concept: responsibility gaps. These arise when an AI system automates an activity previously performed by humans, but no individual is clearly responsible for the outcomes of the automated process.

When organizations mandate AI adoption without providing adequate training, clear guidelines, or proper support structures, they create these responsibility gaps. People are asked to use powerful tools without the organizational scaffolding that enables responsible use. This is a systems failure, not an individual one.

Organizations can have virtues and vices, just like individuals (London School of Economics, 2024). They can be negligent by failing to look for ethical issues, or appropriately cautious by testing new technology before releasing it. Organizational governance is key to establishing both organizational and individual virtues.

The three camps

Right now, I see three distinct groups in the professional world:

  • The Hesitant – Those who haven’t yet engaged with AI, whether due to concerns about ethics, lack of training opportunities, or uncertainty about how to start. They’re often watching from the sidelines, trying to understand what responsible use looks like.
  • The Unsupported – People using AI but without adequate training, clear guidelines, or organizational support. They’re often responding to pressure to “adopt AI” without being given the resources to do it well. Among AI users at work, 18% admit to sending content that was “unhelpful, low effort or low quality”. This is a sign that systems aren’t setting people up for success.
  • The Skilled Practitioners – Professionals who’ve had the opportunity to invest time in learning how to use AI as a brainstorm partner, first draft generator, or devil’s advocate. They’re doing the upskilling work, but their sophisticated use is often invisible precisely because it’s that good.

The middle group is most visible, but they’re often not the problem. The systems that failed to prepare them are. The Hesitant stay quiet, uncertain how to engage. The Skilled Practitioners often keep quiet about their process to avoid backlash. So the public conversation is dominated by examples of struggling use, which then defines what “AI content” means in people’s minds.

The skills gap is real, and it is systemic

The skills gap is real and measurable. While AI use has doubled at work since 2023 (from 21% to 40%), only 13% of workers have received any AI training (Paylocity, n.d.). Four in five U.S. employees want more training on AI tools, but only 38% of executives are currently helping employees become more AI-literate (Diaz, 2025). This disconnect between tool adoption and skill development is exactly why we’re seeing the workslop problem.

This aligns with LSE research showing that organizational governance must establish clear lines of responsibility for ethical decisions and unfavorable outcomes (London School of Economics, 2024). Without these structures, workers may lack motivation to develop ethical awareness or invest in the professional skills needed for responsible AI use.

The challenge isn’t just providing tools; it’s creating the conditions where people can learn to use them well. This requires:

  • Clear accountability structures so people understand their responsibility
  • Adequate training resources beyond one-off workshops
  • Time and space to experiment with AI in low-stakes contexts
  • Guidelines and frameworks for ethical use
  • Support systems for when things go wrong

The real conversation we should be having

Poor AI use in communication is real. Workslop is just one example. But focusing only on the outputs misses the deeper issue.

Every time someone dismisses all AI-assisted work as problematic, they’re mixing up people who lack support with people who lack care. They’re mistaking access to a tool for access to proper training. And they’re letting visible struggles define the entire conversation, ignoring the many people doing thoughtful, responsible work.

The challenge we’re facing isn’t between human content and AI content. It’s between people who’ve had access to proper training and support for responsible AI use, and people who haven’t. Between organizations that invested in enablement, and those that just mandated adoption. Between systems that set people up for success, and systems that set them up to struggle.

The real question isn’t “Is AI good or bad for communication?” It’s “How do we ensure everyone has what they need to use AI responsibly and effectively?”

But here’s what most organizational conversations miss: You can’t just provide AI training and expect responsible use to follow either.

In Part 2, we’ll explore why responsible AI use requires a baseline of skills that people need access and support to develop, and why this is fundamentally an issue of equity and ethical practice in communication.

Questions to ask yourself

Before moving to Part 2, take a moment to honestly assess where you stand:

  • Which camp am I in? Am I holding back from AI entirely, using it without real support or skill, or doing the work to develop genuine competence?
  • Have I received workslop from colleagues? More importantly, have I sent workslop?
  • When I use AI at work, am I thoughtfully integrating it or just trying to check boxes faster?
  • What would my colleagues say about the quality of work I produce with AI?

Be honest with yourself. The answers will help you understand what you need to develop as we dig deeper in the next post.

References

  • London School of Economics. (2024). Ethical AI online course materials. LSE and GetSmarter.
  • Niederhoffer, K., Kellerman, G. R., Lee, A., Liebscher, A., Rapuano, K., & Hancock, J. T. (2025). AI-generated “workslop” is destroying productivity. Harvard Business Review.
  • Spitalnic, G. (2025). AI-generated “workslop” is here. It’s killing teamwork and causing a multimillion dollar productivity problem, researchers say. CNBC.
  • Chiwaya, N. (2025). AI “workslop” is crushing workplace efficiency, study finds. Axios.
  • Opticentre. (n.d.). Desktop publishing (DTP) FAQ.
  • Blanco, J. (2021). History of desktop publishing. Journal of the American Society of Questioned Document Examiners.
  • Paylocity. (n.d.). AI upskilling: How to prepare yourself and your team for the future.
  • Diaz, M. (2025). How the 100 best companies are training their workforce for AI. Great Place To Work.