Good Comms | Inclusive leadership & communication

The baseline skills needed for responsible AI use (Part 2 of 3)

This is Part 2 of a three-part series on responsible AI use in communication. Read Part 1 here if you haven’t already.

In Part 1, we explored how responsible AI use requires proper support and training, not just tool access. Here’s what most AI training programs overlook: teaching prompt engineering techniques won’t help people use AI responsibly if they lack the foundational skills to recognize what quality work and good communication look like. When we provide powerful tools without ensuring baseline competence, we inadvertently set people up to cause unintended harm, regardless of their intentions.

Successful AI adoption requires baseline skills.

The Michelangelo problem

Think of AI like Michelangelo’s hammer and chisel (thanks to Mr. V for the analogy!). The tools didn’t make him a master sculptor. They simply allowed him to execute on expertise he’d already developed through years of training, deep understanding of anatomy, artistic vision, and technical mastery.

AI operates the same way in professional work. It augments existing capability. While it is a powerful sidekick, it cannot amplify foundational skills that aren’t there in the first place.

The multiplier effect

Here’s the fundamental principle:

Strong baseline skills × AI = Enhanced productivity and quality
Weak baseline skills × AI = Scaled problems

Without domain expertise, professional judgment, or critical thinking skills, AI won’t create those capabilities; it will simply help produce work that needs improvement, faster. This explains why poor AI use persists: people often weren’t given support to produce high-quality work manually first.

Real-world examples of the baseline challenge

I’ve witnessed this dynamic repeatedly with colleagues navigating unclear expectations and inadequate support:

Example 1: Unclear ownership
A colleague once said: “I put this into ChatGPT and it gave me this output, and I do have a paid one.”

This revealed uncertainty about where their responsibility began and AI’s role ended. Mentioning the paid version suggested they believed better tools would solve the problem. What they actually needed was clarity about ownership and evaluation criteria. When organizations say “use AI” without clarifying “you’re still responsible for quality and accuracy,” people can misunderstand where accountability lies (London School of Economics, 2024).

Example 2: Missing foundation
Another colleague shared their frustration: “I asked ChatGPT to write a speech for the boss and it came out with garbage.”

This wasn’t a failure of effort but a gap in foundational skills. Writing effective speeches requires understanding what makes them compelling, how to structure arguments, and how to match tone to audience. Without that baseline knowledge, it’s impossible to give AI good direction or recognize whether output is useful.

Both colleagues were doing their best with available tools. The gap was the failure to equip them with foundational skills first.

The two-layer development model

Effective AI upskilling requires two distinct layers:

Layer 1: Foundational professional skills

  • Domain expertise
  • Critical thinking and strategic judgment
  • Audience understanding and communication skills
  • Ability to recognize quality work
  • Creative problem-solving and ethical reasoning

Layer 2: AI skills

  • Prompt crafting and iteration
  • Critical evaluation of AI outputs
  • Understanding AI’s limitations and biases
  • Strategic workflow integration
  • Knowing when NOT to use AI
  • Maintaining ownership and accountability

You cannot skip Layer 1. No amount of prompt engineering compensates for lack of professional competence.

Without Layer 1: You can’t evaluate AI output, frame problems effectively, or determine when AI is appropriate.

With Layer 1 but without Layer 2: You do good work but miss efficiency opportunities.

With both layers: AI amplifies your skills, you produce quality work efficiently, and you maintain full ownership.

Why "just train everyone on AI" falls short

This two-layer model explains why well-intentioned “AI for everyone” initiatives often disappoint. Organizations rolling out AI tools without ensuring Layer 1 competencies provide powerful tools without foundations.

AI training alone doesn’t help if someone hasn’t had opportunities to develop strong writing, analytical skills, strategic thinking, or subject matter expertise. AI amplifies whatever foundation exists, gaps and all. As the London School of Economics (2024) notes, one factor preventing responsible AI design and deployment is that people lack the necessary knowledge and skills.

Two approaches to AI

People approach AI in one of two ways:

  1. As a collaborative partner that helps think through problems
  2. As a solution provider that should deliver finished answers

The difference comes down to guidance and support. If someone’s only exposure is “here’s a tool, use it,” they’ll likely treat it as a solution provider. That’s not a personal failing; it’s a symptom of what the LSE (2024) calls “responsibility gaps,” where no individual is clearly responsible for the outcomes of automated processes.

The professional competence question

If AI disappeared tomorrow, would you still be good at your job?

If the answer is no, meaning you’re relying on AI to compensate for skills you don’t have, you’re using it as a crutch, not as augmentation.

This doesn’t mean you shouldn’t use AI. It means investing in Layer 1 first or alongside Layer 2. Develop foundational skills that make you competent independent of tools. Only then can AI truly support your work rather than expose gaps.

Building toward solutions

Responsible AI use requires professional competence as a baseline and reveals how well our systems support people in developing it. As we establish in Ethical AI coursework, using AI ethically requires both individual virtues (awareness, curiosity, courage to identify problems) and organizational structures that establish clear lines of responsibility (LSE, 2024).

Before AI can effectively support communication work, people need foundational skills worth building on. Organizations must ensure access to developing those foundations, especially recognizing that not everyone has had equal access to baseline skills development.

In Part 3, we’ll explore what effective, responsible AI development looks like when you address both layers, and provide a practical framework for using AI ethically in communication work.

Questions to ask yourself

Honestly assess your baseline:

  • If AI disappeared tomorrow, would I still be good at my job?
  • What expertise am I bringing that AI augments? Can I articulate it?
  • Can I explain why an AI output is good or bad? Or do I just accept it?
  • Am I using AI as a shortcut for work I don’t know how to do, or accelerating work I do know?
  • Do I treat AI outputs as my responsibility, or as something AI “gave me”?
  • What Layer 1 skills might I need to develop before AI can truly help?

Your answers reveal whether you’re ready for Layer 2, or need to invest in your baseline first.

References

  • London School of Economics. (2024). Ethical AI online course materials. LSE and GetSmarter.
  • Niederhoffer, K., Kellerman, G. R., Lee, A., Liebscher, A., Rapuano, K., & Hancock, J. T. (2025). AI-generated ‘workslop’ is destroying productivity. Harvard Business Review.
  • DataCamp. (2024). Reskilling and upskilling in the age of AI: Challenges and opportunities for organizations.
  • Paylocity. (n.d.). AI upskilling: How to prepare yourself and your team for the future.