Good Comms | Where fairer systems begin

A practical framework for responsible AI in communication (Part 3 of 3)

This is Part 3 of a three-part series on responsible AI use in communication. Read Part 1 and Part 2 here if you haven’t already.

In Parts 1 and 2, we explored why AI adoption without proper foundations creates problems, and why baseline professional skills matter. Now for the practical part: How do you actually develop both layers?

This framework comes from experimenting with AI since 2023, completing Ethical AI coursework at LSE, and watching what works (and what doesn’t) in practice.

An artisan sketches designs at a workbench in a woodworking shop filled with tools.
The best AI-assisted work starts with human expertise worth amplifying.

The two-layer framework

Effective AI use requires two distinct layers of development:

Layer 1: Foundational professional skills

  • Domain expertise – Deep knowledge in your specific field
  • Critical thinking – Ability to analyze, evaluate, and synthesize information
  • Strategic judgment – Understanding the “why” behind decisions
  • Communication skills – Knowing your audience and how to reach them
  • Quality recognition – Distinguishing excellent work from work that needs improvement
  • Creative problem-solving – Approaching challenges from multiple angles
  • Ethical reasoning – Understanding implications and responsibilities

Layer 2: AI augmentation skills

  • Prompt crafting – Framing questions and providing context effectively
  • Critical evaluation – Recognizing when AI is right, wrong, biased, or incomplete
  • Understanding limitations – Knowing what AI can and can’t do well
  • Workflow integration – Identifying where AI adds genuine value
  • Knowing when NOT to use AI – Recognizing when human judgment is essential
  • Ethical AI use – Understanding attribution, bias, privacy, and responsible practices (London School of Economics, 2024)
  • Maintaining ownership – Taking full responsibility for AI-assisted outputs

For communication professionals, this means understanding how AI outputs affect your audience: whether they create understanding or confusion, and whether they preserve the trust and authenticity that good communication requires.

Developing layer 1: your professional baseline

Strengthen your baseline through:

1. Seeking genuine feedback
Get critical feedback from people who will tell you the truth about where gaps exist in your expertise.

2. Learning from excellence
Study the best work in your field. What makes it excellent? Apply those insights.

3. Building domain knowledge systematically
Read industry publications, take courses, learn from experienced practitioners, stay current.

4. Practicing critical thinking
Question assumptions, analyze arguments, consider multiple perspectives, evaluate evidence quality.

5. Developing professional judgment
Seek challenging projects, learn from failures, find mentors, reflect on what works and why.

Finally, ask yourself: Can you do excellent work without AI? If not, that is where to focus first.

Developing layer 2: AI augmentation skills

Once you have solid baseline competence, develop AI-specific skills:

1. Prompt crafting and iteration

Start broad, then narrow it down:

  • Begin with a clear objective
  • Provide necessary context
  • Be specific about constraints
  • Iterate based on results

Example progression:

First: “Write about AI in communications”

Better: “Write a 500-word analysis of how communication professionals can use AI ethically while maintaining quality standards. Focus on practical applications.”

Even better: “I’m a communications director at a B2B tech company. Write a 500-word internal memo explaining how our team can use AI tools like ChatGPT to improve content creation while maintaining our brand voice and editorial standards. Include specific examples and guidelines.”

The difference? Context, specificity, and clear objectives. Think of AI as a new intern: how would you brief it on the work? Writing a brief that detailed is only possible with Layer 1 competence.
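The broad-to-specific progression above can be mirrored in a small prompt-assembly helper. This is a minimal sketch, not part of any particular tool; the field labels (role, context, task, constraints) are my own illustrative names:

```python
def build_prompt(objective, role=None, context=None, constraints=None):
    """Assemble a layered prompt: optional role and context first,
    then the task, then explicit constraints -- mirroring the
    broad-to-specific progression described above."""
    parts = []
    if role:
        parts.append(f"Role: {role}")
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Task: {objective}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n".join(parts)

# Usage: the "even better" example from above, expressed as structured inputs.
prompt = build_prompt(
    objective="Write a 500-word internal memo on using AI tools to improve content creation.",
    role="Communications director at a B2B tech company",
    constraints=["maintain brand voice and editorial standards",
                 "include specific examples and guidelines"],
)
print(prompt)
```

The point of structuring it this way is that each missing field is a prompt-quality gap you can see at a glance, which is exactly what separates the "first" attempt from the "even better" one.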

2. Critical evaluation of AI outputs

Develop a systematic approach to evaluating AI-generated content:

The quality checklist:

  • Accuracy: Are facts correct? Any hallucinations?
  • Relevance: Does this address what I asked for?
  • Completeness: What’s missing? What needs adding?
  • Tone: Is this appropriate for my audience and context?
  • Originality: Is this generic, or genuinely insightful?
  • Bias: What assumptions are embedded?
  • Coherence: Does the logic hold together?

If you can’t evaluate these dimensions, return to Layer 1: those baseline skills are exactly what effective AI use depends on.
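The checklist above lends itself to a simple, repeatable review record. A hedged sketch: the dimension names come from the list above, but the pass/fail scoring scheme is my own illustration, not a prescribed method:

```python
from dataclasses import dataclass, field

# The seven dimensions from the quality checklist above.
DIMENSIONS = ["accuracy", "relevance", "completeness", "tone",
              "originality", "bias", "coherence"]

@dataclass
class OutputReview:
    """Record a pass/fail judgment and a note for each checklist dimension."""
    notes: dict = field(default_factory=dict)

    def rate(self, dimension, passed, note=""):
        assert dimension in DIMENSIONS, f"unknown dimension: {dimension}"
        self.notes[dimension] = (passed, note)

    def unresolved(self):
        """Dimensions not yet reviewed, or that failed review."""
        return [d for d in DIMENSIONS
                if d not in self.notes or not self.notes[d][0]]

# Usage: review an AI draft and see what still blocks publication.
review = OutputReview()
review.rate("accuracy", False, "statistic in paragraph 2 unverified")
review.rate("tone", True)
print(review.unresolved())
```

Even a lightweight record like this makes the review systematic rather than ad hoc: nothing ships while `unresolved()` is non-empty.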

3. Strategic use cases

Know when AI genuinely helps versus when it creates busywork:

Good use cases:

  • Brainstorming and ideation
  • First drafts for refinement
  • Summarizing large amounts of information
  • Testing different approaches
  • Challenging assumptions
  • Formatting and restructuring existing content
  • Research and information gathering

Poor use cases:

  • Anything requiring deep expertise you don’t have
  • High-stakes decisions without human oversight
  • Sensitive or confidential information
  • Creative work needing your unique voice
  • Content where authenticity and personal experience matter
  • Situations requiring understanding of every detail

4. Workflow integration

Think carefully about where AI fits in your process:

Effective integration points:

  • Early ideation: Generate options before committing
  • Mid-process: Get unstuck when hitting roadblocks
  • Pre-finalization: Check for gaps or weaknesses
  • Iteration: Test variations quickly

Ineffective integration:

  • As first and last step (generate and publish)
  • As replacement for thinking
  • For tasks you should be learning yourself

5. Ethical judgment and ownership

Some practices are essential:

Always:

  • Take full responsibility for anything you produce with AI
  • Check for bias, accuracy, and appropriateness
  • Consider attribution and transparency needs
  • Respect privacy and confidentiality
  • Maintain editorial control

Never:

  • Blame AI for poor outputs (“ChatGPT gave me this”)
  • Publish without thorough review
  • Use AI on confidential information without proper safeguards
  • Abdicate responsibility for quality
  • Treat AI outputs as authoritative without verification

As Gabriel (2020) notes in research on AI value alignment, designing AI systems with moral values becomes particularly important as they operate with greater autonomy, making it critical that humans maintain ethical oversight.

Learning from leading organizations

Some organizations model effective upskilling. Amazon committed $700 million to upskill 100,000 employees, focusing on foundational tech skills before advanced AI capabilities. IBM committed to training 2 million people in AI skills over three years, starting with baseline technical literacy before specialized applications (DataCamp, 2024).

The common thread? These programs ensure foundational competence first, then layer on AI-specific skills, never providing powerful tools without foundations.

Final questions to guide your development

Baseline-related (Layer 1):

  • What does excellence look like in my specific role? Can I articulate the standards clearly?
  • What are my greatest professional strengths? How can I amplify these with AI?
  • Where are my skill gaps? What foundational competencies need development?
  • Who can give me honest feedback about my work quality and development needs?

AI-related (Layer 2):

  • What’s one task where I could experiment with using AI as a thinking partner?
  • How do I currently evaluate AI outputs? Do I have a consistent process?
  • When did I last say “no” to using AI? What was my reasoning?
  • What does “maintaining ownership” mean concretely in my role?

Development-related:

  • If I could only develop three skills in 90 days, what would have the biggest impact?
  • What would my plan look like if it addressed both foundational AND AI skills?
  • Who’s doing this well that I can learn from?
  • What’s my accountability system? How will I ensure I’m actually developing skills?

In other words

Poor AI use persists when we treat it as individual failure rather than a systems issue, and when tools are provided without support.

But there is another path forward: develop strong foundations, use AI thoughtfully, and take full ownership of your outputs.

If you’re in leadership, you can change your team’s approach: create support systems, ensure proper training, build frameworks that help people succeed.

The tools are just tools. What matters is how we set ourselves and others up to use them well.

References

  • DataCamp. (2024). Reskilling and upskilling in the age of AI: Challenges and opportunities for organizations.
  • Diaz, M. (2025). How the 100 best companies are training their workforce for AI. Great Place To Work.
  • Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411-437.
  • London School of Economics. (2024). Ethical AI online course materials. LSE and GetSmarter.
  • Niederhoffer, K., Kellerman, G. R., Lee, A., Liebscher, A., Rapuano, K., & Hancock, J. T. (2025). AI-generated ‘workslop’ is destroying productivity. Harvard Business Review.
  • Paylocity. (n.d.). AI upskilling: How to prepare yourself and your team for the future.