CFP: AI in Technical Communication 2026: Designing Future-Proof Practices Virtual Conference

Dear Listserv Members,

I hope you are well-rested and interested in contributing a proposal to the AI in Technical Communication 2026 conference. Please peruse the call below and submit no later than January 16th. Reach out to me or the conference chairs with any questions.

Cheers,
Dr. G

AI in Technical Communication 2026: Designing Future-Proof Practices

Spring 2026 · Online · One-Day Symposium

Co-chairs

Bremen Vance, Mercer University

Guiseppe Getto, Mercer University

Geoffrey Sauer, Texas Tech University

Jamie Littlefield, Texas Tech University

Theme

Generative AI continues to reshape technical communication, but the pace of change often drives short-term adaptation over long-term planning. In many classrooms and workplaces, this has meant reacting to new tools as they appear (updating assignments, experimenting with workflows, or shifting documentation practices). While these responses are valuable, they can leave us caught in a cycle of chasing the latest trend rather than developing strategies that last.

This year’s symposium asks: What does it mean to future-proof technical communication practices? We invite work that looks beyond immediate adjustments and instead highlights sustainable, resilient, and foresight-driven approaches. Proposals might explore how educators design curricula that can weather rapid tool turnover, how practitioners build workflows that balance innovation with stability, or how researchers and policymakers anticipate long-term challenges in ethics, governance, and accessibility.

By gathering these perspectives, the 2026 symposium aims to showcase practices that not only adapt to today’s technologies but also prepare us for tomorrow’s uncertainties. Our goal is to move the conversation from keeping pace with AI to building durable practices that will strengthen the field of technical communication for years to come.

Topics of Interest

We welcome proposals from educators, practitioners, and researchers. Possible areas include (but are not limited to):

● Sustainable AI pedagogy and curriculum design

● Professional and workplace practices that support resilience in the face of tool churn

● Policies, ethics, and governance strategies for long-term accountability

● Tools, methods, and content operations that emphasize stability and transparency

● Case studies, experiments, and lessons learned that foreground durability over novelty

Each of these areas is described in more detail below, with example directions for proposals:

Case Studies, Experiments, and Lessons Learned that Foreground Durability over Novelty

● Classroom or workplace experiments that assess how AI-integrated workflows evolve over time.

● Analyses of what hasn’t worked—examples of burnout, ethical lapses, or quality issues from overreliance on AI tools.

● Evaluations of student or professional learning outcomes after sustained AI integration.

● Interdisciplinary collaborations showing how technical communicators contribute to the ethical, transparent design of AI systems.

Sustainable AI Pedagogy and Curriculum Design

● Designing assignments and learning outcomes that remain relevant across changing AI tools (e.g., prompt engineering as transferable rhetorical skill).

● Building AI literacy frameworks that emphasize critical reflection, ethics, and rhetorical control rather than tool-specific proficiency.

● Developing course modules that teach students to evaluate, verify, and document AI-assisted work.

● Program-level initiatives that embed AI literacy or human-in-the-loop writing across multiple courses.

Professional and Workplace Practices that Support Resilience in the Face of Rapid Tool Turnover

● Workflow redesigns that balance automation with human editorial judgment in documentation, localization, or regulatory contexts.

● Case studies of organizations implementing governance strategies for AI-assisted writing, editing, or translation.

● Longitudinal accounts of teams adapting to multiple AI tools while maintaining quality and compliance.

● Frameworks for managing “AI transitions” — onboarding, retraining, and sustaining communication quality during tool turnover.

Policies, Ethics, and Governance Strategies for Long-Term Accountability

● Institutional or corporate policies governing AI authorship, transparency, and attribution.

● Models for documenting AI involvement in content production (e.g., disclosure statements, audit trails, content provenance).

● Ethical frameworks for equity, accessibility, and inclusion in AI-augmented communication.

● Comparative analyses of regulatory approaches (e.g., EU AI Act, U.S. Executive Order) and their implications for technical communication practice.

Tools, Methods, and Content Operations that Emphasize Stability and Transparency

● Designing documentation pipelines that integrate AI safely (e.g., human-in-the-loop review checkpoints, validation protocols).

● Strategies for sustaining consistent tone, terminology, and information architecture across AI-generated and human-authored content.

● Research methods for evaluating AI’s long-term impact on efficiency, quality, or team communication.

● Approaches to maintaining data integrity, reproducibility, and security in AI-assisted documentation systems.

About the AI in Technical Communication Symposium

The AI in Technical Communication Symposium has become an annual one-day, online event that brings together educators, practitioners, and researchers to explore the implications of AI for our field. The series began with the Spring 2024 focus on classroom experimentation (Teaching Tech Comm with AI), and in 2025 expanded to consider broader professional and ethical challenges (Keeping Pace with the AI Surge). In 2026, the symposium turns toward future-proofing, asking how technical communication can build resilient practices that adapt to change without being consumed by it.

Presentation Formats

Individual Papers: 10–12 minute presentations + brief Q&A (grouped into sessions)

Panels: 3–4 presenters, 45 minutes with discussion

Lightning Talks: 5 minutes, ideal for students or emerging work

Micro-demos: 8–10 minutes, showcasing tools, assignments, or workflows

Submission Details

Send abstract proposals to: techneforge

Proposal length: 300–500 words (include title, format, and 2–4 keywords)

Deadline: January 16th

Notification of acceptance: January 30th

Event date: March 20th

Review criteria (used by the program committee):

Reviewers will look for well-structured proposals with a clear argument, method, or takeaway. We hope to see evidence of thoughtful engagement, reflection, or evaluation indicating that the presentation will give attendees an opportunity to engage deeply with valuable ideas and with actionable practices or insights they can adapt and adopt.

Additionally, we will look at connections to the theme and to the interests of technical communication educators, practitioners, and researchers. Because this is a one-day online symposium, proposals will also be considered in terms of how they contribute to an engaging, well-balanced program. We aim to ensure a mix of perspectives, pacing across sessions, and accessibility for participants in a virtual format.

We look forward to your proposals and to building another thoughtful, engaging conversation on the role of AI in technical communication.

Dr. Philip B. Gallagher | He, Him, His

Assistant Professor

Department of Technical Communication

Mercer University School of Engineering

pbgallagher.eiu
gallagher_pb