CFP: Edited Collection on Technical Editing and Generative AI – Reminder

Hello,

This is a friendly reminder that proposals for the edited collection Advancing Technical Editing in the Age of Generative AI, which Ed Nagelhout and I announced, are due June 1. This collection aims to explore the impact of generative AI writing tools on the field of technical editing and to contribute to research on technical editing practice and pedagogy for researchers, educators, and practitioners.

We invite 500- to 750-word chapter proposals (excluding references) that address key questions such as:

  • How are technical editors currently using generative AI tools, and what are the benefits, limitations, and ethical considerations?
  • How does generative AI change or expand the role of the technical editor, and what unique skills and judgment do human editors provide?
  • What should best practices be around attribution and transparency when using generative AI for writing or editing?
  • How can technical editing curricula and training adapt to prepare students for editing in an age of generative AI?
  • How can technical editors use generative AI critically and inclusively to advance social justice language practices in technical communication?

We welcome proposals from diverse, interdisciplinary perspectives, including practicing technical writers and editors, educators, and researchers. Possible topics might include case studies, pedagogical approaches, comparative analyses, ethical and legal frameworks, the changing role of technical editors, and strategies for human-AI collaboration.

The timeline for this edited collection is as follows:

  • Proposals due: June 1, 2024 (include a tentative title and brief biography for all contributors)
  • Decisions to authors: June 15, 2024
  • Full chapters due: October 1, 2024

For questions, please contact the collection editors Jeffrey Jablonski (jeffrey.jablonski) and Ed Nagelhout (ed.nagelhout). Please send submissions to Jeffrey Jablonski.

We look forward to your submissions and to advancing this timely conversation on technical editing and generative AI.

The complete CFP is copied below. For a PDF copy, visit https://tinyurl.com/3m4tx8un.

Sincerely,

Jeff

Jeffrey Jablonski, Ph.D.
Associate Professor
English Department – RLL 245
University of Nevada, Las Vegas

jeffrey.jablonski

Call for Proposals: Scholarly Collection on Technical Editing and Generative AI
Advancing Technical Editing in the Age of Generative AI
Jeffrey Jablonski and Ed Nagelhout, University of Nevada, Las Vegas

Technical communication researchers are exploring how technical writers can use generative AI tools in their work and how generative AI can assist with tasks such as research, drafting, and editing (Baro, 2022; Bedington et al., 2024; Quetzalli, 2023; Tang, 2021; Weltin et al., 2023). However, scholars also note limitations of AI-generated content, such as the risk of factual errors and hallucinations (Babcock et al., 2021; McIntosh et al., 2023).

The use of generative AI is also changing and expanding the role of the technical editor in a range of disciplines. Editors must develop new skills in AI literacy and human-machine collaboration (Duin & Pedersen, 2021). As Ziegler (2022) argues, the increasing demands for business process integration and semantic technologies in content management necessitate a clearer definition of the competencies of "information architects" as distinct from traditional technical writing roles. At the same time, human editors remain uniquely positioned to provide critical judgment, domain expertise, and ethical oversight in the use of AI technologies (Kaebnick et al., 2023; Ren et al., 2023).

Some scholars propose frameworks and best practices for human-AI interaction in writing and editing. Hart-Davidson (2018) advocates for collaborative, rhetorical relationships with AI rather than using AI merely as a tool. McKee and Porter (2022) propose a taxonomy of roles for “human-machine” teaming based on rhetorical context. Strobelt et al. (2022) present GenNI, an interface for "human-AI collaboration in producing descriptive text" that gives users high-level control over AI-generated content. Hardin et al. (2020) share methods for technical writers to produce clear and concise content for both human and machine translation, taking a proactive approach to writing for a global audience.

Ethical considerations around attribution, transparency, and bias are paramount as generative AI becomes more integrated into technical editing workflows and academic publishing. Duin & Pedersen (2023) emphasize the importance of AI explainability and the need for protocols to mitigate bias and protect intellectual property rights. Kaebnick et al. (2023) recommend that scholarly authors and editors prioritize transparency in disclosing the use of generative AI, while maintaining human responsibility for the final content.

Technical editing curricula and training must also adapt to prepare students for these challenges, fostering AI literacy alongside traditional editing skills (Flanagan & Albers, 2019; Melonçon, 2019; Berger & Pigg, 2023). In exploring how technical editing courses should be adapted to account for generative AI, Mallette (2024) takes a critical and inclusive stance that emphasizes social justice. In a “microcredential module,” students are introduced to concepts of social justice in technical communication (Clem & Cheek, 2022; Jones & Walton, 2023) and reflect on how editors can critically challenge bias, misinformation, and oppression in technical editing practice involving generative AI.

Looking ahead, the broader implications of generative AI for technical communication are significant. For Koerber et al. (2023), large language model (LLM) technologies have the potential to enhance human capabilities and transform knowledge creation, but also raise complex questions around authorship, ownership, and control. These questions require critical engagement from the technical editing community.

Technical editors will play a vital role in navigating these issues, ensuring that AI technologies are developed and used responsibly in service of effective, ethical communication, and advocating for the continued importance of human editors in AI-mediated content creation. As Hart-Davidson (2018) asserts, technical communicators must learn to "write with robots," envisioning AI as a dialogic partner in the knowledge-making process. The path forward is not AI automation alone, but thoughtful human-AI integration.

We invite 500- to 750-word chapter proposals (excluding references) for an edited collection exploring the impact of generative AI writing tools on the field of technical editing. This collection seeks to contribute to research on technical editing practice and pedagogy for researchers, educators, and practitioners. Key questions this collection aims to address include but are not limited to the following:

  • How are technical editors in industry and academia currently using generative AI tools? What are the benefits, limitations, and ethical considerations?
  • How does the use of generative AI change or expand the role of the technical editor? What skills and judgment do human editors uniquely provide?
  • What should best practices be around attribution and transparency when generative AI is used to assist with writing or editing?
  • How can technical editing curricula and training adapt to prepare students for editing in an age of generative AI?
  • How can technical editors use generative AI critically and inclusively for advancing social justice language practices in technical communication?
  • How can technical communicators participate in the development of generative AI editing tools?
  • What are the broader implications of generative AI for issues of accuracy, bias, intellectual property, and authorship in technical communication?

We welcome proposals from diverse, interdisciplinary perspectives, including practicing technical writers and editors, educators, and researchers. Possible topics might include:

  • Case studies on the use of generative AI in technical editing workflows
  • Pedagogical approaches to teaching technical editing in the context of generative AI
  • Comparative analysis of human versus AI technical editing on measures such as accuracy and style
  • Ethical and legal frameworks for technical editors’ use of generative AI
  • The changing role and value proposition of the technical editor in an AI-assisted future
  • Strategies for human-AI collaboration in technical editing

Timeline
The timeline for this edited collection is as follows:

  • Proposals due: June 1, 2024
    • Include a tentative title and a brief biography for all contributors
  • Decisions to authors: June 15, 2024
  • Full chapters due: October 1, 2024

If you have any questions, please contact either of the collection editors, Jeffrey Jablonski (jeffrey.jablonski) or Ed Nagelhout (ed.nagelhout). Please send submissions to Jeffrey Jablonski (jeffrey.jablonski). We look forward to your submissions and to advancing this timely conversation on technical editing and generative AI.
References
Babcock, R. D., Khandelwal, J., Wilkinson, C. E., Kahathuduwa, C., & Schlabritz-Loutsevitch, N. (2021). Supporting medical writers in the twenty-first century. In L. Melonçon & S. Graham (Eds.), Teaching writing in the health professions (pp. 129-146). Routledge.
Baro, D. (2022). Metadata and content management bridging technical documentation and automation technology. SHS Web of Conferences, 139, 02004.
Bedington, A., Halcomb, E., McKee, H. A., Sargent, T., & Smith, A. (2024). Writing with generative AI and human-machine teaming: Insights and recommendations from faculty and students. Computers and Composition, 71, 102833.
Berger, A., & Pigg, S. (2023). Peer-led professional development: How one technical communication team learns on the job. Journal of Business and Technical Communication, 37(4), 347-377.
Clem, S., & Cheek, R. (2022). Unjust revisions: A social justice framework for technical editing. IEEE Transactions on Professional Communication, 65(1), 135–150.
Creary, M., & Gerido, L. H. (2023). The public performativity of trust. The Hastings Center Report, 53(S2), S76–S85.
Duin, A. H., & Pedersen, I. (2021). Writing futures: Collaborative, algorithmic, autonomous. Springer.
Duin, A. H., & Pedersen, I. (2023). Augmentation technologies and artificial intelligence in technical communication: Designing ethical futures. Routledge.
Flanagan, S., & Albers, M. J. (Eds.). (2019). Editing in the modern classroom. Routledge.
Hardin, A. R., Ito, J., & Sasaki, A. (2020). Writing for human and machine translation: Best practices for technical writers. In Proceedings of the 38th ACM International Conference on Design of Communication (pp. 1-8).
Hart-Davidson, W. (2018). Writing with robots and other curiosities of the age of machine rhetorics. In J. Ridolfo & W. Hart-Davidson (Eds.), The Routledge handbook of digital writing and rhetoric (pp. 343-353). Routledge.
Jones, N. N., & Walton, R. (2023). Social justice. In H. Yu & J. Buehl (Eds.), Keywords in technical and professional communication (pp. 267-272). WAC Clearinghouse.
Kaebnick, G. E., Bennett, A., Brody, H., Dresser, R., Garland, S., Guinn, A., Hale, B., Moreno, J., & Vanderpool, H. (2023). Editors’ statement on the responsible use of generative AI technologies in scholarly journal publishing. Hastings Center Report, 53(5), 3-6.
Koerber, A., Pedersen, I., Duin, A. H., Kastman, E., & Smith, J. (2023). The predatory paradox: Ethics, politics, and practices in contemporary scholarly publishing. Open Book Publishers.
Mallette, J. C. (2024). Preparing future technical editors for an artificial intelligence-enabled workplace. Journal of Business and Technical Communication, 38(2), 205-239.
McIntosh, T. R., Liu, T., Susnjak, T., Watters, P., Ng, A., & Halgamuge, M. N. (2023). A culturally sensitive test to evaluate nuanced GPT hallucination. IEEE Transactions on Artificial Intelligence. Advance online publication.
McKee, H. A., & Porter, J. E. (2022). Team roles & rhetorical intelligence in human-machine writing. In 2022 IEEE International Professional Communication Conference (ProComm) (pp. 384-391). Limerick, Ireland.
Melonçon, L. (2019). A field-wide view of undergraduate and graduate editing courses in technical and professional communication programs. In S. Flanagan & M. J. Albers (Eds.), Editing in the modern classroom (pp. 171-191). Routledge.
Quetzalli, A. (2023). The future of ChatGPT and AI in docs. In A. Quetzalli (Ed.), Docs-as-ecosystem: The community approach to engineering documentation (pp. 225-233). Apress.
Ren, Y., Zhang, H., & Kraut, R. E. (2023). How did they build the free encyclopedia? A literature review of collaboration and coordination among Wikipedia editors. ACM Transactions on Computer-Human Interaction, 31(1), 7:1-7:48.
Strobelt, H., Bau, D., Bethge, M., & Mordvintsev, A. (2022). GenNI: Human-AI collaboration for data-backed text generation. IEEE Transactions on Visualization and Computer Graphics, 28(1), 1076-1086.
Tang, Y. (2021). A robot wrote this?: An empirical study of AI’s applications in writing practices. In Proceedings of the 39th ACM International Conference on Design of Communication (pp. 380-381). Association for Computing Machinery.
Weltin, M., Lucke, D., & Jooste, J. L. (2023). Automatic content creation system for augmented reality maintenance applications for legacy machines. Procedia CIRP, 120, 744-749.
Ziegler, W. (2022). New roles and competencies in technical communication induced by semantics and analytics. SHS Web of Conferences, 139, 02004.