Welcome to Revise and Resubmit. Today, we’re diving into a topic that’s both exhilarating and unsettling—the role of generative AI in shaping the future of scholarly work. Our focus is on the fascinating editorial “The Janus Effect of Generative AI: Charting the Path for Responsible Conduct of Scholarly Activities in Information Systems,” authored by Anjana Susarla, Ram Gopal, Jason Bennett Thatcher, and Suprateek Sarker, and published in the prestigious FT50 journal Information Systems Research by INFORMS.
Generative AI tools, like ChatGPT, have taken the world by storm. They’re transforming industries, from marketing to medicine, but what about academia? Can AI really help us become better researchers, or are we walking a fine line between innovation and intellectual compromise? This editorial doesn’t shy away from the hard questions—it explores how AI might help automate tasks like writing or data analysis while cautioning against over-reliance on these tools. Could generative AI enhance our work? Or does it risk diluting the depth and integrity that come from human insight?
Here’s the thing: as AI-generated content starts to blur the line between machine and human output, how do we keep control of our own scholarly integrity? How do we ensure that these tools are used responsibly without eroding the very foundations of academic inquiry?
Before we dive into the complexities, let’s give a big thank you to the authors, Anjana Susarla, Ram Gopal, Jason Bennett Thatcher, and Suprateek Sarker, and to Information Systems Research for this thought-provoking piece. Now, are you ready to explore the future of research with a little help from AI?
Reference
Susarla, A., Gopal, R., Thatcher, J. B., & Sarker, S. (2023). The Janus Effect of Generative AI: Charting the Path for Responsible Conduct of Scholarly Activities in Information Systems. Information Systems Research, 34(2), 399–408. https://doi.org/10.1287/isre.2023.ed.v34.n2