Openai/694a9a91-d810-8013-80fe-0c38fdc58984
This conversation was exported from ChatGPT[1]
gpt-5-2
User: Final sentence of my ECIS review: "This is the kind of paper whose presentation I would like to listen to at ECIS and further discuss with the authors during the coffee break. Thus I recommend acceptance." Of course, the usual criteria of rigor, contribution, relevance, clarity, prior literature, etc. still apply, and the paper has several issues (outlined in my review) and a long way to go before making a suitable contribution to a journal-level publication. But my main line of thought in the GenAI era: always keep the recipient in mind. Will both the audience in Milano and the authors benefit from the presentation/attendance? Is this a paper that is worth 20 minutes of the audience's time?
Disclaimer: My line is from the "confidential comments to editors" field, so this LinkedIn post does not break the rules of double-blind review.
Assistant:
The Edit
I recently caught myself writing this sentence in an ECIS review:
"This is the kind of paper whose presentation I would like to listen to at ECIS, and then discuss with the authors during the coffee break."
The paper had issues. Many of them. Rigor, contribution, positioning in prior work—nothing was “done.”
Yet I still recommended acceptance.
Why?
Because in the GenAI era, my main review question has shifted:
Is this worth 20 minutes of the audience’s time?
Not:
- Is it already journal-ready?
- Is every method airtight?
- Is the theory perfectly polished?
But:
- Will the audience in Milano gain something concrete?
- Will the authors benefit from presenting it?
- Will this trigger a real conversation afterward?
Conferences are not journals. They are attention markets.
If a paper sparks thinking, debate, or disagreement—and makes people stay in the room instead of checking email—it has earned its slot.
Everything else still matters. But it is secondary.
Disclaimer: this reflection is based on confidential comments to editors and does not violate double-blind review rules.
#academia #GenAI #conferences
The Value Check
This version reframes reviewing around audience attention and learning, not perfection—something reviewers, authors, and conference organizers all quietly struggle with today.
User: Generate a photo for this LinkedIn update: My last sentence in a European Conference on Information Systems (ECIS) review:
“This is the kind of paper I would like to listen to at ECIS—and then discuss with the authors during the coffee break.”
The paper has issues. Many of them. Rigor, contribution, positioning in prior work—nothing was “done.”
Yet I still recommended borderline acceptance.
Why?
My main line of thought in the GenAI era: always keep the recipient in mind. Is this worth 20 minutes of the audience's time?
Not:
Is it already journal-ready?
Is every method airtight?
Is the theory perfectly polished?
But:
Will the audience in Milano gain something concrete?
Will the authors benefit from presenting it?
Will this trigger a real conversation afterward?
Conferences are not journals. They are attention markets.
If a paper sparks thinking, debate, or disagreement—and makes people stay in the room instead of checking email—it has earned its slot.
Disclaimer: this post is based solely on my "comments to editors" field and thus does not violate double-blind review rules.
Assistant:
User: Can you put me in the audience and also add me to the coffee break? My picture is attached. Also, use the logo of ECIS 2026: https://ecis2026.it/
Assistant:
User: almost there. edit
almost there. Coffee is fine, but also put me on the chair listening to the presentation
Assistant:
User: No, no, as one of the people listening to the presenter edit
No, no, as one of the people listening to the presenter
Assistant:
References
- ↑ "ChatGPT - Social media post humanizer". ChatGPT. Retrieved 2025-12-24.