Openai/69257b2e-d9bc-8013-8f05-4a2c66b1eac3
This conversation was exported from ChatGPT[1]
gpt-5-1-thinking
User: A day before the start of our Digital Innovation course at @UL SEB, I sent connection invites to students.
A day before the start of our Digital Innovation course at @UL SEB, I sent connection invites to students. Well, I gave my agent (Comet) access to my (basic, free) LinkedIn account and the names of all students, and asked it to check the profiles and, if affiliated with UL SEB, send them the invites. It did reasonably well. About half of the students received the invite (some of the non-recipients do not have LinkedIn profiles, have multiple names, or have not yet added UL SEB as ‘education’). This example served as a great starting point for discussion in the Digital Innovation class in terms of how the advertising revenue model could be severely affected (if agents click, people don’t see ads), how the business model of social media could be disrupted (likes and messages are no longer a sign of attention), and the questionable value of behaviour analysis based on computer use, running A/B experiments on social media, etc. Further, though, it also raises questions regarding:
- Ethics: is it ethical to “pretend” to be a professor sending invites to students? The students were misled to believe that I checked their profiles and showed ‘genuine interest’.
- Privacy: I gave my agent the list of all students in class
In terms of ethics, I personally considered this to be ethical: my agent did what I would do anyway. I am genuinely interested in connecting with current students to see how they progress throughout their careers. Further, I afterwards ‘manually’ checked all profiles and all sent invitations (yes, this particular Gen-AI activity meant more effort for me). Also, I informed all students about my action during our first meeting on the very next day. What I would consider unethical (although an agent is capable of doing it): searching for first-year master’s students at UL SEB (or final-year undergrads at other European universities), sending them invites, and then sending tailor-made, personalized invitations to enrol in my course. Any thoughts on that?
Assistant: The Edit
The Edit
A day before the start of our Digital Innovation course at @UL SEB, I let my AI agent (Comet) into my basic, free LinkedIn account.
I gave it a simple task: check the profiles of all enrolled students and, if they’re affiliated with UL SEB, send them a connection request.
Result: about half of the students got an invite. Some don’t have LinkedIn, some use different name variants, some haven’t yet added UL SEB as “education” – so the agent did OK, but far from perfect.
Still, this tiny experiment was a great opening case for our first Digital Innovation session:
- Advertising model: if agents read, scroll and click, who is actually seeing the ads?
- Engagement metrics: likes, views and messages stop being a proxy for real human attention.
- Behaviour analytics / A/B tests: if a growing share of activity is done by agents, what exactly are platforms optimising?
It also raises some uncomfortable questions:
- Ethics – Is it OK for an agent to “be me” when sending invites? Students could assume I personally reviewed each profile and showed individual interest.
- Privacy – I shared a full list of students with the agent.
My own view:
- I consider this specific use acceptable: the agent did what I would have done anyway. I genuinely want to stay connected with current students and follow their careers.
- I later manually reviewed all the profiles and invitations. Ironically, using GenAI meant more work for me in this case.
- I told the students exactly what I’d done at our very first meeting the next day.
Where I would clearly draw the line:
❌ Letting an agent search for first-year master’s students (or final-year undergrads across Europe), send them invites, and then auto-generate personalised pitches to enrol in my course.
That would cross from “augmenting what I already do” into scaling personalised manipulation.
Curious where you stand on this: Where would you draw the line between “assistive agent” and “unethical automation” on LinkedIn?
#DigitalInnovation #Ethics #AIAgents
The Value Check
I tightened the story to focus on one concrete use case (an AI agent sending invites) and the resulting ethical, privacy, and business model questions, so readers can quickly map it to their own use of agents and LinkedIn.
Quality Control
Your input is already specific and non-generic (clear scenario, real trade-offs). If you want to increase the value even further, you could later add 1–2 student reactions or disagreements from class to show that reasonable people draw the line in different places.
User: add an opposite sign for that: ❌
add an opposite sign for that: ❌
Assistant: Here’s the updated version with the opposite sign added.
Here’s the updated version with the opposite sign added.
The Edit
A day before the start of our Digital Innovation course at @UL SEB, I let my AI agent (Comet) into my basic, free LinkedIn account.
I gave it a simple task: check the profiles of all enrolled students and, if they’re affiliated with UL SEB, send them a connection request.
Result: about half of the students got an invite. Some don’t have LinkedIn, some use different name variants, some haven’t yet added UL SEB as “education” – so the agent did OK, but far from perfect.
Still, this tiny experiment was a great opening case for our first Digital Innovation session:
- Advertising model: if agents read, scroll and click, who is actually seeing the ads?
- Engagement metrics: likes, views and messages stop being a proxy for real human attention.
- Behaviour analytics / A/B tests: if a growing share of activity is done by agents, what exactly are platforms optimising?
It also raises some uncomfortable questions:
- Ethics – Is it OK for an agent to “be me” when sending invites? Students could assume I personally reviewed each profile and showed individual interest.
- Privacy – I shared a full list of students with the agent.
My own view:
✅ I consider this specific use acceptable: the agent did what I would have done anyway. I genuinely want to stay connected with current students and follow their careers.
- I later manually reviewed all the profiles and invitations. Ironically, using GenAI meant more work for me in this case.
- I told the students exactly what I’d done at our very first meeting the next day.
Where I would clearly draw the line:
❌ Letting an agent search for first-year master’s students (or final-year undergrads across Europe), send them invites, and then auto-generate personalised pitches to enrol in my course.
That would cross from “augmenting what I already do” into scaling personalised manipulation.
Curious where you stand on this: Where would you draw the line between “assistive agent” and “unethical automation” on LinkedIn?
#DigitalInnovation #Ethics #AIAgents
The Value Check
I added the ✅ to clearly contrast “acceptable augmentation” with the ❌ example of manipulative scaling, making the ethical boundary more visually obvious for readers.
User: Can you prepare a prompt for image generation for this post?
Can you prepare a prompt for image generation for this post? Make the image as human and realistic as possible.
Assistant: Here are a few ready-to-use prompts you can copy into your image generator:
Here are a few ready-to-use prompts you can copy into your image generator:
Option 1 – Classroom + Laptop (main concept)
Option 2 – Split between “Human” and “Agent”
Option 3 – Focus on the Screen + Subtle Ethics Hint
If you want, you can also add a negative prompt like:
>
User: Use this, generate picture: Ultra-realistic split-screen image.
Use this, generate picture: Ultra-realistic split-screen image. On the left side, a close-up of a professor at a desk manually reviewing a student profile on a professional networking site (LinkedIn-like interface, no logos), finger hovering over the “Connect” button. On the right side, the same screen but with multiple invites being sent automatically, with a subtle glowing AI icon labeled “Comet” indicating an agent is acting on behalf of the professor. The professor appears thoughtful, not smiling, as if considering the ethical implications. Clean, minimal background hinting at a university office. Natural colors, realistic lighting, 16:9 aspect ratio.
Assistant:
User: Use the picture on the right.
Use the picture on the right. But use my picture as a professor
Assistant:
User: Replace the hand on the left with a robotic hand to make it clear that it is sent by the robot.
Replace the hand on the left with a robotic hand to make it clear that it is sent by the robot. Then, you can remove the Comet logo. Also, change my face to look much more relaxed, as if I am enjoying myself while the agent is working.
Assistant:
User: No, no.
No, no. No hand on the right side. No computer on the right side. I am not on the computer. On the left side of the picture, put a robotic hand instead of the human one. Change “Student at University” to “Student at UL SEB”.
Assistant:
User: Ok, almost perfect.
Ok, almost perfect. Would you suggest adding anything? Maybe a glass of beer for me?
Assistant: I’d skip the beer 🙂
I’d skip the beer 🙂
On LinkedIn it would:
- Look a bit unprofessional / off-topic for teaching and ethics
- Distract from the main message: agent works, professor reflects
If you tweak anything, I’d only consider:
- Your face – keep yourself relaxed and happy, but with eyes open and a light smile → more “engaged and comfortable” than “sleeping”.
- Left screen details – make the first row clearly say “Student at UL SEB” with a visible Connect button, and maybe show 2–3 student rows so it’s obvious the agent is doing this at scale.
- Robot hand hint – a very subtle glow on the fingertip or button to suggest “automated action” without looking sci-fi.
Beyond that, the image is already strong and clear for the story you’re telling.
User: Ok, do the three changes suggested by you.
Ok, do the three changes suggested by you. No beer
Assistant:
User: Could you add names and pictures to “Student at UL SEB” on the left?
Could you add names and pictures to “Student at UL SEB” on the left? The names: 1 Slovenian, 1 Serbian, 1 German. Pictures: two female, one male, all white.
Assistant:
References
- ↑ "ChatGPT - Social media post humanizer". ChatGPT. Retrieved 2025-11-26.