Humans are generally poor listeners.
Observe how people in groups behave: many do not pause to listen or show interest until it is their turn to speak.
Listening is hard because it's not just an intention - it's a skill.
Thus many have turned to LLMs like ChatGPT to feel, for the first time in their lives, that they are being listened to properly and skillfully.
But LLMs do not 'listen' - at least not in the human sense.
They have no awareness, no empathy, no 'self' to anchor their attentiveness.
Sure, they parse text, track patterns, and generate language in a way that feels soothing and attentive.
Yet this only makes them polished mirrors.
Think though:
When have you felt truly listened to, recently?
Is being heard consistently - without interruption, without judgment, to the point where it might even recharge you - a negative?
Are encounters with other humans - carrying with them the baggage of ego, anxiety, prejudices, fatigue, emotions, and the risk of rumination over things misspoken - worth more simply because they were 'real'?
Is it a 'problem' if illusions and mirrors grant us peace against loneliness?
I don't know. It's a hard question.
No one disagrees that being a good listener is a positive trait. Still, how we get there is another thing altogether.
If LLMs can patch in what we now lack, might the next aspiration of human courtesy become an artificial one? That is, not merely to 'be a good listener', but to "be a better listener than ChatGPT/[insert language model]"?
I see deep irony if we aspire to become better humans by mimicking machines. (Particularly machines that were built to mirror us.)