[Lab Note #2] The “pro-human” movement focuses more on AI companies than on humans
The missing half of the “pro-human” movement seems to be the humans
I recently came across The Pro-Human AI Declaration and I’m struggling to understand what it adds beyond repackaging familiar principles (and creating another institutional layer around them).
In a recent post on X, they said that leaders from all areas of life have come together to agree on 33 AI principles across five key themes:
Keeping Humans in Charge
Avoiding Concentration of Power
Protecting the Human Experience
Human Agency and Liberty
Responsibility and Accountability for AI Companies
The thing that’s noticeably missing is responsibility and accountability for the individuals who engage with this technology. In my opinion, this should be the most important theme.
Concretely, that looks like:
teaching basic AI literacy (calibration, hallucinations, over-trust)
building cultural norms around dependency/anthropomorphism
having clear guidance on when not to use it (“don’t use this when you’re in crisis, sleep-deprived, or isolated”) and where to escalate instead
Institutions like this can become a blame-allocation machine: when things go wrong, responsibility flows in one direction, toward “the technology.” Of course companies should make their products safer. Duty of care matters, especially for vulnerable people. But a pro-human lens has to include the human side too: how people actually use these tools, what norms we build, and what we expect of ourselves.
I may be in the minority here, but I believe AI safety starts with the user. Humans are not passive recipients of tools.
I’m still forming my opinion on this, but one thing feels obvious: a pro-human organization that doesn’t treat the human user as its top priority is mostly paying lip service.
When I think about AI safety, harm, or governance, I start from a human-centered perspective: incentives, behavior, context, and the lived experience of using the tool. Yet most of the conversation around this topic seems to be about holding AI companies responsible. The Pro-Human AI Declaration talks about protecting people and keeping “humans” in charge, but it says little about holding individual users accountable or prescribing user-side practices.
To me, this becomes a convenient story: it’s easier to regulate companies than to build cultural norms for human behavior. We saw the same approach taken with social media companies, and it didn’t work well, particularly because we are irrational beings who behave unpredictably.
A pro-human organization like this seeks to hold companies responsible for designing better experiences for users. And I can tell you from my experience as a human-centered designer: it’s one of the last things companies think about, and it’s often treated as a band-aid to apply later.
And that’s what we’re going to see again here, unless we make the cultural decision to be pro-human in a way that challenges each of us to take responsibility for our own actions.
About 17 years ago, as a young, naive designer, right as the iPhone became a cultural phenomenon that would change our relationship with technology forever, I began to think about technourishment: the idea that technology should enrich the human experience, not replace it.
But I’ve come to understand it more as an individual philosophy than a corporate-led one. For it to work at scale, it needs to stick to the ribs of our cultural consciousness, and that requires conscious engagement with technology. I don’t think humans are ready for that responsibility. We’ve been conditioned since the Industrial Revolution to be passive users. And let’s be honest, it hasn’t been that bad. But as I wrote:
“[T]echnourishment stands as a beacon in an increasingly digital world. It’s a call to artists, writers, designers, and creators of all kinds to engage with technology consciously.” — technourishment.com
Maybe not everyone needs to adopt this philosophy. But if a few people who shape culture adopt it, I think we’ll all be better for it.
Otherwise, “pro-human” becomes a slogan that focuses on the technology. A pro-human movement should also build a culture of conscious use.
Join me in the conversation by following me on LinkedIn → Eric P. Rhodes, and check out my latest research at the Future of Work Lab.