
As AI infrastructure expands into local communities, the harder question is not just what the technology can do, but who benefits, who pays, and who gets a say.
I saw a meme recently that said maybe people would care more about AI data centers if we called them “AI surveillance centers.”
That wording is loaded, obviously.
But buried under the slogan is something worth paying attention to.
A lot of the public conversation around AI is starting to split into two camps.
One side treats AI like inevitable progress.
The other treats it like a threat to almost everything human.
And the more I watch it, the more I think the real issue is not just AI.
It is agency.
People are not only reacting to models, GPUs, chatbots, or automation. They are reacting to the feeling that massive systems are being built around them, using public infrastructure, local resources, personal data, and future labor markets, while ordinary people are mostly expected to adapt.
That is where the water conversation gets interesting.
Most people understand that data centers use electricity. Fewer realize that many large data centers also use water for cooling, often through evaporative systems that shed heat by turning water into vapor, which means much of that water is consumed rather than returned.
At small scale, that may not be obvious. I have worked around small data centers before, and water was not part of the daily picture.
But hyperscale AI infrastructure is different.
When you pack buildings full of power-hungry compute, all that energy eventually becomes heat. That heat has to go somewhere. Water is often one of the most efficient ways to move it.
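To get a feel for the scale, here is a rough back-of-envelope sketch. This is my own illustration, not a figure from any operator, and it assumes the idealized case where all heat is rejected by evaporation; real systems also dump heat to the air and use extra water for blowdown. Physics still sets a useful reference point.

```python
# Back-of-envelope: water evaporated to reject 1 MWh of heat.
# Idealized assumption: every joule of heat leaves via evaporation.
LATENT_HEAT_KJ_PER_KG = 2260      # latent heat of vaporization of water, ~2,260 kJ/kg

heat_mwh = 1.0
heat_kj = heat_mwh * 3_600_000    # 1 MWh = 3.6 million kJ

water_kg = heat_kj / LATENT_HEAT_KJ_PER_KG  # 1 kg of water ~ 1 liter
print(f"~{water_kg:,.0f} liters of water per MWh of heat")  # roughly 1,600 L/MWh
```

Multiply that by a facility drawing tens or hundreds of megawatts around the clock, and the water question stops being abstract.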
So the question is not simply:
“Are data centers bad?”
That is too easy.
The better questions are:
Who benefits?
Who pays?
Who controls it?
What resources are consumed?
What does the community get in return?
What safeguards exist?
And who gets a real say before the decision is already made?
That is the part tech leaders miss when they dismiss the backlash as fear or ignorance.
Some resistance to AI is reactionary. Some of it is exaggerated. Some of it is wrapped in slogans that flatten complicated tradeoffs into easy outrage.
But some of it is a rational response to a world where technology keeps arriving as a mandate instead of a conversation.
I am not anti-AI. I use it, study it, build with it, and think it can solve real problems.
But I also think “progress” is a weak argument when the costs are externalized onto communities, workers, water systems, power grids, and people who never had a seat at the table.
The future probably does not belong to the AI maximalists or the AI abolitionists.
It belongs to the people willing to ask harder questions before the cement is poured, the grid is strained, the aquifer is tapped, and the business case is declared inevitable.
Not anti-tech.
Not blind faith in tech.
The real divide may not be between people who understand AI and people who do not.
It may be between people who feel invited into the future and people who feel run over by it.