AI Doubt And Real World Consequences
Mejia | CLMI
When Anthropic, an influential developer of advanced artificial intelligence, recently clashed with the Pentagon over concerns about the use of its technology, national security experts and Silicon Valley observers took an unprecedented pause: How far should government go in using AI for military, surveillance and law enforcement missions? Anthropic was a trailblazer, the first developer of its kind whose technology was used by government agencies for classified work. Yet by early this year, with clashes between Anthropic executives and Pentagon brass intensifying, the relationship finally broke over an ongoing debate about whether AI could be used to spy, carry out offensive operations, counterattack or kill.
The confrontation between the developer and the Department of Defense adds a fresh layer to an ongoing conversation about the introduction of AI into our lives. It is being used everywhere: at work, in classes and at home. Some may argue that it has invaded every facet of our lives. That creates an opportunity for a larger, critical debate about how much humans rely on technology, and how much we trust it, as it rapidly evolves from a simple Internet communication tool into the polished digital assistant and companion of the future. There are pros and cons, and an understandably jittery public is asking more questions than it did just a few years ago.
The AI Effect
Views on the now ubiquitous presence of AI are decidedly mixed. As Gallup shows, daily, weekly and monthly use in the workplace has more than doubled in less than two years as organizations continue to adopt AI technologies for speed and efficiency …
That use is much more pronounced, naturally, in the technology and finance sectors …
Yet there is a growing sense of unease over its use. In a recent FOX News poll, 6 in 10 registered voters said the technology’s adoption is moving too fast …
A majority (63%) are also increasingly skeptical about the government’s ability to regulate AI’s growth. That mirrors another recent survey, from Verasight, in which over half of all Americans said they don’t trust AI, citing concerns over …
• Loss of human oversight in AI systems (63%)
• Loss of human creativity and interaction (62%)
• Cyberattacks enabled by AI (61%)
• Misinformation in AI output (61%)
• Job displacement (54%)
Views diverge dramatically between enthusiastic AI experts and a more skeptical public: in a 2025 Pew survey, 56% of AI experts were positive about AI’s impact over the next 20 years, compared with just 17% of the public. That is an enormous gap.
The Data Center Dilemma
Earth’s surface is about 71% water, yet only 3% of that water is fresh … and of that 3%, just 0.5% is safe and accessible for human consumption. That raises an important question: how much water do you need to survive? Humans drink about four to six cups of water a day on average, roughly 32 to 48 ounces, even though our bodies need about 3.7 liters of total fluid a day to function properly.
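To put the article’s hydration figures in a single unit, here is a quick back-of-the-envelope conversion (a sketch using standard US conversions: 1 cup = 8 fluid ounces, 1 fluid ounce ≈ 0.0295735 liters; the four-to-six-cup and 3.7-liter figures come from the text above):

```python
# Convert the cited daily intake of 4-6 cups into liters and compare
# it with the 3.7-liter daily fluid target mentioned in the article.
FL_OZ_PER_CUP = 8
LITERS_PER_FL_OZ = 0.0295735

low_cups, high_cups = 4, 6
low_oz = low_cups * FL_OZ_PER_CUP    # 32 fl oz
high_oz = high_cups * FL_OZ_PER_CUP  # 48 fl oz

low_liters = low_oz * LITERS_PER_FL_OZ    # roughly 0.95 L
high_liters = high_oz * LITERS_PER_FL_OZ  # roughly 1.42 L

recommended_liters = 3.7  # total daily fluid intake cited in the article
```

Even at the high end, what we actually drink (about 1.4 liters) is well under half of the 3.7-liter total fluid figure, which is the gap the paragraph describes.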
But with the introduction of AI, we have entered a water crisis of epic proportions (compounding existing water pollution and water quality crises). AI data centers draw on our fresh drinking water supply to cool their machines at rates ranging from 2 million to 17 million gallons of water a day … for just one center. As Joseph Kane at the Brookings Institution notes …
[W]ater is a fundamental ingredient to keep servers and other equipment in data centers reliably cool. A typical data center uses 300,000 gallons of water each day (equivalent to the demands of about 1,000 households), but large data centers can use an estimated 5 million gallons of water each day, equivalent to the needs of a town of up to 50,000 residents. Moreover, projections show water used for cooling may increase by 870% in the coming years as more facilities come online. These direct water needs also do not include the indirect needs required for energy generated offsite or involved in manufacturing software components.
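The Brookings figures quoted above imply some simple per-household and per-person rates, worked out below (a sanity check on the article’s own numbers, not new data):

```python
# Per-household and per-resident water use implied by the quoted figures.
typical_center_gal_per_day = 300_000   # "typical data center"
households_equivalent = 1_000          # "about 1,000 households"
per_household = typical_center_gal_per_day / households_equivalent  # gal/day

large_center_gal_per_day = 5_000_000   # "large data centers"
residents_equivalent = 50_000          # "a town of up to 50,000 residents"
per_resident = large_center_gal_per_day / residents_equivalent      # gal/day

# An "870% increase" means current cooling demand multiplied by 1 + 8.7,
# i.e. nearly tenfold growth.
growth_multiplier = 1 + 870 / 100
```

So a typical center’s usage works out to about 300 gallons per household per day, a large center’s to about 100 gallons per resident per day, and the projected 870% increase is close to a tenfold jump in cooling demand.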
Households near data centers are now experiencing reduced water pressure, and some are running out of well water. According to the United Nations, we are entering “global water bankruptcy,” in which “… more than half the world’s large lakes have declined since the early 1990s, and 35% of natural wetlands have been lost since 1970.” The convergence of AI and climate change is also real: as more people use AI, more people lose access to fresh, healthy, drinkable water.
AI Impacts On Young People
As AI spreads through schools, many students use it to learn how to solve equations they don’t understand. A majority, 57%, regularly use AI chatbots as an information resource, according to Pew …
This can be very helpful, of course, but it has downsides. Some experts argue that using AI can reduce human interaction; others stress that while AI can’t and shouldn’t replace human interaction, it can strengthen human collaboration. When a student needs help, an AI tutoring program can create a direct one-on-one experience that deepens understanding of a subject through critical thinking. But will AI help you work through the question, or will it just hand you an immediate answer? Is that answer accurate? Will you pass your classes as a top student, only to discover you’re unable to grasp a wide range of subjects when you matriculate into college? An American Association of Colleges and Universities (AAC&U) survey found that “95% of … faculty … said GenAI’s impact will be to increase students’ overreliance on these artificial intelligence tools, including 75% who said the tools will have a lot of impact. [In addition,] 90% said the use of GenAI will diminish students’ critical thinking skills, including 66% who think GenAI will have a lot of impact.”
These questions matter, and so does the outright dangerous use of AI in schools. There are already countless examples of AI being used to replicate voices and faces, leading to menacing situations in which students are victimized. A RAND American School Leader Panel survey “found that 13 percent of [K-12] principals reported incidents of bullying that involved AI-generated deepfakes during the 2023–2024 and 2024–2025 school years …”
MIRANDA MEJIA is a Fellow at the Civic Literacy and Media Influence Institute at Learn4Life