In the Age of AI, a Giant Glowing Poster Speaks Louder Than a Neural Net

The artificial intelligence boom has delivered many things: soaring valuations, lavish conferences, and a collective hallucination that reality only counts when mediated through a screen. But as thousands of executives flocked to yet another financial services convention this year—badged, badged again, and Bluetooth-tracked—we made a different kind of bet.

While everyone else chased attention inside the browser, we took a more literal route: the wall.

We were advertising Sedric (where I serve as VP of Marketing), a company that builds AI-powered compliance platforms, the old-fashioned way. Not through high-cost, low-ROI LinkedIn banners or other digital noise, but through sheer physical presence. I bought media at the airport. I staked out a massive glowing digital banner on the convention center, directly in the path of oncoming foot traffic from the sponsor hotel. And at the heart of the event, we planted a floor-to-ceiling digital monolith at the main entrance of the exhibit hall: a display so large it made the official conference signage look like directional Post-it notes.

And it worked.

Against the Grain

Conventional wisdom in the AI space says the smartest way to market is through smarter algorithms. Target. Retarget. Personalize. Automate. You’re not marketing unless your funnel has a funnel.

But conventional wisdom is often wrong. When they zig, you gotta zag!

What digital advertising gains in precision, it loses in memory. The average human sees between 4,000 and 10,000 ads per day, most of them algorithmically served and instantly forgotten. What they don’t forget is the thing they had to crane their necks to see.

That’s exactly what happened with Sedric’s presence at the event.

Scene One: Baggage Claim Bravado

Janet Uses Sedric Airport

Our campaign began long before attendees reached the venue. As conference-goers arrived en masse and exited the airport’s secure area, they were greeted by a massive sign: “Janet Uses Sedric.” The message repeated on a series of standing signs, and again at baggage claim.

No jargon. No generative promises. Just a short, memorable statement. More than one person told us later that this was their first encounter with the brand: before they opened the conference app, before they walked into the hotel. The message registered not because it was targeted, but because it was unavoidable. More than a dozen people asked us who “Janet” is and what Sedric is all about.

Scene Two: The Pavement Plays Its Part

Outside the convention center, most brands were busy buying pixels. We bought concrete. A series of Sedric-branded installations lined the walkways—bold, self-assured, almost austere in their restraint. They didn’t shout. They stood. And in doing so, they created a physical rhythm. A drumbeat of visual authority that led attendees from curb to check-in desk.

Scene Three: The Tower

Janet Uses Sedric ACA Entrance

Then came the showstopper: a floor-to-ceiling digital ad at the entrance to the exhibition hall. Twenty feet tall, placed dead center, with a clear and deliberate message.

While most companies splintered their branding across 10-by-10 booths and awkward giveaways, we went vertical. Our display loomed over the exhibit floor, casting a long shadow—literally and metaphorically—over lesser signage. This wasn’t marketing. It was architectural intent.

People took photos in front of it. They posted about it. And they remembered it.

Results You Can’t Scroll Past

The payoff came quickly, and tangibly. Even before the conference began, attendees were snapping pictures of our airport banners and sending them, unsolicited, to our CEO with messages of praise. I’ll admit: it was a great feeling to kick off the conference with such an enthusiastic, validating response from our CEO. Booth visitors arrived saying, “I saw your billboard and had to ask what Sedric is.” My presentation on the Innovation Stage drew a full house, with dozens more standing at the back and sides: not because they knew us from LinkedIn, but because they saw us from the sidewalk. Nearly a hundred conversations were influenced by the media blitz, and a thousand conference attendees (and envious competitors) saw the Sedric brand loud and clear. We did nearly 50 demos over a day and a half. Not bad for a startup.

After the event, existing customers reached out to our customer success team, unprompted, to share photos and praise and say they’d seen our massive messages displayed at the venue. One billboard. Hundreds of conversations. Thousands of impressions. The campaign generated more meaningful dialogue than months of sponsored content could have hoped to. And not a single chatbot was required.

The Irony of the Moment

In a year dominated by talk of large language models and autonomous agents, it was oddly satisfying to hear constant chatter about our brand, not because some new agentic AI surfaced it, but because a massive screen did.

In marketing, there’s a term called “category capture”—the moment when a brand becomes synonymous with a problem space. We didn’t need AI to generate that moment. We needed space, scale, and the nerve to defy the trend.

Lessons from the Analog Edge

The AI industry talks incessantly about disrupting the old playbook. But sometimes, the oldest plays still work—especially when no one else is running them.

Physical presence is the most underutilized lever in tech marketing today. In a landscape obsessed with “reach,” we chose resonance. In a digital arms race, we brought a billboard.

And as it turns out, when everyone zigs toward hyper-targeted AI ads, the most strategic move might just be a really, really big zag.

The Hidden Insatiable Thirst of AI: Unveiling the Water Footprint of Intelligence

Can you guess how much water the AI model consumed to generate the featured image of a thirsty robot for this article? Read on for the answer.

In the digital age, artificial intelligence (AI) has become synonymous with innovation and efficiency. Yet, behind the seamless interactions and rapid computations lies an often-overlooked environmental cost: water consumption. As AI systems like ChatGPT become integral to various sectors, understanding their water footprint is crucial.

The Invisible Drain: Water and AI

AI models, particularly large language models (LLMs), require substantial computational power. This power generates heat, necessitating cooling systems in data centers. Many facilities employ evaporative cooling, which consumes significant amounts of water. Once used, this water evaporates and cannot be reclaimed, contributing to the overall water footprint of AI operations.

A 2023 study by researchers at the University of California, Riverside, and the University of Texas at Arlington highlighted this issue, estimating that training GPT-3 in Microsoft’s U.S. data centers could consume approximately 700,000 liters of freshwater. The study also projected that by 2027, global AI water consumption could reach 4.2 to 6.6 billion cubic meters, surpassing the annual water withdrawal of countries like Denmark.

Quantifying the Cost: Insights from Industry Leaders

OpenAI CEO Sam Altman has addressed concerns about AI’s environmental impact. In a June 10, 2025 blog post, The Gentle Singularity, he revealed that an average ChatGPT query uses about 0.000085 gallons of water—roughly one-fifteenth of a teaspoon. While this figure seems minimal, the cumulative effect across billions of queries is substantial.

Altman also noted that each query consumes approximately 0.34 watt-hours of energy, comparable to powering a high-efficiency lightbulb for a few minutes.

“As datacenter production gets automated,” Altman writes, “the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.)”
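Taken at face value, Altman’s per-query figures can be scaled up to show why the cumulative effect matters. A minimal sketch in Python, assuming a purely hypothetical volume of one billion queries per day (the daily query count is my illustrative assumption, not a disclosed figure):

```python
# Scale Altman's per-query figures to a hypothetical daily query volume.
# The 1-billion-queries-per-day input is an assumption for illustration.
WH_PER_QUERY = 0.34           # watt-hours per query, per Altman's post
GALLONS_PER_QUERY = 0.000085  # gallons of water per query, per the post
LITERS_PER_GALLON = 3.785

def daily_totals(queries_per_day: float) -> tuple[float, float]:
    """Return (megawatt-hours, liters of water) for one day's queries."""
    mwh = queries_per_day * WH_PER_QUERY / 1e6
    liters = queries_per_day * GALLONS_PER_QUERY * LITERS_PER_GALLON
    return mwh, liters

mwh, liters = daily_totals(1e9)  # assumed: 1 billion queries/day
print(f"{mwh:.0f} MWh and {liters:,.0f} liters per day")
# → 340 MWh and 321,725 liters per day
```

At that assumed volume, the one-fifteenth-of-a-teaspoon sips add up to hundreds of megawatt-hours and hundreds of thousands of liters every single day.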

Despite these assertions, experts argue for greater transparency and detailed data to assess the broader sustainability of AI technologies. Environmental advocates stress the need for comprehensive reporting on water usage, especially in regions facing water scarcity.

The Broader Implications: AI’s Environmental Footprint

The environmental impact of AI extends beyond water consumption. Data centers contribute to carbon emissions, with the energy required for training and operating AI models often sourced from fossil fuels. A 2024 report indicated that training large AI models like GPT-3 released approximately 552 metric tons of carbon dioxide, equivalent to the annual emissions of 123 passenger vehicles.

Moreover, the location of data centers plays a critical role. Facilities situated in drought-prone areas exacerbate local water shortages. For instance, data centers in Virginia’s “Data Center Alley” have seen water usage surge by nearly two-thirds between 2019 and 2023, raising concerns among environmentalists and local communities.

The Risk No One Talks About

In the context of financial services (Sedric’s domain, where I lead marketing), this is no abstract concern. Rather, it’s a compliance issue hiding inside an engineering detail.

Consider a global bank adopting large-scale AI for client communication, transaction monitoring, or internal compliance. It might find itself hosting those models in a jurisdiction where water is scarce. What begins as a procurement decision may end up a reputational vulnerability. Stakeholders, regulators, and even ESG auditors may ask: Did you know what your infrastructure was doing to the local watershed?

This is not to suggest catastrophe. But it is to suggest complexity, something compliance teams are not just trained to handle but ethically required to anticipate. Just this year, many parts of Northern and Western Europe have fallen into drought. While people are being told to scale back water usage, what about our relentless AI queries? In 2024, Europe’s data center industry consumed about 62 million cubic meters of water, the equivalent of roughly 24,000 Olympic swimming pools. Until there is pressure to find more efficient methods, the thirst for water resources will only grow.

Navigating the Future: Sustainable AI Practices

Addressing AI’s environmental impact requires a multifaceted approach:

  1. Innovative Cooling Solutions: Companies are exploring alternative cooling methods, such as air cooling and liquid immersion cooling, to reduce water usage.
  2. Renewable Energy Integration: Transitioning data centers to renewable energy sources can mitigate carbon emissions associated with AI operations.
  3. Strategic Data Center Placement: Locating data centers in regions with abundant water resources and cooler climates can alleviate pressure on water-stressed areas.
  4. Transparency and Reporting: Implementing standardized reporting on water and energy usage can foster accountability and inform sustainable practices.

Balancing Progress and Sustainability

As AI continues to revolutionize industries, it’s imperative to balance technological advancement with environmental stewardship. Recognizing and addressing the water footprint of AI is a critical step toward sustainable innovation. At Sedric, we advocate for responsible AI development that considers not only performance metrics but also ecological impacts. By fostering transparency and embracing sustainable practices, we can ensure that the growth of AI aligns with the health of our planet.

Epilogue (for my Science fam)

So, just how much water did the AI model consume to generate an image of a thirsty robot? Let’s jump into the numbers.

Assumptions and Methodology

1. Energy Cost of One Image Generation

Based on various benchmarks, generating a single high-resolution DALL·E image (1024×1024 or larger) consumes approximately 2.9 to 5.4 watt-hours (Wh) of electricity, depending on server load and GPU type.

2. Water Consumption per kWh

Water usage depends on:

  - Evaporative cooling at data centers
  - Water used in electricity production (if fossil or thermal-based)

Average combined water use (direct + indirect) in U.S. data centers: ~1.8 liters per kWh.

Estimated Water Use

We apply the formula:

Water (liters) = Energy (kWh) × 1.8 L/kWh

For 2.9 Wh:
0.0029 kWh × 1.8 L/kWh = ~5.2 milliliters

For 5.4 Wh:
0.0054 kWh × 1.8 L/kWh = ~9.7 milliliters

Result:

Estimated water used to generate the robot image:
~5–10 milliliters of freshwater

That’s roughly one to two teaspoons of water — enough to hydrate a parched AI for a single, satisfying sip.
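For readers who want to reproduce the estimate, the arithmetic above fits in a few lines of Python. The 2.9–5.4 Wh energy range and the 1.8 L/kWh water intensity are the figures quoted earlier in this piece, not measurements of my own:

```python
# Back-of-envelope check of the estimate above: energy per image times an
# average U.S. data-center water intensity of ~1.8 L/kWh (direct + indirect).
WATER_L_PER_KWH = 1.8

def image_water_ml(energy_wh: float) -> float:
    """Water footprint in milliliters for one image generation."""
    kwh = energy_wh / 1000          # convert Wh to kWh
    return kwh * WATER_L_PER_KWH * 1000  # liters to milliliters

low = image_water_ml(2.9)   # low-end energy estimate
high = image_water_ml(5.4)  # high-end energy estimate
print(f"{low:.1f}-{high:.1f} mL per image")
# → 5.2-9.7 mL per image
```

Plugging in the low and high energy figures returns roughly 5.2 and 9.7 milliliters, matching the teaspoon-scale estimate above.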

The AI Emperor Has No Clothes: Builder.ai and the Perils of Overpromising

A cautionary tale of AI Unicorn roadkill.

Silicon Valley has always been a land of mythmakers. A place where garage-born startups become billion-dollar behemoths overnight, and founders sell visions faster than they ship products. But in the current gold rush for artificial intelligence, we’ve reached an inflection point. Hype is no longer just a marketing tool — it’s a liability.

Enter Builder.ai, a case study in the catastrophic consequences of selling the illusion of intelligence.

The Dream: Software Without Developers

Founded in 2016, Builder.ai rode the rising tide of AI evangelism with a seductive proposition: building custom software should be as easy as ordering a pizza. Their platform, they said, would let anyone design and deploy fully functional apps in days—powered almost entirely by artificial intelligence. Vibe coding before vibe coding was a thing.

Investors believed. Microsoft poured in cash. The Qatar Investment Authority joined a $250 million Series D round, pushing Builder.ai’s valuation past $1 billion. Headlines hailed it as the future of no-code development. The face of the company, founder Sachin Dev Duggal, was cast as the spiritual heir to Steve Jobs and the technomancer who would make developers obsolete.

Except the magic wasn’t real.

The Illusion: AI That Wasn’t AI

Behind polished pitch decks and flashy marketing videos, Builder.ai was quietly doing something very old-fashioned: using armies of human developers, most of them based offshore, to manually write the code that their “AI assistant” Natasha was supposedly generating in seconds. Essentially, a mechanical Turk.

This wasn’t just garden-variety vaporware. It was an industrial-scale mirage. The company misrepresented how much of the process was automated and how scalable its technology actually was. Former employees, speaking to multiple media outlets, described internal chaos, missed deadlines, and mounting client complaints — all while the company maintained a slick public narrative of frictionless innovation.

By 2025, the truth began to emerge in earnest: Builder.ai had overstated its projected revenue for the year by a staggering 300%, slashing expectations from $220 million to just $55 million. These weren’t minor adjustments — they were tectonic retractions, the financial equivalent of a house of cards collapsing in a wind tunnel.

The Implosion: From Unicorn to Cautionary Tale

The reckoning came swiftly. Viola Credit, one of Builder.ai’s main creditors, began proceedings to seize assets. Investor faith evaporated. The company laid off large portions of its workforce, shuttered operations in several countries, and quietly began seeking buyers for what remained of its technology.

By May 2025, Builder.ai had gone from galloping unicorn to roadkill — an emblem of everything broken in the current AI funding landscape.

AI Washing and the Seduction of the Synthetic

Let’s call this what it is: AI washing, the cynical rebranding of human labor as machine intelligence to juice valuations and dupe stakeholders.

Builder.ai is not the only offender, but it may become the most iconic. As The Next Web wrote, “FOMO investing” — the fear of missing out on the next OpenAI or Anthropic — has blinded the tech ecosystem to fundamental due diligence. In the rush to be first, investors forgot to ask: Is any of this real?

There’s something almost poetic about Builder.ai collapsing under the weight of its own myth. This was a company that sold artificial intelligence not as a tool, but as spectacle — a magic trick. But once the curtain was pulled back, all we saw was a tired stage crew frantically pulling levers behind the scenes. A ruse.

I write this not as a finger-wagging critic, but as someone deep inside the AI ecosystem. At Sedric, we build AI-powered platforms for regulated industries, where trust isn’t a nice feature to have—it’s the whole product. And every day, we see how seductive it can be to stretch the truth about what your AI can do. To blur the line between automation and augmentation. To call it “AI” when it’s really just a smart decision tree duct-taped to some manual processes.

I’ve seen more than a few fellow players in the RegTech space guilty of AI-washing their capabilities. One dead giveaway of over-hyped AI is processing time: reports that genuine AI should produce in minutes somehow take a day or more to complete.

But hype is a sugar high. And eventually, reality always shows up with a clipboard and a bankruptcy petition.

Final Thoughts: Reality Is Having a Moment

Builder.ai’s story isn’t just a business failure. It’s a morality play about the cost of ambition untethered from reality. It’s about how we fetishize disruption without asking whether it’s real—or whether we even need it.

And it’s a warning to every founder, investor, and marketer working in AI today: don’t sell the future if you haven’t built the present.

Because in the end, no amount of buzzwords, TED talks, or CGI demo reels can save you from the truth.

Remy Rosen is the VP of Marketing at Sedric, where he leads brand and communications for an AI company that believes responsibility is not optional. Opinions are his own, but feel free to borrow them.