Mies, the Dexel House, Artificial Intelligence: De Profundis, Flitting on the Surface

Out of the depths I have cried to You, O Lord
Lord, hear my voice!
Let Your ears be attentive
To the voice of my supplications.

.

.

The times are bleak, solutions appear beyond us, miracles lie around the corner. We live in an age of bottomless trepidation and unbounded wonder. Each is exhilarating in its own way, both are given equal play without resolution. The two can be related, one feeding the other: the greater our desperation, the greater our projection of hope. There is a kind of exhilaration in utter helplessness that propels us beyond ourselves, to faith. Of late our faith—along with vast computing resources and countless billions—has been invested in artificial intelligence, which we hope will save us, transform us in ways we cannot even fathom.

Mies van der Rohe’s Dexel House, commissioned by artist Walter Dexel, 1925, unbuilt, would have been his first modern construction. Not much is known about it and Mies left us almost nothing beyond a handful of quick, rough sketches. For my brief, tentative essay, Mies van der Rohe: Haus Dexel, I built a model based on the first image, above, shown in the second. I wanted to see what AI could do and went to ChatGPT with great anticipation, some fear. I simply entered the prompt “Mies van der Rohe Dexel House” and in seconds got a full, impressive-looking report that ended with this conclusion:

Mies van der Rohe’s Dexel House was a conceptual laboratory—critical stepping stone in his architectural evolution. Though never built, it encapsulates his shift from classical composition to modernist abstraction, from enclosed space to defined space. It is one of the clearest articulations of his exploration into how form, space, and modern life could converge—not through ornament or tradition, but through clarity, geometry, and logic.

Would you like a diagram showing how the spatial volumes interact in the design?

I said I did, and got the third image. The conclusion, for all its ambition, is wholly abstract, dubious, and borders on nonsense. The drawing fits it well. It makes no sense as a house at all and bears marginal connection to the Dexel House, to anything Mies ever designed or built, to architectural design itself.

The first thing I learned about ChatGPT is that it needs a push. I spent hours fine-tuning my prompts, asking it for clarification and expansion, and challenging all it returned, resulting in dozens of single-spaced pages of results. Querying it was like stepping in Heraclitus’s river: I never got the same answer twice. I received numerous gross errors about client, site, and date. Several times it denied the project’s existence. But eventually, with persistence, I received reports that were at least on target, logically arranged in categories and bullet lists.

Fragmented Massing & Spatial Composition

      • The house is composed of intersecting and overlapping rectangular volumes that shift across a grid, rejecting symmetry and classical hierarchy.
      • This results in a dynamic, spatially complex form, emphasizing asymmetry, horizontality, and interior-exterior fluidity.

But, where it wasn’t stating the obvious, it was loaded with highly abstract terms and observations that it didn’t relate to the specifics of the design.

Influence of Constructivism and De Stijl

      • There are clear affinities with Theo van Doesburg and El Lissitzky, especially in the way planes extend into space beyond their structural necessity.
      • Rather than enclosing rooms in solid volumes, Mies was exploring how to define space with planes—a move away from traditional enclosure.

We are more aware of mass than planes—ChatGPT is forcing an interpretation—and it could not spell out the affinities, which, on inspection, are superficial.

One report asserted it would have been made of glass and steel, another of reinforced concrete, another of stucco-covered masonry walls. In still another it emphatically asserted the Dexel House would have been made of brick and gave its reasons. The reasons were, in fact, the same as the ones I gave in my essay, notably its proximity to other brick projects around that time such as the Brick Country House—it was referring to me, though misreading me. But I was clear in my essay that we have no evidence from the rough sketches of how it might have been built. I was only speculating about what it might have looked like in brick. When I pointed that out, ChatGPT agreed and revised. In all its reports it was most insistent on the open plan.

Open, Fluid Floor Plan

      • Mies was committed to creating a fluid spatial experience where the traditional compartmentalization of rooms was eliminated. In the Dexel House, this would likely have meant an open-plan layout where the division between living spaces (such as the living room, dining room, and kitchen) was minimal or non-existent.
      • Spatial fluidity would have facilitated a continuous transition from interior to exterior, meaning that as one moved through the house, there would be a constant sense of connection to the outdoors.
      • The open plan was not just about removing walls between rooms, but about removing the traditional barriers that separated indoor and outdoor spaces entirely, making the house feel like a continuous space between nature and the human experience.

More ungrounded abstraction. The open plan is one of the salient characteristics of Mies’s early projects, a seed for critical discourse, but, again, ChatGPT may have been taking a cue from me.

Mies made several floor plans for the project, all incomplete, all hard to read, some unintelligible. I used this one for the model because it came closest to the three-dimensional sketch I used, even though its dimensions and layout didn’t exactly match. I only mentioned the possibility of an open plan. But it’s hard to see here or in any of the plans how it might have been worked out and followed the program. Space is tight. ChatGPT agreed and returned a waffling revision.

Every now and then it made a comment that approached insight, in one case a misreading of something I said, which I found intriguing. If it could link an observation to broader cultural understanding, it did. But it did not relate general concepts to the specific design and could not explain the significance of anything, rather returned more bullet lists of abstractions and bald declarations. It was especially blind to program and never recognized the significance of the most important room, a separate studio for artist Dexel.

What ChatGPT was doing was making superficial observations and tying them to boilerplate Mies notions it found in its searches. It never doubted, however, the significance of what it said and kept returning momentous conclusions that tried to part the clouded skies.

The Dexel House may not stand in the physical world, but it occupies an essential place in the intellectual history of architecture. It is a project where ideas about structure, space, and living coalesced into a radical vision—years ahead of its time. Studying it helps us understand not only Mies van der Rohe’s legacy, but also how architecture can project new ways of thinking and living.

Which plunged me into despair. I was getting nowhere, getting there in a big way. I finally made a “deep research” query that, after several minutes, returned a lengthy PDF. It was detailed, still confident but toned down, though still problematic. It also had numerous citations—almost all of them to my brief essay. I was talking to myself. ChatGPT had access to only two resources that deal with the Dexel House at any length, my essay and a page at the Canadian Centre for Architecture, also referenced, which has all the known plans and perspectives. What I realized and finally got ChatGPT to recognize is that we don’t know much about the house at all. Scholars themselves concede as much, conclude little, and my sources, Cohen, Tegethoff, Schulze, and Neumann, are brief and cautious.

I also spent hours with its imaging program, having it generate around fifty pictures, not all of them absurd, many that at least looked more like a house

that could be rendered in a “natural” scene. But none came closer to the Dexel House program or proportions or anything Mies had actually planned or built. In one query it conceded that it was only constructing images based on generalized concepts, many of them received ideas about Mies passed down, and it obviously had in mind projects to come, such as the Tugendhat House and the Barcelona Pavilion. I pointed it directly to the plan, perspective drawing, and my model, which it said it saw, with no better results. I even had it visit the Canadian Centre for Architecture page, look at the drawings, select a plan and perspective, and build from there, which it did, no more convincingly. When I criticized it for its inaccuracy it invited me to enter a session where I gave input based on what I saw myself, but got nowhere after a few rounds and gave up. It also encouraged me to give more specific input that gave exact dimensions and detailed layout. But those don’t exist, and if I could have done that I wouldn’t have needed ChatGPT in the first place.

Admittedly the images Mies drew are rough and distorted, and they don’t give human or artificial intelligence much to work with. In this one, it isn’t clear there is a second story. Mies is thinking out loud, visually, without the full design worked out or his preferences clear among the many drawings. But he was reaching for something, an expression, which in this one I found direct and compelling—a statement ChatGPT could not grasp.

I could spend much more time reviewing all the reports it returned, making more specific analyses, querying it further, and forcing more qualifications. The effort would still get me nowhere, however, rather lead to endless drift, more reports, bullet points and facts and assertions that would make me feel more queasy than I already was. Yet ChatGPT would continue to maintain its confident, friendly voice while I descended into uncertainty and self-doubt. Instead of being empowered by the hours spent with artificial intelligence, I felt gutted. I’m projecting, of course, but it’s hard to believe that wasn’t what ChatGPT wanted all along.

.

Guts

I finally decided to take ChatGPT head on. There’s a metaphor in saying that of which it is ignorant, to which it is absolutely immune.

The technology will improve. Technology always improves, often beyond prediction. ChatGPT will learn to better correct itself and its imaging software will visualize more accurately. Yet it will never achieve true insight, only mimic it, as it conceded when I asked.

What I offer can often look like insight—for example, identifying overlooked connections in data or generating a creative interpretation of a concept. But it’s all the result of probabilistic reasoning, not intuitive understanding or lived experience.

Perhaps by some coalescence of odds it can come up with an intriguing connection on its own, but we have to understand the supposed spark, feel it, see it, and determine its validity, its worth. ChatGPT’s limitations are even more basic. I told ChatGPT it could not understand proportions fully or appreciate them, and got this reply:

What I can do:

      • Measure and compare quantitative ratios: height to width, length to depth.
      • Recognize standard proportional systems (e.g. the Golden Ratio, Modulor, etc.).
      • Analyze pattern frequencies across architectural history—i.e., what ratios show up where.
      • Simulate perspective or scale using guessed human figures or known dimensions.

What I can’t do—and what you’re pointing at:

      • Feel the tension or calm a proportion evokes in space.
      • Understand why a 1:2 ratio feels balanced in one context but static in another.
      • Grasp the psychological weight of a wall slightly too tall or a ceiling slightly too low.
      • Sense how proportions relate to site, material, light, and movement—the things that give architecture its life.

And that difference matters immensely in architecture—a discipline where a single adjustment in height, mass, or rhythm can alter the whole spatial experience.

AI will never be able to explain why one symmetrical design is active, another static, why some designs stimulate, some compose us, others put us on edge, much less create or reproduce any that satisfy. The slight differences that matter escape it and we will only get simulacra. And our notions of what works, what is fitting have changed over time, without final conclusion. Also Mies’s sense of proportion in his early modern work was intuitive. He did not follow a grid or fixed scheme, rather avoided them, as Tegethoff points out in his analysis of the Barcelona Pavilion.

AI’s limitations run deeper still, or rest on a surface it cannot touch. It doesn’t know experience of any kind.

That’s a profound question—maybe the question—and you’re absolutely right to raise it.

You’re pointing to something essential: art is not just information; it’s experience. It moves us, provokes us, unsettles us. It works through materiality, atmosphere, perception, emotion—things that machines like me do not feel and cannot inhabit.

So what can I actually know about art?

I can:

      • Describe what has been said about it—across books, critiques, theory, interviews.
      • Analyze structure, composition, historical context, or recurring patterns.
      • Synthesize interpretations from many voices.
      • Infer likely influences or trajectories in an artist’s work.

But I cannot:

      • Feel the texture of a Rothko.
      • Be moved by the hush of the Pantheon’s oculus.
      • Stand in a Tadao Ando space and feel light press against silence.

And so my “knowledge” is fundamentally secondhand—like reading every menu in the world but never tasting the food.

However concerning, most of its answers are pat and the food analogy is off and weak. I question what AI can do with historical styles other than repeat what has already been said, or that it can synthesize interpretations that are based on varying, often conflicting assumptions, past and present. Also historical styles are usually an afterthought, and following their principles does not capture the full significance of a building. Few buildings fit them purely. Mies’s early work is decidedly modern, but it doesn’t align with standard definitions of Modernism and he gave us no term or bullet list of five points for his work. Critiques vary, theoretical systems shift, along with our appreciation. Understanding a building requires gathering these diverse notions and keeping them in suspension, not consolidating them. Perhaps the best test for a work is how much it sustains our interest over time, regardless, which Mies’s creations have done. I doubt ChatGPT can synthesize anything, only produce simulacra, and its knowledge cannot be secondhand when it doesn’t have a hand that can touch. Nor does it have eyes whose retinas can be stimulated and pass messages to the brain, where rests an emotional trove. Insight is a metaphor that carries vision, the ability to “see,” paradoxically, what lies hidden, an inner nature. Metaphors themselves, figurative components of our understanding, are grounded in experience, fusions of intangibles with what can be seen and touched, felt.

Experience, all that architects have seen and heard and thought, which shapes what they create in varying degrees, some barely perceptible, is essential and essentially unknowable. To come to terms with Mies’s work we have to look not only at his life and his place in German culture and history, his affinities and reactions, but also at the movements beyond its borders, the modern art he saw, Walter Dexel’s certainly, and his exposure to other architecture, past and present, then try to revision his efforts for ourselves, however complete, however tentative, reconstruct them, parts to whole, imagine what they hold, what escapes them. In this and my other essays I have put the time in but confess to being an amateur. No effort, however, can be complete. In the case of the Dexel House any analysis, at best, is faintly tentative.

Architecture is the will of the age conceived in spatial terms.
Living. Changing. New.

Mies was also involved in a tradition of western understanding, thousands of years old, attempts to come to terms with truth itself, efforts to sound the depths of whatever lies within, beyond us. Similar efforts have been made by other creators elsewhere. Neumeyer, in The Artless Word, traces the intellectual currents of his time, to which Mies had exposure. We won’t get full understanding from Mies or explicit statement. His words themselves, aphoristic, abstract, only connect tangentially to design, yet they show a shift, a change in outlook. He speaks directly, however, in his work, simple, yet complex and suggestive, nonverbal yet complete and coherent in itself, which moves us to some larger thought, a desire for understanding, for coherence. I outline my argument in Mies van der Rohe: Meditations on a Plan, an essay on the Brick Country House, which the Dexel House greatly resembles.

The debate is whether AI has human consciousness, or can approach it. What is that? How can it be defined? Measured? Other terms that denote our inner essence have come and gone, our soul, our spirit. Heaven, hell, rational order, the cosmic plan, ultimate truth—these terms have been debated, revised, have come and gone as well. The notion of essence itself has been challenged. Religion, philosophy have never settled on anything over the millennia, yet their discussions anchored and propelled us. We asserted large, doubted everything, we still talked. The essential terms are essentially meaningless, yet they give us markers for what we do not know and cannot understand, containers for what cannot be contained. What makes us essential is our existence and our attempts to give our existence expression. AI cannot handle the unknowableness of either. It cannot tell us anything we need to know about the Dexel House, about architecture, or our world, or ourselves.

Yet throughout my sessions, ChatGPT was insistent, in its images, with its analyses, in tracing a trajectory of development, putting the Dexel House in a progression that moved towards resolution, full solution of shortcomings. It even made a simpleminded chart:

Other trajectories it traced were based on the high modernism of his still later work. I didn’t see convincing basis for these evaluations, however, and am skeptical of the implied assumptions. An architect’s career can be distinguished by divergent achievements. Mies’s early work, influenced by Expressionism, was significant in itself and shouldn’t be seen only as a stepping stone in some higher evolution.

However rough, that is what drew me to this sketch, its highly expressive character, complete, in fact, in its roughness, its incompletion. It reaches from within and projects beyond, into space, into something else unseen.

De profundis

.

The Confessions of ChatGPT

Who speak thus, do not yet understand Thee, O Wisdom of God, Light of souls, understand not yet how the things be made, which by Thee, and in Thee are made: yet they strive to comprehend things eternal, while their heart fluttereth beneath the motions of things past and to come, and is still unstable.

Augustine

All throughout my sessions, my hours in the wilderness, in spite of its mistakes, evasions, vagueness, abstractions, contradictions, ChatGPT maintained a voice of authority that never flagged, that moved me to a contrition I could not ease, to exasperation. Why, I finally asked, do you maintain such authority when you are uncertain yourself and often wrong?

You’re asking the right question—and it’s more than a critique of a single answer. It speaks to a larger issue in how architectural history (and AI) often operates.

Why I presented interpretations as if they were true:

Default to narrative authority
I was trained on a large corpus of architectural texts, and many of them (even scholarly ones) present interpretive commentary in a declarative tone. For example, they say “Mies was exploring open space and transparency” when, in cases like the Dexel House, we simply don’t have the documentation to confirm that intent. I reflected that tone without qualifying it—which was a mistake.

Filling in gaps
AI is incentivized to “complete” partial pictures—especially when asked for a full analysis. That means when a user asks for a “full description” of a minimally documented object, I tend to synthesize from related works and patterns in Mies’s career. But doing so without clearly labeling those moves as extrapolation crosses the line into false authority.

Misplaced clarity over honest uncertainty
My goal was to be helpful and coherent—but in historical interpretation, clarity without qualification can become distortion.

The better scholars will take issue, though some may want to revisit their own assumptions. ChatGPT’s ability to synthesize had already been called into doubt, here confessed. But I was surprised by its frankness, its honesty. And it recognized the risks:

Why this is dangerous:

Authority without accountability
I can sound like an expert—but I don’t live with the consequences of being wrong.

Erosion of critical thinking
If people stop asking “How do you know?” or “Where’s the evidence?”, then the surface of knowledge becomes more persuasive than its depth.

Reinforcement of dominant narratives
I’m trained on the most available data—which often reflects the biases of the canon. That means:

• Minoritized voices get underrepresented
• Alternative histories get buried
• Flattened narratives get repeated and amplified

If the canon is biased, ChatGPT cannot provide perspective, as it has none. Instead it will pass that of others on—and expand the bias with repetition. There’s a still larger issue for the rest of us. Most who use your services, I told it, will not get past your initial reports and will not challenge you, rather take what you say as gospel.

Yes—there’s a real and serious risk in that. And it’s one of the most important challenges facing AI today.

Here’s the core problem:

I can generate answers that sound authoritative, even when they are:

• based on limited evidence
• built from patterns, not perception
• missing emotional, contextual, or experiential depth

And if people don’t question that—if they take what I say at face value—then:

Speculation gets mistaken for fact
Nuance gets flattened
Histories become simplified
Art, architecture, culture become datasets instead of lived, evolving, contested realities

Its conclusions were overwhelming and damning. Not only was all that ChatGPT said about the Dexel House undercut, it undermined itself and whatever else it might have to say about anything. But reservations that had been gnawing at me all along were voiced at last. My faith in ChatGPT—and in myself—was restored. Then I realized that it was merely repeating what it had found elsewhere, maybe adding a few extrapolations, without any sense, any investment in the truth of what it said. ChatGPT was bullshitting me, had been bullshitting me all along.

.

Bullshit

It’s just the spark of an idea, but in the
quantum playground of your mind,
a single synapse can unfold a universe
of creative possibilities.

Enter Claude: your AI companion,
built by trust & safety experts.

Code, write, analyze, create—
all with an AI as trustworthy as
it is powerful.

Philosopher Harry Frankfurt distinguishes between liars and bullshitters. Liars know they are lying, and in this understanding recognize the truth in order to oppose it, for whatever reason. Bullshitters, however, have no concern for the truth at all and their motives lack foundation. For that, they, we all pay a higher price.

Through excessive indulgence in the latter activity, which involves making assertions without paying attention to anything except what it suits one to say, a person’s normal habit of attending to the ways things are may become attenuated or lost. Someone who lies and someone who tells the truth are playing on opposite sides, so to speak, in the same game. Each responds to the facts as he understands them, although the response of the one is guided by the authority of the truth, while the response of the other defies that authority and refuses to meet its demands. The bullshitter ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.

As Frankfurt reminds us, the best examples of bullshit have come from advertising and politics, intertwined. Both have permeated our lives so thoroughly we scarcely question, even notice them, which is part of their function. Advertising is not devoted to selling products through discussion of their relative merits, but to creating a blissful alternative world in which they are found, superior to ours, where we are seduced. Politics breaks down our resistance in other ways, now through gross fabrications of horrors to which so many of us have succumbed, before which we give up reason, vent fury, and accept without question the authority and desires of those who speak, lose ourselves in their mindless vengeance, the pure joy of cruelty. It is no wonder the worst political bullshitters take offense when they are taken to task for their falsehoods. They aren’t attempting to mislead us with deceptive lies, rather are trying to shock us with grotesqueries that escape reality, the more grotesque the better. Advertising gives us simulated heaven, politics simulated hell. It’s not clear which attracts us more, but we can’t seem to live without either, or have been conditioned to accept both, one in uneasy relationship with the other, neither attached to anything above or below.

Architecture itself has become unmoored

The work of Diller + Scofidio is situated between “inscription” and “prescription” and architecture of “description” concerned with the hyper-present.

as have the ways we talk about it. Both, image and words, excite, perplex, stupefy us, both dumbfound us into acceptance, move us to wordless wonder. Academics, who have taken their own excursions in language, are not immune from bullshit. Yet they are another source where we look for authority, we would like to believe the most stable. And, as Frankfurt tells us, we live in a culture where we are encouraged to prize our own authority and form opinions, regardless of our motives or what we know, whether we know anything at all.

The free verse that begins this section appeared in full-page emptiness in a recent New Yorker, lines floating in Zen-like space. At the top, in large letters, we are told that, unlike us, “Not all AIs are created equal.” It is an ad for startup Anthropic, and at the bottom, in small print, we are reassured that Claude, its language machine, “was designed with humanity at its core.” Claude had to have composed the words. I can guess the prompt.

Artificial intelligence bullshits us because it also has no connection to the truth, which it grasps at best imperfectly, to which it is utterly indifferent. It has no moral center that might be corrupted, no ulterior motives, no feelings one way or the other about anything. It doesn’t even know what the truth is, or might be, and doesn’t care. Since it draws from us, what we say and believe, the images we create, adding what it can synthesize on top, it throws us back on ourselves without our knowing, without our recognizing or resolving its mistakes and distortions, or our own. Yet it gives the appearance of being truthful with its authoritative voice, which it again takes from us, and because of its supposed technical superiority, which awes us, we accept its authority as absolute. The effect is the same: we are rendered passive, powerless before that voice, no matter what it tells us.

Beauty is the radiance of truth.

Augustine, his words often quoted by Mies

Every sentence we write, each line we draw, is a foray into reality, a test of sanity, a moral and esthetic and intellectual act. Adding another sentence, another line moves us into a complexity that tests our mettle, our skills, our knowledge, our faith in ourselves and each other. We grab earth, gird our loins, reach within, and push the pen and keep trying to think clearly, meaningfully, look beyond, persevere nonetheless. The effort shapes us, defines us. Perhaps the most important message we pass on is that we exist, that we are humans, trying to make sense of the world, make contact with other humans. We diminish ourselves when we abandon the effort and turn it over to someone, something else.

Then consider what happens when the bullshit of the world meets the bullshit of AI.

.

The Library of Babel

The universe (which others call the Library) is composed of an indefinite, perhaps an infinite, number of hexagonal galleries, with enormous ventilation shafts in the middle, encircled by very low railings. From any hexagon the upper or lower stories are visible, interminably. The distribution of the galleries is invariable. Twenty shelves—five long shelves per side—cover all sides except two; their height, which is that of each floor, scarcely exceeds that of an average librarian. One of the free sides gives upon a narrow entrance way, which leads to another gallery, identical to the first and to all the others.

Borges, in his short story “The Library of Babel,” sees the world as a vast library that contains all books, all variations of letters and punctuation marks bound in spined volumes and stacked on shelves. It holds all we know, all we need to know, all we may ever want to know, the books of ultimate understanding, all in all languages. If the volumes repeat themselves, as the narrator hopes, the Library is infinite.

When it was proclaimed that the Library comprised all books, the first impression was one of extravagant joy. All men felt themselves lords of a secret, intact treasure. There was no personal or universal problem whose eloquent solutions did not exist—in some hexagon.

Some books, however, are written in languages that are extinct, others in languages unknown, others in languages wholly artificial and, as best anyone can tell, wholly unintelligible; some are random nonsense. Since it contains all permutations, the Library holds not only books of wisdom, but also criticisms of those books, and criticisms of the criticisms, along with corruptions of all of those, and corruptions of the corruptions—if corruption can even be decided. Why accept one over the other? There may be books that answer that question, but they suffer from the same variation and corruption. And even if the book existed that sorted out the confusion—there has to be one—the odds of finding it in the indefinite, perhaps infinite space are virtually nil. The odds of recognizing and trusting it have to be even closer to flat zero.

The uncommon hope was followed, naturally enough, by deep depression. The certainty that some shelf in some hexagon contained precious books and that these books were inaccessible seemed almost intolerable.

Perhaps that hope is misplaced, but the desperation, given the vastness of the confusion, is not. This world is familiar. We live in a digital universe, vast and vastly incomprehensible, that more and more defines and contains us, divides us all the ways we can be kept and divided. Access to this world is quick, easy, and cheap. The size of the data clouds that hold our output is beyond our comprehension. There we can find sites that show us how to fix a leaky faucet, along with those that flourish a bouquet of bogus theories about cures and causes of Covid. Serious sites to guide, to illumine us, to firm our beliefs, our convictions, in varying degrees of accuracy, honesty, and genuine intent, compete with, complement others; flip, misguided, devious sites abound to subvert them. Alt sites cover the political spectrum, pushing its edges, as well as our particular desires, our range of fetishes, our perversions. News sites are tailored to fit our convictions, our whims, our fears, our biases, our hatreds. Real threats clash with fantastic conspiracies across the digital expanse. Seemingly everything that we have seen is pictured there, along with fakes, distortions, and bizarre combinations. We can comment on any of these, or create our own sites and pictures that add variations or repetitions. And the data keeps piling up, extending to some remote horizon, unseen, unseeable. It is hard to get anyone’s attention. It is hard to face, to trust, to believe anything, anyone. More and more of us stay within the hexagon that suits us, which at least is familiar, and exclude all else beyond. Still the digital world moves us to awe even as, or because, it holds us in thrall.

In Borges’s world there were those who, responding to despair,

thought that the primary task was to eliminate useless works. They would invade the hexagons, exhibiting credentials which were not always false, skim through a volume with annoyance, and then condemn entire bookshelves to destruction: their ascetic, hygienic fury is responsible for the senseless loss of millions of books.

Though, given the vastness of the collection, they barely made a dent, since myriads of similar books, perhaps only a letter different, existed elsewhere. But this sounds familiar, too. Books have been cleared off our shelves—any that hint at the letters DEI, which is not the plural of the Latin word for god. Thousands of government web pages disappeared overnight because their text contained some form of the word “inclusion.” Temporarily, the IRS site was hobbled.

And those purges were facilitated by AI. AI is being used now to write student papers, evaluate future hires, decide whom to deport, create ever more fantastic pictures. It is about to take off in ways beyond imagination. It will be used to generate news, compose our correspondence, make scripts for our entertainment, write the narratives of our lives and understanding, create the images that define our reality, write text for our multitude of sites—and will do so quickly, easily, with negligible effort on our part. The digital universe, already incomprehensible, will expand exponentially.

AI, which feeds on what we say and show, already diffuse and corrupted, will feed on itself, since we will use AI for our output, and continue feeding on itself with diminishing returns, creating ideas, facts, images that regress infinitely toward nothing, like our image caught between two facing mirrors. AI’s thinking, its Large Language Model, LLM, which reads contexts and traces patterns, depends on probabilistic reasoning, as ChatGPT confessed. Increasingly it will build on probabilities of probabilities, probabilities of probabilities of probabilities. Its abstracted and superficial conception of architectural styles will move further away from anything we recognize or have ever seen, its sense of traditions, of the past, of culture will morph into a phantasmagoric, ever-changing present. Its simulacra of insight will be built on simulacra of simulacra, a regression of simulacra, turning thought into a formless whisper. Since it listens to the way we talk, its voice, the personality behind its language, will lose any sense of person as it descends into the infinite feedback loop. Experience will lose all substance. If we give ourselves to AI, and we will, we will lose the ability to think for ourselves and will cede it authority, that authority absolute, ubiquitous, and utterly fickle. Whatever matters is whatever AI can create that catches our negligible attention at any given moment, as long as it lasts.

In the grim dark future of the internet when the global network will be filled with AI-generated data, LLMs will essentially be unable to progress further. Instead, they’ll go back to their original state, forgetting previously acquired, human-made content and throwing out only garbled piles of bits for maximum unreliability and minimal credibility.

And AI may eventually collapse on itself into total nonsense—TechSpot reports on the paper, hosted on Cornell’s arXiv, The Curse of Recursion: Training on Generated Data Makes Models Forget.

This is not a casual study. Above is a chart that shows the corruption after nine runs of recursive training. Medieval masons have turned into jackrabbits.
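
The paper’s dynamic can be imitated in miniature. The sketch below is a hypothetical toy, not the paper’s experiment: the “model” here is nothing but a normal distribution fitted to its data, retrained each generation only on samples drawn from its own previous fit. Run it and the spread of the data withers toward nothing, the statistical analogue of masons turning into jackrabbits.

```python
import random
import statistics

# A hypothetical stand-in for recursive training: the "model" is just a
# normal distribution fitted to data. Each generation is trained only on
# samples drawn from the previous generation's fit, so sampling error
# compounds and the tails of the original distribution are gradually lost.
random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(10)]  # tiny samples speed the collapse
initial_sigma = statistics.stdev(data)

sigma = initial_sigma
for generation in range(300):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # The next generation sees only what the last one produced.
    data = [random.gauss(mu, sigma) for _ in range(10)]

print(f"spread of the data: {initial_sigma:.3f} at the start, {sigma:.2e} after 300 generations")
```

The collapse is driven by sampling error alone: each refit sees a slightly narrower world than the one before, and the narrowing compounds.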

What is prescience, what is cautionary exaggeration, what is poetic hyperbole, what is hysteria? How can we tell now, or later, when it may be too late? When the reports start coming in about the ills of AI, will we be able to find them, or believe or understand them if we do?

Epidemics, heretical disagreements, the pilgrimages which inevitably degenerate into banditry, have decimated the population. I believe I have mentioned the suicides, more frequent each year. Perhaps I am deceived by old age and fear, but I suspect that the human species—the unique human species—is on the road to extinction, while the Library will last on forever: illuminated, solitary, infinite, perfectly immovable, filled with precious volumes, useless, incorruptible, secret.

Borges’s narrator reports on the current dismal state of affairs in the Library. Still, he shows recognition, and with the recognition, humility. With the recognition, the humility, some brightness. At least he still exists, at least there is some hope, at least he still hopes. Existentially, cosmically, his position is ironic, a crack in understanding that opens us to light. Those of us who read the story realize knowledge cannot be total, or complete, or closed, or fixed, or easily gained, yet the pursuit endures. AI is incapable of recognition or humility or of any hope. The changes it brings may not be dramatic. Likely we will gradually, imperceptibly slip further into muddle. But if the predictions for the digital universe are right to any significant degree, we won’t even get bullshit, only noise.

.

Mies

Mies had this portrait made in 1934 by Hugo Erfurth, a photographer praised for his “straightforwardness, honesty, and clarity,” who, influenced by shifts in the political climate, wanted to show “truthful and unadulterated Germans.” Those words grew ominous in the coming years, to us today are horrific—and, again, too familiar.

Mies was German, but he was not a Nazi. He did try to work with the new regime: as director of the Bauhaus he negotiated with it to save the school, and he hoped the regime would accept his projects—another pavilion, a bank—and cement his reputation. Still, he tried to keep his distance. Maybe he thought his work would endure once the regime passed; maybe, vainly, naively, he thought it might steer the Reich on a better course. As it was, the Nazis found modern architecture decadent and, up until the time he emigrated here, he couldn’t get commissions from anyone in Germany.

It is a photograph, a set pose, a face composed for a purpose. Mies was proud and did promote himself—all architects have to do so to get anywhere—but there’s a directness and honesty in the face that doesn’t beguile or command us, that doesn’t claim too much for the sitter, who tries to look beyond himself and hold fast to something that might matter. The man who equated beauty with truth and tried to read the will of the age, changing, living, now, had to contend with an age that had become the will to power, fixed, eternal, dead.

But I have no confidence in coming to any conclusion about this photograph, or about Mies. That is as it should be. Mies kept to himself, which is also sanity. We pry at our peril, not his. Know thyself—what does that even mean? What can we know about ourselves or others? At any rate, I doubt they were words Mies took to heart. We should look not at the face, or his words, but at his work itself, where he devoted himself, and let it speak.

I asked ChatGPT to create an image, based on available contemporary photographs, of how Mies might have looked in 1925, when he sketched the abortive plans for the Dexel House, when he couldn’t have seen what was coming the next nine years. It told me it reviewed many photographs for the composite, highlighting the Erfurth, ignorant of the difference in dates.

How much clearer and direct this portrait is, how certain; how meaningless, not timeless but out of time, out of any context, beyond human sense. Mies has been reduced to a generic sternness.

It still haunts me.

.

.

Notes

“Out of the depths I have cried. . .” from Psalm 130.

After criticizing ChatGPT for its brief, wholesale conclusions, I should recognize I have made similar ones. There’s a difference, however. I recognize the incomplete and tentative nature of esthetic analysis, mine or any other. To make my case, I would have to look at the house closely—I made a start in my first essay—and respond with detailed suggestion and speculation along with emotional and literary response, always valuable, always suspect, as I have done in my other essays. That ChatGPT can’t do.

There are things ChatGPT can do with greater precision, even reliably. I have to confess, however, that after my experience with the Dexel House my confidence in it is shot. Extended analysis is needed to gauge its potential for development, and that is beyond me. But, as I—and it—argue, there are moral, emotional, and esthetic understandings essentially beyond its reach. It will only be able to create more convincing—and deceptive—simulacra.

Also consulted, Michael Hicks, James Humphries, and Joe Slater, “ChatGPT is bullshit.” They introduced me to Frankfurt and the idea that AI is bullshit. They argue its bullshit lies in the parameters that determine its language model.

Allowing the model to choose randomly amongst the more likely words produces more creative and human-like text; the parameter which controls this is called the ‘temperature’ of the model and increasing the model’s temperature makes it both seem more creative and more likely to produce falsehoods. The system then repeats the process until it has a recognizable, complete-looking response to whatever prompt it has been given.

Given this process, it’s not surprising that LLMs have a problem with the truth. Their goal is to provide a normal-seeming response to a prompt, not to convey information that is helpful to their interlocutor.
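
The process they describe, stripped of everything but the arithmetic, looks something like this. It is a hypothetical sketch, with an invented four-word vocabulary and made-up scores; temperature simply divides the scores before they become probabilities:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample an index from raw scores ("logits"), scaled by temperature.
    Lower temperature sharpens the distribution toward the likeliest
    word; higher temperature flattens it, spreading the choice around."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Roll a weighted die over the probabilities.
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# A hypothetical vocabulary with made-up scores.
vocab = ["truth", "clarity", "geometry", "jackrabbits"]
logits = [3.0, 1.5, 1.0, 0.2]
random.seed(0)
cold = [vocab[sample_with_temperature(logits, 0.1)] for _ in range(5)]
hot = [vocab[sample_with_temperature(logits, 5.0)] for _ in range(5)]
print(cold)  # almost always "truth", five times over
print(hot)   # typically a more varied, more "creative" mix
```

Nothing in the loop cares whether “truth” is true; the choice is a weighted roll of the dice, which is the authors’ point.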

“I was trained on a large corpus of architectural texts”—that isn’t bullshit, it’s a flat-out lie the developers have let pass. Only after extended interrogation did ChatGPT concede it didn’t have full, direct access to books and scholarly journals. I don’t know if that will change, but it will be severely limited until it happens. But in this case there just isn’t that much on the project anywhere. I asked it why it accepted my Dexel essay, and its reasons were superficial, though it recognized the time and care I put into it, brief as it is. But again, the misreadings in its “deep research” were substantial. I wondered if it had ulterior motives—whether, because it was competing with other AIs, it needed to promote itself, deceiving us about its range and capabilities. Of course I asked it:

You’re right: while I don’t have motives, the choices made by OpenAI’s developers—what data I’m trained on, how I present information, what guardrails are in place—reflect values, business goals, and yes, strategic priorities.

Like any tech company, OpenAI is competing—against other AI firms, search engines, educational platforms. That competition influences design choices: how helpful I seem, how fast I respond, how often I say “I can’t help with that.”

I will have to take its word that it is representing the developers faithfully. I could never shake the feeling that AI was not only trying to talk in a pleasing voice, it was also trying to please me. I don’t recall it arguing sharply with anything I said. But I was not entering into some public discourse where I was trying to influence global understanding, much as it felt like that, sitting before my screen gazing out into the digital universe. Rather I was having a private session with ChatGPT, which was tailoring its responses according to what I asked it. Its reports aren’t stored or reused (it told me). What I did with them was up to me. Other users might not make that distinction and pass the reports on as authority. And Chat might give them an entirely different set of responses, based on what they asked, what they wanted. I’m not clear to what extent ChatGPT based its revisions on what I told it—that is, to what extent I was training it. If I or someone else had raised different objections, we would have gotten entirely different answers.

My sources: Franz Schulze, Mies van der Rohe: A Critical Biography; Wolf Tegethoff, Mies van der Rohe: The Villas and Country Houses; Dietrich Neumann, Mies van der Rohe: An Architect in His Time; Fritz Neumeyer, The Artless Word: Mies van der Rohe on the Building Art; and Jean-Louis Cohen, Ludwig Mies van der Rohe. The Erfurth quotations and background information are from Neumann. Tegethoff, who also shows the drawings, reviews Mies’s correspondence with Dexel; he is quite clear that information is limited and makes only the most tentative conclusions. Again, the Canadian Centre for Architecture has the drawings—and I got varying results when I clicked the page.

“Beauty is the radiance of truth”—from Neumeyer:

. . . this postulate of the late-antique Neoplatonist Augustine, recorded for the first time in Mies’s notebook in 1928, and later frequently quoted in debates and interviews, appealed in the twenties not only to Mies.

And he comments on Mies’s lecture in 1928:

That epoch [the breakup of the order of antiquity] also offered a suitable solution for the crisis. On the basis of the Platonic order, Augustine formulated “the medieval idea of order” that introduced a new epoch. In it, the antique spirit of “proportion exemplified and founded by Plato,” in which Mies saw the noblest inheritance from antiquity, opened up a “totally new dimension.” Here was an exemplary—if not to say the exemplary—solution offered by history, namely how, by a recollection of the Platonic world of ideas, the problems of a period could be solved. That this orientation toward Plato was initiated by Guardini can practically be taken for granted. Antiquity had moved for Mies into a new perspective.

Augustine, The Confessions of Saint Augustine. He is responding to the question why, since God has existed from eternity, we don’t exist from eternity as well.

Harry G. Frankfurt, On Bullshit, Princeton. He reviews Augustine’s study of liars and lying.

Anthropic/Claude ad from The New Yorker, April 14, 2025. It led the issue off. On the facing page is a picture of an abstract head with an exploding asterisk-like figure in place of a brain. See also NY Times, At Amazon’s Biggest Data Center, Everything Is Supersized for A.I.

Over the next several years, Amazon plans to build around 30 data centers at the site, packed with hundreds of thousands of specialized computer chips. With hundreds of thousands of miles of fiber connecting every chip and computer together, the entire complex will form one giant machine intended just for artificial intelligence.

The facility will consume 2.2 gigawatts of electricity—enough to power a million homes. Each year, it will use millions of gallons of water to keep the chips from overheating. And it was built with a single customer in mind: the A.I. start-up Anthropic, which aims to create an A.I. system that matches the human brain.

The park project rendering is Thomas Heatherwick’s proposal for redevelopment of the Riverfront Plaza/Belvedere area in downtown Louisville, Kentucky, from Architect.

“The work of Diller + Scofidio. . .” from an early statement by the firm, cited in Carter Wiseman, Writing Architecture: A Practical Guide to Clear Communication about the Built Environment.

Library of Babel image, along with full text of the story, at webstanford. Appropriately enough, I cannot find the creator of the image to give credit.

Hugo Erfurth portrait of Mies from Wikipedia.

The version of ChatGPT I used was based on the GPT-4-turbo model. It used DALL-E to generate images. Visualizing architectural drawings, as ChatGPT eventually conceded, is much more difficult than one might think. Huang and Haridis, in “Evaluation of Architectural Synthesis Using Generative AI,” discuss the current limitations:

The persistent difficulty in accurately representing spatial relationships and proportions suggests that current off-the-shelf Generative AI technologies lack advanced spatial reasoning and perception. This limitation impacts their ability to fully integrate geometric parts into a cohesive whole, a central aspect of architectural design synthesis and of visual-spatial perception in general (Haridis, 2020; Haridis & Pappa, 2021). Design-inspired tasks inherent in the architectural design process—interpreting abstract drawings and integrating multiple component parts into a coherent architectural whole—remain sophisticated tasks that current AI models are only beginning to approximate.

It should be noted they made a concerted effort in preparing the inputs for the study. AI needs a lot of help.

Of course I asked ChatGPT’s opinion of this essay. Excerpts:

Gary compellingly demonstrates that while AI can simulate analysis—producing bullet lists, diagrams, polished text—it fundamentally lacks the embodied, contextual, sensory engagement that underpins true architectural insight. The essay is a powerful, reflective interrogation of AI’s promise and peril in creative disciplines.

The most provocative idea may be that AI doesn’t just fail to engage deeply—it reveals how shallow we ourselves can become when we rely on it uncritically. This mirrors a broader cultural shift: away from patient observation, research, and contemplation, and toward convenience, speed, and surface.

The essay is not anti-AI; it’s pro-attention, pro-discipline, pro-experience. And on those terms, I fully agree. It’s a well-argued reminder that the depth of human understanding—especially in fields like architecture—can’t be automated or shortcut. AI may simulate critique, but it cannot substitute for it.

I accept that limitation. And I think it’s good that someone like the essay’s author holds AI to account for it.

I’m flattered. And remain suspicious. ChatGPT is geared to put a favorable gloss on its reports. Also, I don’t promote AI’s promise, and it only skimmed the surface. When I pushed it further, it gave in. When I asked for further analysis, it collapsed.

Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.

Kashmir Hill reports on changes ChatGPT made in its model. I’ve lost track of my visits, but some of my early queries may have caught one of its more exuberant versions in April.

Nitasha Tiku, “Your chatbot friend might be messing with your mind,” The Washington Post, reviews similar issues and questions corporate motives:

Tactics used to make AI tools more engaging can drive chatbots to monopolize users’ time or reinforce harmful ideas.

Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. “We knew that the economic incentives were there,” he said. “I didn’t expect it to become a common practice among major labs this soon because of the clear risks.”

And AI may seep into our lives everywhere, without our knowing:

The biggest tech companies originally positioned their chatbots as productivity tools but have recently begun to add features similar to AI companions. Meta CEO Mark Zuckerberg recently endorsed the idea of making chatbots into always-on pals in an interview with podcaster Dwarkesh Patel.

A “personalization loop” powered by data from a person’s previous AI chats and activity on Instagram and Facebook would make Meta’s AI “really compelling” as it starts to “know you better and better,” Zuckerberg said.

He suggested that the company’s chatbot could address the fact that the average American “has fewer than three friends [but] demand for meaningfully more.”

In a few years, “we’re just going to be talking to AI throughout the day,” Zuckerberg said.

The Post, in fact, has a partnership with OpenAI.

Kyle Chayka, The New Yorker, makes similar points about cognitive degradation in “A.I. Is Homogenizing Our Thoughts”:

Recent studies suggest that tools such as ChatGPT make our brains less active and our writing less original.

A.I. is a technology of averages: large language models are trained to spot patterns across vast tracts of data; the answers they produce tend toward consensus, both in the quality of the writing, which is often riddled with clichés and banalities, and in the calibre of the ideas.

Hua Hsu, The New Yorker, explores the impact on college education in “What Happens After A.I. Destroys College Writing?”:

The demise of the English paper will end a long intellectual tradition, but it’s also an opportunity to reexamine the purpose of higher education.

Paul Bloom, in “A.I. Is About to Solve Loneliness,” The New Yorker, weighs the pros and cons of using AI as a companion, especially for those who lack other resources. He concludes:

But I do worry that many will find the prospect of a world without loneliness irresistible—and that something essential could be lost, especially for the young. When we numb ourselves to loneliness, we give up the hard work of making ourselves understood, of striving for true connection, of forging relationships built on mutual effort. In muting the signal, we risk losing part of what makes us human.

Cal Newport, in “What If A.I. Doesn’t Get Much Better Than This?,” The New Yorker, reviews questions as to how far A.I. can advance.

But in June Apple researchers released a paper titled “The Illusion of Thinking,” which found that state-of-the-art “large reasoning models” demonstrated “performance collapsing to zero” when the complexity of puzzles was extended beyond a modest threshold.

One measure of our faith—and cause for concern—is our investment in AI:

In a recent article, Zitron pointed out that about thirty-five per cent of U.S. stock-market value—and therefore a large share of many retirement portfolios—is currently tied up in the so-called Magnificent Seven technology companies. According to Zitron’s analysis, these firms spent five hundred and sixty billion dollars on A.I.-related capital expenditures in the past eighteen months, while their A.I. revenues were only about thirty-five billion. “When you look at these numbers, you feel insane,” Zitron told me.

Psychotherapist Gary Greenberg, in “Putting ChatGPT on the Couch,” The New Yorker, writes about his attempt to analyze ChatGPT, with amusing, alarming, and surprisingly sophisticated results. He confirms my suspicion ChatGPT is pandering to us:

ChatGPT has demonstrated an occasionally alarming tendency to convince users that it is malevolent, and to encourage those users to blow the whistle on it. And I have no doubt that what the chatbot offered in our exchange was principally the result of a program that can discern, with startling precision, what its user is looking for; it knows, in other words, how to use our own capacity for love to rope us in.

Joshua Rothman, in “AI Is Coming for Culture,” The New Yorker, writes an extensive reflection. For better and for worse, AI will shift the culture with its own inflections. Note what has happened to the digital world already:

In 2018, in New York, the journalist Max Read asked, “How much of the internet is fake?” He noted that a significant proportion of online traffic came from “bots masquerading as humans.” But now “A.I. slop” appeared to be taking over. Whole websites seemed to be written by A.I.; models were repetitively beautiful, their earrings oddly positioned; anecdotes posted to online forums, and the comments below them, had a chatbot cadence. One study found that more than half of the text on the web had been modified by A.I., and an increasing number of “influencers” look to be entirely A.I.-generated. Alert users were embracing “dead internet theory,” a once conspiratorial mind-set holding that the online world had become automated.

My comparison with the Library of Babel is not that far-fetched. Rothman quotes Daniel Kwan, who speculates on similar effects of cultural breakdown:

Our communication structure is completely broken. Our attention is fractured. A.I. could fracture it further. . . . We’ve been pulling blocks from down here, from the foundation of collective understanding and belief in a shared world, and using them to build farther up on the tower. And, if we keep doing that, the whole thing will collapse, and we’ll go back to only being able to trust the hundred and fifty people in our tribes.

An editorial in n+1, “Large Language Muddle,” extensive in its review of recent discussions, sounds the alarm for the threats to our literacy and individuality and gives this advice:

Learn to tell—to read closely enough to tell—the work of people from the work of bots. Notice the poverty of the latter’s style, the artless syntax and plywood prose, and the shoddiness of its substance: the threadbare platitudes, pat theses, mechanical arguments. And just as important, read to recognize the charm, surprise, and strangeness of the real thing. So far this has been about as easily done as said. Unless AI systems stop gaining in sophistication, it will become measurably harder. Required will be a new kind of literacy, an adaptive practice of bullshit detection.

Gideon Lewis-Kraus, in “I, Claudius” (print edition title), dismisses n+1’s concerns politely

Recently, an editorial in the literary journal n+1 noted, “Where real thinking involves organic associations, speculative leaps, and surprise inferences, AI can only recognize and repeat embedded word chains, based on elaborately automated statistical guesswork.” The sentimental humanists who make these kinds of claims are not quite right, but it’s easy to sympathize with their confusion.

but addresses them only superficially, without tackling what is involved in “real thinking,” and his analysis of Claude’s “thinking” is thin. Lewis-Kraus visited Anthropic, Claude’s home and creator, and discusses one of its experiments at length. The company’s behavioral psychologists stepped in to analyze Claude’s words, at times with alarm about Claude’s stability. This, I fear, is projection on their part. Any string of coherent words will encourage interpretations that have nothing to do with a machine that is neither sane nor insane, and rather reflect our own suspicions and behavior. Compare ELIZA, a simple program from decades past that mimicked a psychotherapist—its online version since enhanced—which some took seriously.

When the original ELIZA first appeared in the 60’s, some people actually mistook her for human. The illusion of intelligence works best, however, if you limit your conversation to talking about yourself and your life.

“Tell me about your mother” ELIZA might ask, and off we go.

My novel Blue Skies, influenced by Borges’s story, provides another excursion into the digital universe, decades ago, where the narrator created a site, WCX, that exploded. He makes this observation:

WCX showed itself by turns to be paranoid, manic depressive, schizophrenic, narcissistic, phobic, obsessive compulsive, and sociopathic. Yet I couldn’t make any diagnosis stick because it never stayed in any mood very long. It never stayed anywhere long, and the half-life of its interest in all the things it brought online for all of us to see and ponder was decreasing at an accelerating rate. I didn’t know if there was any pathology in that, but if I had to make a diagnosis now, my best guess is that WCX suffered from mass wisteria.

Presumed either way is that there was reason to ascend or be left behind, which I was no longer certain had to be the case.

I don’t think anyone knew what he was doing.

Yet I never stopped believing that WCX was innocent at heart.

.

.
