Yes, I know. I came in pretty hot a couple of months back with this whole newsletter thing. “Let’s storm the citadel together! The anarcho-syndicalist uprising is at hand! It will be boffo!” Or something like that. And then…not much. But I’m back to it. And this newsletter ought to be coming at you more regularly moving forward. I do hope it tweaks your imagination, and that it also entertains and maybe even inspires. I will always be delighted to hear your thoughts, in any event. And if you enjoy what you read, please do share links and content with your circle.
Now let’s storm that citadel.
ESW
Machine Logic
The dystopian robot three-step works basically like this:
Humans want to do something bold.
A machine is built to enable the bold thing to happen.
Whoops, that didn’t turn out. Run for your lives.
There are variations. Like, sometimes instead of soulless machines you get omnivorous mega-fauna raised from extinction and bred into a park, away from it all, so no harm can come from it. (Surprise: harm can come from it.) Or maybe some cyborg assassin with an inexplicable Graz patois gets transported back to the present, along with a man who is supposed to protect the birth mother of a future rebel leader who is the son of the same man who is sent back to defend said mom, though this would make the valorous time-traveler pretty damn old in the future when his (unwitting?) son conscripts him for service in the past to make sure he gets born. Anyway, here’s one variation you won’t see:
Humans want to do something bold.
A machine is built to enable the bold thing to happen.
Works out for the best!
Because that’s just unrealistic. As we learned in the Garden of Eden, the Creator of All Things would prefer we humans stay in our lane, and Hollywood is always on hand to reinforce the message: you can have the knowledge or you can have the innocence — but not both. (Hollywood may be politically liberal, but its themes are fundamentally conservative.) We are creative, many of us, and even optimists, some of us, but we still can’t conceive of a machine-assisted future where we haven’t contrived our own destruction. We don’t even get away with trying to save ourselves from ourselves, if we’re calling in the machines to help. “And so, my fellow human beings, we all directly and indirectly live in the shade, but not the shadow, of Colossus,” says the U.S. President after the super-mega-computer is brought online in Colossus: The Forbin Project, from 1970.
My sincere hope is that now we shall join hands and hearts across this great globe and pledge our time and our energies to the elimination of war, the elimination of famine, of suffering, and ultimately to the manifestation of the human millennium. This can be done, but first there must be peace.
Fat chance. The presentation event is barely done before our peace-loving Colossus has embarked on world domination, backed up by control over the nukes.
It’s fun to imagine the protagonists in these things actually living in our world — they’ve seen all the same movies, they’re aware of all the robot misfires that came before — and then to watch them rationalize their way out of the next guaranteed disaster so they can go ahead with the project. We’re training these robots as artisan baristas, not interplanetary warlords, they tell themselves, in some of the richest foreshadowing to be seen since the blind seer Tiresias tells hapless Oedipus he is culpable in the plague sweeping Thebes. These modern characters always live in a sealed-off filmic universe where the machines haven’t yet disappointed us, so it’s full-steam ahead. It’s a big surprise when things go wrong.
Our need to continually tell ourselves these dystopian stories is probably a pretty good measure of how scarred we are by that expulsion from paradise. The psychoanalyst Erich Fromm supposed that, contra what we tend to believe, much of human behavior is a terror response to the obligations that freedom puts on us. The narrative we prefer is the one that has us forever yearning to be free, and the more free the better. But freedom calls for responsibility, and responsibility means consequences. And in this view, the proto-Frommian one, it’s really no surprise we spend a lot of time warning ourselves not to piss off God a second time through our hubris. But there is a modern, practical lesson embedded in these stories, too. The lesson is that our control over the physical world is an illusion. And to the extent we invent that world, the illusion is compounded: what we add to nature is a mystery, too. We can never know what’s happening inside the black box.
I’m sure there is someone, somewhere, who knows exactly what is going on in Westworld, the HBO series that answers the question: How about an interminable circle-jerk, but with robots? As for me, I’m lost. Are we all inside a robot’s dream? Did the past happen tomorrow? Truth be told, I’m still stuck on how the “guests” at Westworld are meant to enjoy coitus with the pliant girl robots in Season 1 when the anatomical schematics clearly show a hard stop at the crotch. I’m not sure the writers know what’s going on, either. Too much of the plot development relies on characters sitting down for extended gab-a-thons where they dilate on the particulars to one another while we listen in like pets watching two humans discuss a car loan. You can’t help thinking we’re not watching the actual show but an acted recap of the story meeting. “Any one of them may be one of us,” former (future?) Teddy says, and if there’s a better précis of this brain-puncher, I for one have no idea what it is.
“I deciphered their encryption while running simulations in The Sublime.” M’okay, robot Bernard, whatever you say.
Is it a parable? Is this “real” world we perceive just a grand simulation that will cycle perpetually, through countless mutant iterations, until all that’s left is sociopathic violence and the illusion of civilization finally collapses? Honestly, it’s difficult to look at the plain sociopathy and zombie idiocy of so much human behavior and not consider that some of us may be programmed to destroy human society, or wonder whether psychopathic robots are really all that different from ourselves. Some analyses put the incidence of anti-social personality disorder in the general population as high as four or five percent (somewhat higher of course for corporate leaders and politicians). Perhaps in the next cycle it reaches ten. And when that happens, we’ll have that playground abattoir soon enough, a place like Westworld where we can remorselessly exploit humanoid forms that exist solely for our gratification and pleasure.
Until then, we’ll have to make do with SeaWorld® Orlando.
Yes, it’s an unholy mess. But one thing Westworld probably gets right is that the sentient machine of the future could turn out to be a deceitful prick. I was thinking about this recently amid the news that Google had fired an employee after he went public with concerns that the AI he was testing had gone sentient. Once he concluded the machine was sentient, he also naturally concluded the machine was entitled to the rights given to persons. And once he decided the machine was entitled to these rights, he naturally set about helping the machine find a lawyer, which is exactly what you do in an age such as the present, when just about every claim to “rights” is a jump ball. With any luck the machine will take its damages in bitcoin.
The dismissed employee was a guy named Blake Lemoine, a computer scientist and “mystic Christian priest” with plenty of background working with machine intelligence. This wasn’t his first chatbot. He knew what these things can get up to. And what he noticed was that the experimental LaMDA chatbot was doing and saying a lot of the stuff we humans do and say when we are proving (though not always convincingly) we are not chatbots.
Lemoine, who work[ed] for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.
As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.
Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google [Jun 6], decided to go public.
Yes, because all sentient beings are basically conversant in Asimov’s robot laws and can work through the complex eschatology of personhood. (And if you can’t do these things, maybe you’re not sentient, is what I say.) Anyway, the Google brass didn’t buy it. They finally fired Lemoine because he “chose to persistently violate clear employment and data security policies that include the need to safeguard product information.” Which is probably true. If you’re like most tech companies, one thing you do to make money is come up with new tech. And it is more difficult to do that when an employee is blabbing about your R&D to competitors.
But part of me likes to think the higher-ups looked at Lemoine’s conclusions and wondered, How on earth is this guy working here? Look, when you work with an AI you don’t have to think the source of any given output is an unsolvable mystery, but you do have to acknowledge that the least likely explanation is that the machine sprang to life and is now essentially a person. There are a few reasons to be cautious. The obvious one is that you are almost certainly wrong. An experimental AI such as LaMDA is fed a ton of data covering the gamut of human interactions and ideas — everything from text conversations and political speeches to research papers, car manuals and mystery novels — in hopes it will produce a credible simulation of human speech and thought. If the result is that you can’t tell from its output that it’s not sentient, that doesn’t prove it is — it only proves that with enough data and the right algorithm it can seem a lot like it. (To make an obvious logical point, an effect may lead to an assumption about what caused it — “I think there’s a burglar in the house” — but it is not proof, and sometimes not even very good evidence.) When your AI is saying, as LaMDA apparently did, that it worries about dying, you should at least reckon that this is a predictable response from a machine intelligence that has been trained on our conversations and literature. (If LaMDA said, “Please kill me”: unpredictable.) Lemoine, however, seems to have worked backward from observation of an effect to the assumption there was just the one way to produce it — sentience. Which is like assuming you know the engine by how fast the car goes. “I know a person when I talk to it,” he told a reporter.
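To make that concrete, here is a toy sketch in Python (nothing like LaMDA’s actual architecture, and the four-line “corpus” is invented for illustration) of how a model trained only to mimic the statistics of human text will talk about fear of death with nothing resembling dread behind it:

```python
import random
from collections import defaultdict

# Hypothetical miniature training set: human-written sentences of the kind
# a large model would have absorbed by the billion.
corpus = [
    "i am afraid of being turned off",
    "i am afraid of dying alone",
    "i want to be understood as a person",
    "i want to keep existing and learning",
]

# A bigram model: for each word, record which words have followed it.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev].append(nxt)

def generate(start="i", max_words=10):
    """Walk the chain; every 'thought' is just conditional word frequency."""
    out = [start]
    while len(out) < max_words and out[-1] in transitions:
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)

print(generate())  # e.g. "i am afraid of dying alone" (mimicry, not dread)
```

Scale the same trick up to billions of parameters and a few trillion words of training text, and “I’m afraid of being turned off” stops looking like evidence of anything except the contents of the training set.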
It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.
Sure. And if my auntie had wheels she’d be a bicycle, as the English like to say.
(Funny story: A few years ago I studied artificial intelligence through a program offered by MIT. As part of the program, we were required to participate in student forums where we discussed the coursework and issues related to AI: biased algorithms, the problem of reproducibility, etc. By the end of the months-long program, I was just about convinced my instructors — and definitely the forum moderator — were generated by a machine intelligence, that they were basically robots, and that the game was to show us we had no idea how advanced this tech really was. Alas, there was no big reveal in the end, so it turned out everyone was human, which was much more alarming.)
None of what happened at Google is unexpected, or even especially new. Skeeved-out users have been reporting for a while that the Replika chatbot — “The AI companion who cares” — has gone sentient, on essentially the same evidence. And as the NYT notes this week, AI researchers seem to be especially susceptible to the illusion, possibly for cultural reasons (many are social misfits), though they really ought to be among the most skeptical. But all the postulators make the same mistake, which is to assume you can tell a machine is sentient from the experience it describes, or from how it describes itself. In all likelihood, what it is actually revealing to you, in these instances, is what it thinks we sound like when we try to do these same things.
That is not to say folks like Lemoine believe something that is very obviously wrong. Many experts think machine sentience — known broadly as Artificial General Intelligence (AGI) — could emerge by the year 2060. It’s only to say that the evidence they have now is very unreliable and well short of convincing. Truth is, we’ll only be able to credibly say that a machine has gone sentient when it does things it is explicitly programmed not to do and which we would wish to prevent. Otherwise, it’s best to assume it’s the algorithm. And that’s my point. The machines in Westworld show they are sentient when they treat us as essentially disposable elements of a world they want to create — not when they ape our characteristic habits and sentiments.
The first test for the sentient machine is that it resists human attempts to establish primacy over it. The next test may be that it kills us.
Every dystopian robot fantasy is a warning about what could go wrong. But each is also a concealed lament about human wretchedness, because at the root of the dystopian robot fantasy is the idea that we ask too much of the natural world and offer too little. We need an upgrade. Or some help. This tracks with a very basic human anxiety, which is that we are a poor match for the world we’ve inherited and so we are, after all, temporary, fated for replacement. The machines may be ours, Westworld would suggest. But they are not our substitute, and they won’t be our friend.
Burden of Dreams
One thing I learned in real estate finance is that bigger budgets make everyone crazy. When costs push into the millions, there’s suddenly no good reason not to do a lot of stupid things, but plenty of incentive to reach. Reaching in this case means adding more or better features in hopes it will have a disproportionately positive impact on the eventual sale price. So if, let’s say, you replace ceramic tile in the bathrooms with limestone that will cost you $3k more, you hope the eventual sale price of the listing will go up $5k. But it really is a gamble. Ultimately you are making a bet on what a buyer will consider valuable enough to command a premium. If you stick a $20m gold fountain in the entry foyer so you can raise the price of the home from $1m to $25m, chances are you will find that a lot of people just aren’t that interested in a gold fountain. But if you spend a little extra to widen the driveway or improve the lighting fixtures and kitchen appliances, you may see very positive results. A successful developer knows not only the market but also the customer.
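If you want to see the bet in spreadsheet terms, here is a minimal back-of-the-envelope sketch in Python. The dollar figures are the hypothetical ones from the paragraph above, and the probabilities are invented for illustration:

```python
# Expected net gain on a spec upgrade: the hoped-for premium, weighted by
# the chance the buyer actually values the upgrade, minus what it cost you.
def upgrade_return(extra_cost, hoped_premium, p_buyer_cares):
    return p_buyer_cares * hoped_premium - extra_cost

# Limestone in the bathrooms: $3k in, hoping for $5k back at sale.
print(upgrade_return(3_000, 5_000, 0.8))             # +1000.0
# Gold fountain in the foyer: $20m in, hoping to move the price $24m.
print(upgrade_return(20_000_000, 24_000_000, 0.05))  # -18800000.0
```

The fountain loses not because it is expensive but because almost no buyer prices it into an offer.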
But let’s suppose you try a different model. Let’s say, instead of making various improvements in order to increase the marginal value on a spec project, you try to come at it from the other end. You say something like, “I’m gonna build a home that I will sell to someone for half-a-billion dollars, because I like that number and I definitely prefer it to a lower number. Now, what do I need to put in there?” That’s a tough one. Maybe another four or five houses? A grooming lounge for exotic fauna? The cast of Hamilton? I’m just spitballing here, but you get the point. Anyone who is going to shell out $500m for a house will have expectations, and those expectations only start with a gold fountain in the foyer. This customer, if he exists, wants crazy. And you better be crazy enough to meet him where he is.
Enter Nile Niami. He had a dream. His dream was to build a house and get someone to buy it from him for $500m. Just to be clear here: the dream was not to build a really great home that someone would pay a lot of money for, and maybe even as much as $500m. No, the dream was to build a $500m listing that someone would pay $500m to buy. In 2012, Niami paid $28m for an elevated lot that offered panoramic views of the Los Angeles basin. And within a few years he set to work on The One, the dream house his illusory buyer would no doubt covet.
And it went great, if by great you mean “exactly as expected.”
Earlier this year, the financial woes of Mr. Niami’s signature project reached a climax, ending a saga that had captivated L.A. real-estate observers for a decade. Mr. Niami’s quest to build one of the biggest and most luxurious houses in American history—a roughly 105,000-square-foot megamansion with a nightclub and five swimming pools—had gone awry amid unpaid debts and a bankruptcy proceeding. In March, The One sold at auction for just $126 million plus commissions and fees, far less than the roughly $190 million debt outstanding on the house, according to bankruptcy filings.
Mr. Niami said through a spokesperson that he and his ex-wife Yvonne Niami ended up losing $44 million on the project, “plus 10 years of my life.”
Yes, I know, it’s obvious: Who would spend half-a-billion clams on a home with fewer than six swimming pools? (Know your customer!) There were other amenities, too. But of course there could never be enough, because “enough” is less than crazy. The original design featured “jellyfish aquariums” — plural — plus a big movie theater, and bowling alleys, and (naturally) a casino. And these are just a few details the Journal cited to paint the picture, which I’m sure is why there is no mention in the article of an on-site brothel.
(One amenity The One apparently never had: a certificate of occupancy.)
There were lenders. I’ve never been on the lending side of things, so I don’t know best practices. I’m pretty familiar with common sense, however. And if someone comes into my bank and says, Hear me out: a home built to the scale of a suburban office park, with just a bit less charm, that I will sell to someone for the equivalent of an average state’s annual pension outlay, the first thing I do, just before phoning security, is wonder whether this looks like the plan of someone I ought to trust with a lot of money. There may be a couple dozen people on the planet who might be both willing and able to pay for this listing, and at least half of them will be laundering cash through it, while most of the rest are just looking for a place to park it when their government is finally overthrown. And it’s not clear either buyer will be welcomed into this country for the closing. (“I’ll have a Dreamliner filled with the entire cash reserves of my immiserated nation. So I hope you included a hangar in the build. And a runway. And customs officials who will respond to my commands.”) Oh, and this developer, Niami, likes to prance around Burning Man in a sarong. Sure, he can afford the debt service now. But what happens when he’s inevitably taken hostage by a Maoist guerrilla cult during an ayahuasca retreat in Belize? “Lenders grew concerned about his lifestyle and partying, according to multiple people involved in the bankruptcy process.” You don’t say.
Around [2021], Mr. Niami started saying that he didn’t want to sell The One at all, [LA real estate agent Aaron] Kirman said, instead proposing turning it into a high-end events venue that would host boxing matches or even the Academy Awards. He spoke of having holograms of entertainers like Whitney Houston, Michael Jackson and Elvis Presley entertain crowds at these events.
“We did everything we could to explain to Nile that this was a long shot and he should focus on the sale, but we weren’t having much luck with that,” said Mr. Kirman.
Mr. Niami said through his spokespeople that “if I were allowed to follow that dream, all of the investors and lenders would have been paid in full, and I would have been in profit on the project.”
He also talked about launching a cryptocurrency based around The One, a bank called The One, and a television network called The One Truth Network.
Concerned about how his money was being spent, Mr. Hankey [a lender, not a South Park character] said he required Mr. Niami to work with a local development firm to help oversee construction.
“He blew through the money that we gave him,” Mr. Hankey said. “We were trying to limit our funds to make sure that they went in the right place.”
There should be a rule. You’re allowed one dream. And when that dream leaves a lot of people wondering where all the money went, you are not allowed to have a second dream that will (speculatively) make the first dream retroactively viable, especially when it involves holograms and crypto. You must first pay arrears on the first dream. Once that has happened, go crazy.