🕸️ AI Metaphors
by AIxDESIGN

Market economies and liberal democracies are loose, low-bandwidth examples of MMIs that use humans and mostly non-AI computers to scale muddler intelligence. The challenge now is to build far denser, higher bandwidth ones using modern AI agents.

Lindblom’s paper identifies two patterns of agentic behavior, “root” (or rational-comprehensive) and “branch” (or successive limited comparisons), and argues that in complicated, messy circumstances requiring coordinated action at scale, effective humans actually operate by the branch method, which looks like “muddling through” but gradually gets there, while the root method fails entirely. “Complex” here means the things humans typically do in larger groups, like designing and implementing governance policies or undertaking large engineering projects. The threshold for “complex” is roughly where explicit coordination protocols become necessary scaffolding, which often coincides with the point where reality gets too big to hold in one human head.

As the word limited suggests, the branch method respects limits, including the rationality limits of humans, but more importantly, limits imposed by imperfect knowledge and the temporality of data availability, which apply to...

We don’t yet know what “natural” language will look like for modern AI agents chattering away on their future protocol pipes, let alone mixed human-AI agent systems, but I am pretty confident it won’t look high-modernist and formalist the way non-AI computer chatter does. But it won’t look like human natural language either.

If you want to scale AI sustainably, governance and influence cannot be a one-way street from some privileged agents (humans) to other, less privileged agents (AIs).

The title of this essay inverts the title of a book about economics by Donald MacKenzie, An Engine, Not a Camera. The premise of that book is that economic theories are engines that produce (via policies and institutions) economic behaviors, but trick us into thinking they merely describe them. Modern AI has the reverse problem. It’s a camera that tricks us into thinking it’s an engine that “generates” rather than “sees” things. As an aside, this weird symmetry makes me suspect that economics and modern AI are true duals of some sort — maybe the way to get to AI with agency is to bolt on an economics theory.

This is what the stochastic-parrot takes (and the older monkeys-at-typewriters takes) entirely miss. We’re all stochastic parrots attached to monkeys at typewriters all the way down. That’s only insulting if you don’t like parrots and monkeys and imagine you possess some ineffably higher-order consciousness their kinds of minds cannot embody.

The fascinating thing is, parrots and monkeys are all you need. That’s all it seems to take to produce anything we consider intelligent behavior, and we’ve barely begun to scratch the surface of that behavior space, which is far vaster than we imagined.
