We of bounded rationality

I came across a fascinating article at the Communications of the ACM site about negotiation agents. I quote:

“[Sanfey] matched humans with other humans and with computer agents in the Ultimatum Game and showed that people rejected unfair offers made by humans at significantly higher rates than those made when matched with a computer agent.”

Which proves the point I was making in my earlier post: we expect humans to be humane first, and efficient second. Machines, however, are expected to be cold and calculating, so we’re prepared to let ’em get away with precisely such behaviour.
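For anyone who hasn't met the Ultimatum Game: one player proposes how to split a pot, the other either accepts (both get paid) or rejects (nobody gets anything). Here's a minimal toy sketch of the asymmetry above, purely for illustration: the fairness thresholds are invented, not Sanfey's numbers.

```python
# Toy sketch of the rejection asymmetry (made-up thresholds, not real data):
# responders are assumed to hold human proposers to a higher fairness bar
# than machine proposers.
import random

POT = 10  # amount to be split each round

def responder_accepts(offer, proposer_is_human):
    # Hypothetical thresholds: humans are expected to be "humane first".
    threshold = 3 if proposer_is_human else 2
    return offer >= threshold

def rejection_rate(proposer_is_human, rounds=10_000):
    rejected = 0
    for _ in range(rounds):
        offer = random.randint(1, POT - 1)  # proposer keeps POT - offer
        if not responder_accepts(offer, proposer_is_human):
            rejected += 1
    return rejected / rounds

if __name__ == "__main__":
    print("rejection rate vs human proposer:  ", rejection_rate(True))
    print("rejection rate vs machine proposer:", rejection_rate(False))
```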

(There may be a flipside to that same coin: it may be that we're also more likely to push our luck when we know it's a machine at the other end – as in, we're likely to ride a machine closer to its tolerance limits than we would a human.)

The ACM article reminds us that the first cohort of would-be negotiation agents will be facing humans, so they have to account for the human tendency to deviate from strategies that make sense in favour of – one presumes – strategies that feel good or feel right. It seems such deviations from ‘quantifiable sanity’ actually complicate the design, development and deployment of automated agents. It would be easier for them if humans were out of the picture, I guess, but that’s not how things are going to start out.

Last but not least, it looks like an ontology of negotiations hasn’t quite been fleshed out… and how d’you model something like ‘buttering up the opposition’? Yeah, exactly. Read the full article here.

***

So. That was the intelligent spiel from the intelligent people.

Now here’s what silly me thinks:

Isn’t there a hint of an inverse of the Cylon problem here? Like, why spend all this energy learning how to get inside the head of a human opponent? As I have always said, when situations prove intractable, it’s humans who are generally called upon to deal with them. Routine negotiations of the type that will continue to make the world turn (if not stop it from grinding to a shuddering halt) will still, I’m betting, end up with automated agents. And when they get stuck, or when someone throws flesh at them, they will calmly hand the situation over to Joe Bloggs at Diplomacy Inc’s brand new call center on the moon.

So. Wouldn’t it be more efficient to just study agent–agent interaction? And maybe feed ’em the sorts of problems they’ll *actually* want to hog to themselves? Something like matching trade agreements, right down to individual products or services. Or oil prices. Or something. Something where the wiggle room is finite (by which I mean planet-bound), where there is no walking away from the negotiation table, and where a century counts as mid-term planning.

Since the researchers in the article saw fit to rely on historical data, I think we should go to town with the idea. Feed agents every number ever crunched with regard to trade, economics and politics for a specific decade. Lock them all in a room, and give them each a simple goal: cut your nation’s emissions by x percent. Or increase your GDP by y percent. Go! Oh, and you’re all running on battery power, so make it snappy…
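Purely to make the thought experiment concrete, here’s a toy sketch of the ‘lock them in a room’ setup. Everything in it is invented: the agents, the targets, the ‘battery’ budget and the crude concession rule are placeholders for whatever a real negotiation protocol would look like.

```python
# Back-of-the-envelope sketch: a handful of agents, each with one hard goal,
# trading concessions until they all hit their targets or the table deadlocks.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    goal: str           # e.g. "cut emissions" or "grow GDP"
    target: float       # required improvement, in percentage points
    achieved: float = 0.0
    battery: int = 20   # rounds of negotiating power left ("make it snappy")

    def wants_more(self):
        return self.achieved < self.target

def negotiate(agents, max_rounds=100):
    for round_no in range(1, max_rounds + 1):
        progress = False
        for a in agents:
            if a.battery <= 0 or not a.wants_more():
                continue
            # Invented rule: an agent extracts a concession from the least
            # satisfied counterpart that still has battery left.
            partners = [b for b in agents if b is not a and b.battery > 0]
            if not partners:
                continue
            partner = min(partners, key=lambda b: b.achieved / b.target)
            a.achieved += 1.0       # one percentage point per concession
            partner.battery -= 1
            a.battery -= 1
            progress = True
        if all(not a.wants_more() for a in agents):
            return f"agreement after {round_no} rounds"
        if not progress:
            return f"deadlock after {round_no} rounds"
    return "ran out of rounds"

if __name__ == "__main__":
    table = [
        Agent("A", "cut emissions", target=5),
        Agent("B", "grow GDP", target=8),
        Agent("C", "cut emissions", target=3),
    ]
    print(negotiate(table))
    for a in table:
        print(a.name, a.goal, f"{a.achieved:.0f}/{a.target:.0f}", "battery:", a.battery)
```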

I wonder if the result would be utter and complete deadlock. Who knows? It’s worth finding out. If there are actual usable results, the next step would be to feed the resulting agreements into a simulation that lets us play back the following decade, so that we can compare the auto-negotiated world with the world as it actually was.

Now this post is in danger of bleeding into my giant-history-machine idea, which would only be followed by why-has-no-one-built-this!!! ranting… So I’ll stop… right… HERE.

***

– image credit: some ol’ pic of President Gorbachev and President Reagan signing off on something. Green robot head (Reagan) is a picture by user ‘thomasbrant’ on Flickr; Mr clock-radio head (Gorbachev) came from here
