I want to create computer programs that engage in meaningful two-way creative collaboration with human users. For this to take place, I think the notion of “offers” is going to need to become more widespread in human/computer co-creativity research.

In solo creative work, you don’t ever necessarily need to articulate what it is you’re trying to make. You can take action in the medium, see what happens in response, and iteratively refine the actions you’re taking in order to produce an artifact that more closely matches your intent. A classic creative feedback loop – and all without ever having to explain yourself.

When another person gets involved as a creative collaborator, though, things get tricky. You can’t just take action anymore – they’ll be taking action too, and if your actions aren’t coordinated, you’re likely to end up with a mess. Some creative forms (like improv) seem at first glance to operate on the premise that action is sufficient for creative coordination and you don’t need to explicitly metacommunicate with your collaborators about intent. But even improv performers do a lot of metacommunication about intent outside of individual scenes. And in most creative forms, effective collaboration involves a ton of dialogue and negotiation about what you want to accomplish, how you want to accomplish it, and why.

There’s something of a gap in existing co-creativity research around the question of how to handle creative metacommunication. In co-creative platformer level design, for instance, the state of the art involves action-level turn-taking between the human and computer system, but creative intent (on both the human and computer sides) is left almost totally implicit. Either you guess at what your collaborator is trying to achieve from their actions alone, or you just learn to treat them as inscrutable and alien. And if they’re doing something you don’t want, you often can’t even tell them not to do it – let alone tell them why you don’t want to do it in terms of the creative intent you’re trying to enact, as you would with a human collaborator. Negotiating creative intent with your machine collaborator (e.g. by deciding together which creative goals should take precedence over others in a situation where they can’t all be fully realized) is right out.

Where existing systems try to handle creative intent, they usually do so by trying to implement more and better ways for the machine to infer what the human intends from their actions alone. This is neat when it works, but not sufficient: insofar as actions speak, they always say simultaneously too little and too much. Not only do actions fail to reliably communicate their own motivations, they also hint at myriad other possible motivations that the person performing the action never even considered.

This is why I’m excited about the notion of explicit offers in creative collaboration. An offer, in essence, takes a tacit inference about intent and makes it explicit: “Based on what’s happened so far, it seems like we’re trying to achieve this overall goal. I could further advance that goal by doing X. What do you think?”

The term “offer” as I’m using it here originally comes from improv, where it refers to any action that somehow “advances” the scene. Generally improv actors are supposed to accept any offers they receive, since there’s no time to deliberate on whether the overall direction implied by an offer is good or not. But here I’m generalizing the concept a bit. A lot of co-creative systems already implicitly provide users with little details that they can either seize on and elaborate or choose to ignore. (Consider, for instance, “extrapolative narrativization” in simulation-driven emergent narrative games.) But crucially, I think an offer is not just a creative action – it’s a creative action accompanied by some explicit means of acceptance or rejection. The structure of an offer not only provides the recipient with something to react to, but also provides the giver with a way to determine whether the recipient is interested in taking the shared creative work in a particular direction. If the system makes an offer and the user rejects it, the system gains information from the rejection that it can incorporate into its understanding of what the user is trying to create.
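To make the shape of this concrete, here's a minimal sketch in Python of what an offer might look like as a data structure. None of the names below come from any real system – the point is just that the proposed action and the explicit accept/reject channel travel together, and that the verdict flows back to whoever made the offer.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch only; not any actual co-creative system's API.

@dataclass
class Offer:
    description: str           # explicit statement of the inferred shared goal
    apply: Callable[[], None]  # the concrete action taken if the offer is accepted

def make_offer(offer: Offer, ask_user: Callable[[str], bool]) -> bool:
    """Present the offer, apply it only on acceptance, and report the verdict
    so the caller can update its model of the user's creative intent."""
    accepted = ask_user(offer.description)
    if accepted:
        offer.apply()
    return accepted
```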

In the co-creative storytelling game Why Are We Like This?, we’ve been implementing offers via the automatic detection and surfacing of what we call “situations”. First, we query the database for collections of interrelated characters and narrative events that match a certain specified pattern – let’s say the baseline minimum requirements for “love triangle”, or “escalating cycle of vengeance”. Then we present these matches to the user as potential situations, and give them the choice to take an action that reifies the situation in question. Once a situation is reified, new actions related to that particular kind of situation are unlocked between the characters involved: for instance, two characters engaged in an escalating cycle of vengeance might unlock a set of actions relating to sworn enmity that enable these characters to act with hostility toward one another without any other immediate motivation. But if you don’t want to tell a story that includes sworn enmity, you can also reject the offer and choose to develop some other storyline instead.
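Here's a rough sketch of that detect → offer → reify loop. This is not the actual Why Are We Like This? implementation (which queries a much richer story database); the pattern, action, and character names are hypothetical stand-ins meant only to show the flow.

```python
from itertools import combinations

# Purely illustrative: a crude stand-in for the game's pattern queries.
def find_vengeance_cycles(events):
    """Find pairs of characters with reciprocal 'harm' events."""
    harmed = {(e["agent"], e["target"]) for e in events if e["kind"] == "harm"}
    people = {name for pair in harmed for name in pair}
    return [
        {"pattern": "escalating cycle of vengeance", "characters": (a, b)}
        for a, b in combinations(people, 2)
        if (a, b) in harmed and (b, a) in harmed
    ]

# Actions unlocked between the involved characters once the situation is
# reified (again, hypothetical names).
UNLOCKED_ACTIONS = {
    "escalating cycle of vengeance": ["swear enmity", "snub", "sabotage"],
}

def offer_situation(candidate, user_accepts):
    """Surface a candidate situation as an offer; reify it only on acceptance."""
    if user_accepts(candidate):
        return UNLOCKED_ACTIONS[candidate["pattern"]]
    return []  # rejected: the storyline stays undeveloped

# Example: two characters who have harmed each other repeatedly.
events = [
    {"kind": "harm", "agent": "Vera", "target": "Sam"},
    {"kind": "harm", "agent": "Sam", "target": "Vera"},
]
for candidate in find_vengeance_cycles(events):
    new_actions = offer_situation(candidate, user_accepts=lambda c: True)
```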

In practice, this can start to feel a bit like the palette negotiation that takes place in the tabletop story-making game Microscope. If everyone in your story is supposed to be uncomplicatedly polyamorous, you can reject the system’s offers to help you develop potential love triangle plotlines. But if you’re deliberately delving into the most lurid corners of soap-opera space, the system can begin to recognize a pattern of accepted offers in that direction and offer increasingly enthusiastic support.
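One simple way to imagine that pattern recognition – again as a hypothetical sketch, not a description of how the game actually works – is to keep a running tally of accepted and rejected offers per situation type, and let the tally modulate how eagerly similar offers are surfaced later.

```python
from collections import defaultdict

# Hypothetical sketch; the weighting scheme is invented for illustration.
class OfferHistory:
    """Accumulate accept/reject verdicts per situation pattern and use them
    to weight how eagerly similar offers are surfaced in the future."""

    def __init__(self):
        self.net = defaultdict(int)  # pattern -> accepts minus rejects

    def record(self, pattern: str, accepted: bool) -> None:
        self.net[pattern] += 1 if accepted else -1

    def weight(self, pattern: str) -> float:
        # Repeated acceptances push the weight up; repeated rejections push
        # it toward zero, effectively retiring that corner of the palette.
        return max(0.0, 1.0 + 0.5 * self.net[pattern])
```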

Note that offers aren’t enough to solve the problems outlined here on their own. The main issue is that they’re too local: they don’t help as much with longer-term or higher-level negotiation of creative intent. For that you need a proper intent language, which I hope to say more about in future posts. But a co-creative system that reasons in terms of offers still represents a big step up from a system that reasons in terms of actions alone.

[This is a crosspost from the new Mixed Initiatives group blog, which will host writing on similar themes from a number of scholar-practitioners in expressive computation, myself included. Check it out!]
