Lessons From Creating My First Text Adventure
When I write about the greatness of text adventures, I pretend they are easy to make. They are, compared to many other types of games, but it’s still a bit of a lie. They’re hard to make.
I made one and submitted it to ParserComp 2025, and here’s what I learned.
My first text adventure: Lockout
Since ParserComp voting has opened, the game is now public, and you can play it in your browser. The game itself reveals what it is about as you progress, so I won’t write too much about it here. You should be able to finish it in 30–60 minutes. Don’t feel bad about needing hints, especially if you’re new to text adventures.
Lockout is my third attempt at making a text adventure. The first two failed because of ambition.
Scope and richness of world modelling
It’s very easy to accidentally try to create too large a text adventure. My first attempt was way too ambitious, and would have taken months to finish. I still love the idea for that, but it had to go. I picked another idea that was much smaller in scope, but it, too, grew too ambitious. I restarted one more time, from an idea that was even more pared down, and I was actually able to finish that game. With the minimalistic scope it took maybe fifteen hours of active work to create most of the game, and then almost as much time again to fix the 57 problems I received feedback on from beta testers.1 This is a large time investment for me, which makes it unlikely I’ll create another one any time soon, even if I’d really like to.
Scope in text adventures is complicated, because there are two dimensions along which scope can vary: breadth and detail.
- A broad game might have many locations, items, and events, but each is relatively sparsely implemented. Actions get generic responses, and items are mainly generic scenery without interesting interactions. This is the style of some of the earliest text adventures; they had to do it that way due to technical limitations of the time.
- A detailed game might have few locations, items, and events, but those that do exist are modeled in high detail and interact realistically with each other. Modern development tools afford insane levels of detail, and it’s up to the author to set a reasonable limit to the detail included.
Lockout is in the narrow-but-detailed category. Making a broad and detailed game takes a huge effort, so for the hobbyist it boils down to finding the right tradeoff between breadth and detail.
For better or worse, the community encourages detailed games, and among modern games, a text adventure that is broad and not detailed stands out as lacking in implementation. A useful comparison to study this difference is The Plot of the Phantom: a newly released game which was designed in the 1980’s, but implemented using modern tools. It contrasts with Lockout in that it is mainly in the broad-and-sparse camp, as the following interactions illustrate.
CAVE ENTRANCE
You stand at the base of a sandstone cliff, at the mouth of a large cave. Behind you to the south, a misty Appalachian forest stretches in all directions. A weathered sign made of pressure-treated lumber and painted in the style of National Park signage stands to the right of the cave entrance.
The cave entrance is to the north.
Oh, wow, what sort of cave is this? > look at cave
You can’t see any such thing.
Huh. Well, what can I find in the direction I came from? > go south
You can’t go that way.
Why not? I suppose what remains is to enter the cave. > enter cave
You can’t see any such thing.
Oh right, the cave doesn’t exist in the world, only in the description. Maybe I have to go inside. > go in
You can’t go that way.
I could have sworn … I guess I’ll go north to enter. > go north
LARGE CAVERN
You step into a large open space, a cavern created by seeping groundwater dissolving the sandstone over many decades. The sandy floor is strewn with rocks and the damp ceiling hangs low, just a foot or two above your head.
You’re not the first to find this place. Crumpled beer cans and cigarette butts are everywhere, and the cold, blackened remains of a firepit sit in the cavern center, and the eastern wall is blanketed with a remarkable amount of multicolored graffiti.
To the north, a passage disappears into the rock, while to the west you can see a hole just barely wide enough to crawl through.
I’m worried about the safety of the ceiling. Can I get more information? > look at ceiling
You can’t see any such thing.
That’s odd. Let’s take some stuff we found here before we move on. > get beer cans and cigarettes
That’s hardly portable.
This goes on for the rest of The Plot of the Phantom. There are a lot of things that appear to exist, but when the player tries to examine them, they aren’t really there, or they behave in unexpected ways.
Don’t get me wrong! I finished The Plot of the Phantom and I had a lot of fun with it. You can make sparse games fun. However, this sort of sparsity does tend to annoy modern players. One of the reasons is that when there are four hundred objects mentioned in the game, but only forty of them actually exist in the game world, the player can easily miss vital clues because they stop examining things, having grown tired of the stock “You can’t see any such thing” response.2 Another common example of beta testing criticism is when the player is given e.g. a matchbox and is somehow able to stuff something like a keyboard inside it. The matchbox is technically a container and can contain things, but a detailed world model will prevent the player from inserting large things into small things. A sparse world model will not care.
Detailed world models are more fun for the player, but harder for the developer. The complexity of the implementation scales multiplicatively thanks to interactions. When we add another item, we have to account for how it interacts with everything else already in the game. When we add another action the player can perform, we now have to contend with the fact that the player might perform that action on every single object we already have in the game.3 My first attempt at making a text adventure included the ability to levitate! It’s a great mechanic that can be used creatively, but it can also be exploited to skip a lot of puzzles and enter restricted areas unless the rest of the game is very carefully designed around it.
Scope can be measured in locations
Text adventure complexity used to be measured in number of objects, going back to the time when primary memory was the main limiting factor in how complex text adventures could become. The most popular text adventure systems of the 1980’s were limited to 256 objects, where locations also count as objects, as do some other things like directions. In practice, these games typically had
- 80-ish locations,
- 40-ish significant objects, and
- 100-ish scenery objects.
The rest of the object budget was used for housekeeping, directions, etc.
Infocom, a highly influential developer of text adventures, released 34 games over their most active decade. These games had a total of about 2800 locations. I don’t have exact employment numbers, but let’s say they had 70–100 implementer-years over that decade. This works out to maybe 2300–4000 implementer-weeks, accounting for various inefficiencies and lossages along the way. Spread over roughly 2800 locations at about 40 hours per week, that gives us an estimate of 30–60 implementer-hours per location, which – satisfyingly – matches my experience with the single-location Lockout.4 Although it should be said that I have the benefit of modern tooling, so they certainly had to accomplish more with their time.
On the one hand, the 1980’s density of two interactable objects per location is a little low for a modern game, but on the other hand we have modern tooling, so maybe the two cancel out: we can build more detailed games at less effort, giving us similar development times for similar location counts. This means we can pick how many hours we’re willing to spend on making a text adventure, and count backwards to how many locations we’ll be able to make in that time. That’s useful for scoping a design.5 It also reveals how expensive creating text adventures can be: If we, as hobbyists, can spend a couple of hours three days a week on it, a relatively small ten-location game will take as long as a human pregnancy! That said, someone who is more experienced is probably going to be able to go much faster with modern tooling. I have heard some people are able to make text adventures as fast as one hour per location, although I suspect these are for speed competitions and will take a few more hours per location to flesh out.
Selecting a small enough scope and managing to design a fun game around it is probably one of the most difficult aspects of text adventure design, with the caveat that I’m not the right person to talk to about design, since I’m a terrible game designer, and I’ve only just dipped my toes into the world of making text adventures anyway.
I think one of my key realisations from making text adventures is that they are a form of simulated world, in which the player is relatively free to act according to their will – and often the solution to puzzles comes out of this freedom, so players expect the world to react faithfully to their freedom. At the same time, the simulated world does not exist: all the developer has to do is present to the player words that make the player willing to imagine that the world exists.
Non-scope design challenges
One of the challenges I didn’t expect was that puzzles are more difficult than we think. I had constructed a set of eminently logical, generously hinted puzzles for Lockout. Then my first set of beta testers struggled with finishing the game. I ended up removing some puzzles entirely, and hinting the others more strongly.
It turns out puzzles feel much easier to their author than they are to any player. If the author designs puzzles they think are just about doable, then even intelligent players will have to look at hints or abandon the game. We should aim to over-hint the puzzles. It is also useful to create alternative solutions, so that if the player gets the right idea but fails to implement the solution precisely according to the steps we imagined, they can still get past the puzzle. Players don’t like having solved a puzzle in spirit only to be stopped because they didn’t use the exact method we intended. The game should take the hint, draw the rest of the owl on its own, and let the player progress.
Technical challenges
Beyond the issue of good game design, there are technical challenges related to presenting words such that the player is willing to imagine. I can think of three:
- Making a conventional parser, i.e. the bit of the game that reads the player’s commands and turns them into in-game actions;
- Generating text that is grammatically sensible when different objects are spliced into templates; and
- Creating a consistent world model that governs interactions between items and actors.
We briefly touched on world modelling already, and much of it is about good game design more generally. I’m the wrong person to talk about that. Let’s just say that the world model does not have to be realistic, but it has to be consistent. If one part of the game requires the player to LICK STAMP, then the player should also get a reasonable response when they try to LICK APPLE. A standard “that is not something you can do” response shatters immersion. If there’s a hat and a coat rack, should the player be able to hang the hat on the coat rack, even if that does not fill any function in the game? In a detailed world model, sure.
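To make that concrete in Inform 7 terms (the system I ended up using – more on it below), the usual pattern is a generic but in-world response for the whole verb, plus special cases for the objects where the verb actually matters. A minimal sketch, assuming made-up objects – the stamp and the licking action here are illustrations, not code from Lockout:

Licking is an action applying to one thing.
Understand "lick [something]" as licking.

[A generic, in-world response so the verb never feels unimplemented.]
Report licking:
	say "You give [the noun] a cautious lick. It tastes about how you expected."

[The one case where licking actually matters to the game.]
The player carries a stamp. The stamp can be moistened.
Instead of licking the stamp:
	now the stamp is moistened;
	say "The glue tastes terrible, but the stamp is now ready to stick."

With the Report rule as a catch-all, LICK APPLE gets a sensible in-world reply for free, and only the stamp needs bespoke handling.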
Let’s speak more about the parser and text generation.
A conventional parser
Someone who has not played text adventures before is likely to struggle at the prompt. It might play out like this:
> I’d like to try knocking again
That’s not a verb I recognise.
> Look around.
You can’t see any such thing.
> How do I get out?
That’s not a verb I recognise.
The openness of the prompt breeds confusion and frustration. It tricks beginners into thinking the game speaks English, but of course, the game does not speak English. The game speaks a specific conventional language that looks like English if you squint. These are all commands that could work:
SMELL
EXAMINE DOOR
TAKE SCREWDRIVER
HIT COMPUTER WITH KEYBOARD
SWING ON ROPE
DROP ALL TOOLS BUT HAMMER
Parsers typically ignore articles, so there’s no difference between TAKE SCREWDRIVER and TAKE THE SCREWDRIVER. The pattern behind the commands above is that they consist of an action (a verb phrase, like SMELL or SWING ON), optionally followed by a subject (like COMPUTER or DOOR). Sometimes they take an additional object after a WITH modifier (like WITH KEYBOARD).
Modern parsers understand that ALL TOOLS BUT HAMMER is a meaningful description of a set of objects. They will look in your inventory, find all the things that are tools, and then remove the hammer from that set before dropping them.
A player may also expect to be able to chain commands together, and refer to items in earlier commands, such as typing
EXAMINE SCREWDRIVER. TAKE IT.
and having both actions play out. Writing this kind of parser is non-trivial, to say the least.
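This is also the strongest argument for building on an existing system rather than writing our own parser: in Inform 7 (which I’ll get to below), hooking new command patterns into the existing parser is a matter of a few declarations, and articles, ALL, BUT, and pronouns like IT come along for free. A rough sketch – the action names are my own inventions:

Swinging on is an action applying to one thing.
Understand "swing on [something]" as swinging on.

Hitting it with is an action applying to two things.
Understand "hit [something] with [something]" as hitting it with.

Report swinging on:
	say "You swing back and forth on [the noun] for a while."

Report hitting it with:
	say "You whack [the noun] with [the second noun]. Nothing useful happens."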
Text generation
English is full of weird rules. The parser can get away with ignoring them by not really speaking English, but the text presented to the player does not have the same luxury. If we have a template string that indicates someone stands quiet when spoken to, we need it to produce the article in “The guard stands quiet”, but it must never produce “The Monica stands quiet” for a named character.
A text generation system, then, must know some basics about how to form grammatically valid English sentences. It needs to know about plurals, proper names, articles, pronouns, etc. Writing that kind of system is perhaps easier than a parser, but still not a walk in the park.
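Inform 7, for example, pushes this knowledge into its say substitutions: writing "[The noun]" in a response prints a definite article for ordinary things but omits it for things marked proper-named. A small sketch, with a guard, a character called Monica, and a made-up greeting action, none of which are from Lockout:

The Lobby is a room.
The guard is a man in the Lobby.
Monica is a woman in the Lobby. Monica is proper-named.

Greeting is an action applying to one thing.
Understand "greet [someone]" or "talk to [someone]" as greeting.

Report greeting:
	say "[The noun] stands quiet."

Greeting the guard prints “The guard stands quiet.”, while greeting Monica prints “Monica stands quiet.” – never “The Monica stands quiet.”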
Text adventure-making environments
Text adventures are superficially simple. Anyone with some rudimentary Python knowledge could make one! But the above technical challenges make this less than desirable. It’s as they say: if you want to make a game engine, feel free to. But if you want to make a game, use someone else’s engine.
There are three popular, mature programming environments for making text adventures: Inform 6, Inform 7, and TADS 3. These come with a conventional parser, powerful text generation tools, and they have a primitive but consistent world model built in by default: out of the box, they understand actors can move around and perform actions, objects stay where they are dropped, some objects are fixed in place, containers can have objects inside them, etc.
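To illustrate how much of that comes for free, the following is – as far as I understand the defaults – a complete, compilable Inform 7 source for a (boring) game: the player can walk between the rooms, pick up the mop, put it in the box, and get sensible responses throughout, without any further code. The rooms and objects are invented for this example.

The Broom Closet is a room. "A cramped closet that smells of dust."
The Hallway is north of the Broom Closet.

The mop is in the Broom Closet.
The cardboard box is an open container in the Hallway.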
Both Inform 6 and TADS 3 are more traditional object-oriented languages. I have played with Inform 7, because I was intrigued by its model of computation.
Inform 7
The fundamental units of Inform 7 code are the following; a small sketch combining them comes right after the list.
- Rules, which are lists of instructions that should be executed in response to events. Rules can prevent further rules from running.
- Actions, which are attempts by actors to manipulate the world, leading to sequences of events being fired off.
- Filters (I don’t know their real name) which determine when rules apply.
- Properties, which characterise objects and serve as the state of the game world.
- Relationships, which characterise how objects relate to each other. These form a sort of graph for which there are manipulation utilities in the standard libraries, e.g. pathfinding.
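To give a feel for how these pieces fit together, here is a small sketch of my own – the well, the penny, the liking relation, and the polishing action are all invented for illustration:

The Wishing Well is a room.

A coin is a kind of thing.
A coin can be shiny or tarnished. [a property]
The old penny is a tarnished coin in the Wishing Well.

Liking relates various people to various things. [a relationship]
The verb to like means the liking relation.

Polishing is an action applying to one thing. [an action]
Understand "polish [something]" as polishing.

Instead of polishing a tarnished coin: [a rule, filtered to tarnished coins]
	now the noun is shiny;
	now the player likes the noun;
	say "[The noun] comes up gleaming. You rather like it now."

The rule fires only when its filter matches, updates two pieces of state (a property and a relation), and prevents any later rules from handling the action.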
I’m not very good at Inform 7 yet, but I have a vague sense that this paradigm is actually really powerful and could be used to good effect for other types of software too, though I’m early in my explorations of it. One of the co-inventors of Inform 7 has described her early experience with the paradigm.
Inform 7 is a little hard to learn, because nobody sits down and explains its model of computation, nor the syntax. People are assumed to learn by copy-pasting and tweaking examples, and I’m not that sort of person anymore. I want to try to understand first and do later.
The syntax of Inform 7 is often the thing people react to first. Here’s an excerpt from a popular Inform 7 library which handles keys and doors a little more intelligently than the default.
A keychain is a kind of supporter that is portable.

Instead of putting something which is not a passkey on a keychain (this is the limiting keychains rule):
	say "[The noun] [are] not a key." (A).

The keychain-aware carrying requirements rule is listed instead of the carrying requirements rule in the action-processing rules.

This is the keychain-aware carrying requirements rule:
	if locking or unlocking something with something which is on a keychain which is carried by the actor:
		continue the action;
	abide by the carrying requirements rule.
Yes, that looks like English. No, it is not English. For some people, this syntax is a big deal, either because they find it easy to read, or difficult to write, or both. I’m not one to care much about syntax, but I will say this style of syntax does make it a little more difficult to learn the language.6 There are these subtle variations on phrasing that sometimes matter a lot and sometimes don’t matter at all, which make the first few days of learning really frustrating. For example, now the chest is not open is a valid statement, as is now chest is closed, but not the chest is now closed. To update a property of an object, we must use specifically the syntax now <object> is <property>, where the article “the” is optional, and “closed” is an automatically-derived synonym for “not open”. Things you just have to know.
The rules, properties, and relationships systems of Inform 7 are very powerful. When used properly, these give us the ability to construct complex systems out of simple pieces, with the programming environment itself filling in many of the blanks. On the other hand, it is also easy to misuse Inform 7, and focus on the concrete rather than the general. I wish I could give good examples here, but I’m just not skilled enough with Inform 7 to do that. I just know that in the process of making Lockout, I ended up duplicating a lot of code that could have been handled more generally, with fewer bugs to boot.
The one thing I am envious of Inform 6 users for is PunyInform. That is an alternative standard library for Inform 6 which can generate smaller binaries7 (called story files in the lingo) usable on even weaker hardware. I don’t care for weaker hardware, but I like the idea of smaller binaries as a creative constraint, forcing limited scope on the game.
Learning the language and troubleshooting
The documentation for Inform 7 is not great. It’s effectively a long cookbook with small examples for specific mechanics. The examples are good, but it’s hard to find what one is looking for. Jim Aikin’s Inform 7 Handbook (pdf) does a slightly better job of introducing the language8 (note that this is version 3 – when you search the web for it you run a large risk of getting version 2), but I’m still missing an “Inform 7 the Hard Way” guide that starts from the core fundamentals.9 (But why don’t you write it, kqr? Because I’m not yet good enough at the language to. Maybe if I stick with this for another year I can do that.) There is also Inform 7 For Programmers by Ron Newcomb, but it seems a little confusingly structured to me too – it starts by talking about computation (functions and variables), which are not the most important parts of Inform 7 code. At least not as far as I’ve been able to tell. More recently, I’ve also come across Inform 7 Concepts and Strategies, which is in the direction I want, but too short and shallow, and lacking in examples showcasing the variety of functionality possible with the fundamentals.
Another key source of language knowledge comes from the built-in debugging commands. SHOWME APPLE prints details of an object. SHOWVERB KICK helps figure out what is triggered by a particular command. ACTIONS turns on explicit printing of implicit action conversions, such as when LOOK UNDER triggers a SEARCH action. For maximum verbosity, the RULES debug command turns on explicit printing of every rule that gets evaluated in response to an event. This can help pinpoint why things are printed the way they are.
Once we have found something interesting using the above debug commands, we might want to find out how that thing was created. We can get at this through the index in the IDE somehow, but it is easier to search the HTML dump of the Standard Rules provided by Zed Lopez. These are the full standard libraries that run by default in all Inform 7 projects, unless they are explicitly disabled.
There are extensions to Inform 7. If the standard rules are like the standard library, then extensions are like third-party libraries; these add generic functionality and simulation we might want in our games. The most popular ones are available at the Friends of I7 Extensions listing.
With some luck, our syntax questions might be answered by the Inform 7 cheatsheet made by Mark-Oliver Reiser.
Publishing a text adventure
First off: beta testing is critical. I had self-tested my game to pieces. I had tried everything, looked at it from every angle, proof-read every sentence. Yet when I handed over the game to its beta testers, they found over fifty problems with it, including some that could easily have made an ordinary player stop playing out of frustration.
If we plan on releasing our game in a competition (which is a good idea – see below), we have to keep in mind while recruiting beta testers that most competitions have rules specifying entries must be previously unreleased. This means we cannot publicly post a link to a testing build – we must send it privately to people who volunteer to beta test our game without having seen it before.
Creating a testing build
Generating a testing build for beta testers is not too difficult: there’s a menu option in the Inform 7 IDE for that. This testing build retains some cheat commands like purloin (move anything into the player’s inventory) and gonear (teleport the player to any other location).
Testers conventionally record a transcript of their experience, which shows their input and the game’s output. In this transcript, they often inject their thought processes or other comments by prefixing their input with something like an asterisk. Handling such a remark gracefully in Inform 7 (avoiding the “That’s not a verb I recognise” error) is a matter of adding a command preprocessing rule:
First after reading a command:
	if the player's command matches the regular expression "^<\p*+='[apostrophe]>":
		say "(Noted.)";
		reject the player's command.
I have seen other versions of this rule online, but they tend to over-complicate the regex because, I suppose, not a lot of people know how regexes work.
Although they belong to the finishing touches, I’d still suggest including HELP and ABOUT commands already in the testing build. I haven’t found a specific guide for how to do this online, so here is how it was done in Lockout:
Understand "help", "hint", "hints", "instructions" as being lost. Being lost is an action out of world. Carry out being lost: say "PLAYING TEXT ADVENTURES[line break]This is a parser-based ..." Understand "about", "info", "credits" as being curious. Being curious is an action out of world. Carry out being curious: say "Lockout is a one-room puzzle. Your goal is to get out of ..."
The help response should include a link to puzzle hints, unless there are adaptive hints programmed into the game.
Puzzle hints
On the topic of hints: to be respectful of our beta testers’ generously donated time, they should get access to a set of puzzle hints. That way, they don’t waste time on puzzles which may be badly designed, and they can instead move on and continue to provide feedback on the later parts of the game. My first set of hints also received beta tester feedback. They were (a) not very helpful, and (b) encoded in rot-13.
Apparently rot-13 is old-fashioned. Writing the HTML and CSS for modern-style hints10 (see e.g. the hints for Lockout) is easy, but a bit tedious. I made a generator for such InvisiClues-style hints, where we paste in plain-text hints and get the HTML out of it. Feel free to use it for your game, if you’d like!
As for making helpful clues, I’m still not sure how. The first set I generated was based on Socratic questioning, trying to draw the answer out of the player. This was deemed annoying. I had also forgotten that the final hint should spell out the specific correct sequence of commands. The current set of hints for Lockout is still in flux, but better than the first set.
Publishing to competitions and IFDB
Most text adventures these days are published at competitions. There is no list of which competitions exist, but these are the ones I’ve gathered are active:
- IFComp – The big deal. Running for many years now. Two hours of playtime is the generally agreed-on scope limit for entries. Accepts entries in the late summer.
- Spring Thing – Started out as a lower-key alternative to IFComp, but has grown very large too. Accepts entries in the spring.
- IntroComp – A competition where entries are expected to be unfinished, with the aim of earning quality feedback. Prizes are awarded only to those entries that then go on to be finished within a year of their entry. Accepts entries in the summer.
- ParserComp – A smaller competition for parser-based text adventures. Accepts entries in the summer.
- Text Adventure Literacy Jam – The TALP accepts entries that try to teach inexperienced text adventurers how to play, and entries are judged partly on how well they accomplish this. Accepts entries in the late spring, I think?
Relative to the size of the text adventure playing community, there are a lot of text adventures released in any given year; it is easy for a newly released game to fly under the radar. Thus, it is encouraged to enter a new release into a competition, even if we are not of the competitive kind.
If the game is released in a competition, someone else is likely to make an IFDB page for it once the release is public. If not, this is the time to do it ourselves. And then, we’re done!