From command to conversation: what will the user interface look like in five years for navigating laws, regulations, and standards?

Sean McGrath, Co-founder and Head of Innovation at Propylon | 12 May 2025 | 6 mins
In 1999, Neal Stephenson published an essay called In the Beginning… Was the Command Line, looking at the history and philosophy of operating systems and user interfaces. Stephenson championed the command line interface (CLI), which saw users interacting with software via pseudo-English commands, each formatted as a line of text. Today, we are all accustomed to graphical user interfaces (GUIs), which use visual indicators such as buttons, windows, and the ubiquitous pointing device. This graphical approach is also known as the WIMP interface (windows, icons, mouse, pointer) and was pioneered at Xerox PARC in the 1970s.
 
During the first wave of the popularization of computers, there was a revolt against a companion many of us today couldn’t imagine life without: the mouse. Mainframe users were accustomed to directing the computer’s actions via the keyboard, much like many of today’s computer gamers. Thinkers like Stephenson saw the mouse-driven GUI as a step backwards, though Stephenson did later change his mind.
 
Yet, there is much evidence that the way we think about interacting with computers is on the cusp of another profound transformation as a result of AI.

Revisiting the command line

In the early days of the PC, the user interface revolved around pseudo-English commands provided by an operating system called MS-DOS. Entering ‘DIR’ (short for ‘directory’) prompted the system to list the files within a particular folder. These were also Microsoft’s nascent years, and MS-DOS reigned supreme, beating out IBM’s PC-DOS and other command-line contenders such as CP/M. In fact, the CLI in PCs never truly vanished; Microsoft Windows today still has a command line built into it; two of them, actually (the Command Prompt and PowerShell).
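A typical MS-DOS session looked something like this (the file listing below is abbreviated and purely illustrative):

```
C:\> DIR
 Volume in drive C has no label
 Directory of C:\

COMMAND  COM     47845  04-09-91   5:00a
AUTOEXEC BAT       128  04-09-91   5:00a
CONFIG   SYS        96  04-09-91   5:00a
        3 file(s)      48069 bytes

C:\>
```

Everything the user did, from listing files to launching programs, flowed through terse typed commands like this one.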
 
This period was also the heyday of WordPerfect, the leading word processing application for MS-DOS and particularly popular with legal professionals. It represented an early step towards democratizing the ability to create legal documents, such as contracts, on a computer. WordPerfect became a staple in law school curricula. Microsoft, on the other hand, struggled to get a foothold in this market.
 
WordPerfect’s appeal lay in its minimalist design, focusing users on the text. Such was the command-line nature of WordPerfect that the software, once launched, essentially left you with a black screen. Users were expected to know how to use the “function keys” on their keyboards to access its features.

Buttons: rise, reign, retreat?

The genesis of GUIs can be traced to the 1970s and the work of companies like Xerox PARC. However, it was Apple, with the Lisa and then Macintosh, that successfully packaged a user-friendly GUI for the mass consumer market.
 
Over the ensuing decades, the GUI achieved near-universal dominance in desktop and mobile devices. Buttons, windows, and the mouse/touchscreen became ingrained in our digital experience. Interacting with a computer shifted from typing a form of language at a command line to clicking visual representations of things.
 
In the 1990s, another important player was quietly growing in tandem with the exponential growth of the World Wide Web: the search engine. Before Google emerged, search engines like AltaVista were popular options. Google, however, became the category king, notably with a user interface characterized by (intentional) simplicity. To this day, the Google home page greets users with a search box on an otherwise blank page.
 
Now things are starting to change once more.

Human language: the hot new programming language

In early 2023, Andrej Karpathy, a founding member of OpenAI, posted on X that English had become “the hottest new programming language.” This reflects the rise of prompt engineering coupled with the rise of large language models (LLMs) and other generative AI (Gen AI) systems. The parallels with the command line are clear.
 
Gen AI is also becoming increasingly embedded into the search experience. Users of Microsoft Edge, for example, are no longer presented with just the search bar. They can now essentially ‘chat’ with the search engine to refine searches, ask for summaries, etc.
 
The leaps forward in AI technologies are simply astounding. You can use Google’s NotebookLM to have a chat with your content. You can point Gemini at a tube of toothpaste and get it to tell you exactly what the ingredients are and what they do. Companies like Notion are releasing AI tools that do it all – search, generate, analyze, chat.
 
The evolution feels like a return of sorts, not just to Neal Stephenson’s essay but further back to the vision of computer scientist Grace Hopper. In the 1950s, Hopper, with incredible foresight, championed the notion of programming languages being based on English words and sentences. The concept was dismissed on the basis that computers didn’t understand English. Hopper went on to spearhead the development of programming languages such as COBOL, an English-like language still in use today.

The era of co-pilots and conversational experiences

The proliferation of Gen AI has brought the term ‘co-pilot’ to the forefront. Co-pilots are becoming ubiquitous in all sorts of applications, from word processors to CAD programs to web browsers. Even programmers are finding these new chat-based, co-pilot forms of interaction very useful. We now have a digital mechanism for implementing what is known as ‘pair programming’. It’s no longer developer A and developer B sitting beside each other; it’s developer A being assisted by a digital co-pilot, developer B.
The entire user interface modality is now shifting back towards text and language as the primary means of interaction, as opposed to more and more menus and buttons to be learned and clicked.
 
The ‘low-code/no-code’ movement, aimed at giving business users the tools to create their own software, has now clearly arrived, thanks to the natural language paradigm of co-pilots and AI agents. We are rapidly moving towards an era of ‘citizen programmers.’
 
Consider the scenario of a Microsoft Word user wanting to identify the ten largest files in their SharePoint drive, ranked by date and presented in a table. This is now achievable via a co-pilot conversation – no new buttons required. Agentic AI promises to take this further: I can say, ‘Remember that. I’ll ask you again next Tuesday at 12 pm.’
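To make the example concrete, here is a minimal sketch of what such a request boils down to behind the conversational surface. This is purely illustrative: it walks a local folder rather than calling the SharePoint API, and the function names are invented for this sketch.

```python
# Illustrative sketch of the "ten largest files, ranked by date" request.
# A real co-pilot would query the SharePoint API; here we walk a local
# directory tree as a stand-in.
import os
from datetime import datetime


def ten_largest_by_date(root: str):
    """Collect files under root, keep the ten largest, order them by date."""
    files = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            files.append((path, stat.st_size, stat.st_mtime))
    # Take the ten largest files, then rank those by modification date.
    largest = sorted(files, key=lambda f: f[1], reverse=True)[:10]
    return sorted(largest, key=lambda f: f[2], reverse=True)


def as_table(rows):
    """Render the result as a simple fixed-width text table."""
    lines = [f"{'File':<40} {'Bytes':>10} {'Modified':>20}"]
    for path, size, mtime in rows:
        when = datetime.fromtimestamp(mtime).strftime("%Y-%m-%d %H:%M")
        lines.append(f"{path:<40} {size:>10} {when:>20}")
    return "\n".join(lines)
```

The point is not the code itself but who writes it: in the conversational model, the user states the goal in English and the co-pilot supplies logic of roughly this shape on the fly.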
 
The core principle is empowering users to configure their own experiences and create custom functionality on the fly using natural language. The result? Goodbye buttons, or at least most buttons. But this is a positive farewell, aligning with the prescience of visionaries like Hopper.

Implications for the legal landscape: the rise of ‘vibe drafting’

This year, Karpathy also coined the buzzword ‘vibe coding’, describing the use of AI tools to handle the heavy lifting in coding. Writing in Artificial Lawyer last month, Antti Innanen brought a unique, parallel angle to the legislative domain – ‘vibe drafting’.
 
Vibe coding maps remarkably well onto the idea of drafting. Programming is, essentially, a formalized form of drafting, and there are interesting similarities between the two crafts.
 
A tremendous amount of writing code is the recombination of existing material conforming to tried-and-trusted idioms and patterns. A developer is highly unlikely to come up with a ‘sentence’ of Python that has never been written before. The co-pilot gives you access to that store of knowledge: here are the things that people like you have previously done in your situation. This is also exactly how much legal drafting works. Drafting attorneys rarely, if ever, start with a blank sheet of paper. In the case of legislation, regulation, and standards, they are always looking at the language that currently exists.
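The “store of prior patterns” idea can be sketched in a few lines. This is a toy: real co-pilots rank candidates with learned embeddings rather than word overlap, and the snippet store below is invented purely for illustration.

```python
# Toy sketch of surfacing prior idioms for a natural language request.
# The store maps a plain-English description to a known code pattern;
# suggestions are ranked by naive word overlap with the request.
SNIPPET_STORE = {
    "read a csv file into rows": "rows = list(csv.reader(open(path)))",
    "sort a list of dicts by a key": "rows.sort(key=lambda r: r['name'])",
    "write a dict to a json file": "json.dump(data, open(path, 'w'))",
}


def suggest(request: str, store=SNIPPET_STORE, top_n=2):
    """Return the top_n stored idioms whose descriptions best match the request."""
    words = set(request.lower().split())
    scored = [
        (len(words & set(desc.split())), desc, code)
        for desc, code in store.items()
    ]
    scored.sort(key=lambda s: s[0], reverse=True)
    # Drop anything with no overlap at all.
    return [(desc, code) for score, desc, code in scored[:top_n] if score > 0]
```

Asking `suggest("sort a list of dicts by a key")` surfaces the stored sorting idiom first; the same retrieve-and-recombine shape, applied to clauses instead of code, is what makes the analogy with legal drafting so direct.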
 
In contract law, the objective is often to minimize unnecessary variability. While specific clauses may require unique language, drafting attorneys will default to tried and trusted language for particular types of clauses, such as force majeure clauses.
 
The software development community offers lessons here, because it, too, is a domain that manages large corpora of highly formalized documents. That, after all, is what programming is.

Five years from now: a profound change

The next five years will likely see the productivity tools and conversational user interfaces being crafted today by programmers, for programmers, finding their way into the drafting of formal documents in the world of legislation, regulation, and standards. However, a note of caution is warranted. We all tend to anthropomorphize these AI tools, won over by their confidence and assured tone. They do hallucinate; they do make mistakes. Furthermore, the potential for outdated information within LLMs adds another layer of risk: LLMs are trained on data with a fixed cutoff and so lag behind rapidly changing information in a way that, say, search engines do not. It is important to consider the AI future strategically, and with the appropriate structures in place, as we navigate the next stage of the evolution in human-computer interaction.
 
Are we on the cusp of a future where drafting attorneys ‘talk’ to their co-pilots and then dedicate more time to quality-checking the machine-generated legal language? The evolution of the user interface towards conversational approaches suggests a profound shift is now taking place in how legal professionals interact with and create legal documents.
About the author
Sean McGrath, co-founder and Head of Innovation at Propylon®, has 30+ years in IT, focusing on legal/regulatory publishing and compliance. He holds a first-class Computer Science degree from Trinity College Dublin. Sean served as an invited expert to the W3C special interest group that created the XML standard in 1996 and is the author of three books on markup languages.