Generative AI is transforming UX. I’m especially interested in its impact on information architecture — i.e., how experiences are structured so people can find and understand things. The key question is: How does AI change IA?

We can tackle this question from two directions. The first is AI’s impact on the process of designing IAs — i.e., AI as an assistant to a designer producing an IA. There’s a lot to explore here, and I’ve already written about my experiments in this direction. In this post, I will consider the other direction: the impact of AI on IA at the level where users experience it. That is, the app or website’s navigation, labeling, search, and metadata (AKA ‘invisible’) systems.

Of course, both directions ultimately affect the system’s UX. The difference is that in the first case, humans use AI to design IAs that subsequently remain relatively stable. In the second case, system structures and interfaces are dynamically controlled by AIs, which change them on the fly to provide experiences that are highly customized to individual users.

Think of the difference between a free-form generative AI tool like ChatGPT and an old-school IVR system, with its rigid top-down choice trees (e.g., “For sales, press one; for support, press two…”). Both are interfaces to a knowledge domain, and both can get you the information you need, but the former is much more flexible — even if that flexibility comes at the expense of clarity about the domain’s limits.

When considering the areas of UI most affected by information architecture, three come to mind: navigation, search, and labeling. IA also encompasses the system’s metadata structures, which are often ‘invisible’ to the user but significantly affect the experience. Let’s consider AI’s possible impact on all of them.

Navigation and search serve similar purposes: both allow users to ‘move’ to different parts of the system. Nav bars enable browsing, while search boxes allow users to look for things they expect to be there. That said, mechanisms and mental models for searching and browsing are different enough that the two are worth considering separately. Let’s start with search, since it’s where AI might have the most obvious impact.

AI holds significant promise for improving search. Traditional search systems work by matching user-entered queries with keywords stored in databases of indexed content. Results are then ranked for relevance and shown to users on results screens they can (often) sort and filter. AIs can improve every step of that process, from parsing user queries to finding related content and presenting relevant results.
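
To make that concrete, here’s a minimal sketch of the ‘finding related content’ step, assuming an embedding-based approach. The embed function below is a toy stand-in for a real embedding model (all names here are hypothetical); the point is that AI-era search can rank content by semantic similarity to the query rather than by exact keyword matches.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model; here, a simple
    # bag-of-words vector so the sketch runs without dependencies.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def semantic_search(query, documents, top_k=3):
    # Rank documents by similarity to the query instead of
    # requiring exact keyword matches.
    q = embed(query)
    scored = sorted(((cosine(q, embed(d)), d) for d in documents), reverse=True)
    return scored[:top_k]

docs = [
    "How to open a checking account",
    "Savings account interest rates",
    "Report a lost credit card",
]
print(semantic_search("I lost my card", docs))
```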

Moving beyond improving current search mechanisms, you can imagine systems that implement AI-driven feedback loops. That is, the system tweaks content indices based on user search terms and site traffic patterns. If there are lots of queries for a particular phrase and no good content mapped to it, the system could suggest that such content be created or create it itself.
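
As a sketch of that feedback loop (reusing the embed and cosine helpers from the previous example; the thresholds are made up), the system could flag frequent queries that no existing document matches well:

```python
from collections import Counter

def content_gaps(query_log, documents, min_count=25, min_score=0.2):
    # Flag frequent queries whose best-matching document scores
    # poorly: candidates for new or improved content.
    gaps = []
    for query, n in Counter(query_log).items():
        if n < min_count:
            continue
        best = max(cosine(embed(query), embed(d)) for d in documents)
        if best < min_score:
            gaps.append({"query": query, "count": n, "best_score": round(best, 2)})
    return sorted(gaps, key=lambda g: g["count"], reverse=True)
```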

But search users don’t necessarily want results that point to web pages. Instead, they want answers. Search results pages that offer a list of links to “pages” in the system will come to be seen as old-fashioned, at least for that use case. (Compare Google Search with Perplexity.)

Of course, there are use cases where you definitely want a list of results. For example, when searching a store, you want a list of products — not a synthesized “answer.” (Although a list of the ten best products that meet some arbitrary criterion might be useful in some cases.) But in many situations, a more ‘narrative’ answer may be better than a list of search results. Expect such search interfaces to proliferate in the next few years.

AI’s Impact on Navigation

While I can clearly see how AI will improve search — and soon — I’m less confident about its ability to usefully transform navigation bars and other ‘browseable’ UI nav elements. You can imagine AI-based systems that render novel nav bars on the fly, but I struggle to see the utility of such approaches for most use cases.

The problem here isn’t findability but context-setting. Enabling navigation is one purpose of nav bars, but it’s not their only purpose. Arguably just as important is their ability to establish the context and boundaries of the system. For example, consider a navigation bar that includes the following choices:

  • Checking
  • Savings
  • Credit Cards
  • Lending
  • Investing
  • Wealth Management

I don’t have to tell you what this website is; the mere presence of these terms in each other’s proximity tells you it’s a bank. Establishing a sense of context is an important part of making systems more usable: users know what to expect from a bank’s website, and they have good ideas about what they can find and do there.

Terms in global navigation bars must be carefully chosen for both understandability and cohesiveness, with the goal of establishing a clear sense of place. (Of course, you also want to make clear in which parts of the site users will find what they’re looking for.) A system that customizes terminology for each user might gain personalization at the expense of that cohesive sense of context.

Standardization is also an issue. I’ve heard people daydream about AI-driven UIs that adapt to their users’ needs and levels of competence. Imagine a device whose UI is completely customized to your idiosyncrasies. While it might serve your needs, it’d be difficult for another person to use. And, of course, by going with a one-off UI, you’d lose the ability to learn from documentation or video tutorials. I expect this would also make tech support harder, especially for novice users. (I loathe the prospect of having to help elderly relatives configure and debug their AI-driven UIs.)

You may sense I’m not keen on using AIs for dynamic nav generation. But I should clarify that my comments so far apply to global navigation structures — i.e., the type of nav elements that define what a system is, how it works, and how users understand it. I expect there are valid ways to use AI to generate local nav elements that users expect to be personalized. For example, an application may show different options depending on the user’s location, time of day, previous interactions, etc. Techniques for doing this sort of thing have been around for a long time; I expect generative AI can help do it better and faster.
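
Here’s a minimal sketch of what I mean, with entirely made-up rules and names: a ‘local’ nav section adapts to context signals like recent activity and time of day, while the global nav bar that defines the system stays fixed. Generative AI could refine such labels or their ordering, but the structure users rely on wouldn’t change.

```python
from datetime import datetime

def local_nav_items(user, now=None):
    # Rule-based personalization of a local nav section; the
    # global nav bar stays fixed.
    now = now or datetime.now()
    items = []
    if user.get("recent_feature"):
        items.append(f"Resume: {user['recent_feature']}")
    if 9 <= now.hour < 17:  # business hours
        items.append("Chat with support")
    else:
        items.append("Leave support a message")
    items.extend(user.get("pinned", []))
    return items

print(local_nav_items({"recent_feature": "Transfers", "pinned": ["Pay bills"]}))
```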

AI’s Impact on Labeling, Taxonomies, and Metadata

Let’s now turn to the other areas where information architecture has an important impact: using language to organize content throughout the system. This includes the labeling of UI elements and taxonomies for categorizing and organizing information. We know generative AI can help with these activities. I’ve written about my experiments in tagging blog posts and curating podcast transcripts.

But these are cases of AI in service to designing an information architecture — “AI as an assistant to the IA,” our first use case above. I bring it up here because this kind of categorization would be experienced differently if it happened on the fly — i.e., if information were reorganized in close to real time as you interacted with the system.

In some ways, it already does. For example, photo management apps like Apple Photos and Google Photos tag your photos with metadata based on what their models think is in the image. This is why you can search these apps for words like ‘beach’ and get back relevant pictures. Again, this is AI categorizing information, much as humans would have done in the past; it’s just happening faster and more efficiently.
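
A minimal sketch of that tagging flow, where classify stands in for whatever vision model the app uses (all names here are hypothetical):

```python
def tag_photo(path, classify, threshold=0.6):
    # Labels the model is confident about become searchable
    # metadata attached to the photo.
    tags = [label for label, conf in classify(path) if conf >= threshold]
    return {"path": path, "tags": tags}

# Toy stand-in for a vision model so the sketch runs end to end.
fake_model = lambda path: [("beach", 0.91), ("dog", 0.34)]
print(tag_photo("vacation/IMG_0042.jpg", fake_model))
# {'path': 'vacation/IMG_0042.jpg', 'tags': ['beach']}
```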

Many of the points I mentioned in the previous sections also apply here. You want some degree of consistency in the system’s language, especially if it’s meant to be a multi-user system. You want to create relatively stable contexts that clearly define the boundaries of the system. As a result, you don’t want these things changing too frequently or varying from user to user. That said, I expect it would be useful to have an AI agent tweak language and categories periodically to optimize findability and understandability.
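
To picture what such an agent might look like, here’s a hedged sketch with a hypothetical llm_suggest call and made-up thresholds: it periodically reviews engagement data and batches relabeling proposals for a human to approve, rather than renaming categories per user or on the fly.

```python
def review_taxonomy(categories, click_through, llm_suggest, floor=0.02):
    # Flag category labels with poor engagement and collect
    # LLM-suggested alternatives for human review; changes are
    # batched and moderated, never applied per user on the fly.
    proposals = {}
    for label in categories:
        if click_through.get(label, 0.0) < floor:
            proposals[label] = llm_suggest(label)
    return proposals
```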

But What About Chat?

The most common UI for interacting with AIs today is a free-form text input box: the user enters a prompt and gets answers back from the AI. LLMs have been around for several years, but it wasn’t until the fall of 2022, when OpenAI grafted a chat-based UI onto its LLM, that they gained mainstream traction. As a result, many people expect interactions with AIs to happen via chat.

Some people might argue that traditional UI constructs such as nav bars are less relevant now that we have ‘smart’ chat interfaces: why render all this UI chrome when you can just tell the system what you want? I strongly disagree. Chat-based UIs are useful for situations that lend themselves to conversational interactions, such as phone calls, but they’re inefficient for many other tasks, which are more effectively served by traditional GUIs. (This is true for sighted people, of course. I expect that chat-based UIs could be very powerful for people with visual disabilities.)

Chat-based UIs also require high language proficiency: users must know how to ask for things using words. This is obviously an impediment for people who can’t read or write well (or at all), but it holds even for spoken-word interfaces. Such ‘conversational’ interactions might feel natural in some scenarios, but in many others, users won’t know what to ask for or what things are called.

The chat box only provides affordances for general conversation, not for interacting with concepts in a particular domain. Considered as an API, the entirety of the English language is too open-ended. This leads to the same contextual issues I highlighted above, plus serious discoverability problems. Because of this, I expect chat-based interactions to be most useful in scenarios where users have a clear mental model of the subject domain and where conversation is the norm.

So What’s This Stuff Good For?

After reading this post, you might conclude that I’m pessimistic about the use of generative AI in information architecture. That’s not the case. I’m very excited about the possibilities. But admittedly, I’m more keen on AI as a tool to help information architects do our jobs more effectively than as a means to either restructure UIs on the fly or eliminate them altogether.

To recap: of the use cases above, I’m most optimistic about using AI to improve search. I expect search systems to become radically better in the near term; search is also an interaction where users already expect to enter free-form queries into a text field. I’m also interested in using AI to optimize and improve taxonomies and other metadata in close to real time. I’m less keen on using AI to restructure navigation systems on the fly.

My experiments so far have convinced me that generative AIs can be useful as design and production tools. But I’m more conservative when it comes to deploying unmoderated AI-driven experiences. There are ethical and possibly legal issues with serving up AI-driven UI structures. For one thing, LLMs can produce unpredictable and unreliable results, including possibly reflecting and perpetuating biases or regurgitating someone else’s intellectual property. There are also obvious privacy concerns about using personal data as input for AI-driven UIs.

So, I feel more comfortable using AI to augment us humans who design information architectures than having AIs design them directly. Admittedly, this position is based on speculation, since I haven’t experimented firsthand with developing dynamic AI-driven UIs. It also likely reflects a lack of imagination on my part. And of course, you could argue that, as an IA, I have a conflict of interest.

But for now, I’m more excited about the possibilities for AI to augment the work of information architects than to replace them outright. I’m convinced that once the current hype cycle settles down, these tools will find a place alongside others we use to craft great experiences for users. What about you? Have you thought about the impact of AI on IA in ways I haven’t considered here? If so, please let me know — I’m working on a presentation on this subject, and your input would help.