Event Insight: The Boundary Between Generative AI and Design―A Value-Focused Division of Roles for Successful AI Utilization

Insight
Feb 25, 2026
  • Automotive
  • Telecom
  • New Business Development
  • AI

The evolution of generative AI is prompting us to re-evaluate the “roles of people and AI” not only in design, the domain we take up in this Insight, but across many workplaces involved in DX and business transformation. How much should we leave up to AI? Where should the work of people begin? Answers to these questions do not arise from tool selection or technical theory alone.
In this Insight, we organize, based on project cases from ABeam Consulting, the scenarios where generative AI exhibits value and the scenarios where human insight is essential along the axes of “speed” and “depth.” We present a division of roles between people and AI and an approach to demarcation that maximize value across all design-first operations, for an era in which the existence of AI is a given.

(This Insight is based on our presentation given as part of “AI de Kawaru UX Dezain” (“How AI is Changing Design”), a side event of the Kyoto Creative Collection, presented by Vivivit, Inc. on August 21, 2025.)

About the Author

  • Makoto Takimoto

    Senior Manager
  • Nana Hamaguchi

    Manager

1. The “Role of the Designer” After the Spread of AI

The rapid evolution of generative AI has thrown design workplaces into upheaval.
Work that once took hours, such as generating images, writing copy, creating UI components and summarizing survey logs, can now be automated in a short span of time. AI is steadily accelerating parts of the design process while also expanding the areas in which it supplements human abilities.

Amid this change, many designers face a common question: how much should they leave up to AI, and where does the work of designers begin?

This is an essential question that presentations of AI techniques and tools rarely touch on.

At ABeam Consulting, we drive projects through an end-to-end process that ranges from formulating business strategy to performing research, designing, implementing and evaluating services and UX, and pursuing growth. The aim of this process is to produce optimal solutions that combine feasibility and growth potential by repeating the strategy, research, design and growth phases and iteratively evaluating them over short cycles from the perspectives of design, business and technology (see Figure 1).

Figure 1. The Process for Creating New Services at ABeam Consulting

Within this process, there are both cases where the multidirectional use of generative AI significantly increased the speed with which projects advanced and cases where there was almost no opportunity to use AI despite following the same process. Looking at the differences between the two, we see fundamental differences in “outlook” between AI and designers that are not limited to any particular domain. This is not so much a matter of how the tool is used as it is deeply connected to the nature of the value that design should provide.

In this Insight, we look at

  • the scenarios where AI delivers value,
  • the scenarios where designers are essential, and
  • how such outlooks are beginning to shift due to the evolution of AI,

from the perspectives of “speed” and “depth,” while discussing actual ABeam projects.

What we want to share is a perspective that asks how design can create value in an era where AI is commonplace, rather than simply asking whether AI will replace the work of designers.

2. Design Domains Where “Speed” Delivers Value

Projects requiring “speed” are one scenario where generative AI can deliver significant impact. A typical example of such a project on which ABeam Consulting worked is the design of a dashboard for visualizing financial assets in operational systems.

The aim of this project was to implement a dashboard that allowed anyone, regardless of their level of financial literacy, to get an intuitive understanding of the trends in a financial asset. Given such a broad range of target users, with a wide array of behavioral patterns and perspectives, we needed to deliver an ease of use common to all of them.

To that end, in the initial steps, we interviewed large numbers of users and collected information about how often they checked their assets, what points they checked and their awareness of risk. What mattered here was to capture a large number of opinions “quickly” and thus understand the overall picture. The core of the work was therefore gathering wide but shallow information and getting to grips with the general framework of user behavior patterns.

In these situations, generative AI plays a powerful role (see Figure 2).

  • Drafting questions
  • Transcribing interviews
  • Summarizing logs
  • Classifying a vast array of opinions
  • Producing micro-copy ideas in initial UI design

These tasks, which would be time-consuming if done manually, can be processed quickly and with a certain level of quality using AI. In particular, if tens of interviews are conducted in succession, the speed at which they can be summarized and analyzed has a massive impact on subsequent design work overall.
Scenarios like the financial asset dashboard, where you have to collate the opinions of a wide array of users in a short span of time and extract common patterns from among them, are examples of projects where AI’s “processing speed” is valuable as is. As a result, we were able to incorporate the views of many users quickly and advance through the initial design considerations at pace.

Thus, in processes where you want to “gather widely and capture the broad strokes,” generative AI fits extremely well.
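To make the “gather widely and capture the broad strokes” step concrete, the sketch below shows one way a designer might batch interview excerpts into a single classification prompt for a generative AI model. This is a hypothetical illustration, not part of any ABeam tool: the tag list, function name and prompt wording are all assumptions.

```python
# Hypothetical sketch: batching interview excerpts into one
# classification prompt for a generative AI model. The tags mirror
# the survey themes mentioned above (checking frequency, points
# checked, risk awareness); everything else is illustrative.

CLASSIFICATION_TAGS = [
    "checking frequency",  # how often users look at their assets
    "points checked",      # which figures or trends they look at
    "risk awareness",      # how they think about losses and risk
]

def build_classification_prompt(excerpts, tags=CLASSIFICATION_TAGS):
    """Assemble a single prompt asking the model to tag each excerpt."""
    lines = [
        "Classify each interview excerpt with one of these tags: "
        + ", ".join(tags) + ".",
        "Answer as '<number>: <tag>' per line.",
        "",
    ]
    # Number each excerpt so the model's answers can be matched back.
    for i, text in enumerate(excerpts, start=1):
        lines.append(f"{i}. {text}")
    return "\n".join(lines)

prompt = build_classification_prompt([
    "I glance at my balance every morning before work.",
    "I mostly watch whether the total is up or down over the month.",
])
```

The resulting prompt string would then be sent to whichever model the project uses; batching tens of interviews this way is what makes the speed gain described above possible.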

Figure 2. The UX Design Workflow When Thinking Through a Financial Asset Dashboard and Scenarios Where AI Contributes

3. Design Domains Where “Depth” Delivers Value

While there are scenarios where we can make powerful use of generative AI, there are also cases where its use is limited.

A project in which we planned a service that employs AI on smartphones to increase employee productivity was a case in point (see Figure 3). *Note, however, that this project was conducted in 2024. At the time, generative AI’s ability to read intentions and fill in context was relatively underdeveloped compared to today.

This project focused on the behavior of sales employees searching for hints for upcoming work on their smartphones while they were in transit. In doing so, we needed to get a deep understanding of what information they were checking, why they acted in the order they did and what expectations and anxieties they had.

Key to this was interpreting the “intentions” behind the words and actions of users.

Specifically, the focus was on the process of reading into implicit information coming from interview participants such as:

  • why they chose to behave as they did,
  • what state of mind lay behind this,
  • what subconscious demands and values were at play, and
  • what new experience values could be derived from the above.

In such scenarios, “superficial processing” in the form of organizing interview logs and extracting keywords is not enough.

While generative AI as of 2024 was good at summarizing and classifying text, it faced issues in accurately reading the unspoken nuances or psychologies behind what users said, so the core of analysis relied on human insight.

The project ultimately led to the introduction of the service concept of a “smartphone app that lets users ‘encounter’ new information.” This was, however, the product of non-linear ideas, grounded in behavioral insights and psychological background, that did not follow directly from surface-level needs.

What this project revealed was not a limitation of generative AI, but the fundamental difference in the quality of thinking required for “broad understanding” versus “deep insight” work.

While human observation and insight were core to such “depth” work as of 2024, at present, the evolution of generative AI means that its potential for application to depth domains is steadily expanding. However, deciding how to decipher the “intentions” of users and which interpretations will deliver value is something that needs to be done carefully depending on the context and aims of the project.

Figure 3. The UX Design Workflow When Creating an Operational Streamlining AI Service and Scenarios Where AI Contributes

4. The Boundaries Between Generative AI and Design

The two case studies we have looked at so far both proceeded through the same design process adopted by ABeam Consulting, going, in turn, from user survey to defining challenges to UX/service design to testing. Despite this, there was a clear divide between the project where AI was used effectively to a great extent and the one where it was largely not employed.

Organizing the points of difference between the two brings into relief the boundaries between AI and designers that lie along the axes of “speed” and “depth.”

Scenarios Favoring “Speed” Where AI Shows Its Greatest Strengths

In the case of the financial asset dashboard, we needed to organize the results of a large number of interviews in a short span of time and get an understanding of the common patterns among users. Generative AI can deliver significant impact in the context of this sort of process that treats information “broadly and quickly.”

AI excels at tasks such as

  • processing large volumes of data,
  • extracting surface-level trends,
  • summarizing text,
  • generating potential micro-copy, and
  • work involving finding similarities among patterns,

where speed and comprehensiveness directly contribute to value.

In such domains, utilizing AI allows designers to make progress in understanding the information that underlies design in a short span of time.

Designers Are Central in Scenarios Where “Depth” Is Needed

In the 2024 service design project, by contrast, we needed to get insight into the motivations and values that lay behind the words and actions of users. The scope of application of generative AI was more limited in this sort of process where we were “going for depth” and “getting to grips with user intentions.”

Processes that require deeper understanding call for non-linear, context-dependent thinking, such as:

  • inferring implicit sentiments,
  • relating behavior and psychology,
  • interpreting the context behind user behavior,
  • making judgments about how interpretations connect to value, and
  • “making the imaginative leap” to creating new value.

The Boundaries Between AI and Design Are “Judgments Aimed at Maximizing Value,” Not “Limitations of AI”

Comparing the two processes, we see that the boundaries between AI and design are not technical limitations, but, rather, something that is determined according to the nature of the value needed for the project.

  • When value comes from speed -> AI leads
  • When value comes from depth -> Designers lead

This is not so much a hard and fast division of roles as an approach in which the best lead differs depending on the value required in the project. What is important is not deciding the boundaries based on the limits of AI’s capabilities, but deciding based on value-focused judgments on questions such as:

  • which phase of the project creates value,
  • which processes require speed, and
  • which stages need deep understanding.

In other words, the boundaries between the two are not a matter of “where the things AI cannot do begin,” but, rather, lines that shift continuously depending on “what can take the lead to maximize value in each phase.”

Judging these boundaries incorrectly will cause a project to simply stall out.

For example, if processes that should be advanced “broadly and quickly,” as AI can, are handled only by people, then the surveying and initial considerations will take too long and the speed of decision making will suffer. On the other hand, mechanically processing the “deep understanding” and “value judgments” that cannot be delegated to AI, that is, the judgments that decipher the intentions of users, makes your user insight shallower, potentially leading you to design experiences based only on surface-level interpretations of user needs.

These boundaries are also more than just simple divisions of roles. If set incorrectly, they can invite the risks of both degraded project speed and loss of depth.

Precisely for this reason, properly understanding these boundaries and making value-focused judgments is more important than ever before.

5. Constantly Shifting Boundaries: The Evolution of AI Towards Depth

The boundaries between “speed” and “depth” that we examined in the previous section are not fixed. Rather, by their nature, they shift constantly with the evolution of generative AI.

In recent years, AI has gone beyond text and image generation, breaking into higher level domains such as understanding context, inferring user attributes and deriving simple insights. This shows that room is opening up for AI to contribute in a supporting role in parts of domains where “depth” is called for.

AI Accelerates the Approach Towards Depth: Lightning Insight

An example of this at ABeam Consulting is the AI tool “Lightning Insight” that we developed in-house and offer to clients (see Figure 4). By setting an AI to have the attributes of a particular market segment and having it answer questions in that role, Lightning Insight serves as a framework for quickly grasping user trends.

For example, by setting an AI to have the attributes “26-year-old male, lives in Tokyo, mainly does office work, no experience purchasing a car,” the AI can take on that role and answer in ways that simulate the values and thought patterns such a consumer would have. While this is not a “real user opinion,” it has value in helping form initial hypotheses quickly, serving as an entry point for pursuing further depth.
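The persona conditioning described above can be sketched as a simple prompt-construction step. The sketch below is an assumption-laden illustration, not Lightning Insight’s actual implementation (which is not public): the attribute fields, function name and chat-style message format are all hypothetical.

```python
# Hypothetical sketch of persona conditioning in the style described
# for Lightning Insight. The attribute fields and message format are
# assumptions; the tool's actual implementation is not public.

def build_persona_messages(attributes, question):
    """Build a chat-style message list asking the model to answer
    in the role defined by the given persona attributes."""
    persona = ", ".join(f"{k}: {v}" for k, v in attributes.items())
    system = (
        "You are role-playing a consumer with these attributes: "
        + persona + ". Answer interview questions in that role, "
        "reflecting the values and thought patterns such a person "
        "would plausibly have."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Persona drawn from the example above; the question is illustrative.
messages = build_persona_messages(
    {"age": "26", "gender": "male", "location": "Tokyo",
     "occupation": "office work", "car ownership": "never purchased"},
    "What would make you consider buying your first car?",
)
```

Sending such a message list to a chat-capable model yields the kind of simulated answer used to form initial hypotheses; the answers remain simulations, not real user opinions.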

This represents part of the work traditionally performed by a designer being supplemented by AI.

Figure 4. An Illustration of How to Use Lightning Insight

Positioning AI as an Accelerator of the Initial Processes of the HCD Cycle

In Human-Centered Design (HCD), the first phase of the process runs from “understanding and specifying the context of use” to “specifying the user requirements.” This stage requires designers to understand the context surrounding users, gather a broad range of information and formulate hypotheses about what kinds of needs might arise.

Lightning Insight is genuinely helpful in this phase. By allowing designers to get a simulated understanding of the usage situation and derive initial hypotheses about user needs in a short span of time, it fulfills the role of quickly organizing the starting point of a project (see Figure 5).

The role that AI takes on here is not that of making deep design judgments, such as deciphering user intentions or making non-linear leaps of ideation. Rather, it does the preparatory work that allows designers to pursue further depth.

Figure 5. The HCD Cycle and the Value of Lightning Insight

AI That Expands the “Entry Points” to Depth, Rather Than Replacing Depth Itself

Generative AI cannot fully replace the processes of “interpretation” and “insight.” What matters is that it can accelerate processes for getting to that depth.

By creating changes such as the following, AI lets designers devote more of their time and energy to the judgment and interpretation work that should be their central focus:

  • quickly expanding the range of hypotheses,
  • enabling the multi-dimensional simulation of user profiles, and
  • quickly putting in place the “entry points” for beginning deeper analysis.

The initial hypotheses formulated using AI are no more than a support for reaching deeper analysis. Humans thus continue to play the key role in judging what level of depth will deliver value and which interpretation to adopt, as these vary with the contexts and aims of projects. AI, meanwhile, is positioned as a “co-creative partner” on the way to high-quality interpretation.

Boundaries Will Continue to Shift

Given these conditions, the boundaries between AI and designers can be thought of as having the property of being constantly in flux, depending on the aims of a given project and the value being pursued, rather than as something indicating the technical limits of AI.

The more AI evolves, the more its role will expand. Those changes do not replace the role of the designer, however; rather, they progress towards creating an environment in which designers can better concentrate their efforts on deeper value judgment and intention design.

6. Towards an Era of Designing the Boundaries Themselves

As generative AI permeates the design workplace, the question of, “How much should we leave up to AI, and where should the work of designers begin?” is a theme that will appear again and again. This Insight has attempted to outline these boundaries from the perspectives of “speed” and “depth,” taking the domain of design as an example. But the fact that the boundaries between existing work and AI are in flux is not something that is limited to the world of design.

Presently, the same question is beginning to arise across a wide array of industries and operational spheres.

Across all manner of workplaces, from corporate back offices to sales teams to systems developers to creatives to call centers to research and development labs, people are figuring out “how much to leave up to AI.” With that said, just because AI technically can do something does not mean that AI should replace everything in that field.

What is needed is not to figure out technical limits, but to discern where the line is that maximizes value. In design, these lines fall along the axes of “speed” and “depth.” In other industries, different elements will define these boundaries.

  • Call center operations: “Processing volume” and “emotional understanding”
    Even with AI doing the first-line handling of large volumes of inquiries, a demarcation has arisen between this and tasks handled by humans such as emotional care and judgments about exceptions.
  • Sales: “Streamlining information gathering” and “relationship building quality”
    While AI can quickly organize customer information and draft a proposal, building relationships of trust with clients remains an area that should be handled by people.
  • Back office operations: “Accuracy and reproducibility” and “exception handling and complexity”
    A division has arisen in which AI can replace people for rote processing, but for exceptions or in situations where contextual judgment is needed, humans are responsible.

Thus, while the standards for drawing these boundaries differ by type of work, what they have in common is “making value-focused judgments” regardless of industry or process. At the same time, those boundaries are continuously shifting as technology evolves.

Generative AI is evolving quickly, and there are a number of situations where AI has now replaced people in processes that, until a few months ago, were centered on humans. In other words, these boundaries are “dynamic lines” that are constantly being redrawn in response to technological evolution and changes in the business environment. The areas that AI can take charge of will continue to grow, and the areas that people take charge of will shift to “higher-level value judgments.”

These boundaries are not fixed demarcations between the roles of AI and humans, but, rather, targets that should be constantly redesigned while drilling down to where the source of value lies.

So, where should companies start in practice?

One starting point for companies could be to take stock of their own AI utilization projects in terms of “areas where speed delivers value” and “areas where depth delivers value,” and then draw a clear line between processes where AI or humans should play the leading role. Companies need to treat these lines not as one-time decisions, but as something to revise constantly in response to the evolution of technology and changes in the business environment.

In an era in which generative AI has become commonplace, our task is not to compare the strengths and weaknesses of AI and humans. Rather, our new task is to decide, based on project aims, the value being pursued and the characteristics of the work, who should take charge of what: to design the boundaries between AI and human work itself.

AI accelerates “speed” and humans generate “depth.” By combining both as appropriate from a value-focused perspective, all business, including design, will expand towards new horizons in the future.

ABeam Consulting has designed roles for AI and humans on the ground across everything from strategy design to UX design, implementation and operation. AI is not something “you use because you can.” The insight gained from using AI while figuring out what actors produce value in which processes is the input for practical decision making aimed at dynamically redesigning these boundaries. Going forward, by supporting our clients, we hope to pioneer new ways of working through cooperation between AI and humans and contribute to creating value in companies and in society.

