In the verification study, interviews were conducted with staff in each section to gather insights into their expectations for generative AI and the work-related challenges they face. By listening carefully to staff, the team identified use cases grounded in frontline operations. Based on these findings, tasks were classified into nine categories—inquiry handling, drafting, summarization, translation, issue identification, sounding-board discussions, code generation, case collection, and data analysis—which were hypothesized to be particularly well suited to generative AI.
“In the course of verifying the introduction of generative AI, one of our major roles was to accurately organize the feedback received from METI staff so that Mr. Osada and Mr. Ishii could make well-informed decisions. In classifying the work, we conducted detailed interviews on the tasks actually being performed and then gradually abstracted them into broader categories. It was a painstaking process, but we worked through the prerequisites for the necessary decision-making toward the implementation of generative AI, one by one,” recalls Takashi Kuwabara, Manager, Artificial Intelligence Leap Sector, Digital Technology Business Unit, ABeam Consulting.
Following these efforts, METI began developing a security-focused system to ensure that staff could use generative AI with confidence. At that time, in 2023, the broader adoption of generative AI had only just begun, and there were no clear de facto standards or established reference cases. Accordingly, 150 participating staff members were asked to use generative AI in their work; their usage was logged, and their evaluations were collected through questionnaires. Responding directly to staff feedback, METI then moved forward with building a generative AI testing environment optimized for operational needs and continued validating relevant use cases.
“As generative AI became a major focus of attention, many staff members had high expectations that it would enable them to perform a wide range of tasks with considerable flexibility. While carefully balancing expectations with practical realities, we assessed which tasks were suitable for the introduction of generative AI and which were not, and moved forward with preparations for implementation while ensuring staff understanding,” recalls Mr. Ishii.
As METI staff are caught up in their day-to-day responsibilities, it is not easy for them to step back and objectively review workflows. In this context, ABeam, acting as a consultant, supported the structuring of these workflows and, based on the findings, examined and proposed approaches to improving operational efficiency through the use of generative AI.
“Rather than approaching the challenges faced by METI simply as a contractor, ABeam engaged with the ministry from the same perspective as METI’s staff, which allowed the verification process to proceed smoothly,” says Mr. Osada. During system implementation, ABeam rapidly developed and delivered prototypes, incorporated feedback from users participating in the verification process, and repeatedly refined the system by introducing new ideas. As a result, the system was implemented to a very high standard.