Modular and Adaptable Output Decomposition in Large Language Models

A framework that decomposes large language model outputs into modular, adaptable components to improve clarity, efficiency, and collaboration in AI-assisted work.

Link:

Background:
Large Language Models (LLMs) are powerful tools, but their outputs are often delivered as large, monolithic text blocks. This makes it difficult for users, especially researchers and developers, to understand, refine, or adapt the model's responses. These rigid outputs hinder critical tasks like verifying information, making iterative improvements, and adapting responses to changing needs.

This project tackles the need for LLMs to produce more modular and adaptable outputs by breaking down complex answers into manageable, structured components. The goal is to help users better understand, edit, and reuse model outputs.

The LLM Landscape Report by 99P Labs provides an in-depth analysis of large language models (LLMs), highlighting industry trends, cutting-edge research, and practical applications. The report examines the growing role of LLMs in business, the impact of Retrieval-Augmented Generation (RAG), small model experimentation, and best practices for optimizing LLM performance. Additionally, it explores 99P Labs’ contributions to AI innovation, including structured problem-solving techniques and academic collaborations that drive real-world applications.

Methods:
The team is developing a framework that guides LLMs to generate outputs as a collection of clear, editable modules. This work brings together several methods, including prompting strategies that encourage the model to structure its responses into logical sections, dependency management tools that track relationships between modules, and a browser-based interface that allows users to interact with and refine individual parts of the output.
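To make the prompting strategy concrete, here is a minimal sketch of how a model could be asked to return its answer as structured modules and how that response might be parsed. The module schema (`id`, `title`, `depends_on`, `body`) and all names here are illustrative assumptions, not the project's actual format:

```python
# Hypothetical sketch: prompting for modular output and parsing the result.
# The module schema below is an assumed, illustrative format.
import json
from dataclasses import dataclass, field

PROMPT_TEMPLATE = (
    "Answer the question below as a JSON list of modules. Each module must "
    "have: 'id', 'title', 'depends_on' (a list of module ids), and 'body'.\n\n"
    "Question: {question}"
)

@dataclass
class Module:
    id: str
    title: str
    body: str
    depends_on: list = field(default_factory=list)

def parse_modules(llm_response: str) -> dict:
    """Parse a JSON module list from the model into Module objects keyed by id."""
    modules = {}
    for raw in json.loads(llm_response):
        modules[raw["id"]] = Module(
            id=raw["id"],
            title=raw["title"],
            body=raw["body"],
            depends_on=raw.get("depends_on", []),
        )
    return modules

# A stand-in for a real model response, to show the round trip.
sample_response = json.dumps([
    {"id": "overview", "title": "Overview", "depends_on": [], "body": "..."},
    {"id": "details", "title": "Details", "depends_on": ["overview"], "body": "..."},
])

modules = parse_modules(sample_response)
print(sorted(modules))  # → ['details', 'overview']
```

Keeping the `depends_on` field in each module is what lets a dependency manager track which parts of the answer rely on which others.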

The framework also integrates system components such as context isolation, which supports the generation of focused modules, persistent memory, which manages shared information across modules, and collaborative tools that enable refinement through user feedback.
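One way to picture context isolation alongside persistent memory is the following sketch: each module is generated from only its direct dependencies plus a shared store, rather than from the entire conversation. The class and function names (`SharedMemory`, `build_context`) are assumptions for illustration:

```python
# Illustrative sketch of context isolation with persistent shared memory.
# All names here are hypothetical, not the project's actual components.
class SharedMemory:
    """Key-value store for facts that every module may read or write."""
    def __init__(self):
        self._facts = {}

    def remember(self, key, value):
        self._facts[key] = value

    def items(self):
        return self._facts.items()

def build_context(module_id, dependencies, memory):
    """Assemble the isolated prompt context for one module: only its direct
    dependencies and the shared memory, not the whole prior output."""
    lines = [f"Generating module: {module_id}"]
    for dep_id, dep_text in dependencies.items():
        lines.append(f"[depends on {dep_id}] {dep_text}")
    for key, value in memory.items():
        lines.append(f"[shared] {key}: {value}")
    return "\n".join(lines)

memory = SharedMemory()
memory.remember("audience", "researchers")
ctx = build_context("details", {"overview": "High-level summary."}, memory)
```

Because each module sees only this narrow context, edits to one section cannot silently contaminate unrelated ones, while the shared memory keeps terminology and assumptions consistent across modules.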

Initial prototypes feature a modular output decomposition system, node-level editing capabilities, and the option to selectively rerun specific sections. Early progress has produced wireframes, prompt templates, and initial rounds of user feedback.
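Selective rerun follows naturally from the dependency information: when one module is edited, only that module and its transitive dependents need regeneration. The sketch below shows one way to compute that set; the function name and graph format are illustrative assumptions:

```python
# Hedged sketch of selective rerun: editing a module invalidates it and
# everything downstream of it. `depends_on` maps each module id to the
# ids it depends on; names are hypothetical.
def modules_to_rerun(edited, depends_on):
    """Return the set of module ids invalidated by editing `edited`."""
    # Invert the map: parent id -> set of modules that depend on it.
    dependents = {}
    for mod, deps in depends_on.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(mod)
    # Walk downstream from the edited module.
    stale, stack = {edited}, [edited]
    while stack:
        for child in dependents.get(stack.pop(), ()):
            if child not in stale:
                stale.add(child)
                stack.append(child)
    return stale

graph = {
    "overview": [],
    "method": ["overview"],
    "results": ["method"],
    "summary": ["overview"],
}
print(sorted(modules_to_rerun("method", graph)))  # → ['method', 'results']
```

Note that editing "method" leaves "summary" untouched, since it depends only on "overview"; this is exactly the efficiency gain the prototype's selective-rerun option aims for.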

Findings:
Although still in development as of July 2025, early progress shows promise in transforming how LLM outputs are handled. Modular outputs provide greater clarity, making responses easier to read, understand, and debug. They also improve efficiency, since users can refine specific sections without reworking the entire response, and flexibility, since outputs can be adapted to new requirements simply by updating individual modules. The modular approach also supports collaboration by making it easier for humans and AI to work together, and strengthens reusability through better versioning, tracking, and control of AI outputs.

This work lays the foundation for more maintainable and human-aligned AI systems, which is particularly valuable in research environments where transparency and precision are essential.

Stay Connected

Follow our journey on Medium and LinkedIn.