Module 7: Extension and Synthesis
Beyond Process Heat and the Closing Argument
§7.1 Multi-Site and Corporate-Scale Extension
The Edendale proof of concept evaluates one facility’s pathway decision. The same architecture, without modification to any of its fundamental components, extends to evaluate the pathway decisions of every facility in a large industrial company’s portfolio simultaneously. The extension does not require building a new system; it requires only additional Facility Modules that share the regional infrastructure and resource modules already present in the architecture.
Consider Fonterra’s South Island processing portfolio. The company operates facilities at Edendale, Clandeboye, Darfield, Canpac, and Waitoa, each connected to the national transmission system at a different GXP, each drawing on regional biomass resources from different supply clusters, and each facing the same coal phase-out deadline under the same corporate sustainability commitment. From the framework’s perspective, these facilities are five Facility Modules connected to a shared Regional Module that represents the South Island transmission system. Each Facility Module produces its own IncrementalElectricityPack, and the Regional Module evaluates all five simultaneously against the regional grid’s hosting capacity, producing a set of SignalsPacks that reflect the aggregate demand pressure on the transmission system rather than the site-isolated demand of any one facility.
This aggregate assessment reveals a class of consequence invisible from any individual facility analysis. When five large electrification pathways are evaluated simultaneously, the aggregate incremental electricity demand may exceed the South Island grid’s hosting capacity in futures where no individual facility’s demand would trigger an exceedance on its own. The 23-of-64 finding at Edendale would become, in a portfolio assessment, a finding about the proportion of futures in which the aggregate electrification demand requires transmission reinforcement at the regional scale rather than the GXP scale. That finding changes the nature of the investment decision from a site-level pathway choice to a portfolio staging question: which facilities should electrify first, and in what sequence, to manage the aggregate grid demand and to capture the available grid headroom before competing demand fills it?
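The aggregation step behind this finding can be sketched in a few lines. All field names and figures below are illustrative stand-ins, not the framework's published artefact schemas:

```python
# Illustrative sketch: summing per-facility demand increments against a
# regional hosting-capacity limit for one future. The dicts here stand in
# for IncrementalElectricityPack and SignalsPack content; real artefacts
# carry schemas, provenance, and time resolution this sketch omits.

def aggregate_exceedance(incremental_packs, hosting_capacity_mw):
    """Compare the portfolio's aggregate peak increment against the
    regional hosting capacity; return a SignalsPack-like summary."""
    aggregate_peak = sum(p["peak_increment_mw"] for p in incremental_packs)
    return {
        "aggregate_peak_mw": aggregate_peak,
        "headroom_mw": hosting_capacity_mw - aggregate_peak,
        "exceedance": aggregate_peak > hosting_capacity_mw,
    }

# Five facilities, none of which exceeds a 40 MW limit on its own,
# but whose aggregate (43 MW) does.
packs = [{"facility": name, "peak_increment_mw": mw}
         for name, mw in [("A", 12.0), ("B", 9.5), ("C", 8.0),
                          ("D", 7.5), ("E", 6.0)]]
signals = aggregate_exceedance(packs, hosting_capacity_mw=40.0)
# signals["exceedance"] is True although every individual pack is below 40 MW
```

The point of the sketch is the class of consequence, not the arithmetic: the exceedance flag exists only at the aggregation level, so no single-facility analysis can produce it.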
The portfolio staging question is answered by the DMDU orchestration layer operating across the multi-site ensemble. Rather than evaluating two pathway alternatives for one facility against 64 futures, the orchestration layer evaluates a combinatorial space of staging sequences for five facilities against the same 64 futures, or a representative sample of the staging space if full enumeration is computationally prohibitive. The regret and robustness metrics are computed over the portfolio cost and emissions outcomes, and the scenario discovery analysis identifies which combinations of grid headroom and demand growth futures most strongly determine the preferred staging sequence.
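The shape of that staging search can be illustrated with a deliberately toy model. The cost function below is an invented stand-in for the real Facility and Regional Modules, and all parameter names are hypothetical; only the enumerate-score-minimise structure reflects the orchestration logic described above:

```python
# Toy sketch of the staging search: enumerate electrification orderings
# for a small portfolio, score each ordering against every future, and
# select the sequence with minimum worst-case regret.
from itertools import permutations

def portfolio_cost(sequence, future):
    """Toy cost model: once cumulative demand passes the future's grid
    headroom, an upgrade premium applies; waiting adds a delay cost."""
    cost, cumulative = 0.0, 0.0
    for step, facility in enumerate(sequence):
        cumulative += future["demand"][facility]
        premium = future["upgrade_premium"] if cumulative > future["headroom"] else 0.0
        cost += future["base_cost"][facility] + premium + step * future["delay_cost"]
    return cost

def min_max_regret(facilities, futures):
    sequences = list(permutations(facilities))
    costs = {s: [portfolio_cost(s, f) for f in futures] for s in sequences}
    best = [min(costs[s][i] for s in sequences) for i in range(len(futures))]
    regret = {s: max(c - b for c, b in zip(costs[s], best)) for s in sequences}
    return min(regret, key=regret.get)

futures = [
    {"demand": {"A": 10, "B": 8, "C": 6}, "headroom": 15, "upgrade_premium": 5.0,
     "base_cost": {"A": 3.0, "B": 2.0, "C": 1.0}, "delay_cost": 0.5},
    {"demand": {"A": 10, "B": 8, "C": 6}, "headroom": 25, "upgrade_premium": 8.0,
     "base_cost": {"A": 3.0, "B": 2.0, "C": 1.0}, "delay_cost": 0.5},
]
best_sequence = min_max_regret(["A", "B", "C"], futures)
```

For five facilities the full space is only 120 orderings; it is the ensemble size and module fidelity, not the permutation count, that motivate sampling the staging space.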
The data translation requirement for a corporate-scale deployment is addressed by the framework’s data intake architecture. A facility’s existing energy management system records its operational demand history as hourly electricity and fuel consumption data. This data translates directly into a calibrated DemandPack, replacing the synthetic profiles of the proof of concept with measured operational records. The capital cost estimates from existing engineering assessments translate into SiteConfigArtefact parameters. The financial projections from existing corporate planning models translate into FutureArtefact uncertain driver calibrations. None of these translations requires the organisation to discard its existing analytical work; they require that work to be formatted as governed artefacts.
The operational dispatch extension connects the same architecture to day-ahead and week-ahead scheduling optimisation. Once the strategic pathway comparison has been resolved and a technology commitment has been made, the Facility Module can be reconfigured to support operational dispatch rather than pathway comparison. The same IncrementalElectricityPack and SignalsPack exchange that the strategic comparison uses becomes the interface through which real-time tariff signals from the distribution network inform day-ahead dispatch decisions. The strategic and operational layers share the same analytical backbone, and the accumulated operational dispatch record progressively enriches the DemandPack calibration for the next round of strategic reassessment.
The commercial advantage of this architecture is specific and auditable. A corporate sustainability officer who can present a portfolio transition strategy evaluated against system-level interaction effects, using a governed, traceable analytical chain, occupies a qualitatively different position in negotiations with EECA, Transpower, and the Electricity Authority than one relying on a collection of independent site assessments. The interaction effects are not merely acknowledged qualitatively; they are computed, attributed to specific cost components, and evaluated across plausible futures.
§7.2 Sector-Scale and National-Scale Extension
The multi-site extension of §7.1 encompasses all facilities within one company. The sector-scale extension encompasses all major industrial heat users in a region, regardless of ownership. In the Southland context, the relevant actors are Fonterra’s dairy processing facilities, the meat processing sector including Alliance Group and Silver Fern Farms, the wood products sector including sawmills and fibre processing operations, and any other significant industrial heat users in the region. Each of these actors faces a version of the same transition decision: replace coal-fired heat with electrification, biomass, or some combination, under the same regional grid constraints, the same biomass resource competition, and the same carbon price trajectory.
The sector-scale deployment adds multiple company-affiliated groups of Facility Modules to the shared Regional Module. The aggregate electricity demand assessment now represents the sum of all sectors’ electrification commitments against the regional grid’s hosting capacity. The biomass resource assessment represents the competition among all sectors for the same regional supply base. The interactions that are invisible within any single company’s analysis become analytically visible when the sector as a whole is represented simultaneously.
The sector-scale question of most direct policy relevance is: what combination of electrification and biomass commitments across all regional industrial heat users produces the most robust regional decarbonisation pathway, where robustness means performing acceptably across the widest range of plausible grid, resource, and policy futures? This question cannot be answered from individual site or company analyses. It requires the sector-level representation that the framework’s multi-module architecture provides. The answer informs regional infrastructure planning: Transpower and PowerNet can use it to prioritise transmission and distribution investment, and EECA can use it to target GIDI co-investment where it most reduces systemic risk rather than merely supporting the most commercially advanced applicants.
At the national scale, the framework couples to TIMES-NZ through the FutureArtefact ensemble calibration layer. TIMES-NZ scenario outputs, including the electricity price trajectories, sectoral demand projections, and national carbon price pathways under different policy settings, are translated into the boundary conditions for the ensemble’s uncertain driver ranges. This one-directional coupling ensures that the pathway comparisons conducted at the site and regional levels are consistent with plausible national energy trajectories, rather than relying on independently constructed assumptions about macroeconomic and policy futures.
The vision beyond one-directional coupling is the Gauss-Seidel iterative coupling architecture: an exchange in which the framework’s site-level and regional-level findings are summarised as aggregate demand scenarios and passed back to TIMES-NZ as industrial electrification demand projections, TIMES-NZ is re-run under those projections to produce updated national electricity price and carbon price trajectories, and those updated trajectories are used to recalibrate the FutureArtefact ensemble for the next iteration of the regional-level analysis. The coupling iterates until the site-level, regional-level, and national-level projections converge to a mutually consistent set of pathway recommendations.
This iterative coupling is technically feasible with existing tools. PyPSA-NZ, the national electricity system model built on the PyPSA framework, provides a natural implementation environment for the electricity dimension. TIMES-NZ provides the multi-energy system and economic dimension. The framework’s thin-waist architecture is the governance layer that makes the exchange of outputs between TIMES-NZ, PyPSA-NZ, and the site-and-regional modules traceable and auditable. The Gauss-Seidel coupling is the architectural vision; the one-directional TIMES-NZ coupling described in Module 5 is the first-generation step toward it.
§7.3 Multi-Domain Extension
The decision-first boundary principle is domain-agnostic. The architectural components of Modules 3 and 4 (modular decomposition, thin-waist artefact exchange, the governed backbone, and DMDU orchestration) are the scaffolding on which any problem with the five structural features of §1.1 can be instantiated. The process heat domain was chosen as the anchor exemplar because it concentrates all five features at an analytically tractable scale. The same concentration appears in at least three other domains that warrant explicit consideration.
Water infrastructure planning under deep uncertainty presents the five structural features in a form structurally isomorphic to the process heat problem. Consider a regional water system planner deciding whether to invest in reservoir expansion, groundwater development, or demand-side management under an uncertain precipitation regime, an uncertain population growth trajectory, and a contested regulatory environment governing water allocation rights. That planner faces durable commitments, cross-scale interactions between catchment hydrology and downstream users, deep uncertainty about precipitation trajectories under climate change, plural objectives spanning agricultural productivity and environmental flows, and multiple actors including farmers, municipal authorities, conservation agencies, and recreational users.
Instantiating the framework for this domain requires specifying the equivalent of each component in the energy domain translation layer. The Facility Module becomes an Agricultural or Municipal Demand Module that translates irrigation or urban water demand under declared technology and efficiency assumptions into the incremental water demand that crosses the facility boundary. The Regional Module becomes a Catchment Hydrology and Allocation Module that evaluates whether the incremental demand can be met under the future’s precipitation and competing-allocation conditions. The artefact families become: DemandPack (time-resolved water demand), IncrementalWaterPack (demand increment at the catchment boundary), CatchmentSignalsPack (allocation headroom, reliability, and scarcity signals), and the same FutureArtefact, ResultArtefact, ValidationArtefact, and DecisionSummaryArtefact families. The uncertain drivers become precipitation anomaly, competing allocation demand growth, regulatory allocation stringency, and demand management effectiveness.
Regional transport infrastructure planning presents the same structure. A regional transport authority deciding whether to invest in road capacity expansion, public transport development, or active travel infrastructure under uncertain population growth and uncertain travel behaviour change faces durable infrastructure commitments, cross-scale interactions between land use patterns and transport demand, deep uncertainty about future travel behaviour, plural objectives spanning mobility, safety, emissions, and land use, and multiple actors including private vehicle users, freight operators, public transport users, and land developers. The Facility Module becomes a Land Use and Trip Generation Module; the Regional Module becomes a Network Capacity and Congestion Module.
Land-use planning and urban development is the third domain. Planning decisions about where housing, industrial, and agricultural land uses are permitted create infrastructure consequences that propagate across transport, water, energy, and community services networks. The framework would represent individual development decisions as Facility Modules whose outputs are incremental demands on shared infrastructure, with Regional Modules representing each infrastructure system’s capacity response and artefact families carrying the multi-infrastructure consequence signals.
In each domain, the thin-waist contract between the Facility Module and the Regional Module is what makes the decision-first architecture analytically tractable. The contract specifies the minimum necessary information that crosses the boundary, expressing the facility’s demand in the form the regional system needs to evaluate its consequences. The governance architecture of Module 3 applies without modification: every artefact must be schema-conforming, provenance-carrying, and validation-gated before admission to the comparison chain. Community contributors working in these domains can develop domain-specific module instantiations and contribute them through the process described in the Context Declaration’s Community Extension section.
§7.4 The Adaptive Decision Laboratory Vision
The proof of concept documented in Module 6 demonstrates a first-generation analytical environment. It evaluates two pathways for one facility against 64 futures using proportional dispatch, stylised regional screening, and a post-processing robustness overlay. Its governance chain is complete. Its results are traceable. Its limitations are declared. What it does not yet demonstrate is the full vision of what the architecture is designed to support when its specified and vision-level components are realised.
That full vision is an adaptive decision laboratory: a persistent, governed, AI-augmented analytical environment in which site operators, regional planners, policy analysts, and community representatives can simultaneously interrogate a shared, continuously updated knowledge base about infrastructure transition decisions under deep uncertainty. It has five characteristics.
Surrogate-accelerated ensemble evaluation. The first characteristic is scale. Trained ML surrogates of the regional electricity module and the biomass resource module replace their full computational equivalents for the interior of the future ensemble while retaining those full models for regime-boundary validation. The surrogate architecture of Sub-Module SM-2.3-A enables the ensemble to expand from 64 futures to several hundred or several thousand, making PRIM-based scenario discovery and vulnerability mapping statistically meaningful. A vulnerability map produced from a 2,000-future ensemble gives the decision-maker a quantitatively grounded picture of which conditions concentrate pathway risk, rather than the illustrative pattern that 64 futures can only suggest.
Multi-scale simultaneous evaluation. The second characteristic is comprehensiveness. The five Facility Modules of the South Island Fonterra portfolio, the sector-scale Regional Module representing all Southland industrial heat users, and the TIMES-NZ-calibrated FutureArtefact ensemble all run simultaneously in a single orchestration cycle. The decision-maker can ask questions about individual facility decisions, about portfolio staging, and about sector-level infrastructure requirements within the same analytical session, because the backbone contains the artefacts from all scales in a single queryable store.
Natural-language query interface. The third characteristic is accessibility. A governed analytical backbone, carrying schema-conforming artefacts with explicit lineage and SHA256-verified integrity, is exactly the kind of structured knowledge store that a capable AI assistant can interrogate reliably. A regional planner who asks “which Southland industrial facilities face grid exceedances in more than 30 percent of the ensemble’s futures?” receives an answer that the assistant can produce by querying the DecisionSummaryArtefacts in the backbone, and the answer comes with an explicit provenance trail: the query was evaluated against artefacts produced by the regional screening module in release bundle poc_20260105_release02, using the 100-future ensemble calibrated to the RETA Southland reference capacity data. That provenance is what makes the answer trustworthy and auditable rather than merely plausible.
Adaptive pathway monitoring. The fourth characteristic is continuity. The framework does not produce a plan and then disengage; it supports ongoing monitoring of the conditions under which the chosen pathway remains robust and the signposts whose observation would trigger a pathway revision. The DAPP framework of Haasnoot and colleagues provides the methodological basis for this monitoring logic: signposts are observable indicators whose values narrow the plausible range of the future ensemble, and triggers are threshold crossings that indicate the need to advance to the next planned pathway stage. The framework’s governance architecture makes it possible to declare signposts and triggers as structured artefacts, monitor them against the backbone’s evolving operational record, and generate update notifications when thresholds are approached.
Community-governed extension. The fifth characteristic is openness. The thin-waist architecture ensures that domain specialists can contribute new module implementations, new artefact families, and new uncertain driver dimensions through the community contribution process, with each contribution reviewed for compatibility with the interface contracts and incorporated with full provenance. The adaptive decision laboratory is not a closed product; it is a governed commons.
§7.5 The Framework’s Current Limits
The progressive-refinement philosophy requires honesty about what the current framework does not yet achieve. The following four categories of limitation are stated precisely rather than hedged.
Technical limits. The current proof of concept implements one-pass coupling, proportional dispatch, and stylised regional screening. These three simplifications together mean that the system-level cost estimates in futures where grid upgrades are required are underestimated relative to what an iterative coupling, scheduling-grade dispatch, and full network optimisation would produce. The bias favours the electrification pathway: the one-pass architecture understates the infrastructure-conditional cost of electrification, making the EB pathway appear somewhat more competitive than a higher-fidelity assessment would show. The 23-of-64 finding therefore likely understates rather than overstates the frequency and magnitude of infrastructure-conditional cost penalties. The next-phase developments specified in §6.10 address each of these technical limits in the order of their analytical consequence.
Institutional limits. The framework as currently specified addresses the decision problem of one site operator or one corporate portfolio manager. It does not yet represent the multi-actor negotiation and bargaining processes through which infrastructure investment decisions are actually made at the regional scale. PowerNet’s Capacity Guarantee Agreement terms, Transpower’s transmission investment prioritisation criteria, and the Electricity Authority’s pricing rules all influence when and at what cost grid infrastructure is available to electrifying industrial users, and these institutional arrangements are more complex than the stylised upgrade menu of the current regional screening module represents. The institutional dimensions are analytically tractable within the framework’s architecture, but they require a richer representation of the regulatory and commercial environment than the current proof of concept provides.
Data limits. The synthetic DemandPacks, the RETA-calibrated headroom estimates, and the indicative upgrade cost menu are all appropriate for a methodological proof of concept but would need to be replaced by measured operational data, Transpower connection assessment outputs, and negotiated commercial terms for a planning-grade application. The framework’s data intake architecture is designed to accommodate this upgrade without structural changes; the replacement is a module-level change that does not affect the analytical backbone or the evaluation layer. However, until that replacement is made, the numerical findings of Module 6 should be understood as illustrative of the framework’s analytical architecture rather than as estimates of the specific cost exposure at the Edendale facility.
Scope limits. The current framework addresses two pathway alternatives, two spatial scales, one temporal horizon, and one region. The extension arguments of §§7.1 through 7.3 show how the architecture extends to multiple facilities, multiple scales, and multiple domains. Those extensions have not been implemented. The eleven propositions of §7.6 are claims about the architecture and its proof-of-concept instantiation; they are not claims about the fully deployed multi-scale system. The propositions should be read with this scope in mind.
§7.6 Eleven Propositions
The following eleven propositions summarise the intellectual contribution of this manuscript. They are statements about the framework, its architecture, and its proof-of-concept instantiation. They are not statements about the general problem of infrastructure decision-making, which others have addressed, nor about the specific future of the Edendale facility, which the framework cannot determine. They are claims the evidence assembled in Modules 0 through 6 supports.
Proposition I. Decision-centred boundary setting is the appropriate architectural response to long-horizon planning problems characterised by durable commitments, cross-scale infrastructure interactions, deep uncertainty, plural objectives, and multiple actors. Model-centred boundary setting produces systematic analysis of the consequences that happen to lie within a conveniently available physical boundary. Decision-centred boundary setting produces analysis of the consequences that determine whether the decision was well-made. Module 1 establishes the five structural features that distinguish these problem types; §0.1 and §0.3 articulate the reversal; Module 6 demonstrates it by producing a finding, the 23-of-64 GXP exceedance, that is invisible within the site boundary and visible at the decision-defined boundary.
Proposition II. The thin-waist artefact exchange is the mechanism that makes decision-first boundary setting operationally feasible across heterogeneous analytical components. The decision-first principle establishes what should be inside the boundary of the analysis. The thin-waist principle establishes how components operating at different scales and using different methods participate in one analytical chain without requiring any component to encompass or reproduce the others. §0.5 and §3.4 develop the principle; the two-repository architecture of Module 6 instantiates it; the SignalsPack SHA256 hash integrity record demonstrates that it is not merely a design aspiration but an implementable governance mechanism.
Proposition III. Deep uncertainty is not a deficiency of data or computation that can be overcome with more sophisticated models; it is a structural feature of certain planning environments that requires a different evaluative standard. When probabilities over futures are contested or structurally unavailable, expected-value reasoning optimises against a summary measure that may obscure the features that most matter to a long-horizon commitment. Module 1 §§1.2 and 1.3 and Sub-Module SM-1.1-A develop the philosophical and methodological case. The five uncertain driver dimensions of the Edendale ensemble, none of which can be assigned a defensible probability distribution from current evidence, instantiate the point.
Proposition IV. Regret, robustness, and satisficing are jointly necessary evaluative standards for long-horizon decisions under deep uncertainty; each captures a dimension that the others do not. Regret is future-specific and comparative. Robustness is cross-future and distributional. Satisficing is threshold-based and institutionally aligned. A decision that minimises maximum regret may have an unacceptably low satisficing rate at a regulatory threshold. A decision with a high satisficing rate may carry intolerable tail exposure. Module 1 §1.5 develops the three standards; Module 6 §6.9 demonstrates their joint application to the Edendale pathway comparison.
Proposition V. Modular decomposition governed by explicit interface contracts and validated artefact families enables progressive analytical refinement without loss of comparability across generations. The implementation independence principle of §3.2 allows modules to evolve from proportional dispatch to LP-based scheduling to OpenModelica thermal simulation without requiring any other module to change. The schema versioning and append-only backbone of §3.7 preserve every prior generation’s outputs for comparison. Module 6 demonstrates the first generation; §6.10 specifies the path to the second.
Proposition VI. The analytical backbone is a methodological commitment, not infrastructure; its governance properties are what make AI-assisted interrogation of planning analysis trustworthy. An AI assistant querying an ungoverned data store produces plausible-sounding answers whose provenance cannot be verified. An AI assistant querying a governed backbone, in which every artefact carries schema-conforming provenance, validation records, and explicit lineage, produces answers whose basis can be traced to specific analytical choices. §3.7 develops this claim; the natural-language query vision of §7.4 elaborates it; the ValidationArtefact and lineage chain of Module 6 §6.8 demonstrate that the governance properties needed for trustworthy AI-assisted interrogation are achievable in practice.
Proposition VII. AI and ML methods can contribute to a governed decision-centred analytical environment without undermining its epistemic standards, provided outputs are governed by the same artefact schema and provenance requirements as deterministic analytical outputs. The argument is structural: what enters the comparison chain must be schema-conforming and provenance-carrying, regardless of production method. §3.8 and Sub-Module SM-3.8-A specify the additional provenance fields required for AI/ML artefacts, including the human acceptance requirement. The surrogate development pathway of Sub-Module SM-2.3-A applies decision-ranking preservation as the validation criterion rather than average prediction accuracy.
Proposition VIII. The divergence between site-perspective and system-perspective cost assessments of electrification pathways under grid-constrained futures is a structurally predictable consequence of current network pricing arrangements, not an incidental finding. The formal derivation of Sub-Module SM-4.4-A shows that positive system-level regret occurs whenever the regional infrastructure cost adder attributable to electrification exceeds the private cost advantage of electrification over the competing pathway. Module 5 documents that under PowerNet’s commercial terms for new and altered connections, the Customer Contribution mechanism means this condition is met whenever a large industrial electrification commitment triggers a connection upgrade that is not fully charged to the triggering load. The 23-of-64 finding of Module 6 §6.9 is the empirical instantiation of the theoretical prediction.
Proposition IX. The seven requirements derived in §1.7 are jointly necessary and sufficient for decision support that is honest about uncertainty, comparable across alternatives, and traceable to its assumptions. Each requirement follows from one or more of the five structural features of the problem class. The architecture of Module 3 provides a mechanism satisfying each requirement. The proof of concept of Module 6 demonstrates each mechanism in operation. A system that satisfies all seven requirements can support legitimate institutional accountability for complex planning decisions; a system that fails any one of them has a structural gap that will eventually produce either an unexaminable result or an incomparable one.
Proposition X. Progressive refinement governed by regret sensitivity diagnostics is a more reliable development strategy for decision-centred analytical environments than commitment to physical completeness. A model that grows toward physical completeness without checking whether the additions change the decision-relevant comparison is adding complexity that may be analytically irrelevant while deferring the simplifications that most matter. A model that grows toward the parameters that regret diagnostics identify as most strongly determining pathway preference adds exactly what the decision requires, in the order it is needed. Requirement 7 from §1.7 formalises this claim; the proof of concept’s self-directed development logic, described in §6.9 Finding 5, demonstrates it.
Proposition XI. The decision-centred modelling framework described in this manuscript makes a methodological contribution to the DMDU tradition that is distinct from its empirical contribution to the New Zealand process heat decarbonisation literature. The empirical contribution, the 23-of-64 GXP finding and its policy interpretation, is specific to the Edendale context and to the current generation of the proof of concept. The methodological contribution, the thin-waist artefact architecture, the decision-first boundary principle, the governed AI/ML integration protocol, and the requirements-based architecture specification, is applicable to any planning problem with the five structural features of §1.1. The DMDU tradition has established the evaluative standards and the ensemble methods; this framework contributes the architectural logic that makes those methods applicable at multi-scale, multi-actor resolution in a governed and auditable way.
§7.7 Closing Argument
This manuscript began with a claim about architecture. The most important analytical choice in complex planning analysis is not which model to build; it is where to draw the boundary. Boundaries drawn by the physical system guarantee that decision-relevant consequences outside that system will be invisible. Boundaries drawn by the decision guarantee that the consequences that determine whether the decision was well-made remain in view.
The eleven propositions of §7.6 are the distilled form of the case this manuscript has assembled. They are supported by a philosophical argument about what kinds of analytical problems require decision-centred organisation, a formal specification of the architecture that makes decision-first boundary setting operationally feasible across heterogeneous components, a domain translation that instantiates that architecture in the energy-domain context of industrial process heat decarbonisation, an empirical grounding in the specific New Zealand conditions that make the Southland case analytically important and tractable, a proof of concept that demonstrates five specific claims with traceable artefacts and declared limitations, and an honest account of what those demonstrations do and do not imply about the framework at larger scale.
What has been demonstrated is bounded but significant. A two-pathway comparison of electrification and biomass alternatives for one large dairy processing facility, evaluated against 64 structured futures, produces a finding that site-level analysis cannot produce: 23 of those futures generate GXP hosting capacity exceedances that impose infrastructure costs invisible within the site boundary. That finding changes the nature of the pathway evaluation from a site-level cost comparison to a system-level robustness assessment. It changes what a responsible investment decision looks like: not the pathway with the lower private net present value under a central-case assumption, but the pathway that performs acceptably across the widest range of conditions the decision-maker cannot control, where those conditions include both site-economics uncertainty and regional infrastructure uncertainty simultaneously.
What has not been demonstrated is equally important to state. The framework is not a planning tool ready for deployment at industrial scale; several of its key components are specified but not yet implemented. The proof-of-concept findings are based on synthetic demand profiles, stylised regional screening, and conservative coupling assumptions, making the specific cost figures illustrative rather than planning-grade. The eleven propositions describe a framework whose architectural claims are supported; they do not describe a system whose numerical outputs should be used to make actual investment recommendations without the next-generation implementations specified in §6.10.
The framework is designed to absorb those implementations progressively without losing the comparability of what has been established. The backbone is append-only. The schemas are versioned. The interface contracts are stable. The lineage is explicit. Each generation of implementation adds to the analytical record rather than replacing it, and the regret sensitivity diagnostics identify which addition should come next by revealing which uncertain driver most strongly determines the current comparison. The framework is self-directing in exactly the sense that progressive refinement requires.
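The properties listed here (append-only storage, versioned schemas, explicit lineage) can be sketched in miniature. The sketch below is purely illustrative: the names `Backbone`, `ArtefactRecord`, `schema_version`, and `parent_id` are hypothetical and do not reflect the framework's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Illustrative sketch only: field names are assumptions, not the
# framework's actual schema.
@dataclass(frozen=True)
class ArtefactRecord:
    artefact_id: str
    schema_version: str       # versioned schema: readers declare which version they expect
    parent_id: Optional[str]  # explicit lineage: which earlier record this one refines
    payload: dict             # the analytical content itself
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class Backbone:
    """Append-only store: records are added, never mutated or deleted."""

    def __init__(self) -> None:
        self._records: list[ArtefactRecord] = []

    def append(self, record: ArtefactRecord) -> None:
        # Appending under an existing ID is rejected: each generation of
        # implementation adds a new record rather than replacing an old one.
        if any(r.artefact_id == record.artefact_id for r in self._records):
            raise ValueError("artefact IDs are immutable; append a new record instead")
        self._records.append(record)

    def lineage(self, artefact_id: str) -> list[str]:
        """Walk parent links back to the root, making provenance explicit."""
        by_id = {r.artefact_id: r for r in self._records}
        chain: list[str] = []
        current = by_id.get(artefact_id)
        while current is not None:
            chain.append(current.artefact_id)
            current = by_id.get(current.parent_id) if current.parent_id else None
        return chain
```

Because the store only ever grows, any comparison established against an earlier record remains reproducible after later generations are appended.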
The invitation is open and specific. A PyPSA developer who reads Sub-Module SM-6.6-E and recognises the regional electricity module specification as work they could implement is invited to do so through the GitHub repository’s contribution process. A DMDU researcher who identifies a robustness metric or a scenario discovery algorithm that would improve the evaluation layer is invited to propose its integration as a methodological extension. A practitioner in water infrastructure, regional transport, or land-use planning who recognises in §7.3 a decision structure analogous to their own is invited to contribute a domain instantiation that specifies the relevant module classes, artefact families, and uncertain drivers in terms of the framework’s interface contract.
The decision makes the boundary; the artefact makes the connection. This document describes, with as much precision as its current state allows, how that principle is implemented, what it reveals when it is, and what remains to be built before it reveals more.
39.8 §7.8 The Open Knowledge Architecture
The manuscript’s Node Declaration system, introduced in the Navigation Guide and used throughout every section and sub-module, was designed to serve two audiences: human readers who need to navigate a long and technically heterogeneous document, and AI agents that need to process the document selectively and reliably. The system’s seven fields (Tier, Status, Assumes, Contributes, Skip condition, Passes to, and Sub-Modules here) provide the metadata that makes both forms of engagement efficient. Each node carries enough information about its own role and dependencies that a reader or agent can make principled decisions about which nodes to process fully and which to skip without losing analytical continuity.
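The seven-field declaration lends itself to a machine-readable form, which is what makes agent-driven navigation feasible. The following sketch is illustrative only; the types and the `missing_dependencies` helper are assumptions, not the manuscript's actual tooling.

```python
from dataclasses import dataclass, field

# Illustrative sketch: a machine-readable Node Declaration. The seven
# field names follow the prose; types and defaults are assumptions.
@dataclass
class NodeDeclaration:
    tier: str                                             # depth tier of the node
    status: str                                           # "Implemented" | "Specified" | "Vision"
    assumes: list[str] = field(default_factory=list)      # upstream nodes this node depends on
    contributes: list[str] = field(default_factory=list)  # downstream claims this node supports
    skip_condition: str = ""                              # when a reader or agent may skip this node
    passes_to: list[str] = field(default_factory=list)    # where the argument continues
    sub_modules: list[str] = field(default_factory=list)  # sub-modules attached here

    def missing_dependencies(self, processed: set[str]) -> list[str]:
        """Nodes listed in Assumes that a reader or agent has not yet processed."""
        return [a for a in self.assumes if a not in processed]
```

An agent deciding whether a node can be skipped safely would check `missing_dependencies` against the set of nodes it has already processed, exactly the "principled skip decision" the prose describes.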
This section proposes an extension of that system that applies the same logic to a harder problem: how should a document that is explicitly designed to grow, through community contributions, progressive refinement, and multi-domain instantiation, govern that growth without losing coherence?
The problem is not trivial. A document that simply accumulates contributions over time becomes incoherent. Sections written under different assumptions, at different levels of technical development, and for different audiences are navigable only if the document’s metadata system can tell a reader, precisely and consistently, what kind of claim each section is making and what kind of engagement it is inviting. The framework’s current Status field distinguishes Implemented from Specified from Vision. This is the right beginning. It is not sufficient for a document that aspires to serve researchers, practitioners, educators, policy analysts, community members, and AI agents simultaneously.
The proposed extension adds two new fields to the Node Declaration Table. The first is a Content Tag, which classifies the primary purpose of the node’s content. The second is an Engagement Tag, which declares what kind of contribution or response the node is inviting. Together these two fields turn the Node Declaration from a navigation aid into a governance declaration: every node states not only what it contains and what it assumes, but what it is for and what it wants from its readers.
The Content Tag vocabulary comprises five tags. A node tagged Implemented presents working analytical machinery whose claims are grounded in the current proof of concept. A node tagged Specified presents designed but unbuilt components whose claims are architectural commitments. A node tagged Vision presents directions whose claims are explicitly provisional. A node tagged Educational presents material whose primary purpose is building understanding rather than advancing new claims; it may draw on established methods and literature rather than original findings. A node tagged Collaborative presents an open specification or open question that is explicitly inviting domain-specific contribution, co-development, or critical engagement.
The Engagement Tag vocabulary comprises three tags. A node tagged Research contains unresolved methodological questions or open theoretical problems that invite scholarly engagement. A node tagged Platform relates to the broader architecture of how the manuscript functions as a knowledge resource, including the tagging system itself. A node tagged Contribute signals that the node contains a contribution template or specification that defines what a valid community contribution to that node would look like.
The practical consequence is a filtered reading experience. An educator developing a course on decision-making under deep uncertainty can filter the manuscript’s Table of Contents to show only Educational nodes, receiving a curated teaching sequence that does not require navigating the full technical apparatus. A PyPSA developer assessing implementation opportunities can filter for Specified + Contribute nodes, seeing a clean list of module specifications that invite implementation contributions. A doctoral student reviewing the framework’s open problems can filter for Research nodes, receiving the manuscript’s honest account of where its own methods are incomplete or contested. An AI agent tasked with summarising the framework’s established claims can filter for Implemented nodes, confining its summary to what the proof of concept has demonstrated.
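Mechanically, the filtered reading experience reduces to tag-based selection over a table of contents. The sketch below is illustrative; the node identifiers and the `filter_toc` function are invented for the example, though the tag strings follow Table 7.8.

```python
# Illustrative only: a toy table of contents in which each entry carries
# a Content Tag and zero or more Engagement Tags. Node IDs are invented.
toc = [
    {"node": "§3.2", "content": "Implemented", "engagement": []},
    {"node": "§6.6", "content": "Specified",   "engagement": ["Contribute"]},
    {"node": "§7.8", "content": "Vision",      "engagement": ["Platform"]},
    {"node": "§2.1", "content": "Educational", "engagement": []},
]

def filter_toc(entries, content=None, engagement=None):
    """Return node IDs matching a Content Tag and/or an Engagement Tag."""
    return [
        e["node"]
        for e in entries
        if (content is None or e["content"] == content)
        and (engagement is None or engagement in e["engagement"])
    ]

# A PyPSA developer's view: Specified nodes that invite contribution.
developer_view = filter_toc(toc, content="Specified", engagement="Contribute")

# An educator's view: the curated teaching sequence.
educator_view = filter_toc(toc, content="Educational")
```

Each reader persona in the paragraph above corresponds to one call with a different tag combination; the document's structure does not change, only the selection over it.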
This filtering is not merely a convenience. It is an expression of the decision-first principle at the level of the document itself. Just as the analytical environment draws its boundaries around what must remain visible for a given decision, the document’s tagging system draws its navigation boundaries around what is relevant for a given reader’s purpose. The document becomes, in this sense, a decision-centred knowledge resource rather than a sequentially structured text.
The governance implication is equally important. When the manuscript grows through community contribution, each new section carries its own tags at the point of contribution. The contributor declares whether the section is Educational, Specified, or Vision; declares whether it contains open Research questions; and declares whether it is inviting further contribution. The editorial process assesses whether the tags are accurately applied, not only whether the content is intellectually sound. A section that claims Implemented status must demonstrate that claim through a reference to a working implementation and a ValidationArtefact. A section that claims Educational status must be written at a level of generality accessible to a reader without prior framework knowledge. The tags are not decorative; they are governance commitments that the editorial process enforces.
The connection to the broader scholarly communication vision discussed in the Context Declaration and in the side argument of §7.6’s Proposition XI is direct. The claim attestation system proposed there, in which specific technical claims are individually attested by named domain experts, operates most effectively when the claims to be attested are already individually identified and tagged. A manuscript with a structured Content Tag system provides the claim register that attestation requires. The tags make explicit which claims belong to which epistemic community, and that explicitness is the precondition for targeted expert engagement.
The tagging system proposed here does not require immediate implementation across the full manuscript. It can be introduced progressively: first in new sections as they are contributed, then retroactively in the sections whose tags are most consequential for navigation. Table 7.8 presents the full proposed vocabulary with definitions, usage conditions, and the visual indicators used in the Table of Contents.
Table 7.8: Extended Node Declaration tag vocabulary
| Tag | Class | Symbol | Definition | Usage condition |
|---|---|---|---|---|
| Implemented | Content | ✓ | Claims grounded in the current proof of concept; working code or method exists | Requires reference to a ValidationArtefact or documented pipeline output |
| Specified | Content | ○ | Fully designed component not yet built; claims are architectural commitments | Requires a specification document or sub-module that defines the implementation target |
| Vision | Content | ◇ | Provisional direction; claims are exploratory and explicitly subject to revision | Must acknowledge the speculative character of the claims within the section prose |
| Educational | Content | ⬡ | Primary purpose is understanding-building; may draw on established methods | Should be accessible to a reader without prior framework knowledge |
| Collaborative | Content | ✦ | Open specification or open question explicitly inviting contribution | Should include or reference a contribution template declaring what a valid response looks like |
| Research | Engagement | [R] | Contains unresolved methodological questions or open theoretical problems | Should state the question precisely enough that a reader could design a study to address it |
| Platform | Engagement | [P] | Relates to how the manuscript functions as a knowledge resource | Used sparingly; reserved for sections whose subject is the document’s own architecture |
| Contribute | Engagement | [C] | Contains a contribution template or formal specification for community extension | Must specify the interface contract or schema requirements that a valid contribution must satisfy |
39.9 §7.9 Beyond Infrastructure: Personal, Organisational, and Financial Decisions
The framework described in this manuscript was developed through the lens of industrial infrastructure. The Edendale dairy facility, the Southland electricity grid, and the New Zealand ETS are specific. The underlying problem class is not. Every principle in Modules 1 through 3 applies to any situation where the following five features are present: a durable commitment, consequences that propagate beyond the boundary of the decision-maker’s immediate view, genuine uncertainty that cannot be resolved into probabilities, plural objectives, and multiple stakeholders with different exposures to different outcomes.
Consider a personal financial decision with a 20-year horizon: choosing a career specialisation, deciding whether to purchase property, or selecting a retirement investment allocation. The predict-and-commit logic that most financial planning tools offer assumes that a central-case projection of future income, inflation, and asset returns is the right basis for the choice. The decision-centred logic asks instead: which allocation performs acceptably across the full range of futures I cannot predict — the futures where my income trajectory is disrupted, where inflation behaves differently from the projection, where my priorities change? The regret metric applied to a personal financial portfolio is conceptually identical to the regret metric applied to an industrial energy pathway. The plausible futures are different; the evaluative logic is the same.
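The shared evaluative logic can be made concrete with a minimal minimax-regret calculation. The allocation names and payoff numbers below are invented for illustration; only the logic carries over, namely regret measured relative to the best alternative in each future, followed by comparison of worst-case regret across alternatives.

```python
# Invented payoffs: outcome of each allocation under each future
# (e.g. real terminal value in arbitrary units). The numbers are
# illustrative; the evaluative logic is the point.
payoffs = {
    "aggressive":   {"boom": 100, "stagnation": 40, "disruption": 10},
    "balanced":     {"boom": 70,  "stagnation": 55, "disruption": 35},
    "conservative": {"boom": 50,  "stagnation": 50, "disruption": 45},
}
futures = ["boom", "stagnation", "disruption"]

# Regret in a future = best achievable payoff in that future minus
# the payoff of the chosen alternative in that future.
best_in_future = {f: max(p[f] for p in payoffs.values()) for f in futures}
regret = {
    alt: {f: best_in_future[f] - p[f] for f in futures}
    for alt, p in payoffs.items()
}
worst_regret = {alt: max(r.values()) for alt, r in regret.items()}

# The minimax-regret choice: performs acceptably across all futures,
# rather than best under any single central-case projection.
robust_choice = min(worst_regret, key=worst_regret.get)
```

Under these invented numbers the aggressive allocation wins in the boom future but carries the largest worst-case regret, so the minimax-regret criterion selects the balanced allocation. The same computation, with pathways in place of allocations and structured futures in place of the three toy scenarios, is the comparison the framework runs at Edendale.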
Consider an organisational strategy decision: whether to commit to a specific technology platform, enter a new market, or restructure a supply chain. Each of these decisions involves durable commitments, cross-boundary consequences (through partners, competitors, and customers who are not inside the organisation), deep uncertainty about the regulatory and market futures that will determine whether the decision was wise, plural objectives spanning financial return and organisational resilience, and multiple stakeholders. The decision-centred architecture responds to this problem class with the same structure it applies to industrial infrastructure: explicit alternatives, explicit futures, decision-relevant outcomes, declared boundaries, paired comparison, and traceability.
The extension to personal and organisational contexts does not require implementing the full technical stack of Modules 3 and 4. A person making a career decision does not need a governed analytical backbone or a schema-conforming artefact exchange. They need the intellectual discipline: what am I actually choosing between, what are the futures that will determine whether I chose well, and which choice performs acceptably across the widest range of those futures? The principles are the portable element; the architecture is the scaling mechanism that makes those principles analytically rigorous and institutionally accountable at the infrastructure scale.
Module 0 states that the decision makes the boundary. This applies as much to a person deciding whether to pursue a doctorate as to a corporation deciding whether to electrify a boiler fleet. The boundary should encompass the consequences that matter for that decision — including the ones that lie outside the frame you started with.
39.10 §7.10 Governed Expert Attestation: A Community Quality Architecture
The open knowledge architecture of §7.8 establishes how content is classified, tagged, and governed as it enters the manuscript. It establishes that modules may carry Implemented, Specified, or Vision status; that individual claims may be tagged for Research investigation or Collaborative contribution; and that the manuscript is designed to grow through community extension without losing coherence. What §7.8 does not yet establish is how the intellectual quality of contributed content is assured once it has been classified and admitted. This section proposes a governed expert attestation mechanism that addresses that question, not as an optional quality check applied after publication but as a structural component of the framework’s community extension architecture.
The problem it solves is specific. A manuscript about decision-centred modelling for industrial process heat decarbonisation makes technical claims that span at least five distinct epistemic communities: DMDU methodology, energy system modelling, New Zealand electricity network operation, industrial process heat technology, and applied decision theory. No single reviewer commands genuine expertise across all five. The academic peer review system, which assigns two or three reviewers for a paper as a whole, covers this epistemic landscape imperfectly and cannot be expected to provide domain-specific attestation for claims that lie outside each reviewer’s primary competence. The result is a systematic gap: methodological claims are reviewed by methods specialists who may not evaluate their application validity; application-specific claims are reviewed by practitioners who may not evaluate their methodological rigour.
The gap matters because it directly affects the value of the framework’s outputs. A planning decision supported by a framework whose technical claims have been independently attested by named domain experts occupies a qualitatively different institutional position from one supported by a framework that has passed undifferentiated peer review. The former can be audited claim by claim; the latter can only be accepted or rejected as a whole.
The proposed mechanism is claim-level expert attestation. Individual technical claims within the manuscript are registered in a public claims register — a structured document maintained in the project repository as CLAIMS.md. Each registered claim carries: a unique claim identifier, the module and section where it appears, a plain-language statement of the claim, the epistemic community whose expertise is most relevant to evaluating it, and its current attestation status.
Domain experts attest to the validity of specific claims within their declared competence. An attestation record carries: the claim identifier, the attesting expert’s full name and institutional affiliation, their declared domain of competence, the date of attestation, and a structured verdict from the following vocabulary: confirmed (the claim is correct within its stated scope), confirmed with qualification (the claim is substantially correct but requires a specific caveat), or contested (the claim is disputed, with a documented basis for the contestation). Contested attestations are as valuable as confirmations: they identify, precisely, where the analytical environment’s assumptions are most uncertain and where refinement would most improve decision quality.
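The claim and attestation records described above map naturally onto simple structured types. The sketch below is illustrative; the field names follow the prose, but the validation rule in `__post_init__` is an assumption about how a register implementation might enforce the requirement that qualified or contested verdicts state a basis.

```python
from dataclasses import dataclass
from typing import Literal

# Illustrative sketch of register records; field names follow the prose.
Verdict = Literal["confirmed", "confirmed with qualification", "contested"]

@dataclass(frozen=True)
class Claim:
    claim_id: str    # unique claim identifier
    location: str    # module and section where the claim appears
    statement: str   # plain-language statement of the claim
    community: str   # epistemic community most relevant to evaluating it

@dataclass(frozen=True)
class Attestation:
    claim_id: str
    attester: str    # full name and institutional affiliation
    competence: str  # declared domain of competence
    date: str
    verdict: Verdict
    basis: str = ""  # required if contested or qualified

    def __post_init__(self) -> None:
        # Assumed enforcement rule: a verdict other than plain "confirmed"
        # must document its basis, so contested attestations remain usable
        # as refinement signals.
        if self.verdict != "confirmed" and not self.basis:
            raise ValueError("a qualified or contested verdict requires a stated basis")
```

A register built from such records is trivially queryable, which is what makes the attestation status of a claim visible to readers, AI agents, and auditors alike.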
The governance parallel is exact. The mechanism operates on the same logic as the framework’s ValidationArtefact. Just as every analytical artefact that enters the comparison chain must pass a declared acceptance gate and carry a validation record before it is admitted to the backbone, every technical claim in a community-contributed module carries a public attestation record before it is considered established within the framework’s knowledge base. The attestation record does not replace peer review of the manuscript as a whole; it complements it by making the domain-specific quality assurance of individual claims visible, traceable, and cumulative across the framework’s development history.
The connection to progressive refinement is direct. A claim that attracts contestation from a domain expert is exactly the kind of signal that the framework’s self-directing development logic is designed to act on. It identifies, precisely, where the analytical environment’s assumptions are most contested and therefore where refinement most improves the quality of the decision support provided. A claim that is confirmed by experts from multiple epistemic communities simultaneously achieves a higher epistemic status than one confirmed by a single community. A module that has received attested confirmations from experts in three or more distinct epistemic communities carries a qualitatively different epistemic standing than one that has not, and the attestation register makes that standing visible and queryable by any reader, AI agent, or institutional auditor.
The minimum viable implementation does not require building a new platform. A public claims register is maintained as CLAIMS.md in the project repository. Attestations are submitted through structured GitHub Issues using a declared template, reviewed by the lead author, and admitted to the register with a provenance record. The register is linked from the relevant module section in the manuscript, so any reader who wants to assess the attestation status of a specific claim can navigate directly to the record. The template for a contribution is:

- Claim ID: [from CLAIMS.md]
- Attester name and institution:
- Declared domain of competence:
- Date:
- Verdict: [confirmed / confirmed with qualification / contested]
- Basis (required if contested or qualified):
This infrastructure is operational today. The CLAIMS.md file and the GitHub Issues template represent a two-hour implementation investment. What requires investment is the community norm: the shared understanding that claim-level attestation is a valuable form of scholarly contribution, that domain experts who provide attestations are acknowledged contributors to the framework’s knowledge base, and that the attestation register constitutes a living quality record that improves the framework’s institutional credibility over time.
The broader vision described in the Context Declaration — a framework that is simultaneously rigorous, community-governed, and institutionally accountable — depends on exactly this kind of quality architecture. The thin-waist artefact governance that makes analytical outputs trustworthy, the decision-first boundary principle that keeps the analysis relevant, and the expert attestation mechanism that makes knowledge claims auditable are three expressions of the same underlying commitment: that what the framework asserts, it can defend, trace, and improve through principled engagement with the relevant expert communities.
[Collaborative]
If you have domain expertise relevant to any claim in this manuscript, you are invited to submit an attestation through GitHub Issues. The claims register and submission template are maintained at:
github.com/Ahmad-Mahmoudi-coder/DCM
Attestations from any epistemic community relevant to the framework’s claims are welcomed.