asset workspace progress

parent 4c8190cc31
commit 59e1ad5ecf
# Filesystem-First Operational Runtime and Reconcile Boundary Decision

Status: Accepted

Date: 2026-03-15

Domain Owner: `docs/packer`

Cross-Domain Impact: `docs/studio`
## Context

The current packer model already has the right core separation:

- `assets/.prometeu/index.json` is the authoritative registry/catalog for managed assets;
- each asset root is anchored by `asset.json`;
- `asset.json` is the authoring-side declaration of how that asset is packed;
- Studio is meant to consume packer-owned operational semantics instead of recreating them locally.
What is still missing is a clear architectural decision for how the packer should behave operationally at runtime.

The repository now needs an answer to these questions:

1. Should the packer remain a collection of direct filesystem services, recomputing state per request?
2. Should the packer become a pure database-style system that displaces the open filesystem workflow?
3. Or should it become a filesystem-first operational runtime that maintains an in-memory snapshot while preserving the workspace as the durable authoring surface?
The wrong answer here would create product friction.

If the packer behaves like a pure database, it will fight the real creative workflow where developers:

- edit files with their preferred tools;
- move directories manually when needed;
- inspect and version workspace files directly;
- expect the Studio to help organize work, not imprison it.

At the same time, if the packer stays purely filesystem-per-call, it will remain too expensive, too incoherent under concurrent use, and too weak as the operational source of truth for Studio.
## Decision

The following direction is adopted:

1. `prometeu-packer` remains `filesystem-first`.
2. The packer becomes a `project-scoped operational runtime`, not a pure database.
3. The packer maintains an in-memory project snapshot for live operational reads and write coordination.
4. The durable authoring workspace on disk remains the final persisted source of truth.
5. The packer owns request/response read and write APIs over that runtime snapshot.
6. Writes execute through a packer-owned project write lane and become durably visible only after commit succeeds.
7. The packer will support background divergence detection between runtime snapshot and filesystem state.
8. Divergence detection must surface reconcile state explicitly; it must not silently invent or hide semantic repairs.
9. Studio remains a frontend consumer of packer responses, commands, and events.
10. When embedded inside Studio, the packer runtime is bootstrapped with a typed event bus reference supplied by the Studio bootstrap container.
## Adopted Constraints

### 1. Filesystem-First Authority

- the asset workspace under `assets/` remains the authoring environment;
- `asset.json` remains the asset-local declaration contract;
- `assets/.prometeu/index.json` remains the authoritative registry/catalog for managed assets;
- the packer runtime snapshot is an operational projection of packer-owned workspace artifacts, not a replacement authoring format.
### 2. Identity and Declaration Split

- `asset_id`, `included_in_build`, and registry-managed location tracking remain registry/catalog concerns;
- `asset.json` remains primarily a declaration of the asset contract and packing inputs/outputs;
- `asset.json` may carry the asset-local identity anchor needed for reconcile, specifically `asset_uuid`;
- `asset.json` must not become a dumping ground for transient UI state, cache state, or catalog-derived bookkeeping that belongs in `index.json` or other packer-owned control files.
### 3. Snapshot-Backed Read Semantics

- normal read APIs should serve from a coherent in-memory project snapshot;
- the packer must not require a full workspace recomputation for every normal read call once the runtime is active;
- concurrent reads may proceed when they observe a coherent snapshot generation;
- reads must not expose torn intermediate write state as committed truth.
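The read semantics above can be sketched as an immutable, generation-stamped snapshot published through an atomic reference: readers always see one complete generation, and a write swaps in a fully built replacement. This is an illustrative sketch only; `ProjectSnapshot`, `SnapshotHolder`, and the string payload are hypothetical names invented here, not the packer's actual types.

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: readers observe one coherent, immutable snapshot
// generation; a write publishes a complete new snapshot atomically, so no
// reader can observe a torn, half-updated view as committed truth.
final class ProjectSnapshot {
    final long generation;            // monotonically increasing snapshot id
    final Map<String, String> assets; // asset id -> summary (placeholder payload)

    ProjectSnapshot(long generation, Map<String, String> assets) {
        this.generation = generation;
        this.assets = Map.copyOf(assets); // defensive immutable copy
    }
}

final class SnapshotHolder {
    private final AtomicReference<ProjectSnapshot> current =
            new AtomicReference<>(new ProjectSnapshot(0, Map.of()));

    // Normal reads serve from memory; no filesystem recomputation per call.
    ProjectSnapshot read() {
        return current.get();
    }

    // The write lane publishes a fully built replacement snapshot.
    void publish(ProjectSnapshot next) {
        current.set(next);
    }
}

public class SnapshotReadDemo {
    public static void main(String[] args) {
        SnapshotHolder holder = new SnapshotHolder();
        ProjectSnapshot before = holder.read();
        holder.publish(new ProjectSnapshot(before.generation + 1,
                Map.of("asset-a", "packed")));
        ProjectSnapshot after = holder.read();
        System.out.println(after.generation);            // 1
        System.out.println(after.assets.get("asset-a")); // packed
    }
}
```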
### 4. Packer-Owned Write Execution

- write operations on one project are coordinated by the packer, not by caller timing;
- the baseline policy remains a single-writer semantic lane per project;
- write intent may compute previews before final commit when safe;
- final apply/commit remains serialized per project;
- successful durable commit defines post-write visibility.
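One way to realize the single-writer semantic lane is a single-threaded executor per project, so commit order is owned by the lane rather than by caller timing. `ProjectWriteLane` and the demo names are hypothetical, and the real packer may choose a different executor primitive — the decision deliberately leaves that open.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of the single-writer lane: all writes for one project
// funnel through one single-threaded executor, so final apply/commit is
// serialized regardless of how callers happen to interleave their requests.
final class ProjectWriteLane {
    private final ExecutorService lane = Executors.newSingleThreadExecutor();

    // Callers get a Future handle; the lane, not the caller, orders commits.
    <T> Future<T> submit(Callable<T> writeOperation) {
        return lane.submit(writeOperation);
    }

    void shutdown() {
        lane.shutdown();
    }
}

public class WriteLaneDemo {
    public static void main(String[] args) throws Exception {
        ProjectWriteLane lane = new ProjectWriteLane();
        StringBuilder log = new StringBuilder(); // touched only on the lane thread
        Future<String> first = lane.submit(() -> { log.append("A"); return "A committed"; });
        Future<String> second = lane.submit(() -> { log.append("B"); return "B committed"; });
        System.out.println(first.get());
        System.out.println(second.get());
        System.out.println(log); // "AB": submission order, never interleaved
        lane.shutdown();
    }
}
```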
### 5. Durable Commit Boundary

- the packer runtime may stage write changes in memory before disk commit;
- partially applied intermediate state must not be presented as durably committed truth;
- commit failure must leave the project in an explicitly diagnosable condition;
- recovery and reconcile rules must be designed as packer behavior, not delegated to Studio guesswork.
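A minimal sketch of one possible commit discipline under these constraints: stage to a temporary sibling file, then move it into place atomically, so workspace readers never see a half-written file as committed truth. The packer's actual commit pipeline is not defined by this decision; `commit` and the `.staging` suffix are illustrative assumptions only.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of a durable commit boundary: staged content is written
// to a temporary sibling first, then atomically moved over the target.
public class DurableCommitDemo {
    static void commit(Path target, String stagedContent) throws IOException {
        Path staging = target.resolveSibling(target.getFileName() + ".staging");
        Files.writeString(staging, stagedContent);
        // If the move fails, the target is untouched and the staging file
        // remains on disk as an explicitly diagnosable condition.
        Files.move(staging, target, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("packer-demo");
        Path assetJson = dir.resolve("asset.json");
        commit(assetJson, "{\"asset_uuid\": \"demo\"}");
        System.out.println(Files.readString(assetJson));
    }
}
```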
### 6. Divergence Detection and Reconcile

- a future packer-owned background observation path may detect divergence between runtime snapshot and filesystem state;
- this path exists to keep the runtime honest with respect to manual or external edits;
- divergence must result in explicit runtime state such as stale, diverged, reconciling, or failed;
- the packer must not silently rewrite user content just because divergence was observed;
- reconcile is an explicit packer-owned behavior and must preserve causality in events and responses.
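The explicit states could be modeled as a small enum whose transitions only report divergence, never repair it. The names below extend the stale/diverged/reconciling/failed vocabulary from the bullets above, but the decision explicitly leaves the final event vocabulary open, so this is a sketch, not a contract.

```java
// Hypothetical sketch of the explicit reconcile states the decision names:
// divergence is surfaced as state, never as a silent rewrite of user content.
enum RuntimeSyncState {
    COHERENT,    // snapshot matches last known filesystem truth
    STALE,       // filesystem changed; snapshot not yet refreshed
    DIVERGED,    // conflicting runtime and filesystem views detected
    RECONCILING, // explicit packer-owned reconcile in progress
    FAILED       // reconcile could not complete; needs diagnosis
}

public class SyncStateDemo {
    // Observing an external edit only transitions state; it mutates nothing.
    static RuntimeSyncState onExternalEditObserved(RuntimeSyncState current) {
        return switch (current) {
            case COHERENT, STALE -> RuntimeSyncState.STALE;
            case DIVERGED, RECONCILING, FAILED -> current;
        };
    }

    public static void main(String[] args) {
        System.out.println(onExternalEditObserved(RuntimeSyncState.COHERENT)); // STALE
        System.out.println(onExternalEditObserved(RuntimeSyncState.FAILED));   // FAILED
    }
}
```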
### 7. Studio Consumer Boundary

- Studio consumes packer read responses, write outcomes, and packer-native lifecycle events;
- Studio may render stale/diverged/reconciling states, but must not invent packer-side reconcile semantics;
- Studio must not become the owner of filesystem-vs-snapshot conflict resolution;
- the Studio integration contract should remain command-oriented and event-driven.
### 8. Embedded Event Bus Bootstrap

- when the packer is embedded inside Studio, it must receive a typed event bus reference during bootstrap instead of creating an unrelated local event system;
- that reference is used for packer event publication and any packer-side subscription/unsubscription behavior needed by the embedded runtime;
- in Studio, the owner of that typed event bus reference is the application container;
- the Studio `Container` must be initialized as part of Studio boot before packer-backed workspaces or adapters start consuming packer services.
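A bootstrap sketch under these constraints: the container-owned bus is created first during Studio boot and handed to the packer runtime, which refuses to start without it. `TypedEventBus`, `ContainerOwnedBus`, and `PackerRuntime` are hypothetical illustration names, not the real Studio or packer types.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch: the Studio container owns the typed event bus and
// hands the packer a reference at bootstrap; the packer never invents an
// unrelated local event system of its own.
interface TypedEventBus {
    void publish(Object event);
    void subscribe(Consumer<Object> listener);
}

final class ContainerOwnedBus implements TypedEventBus {
    private final List<Consumer<Object>> listeners = new ArrayList<>();
    public void publish(Object event) { listeners.forEach(l -> l.accept(event)); }
    public void subscribe(Consumer<Object> listener) { listeners.add(listener); }
}

final class PackerRuntime {
    private final TypedEventBus bus;

    // Bootstrap requires the container-supplied bus reference up front.
    PackerRuntime(TypedEventBus containerBus) {
        if (containerBus == null) {
            throw new IllegalArgumentException("event bus required at bootstrap");
        }
        this.bus = containerBus;
    }

    void announceReady() { bus.publish("packer-runtime-ready"); }
}

public class BootstrapDemo {
    public static void main(String[] args) {
        TypedEventBus bus = new ContainerOwnedBus(); // initialized by Studio boot first
        List<Object> seen = new ArrayList<>();
        bus.subscribe(seen::add);
        new PackerRuntime(bus).announceReady();
        System.out.println(seen);
    }
}
```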
## Why This Direction Was Chosen

- It preserves the developer's open creative workflow around normal files and directories.
- It keeps the packer useful as an organizing and coordinating system instead of turning it into an opaque silo.
- It allows fast and coherent reads for Studio and tooling.
- It gives write coordination, commit visibility, and operational causality one owner.
- It creates a realistic path toward future background divergence detection without pretending that the filesystem stopped mattering.
## Explicit Non-Decisions

This decision does not define:

- the final class/module names of the packer runtime implementation;
- the final executor/thread primitive used internally;
- the exact event vocabulary for all future reconcile states;
- the final automatic-vs-manual reconcile policy for each drift scenario;
- a watch-service or daemon transport implementation;
- remote or multi-process synchronization;
- a pure database persistence model.
## Implications

- the packer runtime track must preserve `index.json` plus `asset.json` as the durable workspace artifacts;
- `asset.json` should evolve carefully to support local identity anchoring without absorbing catalog-only fields;
- the runtime snapshot should be described as an operational cache/projection with authority for live service behavior, not as a new authoring truth;
- mutation, doctor, build, and read services should converge on the same runtime state model;
- future drift detection work must be designed together with diagnostics, refresh, and reconcile surfaces in Studio;
- embedded Studio wiring must preserve one container-owned typed event bus reference instead of fragmented packer-local bus ownership.
## Propagation Targets

Specs:

- [`../specs/2. Workspace, Registry, and Asset Identity Specification.md`](../specs/2.%20Workspace,%20Registry,%20and%20Asset%20Identity%20Specification.md)
- [`../specs/3. Asset Declaration and Virtual Asset Contract Specification.md`](../specs/3.%20Asset%20Declaration%20and%20Virtual%20Asset%20Contract%20Specification.md)
- [`../specs/5. Diagnostics, Operations, and Studio Integration Specification.md`](../specs/5.%20Diagnostics,%20Operations,%20and%20Studio%20Integration%20Specification.md)
- [`../specs/6. Versioning, Migration, and Trust Model Specification.md`](../specs/6.%20Versioning,%20Migration,%20and%20Trust%20Model%20Specification.md)
Plans:

- [`../pull-requests/PR-11-packer-runtime-restructure-snapshot-authority-and-durable-commit.md`](../pull-requests/PR-11-packer-runtime-restructure-snapshot-authority-and-durable-commit.md)

Cross-domain references:

- [`../../studio/specs/4. Assets Workspace Specification.md`](../../studio/specs/4.%20Assets%20Workspace%20Specification.md)
Implementation surfaces:

- future packer project-runtime/bootstrap code
- snapshot-backed read services
- project write-lane and durable commit pipeline
- drift detection and reconcile state reporting
- Studio packer adapters for stale/diverged/reconciling operational states
- Studio bootstrap/container wiring for the shared typed event bus reference
## Validation Notes

This decision is correctly implemented only when all of the following are true:

- developers may continue to inspect and edit asset workspace files directly;
- packer reads are coherent without requiring full recomputation on each normal request;
- same-project writes remain serialized by the packer;
- Studio does not observe torn committed truth during write activity;
- divergence between runtime snapshot and filesystem state can be detected and surfaced explicitly;
- Studio remains a consumer of packer-owned reconcile and lifecycle semantics rather than inventing them.
This directory contains packer decision records.

Retained packer decision records:

- [`Concurrency, Observability, and Studio Adapter Boundary Decision.md`](./Concurrency,%20Observability,%20and%20Studio%20Adapter%20Boundary%20Decision.md)
- [`Filesystem-First Operational Runtime and Reconcile Boundary Decision.md`](./Filesystem-First%20Operational%20Runtime%20and%20Reconcile%20Boundary%20Decision.md)

The first packer decision wave was already consolidated into:
# PR-11 Packer Runtime Restructure, Snapshot Authority, and Durable Commit

Domain Owner: `docs/packer`

Cross-Domain Impact: `docs/studio`
## Briefing

The current `prometeu-packer` production track established the packer as the semantic owner of asset state, write semantics, diagnostics, and operational events.

The next architectural step is to restructure the packer so it behaves like a filesystem-first, project-scoped operational runtime for the service surface the Studio actually uses today, rather than a collection of filesystem-per-call services:

- reads should come from a coherent in-memory snapshot;
- writes should execute through a packer-owned write path;
- state transitions should be coordinated by the packer, not by incidental caller sequencing;
- durable visibility should be defined by commit to disk, not by partially observed intermediate filesystem state;
- Studio should remain a frontend consumer of packer-owned read/write/event contracts.
This is a service-first re-architecture program, not a cosmetic refactor.
The likely outcome is a substantial internal rewrite of the packer service layer while preserving and tightening the external semantic contract already defined by the packer specs.

The current wave is intentionally narrow:

- build only the embedded service runtime needed by Studio asset management;
- remove unused or out-of-scope capabilities from the active code path instead of carrying them forward speculatively;
- reintroduce `doctor`, `build/pack`, and background reconcile only when a later concrete service need justifies them.

This PR is an umbrella planning artifact only.
It does not authorize direct implementation work by itself.
## Objective

Define and execute a family of packer PRs that turns the packer into a project-scoped runtime with:

- explicit read and write APIs;
- a coherent in-memory project snapshot;
- packer-owned threading for state write coordination;
- durable commit to workspace files as the persistence boundary;
- causality-preserving events for Studio and other tooling consumers;
- an aggressively reduced active surface focused on the service capabilities currently consumed by Studio.
Communication model baseline:

- request/response is the primary contract for queries and commands;
- events are the primary contract for asynchronous lifecycle, progress, divergence, and terminal operation reporting;
- synchronous command entrypoints may return a `Future` directly when the caller needs an operation handle for later completion;
- long-running command completion may still be observed through causality-preserving events correlated to that operation;
- the packer should not use events as a replacement for normal query/command response semantics.
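The baseline above can be illustrated with a command entrypoint that returns a `Future` operation handle directly, while completion is also reported through events correlated by operation id — events augment the response lane rather than replace it. `relocateAsset`, `OperationEvent`, and the queue-based event stand-in are hypothetical names for illustration.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of the communication baseline: the synchronous command
// entrypoint returns a Future handle; lifecycle events carry the same
// operation id so long-running completion can also be observed causally.
public class CommandHandleDemo {
    record OperationEvent(String operationId, String phase) {}

    static Future<String> relocateAsset(String operationId,
                                        ExecutorService lane,
                                        BlockingQueue<OperationEvent> events) {
        return lane.submit(() -> {
            events.put(new OperationEvent(operationId, "started"));
            String outcome = "relocated"; // placeholder for the real write work
            events.put(new OperationEvent(operationId, "completed"));
            return outcome;               // request/response result
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService lane = Executors.newSingleThreadExecutor();
        BlockingQueue<OperationEvent> events = new LinkedBlockingQueue<>();
        Future<String> handle = relocateAsset("op-1", lane, events);
        System.out.println(handle.get());          // relocated
        System.out.println(events.take().phase()); // started
        System.out.println(events.take().phase()); // completed
        lane.shutdown();
    }
}
```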
## Dependencies

- [`../specs/1. Domain and Artifact Boundary Specification.md`](../specs/1.%20Domain%20and%20Artifact%20Boundary%20Specification.md)
- [`../specs/2. Workspace, Registry, and Asset Identity Specification.md`](../specs/2.%20Workspace,%20Registry,%20and%20Asset%20Identity%20Specification.md)
- [`../specs/5. Diagnostics, Operations, and Studio Integration Specification.md`](../specs/5.%20Diagnostics,%20Operations,%20and%20Studio%20Integration%20Specification.md)
- [`../decisions/Concurrency, Observability, and Studio Adapter Boundary Decision.md`](../decisions/Concurrency,%20Observability,%20and%20Studio%20Adapter%20Boundary%20Decision.md)
- cross-domain reference: [`../../studio/specs/4. Assets Workspace Specification.md`](../../studio/specs/4.%20Assets%20Workspace%20Specification.md)
Decision baseline already in place:

- [`../decisions/Filesystem-First Operational Runtime and Reconcile Boundary Decision.md`](../decisions/Filesystem-First%20Operational%20Runtime%20and%20Reconcile%20Boundary%20Decision.md)

This PR is the umbrella execution plan for that direction.
It is not a substitute for that decision record and it is not itself an implementation PR.
## Scope

- lock the architectural target and implementation decomposition for the runtime-restructure wave
- perform cleanup and active-surface reduction before runtime work begins
- define a packer-internal project runtime model that owns one coherent state snapshot per active project
- define packer-owned read APIs that serve data from the runtime snapshot instead of recomputing the full model from disk for each call
- define a packer-owned write lane that executes on packer-controlled threading rather than caller-controlled sequencing
- define the durable commit model from in-memory state to workspace files under `assets/` and `assets/.prometeu/`
- define snapshot refresh/bootstrap/recovery behavior
- define embedded-host bootstrap rules for supplying the explicit `PackerEventSink` used by packer event publication, with host-side bridging to any shared typed event bus
- define the query/command versus event boundary for Studio integration
- define how synchronous command entrypoints expose `Future`-based completion to callers that need direct operation handles
- migrate only the service surface currently used by Studio asset management onto the runtime model
- remove or retire implementation paths that are not used by that active service wave
- preserve Studio as a consumer of packer responses and events, not an owner of packer semantics
- retire the current filesystem-per-call service style once the runtime-backed path is stable
## Non-Goals

- no direct code implementation inside `PR-11`
- no direct rollout of one monolithic runtime rewrite under one follow-up change
- no redesign of Studio workspace UX
- no remote daemon or IPC transport in this wave
- no distributed or multi-process transactional protocol
- no final watch-service design for external filesystem edits
- no `doctor` implementation in this wave
- no `build/pack` implementation in this wave
- no background reconcile implementation in this wave
- no silent semantic changes to asset identity, registry authority, or write behavior already defined in packer specs
- no replacement of the packer event model with Studio-local orchestration
## Execution Method

This work must be executed as a family of follow-up PRs.
`PR-11` freezes the target architecture and the decomposition logic, but implementation starts only in later PR documents.

The follow-up implementation PR family is:

1. [`PR-12-cleanup-and-unused-surface-removal-before-runtime-service-wave.md`](./PR-12-cleanup-and-unused-surface-removal-before-runtime-service-wave.md)
2. [`PR-13-embedded-bootstrap-container-owned-event-bus-and-packer-composition-root.md`](./PR-13-embedded-bootstrap-container-owned-event-bus-and-packer-composition-root.md)
3. [`PR-14-project-runtime-core-snapshot-model-and-lifecycle.md`](./PR-14-project-runtime-core-snapshot-model-and-lifecycle.md)
4. [`PR-15-snapshot-backed-asset-query-services.md`](./PR-15-snapshot-backed-asset-query-services.md)
5. [`PR-16-write-lane-command-completion-and-used-write-services.md`](./PR-16-write-lane-command-completion-and-used-write-services.md)
6. [`PR-17-studio-runtime-adapter-and-assets-workspace-consumption.md`](./PR-17-studio-runtime-adapter-and-assets-workspace-consumption.md)
7. [`PR-18-legacy-service-retirement-and-regression-hardening.md`](./PR-18-legacy-service-retirement-and-regression-hardening.md)
Each follow-up PR should remain granular enough to:

- have a narrow acceptance surface;
- carry its own tests and rollback story;
- avoid mixing cleanup, bootstrap, runtime-core, UI-adapter, and deferred capability concerns in one code change.

Wave discipline for all follow-up PRs:

- remove code that is not used by the active Studio-facing service wave instead of preserving speculative extension points;
- do not reintroduce `doctor`, `build/pack`, or background reconcile as placeholders;
- add capabilities later only when the active Studio integration requires them.
Deferred from the current wave:

- `doctor`
- `build/pack`
- background reconcile/diff observer

Those capabilities should be reintroduced only when the active service wave needs them.
## Acceptance Criteria

- `PR-11` remains an umbrella plan rather than a direct implementation vehicle
- the follow-up implementation family is clear enough that later PRs can be opened without reopening the architecture debate
- the packer has an explicit project-scoped runtime authority model
- read operations observe coherent snapshot state
- write operations are executed through packer-owned coordination on packer-controlled threading
- durable visibility is defined by successful commit, not by partially observed intermediate filesystem state
- same-project write conflicts cannot commit concurrently
- same-project read/write interaction does not expose torn committed truth
- asset identity, registry authority, and write semantics remain consistent with existing packer specs
- the active implementation surface contains only the service capabilities currently used by Studio
- unused or out-of-scope legacy capability paths are removed instead of lingering in parallel
- Studio consumes packer read/write/event APIs as a frontend consumer and does not regain semantic ownership
- request/response remains the primary query/command contract while events remain the asynchronous observability contract
- synchronous command APIs may expose `Future` completion directly without collapsing the event lane into ad hoc RPC polling
- event ordering and `operation_id` causality remain valid through the restructured runtime
- the packer keeps `PackerEventSink` as its publication boundary instead of depending directly on host event bus types
- embedded hosts may bridge `PackerEventSink` into a container-owned typed event bus, but the packer API must not normalize a silent `noop` default as production bootstrap behavior
## Validation

- snapshot bootstrap tests for projects with valid, invalid, and partially broken asset workspaces
- read coherence tests under concurrent read pressure
- write serialization tests for same-project conflicting writes
- failure and recovery tests for interrupted durable commit paths
- write-path regression tests on top of the runtime core for the commands currently used by Studio
- cleanup validation proving that inactive `doctor`, `build/pack`, and background reconcile implementation paths are no longer part of the active wave
- event ordering and terminal lifecycle tests through the Studio adapter path
- Studio smoke validation for:
  - asset listing
  - details loading
  - staged writes
  - relocation
  - refresh after packer-owned operations
- bootstrap validation that Studio initializes the container-owned typed event bus before packer-backed runtime use
## Risks and Rollback

- this program may expose that the current service boundaries are too filesystem-centric to preserve cleanly
- removing out-of-scope capabilities now may require later reintroduction work when those capabilities become necessary again
- external filesystem edits during runtime lifetime are not fully solved by this plan and must not be hidden as if they were
- if runtime-backed services prove too invasive, rollback should preserve the current stable service contracts while isolating the runtime work behind new internal packages until the migration is complete
## Affected Artifacts

- `docs/packer/decisions/**`
- `docs/packer/pull-requests/**`
- `docs/packer/decisions/Filesystem-First Operational Runtime and Reconcile Boundary Decision.md`
- `docs/packer/specs/1. Domain and Artifact Boundary Specification.md`
- `docs/packer/specs/2. Workspace, Registry, and Asset Identity Specification.md`
- `docs/packer/specs/3. Asset Declaration and Virtual Asset Contract Specification.md`
- `docs/packer/specs/5. Diagnostics, Operations, and Studio Integration Specification.md`
- `docs/packer/specs/6. Versioning, Migration, and Trust Model Specification.md`
- `docs/studio/specs/2. Studio UI Foundations Specification.md`
- `docs/studio/specs/4. Assets Workspace Specification.md`
- `prometeu-packer/src/main/java/p/packer/**`
- `prometeu-packer/src/test/java/p/packer/**`
- `prometeu-studio/**` integration adapter and smoke coverage
## Suggested Next Step

Do not start code execution directly from this plan.

The next correct step is to derive granular implementation PRs from `PR-11`, each scoped to one execution front of the runtime-restructure wave.
# PR-12 Cleanup and Unused Surface Removal Before the Runtime Service Wave

Domain Owner: `docs/packer`

Cross-Domain Impact: `docs/studio`
## Briefing

Before introducing the runtime service wave, the current packer code should be reduced and cleaned so the next PRs are not built on top of contradictory seams.

The current code still mixes:

- implicit concrete instantiation inside services;
- filesystem-per-call orchestration;
- service boundaries that do not match the desired runtime model;
- inactive or out-of-scope capabilities that are not part of the immediate Studio-driven service wave.

This PR creates the cleanup baseline.
## Objective

Remove unused or out-of-scope packer surfaces, align code with the current specs, and prepare a smaller active service boundary for the runtime implementation track.

## Dependencies

- [`./PR-11-packer-runtime-restructure-snapshot-authority-and-durable-commit.md`](./PR-11-packer-runtime-restructure-snapshot-authority-and-durable-commit.md)
- [`../decisions/Filesystem-First Operational Runtime and Reconcile Boundary Decision.md`](../decisions/Filesystem-First%20Operational%20Runtime%20and%20Reconcile%20Boundary%20Decision.md)
- [`../specs/2. Workspace, Registry, and Asset Identity Specification.md`](../specs/2.%20Workspace,%20Registry,%20and%20Asset%20Identity%20Specification.md)
- [`../specs/3. Asset Declaration and Virtual Asset Contract Specification.md`](../specs/3.%20Asset%20Declaration%20and%20Virtual%20Asset%20Contract%20Specification.md)
- [`../specs/5. Diagnostics, Operations, and Studio Integration Specification.md`](../specs/5.%20Diagnostics,%20Operations,%20and%20Studio%20Integration%20Specification.md)
## Scope

- align `asset.json` handling with the current spec baseline, including `asset_uuid`
- remove inactive `doctor`, `build/pack`, and reconcile-oriented implementation paths from the active runtime-service wave
- remove concrete default instantiation patterns that hide composition ownership
- simplify the active service surface to what the current Studio integration actually needs
- remove code that is not being used for the immediate service-only wave
- correct service contract seams that currently mix read-oriented and mutation-oriented responsibilities in contradictory ways
## Non-Goals

- no runtime snapshot yet
- no write lane yet
- no Studio adapter redesign yet
- no reintroduction of `doctor`, `build/pack`, or reconcile in this wave
## Execution Method

1. Align manifest/declaration code with the current spec baseline.
2. Remove inactive service paths that are not part of the current Studio-driven runtime wave.
3. Eliminate implicit composition where services instantiate concrete collaborators by default.
4. Correct active service contracts so the remaining surface matches the Studio-facing runtime plan.
5. Leave the repository with one smaller active surface that the runtime work can replace cleanly.
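Step 3 above can be illustrated as follows. `AssetStore`, `AssetQueryService`, and `FilesystemAssetStore` are hypothetical names invented here; the point is only the shape of the change — collaborators become explicit constructor arguments instead of hidden concrete defaults, so composition ownership moves to the caller and ultimately to the composition root.

```java
// Hypothetical before/after sketch of eliminating implicit composition.
interface AssetStore {
    String load(String assetId);
}

// Before (implicit composition, hidden concrete dependency):
//
//     final class AssetQueryService {
//         private final AssetStore store = new FilesystemAssetStore(); // hidden
//     }

// After: the collaborator is an explicit constructor argument.
final class AssetQueryService {
    private final AssetStore store;

    AssetQueryService(AssetStore store) {
        this.store = store; // ownership is visible at the composition site
    }

    String describe(String assetId) {
        return "asset " + assetId + " -> " + store.load(assetId);
    }
}

public class CompositionDemo {
    public static void main(String[] args) {
        // The caller decides which implementation is composed in.
        AssetQueryService service = new AssetQueryService(id -> "loaded");
        System.out.println(service.describe("a1"));
    }
}
```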
## Acceptance Criteria

- `asset_uuid` is no longer missing from the active declaration path
- inactive `doctor`, `build/pack`, and reconcile implementation paths are removed from the active wave
- concrete service composition is no longer hidden behind broad default constructors in the active path
- contradictory active service contracts are corrected before runtime work starts
- the remaining active surface is focused on the service capabilities the Studio currently needs
## Validation

- declaration/parser tests for the manifest baseline
- cleanup tests confirming out-of-scope service paths are no longer part of the active surface
- regression tests confirming the remaining active service surface still works
- smoke validation that Studio-facing packer usage still boots after the cleanup
## Affected Artifacts

- `prometeu-packer/src/main/java/p/packer/**`
- `prometeu-packer/src/test/java/p/packer/**`
- `prometeu-studio/**` only where the active service surface changes
|
||||||
# PR-13 Embedded Bootstrap, Container-Owned Event Bus, and Packer Composition Root

Domain Owner: `docs/packer`
Cross-Domain Impact: `docs/studio`

## Briefing

After the cleanup baseline, the next step is to make embedded Studio bootstrap explicit and introduce one composition root for the packer.

This is where the Studio `Container` becomes a contract plus global holder, while the concrete embedded boot and `prometeu-packer-v1` wiring move into the application layer.

## Objective

Deliver the embedded bootstrap contract, explicit `PackerEventSink` wiring, and an explicit `prometeu-packer-api` to `prometeu-packer-v1` composition root for Studio embedding.

## Dependencies

- [`./PR-12-cleanup-and-unused-surface-removal-before-runtime-service-wave.md`](./PR-12-cleanup-and-unused-surface-removal-before-runtime-service-wave.md)
- [`../specs/5. Diagnostics, Operations, and Studio Integration Specification.md`](../specs/5.%20Diagnostics,%20Operations,%20and%20Studio%20Integration%20Specification.md)
- cross-domain reference: [`../../studio/specs/2. Studio UI Foundations Specification.md`](../../studio/specs/2.%20Studio%20UI%20Foundations%20Specification.md)
## Scope

- define the embedded packer bootstrap contract
- define the packer composition root for the active service wave inside the application layer
- keep `prometeu-studio` bound only to `prometeu-packer-api`
- wire the Studio `Container` contract/holder as the owner of the shared typed event bus reference used by the host-side `PackerEventSink` bridge
- ensure application boot installs a `Container` implementation before packer-backed use begins
- make the active embedded runtime entrypoint explicit enough that future capabilities do not depend on hidden constructors or side boot paths
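The contract-plus-installed-holder idea can be sketched as follows. This is a minimal illustration only: `Container` and `install(...)` are named by this plan, while `EventBus`, the single `eventBus()` accessor, and the `AtomicReference` holder are assumptions about shape, not the real `prometeu-studio` types.

```java
// Sketch only: a Container contract whose concrete implementation is installed
// once at application boot, before any packer-backed work begins.
import java.util.Objects;
import java.util.concurrent.atomic.AtomicReference;

interface EventBus {
    void publish(Object event);
}

interface Container {
    EventBus eventBus();

    // Global holder: interface fields are implicitly static final, so the
    // AtomicReference itself is the single shared slot.
    AtomicReference<Container> INSTALLED = new AtomicReference<>();

    static void install(Container impl) {
        INSTALLED.set(Objects.requireNonNull(impl));
    }

    static Container current() {
        Container container = INSTALLED.get();
        if (container == null) {
            throw new IllegalStateException("Container.install(...) must run before packer-backed use");
        }
        return container;
    }
}
```

Under this shape, the application layer calls `Container.install(...)` during boot with its concrete implementation, and any read before installation fails loudly instead of silently composing defaults.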
## Non-Goals

- no runtime snapshot yet
- no read migration yet
- no write lane yet
- no alternate bootstrap retained for inactive `doctor`, `build/pack`, or reconcile paths
## Execution Method

1. Define the explicit packer bootstrap/composition entrypoint.
2. Make the host-provided `PackerEventSink` an explicit dependency for Studio embedding.
3. Refactor Studio `Container` into a contract plus installed global holder.
4. Move concrete packer wiring to the application layer that chooses `prometeu-packer-v1`.
5. Remove remaining ambiguity around packer-local versus container-owned event visibility by bridging `PackerEventSink` into the host bus at the application layer.
6. Remove remaining embedded bootstrap variants that only exist to keep inactive service surfaces alive.
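The bridging step can be sketched like this. `PackerEventSink` is named by this plan; its single-method shape, and the `HostEventBus` and `AppWiring` names, are illustrative assumptions.

```java
// Sketch of the host-side bridge: the packer publishes only through
// PackerEventSink, and the application layer adapts that sink onto the host bus,
// so packer contracts never expose host bus types.
interface PackerEventSink {
    void accept(Object packerEvent);
}

interface HostEventBus {
    void publish(Object event);
}

final class AppWiring {
    private AppWiring() {}

    // The packer composition root receives this sink explicitly; no noop()
    // default stands in for real wiring in production boot.
    static PackerEventSink bridgeInto(HostEventBus hostBus) {
        return hostBus::publish;
    }
}
```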
## Acceptance Criteria

- the active packer wave has one explicit composition root
- `prometeu-studio` depends only on `prometeu-packer-api`
- the application layer installs the `Container` implementation and chooses `prometeu-packer-v1`
- Studio `Container` owns the shared typed event bus reference through its installed implementation
- the packer composition root receives an explicit `PackerEventSink` rather than reaching directly into host event bus types
- packer-backed work starts only after `Container.install(...)`
- packer publication uses `PackerEventSink`, and the application layer bridges that sink into the container-owned path when embedded in Studio
- no public `PackerEventSink.noop()`-style default is treated as acceptable production bootstrap behavior
- hidden bootstrap paths that only support inactive service surfaces are removed

## Validation

- bootstrap tests for the packer composition root
- Studio boot tests for `Container.install(...)`
- integration tests for packer event visibility through the host bridge into the container-owned path

## Affected Artifacts

- `prometeu-packer/prometeu-packer-api/src/main/java/p/packer/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/services/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/events/**`
- `prometeu-studio/src/main/java/p/studio/Container.java`
- `prometeu-studio/src/main/java/p/studio/events/**`
- `prometeu-app/src/main/java/p/studio/App.java`
- `prometeu-app/src/main/java/p/studio/AppContainer.java`
# PR-14 Project Runtime Core, Snapshot Model, and Lifecycle

Domain Owner: `docs/packer`

## Briefing

With cleanup and bootstrap in place, the packer can now introduce the actual project runtime.

This PR adds the runtime boundary, snapshot state model, bootstrap/load behavior, and disposal lifecycle that later query and command services will use.

## Objective

Deliver the project-scoped runtime core and one coherent in-memory snapshot model for the active service wave.

## Dependencies

- [`./PR-12-cleanup-and-unused-surface-removal-before-runtime-service-wave.md`](./PR-12-cleanup-and-unused-surface-removal-before-runtime-service-wave.md)
- [`./PR-13-embedded-bootstrap-container-owned-event-bus-and-packer-composition-root.md`](./PR-13-embedded-bootstrap-container-owned-event-bus-and-packer-composition-root.md)
- [`../specs/2. Workspace, Registry, and Asset Identity Specification.md`](../specs/2.%20Workspace,%20Registry,%20and%20Asset%20Identity%20Specification.md)
- [`../specs/5. Diagnostics, Operations, and Studio Integration Specification.md`](../specs/5.%20Diagnostics,%20Operations,%20and%20Studio%20Integration%20Specification.md)

## Scope

- add the project runtime abstraction
- define snapshot content and generation ownership only for the active service wave
- define runtime bootstrap/load and disposal behavior
- isolate filesystem repositories behind the runtime boundary
- keep snapshot scope limited to the data needed by the active Studio-facing service path
- keep the runtime implementation inside `prometeu-packer-v1` while preserving the external contract in `prometeu-packer-api`
## Non-Goals

- no Studio adapter work yet
- no doctor/build/reconcile functionality
- no full query migration yet
- no write lane yet
- no speculative snapshot fields for capabilities that are not part of the active wave

## Execution Method

1. Introduce project runtime state/container types.
2. Load registry plus asset declarations into the runtime snapshot.
3. Define runtime generation, refresh, and disposal rules.
4. Keep the runtime state minimal and aligned with the currently used service surface.
5. Make later query/command work depend on this runtime boundary rather than direct filesystem scans.
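The steps above can be sketched as a minimal runtime boundary: one immutable snapshot with a generation number, a shared load/refresh path, and explicit disposal. `AssetEntry`, `Snapshot`, and their field sets are illustrative assumptions, not the real snapshot contract.

```java
// Minimal sketch of the project runtime: load publishes a fresh immutable
// snapshot and bumps the generation; close makes further use fail loudly.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

record AssetEntry(String assetUuid, String name, boolean includedInBuild) {}

record Snapshot(long generation, Map<String, AssetEntry> assetsByUuid) {}

final class ProjectRuntime implements AutoCloseable {
    private Snapshot current;
    private long generation;
    private boolean disposed;

    // Bootstrap/load and refresh share one path: read registry plus
    // declarations once, then serve queries from memory.
    synchronized void load(List<AssetEntry> entries) {
        ensureOpen();
        Map<String, AssetEntry> byUuid = new HashMap<>();
        for (AssetEntry entry : entries) {
            byUuid.put(entry.assetUuid(), entry);
        }
        current = new Snapshot(++generation, Map.copyOf(byUuid));
    }

    synchronized Snapshot snapshot() {
        ensureOpen();
        if (current == null) {
            throw new IllegalStateException("runtime not loaded");
        }
        return current;
    }

    // Disposal is explicit: a disposed runtime refuses further use.
    @Override
    public synchronized void close() {
        disposed = true;
        current = null;
    }

    private void ensureOpen() {
        if (disposed) {
            throw new IllegalStateException("runtime disposed");
        }
    }
}
```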
## Acceptance Criteria

- one coherent runtime exists per active project
- runtime bootstrap/load is explicit and testable
- runtime disposal is explicit
- filesystem repositories are isolated behind the runtime boundary
- runtime state is limited to what the active Studio service wave actually consumes

## Validation

- runtime bootstrap tests
- snapshot generation tests
- lifecycle tests for bootstrap, refresh, and disposal

## Affected Artifacts

- `prometeu-packer/prometeu-packer-api/src/main/java/p/packer/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/services/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/models/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/events/**`
- `prometeu-packer/prometeu-packer-v1/src/test/java/p/packer/services/**`
- `prometeu-packer/prometeu-packer-v1/src/test/java/p/packer/testing/**`
# PR-15 Snapshot-Backed Asset Query Services

Domain Owner: `docs/packer`
Cross-Domain Impact: `docs/studio`

## Briefing

The first functional runtime-backed service wave should focus on queries.

This PR moves the asset query surface used by Studio onto the runtime snapshot and defines coherent query behavior without expanding into doctor/build/reconcile.

## Objective

Deliver snapshot-backed query services for the currently used asset-management surface.

## Dependencies

- [`./PR-14-project-runtime-core-snapshot-model-and-lifecycle.md`](./PR-14-project-runtime-core-snapshot-model-and-lifecycle.md)
- [`../specs/2. Workspace, Registry, and Asset Identity Specification.md`](../specs/2.%20Workspace,%20Registry,%20and%20Asset%20Identity%20Specification.md)
- [`../specs/3. Asset Declaration and Virtual Asset Contract Specification.md`](../specs/3.%20Asset%20Declaration%20and%20Virtual%20Asset%20Contract%20Specification.md)
- cross-domain reference: [`../../studio/specs/4. Assets Workspace Specification.md`](../../studio/specs/4.%20Assets%20Workspace%20Specification.md)

## Scope

- migrate `init_workspace`
- migrate `list_assets`
- migrate `get_asset_details`
- keep the query path coherent through the runtime snapshot
- preserve the packer-owned summary/details contract used by Studio
- remove leftover query orchestration that only existed to feed inactive `doctor`, `build/pack`, or reconcile flows
- preserve the modular boundary where `prometeu-studio` consumes only `prometeu-packer-api`
## Non-Goals

- no command/write lane yet
- no mutation apply yet
- no doctor/build/reconcile

## Execution Method

1. Route the active query APIs through the runtime snapshot.
2. Preserve coherent results across repeated query use.
3. Remove repeated recomputation from the active query path.
4. Remove active query seams that only support deferred capabilities.
5. Keep Studio-facing response semantics stable.
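A snapshot-backed query path can be sketched like this: `list_assets` and `get_asset_details` answer from in-memory state rather than rescanning the workspace per request. The record shapes stand in for the packer-owned summary/details contract and are assumptions, not the real message types.

```java
// Sketch of snapshot-backed queries: pure reads over immutable state, so
// repeated calls stay coherent and never touch the filesystem.
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;

record AssetDetails(String assetUuid, String name, boolean includedInBuild) {}
record AssetSummary(String assetUuid, String name) {}

final class AssetQueryService {
    private final Map<String, AssetDetails> snapshot; // handed over by the runtime

    AssetQueryService(Map<String, AssetDetails> snapshot) {
        this.snapshot = Map.copyOf(snapshot);
    }

    // list_assets: stable ordering, stable results on the same snapshot.
    List<AssetSummary> listAssets() {
        return snapshot.values().stream()
                .map(details -> new AssetSummary(details.assetUuid(), details.name()))
                .sorted(Comparator.comparing(AssetSummary::name))
                .toList();
    }

    // get_asset_details: no filesystem access anywhere on the query path.
    Optional<AssetDetails> getAssetDetails(String assetUuid) {
        return Optional.ofNullable(snapshot.get(assetUuid));
    }
}
```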
## Acceptance Criteria

- active asset queries are served from the runtime snapshot
- normal query use no longer depends on full filesystem recomputation
- Studio-facing details/listing semantics remain stable
- no doctor/build/reconcile behavior is introduced by this PR
- unused query seams kept only for deferred capabilities are removed from the active path

## Validation

- regression tests for `init_workspace`
- regression tests for `list_assets`
- regression tests for `get_asset_details`
- Studio smoke validation for list/details loading

## Affected Artifacts

- `prometeu-packer/prometeu-packer-api/src/main/java/p/packer/**`
- `prometeu-packer/prometeu-packer-api/src/main/java/p/packer/messages/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/services/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/models/**`
- `prometeu-packer/prometeu-packer-v1/src/test/java/p/packer/services/**`
- `prometeu-packer/prometeu-packer-v1/src/test/java/p/packer/testing/**`
- `prometeu-studio/**` query adapter coverage
# PR-16 Write Lane, Command Completion, and Used Write Services

Domain Owner: `docs/packer`
Cross-Domain Impact: `docs/studio`

## Briefing

After queries are runtime-backed, the next step is the minimal command surface actually used by Studio.

This PR introduces the project write lane, synchronous command completion semantics, and the write services currently needed by Studio, without reintroducing doctor, build, or reconcile work.

## Objective

Deliver the write lane plus only the write/command surface currently exercised by the Studio `Assets` workspace on top of the runtime model.

## Dependencies

- [`./PR-14-project-runtime-core-snapshot-model-and-lifecycle.md`](./PR-14-project-runtime-core-snapshot-model-and-lifecycle.md)
- [`./PR-15-snapshot-backed-asset-query-services.md`](./PR-15-snapshot-backed-asset-query-services.md)
- [`../decisions/Concurrency, Observability, and Studio Adapter Boundary Decision.md`](../decisions/Concurrency,%20Observability,%20and%20Studio%20Adapter%20Boundary%20Decision.md)
- [`../specs/5. Diagnostics, Operations, and Studio Integration Specification.md`](../specs/5.%20Diagnostics,%20Operations,%20and%20Studio%20Integration%20Specification.md)

## Scope

- implement the project-scoped write lane
- define durable visibility after successful commit
- define request/response command semantics with optional `Future`-based completion
- implement only the write surface currently used by Studio `Assets`
- preserve causal lifecycle events for command execution
- reintroduce command/write support in `prometeu-packer-v1` without collapsing the `prometeu-packer-api` boundary
## Non-Goals

- no doctor
- no build/pack
- no background reconcile observer

## Execution Method

1. Add the runtime-backed write lane.
2. Define synchronous command response plus optional `Future` completion semantics.
3. Reintroduce only the currently used write services onto the runtime.
4. Remove command surfaces that remain out of scope for the active Studio service wave.
5. Preserve asynchronous lifecycle events as observability, not as the primary command contract.
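The serialization and completion semantics can be sketched with a single-threaded executor: same-project commands serialize naturally, each submission returns a `Future` for optional synchronous completion, and committed state only changes after a command succeeds. The map-valued "durable state" here is a stand-in for the real commit step; `WriteLane` and its method shapes are assumptions.

```java
// Sketch of the project write lane: one executor thread serializes commands,
// and the volatile swap marks the "visible only after successful durable
// commit" point.
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.UnaryOperator;

final class WriteLane implements AutoCloseable {
    private final ExecutorService lane = Executors.newSingleThreadExecutor();
    private volatile Map<String, String> committed = Map.of();

    // Each command observes the previously committed state; a command that
    // throws leaves the committed state untouched.
    Future<Map<String, String>> submit(UnaryOperator<Map<String, String>> command) {
        return lane.submit(() -> {
            Map<String, String> next = Map.copyOf(command.apply(committed));
            committed = next;
            return next;
        });
    }

    Map<String, String> committed() {
        return committed;
    }

    @Override
    public void close() {
        lane.shutdown();
    }
}
```

Callers that want synchronous semantics block on the returned `Future`; callers that prefer fire-and-observe rely on lifecycle events instead, keeping events observability rather than the primary contract.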
## Acceptance Criteria

- same-project commands are serialized by the packer
- committed state becomes visible only after successful durable commit
- synchronous command APIs may expose `Future` completion directly
- the write services currently used by Studio run on the runtime-backed path
- no doctor/build/reconcile behavior is introduced by this PR
- deferred command surfaces are not kept alive in the active implementation by placeholder adapters

## Validation

- write-lane concurrency tests
- commit failure/recovery tests
- command completion tests for response plus `Future`
- write regression tests for the active Studio write surface
- negative validation proving `doctor`, `build/pack`, and reconcile command paths are not part of the active wave

## Affected Artifacts

- `prometeu-packer/prometeu-packer-api/src/main/java/p/packer/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/services/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/models/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/events/**`
- `prometeu-packer/prometeu-packer-v1/src/test/java/p/packer/services/**`
- `prometeu-packer/prometeu-packer-v1/src/test/java/p/packer/testing/**`
- `prometeu-studio/**` command integration coverage
# PR-17 Studio Runtime Adapter and Assets Workspace Consumption

Domain Owner: `docs/packer`
Cross-Domain Impact: `docs/studio`

## Briefing

Once the active query and command surface is runtime-backed, Studio should consume that path as a frontend without recreating packer semantics.

This PR hardens the Studio adapters and the `Assets` workspace consumption path for the service-only wave while preserving the modular split between `prometeu-packer-api`, `prometeu-packer-v1`, `prometeu-studio`, and `prometeu-app`.

## Objective

Deliver the Studio-side adapter and `Assets` workspace integration for the active runtime-backed service surface.

## Dependencies

- [`./PR-13-embedded-bootstrap-container-owned-event-bus-and-packer-composition-root.md`](./PR-13-embedded-bootstrap-container-owned-event-bus-and-packer-composition-root.md)
- [`./PR-15-snapshot-backed-asset-query-services.md`](./PR-15-snapshot-backed-asset-query-services.md)
- [`./PR-16-write-lane-command-completion-and-used-write-services.md`](./PR-16-write-lane-command-completion-and-used-write-services.md)
- cross-domain reference: [`../../studio/specs/2. Studio UI Foundations Specification.md`](../../studio/specs/2.%20Studio%20UI%20Foundations%20Specification.md)
- cross-domain reference: [`../../studio/specs/4. Assets Workspace Specification.md`](../../studio/specs/4.%20Assets%20Workspace%20Specification.md)

## Scope

- adapt Studio to consume runtime-backed packer queries and commands
- preserve `request/response` as the primary integration model
- consume packer lifecycle events through the host bridge from `PackerEventSink` into the container-owned typed event bus path
- keep the `Assets` workspace aligned with the active service-only wave
- remove adapter branches that only exist for inactive `doctor`, `build/pack`, or reconcile usage
- keep `prometeu-studio` bound only to `prometeu-packer-api`
- let `prometeu-app` remain responsible for installing the concrete `Container` implementation, applying the `p.packer.Packer` entrypoint from `prometeu-packer-v1`, and bridging `PackerEventSink` into the host bus
## Non-Goals

- no doctor UI
- no pack/build UI
- no reconcile-state UI beyond what the current service wave actually exposes

## Execution Method

1. Update the Studio adapter layer to consume the runtime-backed service path.
2. Preserve translational mapping only.
3. Validate that `prometeu-studio` does not depend on `prometeu-packer-v1` classes directly.
4. Validate command submission plus event-driven lifecycle visibility through the host `PackerEventSink` bridge and shared bus path.
5. Remove adapter branches that only keep deferred capabilities artificially wired.
6. Keep the `Assets` workspace focused on the currently active service surface.
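"Translational mapping only" can be sketched as follows: the adapter calls a packer-api-style query contract and maps responses into a Studio view model, adding no packer semantics of its own. The interface and record names here are illustrative, not the real `prometeu-packer-api` surface.

```java
// Sketch of a translational Studio adapter: the only dependency points inward
// to the api contract, and the mapping adds no filtering or identity logic.
import java.util.List;

record PackerAssetSummary(String assetUuid, String name) {}

interface PackerAssetQueries { // stand-in for a prometeu-packer-api contract
    List<PackerAssetSummary> listAssets();
}

record AssetRow(String assetUuid, String label) {} // Studio-side view model

final class AssetsWorkspaceAdapter {
    private final PackerAssetQueries queries;

    AssetsWorkspaceAdapter(PackerAssetQueries queries) {
        this.queries = queries;
    }

    // Translational mapping only: one packer summary becomes one row.
    List<AssetRow> rows() {
        return queries.listAssets().stream()
                .map(summary -> new AssetRow(summary.assetUuid(), summary.name()))
                .toList();
    }
}
```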
## Acceptance Criteria

- Studio remains a consumer of packer runtime semantics
- `Assets` workspace list/details/actions run through the active runtime-backed service path
- command submission plus event observation are coherent end to end
- no inactive doctor/build/reconcile surfaces are reintroduced
- Studio adapters no longer preserve dead branches for deferred capability families
- `prometeu-studio` depends only on `prometeu-packer-api`
- `prometeu-app` is the layer that binds the concrete `Container` implementation and the `p.packer.Packer` entrypoint from `prometeu-packer-v1`
- Studio consumes packer lifecycle visibility through a host-provided `PackerEventSink` bridge rather than by exposing host bus types inside packer contracts

## Validation

- Studio adapter tests
- `Assets` workspace smoke tests
- end-to-end tests for list/details/write flows used by Studio

## Affected Artifacts

- `prometeu-studio/src/main/java/p/studio/**`
- `prometeu-studio/src/test/java/p/studio/**`
- `prometeu-app/src/main/java/p/studio/**`
- `prometeu-packer/prometeu-packer-api/src/main/java/p/packer/**` integration-facing contracts
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/services/**` embedded runtime implementation surfaces
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/events/**` embedded runtime event surfaces
# PR-18 Legacy Service Retirement and Regression Hardening

Domain Owner: `docs/packer`

## Briefing

After the active service-only wave is fully running through the runtime path, the repository should not keep duplicated legacy orchestration around as a competing semantic track.

This PR retires the superseded legacy paths and hardens regression coverage around the smaller active service surface.

It also closes the cleanup promise from `PR-12` by ensuring no unused packer capability families survive just because they existed before the runtime service wave.

## Objective

Remove superseded legacy service paths and strengthen regression protection for the runtime-backed service wave.

## Dependencies

- [`./PR-15-snapshot-backed-asset-query-services.md`](./PR-15-snapshot-backed-asset-query-services.md)
- [`./PR-16-write-lane-command-completion-and-used-write-services.md`](./PR-16-write-lane-command-completion-and-used-write-services.md)
- [`./PR-17-studio-runtime-adapter-and-assets-workspace-consumption.md`](./PR-17-studio-runtime-adapter-and-assets-workspace-consumption.md)

## Scope

- retire duplicated filesystem-per-call paths superseded by the active runtime-backed service wave
- remove temporary shims that were tolerated only during the migration window
- harden regression coverage around the remaining active service surface
- remove leftover inactive `doctor`, `build/pack`, and reconcile code that no longer belongs to the service-only wave
- preserve the `prometeu-packer-api` surface as the stable consumer contract while retiring legacy implementation paths in `prometeu-packer-v1`

## Non-Goals

- no doctor reintroduction
- no build/pack reintroduction
- no reconcile observer work
- no new architecture decisions

## Execution Method

1. Remove superseded legacy paths.
2. Remove temporary migration shims once the runtime-backed path is complete.
3. Simplify the active service composition around the runtime boundary.
4. Strengthen regression coverage around the remaining service wave.
5. Verify no split-brain semantics remain between active and legacy paths.

## Acceptance Criteria

- superseded legacy paths are removed
- the active runtime-backed service wave is the only semantic path for the currently used functionality
- regression coverage protects the reduced active surface
- no inactive capability family survives in code solely as speculative future support

## Validation

- full active-service regression suite
- Studio embedding regression suite
- targeted tests proving no disagreement remains between active and legacy paths

## Affected Artifacts

- `prometeu-packer/prometeu-packer-api/src/main/java/p/packer/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/services/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/models/**`
- `prometeu-packer/prometeu-packer-v1/src/main/java/p/packer/events/**`
- `prometeu-packer/prometeu-packer-v1/src/test/java/p/packer/services/**`
- `prometeu-packer/prometeu-packer-v1/src/test/java/p/packer/testing/**`
- integration fixtures
The current production track for the standalone `prometeu-packer` project is:

8. [`PR-08-assets-pa-and-companion-artifact-emission.md`](./PR-08-assets-pa-and-companion-artifact-emission.md)
9. [`PR-09-event-lane-progress-and-studio-operational-integration.md`](./PR-09-event-lane-progress-and-studio-operational-integration.md)
10. [`PR-10-versioning-migration-trust-and-production-gates.md`](./PR-10-versioning-migration-trust-and-production-gates.md)
11. [`PR-11-packer-runtime-restructure-snapshot-authority-and-durable-commit.md`](./PR-11-packer-runtime-restructure-snapshot-authority-and-durable-commit.md)
12. [`PR-12-cleanup-and-unused-surface-removal-before-runtime-service-wave.md`](./PR-12-cleanup-and-unused-surface-removal-before-runtime-service-wave.md)
13. [`PR-13-embedded-bootstrap-container-owned-event-bus-and-packer-composition-root.md`](./PR-13-embedded-bootstrap-container-owned-event-bus-and-packer-composition-root.md)
14. [`PR-14-project-runtime-core-snapshot-model-and-lifecycle.md`](./PR-14-project-runtime-core-snapshot-model-and-lifecycle.md)
15. [`PR-15-snapshot-backed-asset-query-services.md`](./PR-15-snapshot-backed-asset-query-services.md)
16. [`PR-16-write-lane-command-completion-and-used-write-services.md`](./PR-16-write-lane-command-completion-and-used-write-services.md)
17. [`PR-17-studio-runtime-adapter-and-assets-workspace-consumption.md`](./PR-17-studio-runtime-adapter-and-assets-workspace-consumption.md)
18. [`PR-18-legacy-service-retirement-and-regression-hardening.md`](./PR-18-legacy-service-retirement-and-regression-hardening.md)

Current wave discipline from `PR-11` onward:

- cleanup and active-surface reduction happen before runtime implementation;
- the wave is service-first and Studio-driven;
- `doctor`, `build/pack`, and background reconcile are explicitly deferred;
- code that is not used by the active service wave should be removed instead of preserved speculatively.

Recommended dependency chain:

`PR-01 -> PR-02 -> PR-03 -> PR-04 -> PR-05 -> PR-06 -> PR-07 -> PR-08 -> PR-09 -> PR-10 -> PR-11 -> PR-12 -> PR-13 -> PR-14 -> PR-15 -> PR-16 -> PR-17 -> PR-18`
2. One asset root contains exactly one anchor `asset.json`.
3. `assets/.prometeu/index.json` is the authoritative registry of registered assets.
4. `asset.json` is the authoritative asset-local declaration.
5. `asset.json` must carry the stable `asset_uuid` identity anchor for that asset root.
6. An asset root absent from the registry is `unregistered`.
7. `unregistered` assets are always `excluded` from build participation.
8. Registered assets may be `included` or `excluded` from build participation without losing identity.
9. The baseline build set includes registered assets whose registry entry is marked as build-included.

## Identity Model

Each registered asset has:

- `asset_uuid`: stable long-lived identity for migration/tooling scenarios
- `included_in_build`: build participation flag persisted in the registry

Identity authority is intentionally split:

- `asset_uuid` is anchored locally in `asset.json`;
- `asset_id` is allocated and persisted by the registry;
- registry-managed location and build participation remain catalog concerns in `index.json`.

The following are not primary identity:

- `asset_name`

`asset_name` may still be used by authoring and runtime-facing APIs as a logical reference label.

## Local Identity Anchor

`asset_uuid` is the stable asset-local identity anchor.

Rules:

- `asset_uuid` must be present in `asset.json`;
- `asset_uuid` is stable across relocate and rename flows;
- `asset_uuid` is not allocated from path shape or `asset_name`;
- `asset_uuid` allows the packer to reconcile manual workspace changes with the registry/catalog model;
- `asset_uuid` does not replace registry authority over `asset_id`, build participation, or managed root tracking.
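As a concrete illustration, an `asset.json` anchor carrying the local identity could look like the fragment below. Only the required field names (`schema_version`, `asset_uuid`, `name`, `type`, `inputs`) come from the declaration contract in this spec set; every value and value shape shown here is an illustrative assumption.

```json
{
  "schema_version": "1",
  "asset_uuid": "0b2f6c1e-6a6f-4d3a-9d3e-2f9c0a1b7e42",
  "name": "hero_portrait",
  "type": "texture",
  "inputs": []
}
```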
## Relocation and Rename
|
## Relocation and Rename
|
||||||
|
|
||||||
Moving or renaming an asset root does not change identity.
|
Moving or renaming an asset root does not change identity.
|
||||||
Rules:

- it is excluded from the build automatically;
- it is diagnosable;
- it becomes registered only through explicit flow;
- its local `asset_uuid` still matters for structural validation and future reconcile behavior.

## Build Participation

Examples:

- duplicate or ambiguous anchors under registered expectations;
- manual copy that creates identity collision;
- registered root missing anchor;
- duplicate `asset_uuid` across different asset roots;
- registry/catalog location that no longer matches the asset root carrying the expected `asset_uuid`;
- `asset.json` missing or malformed in a root expected to preserve identity.

## Reconcile Expectations

The packer may need to reconcile registry/catalog state against the authoring workspace.

Rules:

- manual move or rename of an asset root must not imply identity loss by itself;
- reconcile should prefer `asset_uuid` when matching a durable asset identity across changed paths;
- path drift must not silently rebind one registered asset to another distinct `asset_uuid`;
- unresolved identity drift remains diagnosable until explicit repair or successful reconcile.

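As an illustration of the uuid-first matching rule, the sketch below matches a registry entry to a workspace root by `asset_uuid` and treats a path mismatch as drift rather than identity loss. The `RegistryEntry` and `WorkspaceAsset` shapes are hypothetical stand-ins, not the real packer types:

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical reconcile helper: matches a registered asset to its workspace
// root by asset_uuid first, so a manual move/rename does not lose identity.
final class ReconcileSketch {

    record RegistryEntry(String assetId, String assetUuid, String lastKnownRoot) {}
    record WorkspaceAsset(String assetUuid, String root) {}

    // workspaceByUuid indexes every asset.json found under assets/ by its asset_uuid.
    static Optional<WorkspaceAsset> resolve(RegistryEntry entry,
                                            Map<String, WorkspaceAsset> workspaceByUuid) {
        WorkspaceAsset match = workspaceByUuid.get(entry.assetUuid());
        // No uuid match: the asset is missing or its anchor is gone -> diagnosable
        // drift; the entry is never silently rebound to a different asset_uuid.
        return Optional.ofNullable(match);
    }

    // A changed root with the same asset_uuid is path drift, not a new asset.
    static boolean hasPathDrift(RegistryEntry entry, WorkspaceAsset match) {
        return !entry.lastKnownRoot().equals(match.root());
    }
}
```

Note how an absent uuid match surfaces as an empty result rather than a fallback guess, which keeps the "no silent rebind" rule mechanical.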
## Non-Goals

The common `asset.json` contract requires these top-level fields:

- `schema_version`
- `asset_uuid`
- `name`
- `type`
- `inputs`

The common contract may also include:

- `build`

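For illustration only, a minimal `asset.json` satisfying the required fields above might look like this. Every value, and the exact shape of `inputs` and `build`, is a hypothetical example, not part of the contract:

```json
{
  "schema_version": 1,
  "asset_uuid": "0b6d1f4e-8f0a-4c1d-9a2e-3f5b7c9d1e2f",
  "name": "hero_portrait",
  "type": "texture",
  "inputs": ["hero_portrait.png"],
  "build": {}
}
```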
## Meaning of `asset_uuid`

`asset_uuid` is the stable asset-local identity anchor.

Rules:

- it is required;
- it must be stable across relocate and rename flows;
- it is not the project-local runtime artifact identity;
- it is not a substitute for registry-owned `asset_id`;
- it exists so the asset root can preserve identity even when path-based assumptions drift;
- it must remain compatible with packer reconcile behavior and migration flows.

## Meaning of `name`

`name` is the logical asset reference label.

Rules:

- if a parameter affects the runtime-facing output contract, it belongs in `output.metadata`;
- `build` must not hide runtime-relevant semantics.

## Operational State Exclusion

`asset.json` is a declaration artifact, not a catalog cache.

Rules:

- transient UI state must not be stored in `asset.json`;
- registry-managed fields such as `asset_id` and `included_in_build` must not be duplicated into `asset.json`;
- packer cache or snapshot bookkeeping must not be materialized into `asset.json` as normal operational state;
- `asset.json` should remain focused on identity anchoring plus declared authoring/packing behavior.

## Preload

Each registered asset must declare preload intent explicitly.

The normative operational surface is service-based.

The packer is a filesystem-first operational runtime.

Rules:

- the active project may maintain a coherent in-memory operational snapshot;
- normal read requests should be served from that coherent snapshot when the runtime is active;
- the runtime snapshot is an operational projection of packer-owned workspace artifacts, not a replacement authoring store;
- the durable authoring workspace remains the persisted source of truth after successful commit.

### Embedded Bootstrap Rule

When the packer is embedded inside another host such as Studio, the host must bootstrap the packer integration explicitly.

Rules:

- the embedded packer runtime must receive a typed event bus reference from the host when shared host visibility is part of the integration contract;
- that reference is the baseline path for packer publish/subscribe integration in the embedded host;
- the embedded packer runtime must not quietly create a disconnected parallel event system when host integration expects shared operational visibility;
- in Studio, this shared typed event bus reference is owned by the Studio `Container`.

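A minimal sketch of this bootstrap handoff, with hypothetical `TypedEventBus` and `PackerRuntime` names standing in for the real integration types:

```java
// Hypothetical bootstrap wiring: the host hands its typed event bus reference
// to the embedded packer runtime instead of letting it allocate its own.
final class BootstrapSketch {

    interface TypedEventBus { void publish(Object event); }

    static final class PackerRuntime {
        final TypedEventBus bus;

        PackerRuntime(TypedEventBus hostBus) {
            // Shared reference: packer events stay visible to the host,
            // never a disconnected parallel event system.
            this.bus = hostBus;
        }
    }

    static PackerRuntime bootstrap(TypedEventBus hostOwnedBus) {
        return new PackerRuntime(hostOwnedBus);
    }
}
```

The design choice is that sharing is expressed by reference identity: the packer holds the very bus object the host owns, so publication and subscription cannot drift apart.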
Baseline core services:

- `init_workspace`

This distinction is part of the service semantics and must be visible to the UI.

## Runtime State and Visibility

The packer runtime must make operational freshness explicit.

At minimum, the model should support states equivalent to:

- `healthy`
- `stale`
- `diverged`
- `committing`
- `reconciling`
- `failed`

Rules:

- successful durable commit defines the visibility boundary for committed write state;
- reads must not present torn in-progress writes as committed truth;
- background divergence detection may move a project or asset view into stale/diverged/reconciling state;
- Studio may render these states, but must not invent their semantics.

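A minimal sketch of such a state model, assuming a plain enum is enough (the real packer may track freshness per project and per asset view):

```java
// Hypothetical sketch of the freshness states named above; the real packer
// state model is not defined here.
enum RuntimeState {
    HEALTHY, STALE, DIVERGED, COMMITTING, RECONCILING, FAILED;

    // Reads served from the snapshot are only authoritative while healthy.
    boolean snapshotReadable() {
        return this == HEALTHY;
    }

    // States that signal the snapshot may no longer match the workspace.
    boolean needsReconcile() {
        return this == STALE || this == DIVERGED;
    }
}
```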
## Concurrency Model

The packer concurrency model is conservative and project-scoped.

Rules:

- build/write on the same project is serialized unless a future spec introduces an explicit transactional coordination model;
- background observation may continue while a serialized write lane is active, but it must not publish misleading post-state before commit visibility.

### Write Ownership

- writes on one project execute through a packer-owned write lane;
- caller timing must not define final write interleaving semantics;
- preview generation may occur before the final commit section when safe;
- apply/commit remains the packer-owned visibility boundary.

## Preview/Apply Model

Sensitive mutations use staged intent.

At minimum, the model should support fields equivalent to:

- `status`
- `summary`
- `runtime_state`
- `affected_assets`
- `diagnostics`
- `proposed_actions`

Responsibilities:

- report diagnostics/build/cache activity to the UI;
- support live refresh and progress reporting;
- avoid blocking the main UI thread;
- surface operational freshness and reconcile transitions when they affect Studio-visible state.

### Initial Event Set

The initial structured event set includes:

- `action_failed`
- `progress_updated`

Additional runtime/reconcile state events may exist as the operational runtime evolves, but adapters must preserve their causal meaning instead of collapsing them into generic refresh noise.

### Mutation Serialization

Mutating asset workflows are serialized semantically.

Rules:

- preview generation may run outside the final commit section, but apply/commit remains serialized;
- build and sensitive mutation apply must not interleave on the same project in a way that obscures final state;
- cancellation or failure must leave the observable project state in a coherent post-operation condition;
- divergence detection must not silently rewrite user-authored workspace content as if it were a normal mutation apply.

### Event Envelope

Rules:

- `operation_id` is stable for the full lifecycle of one logical operation;
- `sequence` is monotonic within one operation;
- adapters may remap event shapes for UI consumption, but must not invent causal relationships not present in packer events;
- embedded hosts should preserve the same typed event bus reference across packer bootstrap and adapter wiring so subscription and publication stay causally coherent.

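A hedged sketch of an envelope carrying these correlation fields (the field set is illustrative; only the `operation_id` and `sequence` semantics come from the rules above):

```java
// Hypothetical envelope shape; field names beyond operationId and sequence
// are illustrative stand-ins for the real packer event payload.
record PackerEvent(String operationId, long sequence, String type) {

    // sequence must be monotonic within one operation_id.
    static boolean inOrder(PackerEvent previous, PackerEvent next) {
        return previous.operationId().equals(next.operationId())
                && next.sequence() > previous.sequence();
    }
}
```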
### Event Ordering and Coalescing

Rules:

- migration may be automatic within the supported window;
- unsupported versions fail clearly and early;
- migration failures must be diagnosable in Studio and CI;
- migration may need to preserve or repair identity-bearing fields such as `asset_uuid` without fabricating a different asset identity silently.

## Runtime Compatibility Boundary

Untrusted until validated:

- hand-edited or legacy declarations;
- imported external project content;
- legacy packer artifacts and control data;
- runtime snapshot observations that have not yet been reconciled against changed workspace files.

Trusted only after:

- structural validation;
- semantic validation;
- version compatibility check;
- reconcile or refresh when divergence between snapshot and filesystem has been detected.

## Plugin and Script Execution

At minimum, diagnostics should make clear:

- the version found;
- the supported range or expectation;
- whether migration was attempted;
- whether identity-bearing fields or registry/catalog alignment were implicated;
- whether manual action is required;
- whether the failure blocks build or Studio workflow.

@ -0,0 +1,117 @@
|
|||||||
|
# PR-08 Assets Workspace Panel Package Boundaries and Local Subscriptions
|
||||||
|
|
||||||
|
Domain owner: `docs/studio`
|
||||||
|
|
||||||
|
## Briefing
|
||||||
|
|
||||||
The current `Assets` workspace still keeps too much control logic concentrated in top-level workspace-area classes.

That leaves package boundaries flatter than they should be and still weakens the intended Studio model:

- every panel should own its own workspace-bus subscriptions;
- `AssetWorkspace` should stay as composition root and orchestration layer;
- the asset list should live in its own package area;
- the full right-hand details side should be split into package-owned panels with direct lifecycle-managed subscriptions.

This refactor is a structural follow-up to the `PR-07` family.

It does not redefine the event-driven direction; it completes it by enforcing package topology and subscription ownership at the panel level.

## Objective

Reorganize the `Assets` workspace into explicit package areas so that every list or details panel consumes the workspace event bus directly and subscribes only to the state it needs.

After this PR:

- `AssetWorkspace` composes package-scoped controls instead of hosting panel logic directly;
- all asset-list controls live under an `asset list` package area;
- the right-hand details side is organized under `details/...` package areas;
- `summary`, `actions`, `contract`, `preview`, and `diagnostics` each manage their own subscriptions;
- the package layout itself teaches the correct Studio workspace architecture.

## Dependencies

- [`./PR-07a-assets-event-topology-and-lifecycle-foundation.md`](./PR-07a-assets-event-topology-and-lifecycle-foundation.md)
- [`./PR-07b-asset-navigator-and-row-subscriptions.md`](./PR-07b-asset-navigator-and-row-subscriptions.md)
- [`./PR-07c-asset-details-and-form-lifecycle.md`](./PR-07c-asset-details-and-form-lifecycle.md)
- [`./PR-07e-assets-refactor-cleanup-and-regression-coverage.md`](./PR-07e-assets-refactor-cleanup-and-regression-coverage.md)
- [`../specs/4. Assets Workspace Specification.md`](../specs/4.%20Assets%20Workspace%20Specification.md)

## Scope

- create an `asset list` package area under the `Assets` workspace implementation
- move the asset-list host and asset-list item control into that package area
- require both asset-list host and asset-list item to receive `StudioWorkspaceEventBus`
- require both asset-list host and asset-list item to own their own lifecycle-managed subscriptions
- create a `details` package area for the full right-hand side of the workspace
- split details internals into package-owned subareas such as:
  - `details/summary`
  - `details/actions`
  - `details/contract`
  - `details/preview`
  - `details/diagnostics`
- require each details panel to subscribe directly to the event stream it consumes
- reduce coordinator-style pass-through logic where a child panel can consume the workspace bus directly
- keep shared details support code only where it removes real duplication without re-centralizing subscriptions

## Non-Goals

- no new mutation semantics
- no new global event-bus abstraction
- no visual redesign of the workspace
- no cross-workspace extraction unless a primitive is already justified by this refactor
- no return to top-level refresh orchestration as the normal update model

## Execution Method

1. Define the target package topology.

   The package tree should reflect workspace areas, not arbitrary implementation convenience.

2. Move asset-list code into a dedicated package area.

   The list host and list item should be colocated and should consume the workspace bus directly.

3. Normalize asset-list subscriptions.

   The asset-list host should subscribe to list-level projection state.
   The asset-list item should subscribe to item-local concerns such as selection and asset patch events.

4. Move the full right-hand details side into a dedicated `details` package area.

   The top-level details host should stay thin and should mount panel controls by workspace area.

5. Split details panels by concern.

   `summary`, `actions`, `contract`, `preview`, and `diagnostics` should each live in package-owned subareas and subscribe for themselves.

6. Remove parent-owned update routing where it is only forwarding state to children.

   If a child panel can subscribe to the workspace bus safely, it should do so directly.

7. Re-check constructor contracts.

   Every event-consuming panel should receive the `StudioWorkspaceEventBus` explicitly, plus only the interaction ports it truly needs.

8. Clean naming and file layout.

   Class names, package names, and placement should make the `Assets` workspace structure obvious to a new maintainer.

## Acceptance Criteria

- there is a dedicated package area for the asset list inside the `Assets` workspace
- asset-list host and asset-list item both receive `StudioWorkspaceEventBus`
- asset-list host and asset-list item both subscribe directly to the events they need
- there is a dedicated `details` package area for the right-hand workspace side
- `summary`, `actions`, `contract`, `preview`, and `diagnostics` each live in their own package-owned area
- each details panel owns its own lifecycle-managed subscriptions
- `AssetWorkspace` no longer acts as the effective subscriber for panel-internal state
- package structure and constructor boundaries make the lifecycle model explicit

## Validation

- unit tests for lifecycle subscribe/unsubscribe on moved controls
- unit tests or focused integration tests proving list item and details panels react from their own subscriptions
- regression validation that asset selection and local patch flows still update without workspace-wide refresh
- package-level review that no event-consuming panel is left without direct bus access

## Affected Artifacts

- `prometeu-studio/src/main/java/p/studio/workspaces/assets/AssetWorkspace.java`
- `prometeu-studio/src/main/java/p/studio/workspaces/assets/...` moved into list/details package areas
- asset-list controls under a dedicated list package
- details controls under `details/...` packages
- tests for workspace lifecycle and subscription ownership
- `docs/studio/specs/4. Assets Workspace Specification.md` if package/lifecycle wording needs tightening after the refactor

# PR-09 Asset Move Action and Relocate Wizard

Domain owner: `docs/studio`

## Briefing

Add a `Move` action to the selected asset's actions area and connect it to an explicit relocation wizard.

The user must choose the asset's destination and review a summary before execution.
After confirmation, Studio does not move directories locally.
It only sends a relocation command to the packer, which becomes the full owner of the change:

- it updates the required internal state;
- it moves the asset directory inside the project's `assets` tree;
- it emits operational events until completion.

This plan also closes an operational rule that must exist on both ends:

- the asset's final destination directory must not already be an asset root;
- therefore, the destination root must not contain `asset.json`.

After the summary is confirmed, the modal must enter a waiting state with a spinner, listening for the packer's operational event.
When the operation finishes:

- success: Studio triggers a refresh, closes the modal, and repositions the selection;
- failure: the modal leaves the waiting state and shows the failure without closing silently.

## Objective

Deliver a `Move` flow that is explicit, predictable, and compatible with the model in which the packer executes the real mutation and Studio only commands and observes.

After this PR:

- the selected asset's `Actions` section exposes `Move`;
- clicking `Move` opens a dedicated wizard;
- the wizard collects the destination parent and the final directory name;
- the wizard shows a final summary step before the final command;
- Studio does not accept a destination whose root already contains `asset.json`;
- the packer also rejects that destination as a safety and conformance rule;
- the final confirmation sends a relocation command to the packer via API;
- the modal waits with a spinner until it receives the operation's terminal event;
- the structural refresh only happens after the packer's completion event.

## Dependencies

- [`./PR-05e-assets-staged-mutations-preview-and-apply.md`](./PR-05e-assets-staged-mutations-preview-and-apply.md)
- [`./PR-07c-asset-details-and-form-lifecycle.md`](./PR-07c-asset-details-and-form-lifecycle.md)
- [`./PR-07d-asset-mutation-and-structural-sync-orchestration.md`](./PR-07d-asset-mutation-and-structural-sync-orchestration.md)
- [`../specs/4. Assets Workspace Specification.md`](../specs/4.%20Assets%20Workspace%20Specification.md)
- cross-domain reference: [`../../packer/pull-requests/PR-05-sensitive-mutations-preview-apply-and-studio-write-adapter.md`](../../packer/pull-requests/PR-05-sensitive-mutations-preview-apply-and-studio-write-adapter.md)
- cross-domain reference: [`../../packer/pull-requests/PR-09-event-lane-progress-and-studio-operational-integration.md`](../../packer/pull-requests/PR-09-event-lane-progress-and-studio-operational-integration.md)

## Scope

- add the `Move` action to the details `Actions` section
- introduce a `Relocate Wizard` that is actually usable from the currently selected asset
- collect the destination as:
  - parent directory
  - destination name
  - derived target root
- show a final summary step before confirmation
- send a `RELOCATE_ASSET` command with an explicit `targetRoot` to the packer
- open a waiting modal state with a spinner after confirmation
- correlate the operation via `operationId`
- listen to the packer's operational event until `ACTION_APPLIED` or `ACTION_FAILED`
- publish an explicit structural sync only after successful completion
- validate in Studio that `targetRoot/asset.json` does not exist
- validate in the packer that `targetRoot/asset.json` does not exist, even if Studio fails to block it first

## Non-Goals

- no redesign of the entire mutation preview panel
- no inline rename outside the wizard
- no batch move
- no redefinition of asset identity semantics
- no fallback to an automatic target when the user initiated `Move`
- no direct directory moves by Studio outside the command to the packer

## Execution Method

1. Expose `Move` in the `Actions` section.

   The button must exist only when an asset is selected and must open the wizard from the current `AssetReference`.

2. Implement the relocation wizard as an explicit destination flow.

   The wizard should reuse the existing `relocateWizard` language and collect:

   - the current root
   - the destination parent directory
   - the final directory name
   - the computed target root

3. Add local destination validation.

   The wizard must block advancing/confirmation when:

   - the target root equals the current root;
   - the target root leaves the expected valid area;
   - the target root already contains `asset.json`.

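The local destination checks can be sketched as a small, side-effect-free predicate. This is an assumption-level illustration: `RelocateTargetValidation` and its parameters are hypothetical names, and the caller is assumed to report whether `targetRoot/asset.json` exists:

```java
import java.nio.file.Path;

// Hypothetical sketch of the wizard's local destination checks; the real
// Studio validation types are not defined here. The caller reports whether
// targetRoot/asset.json exists so the rule itself stays side-effect free.
final class RelocateTargetValidation {

    static boolean isValidTarget(Path currentRoot, Path assetsArea, Path targetRoot,
                                 boolean targetHasAssetJson) {
        if (targetRoot.normalize().equals(currentRoot.normalize())) {
            return false; // target equals the current root
        }
        if (!targetRoot.normalize().startsWith(assetsArea.normalize())) {
            return false; // target leaves the expected valid area
        }
        // the destination root must not already be an asset root
        return !targetHasAssetJson;
    }
}
```

The same predicate shape would apply on the packer side, since the rule must hold on both ends.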
4. Add a final summary step.

   Before the final command, the user must see a summary of:

   - the current asset
   - the current root
   - the destination parent
   - the final name
   - the resulting target root

5. Integrate the wizard into the packer's asynchronous command flow.

   The wizard output must become `PackerMutationRequest(RELOCATE_ASSET, ..., targetRoot)`, without using an automatic target.
   After the final `OK`, Studio starts the operation, captures the returned `operationId`, and puts the modal into the waiting state.

6. Wait for completion via events, not via blind refresh.

   The modal must listen for the `StudioPackerOperationEvent` correlated by `operationId`.
   Rules:

   - `ACTION_APPLIED`: trigger a structural refresh, close the modal, and reselect the relocated asset;
   - `ACTION_FAILED`: leave the spinner, keep the modal open, and show the failure;
   - intermediate events such as `PROGRESS_UPDATED` may update the visual message/state but do not close the modal.

7. Harden the rule in the packer.

   Relocation preview/apply must refuse a destination whose root already contains `asset.json`, producing a clear and stable blocker.

8. Orchestrate post-completion as a structural change.

   A successful relocation must trigger an explicit structural sync and reposition the selection to the moved asset in its new root.

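The wait-for-completion behavior reduces to a small correlation state machine. As a hedged sketch (the modal states, the `onEvent` helper, and the simplified event shape are hypothetical; the real `StudioPackerOperationEvent` is richer):

```java
// Hypothetical sketch of the modal's reaction to correlated operation events;
// the modal states and event shape are simplified stand-ins.
final class MoveModalSketch {

    enum ModalState { WAITING, CLOSED, SHOWING_FAILURE }

    record OperationEvent(String operationId, String type) {}

    static ModalState onEvent(String awaitedOperationId, ModalState state, OperationEvent event) {
        if (state != ModalState.WAITING || !event.operationId().equals(awaitedOperationId)) {
            return state; // ignore events from unrelated operations
        }
        return switch (event.type()) {
            case "ACTION_APPLIED" -> ModalState.CLOSED;          // refresh + reselect happen here
            case "ACTION_FAILED" -> ModalState.SHOWING_FAILURE;  // never close silently
            default -> ModalState.WAITING;                       // e.g. PROGRESS_UPDATED
        };
    }
}
```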
## Acceptance Criteria

- `Move` appears in the selected asset's `Actions` section
- clicking `Move` opens a dedicated wizard instead of an immediate mutation
- the wizard requires an explicit destination chosen by the user
- there is a final summary step before completion
- an existing `targetRoot/asset.json` blocks the flow already in Studio
- the packer also blocks `targetRoot/asset.json` as a safety rule
- the final `OK` sends the command to the packer and puts the modal into a waiting state with a spinner
- the modal only closes after `ACTION_APPLIED` for the same operation
- an operational failure does not close the modal silently
- an operational success runs an explicit structural sync
- the final selection points to the already-moved asset, not to the old root

## Validation

- unit tests for target root validation in the wizard
- unit tests for deriving `targetRoot` from parent + destination name
- unit tests for `operationId -> modal state` correlation
- unit tests for success/failure driven by `StudioPackerOperationEvent`
- unit tests for relocation publication/orchestration as structural sync
- packer tests for the blocker when `targetRoot/asset.json` already exists
- a UI smoke test covering:
  - opening `Move`
  - editing the destination
  - seeing the final summary
  - blocking an invalid destination
  - confirming the operation
  - seeing the waiting spinner
  - closing on the success event
  - staying open on the failure event

## Affected Artifacts

- `docs/studio/specs/4. Assets Workspace Specification.md` if the invalid-destination rule needs to be made more explicit
- `prometeu-studio/src/main/java/p/studio/workspaces/assets/details/actions/...`
- `prometeu-studio/src/main/java/p/studio/workspaces/assets/wizards/...`
- `prometeu-studio/src/main/java/p/studio/workspaces/assets/...` mutation orchestration wiring
- `prometeu-studio/src/main/java/p/studio/events/...`
- `prometeu-studio/src/main/resources/i18n/messages.properties`
- `prometeu-packer/src/main/java/p/packer/mutations/...`
- Studio tests for the wizard/flow
- packer tests for relocation target validation

The current Studio execution queue is:

8. [`PR-07c-asset-details-and-form-lifecycle.md`](./PR-07c-asset-details-and-form-lifecycle.md)
9. [`PR-07d-asset-mutation-and-structural-sync-orchestration.md`](./PR-07d-asset-mutation-and-structural-sync-orchestration.md)
10. [`PR-07e-assets-refactor-cleanup-and-regression-coverage.md`](./PR-07e-assets-refactor-cleanup-and-regression-coverage.md)
11. [`PR-08-assets-workspace-panel-package-boundaries-and-local-subscriptions.md`](./PR-08-assets-workspace-panel-package-boundaries-and-local-subscriptions.md)
12. [`PR-06-project-scoped-studio-state-and-activity-persistence.md`](./PR-06-project-scoped-studio-state-and-activity-persistence.md)

The `PR-07` family is a corrective refactor pass for the current `Assets` implementation.
It exists to replace the refresh-heavy direction with lifecycle-managed, event-driven ownership.
It also establishes the intended Studio-wide workspace framework, with `Assets` as the first consumer and proof point.

`PR-08` is the package-topology and subscription-ownership follow-up for that same direction.
It enforces list/details package boundaries and requires every panel to consume the workspace bus directly.

Recommended execution order:

`PR-05a -> PR-05b -> PR-05c -> PR-05d -> PR-05e -> PR-07a -> PR-07b -> PR-07c -> PR-07d -> PR-07e -> PR-08 -> PR-06`

@ -73,6 +73,14 @@ Propagation rules:
 - propagation is workspace to global;
 - global-to-workspace rebroadcast is not baseline behavior.
 
+The global Studio bus is backed by a typed event bus reference owned by the Studio application container.
+
+Container rules:
+
+- the Studio `Container` owns the global typed event bus reference used by `StudioEventBus`;
+- host-integrated services that need shared Studio operational visibility should receive that container-owned typed event bus reference during bootstrap instead of allocating disconnected local buses;
+- the `Container` must be initialized during Studio boot before packer-backed integration services, workspaces, or adapters begin use.
+
 This topology exists so callers do not need to duplicate publication decisions manually.
 
 ## Event Categories
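The container rules above reduce to a small bootstrap-order sketch: initialize the container-owned bus first, then hand that one reference to every service. Everything below (`DemoBus`, `DemoService`, `ContainerSketch`) is hypothetical scaffolding for illustration, not the project's actual `Container` or `StudioEventBus` API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal stand-in for the container-owned typed event bus (hypothetical name).
final class DemoBus {
    private final List<Consumer<String>> subscribers = new ArrayList<>();
    void subscribe(Consumer<String> subscriber) { subscribers.add(subscriber); }
    void publish(String event) { subscribers.forEach(s -> s.accept(event)); }
}

// A host-integrated service receives the shared bus at bootstrap
// instead of allocating a disconnected local bus of its own.
final class DemoService {
    private final DemoBus bus;
    DemoService(DemoBus bus) { this.bus = bus; }
    void report(String message) { bus.publish(message); }
}

public final class ContainerSketch {
    public static void main(String[] args) {
        // 1. The container (and its bus) is initialized first.
        DemoBus globalBus = new DemoBus();
        List<String> seen = new ArrayList<>();
        globalBus.subscribe(seen::add);
        // 2. Services are wired with the container-owned reference afterwards,
        //    so their events reach shared Studio visibility automatically.
        DemoService service = new DemoService(globalBus);
        service.report("packer:diagnostics_updated");
        System.out.println(seen); // [packer:diagnostics_updated]
    }
}
```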
@ -49,7 +49,10 @@ The workspace must assume:
 - unregistered assets remain visible but excluded from builds;
 - assets may aggregate many internal inputs;
 - runtime-facing output contract data exists for each asset;
-- diagnostics, activity, progress, and staged mutation responses are available from Studio-facing services.
+- diagnostics, activity, progress, and staged mutation responses are available from Studio-facing services;
+- packer-facing operational freshness may surface states such as `healthy`, `stale`, `diverged`, `reconciling`, or `failed`.
+
+Packer-backed operational events consumed by this workspace are expected to enter Studio through the container-owned typed event bus path rather than through workspace-local ad hoc bus creation.
 
 ## Workspace Model
@ -66,7 +69,8 @@ The workspace must help the user understand:
 - where the asset root lives;
 - which internal inputs belong to that asset;
 - what the asset declares toward the runtime-facing contract;
-- which operations are safe, staged, blocked, or destructive.
+- which operations are safe, staged, blocked, or destructive;
+- whether the current view reflects healthy, stale, diverged, or reconciling packer state.
 
 The `Assets` workspace is also the first concrete consumer of the Studio event-driven workspace framework.
@ -170,6 +174,7 @@ Rules:
 - The navigator must define explicit `no assets` state.
 - The navigator must define explicit `no results` state.
 - The navigator must define explicit `workspace error` state.
+- The workspace must define explicit operational freshness states when packer runtime divergence is surfaced.
 - Unregistered assets must appear in the same navigator flow as registered assets.
 - Unregistered styling must communicate `present but not yet registered`, not `broken`.
@ -241,6 +246,7 @@ Rules:
 - Actions that target the selected asset must appear beside `Summary`, not in the navigator area.
 - Safe and routine actions must be visually separated from sensitive mutations.
 - Hidden or automatic repair behavior is not allowed in the selected-asset view.
+- If packer surfaces stale/diverged/reconciling state, the selected-asset view may expose explicit refresh or reconcile-oriented actions, but it must not invent repair semantics locally.
 
 ## Action Rules
@ -334,6 +340,7 @@ Navigator-level `Doctor` and `Pack`, plus asset-level `Include In Build` and suc
 - `preview_ready` should surface in global `Activity`.
 - `action_applied` should surface in global `Activity`.
 - `action_failed` should surface in global `Activity`.
+- reconcile or divergence state transitions should remain visible when they affect current workspace truth.
 - The preview itself remains a workspace-local review surface.
 
 ## Non-Goals
prometeu-app/build.gradle.kts (new file, 22 lines)
@ -0,0 +1,22 @@
+plugins {
+    id("gradle.java-application-conventions")
+    alias(libs.plugins.javafx)
+}
+
+dependencies {
+    implementation(project(":prometeu-infra"))
+    implementation(project(":prometeu-packer:prometeu-packer-api"))
+    implementation(project(":prometeu-packer:prometeu-packer-v1"))
+    implementation(project(":prometeu-studio"))
+    implementation(libs.javafx.controls)
+    implementation(libs.javafx.fxml)
+}
+
+application {
+    mainClass = "p.studio.App"
+}
+
+javafx {
+    version = libs.versions.javafx.get()
+    modules("javafx.controls", "javafx.fxml")
+}
@ -9,7 +9,7 @@ public class App extends Application {
     @Override
     public void init() throws Exception {
         super.init();
-        Container.init();
+        Container.install(new AppContainer());
     }
 
     @Override
@ -17,6 +17,12 @@ public class App extends Application {
         new StudioWindowCoordinator(stage).showLauncher();
     }
 
+    @Override
+    public void stop() throws Exception {
+        Container.shutdown();
+        super.stop();
+    }
+
     public static void main(String[] args) {
         launch();
     }
prometeu-app/src/main/java/p/studio/AppContainer.java (new file, 79 lines)
@ -0,0 +1,79 @@
+package p.studio;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import p.studio.events.StudioEventBus;
+import p.studio.events.StudioPackerEventAdapter;
+import p.studio.utilities.ThemeService;
+import p.studio.utilities.i18n.I18nService;
+
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadFactory;
+import java.util.concurrent.atomic.AtomicInteger;
+
+public final class AppContainer implements Container {
+    private final I18nService i18nService;
+    private final ThemeService themeService;
+    private final StudioEventBus studioEventBus;
+    private final ObjectMapper mapper;
+    private final Packer packer;
+    private final StudioBackgroundTasks backgroundTasks;
+
+    public AppContainer() {
+        this.i18nService = new I18nService();
+        this.themeService = new ThemeService();
+        this.studioEventBus = new StudioEventBus();
+        this.mapper = new ObjectMapper();
+        final ExecutorService backgroundExecutor = Executors.newFixedThreadPool(2, new StudioWorkerThreadFactory());
+        this.backgroundTasks = new StudioBackgroundTasks(backgroundExecutor);
+        final p.packer.Packer embeddedPacker = p.packer.Packer.bootstrap(new StudioPackerEventAdapter(studioEventBus));
+        this.packer = new Packer(embeddedPacker.workspaceService(), embeddedPacker::close);
+    }
+
+    @Override
+    public I18nService getI18n() {
+        return i18nService;
+    }
+
+    @Override
+    public ThemeService getTheme() {
+        return themeService;
+    }
+
+    @Override
+    public StudioEventBus getEventBus() {
+        return studioEventBus;
+    }
+
+    @Override
+    public ObjectMapper getMapper() {
+        return mapper;
+    }
+
+    @Override
+    public Packer getPacker() {
+        return packer;
+    }
+
+    @Override
+    public StudioBackgroundTasks getBackgroundTasks() {
+        return backgroundTasks;
+    }
+
+    @Override
+    public void shutdownContainer() {
+        packer.shutdown();
+        backgroundTasks.shutdown();
+    }
+
+    private static final class StudioWorkerThreadFactory implements ThreadFactory {
+        private final AtomicInteger nextId = new AtomicInteger(1);
+
+        @Override
+        public Thread newThread(Runnable runnable) {
+            final Thread thread = new Thread(runnable, "studio-worker-" + nextId.getAndIncrement());
+            thread.setDaemon(true);
+            return thread;
+        }
+    }
+}
@ -0,0 +1,5 @@
+package p.studio.utilities.events;
+
+public interface EventBusPublisher<T extends SimpleEvent> {
+    <E extends T> void publish(E event);
+}
@ -7,7 +7,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.function.Consumer;
 
-public final class TypedEventBus {
+public final class TypedEventBus implements EventBusPublisher<SimpleEvent> {
     private final AtomicLong nextSubscriptionId = new AtomicLong(1L);
     private final ConcurrentMap<Class<?>, ConcurrentMap<Long, Consumer<Object>>> subscriptionsByType =
             new ConcurrentHashMap<>();
@ -40,7 +40,8 @@ public final class TypedEventBus {
         };
     }
 
-    public void publish(Object event) {
+    @Override
+    public <E extends SimpleEvent> void publish(E event) {
        Objects.requireNonNull(event, "event");
 
        final ConcurrentMap<Long, Consumer<Object>> subscriptions = subscriptionsByType.get(event.getClass());
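The hunks above narrow `publish(Object)` to a typed `publish(E extends SimpleEvent)`. Since only fragments of `TypedEventBus` appear in the diff, the following is a compacted, self-contained sketch of that shape; `SimpleEvent`'s empty body, `MiniTypedEventBus`, and the `Ping` event are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Assumed marker interface; only its name appears in the diff.
interface SimpleEvent {}

interface EventBusPublisher<T extends SimpleEvent> {
    <E extends T> void publish(E event);
}

// Minimal sketch of a typed bus keyed by event class, mirroring the
// map-per-type layout visible in the TypedEventBus diff.
final class MiniTypedEventBus implements EventBusPublisher<SimpleEvent> {
    private final Map<Class<?>, CopyOnWriteArrayList<Consumer<Object>>> subscriptionsByType =
            new ConcurrentHashMap<>();

    <E extends SimpleEvent> void subscribe(Class<E> type, Consumer<E> handler) {
        subscriptionsByType
                .computeIfAbsent(type, key -> new CopyOnWriteArrayList<>())
                .add(event -> handler.accept(type.cast(event)));
    }

    @Override
    public <E extends SimpleEvent> void publish(E event) {
        // Dispatch only to subscribers registered for this exact event class.
        final var handlers = subscriptionsByType.get(event.getClass());
        if (handlers != null) {
            handlers.forEach(handler -> handler.accept(event));
        }
    }
}

record Ping(String id) implements SimpleEvent {}

public final class TypedBusDemo {
    public static void main(String[] args) {
        MiniTypedEventBus bus = new MiniTypedEventBus();
        List<String> seen = new ArrayList<>();
        bus.subscribe(Ping.class, ping -> seen.add(ping.id()));
        bus.publish(new Ping("a"));
        System.out.println(seen); // [a]
    }
}
```

The compile-time `SimpleEvent` bound is the point of the change: callers can no longer push arbitrary `Object`s onto the shared bus.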
prometeu-lsp/prometeu-lsp-api/build.gradle.kts (new file, 3 lines)
@ -0,0 +1,3 @@
+plugins {
+    id("gradle.java-library-conventions")
+}
prometeu-lsp/prometeu-lsp-v1/build.gradle.kts (new file, 3 lines)
@ -0,0 +1,3 @@
+plugins {
+    id("gradle.java-library-conventions")
+}
prometeu-packer/prometeu-packer-api/build.gradle.kts (new file, 3 lines)
@ -0,0 +1,3 @@
+plugins {
+    id("gradle.java-library-conventions")
+}
@ -1,4 +1,4 @@
-package p.packer.api;
+package p.packer;
 
 public enum PackerOperationStatus {
     SUCCESS,
@ -1,4 +1,4 @@
-package p.packer.api;
+package p.packer;
 
 import java.nio.file.Path;
 import java.util.Objects;
@ -1,13 +1,13 @@
-package p.packer.api.workspace;
+package p.packer;
 
-import p.packer.api.PackerOperationClass;
+import p.packer.messages.*;
 
 public interface PackerWorkspaceService {
-    PackerOperationClass operationClass();
-
     InitWorkspaceResult initWorkspace(InitWorkspaceRequest request);
 
     ListAssetsResult listAssets(ListAssetsRequest request);
 
     GetAssetDetailsResult getAssetDetails(GetAssetDetailsRequest request);
+
+    CreateAssetResult createAsset(CreateAssetRequest request);
 }
@ -0,0 +1,47 @@
+package p.packer.assets;
+
+import java.util.Locale;
+
+public enum AssetFamilyCatalog {
+    IMAGE_BANK("image_bank", "Image Bank"),
+    PALETTE_BANK("palette_bank", "Palette Bank"),
+    SOUND_BANK("sound_bank", "Sound Bank"),
+    UNKNOWN("unknown", "Unknown");
+
+    private final String manifestType;
+    private final String displayName;
+
+    AssetFamilyCatalog(String manifestType, String displayName) {
+        this.manifestType = manifestType;
+        this.displayName = displayName;
+    }
+
+    public String manifestType() {
+        return manifestType;
+    }
+
+    public String displayName() {
+        return displayName;
+    }
+
+    public boolean matchesQuery(String normalizedQuery) {
+        return manifestType.contains(normalizedQuery)
+                || displayName.toLowerCase(Locale.ROOT).contains(normalizedQuery);
+    }
+
+    public static AssetFamilyCatalog fromManifestType(String manifestType) {
+        if (manifestType == null) {
+            return UNKNOWN;
+        }
+        final String normalized = manifestType.trim().toLowerCase(Locale.ROOT);
+        for (AssetFamilyCatalog candidate : values()) {
+            if (candidate == UNKNOWN) {
+                continue;
+            }
+            if (candidate.manifestType.equals(normalized)) {
+                return candidate;
+            }
+        }
+        return UNKNOWN;
+    }
+}
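The manifest-type lookup above trims, lower-cases, and never matches the `UNKNOWN` sentinel directly. A compacted stand-in demonstrates that behavior; `FamilyDemo` and `FamilyLookupDemo` are renamed copies for standalone execution, not project types.

```java
import java.util.Locale;

// Compacted copy of the AssetFamilyCatalog lookup from the new file above,
// reproduced so the normalization behavior can be run on its own.
enum FamilyDemo {
    IMAGE_BANK("image_bank"),
    PALETTE_BANK("palette_bank"),
    SOUND_BANK("sound_bank"),
    UNKNOWN("unknown");

    private final String manifestType;

    FamilyDemo(String manifestType) {
        this.manifestType = manifestType;
    }

    static FamilyDemo fromManifestType(String manifestType) {
        if (manifestType == null) {
            return UNKNOWN;
        }
        // Trim and lower-case before matching, exactly as the diff does.
        final String normalized = manifestType.trim().toLowerCase(Locale.ROOT);
        for (FamilyDemo candidate : values()) {
            // The sentinel is skipped, so "unknown" in a manifest still maps
            // to UNKNOWN only by falling through, never by direct match.
            if (candidate != UNKNOWN && candidate.manifestType.equals(normalized)) {
                return candidate;
            }
        }
        return UNKNOWN;
    }
}

public final class FamilyLookupDemo {
    public static void main(String[] args) {
        System.out.println(FamilyDemo.fromManifestType("  Image_Bank ")); // IMAGE_BANK
        System.out.println(FamilyDemo.fromManifestType(null));            // UNKNOWN
    }
}
```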
@ -0,0 +1,52 @@
+package p.packer.assets;
+
+import java.util.Objects;
+
+public record AssetReference(String value) {
+    private static final String ASSET_ID_PREFIX = "asset:id:";
+    private static final String ASSET_ROOT_PREFIX = "asset:root:";
+
+    public AssetReference {
+        value = Objects.requireNonNull(value, "value").trim();
+        if (value.isBlank()) {
+            throw new IllegalArgumentException("value must not be blank");
+        }
+    }
+
+    public static AssetReference of(String value) {
+        return new AssetReference(value);
+    }
+
+    public static AssetReference forAssetId(int assetId) {
+        if (assetId <= 0) {
+            throw new IllegalArgumentException("assetId must be positive");
+        }
+        return new AssetReference(ASSET_ID_PREFIX + assetId);
+    }
+
+    public static AssetReference forRelativeAssetRoot(String relativeAssetRoot) {
+        final String normalized = Objects.requireNonNull(relativeAssetRoot, "relativeAssetRoot").trim();
+        if (normalized.isBlank()) {
+            throw new IllegalArgumentException("relativeAssetRoot must not be blank");
+        }
+        return new AssetReference(ASSET_ROOT_PREFIX + normalized);
+    }
+
+    public boolean isAssetIdReference() {
+        return value.startsWith(ASSET_ID_PREFIX);
+    }
+
+    public boolean isAssetRootReference() {
+        return value.startsWith(ASSET_ROOT_PREFIX);
+    }
+
+    public String rawValue() {
+        if (isAssetIdReference()) {
+            return value.substring(ASSET_ID_PREFIX.length());
+        }
+        if (isAssetRootReference()) {
+            return value.substring(ASSET_ROOT_PREFIX.length());
+        }
+        return value;
+    }
+}
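The new `AssetReference` record encodes either a registry id (`asset:id:<n>`) or a relative asset root (`asset:root:<path>`) in one prefixed string. Below is a compacted copy of the record, reproduced so the round-trip can be run standalone; the `AssetReferenceDemo` wrapper is illustrative only.

```java
import java.util.Objects;

// Compacted copy of the AssetReference record from the diff above,
// reproduced so this snippet compiles on its own.
record AssetReference(String value) {
    private static final String ASSET_ID_PREFIX = "asset:id:";
    private static final String ASSET_ROOT_PREFIX = "asset:root:";

    AssetReference {
        value = Objects.requireNonNull(value, "value").trim();
        if (value.isBlank()) {
            throw new IllegalArgumentException("value must not be blank");
        }
    }

    static AssetReference forAssetId(int assetId) {
        if (assetId <= 0) {
            throw new IllegalArgumentException("assetId must be positive");
        }
        return new AssetReference(ASSET_ID_PREFIX + assetId);
    }

    static AssetReference forRelativeAssetRoot(String relativeAssetRoot) {
        return new AssetReference(ASSET_ROOT_PREFIX + relativeAssetRoot.trim());
    }

    boolean isAssetIdReference() {
        return value.startsWith(ASSET_ID_PREFIX);
    }

    // Strips whichever prefix is present, returning the payload.
    String rawValue() {
        if (isAssetIdReference()) {
            return value.substring(ASSET_ID_PREFIX.length());
        }
        if (value.startsWith(ASSET_ROOT_PREFIX)) {
            return value.substring(ASSET_ROOT_PREFIX.length());
        }
        return value;
    }
}

public final class AssetReferenceDemo {
    public static void main(String[] args) {
        AssetReference byId = AssetReference.forAssetId(7);
        AssetReference byRoot = AssetReference.forRelativeAssetRoot("sprites/hero");
        System.out.println(byId.value() + " -> " + byId.rawValue());     // asset:id:7 -> 7
        System.out.println(byRoot.value() + " -> " + byRoot.rawValue()); // asset:root:sprites/hero -> sprites/hero
    }
}
```

The single-string encoding lets Studio address both registered assets (by id) and not-yet-registered ones (by root path) through one reference type.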
@ -0,0 +1,37 @@
+package p.packer.assets;
+
+import java.util.Locale;
+
+public enum OutputCodecCatalog {
+    NONE("NONE", "None"),
+    UNKNOWN("UNKNOWN", "Unknown");
+
+    private final String manifestValue;
+    private final String displayName;
+
+    OutputCodecCatalog(String manifestValue, String displayName) {
+        this.manifestValue = manifestValue;
+        this.displayName = displayName;
+    }
+
+    public String manifestValue() {
+        return manifestValue;
+    }
+
+    public String displayName() {
+        return displayName;
+    }
+
+    public static OutputCodecCatalog fromManifestValue(String manifestValue) {
+        if (manifestValue == null) {
+            return UNKNOWN;
+        }
+        final String normalized = manifestValue.trim().toUpperCase(Locale.ROOT);
+        for (OutputCodecCatalog candidate : values()) {
+            if (candidate.manifestValue.equals(normalized)) {
+                return candidate;
+            }
+        }
+        return UNKNOWN;
+    }
+}
@ -0,0 +1,68 @@
+package p.packer.assets;
+
+import java.util.List;
+import java.util.Locale;
+
+public enum OutputFormatCatalog {
+    TILES_INDEXED_V1(AssetFamilyCatalog.IMAGE_BANK, "TILES/indexed_v1", "TILES/indexed_v1"),
+    PALETTE_INDEXED_V1(AssetFamilyCatalog.PALETTE_BANK, "PALETTE/indexed_v1", "PALETTE/indexed_v1"),
+    SOUND_BANK_V1(AssetFamilyCatalog.SOUND_BANK, "SOUND/bank_v1", "SOUND/bank_v1"),
+    AUDIO_PCM_V1(AssetFamilyCatalog.SOUND_BANK, "AUDIO/pcm_v1", "AUDIO/pcm_v1"),
+    UNKNOWN(AssetFamilyCatalog.UNKNOWN, "unknown", "Unknown");
+
+    private final AssetFamilyCatalog assetFamily;
+    private final String manifestValue;
+    private final String displayName;
+
+    OutputFormatCatalog(AssetFamilyCatalog assetFamily, String manifestValue, String displayName) {
+        this.assetFamily = assetFamily;
+        this.manifestValue = manifestValue;
+        this.displayName = displayName;
+    }
+
+    public AssetFamilyCatalog assetFamily() {
+        return assetFamily;
+    }
+
+    public String manifestValue() {
+        return manifestValue;
+    }
+
+    public String displayName() {
+        return displayName;
+    }
+
+    public List<OutputCodecCatalog> availableCodecs() {
+        return this == UNKNOWN ? List.of() : List.of(OutputCodecCatalog.NONE);
+    }
+
+    public boolean supports(OutputCodecCatalog codec) {
+        return availableCodecs().contains(codec);
+    }
+
+    public static List<OutputFormatCatalog> supportedFor(AssetFamilyCatalog assetFamily) {
+        if (assetFamily == null || assetFamily == AssetFamilyCatalog.UNKNOWN) {
+            return List.of();
+        }
+        return java.util.Arrays.stream(values())
+                .filter(candidate -> candidate != UNKNOWN)
+                .filter(candidate -> candidate.assetFamily == assetFamily)
+                .toList();
+    }
+
+    public static OutputFormatCatalog fromManifestValue(String manifestValue) {
+        if (manifestValue == null) {
+            return UNKNOWN;
+        }
+        final String normalized = manifestValue.trim().toLowerCase(Locale.ROOT);
+        for (OutputFormatCatalog candidate : values()) {
+            if (candidate == UNKNOWN) {
+                continue;
+            }
+            if (candidate.manifestValue.toLowerCase(Locale.ROOT).equals(normalized)) {
+                return candidate;
+            }
+        }
+        return UNKNOWN;
+    }
+}
@ -1,6 +1,6 @@
-package p.packer.api.assets;
+package p.packer.assets;
 
-import p.packer.api.diagnostics.PackerDiagnostic;
+import p.packer.diagnostics.PackerDiagnostic;
 
 import java.nio.file.Path;
 import java.util.List;
@ -10,14 +10,18 @@ import java.util.Objects;
 public record PackerAssetDetails(
         PackerAssetSummary summary,
         String outputFormat,
-        String outputCodec,
+        OutputCodecCatalog outputCodec,
+        List<OutputCodecCatalog> availableOutputCodecs,
+        Map<OutputCodecCatalog, List<PackerCodecConfigurationField>> codecConfigurationFieldsByCodec,
         Map<String, List<Path>> inputsByRole,
         List<PackerDiagnostic> diagnostics) {
 
     public PackerAssetDetails {
         Objects.requireNonNull(summary, "summary");
         outputFormat = Objects.requireNonNullElse(outputFormat, "unknown").trim();
-        outputCodec = Objects.requireNonNullElse(outputCodec, "unknown").trim();
+        outputCodec = Objects.requireNonNullElse(outputCodec, OutputCodecCatalog.UNKNOWN);
+        availableOutputCodecs = List.copyOf(Objects.requireNonNull(availableOutputCodecs, "availableOutputCodecs"));
+        codecConfigurationFieldsByCodec = Map.copyOf(Objects.requireNonNull(codecConfigurationFieldsByCodec, "codecConfigurationFieldsByCodec"));
         inputsByRole = Map.copyOf(Objects.requireNonNull(inputsByRole, "inputsByRole"));
         diagnostics = List.copyOf(Objects.requireNonNull(diagnostics, "diagnostics"));
     }
@ -1,4 +1,4 @@
-package p.packer.api.assets;
+package p.packer.assets;
 
 import java.nio.file.Path;
 import java.util.Objects;
@ -1,4 +1,4 @@
-package p.packer.api.assets;
+package p.packer.assets;
 
 public enum PackerAssetState {
     REGISTERED,
@ -1,23 +1,22 @@
-package p.packer.api.assets;
+package p.packer.assets;
 
 import java.util.Objects;
 
 public record PackerAssetSummary(
+        AssetReference assetReference,
         PackerAssetIdentity identity,
         PackerAssetState state,
         PackerBuildParticipation buildParticipation,
-        String assetFamily,
+        AssetFamilyCatalog assetFamily,
         boolean preloadEnabled,
         boolean hasDiagnostics) {
 
     public PackerAssetSummary {
+        Objects.requireNonNull(assetReference, "assetReference");
         Objects.requireNonNull(identity, "identity");
         Objects.requireNonNull(state, "state");
         Objects.requireNonNull(buildParticipation, "buildParticipation");
-        assetFamily = Objects.requireNonNullElse(assetFamily, "unknown").trim();
-        if (assetFamily.isBlank()) {
-            assetFamily = "unknown";
-        }
+        assetFamily = Objects.requireNonNullElse(assetFamily, AssetFamilyCatalog.UNKNOWN);
         if (state == PackerAssetState.REGISTERED && identity.assetId() == null) {
             throw new IllegalArgumentException("registered asset must expose assetId");
         }
@ -1,4 +1,4 @@
-package p.packer.api.assets;
+package p.packer.assets;
 
 public enum PackerBuildParticipation {
     INCLUDED,
@ -0,0 +1,24 @@
+package p.packer.assets;
+
+import java.util.List;
+import java.util.Objects;
+
+public record PackerCodecConfigurationField(
+        String key,
+        String label,
+        PackerCodecConfigurationFieldType fieldType,
+        String value,
+        boolean required,
+        List<String> options) {
+
+    public PackerCodecConfigurationField {
+        key = Objects.requireNonNull(key, "key").trim();
+        label = Objects.requireNonNull(label, "label").trim();
+        Objects.requireNonNull(fieldType, "fieldType");
+        value = Objects.requireNonNullElse(value, "").trim();
+        options = List.copyOf(Objects.requireNonNull(options, "options"));
+        if (key.isBlank() || label.isBlank()) {
+            throw new IllegalArgumentException("codec configuration field key and label must not be blank");
+        }
+    }
+}
@ -0,0 +1,8 @@
+package p.packer.assets;
+
+public enum PackerCodecConfigurationFieldType {
+    TEXT,
+    BOOLEAN,
+    INTEGER,
+    ENUM
+}
@ -1,4 +1,4 @@
-package p.packer.api.diagnostics;
+package p.packer.diagnostics;
 
 import java.nio.file.Path;
 import java.util.Objects;
@ -1,4 +1,4 @@
-package p.packer.api.diagnostics;
+package p.packer.diagnostics;
 
 public enum PackerDiagnosticCategory {
     STRUCTURAL,
@ -1,4 +1,4 @@
-package p.packer.api.diagnostics;
+package p.packer.diagnostics;
 
 public enum PackerDiagnosticSeverity {
     INFO,
@ -1,4 +1,4 @@
-package p.packer.api.events;
+package p.packer.events;
 
 import java.time.Instant;
 import java.util.List;
@ -0,0 +1,8 @@
+package p.packer.events;
+
+public enum PackerEventKind {
+    ASSET_DISCOVERED,
+    DIAGNOSTICS_UPDATED,
+    ACTION_APPLIED,
+    ACTION_FAILED
+}
@ -0,0 +1,6 @@
+package p.packer.events;
+
+@FunctionalInterface
+public interface PackerEventSink {
+    void publish(PackerEvent event);
+}
@ -1,4 +1,4 @@
-package p.packer.api.events;
+package p.packer.events;
 
 public record PackerProgress(
         double value,
@ -1,4 +1,4 @@
-package p.packer.foundation;
+package p.packer.exceptions;
 
 public final class PackerRegistryException extends RuntimeException {
     public PackerRegistryException(String message) {
@ -0,0 +1,33 @@
+package p.packer.messages;
+
+import p.packer.PackerProjectContext;
+import p.packer.assets.AssetFamilyCatalog;
+import p.packer.assets.OutputCodecCatalog;
+import p.packer.assets.OutputFormatCatalog;
+
+import java.util.Objects;
+
+public record CreateAssetRequest(
+        PackerProjectContext project,
+        String assetRoot,
+        String assetName,
+        AssetFamilyCatalog assetFamily,
+        OutputFormatCatalog outputFormat,
+        OutputCodecCatalog outputCodec,
+        boolean preloadEnabled) {
+
+    public CreateAssetRequest {
+        Objects.requireNonNull(project, "project");
+        assetRoot = Objects.requireNonNull(assetRoot, "assetRoot").trim();
+        assetName = Objects.requireNonNull(assetName, "assetName").trim();
+        assetFamily = Objects.requireNonNullElse(assetFamily, AssetFamilyCatalog.UNKNOWN);
+        outputFormat = Objects.requireNonNullElse(outputFormat, OutputFormatCatalog.UNKNOWN);
+        outputCodec = Objects.requireNonNullElse(outputCodec, OutputCodecCatalog.UNKNOWN);
+        if (assetRoot.isBlank()) {
+            throw new IllegalArgumentException("assetRoot must not be blank");
+        }
+        if (assetName.isBlank()) {
+            throw new IllegalArgumentException("assetName must not be blank");
+        }
+    }
+}
@ -0,0 +1,14 @@
+package p.packer.messages;
+
+import p.packer.PackerOperationStatus;
+import p.packer.assets.AssetReference;
+
+import java.nio.file.Path;
+
+public record CreateAssetResult(
+        PackerOperationStatus status,
+        String summary,
+        AssetReference assetReference,
+        Path assetRoot,
+        Path manifestPath) {
+}
@@ -0,0 +1,16 @@
+package p.packer.messages;
+
+import p.packer.PackerProjectContext;
+import p.packer.assets.AssetReference;
+
+import java.util.Objects;
+
+public record GetAssetDetailsRequest(
+        PackerProjectContext project,
+        AssetReference assetReference) {
+
+    public GetAssetDetailsRequest {
+        Objects.requireNonNull(project, "project");
+        Objects.requireNonNull(assetReference, "assetReference");
+    }
+}
@@ -1,8 +1,8 @@
-package p.packer.api.workspace;
+package p.packer.messages;
 
-import p.packer.api.PackerOperationStatus;
-import p.packer.api.assets.PackerAssetDetails;
-import p.packer.api.diagnostics.PackerDiagnostic;
+import p.packer.PackerOperationStatus;
+import p.packer.assets.PackerAssetDetails;
+import p.packer.diagnostics.PackerDiagnostic;
 
 import java.util.List;
 import java.util.Objects;
@@ -1,6 +1,6 @@
-package p.packer.api.workspace;
+package p.packer.messages;
 
-import p.packer.api.PackerProjectContext;
+import p.packer.PackerProjectContext;
 
 import java.util.Objects;
 
@@ -1,7 +1,7 @@
-package p.packer.api.workspace;
+package p.packer.messages;
 
-import p.packer.api.PackerOperationStatus;
-import p.packer.api.diagnostics.PackerDiagnostic;
+import p.packer.PackerOperationStatus;
+import p.packer.diagnostics.PackerDiagnostic;
 
 import java.nio.file.Path;
 import java.util.List;
@@ -1,6 +1,6 @@
-package p.packer.api.workspace;
+package p.packer.messages;
 
-import p.packer.api.PackerProjectContext;
+import p.packer.PackerProjectContext;
 
 import java.util.Objects;
 
@@ -1,8 +1,8 @@
-package p.packer.api.workspace;
+package p.packer.messages;
 
-import p.packer.api.PackerOperationStatus;
-import p.packer.api.assets.PackerAssetSummary;
-import p.packer.api.diagnostics.PackerDiagnostic;
+import p.packer.PackerOperationStatus;
+import p.packer.assets.PackerAssetSummary;
+import p.packer.diagnostics.PackerDiagnostic;
 
 import java.util.List;
 import java.util.Objects;
@@ -4,4 +4,5 @@ plugins {
 
 dependencies {
     implementation(project(":prometeu-infra"))
+    implementation(project(":prometeu-packer:prometeu-packer-api"))
 }
@@ -0,0 +1,48 @@
+package p.packer;
+
+import p.packer.events.PackerEventSink;
+import p.packer.services.*;
+
+import java.io.Closeable;
+import java.util.Objects;
+
+public final class Packer implements Closeable {
+    private final PackerWorkspaceService workspaceService;
+    private final PackerRuntimeRegistry runtimeRegistry;
+    private final PackerProjectWriteCoordinator writeCoordinator;
+
+    private Packer(
+            PackerWorkspaceService workspaceService,
+            PackerRuntimeRegistry runtimeRegistry,
+            PackerProjectWriteCoordinator writeCoordinator) {
+        this.workspaceService = Objects.requireNonNull(workspaceService, "workspaceService");
+        this.runtimeRegistry = Objects.requireNonNull(runtimeRegistry, "runtimeRegistry");
+        this.writeCoordinator = Objects.requireNonNull(writeCoordinator, "writeCoordinator");
+    }
+
+    public static Packer bootstrap(PackerEventSink eventSink) {
+        final PackerEventSink resolvedEventSink = Objects.requireNonNull(eventSink, "eventSink");
+        final PackerWorkspaceFoundation workspaceFoundation = new PackerWorkspaceFoundation();
+        final PackerAssetDeclarationParser declarationParser = new PackerAssetDeclarationParser();
+        final PackerRuntimeRegistry runtimeRegistry = new PackerRuntimeRegistry(
+                new PackerRuntimeLoader(workspaceFoundation, declarationParser));
+        final PackerAssetDetailsService assetDetailsService = new PackerAssetDetailsService(runtimeRegistry);
+        final PackerProjectWriteCoordinator writeCoordinator = new PackerProjectWriteCoordinator();
+        return new Packer(new FileSystemPackerWorkspaceService(
+                workspaceFoundation,
+                assetDetailsService,
+                runtimeRegistry,
+                writeCoordinator,
+                resolvedEventSink), runtimeRegistry, writeCoordinator);
+    }
+
+    public PackerWorkspaceService workspaceService() {
+        return workspaceService;
+    }
+
+    @Override
+    public void close() {
+        writeCoordinator.close();
+        runtimeRegistry.disposeAll();
+    }
+}
@@ -1,4 +1,7 @@
-package p.packer.declarations;
+package p.packer.models;
 
+import p.packer.assets.AssetFamilyCatalog;
+import p.packer.assets.OutputCodecCatalog;
+
 import java.util.List;
 import java.util.Map;
@@ -6,23 +9,25 @@ import java.util.Objects;
 
 public record PackerAssetDeclaration(
         int schemaVersion,
+        String assetUuid,
         String name,
-        String type,
+        AssetFamilyCatalog assetFamily,
         Map<String, List<String>> inputsByRole,
         String outputFormat,
-        String outputCodec,
+        OutputCodecCatalog outputCodec,
         boolean preloadEnabled) {
 
     public PackerAssetDeclaration {
         if (schemaVersion <= 0) {
            throw new IllegalArgumentException("schemaVersion must be positive");
         }
+        assetUuid = Objects.requireNonNull(assetUuid, "assetUuid").trim();
         name = Objects.requireNonNull(name, "name").trim();
-        type = Objects.requireNonNull(type, "type").trim();
+        assetFamily = Objects.requireNonNull(assetFamily, "assetFamily");
         inputsByRole = Map.copyOf(Objects.requireNonNull(inputsByRole, "inputsByRole"));
         outputFormat = Objects.requireNonNull(outputFormat, "outputFormat").trim();
-        outputCodec = Objects.requireNonNull(outputCodec, "outputCodec").trim();
-        if (name.isBlank() || type.isBlank() || outputFormat.isBlank() || outputCodec.isBlank()) {
+        outputCodec = Objects.requireNonNull(outputCodec, "outputCodec");
+        if (assetUuid.isBlank() || name.isBlank() || outputFormat.isBlank()) {
            throw new IllegalArgumentException("declaration fields must not be blank");
         }
     }
 }
@@ -1,6 +1,6 @@
-package p.packer.declarations;
+package p.packer.models;
 
-import p.packer.api.diagnostics.PackerDiagnostic;
+import p.packer.diagnostics.PackerDiagnostic;
 
 import java.util.List;
 import java.util.Objects;
@@ -1,4 +1,4 @@
-package p.packer.foundation;
+package p.packer.models;
 
 import java.util.Objects;
 
@@ -1,4 +1,4 @@
-package p.packer.foundation;
+package p.packer.models;
 
 import java.util.Comparator;
 import java.util.List;
@@ -0,0 +1,19 @@
+package p.packer.models;
+
+import java.nio.file.Path;
+import java.util.Objects;
+import java.util.Optional;
+
+public record PackerRuntimeAsset(
+        Path assetRoot,
+        Path manifestPath,
+        Optional<PackerRegistryEntry> registryEntry,
+        PackerAssetDeclarationParseResult parsedDeclaration) {
+
+    public PackerRuntimeAsset {
+        assetRoot = Objects.requireNonNull(assetRoot, "assetRoot").toAbsolutePath().normalize();
+        manifestPath = Objects.requireNonNull(manifestPath, "manifestPath").toAbsolutePath().normalize();
+        registryEntry = Objects.requireNonNull(registryEntry, "registryEntry");
+        parsedDeclaration = Objects.requireNonNull(parsedDeclaration, "parsedDeclaration");
+    }
+}
@@ -0,0 +1,18 @@
+package p.packer.models;
+
+import java.util.List;
+import java.util.Objects;
+
+public record PackerRuntimeSnapshot(
+        long generation,
+        PackerRegistryState registry,
+        List<PackerRuntimeAsset> assets) {
+
+    public PackerRuntimeSnapshot {
+        if (generation <= 0L) {
+            throw new IllegalArgumentException("generation must be positive");
+        }
+        registry = Objects.requireNonNull(registry, "registry");
+        assets = List.copyOf(Objects.requireNonNull(assets, "assets"));
+    }
+}
@@ -1,9 +1,11 @@
-package p.packer.foundation;
+package p.packer.services;
 
 import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.databind.ObjectMapper;
-import p.packer.api.PackerProjectContext;
+import p.packer.PackerProjectContext;
+import p.packer.models.PackerRegistryEntry;
+import p.packer.models.PackerRegistryState;
 
 import java.io.IOException;
 import java.nio.file.Files;
@@ -29,7 +31,7 @@ public final class FileSystemPackerRegistryRepository implements PackerRegistryR
             final RegistryDocument document = MAPPER.readValue(registryPath.toFile(), RegistryDocument.class);
             final int schemaVersion = document.schemaVersion <= 0 ? REGISTRY_SCHEMA_VERSION : document.schemaVersion;
             if (schemaVersion != REGISTRY_SCHEMA_VERSION) {
-                throw new PackerRegistryException("Unsupported registry schema_version: " + schemaVersion);
+                throw new p.packer.exceptions.PackerRegistryException("Unsupported registry schema_version: " + schemaVersion);
             }
             final List<PackerRegistryEntry> entries = new ArrayList<>();
             if (document.assets != null) {
@@ -53,7 +55,7 @@ public final class FileSystemPackerRegistryRepository implements PackerRegistryR
                     nextAssetId,
                     entries.stream().sorted(Comparator.comparingInt(PackerRegistryEntry::assetId)).toList());
         } catch (IOException exception) {
-            throw new PackerRegistryException("Unable to load registry: " + registryPath, exception);
+            throw new p.packer.exceptions.PackerRegistryException("Unable to load registry: " + registryPath, exception);
         }
     }
 
@@ -76,7 +78,7 @@ public final class FileSystemPackerRegistryRepository implements PackerRegistryR
                     .toList();
             MAPPER.writerWithDefaultPrettyPrinter().writeValue(registryPath.toFile(), document);
         } catch (IOException exception) {
-            throw new PackerRegistryException("Unable to save registry: " + registryPath, exception);
+            throw new p.packer.exceptions.PackerRegistryException("Unable to save registry: " + registryPath, exception);
         }
     }
 
@@ -90,25 +92,25 @@ public final class FileSystemPackerRegistryRepository implements PackerRegistryR
        final Set<String> roots = new HashSet<>();
        for (PackerRegistryEntry entry : entries) {
            if (!assetIds.add(entry.assetId())) {
-                throw new PackerRegistryException("Duplicate asset_id in registry: " + entry.assetId());
+                throw new p.packer.exceptions.PackerRegistryException("Duplicate asset_id in registry: " + entry.assetId());
            }
            if (!assetUuids.add(entry.assetUuid())) {
-                throw new PackerRegistryException("Duplicate asset_uuid in registry: " + entry.assetUuid());
+                throw new p.packer.exceptions.PackerRegistryException("Duplicate asset_uuid in registry: " + entry.assetUuid());
            }
            if (!roots.add(entry.root())) {
-                throw new PackerRegistryException("Duplicate asset root in registry: " + entry.root());
+                throw new p.packer.exceptions.PackerRegistryException("Duplicate asset root in registry: " + entry.root());
            }
        }
    }
 
    private String normalizeRoot(String root) {
        if (root == null || root.isBlank()) {
-            throw new PackerRegistryException("Registry asset root must not be blank");
+            throw new p.packer.exceptions.PackerRegistryException("Registry asset root must not be blank");
        }
        final String normalized = root.trim().replace('\\', '/');
        final Path normalizedPath = Path.of(normalized).normalize();
        if (normalizedPath.isAbsolute() || normalizedPath.startsWith("..")) {
-            throw new PackerRegistryException("Registry asset root is outside the trusted assets boundary: " + normalized);
+            throw new p.packer.exceptions.PackerRegistryException("Registry asset root is outside the trusted assets boundary: " + normalized);
        }
        return normalized;
    }
@@ -0,0 +1,324 @@
+package p.packer.services;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import p.packer.PackerOperationStatus;
+import p.packer.PackerProjectContext;
+import p.packer.assets.AssetFamilyCatalog;
+import p.packer.assets.AssetReference;
+import p.packer.assets.OutputFormatCatalog;
+import p.packer.assets.PackerAssetIdentity;
+import p.packer.assets.PackerAssetState;
+import p.packer.assets.PackerAssetSummary;
+import p.packer.assets.PackerBuildParticipation;
+import p.packer.diagnostics.PackerDiagnostic;
+import p.packer.diagnostics.PackerDiagnosticCategory;
+import p.packer.diagnostics.PackerDiagnosticSeverity;
+import p.packer.events.PackerEventKind;
+import p.packer.events.PackerEventSink;
+import p.packer.events.PackerProgress;
+import p.packer.messages.CreateAssetRequest;
+import p.packer.messages.CreateAssetResult;
+import p.packer.messages.GetAssetDetailsRequest;
+import p.packer.messages.GetAssetDetailsResult;
+import p.packer.messages.InitWorkspaceRequest;
+import p.packer.messages.InitWorkspaceResult;
+import p.packer.messages.ListAssetsRequest;
+import p.packer.messages.ListAssetsResult;
+import p.packer.PackerWorkspaceService;
+import p.packer.models.PackerAssetDeclarationParseResult;
+import p.packer.models.PackerRegistryEntry;
+import p.packer.models.PackerRegistryState;
+import p.packer.models.PackerRuntimeAsset;
+import p.packer.models.PackerRuntimeSnapshot;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.*;
+import java.util.stream.Stream;
+
+public final class FileSystemPackerWorkspaceService implements PackerWorkspaceService {
+    private static final ObjectMapper MAPPER = new ObjectMapper();
+    private final PackerWorkspaceFoundation workspaceFoundation;
+    private final PackerAssetDetailsService detailsService;
+    private final PackerRuntimeRegistry runtimeRegistry;
+    private final PackerProjectWriteCoordinator writeCoordinator;
+    private final PackerEventSink eventSink;
+
+    public FileSystemPackerWorkspaceService(
+            PackerWorkspaceFoundation workspaceFoundation,
+            PackerAssetDetailsService detailsService,
+            PackerRuntimeRegistry runtimeRegistry,
+            PackerProjectWriteCoordinator writeCoordinator,
+            PackerEventSink eventSink) {
+        this.workspaceFoundation = Objects.requireNonNull(workspaceFoundation, "workspaceFoundation");
+        this.detailsService = Objects.requireNonNull(detailsService, "detailsService");
+        this.runtimeRegistry = Objects.requireNonNull(runtimeRegistry, "runtimeRegistry");
+        this.writeCoordinator = Objects.requireNonNull(writeCoordinator, "writeCoordinator");
+        this.eventSink = Objects.requireNonNull(eventSink, "eventSink");
+    }
+
+    @Override
+    public InitWorkspaceResult initWorkspace(InitWorkspaceRequest request) {
+        final InitWorkspaceResult result = workspaceFoundation.initWorkspace(request);
+        runtimeRegistry.refresh(Objects.requireNonNull(request, "request").project());
+        return result;
+    }
+
+    @Override
+    public ListAssetsResult listAssets(ListAssetsRequest request) {
+        final PackerProjectContext project = Objects.requireNonNull(request, "request").project();
+        final PackerRuntimeSnapshot snapshot = runtimeRegistry.getOrLoad(project).snapshot();
+        final PackerOperationEventEmitter events = new PackerOperationEventEmitter(project, eventSink);
+        final PackerRegistryState registry = snapshot.registry();
+        final Map<Path, PackerRegistryEntry> registryByRoot = new HashMap<>();
+        for (PackerRegistryEntry entry : registry.assets()) {
+            registryByRoot.put(PackerWorkspacePaths.assetRoot(project, entry.root()), entry);
+        }
+
+        final List<PackerAssetSummary> assets = new ArrayList<>();
+        final List<PackerDiagnostic> diagnostics = new ArrayList<>();
+        final Set<Path> discoveredRoots = new HashSet<>();
+
+        final List<PackerRuntimeAsset> runtimeAssets = snapshot.assets();
+        final int total = runtimeAssets.size();
+        for (int index = 0; index < runtimeAssets.size(); index += 1) {
+            final PackerRuntimeAsset runtimeAsset = runtimeAssets.get(index);
+            final Path assetRoot = runtimeAsset.assetRoot();
+            final Path assetManifestPath = runtimeAsset.manifestPath();
+            discoveredRoots.add(assetRoot);
+            final PackerRegistryEntry registryEntry = registryByRoot.get(assetRoot);
+            final PackerAssetDeclarationParseResult parsed = runtimeAsset.parsedDeclaration();
+            diagnostics.addAll(parsed.diagnostics());
+            diagnostics.addAll(identityMismatchDiagnostics(registryEntry, parsed, assetManifestPath));
+            final PackerAssetSummary summary = buildSummary(project, assetRoot, registryEntry, parsed);
+            assets.add(summary);
+            events.emit(
+                    PackerEventKind.ASSET_DISCOVERED,
+                    "Discovered asset: " + summary.identity().assetName(),
+                    new PackerProgress(total == 0 ? 1.0d : (index + 1) / (double) total, false),
+                    List.of(summary.identity().assetName()));
+        }
+
+        for (PackerRegistryEntry entry : registry.assets()) {
+            final Path registeredRoot = PackerWorkspacePaths.assetRoot(project, entry.root());
+            if (!discoveredRoots.contains(registeredRoot)) {
+                diagnostics.add(new PackerDiagnostic(
+                        PackerDiagnosticSeverity.ERROR,
+                        PackerDiagnosticCategory.STRUCTURAL,
+                        "Registered asset root is missing asset.json: " + entry.root(),
+                        registeredRoot.resolve("asset.json"),
+                        true));
+            }
+        }
+
+        assets.sort(Comparator
+                .comparing((PackerAssetSummary asset) -> asset.identity().assetRoot().toString(), String.CASE_INSENSITIVE_ORDER)
+                .thenComparing(summary -> summary.identity().assetName(), String.CASE_INSENSITIVE_ORDER));
+        final PackerOperationStatus status = diagnostics.stream().anyMatch(PackerDiagnostic::blocking)
+                ? PackerOperationStatus.PARTIAL
+                : PackerOperationStatus.SUCCESS;
+        if (!diagnostics.isEmpty()) {
+            events.emit(PackerEventKind.DIAGNOSTICS_UPDATED, "Asset scan diagnostics updated.", List.of());
+        }
+        return new ListAssetsResult(
+                status,
+                "Packer asset snapshot ready.",
+                assets,
+                diagnostics);
+    }
+
+    @Override
+    public GetAssetDetailsResult getAssetDetails(GetAssetDetailsRequest request) {
+        return detailsService.getAssetDetails(request);
+    }
+
+    @Override
+    public CreateAssetResult createAsset(CreateAssetRequest request) {
+        final CreateAssetRequest safeRequest = Objects.requireNonNull(request, "request");
+        final PackerProjectContext project = safeRequest.project();
+        final PackerOperationEventEmitter events = new PackerOperationEventEmitter(project, eventSink);
+
+        return writeCoordinator.execute(project, () -> createAssetInWriteLane(safeRequest, events));
+    }
+
+    private CreateAssetResult createAssetInWriteLane(
+            CreateAssetRequest request,
+            PackerOperationEventEmitter events) {
+        final PackerProjectContext project = request.project();
+        workspaceFoundation.initWorkspace(new InitWorkspaceRequest(project));
+
+        final String relativeAssetRoot = normalizeRelativeAssetRoot(request.assetRoot());
+        if (relativeAssetRoot == null) {
+            return failureResult(events, "Asset root must stay inside assets/ and use a non-blank relative path.", null, null, List.of());
+        }
+
+        if (request.assetFamily() == AssetFamilyCatalog.UNKNOWN) {
+            return failureResult(events, "Asset type is required.", null, null, List.of());
+        }
+        if (request.outputFormat() == OutputFormatCatalog.UNKNOWN) {
+            return failureResult(events, "Output format is required.", null, null, List.of());
+        }
+        if (request.outputFormat().assetFamily() != request.assetFamily()) {
+            return failureResult(events, "Output format is not supported for the selected asset type.", null, null, List.of(request.assetName()));
+        }
+        if (!request.outputFormat().supports(request.outputCodec())) {
+            return failureResult(events, "Output codec is not supported for the selected output format.", null, null, List.of(request.assetName()));
+        }
+
+        final Path assetRoot = PackerWorkspacePaths.assetRoot(project, relativeAssetRoot);
+        if (!assetRoot.startsWith(PackerWorkspacePaths.assetsRoot(project))) {
+            return failureResult(events, "Asset root must stay inside assets/.", null, null, List.of());
+        }
+
+        final Path manifestPath = assetRoot.resolve("asset.json");
+        try {
+            final PackerRegistryState registry = workspaceFoundation.loadRegistry(project);
+            final boolean rootAlreadyRegistered = registry.assets().stream()
+                    .map(entry -> PackerWorkspacePaths.assetRoot(project, entry.root()))
+                    .anyMatch(assetRoot::equals);
+            if (rootAlreadyRegistered) {
+                return failureResult(events, "Asset root is already registered.", assetRoot, manifestPath, List.of(relativeAssetRoot));
+            }
+
+            if (Files.isRegularFile(manifestPath)) {
+                return failureResult(events, "Asset root already contains asset.json.", assetRoot, manifestPath, List.of(relativeAssetRoot));
+            }
+            if (Files.isDirectory(assetRoot)) {
+                try (Stream<Path> children = Files.list(assetRoot)) {
+                    if (children.findAny().isPresent()) {
+                        return failureResult(events, "Asset root already exists and is not empty.", assetRoot, manifestPath, List.of(relativeAssetRoot));
+                    }
+                }
+            }
+
+            final PackerRegistryEntry entry = workspaceFoundation.allocateIdentity(project, registry, assetRoot);
+            Files.createDirectories(assetRoot);
+            writeManifest(manifestPath, request, entry.assetUuid());
+            final PackerRegistryState updated = workspaceFoundation.appendAllocatedEntry(registry, entry);
+            workspaceFoundation.saveRegistry(project, updated);
+            runtimeRegistry.refresh(project);
+            final CreateAssetResult result = new CreateAssetResult(
+                    PackerOperationStatus.SUCCESS,
+                    "Asset created: " + relativeAssetRoot,
+                    AssetReference.forAssetId(entry.assetId()),
+                    assetRoot,
+                    manifestPath);
+            events.emit(PackerEventKind.ACTION_APPLIED, result.summary(), List.of(relativeAssetRoot));
+            return result;
+        } catch (IOException exception) {
+            return failureResult(
+                    events,
+                    "Unable to create asset: " + exception.getMessage(),
+                    assetRoot,
+                    manifestPath,
+                    List.of(relativeAssetRoot));
+        }
+    }
+
+    private CreateAssetResult failureResult(
+            PackerOperationEventEmitter events,
+            String summary,
+            Path assetRoot,
+            Path manifestPath,
+            List<String> affectedAssets) {
+        final CreateAssetResult result = new CreateAssetResult(
+                PackerOperationStatus.FAILED,
+                summary,
+                null,
+                assetRoot,
+                manifestPath);
+        events.emit(PackerEventKind.ACTION_FAILED, result.summary(), affectedAssets);
+        return result;
+    }
+
+    private void writeManifest(Path manifestPath, CreateAssetRequest request, String assetUuid) throws IOException {
+        final Map<String, Object> manifest = new LinkedHashMap<>();
+        manifest.put("schema_version", 1);
+        manifest.put("asset_uuid", assetUuid);
+        manifest.put("name", request.assetName());
+        manifest.put("type", request.assetFamily().manifestType());
+        manifest.put("inputs", Map.of());
+        manifest.put("output", Map.of(
+                "format", request.outputFormat().manifestValue(),
+                "codec", request.outputCodec().manifestValue()));
+        manifest.put("preload", Map.of("enabled", request.preloadEnabled()));
+        MAPPER.writerWithDefaultPrettyPrinter().writeValue(manifestPath.toFile(), manifest);
+    }
+
+    private String normalizeRelativeAssetRoot(String candidate) {
+        final String raw = Objects.requireNonNullElse(candidate, "").trim().replace('\\', '/');
+        if (raw.isBlank()) {
+            return null;
+        }
+        final Path normalized = Path.of(raw).normalize();
+        if (normalized.isAbsolute() || normalized.startsWith("..")) {
+            return null;
+        }
+        final String value = normalized.toString().replace('\\', '/');
+        return value.isBlank() ? null : value;
+    }
+
+    private PackerAssetSummary buildSummary(
+            PackerProjectContext project,
+            Path assetRoot,
+            PackerRegistryEntry registryEntry,
+            PackerAssetDeclarationParseResult parsed) {
+        final String assetName = parsed.declaration() != null
+                ? parsed.declaration().name()
+                : assetRoot.getFileName().toString();
+        final AssetFamilyCatalog assetFamily = parsed.declaration() != null
+                ? parsed.declaration().assetFamily()
+                : AssetFamilyCatalog.UNKNOWN;
+        final boolean preload = parsed.declaration() != null && parsed.declaration().preloadEnabled();
+        final boolean hasDiagnostics = !parsed.diagnostics().isEmpty();
+        final PackerAssetState state = registryEntry == null
+                ? PackerAssetState.UNREGISTERED
+                : PackerAssetState.REGISTERED;
+        final PackerBuildParticipation buildParticipation = state == PackerAssetState.REGISTERED
+                ? (registryEntry.includedInBuild() ? PackerBuildParticipation.INCLUDED : PackerBuildParticipation.EXCLUDED)
+                : PackerBuildParticipation.EXCLUDED;
+        return new PackerAssetSummary(
+                assetReferenceFor(project, assetRoot, registryEntry),
+                new PackerAssetIdentity(
+                        registryEntry == null ? null : registryEntry.assetId(),
+                        registryEntry == null
+                                ? parsed.declaration() == null ? null : parsed.declaration().assetUuid()
+                                : registryEntry.assetUuid(),
+                        assetName,
+                        assetRoot),
+                state,
+                buildParticipation,
+                assetFamily,
+                preload,
+                hasDiagnostics);
+    }
+
+    private List<PackerDiagnostic> identityMismatchDiagnostics(
+            PackerRegistryEntry registryEntry,
+            PackerAssetDeclarationParseResult parsed,
+            Path manifestPath) {
+        if (registryEntry == null || parsed.declaration() == null || registryEntry.assetUuid().equals(parsed.declaration().assetUuid())) {
+            return List.of();
+        }
+        return List.of(new PackerDiagnostic(
+                PackerDiagnosticSeverity.ERROR,
+                PackerDiagnosticCategory.STRUCTURAL,
+                "Field 'asset_uuid' does not match the registered asset identity.",
+                manifestPath,
+                true));
+    }
+
+    private AssetReference assetReferenceFor(
+            PackerProjectContext project,
+            Path assetRoot,
+            PackerRegistryEntry registryEntry) {
+        if (registryEntry != null) {
+            return AssetReference.forAssetId(registryEntry.assetId());
+        }
+        return AssetReference.forRelativeAssetRoot(PackerWorkspacePaths.assetsRoot(project)
+                .relativize(assetRoot.toAbsolutePath().normalize())
+                .toString()
+                .replace('\\', '/'));
+    }
+}
@ -1,10 +1,14 @@
-package p.packer.declarations;
+package p.packer.services;

 import com.fasterxml.jackson.databind.JsonNode;
 import com.fasterxml.jackson.databind.ObjectMapper;
-import p.packer.api.diagnostics.PackerDiagnostic;
-import p.packer.api.diagnostics.PackerDiagnosticCategory;
-import p.packer.api.diagnostics.PackerDiagnosticSeverity;
+import p.packer.assets.AssetFamilyCatalog;
+import p.packer.assets.OutputCodecCatalog;
+import p.packer.diagnostics.PackerDiagnostic;
+import p.packer.diagnostics.PackerDiagnosticCategory;
+import p.packer.diagnostics.PackerDiagnosticSeverity;
+import p.packer.models.PackerAssetDeclaration;
+import p.packer.models.PackerAssetDeclarationParseResult;

 import java.io.IOException;
 import java.nio.file.Path;
@ -35,11 +39,12 @@ public final class PackerAssetDeclarationParser {
         }

         final Integer schemaVersion = requiredInt(root, "schema_version", diagnostics, manifestPath);
+        final String assetUuid = requiredText(root, "asset_uuid", diagnostics, manifestPath);
         final String name = requiredText(root, "name", diagnostics, manifestPath);
-        final String type = requiredText(root, "type", diagnostics, manifestPath);
+        final AssetFamilyCatalog assetFamily = requiredAssetFamily(root, diagnostics, manifestPath);
         final Map<String, List<String>> inputsByRole = requiredInputs(root.path("inputs"), diagnostics, manifestPath);
         final String outputFormat = requiredText(root.path("output"), "format", diagnostics, manifestPath);
-        final String outputCodec = requiredText(root.path("output"), "codec", diagnostics, manifestPath);
+        final OutputCodecCatalog outputCodec = requiredOutputCodec(root.path("output"), diagnostics, manifestPath);
         final Boolean preloadEnabled = requiredBoolean(root.path("preload"), "enabled", diagnostics, manifestPath);

         if (schemaVersion != null && schemaVersion != SUPPORTED_SCHEMA_VERSION) {
@ -58,8 +63,9 @@ public final class PackerAssetDeclarationParser {
         return new PackerAssetDeclarationParseResult(
                 new PackerAssetDeclaration(
                         schemaVersion,
+                        assetUuid,
                         name,
-                        type,
+                        assetFamily,
                         inputsByRole,
                         outputFormat,
                         outputCodec,
@ -85,6 +91,24 @@ public final class PackerAssetDeclarationParser {
         return field.asText().trim();
     }

+    private AssetFamilyCatalog requiredAssetFamily(JsonNode node, List<PackerDiagnostic> diagnostics, Path manifestPath) {
+        final String manifestType = requiredText(node, "type", diagnostics, manifestPath);
+        if (manifestType == null) {
+            return null;
+        }
+        final AssetFamilyCatalog assetFamily = AssetFamilyCatalog.fromManifestType(manifestType);
+        if (assetFamily == AssetFamilyCatalog.UNKNOWN) {
+            diagnostics.add(new PackerDiagnostic(
+                    PackerDiagnosticSeverity.ERROR,
+                    PackerDiagnosticCategory.STRUCTURAL,
+                    "Field 'type' must be one of: image_bank, palette_bank, sound_bank.",
+                    manifestPath,
+                    true));
+            return null;
+        }
+        return assetFamily;
+    }
+
     private Boolean requiredBoolean(JsonNode node, String fieldName, List<PackerDiagnostic> diagnostics, Path manifestPath) {
         final JsonNode field = node.path(fieldName);
         if (!field.isBoolean()) {
@ -94,6 +118,24 @@ public final class PackerAssetDeclarationParser {
         return field.booleanValue();
     }

+    private OutputCodecCatalog requiredOutputCodec(JsonNode node, List<PackerDiagnostic> diagnostics, Path manifestPath) {
+        final String codecValue = requiredText(node, "codec", diagnostics, manifestPath);
+        if (codecValue == null) {
+            return null;
+        }
+        final OutputCodecCatalog outputCodec = OutputCodecCatalog.fromManifestValue(codecValue);
+        if (outputCodec == OutputCodecCatalog.UNKNOWN) {
+            diagnostics.add(new PackerDiagnostic(
+                    PackerDiagnosticSeverity.ERROR,
+                    PackerDiagnosticCategory.STRUCTURAL,
+                    "Field 'codec' must be one of: NONE.",
+                    manifestPath,
+                    true));
+            return null;
+        }
+        return outputCodec;
+    }
+
     private Map<String, List<String>> requiredInputs(JsonNode inputsNode, List<PackerDiagnostic> diagnostics, Path manifestPath) {
         if (!inputsNode.isObject()) {
             diagnostics.add(missingOrInvalid("inputs", "object of input roles", manifestPath));
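Based on the fields the parser now requires (`schema_version`, `asset_uuid`, `name`, `type`, `inputs` as an object of roles, `output.format`, `output.codec`, and `preload.enabled`), a conforming `asset.json` might look like the sketch below. The concrete values are illustrative assumptions, not taken from the repository.

```
{
  "schema_version": 1,
  "asset_uuid": "00000000-0000-0000-0000-000000000000",
  "name": "hero-sprites",
  "type": "image_bank",
  "inputs": {
    "frames": ["frames/idle.png", "frames/run.png"]
  },
  "output": {
    "format": "TILES/4BPP",
    "codec": "NONE"
  },
  "preload": {
    "enabled": true
  }
}
```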
@ -0,0 +1,249 @@
package p.packer.services;

import p.packer.PackerOperationStatus;
import p.packer.PackerProjectContext;
import p.packer.assets.AssetFamilyCatalog;
import p.packer.assets.AssetReference;
import p.packer.assets.OutputCodecCatalog;
import p.packer.assets.PackerAssetDetails;
import p.packer.assets.PackerAssetIdentity;
import p.packer.assets.PackerAssetState;
import p.packer.assets.PackerAssetSummary;
import p.packer.assets.PackerBuildParticipation;
import p.packer.diagnostics.PackerDiagnostic;
import p.packer.diagnostics.PackerDiagnosticCategory;
import p.packer.diagnostics.PackerDiagnosticSeverity;
import p.packer.messages.GetAssetDetailsRequest;
import p.packer.messages.GetAssetDetailsResult;
import p.packer.models.PackerAssetDeclaration;
import p.packer.models.PackerAssetDeclarationParseResult;
import p.packer.models.PackerRegistryEntry;
import p.packer.models.PackerRuntimeAsset;
import p.packer.models.PackerRuntimeSnapshot;

import java.nio.file.Path;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;

public final class PackerAssetDetailsService {
    private final PackerRuntimeRegistry runtimeRegistry;

    public PackerAssetDetailsService(PackerRuntimeRegistry runtimeRegistry) {
        this.runtimeRegistry = Objects.requireNonNull(runtimeRegistry, "runtimeRegistry");
    }

    public GetAssetDetailsResult getAssetDetails(GetAssetDetailsRequest request) {
        final PackerProjectContext project = Objects.requireNonNull(request, "request").project();
        final PackerRuntimeSnapshot snapshot = runtimeRegistry.getOrLoad(project).snapshot();
        final ResolvedAssetReference resolved = resolveReference(project, snapshot, request.assetReference());
        final List<PackerDiagnostic> diagnostics = new ArrayList<>(resolved.diagnostics());

        if (resolved.runtimeAsset().isEmpty()) {
            diagnostics.add(new PackerDiagnostic(
                    PackerDiagnosticSeverity.ERROR,
                    PackerDiagnosticCategory.STRUCTURAL,
                    "asset.json was not found for the requested asset root.",
                    resolved.assetRoot().resolve("asset.json"),
                    true));
            return failureResult(project, request.assetReference(), resolved, diagnostics);
        }

        final PackerRuntimeAsset runtimeAsset = resolved.runtimeAsset().get();
        final Path manifestPath = runtimeAsset.manifestPath();
        final PackerAssetDeclarationParseResult parsed = runtimeAsset.parsedDeclaration();
        diagnostics.addAll(parsed.diagnostics());
        if (!parsed.valid()) {
            return failureResult(project, request.assetReference(), resolved, diagnostics);
        }

        final PackerAssetDeclaration declaration = parsed.declaration();
        diagnostics.addAll(identityMismatchDiagnostics(resolved.registryEntry(), declaration, manifestPath));
        final PackerOutputContractCatalog.OutputContractDefinition outputContract = PackerOutputContractCatalog.definitionFor(
                declaration.outputFormat(),
                declaration.outputCodec());
        final PackerAssetState state = resolved.registryEntry().isPresent()
                ? PackerAssetState.REGISTERED
                : PackerAssetState.UNREGISTERED;
        final PackerAssetSummary summary = new PackerAssetSummary(
                canonicalReference(project, resolved.assetRoot(), resolved.registryEntry()),
                new PackerAssetIdentity(
                        resolved.registryEntry().map(PackerRegistryEntry::assetId).orElse(null),
                        resolved.registryEntry().map(PackerRegistryEntry::assetUuid).orElse(declaration.assetUuid()),
                        declaration.name(),
                        resolved.assetRoot()),
                state,
                resolved.registryEntry().map(entry -> entry.includedInBuild()
                        ? PackerBuildParticipation.INCLUDED
                        : PackerBuildParticipation.EXCLUDED).orElse(PackerBuildParticipation.EXCLUDED),
                declaration.assetFamily(),
                declaration.preloadEnabled(),
                !diagnostics.isEmpty());
        final PackerAssetDetails details = new PackerAssetDetails(
                summary,
                declaration.outputFormat(),
                declaration.outputCodec(),
                outputContract.availableCodecs(),
                outputContract.codecConfigurationFieldsByCodec(),
                resolveInputs(resolved.assetRoot(), declaration.inputsByRole()),
                diagnostics);
        return new GetAssetDetailsResult(
                diagnostics.stream().anyMatch(PackerDiagnostic::blocking) ? PackerOperationStatus.PARTIAL : PackerOperationStatus.SUCCESS,
                "Asset details resolved from runtime snapshot.",
                details,
                diagnostics);
    }

    private GetAssetDetailsResult failureResult(
            PackerProjectContext project,
            AssetReference requestedReference,
            ResolvedAssetReference resolved,
            List<PackerDiagnostic> diagnostics) {
        final PackerAssetState state = resolved.registryEntry().isPresent()
                ? PackerAssetState.REGISTERED
                : PackerAssetState.UNREGISTERED;
        final PackerAssetSummary summary = new PackerAssetSummary(
                canonicalReferenceOrRequested(project, requestedReference, resolved),
                new PackerAssetIdentity(
                        resolved.registryEntry().map(PackerRegistryEntry::assetId).orElse(null),
                        resolved.registryEntry().map(PackerRegistryEntry::assetUuid).orElse(null),
                        resolved.assetRoot().getFileName().toString(),
                        resolved.assetRoot()),
                state,
                resolved.registryEntry().map(entry -> entry.includedInBuild()
                        ? PackerBuildParticipation.INCLUDED
                        : PackerBuildParticipation.EXCLUDED).orElse(PackerBuildParticipation.EXCLUDED),
                AssetFamilyCatalog.UNKNOWN,
                false,
                true);
        final PackerAssetDetails details = new PackerAssetDetails(
                summary,
                "unknown",
                OutputCodecCatalog.UNKNOWN,
                List.of(OutputCodecCatalog.NONE),
                Map.of(OutputCodecCatalog.NONE, List.of()),
                Map.of(),
                diagnostics);
        return new GetAssetDetailsResult(
                PackerOperationStatus.FAILED,
                "Asset declaration is invalid or unreadable.",
                details,
                diagnostics);
    }

    private Map<String, List<Path>> resolveInputs(Path assetRoot, Map<String, List<String>> inputsByRole) {
        final Map<String, List<Path>> resolved = new LinkedHashMap<>();
        inputsByRole.forEach((role, inputs) -> resolved.put(
                role,
                inputs.stream().map(input -> assetRoot.resolve(input).toAbsolutePath().normalize()).toList()));
        return Map.copyOf(resolved);
    }

    private List<PackerDiagnostic> identityMismatchDiagnostics(
            Optional<PackerRegistryEntry> registryEntry,
            PackerAssetDeclaration declaration,
            Path manifestPath) {
        if (registryEntry.isEmpty() || registryEntry.get().assetUuid().equals(declaration.assetUuid())) {
            return List.of();
        }
        return List.of(new PackerDiagnostic(
                PackerDiagnosticSeverity.ERROR,
                PackerDiagnosticCategory.STRUCTURAL,
                "Field 'asset_uuid' does not match the registered asset identity.",
                manifestPath,
                true));
    }

    private ResolvedAssetReference resolveReference(PackerProjectContext project, PackerRuntimeSnapshot snapshot, AssetReference assetReference) {
        final PackerRegistryLookup lookup = new PackerRegistryLookup();
        final String reference = Objects.requireNonNull(assetReference, "assetReference").rawValue();
        final Optional<PackerRegistryEntry> byId = parseAssetId(reference).flatMap(assetId -> lookup.findByAssetId(snapshot.registry(), assetId));
        if (byId.isPresent()) {
            final Path assetRoot = PackerWorkspacePaths.assetRoot(project, byId.get().root());
            return new ResolvedAssetReference(assetRoot, byId, findRuntimeAsset(snapshot, assetRoot), List.of());
        }

        final Optional<PackerRegistryEntry> byUuid = lookup.findByAssetUuid(snapshot.registry(), reference);
        if (byUuid.isPresent()) {
            final Path assetRoot = PackerWorkspacePaths.assetRoot(project, byUuid.get().root());
            return new ResolvedAssetReference(assetRoot, byUuid, findRuntimeAsset(snapshot, assetRoot), List.of());
        }

        final Path candidateRoot = PackerWorkspacePaths.assetRoot(project, reference);
        final Optional<PackerRuntimeAsset> runtimeAsset = findRuntimeAsset(snapshot, candidateRoot);
        if (runtimeAsset.isPresent()) {
            return new ResolvedAssetReference(candidateRoot, lookup.findByRoot(project, snapshot.registry(), candidateRoot), runtimeAsset, List.of());
        }

        final Optional<PackerRegistryEntry> registryEntry = lookup.findByRoot(project, snapshot.registry(), candidateRoot);
        if (registryEntry.isPresent()) {
            return new ResolvedAssetReference(candidateRoot, registryEntry, Optional.empty(), List.of());
        }

        return new ResolvedAssetReference(
                candidateRoot,
                Optional.empty(),
                Optional.empty(),
                List.of(new PackerDiagnostic(
                        PackerDiagnosticSeverity.ERROR,
                        PackerDiagnosticCategory.STRUCTURAL,
                        "Requested asset reference could not be resolved.",
                        candidateRoot,
                        true)));
    }

    private AssetReference canonicalReference(
            PackerProjectContext project,
            Path assetRoot,
            Optional<PackerRegistryEntry> registryEntry) {
        if (registryEntry.isPresent()) {
            return AssetReference.forAssetId(registryEntry.get().assetId());
        }
        return AssetReference.forRelativeAssetRoot(PackerWorkspacePaths.assetsRoot(project)
                .relativize(assetRoot.toAbsolutePath().normalize())
                .toString()
                .replace('\\', '/'));
    }

    private AssetReference canonicalReferenceOrRequested(
            PackerProjectContext project,
            AssetReference requestedReference,
            ResolvedAssetReference resolved) {
        if (resolved.registryEntry().isPresent() || resolved.runtimeAsset().isPresent()) {
            return canonicalReference(project, resolved.assetRoot(), resolved.registryEntry());
        }
        return requestedReference;
    }

    private Optional<Integer> parseAssetId(String reference) {
        try {
            return Optional.of(Integer.parseInt(reference.trim()));
        } catch (NumberFormatException ignored) {
            return Optional.empty();
        }
    }

    private Optional<PackerRuntimeAsset> findRuntimeAsset(PackerRuntimeSnapshot snapshot, Path assetRoot) {
        final Path normalizedRoot = Objects.requireNonNull(assetRoot, "assetRoot").toAbsolutePath().normalize();
        return snapshot.assets().stream()
                .filter(candidate -> candidate.assetRoot().equals(normalizedRoot))
                .findFirst();
    }

    private record ResolvedAssetReference(
            Path assetRoot,
            Optional<PackerRegistryEntry> registryEntry,
            Optional<PackerRuntimeAsset> runtimeAsset,
            List<PackerDiagnostic> diagnostics) {

        private ResolvedAssetReference {
            assetRoot = Objects.requireNonNull(assetRoot, "assetRoot").toAbsolutePath().normalize();
            registryEntry = Objects.requireNonNull(registryEntry, "registryEntry");
            runtimeAsset = Objects.requireNonNull(runtimeAsset, "runtimeAsset");
            diagnostics = List.copyOf(Objects.requireNonNull(diagnostics, "diagnostics"));
        }
    }
}
@ -1,6 +1,8 @@
-package p.packer.foundation;
+package p.packer.services;

-import p.packer.api.PackerProjectContext;
+import p.packer.PackerProjectContext;
+import p.packer.models.PackerRegistryEntry;
+import p.packer.models.PackerRegistryState;

 import java.nio.file.Path;
 import java.util.Comparator;
@ -19,7 +21,7 @@ public final class PackerIdentityAllocator {
         final String relativeRoot = PackerWorkspacePaths.relativeAssetRoot(project, normalizedAssetRoot);
         final boolean alreadyRegistered = state.assets().stream().anyMatch(entry -> entry.root().equals(relativeRoot));
         if (alreadyRegistered) {
-            throw new PackerRegistryException("Asset root is already registered: " + relativeRoot);
+            throw new p.packer.exceptions.PackerRegistryException("Asset root is already registered: " + relativeRoot);
         }
         return new PackerRegistryEntry(state.nextAssetId(), UUID.randomUUID().toString(), relativeRoot);
     }
@ -1,10 +1,10 @@
-package p.packer.events;
+package p.packer.services;

-import p.packer.api.PackerProjectContext;
-import p.packer.api.events.PackerEvent;
-import p.packer.api.events.PackerEventKind;
-import p.packer.api.events.PackerEventSink;
-import p.packer.api.events.PackerProgress;
+import p.packer.PackerProjectContext;
+import p.packer.events.PackerEvent;
+import p.packer.events.PackerEventKind;
+import p.packer.events.PackerEventSink;
+import p.packer.events.PackerProgress;

 import java.time.Instant;
 import java.util.List;
@ -0,0 +1,62 @@
package p.packer.services;

import p.packer.assets.OutputCodecCatalog;
import p.packer.assets.OutputFormatCatalog;
import p.packer.assets.PackerCodecConfigurationField;

import java.util.LinkedHashSet;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

final class PackerOutputContractCatalog {

    private PackerOutputContractCatalog() {
    }

    static OutputContractDefinition definitionFor(String outputFormat, OutputCodecCatalog selectedCodec) {
        final List<OutputCodecCatalog> availableCodecs = availableCodecsFor(outputFormat, selectedCodec);
        return new OutputContractDefinition(
                availableCodecs,
                availableCodecs.stream().collect(java.util.stream.Collectors.toMap(
                        codec -> codec,
                        codec -> fieldsFor(outputFormat, codec),
                        (left, right) -> left,
                        java.util.LinkedHashMap::new)));
    }

    static List<OutputCodecCatalog> availableCodecsFor(String outputFormat, OutputCodecCatalog selectedCodec) {
        final Set<OutputCodecCatalog> codecs = new LinkedHashSet<>();
        final OutputFormatCatalog formatCatalog = OutputFormatCatalog.fromManifestValue(outputFormat);
        if (formatCatalog != OutputFormatCatalog.UNKNOWN) {
            codecs.addAll(formatCatalog.availableCodecs());
        } else {
            final String normalizedFormat = Objects.requireNonNullElse(outputFormat, "").trim().toUpperCase(Locale.ROOT);
            if (normalizedFormat.startsWith("TILES/")
                    || normalizedFormat.startsWith("PALETTE/")
                    || normalizedFormat.startsWith("SOUND/")
                    || normalizedFormat.startsWith("AUDIO/")) {
                codecs.add(OutputCodecCatalog.NONE);
            }
        }

        final OutputCodecCatalog normalizedSelectedCodec = Objects.requireNonNullElse(selectedCodec, OutputCodecCatalog.UNKNOWN);
        if (normalizedSelectedCodec != OutputCodecCatalog.UNKNOWN) {
            codecs.add(normalizedSelectedCodec);
        }

        if (codecs.isEmpty()) {
            codecs.add(OutputCodecCatalog.NONE);
        }
        return List.copyOf(codecs);
    }

    private static List<PackerCodecConfigurationField> fieldsFor(String outputFormat, OutputCodecCatalog codec) {
        return List.of();
    }

    record OutputContractDefinition(
            List<OutputCodecCatalog> availableCodecs,
            Map<OutputCodecCatalog, List<PackerCodecConfigurationField>> codecConfigurationFieldsByCodec) {
    }
}
@ -0,0 +1,47 @@
package p.packer.services;

import p.packer.PackerProjectContext;
import p.packer.models.PackerRuntimeSnapshot;

import java.util.Objects;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

public final class PackerProjectRuntime {
    private final PackerProjectContext project;
    private final AtomicReference<PackerRuntimeSnapshot> snapshot;
    private final AtomicBoolean disposed = new AtomicBoolean(false);

    public PackerProjectRuntime(PackerProjectContext project, PackerRuntimeSnapshot initialSnapshot) {
        this.project = Objects.requireNonNull(project, "project");
        this.snapshot = new AtomicReference<>(Objects.requireNonNull(initialSnapshot, "initialSnapshot"));
    }

    public PackerProjectContext project() {
        return project;
    }

    public PackerRuntimeSnapshot snapshot() {
        ensureActive();
        return snapshot.get();
    }

    public void replaceSnapshot(PackerRuntimeSnapshot nextSnapshot) {
        ensureActive();
        snapshot.set(Objects.requireNonNull(nextSnapshot, "nextSnapshot"));
    }

    public boolean disposed() {
        return disposed.get();
    }

    public void dispose() {
        disposed.set(true);
    }

    private void ensureActive() {
        if (disposed.get()) {
            throw new IllegalStateException("Runtime has been disposed for project: " + project.projectId());
        }
    }
}
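The runtime above publishes a whole immutable snapshot through an `AtomicReference`: readers keep a stable view while a reload swaps in a complete replacement. A minimal self-contained sketch of that read model (the `Snapshot` record and its fields are hypothetical stand-ins, not the repository's `PackerRuntimeSnapshot`):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class SnapshotSwapDemo {
    // Hypothetical immutable snapshot: a generation counter plus some derived state.
    record Snapshot(long generation, List<String> assetRoots) { }

    public static void main(String[] args) {
        AtomicReference<Snapshot> current =
                new AtomicReference<>(new Snapshot(1, List.of("assets/hero")));

        // A reader captures the reference once and sees a consistent view.
        Snapshot beforeReload = current.get();

        // A reload publishes a complete replacement; readers are never shown
        // a half-updated structure.
        current.set(new Snapshot(2, List.of("assets/hero", "assets/tiles")));

        System.out.println(beforeReload.generation());  // old view is unchanged
        System.out.println(current.get().generation()); // new view is published
    }
}
```

This is why `replaceSnapshot` can be a single `set`: consistency comes from snapshot immutability, not from locking readers out.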
@ -0,0 +1,84 @@
package p.packer.services;

import p.packer.PackerProjectContext;

import java.io.Closeable;
import java.nio.file.Path;
import java.util.Objects;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public final class PackerProjectWriteCoordinator implements Closeable {
    private final ConcurrentMap<ProjectKey, ExecutorService> executors = new ConcurrentHashMap<>();
    private final AtomicInteger nextThreadId = new AtomicInteger(1);

    public <T> Future<T> submit(PackerProjectContext project, Callable<T> task) {
        final PackerProjectContext safeProject = Objects.requireNonNull(project, "project");
        final Callable<T> safeTask = Objects.requireNonNull(task, "task");
        return executorFor(safeProject).submit(safeTask);
    }

    public <T> T execute(PackerProjectContext project, Callable<T> task) {
        try {
            return submit(project, task).get();
        } catch (InterruptedException exception) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("Write execution was interrupted for project: " + project.projectId(), exception);
        } catch (ExecutionException exception) {
            final Throwable cause = exception.getCause();
            if (cause instanceof RuntimeException runtimeException) {
                throw runtimeException;
            }
            throw new IllegalStateException("Write execution failed for project: " + project.projectId(), cause);
        }
    }

    @Override
    public void close() {
        executors.values().forEach(executor -> {
            executor.shutdownNow();
            try {
                executor.awaitTermination(5, TimeUnit.SECONDS);
            } catch (InterruptedException exception) {
                Thread.currentThread().interrupt();
            }
        });
        executors.clear();
    }

    private ExecutorService executorFor(PackerProjectContext project) {
        final ProjectKey key = ProjectKey.from(project);
        return executors.computeIfAbsent(key, ignored -> new ThreadPoolExecutor(
                1,
                1,
                0L,
                TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>(),
                runnable -> {
                    final Thread thread = new Thread(
                            runnable,
                            "packer-write-" + project.projectId() + "-" + nextThreadId.getAndIncrement());
                    thread.setDaemon(true);
                    return thread;
                }));
    }

    private record ProjectKey(
            String projectId,
            Path rootPath) {

        private static ProjectKey from(PackerProjectContext project) {
            return new ProjectKey(
                    Objects.requireNonNull(project.projectId(), "projectId"),
                    Objects.requireNonNull(project.rootPath(), "rootPath").toAbsolutePath().normalize());
        }
    }
}
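The coordinator above implements a per-key single-writer pattern: each project gets its own single-thread executor, so writes within one project are strictly serialized while unrelated projects proceed in parallel. A minimal self-contained sketch of the same pattern (`PerProjectSerializer` and the string project key are hypothetical simplifications of the repository's classes):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PerProjectSerializer {
    // One single-thread executor per project key: submissions for the same
    // key run in order on one thread; different keys use different threads.
    private final ConcurrentMap<String, ExecutorService> executors = new ConcurrentHashMap<>();

    public <T> T execute(String projectId, Callable<T> task) throws Exception {
        ExecutorService executor = executors.computeIfAbsent(
                projectId, id -> Executors.newSingleThreadExecutor());
        return executor.submit(task).get(); // blocks until the serialized task completes
    }

    public static void main(String[] args) throws Exception {
        PerProjectSerializer serializer = new PerProjectSerializer();
        int[] counter = {0};
        // Two writes against the same project run strictly one after another,
        // so the increments cannot interleave.
        serializer.execute("demo", () -> counter[0]++);
        serializer.execute("demo", () -> counter[0]++);
        System.out.println(counter[0]); // prints 2
        System.exit(0); // the demo's executor threads are non-daemon
    }
}
```

The production class adds what the sketch omits: a composite key (project id plus normalized root path), daemon threads with descriptive names, unwrapping of `ExecutionException`, and an orderly `close()`.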
@ -1,6 +1,8 @@
-package p.packer.foundation;
+package p.packer.services;

-import p.packer.api.PackerProjectContext;
+import p.packer.PackerProjectContext;
+import p.packer.models.PackerRegistryEntry;
+import p.packer.models.PackerRegistryState;

 import java.nio.file.Files;
 import java.nio.file.Path;
@ -31,7 +33,7 @@ public final class PackerRegistryLookup {
         Objects.requireNonNull(entry, "entry");
         final Path assetRoot = PackerWorkspacePaths.assetRoot(project, entry.root());
         if (!Files.isDirectory(assetRoot)) {
-            throw new PackerRegistryException("Registered asset root does not exist: " + entry.root());
+            throw new p.packer.exceptions.PackerRegistryException("Registered asset root does not exist: " + entry.root());
         }
         return assetRoot;
     }
@ -1,6 +1,7 @@
-package p.packer.foundation;
+package p.packer.services;

-import p.packer.api.PackerProjectContext;
+import p.packer.PackerProjectContext;
+import p.packer.models.PackerRegistryState;

 public interface PackerRegistryRepository {
     PackerRegistryState load(PackerProjectContext project);
```diff
@@ -0,0 +1,70 @@
+package p.packer.services;
+
+import p.packer.PackerProjectContext;
+import p.packer.messages.InitWorkspaceRequest;
+import p.packer.models.PackerRegistryEntry;
+import p.packer.models.PackerRegistryState;
+import p.packer.models.PackerRuntimeAsset;
+import p.packer.models.PackerRuntimeSnapshot;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+public final class PackerRuntimeLoader {
+    private final PackerWorkspaceFoundation workspaceFoundation;
+    private final PackerAssetDeclarationParser parser;
+
+    public PackerRuntimeLoader(
+            PackerWorkspaceFoundation workspaceFoundation,
+            PackerAssetDeclarationParser parser) {
+        this.workspaceFoundation = Objects.requireNonNull(workspaceFoundation, "workspaceFoundation");
+        this.parser = Objects.requireNonNull(parser, "parser");
+    }
+
+    public PackerRuntimeSnapshot load(PackerProjectContext project, long generation) {
+        final PackerProjectContext safeProject = Objects.requireNonNull(project, "project");
+        workspaceFoundation.initWorkspace(new InitWorkspaceRequest(safeProject));
+
+        final PackerRegistryState registry = workspaceFoundation.loadRegistry(safeProject);
+        final Map<Path, PackerRegistryEntry> registryByRoot = registry.assets().stream()
+            .collect(Collectors.toMap(
+                entry -> PackerWorkspacePaths.assetRoot(safeProject, entry.root()),
+                entry -> entry));
+
+        final List<PackerRuntimeAsset> assets = new ArrayList<>();
+        final Path assetsRoot = PackerWorkspacePaths.assetsRoot(safeProject);
+        if (Files.isDirectory(assetsRoot)) {
+            try (Stream<Path> paths = Files.find(assetsRoot, Integer.MAX_VALUE, (path, attrs) ->
+                    attrs.isRegularFile() && path.getFileName().toString().equals("asset.json"))) {
+                final List<Path> manifests = paths
+                    .map(path -> path.toAbsolutePath().normalize())
+                    .sorted(Comparator.naturalOrder())
+                    .toList();
+                for (Path manifestPath : manifests) {
+                    final Path assetRoot = manifestPath.getParent();
+                    final Optional<PackerRegistryEntry> registryEntry = Optional.ofNullable(registryByRoot.get(assetRoot));
+                    assets.add(new PackerRuntimeAsset(
+                        assetRoot,
+                        manifestPath,
+                        registryEntry,
+                        parser.parse(manifestPath)));
+                }
+            } catch (IOException exception) {
+                throw new p.packer.exceptions.PackerRegistryException(
+                    "Unable to build runtime snapshot for " + safeProject.projectId(),
+                    exception);
+            }
+        }
+
+        return new PackerRuntimeSnapshot(generation, registry, assets);
+    }
+}
```
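`PackerRuntimeLoader` above builds its snapshot by scanning the workspace for files literally named `asset.json`, then normalizing and sorting the hits so the snapshot order is deterministic. The scan step can be shown in isolation; this is a hedged sketch with illustrative names (`ManifestScanSketch`, `findManifests`), not the packer's real API:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Stream;

public final class ManifestScanSketch {
    // Walk assetsRoot, keep only regular files named "asset.json",
    // normalize the paths, and sort so ordering is stable across runs.
    static List<Path> findManifests(Path assetsRoot) throws IOException {
        if (!Files.isDirectory(assetsRoot)) {
            return List.of();
        }
        try (Stream<Path> paths = Files.find(assetsRoot, Integer.MAX_VALUE, (path, attrs) ->
                attrs.isRegularFile() && path.getFileName().toString().equals("asset.json"))) {
            return paths
                    .map(path -> path.toAbsolutePath().normalize())
                    .sorted(Comparator.naturalOrder())
                    .toList();
        }
    }

    public static void main(String[] args) throws IOException {
        final Path root = Files.createTempDirectory("assets");
        Files.createDirectories(root.resolve("ui/atlas"));
        Files.createDirectories(root.resolve("audio/bank"));
        Files.writeString(root.resolve("ui/atlas/asset.json"), "{}");
        Files.writeString(root.resolve("audio/bank/asset.json"), "{}");
        Files.writeString(root.resolve("ui/atlas/notes.txt"), "ignored");

        final List<Path> manifests = findManifests(root);
        System.out.println(manifests.size());
        // "audio/…" sorts before "ui/…", so the bank manifest comes first.
        System.out.println(manifests.get(0).getParent().getFileName());
    }
}
```

Closing the `Stream` via try-with-resources matters here: `Files.find` holds directory handles open until the stream is closed.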
```diff
@@ -0,0 +1,71 @@
+package p.packer.services;
+
+import p.packer.PackerProjectContext;
+
+import java.nio.file.Path;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.atomic.AtomicLong;
+
+public final class PackerRuntimeRegistry {
+    private final PackerRuntimeLoader loader;
+    private final ConcurrentMap<ProjectKey, PackerProjectRuntime> runtimes = new ConcurrentHashMap<>();
+    private final AtomicLong nextGeneration = new AtomicLong(1L);
+
+    public PackerRuntimeRegistry(PackerRuntimeLoader loader) {
+        this.loader = Objects.requireNonNull(loader, "loader");
+    }
+
+    public PackerProjectRuntime getOrLoad(PackerProjectContext project) {
+        final PackerProjectContext safeProject = Objects.requireNonNull(project, "project");
+        final ProjectKey key = ProjectKey.from(safeProject);
+        return runtimes.compute(key, (ignored, current) -> {
+            if (current != null && !current.disposed()) {
+                return current;
+            }
+            return new PackerProjectRuntime(safeProject, loader.load(safeProject, nextGeneration.getAndIncrement()));
+        });
+    }
+
+    public Optional<PackerProjectRuntime> find(PackerProjectContext project) {
+        return Optional.ofNullable(runtimes.get(ProjectKey.from(Objects.requireNonNull(project, "project"))));
+    }
+
+    public PackerProjectRuntime refresh(PackerProjectContext project) {
+        final PackerProjectContext safeProject = Objects.requireNonNull(project, "project");
+        final ProjectKey key = ProjectKey.from(safeProject);
+        return runtimes.compute(key, (ignored, current) -> {
+            final var snapshot = loader.load(safeProject, nextGeneration.getAndIncrement());
+            if (current == null || current.disposed()) {
+                return new PackerProjectRuntime(safeProject, snapshot);
+            }
+            current.replaceSnapshot(snapshot);
+            return current;
+        });
+    }
+
+    public void dispose(PackerProjectContext project) {
+        final PackerProjectRuntime removed = runtimes.remove(ProjectKey.from(Objects.requireNonNull(project, "project")));
+        if (removed != null) {
+            removed.dispose();
+        }
+    }
+
+    public void disposeAll() {
+        runtimes.values().forEach(PackerProjectRuntime::dispose);
+        runtimes.clear();
+    }
+
+    private record ProjectKey(
+        String projectId,
+        Path rootPath) {
+
+        private static ProjectKey from(PackerProjectContext project) {
+            return new ProjectKey(
+                Objects.requireNonNull(project.projectId(), "projectId"),
+                Objects.requireNonNull(project.rootPath(), "rootPath").toAbsolutePath().normalize());
+        }
+    }
+}
```
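The core of `PackerRuntimeRegistry` above is the combination of `ConcurrentMap.compute` (per-key atomicity, so at most one loader runs for a project at a time) and a monotonic `AtomicLong` that stamps every snapshot with a generation. A minimal standalone sketch of that pattern, with simplified stand-in types (`Snapshot`, `Runtime`) rather than the real packer classes:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public final class RuntimeRegistrySketch {
    record Snapshot(long generation) {}

    static final class Runtime {
        volatile Snapshot snapshot;          // swapped on refresh, identity preserved
        Runtime(Snapshot s) { this.snapshot = s; }
    }

    private final ConcurrentMap<String, Runtime> runtimes = new ConcurrentHashMap<>();
    private final AtomicLong nextGeneration = new AtomicLong(1L);

    Runtime getOrLoad(String key) {
        // compute() runs the remapping function atomically per key,
        // so a second caller waits instead of loading twice.
        return runtimes.compute(key, (k, current) ->
                current != null ? current : new Runtime(new Snapshot(nextGeneration.getAndIncrement())));
    }

    Runtime refresh(String key) {
        return runtimes.compute(key, (k, current) -> {
            final Snapshot fresh = new Snapshot(nextGeneration.getAndIncrement());
            if (current == null) {
                return new Runtime(fresh);
            }
            current.snapshot = fresh;        // replace the snapshot, keep the runtime object
            return current;
        });
    }

    public static void main(String[] args) {
        final RuntimeRegistrySketch registry = new RuntimeRegistrySketch();
        final Runtime first = registry.getOrLoad("main");
        final Runtime cached = registry.getOrLoad("main");
        System.out.println(first == cached);                 // cached instance reused
        final Runtime refreshed = registry.refresh("main");
        System.out.println(refreshed == first);              // identity kept across refresh
        System.out.println(refreshed.snapshot.generation()); // generation advanced to 2
    }
}
```

Keeping the `Runtime` identity stable while swapping the snapshot lets long-lived holders (e.g. Studio sessions) observe refreshes without re-resolving the runtime.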
```diff
@@ -1,9 +1,11 @@
-package p.packer.foundation;
+package p.packer.services;
 
-import p.packer.api.PackerOperationStatus;
+import p.packer.PackerOperationStatus;
-import p.packer.api.PackerProjectContext;
+import p.packer.PackerProjectContext;
-import p.packer.api.workspace.InitWorkspaceRequest;
+import p.packer.messages.InitWorkspaceRequest;
-import p.packer.api.workspace.InitWorkspaceResult;
+import p.packer.messages.InitWorkspaceResult;
+import p.packer.models.PackerRegistryEntry;
+import p.packer.models.PackerRegistryState;
 
 import java.nio.file.Files;
 import java.util.List;
@@ -40,7 +42,7 @@ public final class PackerWorkspaceFoundation {
                 PackerWorkspacePaths.registryPath(project),
                 List.of());
         } catch (Exception exception) {
-            throw new PackerRegistryException("Unable to initialize workspace for " + project.projectId(), exception);
+            throw new p.packer.exceptions.PackerRegistryException("Unable to initialize workspace for " + project.projectId(), exception);
         }
     }
 
```
```diff
@@ -1,6 +1,6 @@
-package p.packer.foundation;
+package p.packer.services;
 
-import p.packer.api.PackerProjectContext;
+import p.packer.PackerProjectContext;
 
 import java.nio.file.Path;
 import java.util.Objects;
```
```diff
@@ -0,0 +1,231 @@
+package p.packer.services;
+
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.io.TempDir;
+import p.packer.PackerOperationStatus;
+import p.packer.PackerProjectContext;
+import p.packer.assets.AssetFamilyCatalog;
+import p.packer.assets.OutputCodecCatalog;
+import p.packer.assets.OutputFormatCatalog;
+import p.packer.assets.AssetReference;
+import p.packer.assets.PackerBuildParticipation;
+import p.packer.assets.PackerAssetState;
+import p.packer.events.PackerEvent;
+import p.packer.events.PackerEventKind;
+import p.packer.messages.CreateAssetRequest;
+import p.packer.messages.CreateAssetResult;
+import p.packer.messages.GetAssetDetailsRequest;
+import p.packer.messages.ListAssetsRequest;
+import p.packer.testing.PackerFixtureLocator;
+
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.List;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+
+import static org.junit.jupiter.api.Assertions.*;
+
+final class FileSystemPackerWorkspaceServiceTest {
+    @TempDir
+    Path tempDir;
+
+    @Test
+    void listsRegisteredAndUnregisteredAssetsFromWorkspaceScan() throws Exception {
+        final Path projectRoot = copyFixture("workspaces/read-mixed", tempDir.resolve("mixed"));
+        final FileSystemPackerWorkspaceService service = service();
+
+        final var result = service.listAssets(new ListAssetsRequest(project(projectRoot)));
+
+        assertEquals(PackerOperationStatus.SUCCESS, result.status());
+        assertEquals(2, result.assets().size());
+        assertTrue(result.assets().stream().anyMatch(asset -> asset.state() == PackerAssetState.REGISTERED));
+        assertTrue(result.assets().stream().anyMatch(asset -> asset.state() == PackerAssetState.UNREGISTERED));
+        assertTrue(result.assets().stream().allMatch(asset -> asset.identity().assetUuid() != null && !asset.identity().assetUuid().isBlank()));
+    }
+
+    @Test
+    void surfacesMissingRegisteredRootAsStructuralDiagnostic() throws Exception {
+        final Path projectRoot = copyFixture("workspaces/read-missing-root", tempDir.resolve("missing-root"));
+        final FileSystemPackerWorkspaceService service = service();
+
+        final var result = service.listAssets(new ListAssetsRequest(project(projectRoot)));
+
+        assertEquals(PackerOperationStatus.PARTIAL, result.status());
+        assertTrue(result.diagnostics().stream().anyMatch(diagnostic -> diagnostic.message().contains("missing asset.json")));
+    }
+
+    @Test
+    void includesInvalidDeclarationsInSnapshotWithDiagnostics() throws Exception {
+        final Path projectRoot = copyFixture("workspaces/read-invalid", tempDir.resolve("invalid"));
+        final FileSystemPackerWorkspaceService service = service();
+
+        final var result = service.listAssets(new ListAssetsRequest(project(projectRoot)));
+
+        assertEquals(1, result.assets().size());
+        assertEquals(PackerAssetState.UNREGISTERED, result.assets().getFirst().state());
+        assertEquals(PackerBuildParticipation.EXCLUDED, result.assets().getFirst().buildParticipation());
+        assertTrue(result.assets().getFirst().hasDiagnostics());
+    }
+
+    @Test
+    void emitsDiscoveryAndDiagnosticsEventsDuringScan() throws Exception {
+        final Path projectRoot = copyFixture("workspaces/read-invalid", tempDir.resolve("events"));
+        final List<PackerEvent> events = new CopyOnWriteArrayList<>();
+        final FileSystemPackerWorkspaceService service = service(events::add);
+
+        service.listAssets(new ListAssetsRequest(project(projectRoot)));
+
+        assertTrue(events.stream().anyMatch(event -> event.kind() == PackerEventKind.ASSET_DISCOVERED));
+        assertTrue(events.stream().anyMatch(event -> event.kind() == PackerEventKind.DIAGNOSTICS_UPDATED));
+        assertTrue(events.stream().allMatch(event -> event.sequence() >= 0L));
+    }
+
+    @Test
+    void createsRegisteredAssetAndWritesManifest() throws Exception {
+        final Path projectRoot = tempDir.resolve("created");
+        final List<PackerEvent> events = new CopyOnWriteArrayList<>();
+        final FileSystemPackerWorkspaceService service = service(events::add);
+
+        final var result = service.createAsset(new CreateAssetRequest(
+            project(projectRoot),
+            "ui/new-atlas",
+            "new_atlas",
+            AssetFamilyCatalog.IMAGE_BANK,
+            OutputFormatCatalog.TILES_INDEXED_V1,
+            OutputCodecCatalog.NONE,
+            true));
+
+        assertEquals(PackerOperationStatus.SUCCESS, result.status());
+        assertNotNull(result.assetReference());
+        assertTrue(Files.isRegularFile(projectRoot.resolve("assets/ui/new-atlas/asset.json")));
+        final String manifestJson = Files.readString(projectRoot.resolve("assets/ui/new-atlas/asset.json"));
+        assertTrue(manifestJson.contains("\"asset_uuid\""));
+
+        final var snapshot = service.listAssets(new ListAssetsRequest(project(projectRoot)));
+        assertEquals(1, snapshot.assets().size());
+        assertEquals(PackerAssetState.REGISTERED, snapshot.assets().getFirst().state());
+        assertEquals("new_atlas", snapshot.assets().getFirst().identity().assetName());
+        assertNotNull(snapshot.assets().getFirst().identity().assetUuid());
+        assertEquals(PackerBuildParticipation.INCLUDED, snapshot.assets().getFirst().buildParticipation());
+        assertTrue(events.stream().anyMatch(event -> event.kind() == PackerEventKind.ACTION_APPLIED));
+    }
+
+    @Test
+    void returnsCreatedAssetThroughRuntimeBackedDetailsWithoutRescanMismatch() throws Exception {
+        final Path projectRoot = tempDir.resolve("created-details");
+        final FileSystemPackerWorkspaceService service = service();
+
+        final CreateAssetResult createResult = service.createAsset(new CreateAssetRequest(
+            project(projectRoot),
+            "ui/new-atlas",
+            "new_atlas",
+            AssetFamilyCatalog.IMAGE_BANK,
+            OutputFormatCatalog.TILES_INDEXED_V1,
+            OutputCodecCatalog.NONE,
+            true));
+
+        assertEquals(PackerOperationStatus.SUCCESS, createResult.status());
+        final AssetReference assetReference = createResult.assetReference();
+        assertNotNull(assetReference);
+
+        final var detailsResult = service.getAssetDetails(new GetAssetDetailsRequest(
+            project(projectRoot),
+            assetReference));
+
+        assertEquals(PackerOperationStatus.SUCCESS, detailsResult.status());
+        assertEquals(PackerAssetState.REGISTERED, detailsResult.details().summary().state());
+        assertEquals("new_atlas", detailsResult.details().summary().identity().assetName());
+        assertNotNull(detailsResult.details().summary().identity().assetUuid());
+        assertTrue(detailsResult.diagnostics().isEmpty());
+    }
+
+    @Test
+    void rejectsUnsupportedFormatForSelectedFamily() {
+        final Path projectRoot = tempDir.resolve("unsupported");
+        final List<PackerEvent> events = new CopyOnWriteArrayList<>();
+        final FileSystemPackerWorkspaceService service = service(events::add);
+
+        final var result = service.createAsset(new CreateAssetRequest(
+            project(projectRoot),
+            "audio/bad",
+            "bad_asset",
+            AssetFamilyCatalog.IMAGE_BANK,
+            OutputFormatCatalog.SOUND_BANK_V1,
+            OutputCodecCatalog.NONE,
+            false));
+
+        assertEquals(PackerOperationStatus.FAILED, result.status());
+        assertNull(result.assetReference());
+        assertTrue(result.summary().contains("not supported"));
+        assertTrue(events.stream().anyMatch(event -> event.kind() == PackerEventKind.ACTION_FAILED));
+    }
+
+    @Test
+    void serializesConcurrentCreateAssetRequestsPerProject() throws Exception {
+        final Path projectRoot = tempDir.resolve("concurrent-create");
+        final FileSystemPackerWorkspaceService service = service();
+        try (var executor = Executors.newFixedThreadPool(2)) {
+            final Future<CreateAssetResult> first = executor.submit(() -> service.createAsset(new CreateAssetRequest(
+                project(projectRoot),
+                "ui/atlas-a",
+                "atlas_a",
+                AssetFamilyCatalog.IMAGE_BANK,
+                OutputFormatCatalog.TILES_INDEXED_V1,
+                OutputCodecCatalog.NONE,
+                true)));
+            final Future<CreateAssetResult> second = executor.submit(() -> service.createAsset(new CreateAssetRequest(
+                project(projectRoot),
+                "ui/atlas-b",
+                "atlas_b",
+                AssetFamilyCatalog.IMAGE_BANK,
+                OutputFormatCatalog.TILES_INDEXED_V1,
+                OutputCodecCatalog.NONE,
+                false)));
+
+            assertEquals(PackerOperationStatus.SUCCESS, first.get().status());
+            assertEquals(PackerOperationStatus.SUCCESS, second.get().status());
+        }
+
+        final var snapshot = service.listAssets(new ListAssetsRequest(project(projectRoot)));
+        assertEquals(2, snapshot.assets().size());
+        assertEquals(2, new HashSet<>(snapshot.assets().stream().map(asset -> asset.identity().assetId()).toList()).size());
+    }
+
+    private PackerProjectContext project(Path root) {
+        return new PackerProjectContext("main", root);
+    }
+
+    private FileSystemPackerWorkspaceService service() {
+        return service(ignored -> {
+        });
+    }
+
+    private FileSystemPackerWorkspaceService service(p.packer.events.PackerEventSink eventSink) {
+        final var foundation = new p.packer.services.PackerWorkspaceFoundation();
+        final var parser = new p.packer.services.PackerAssetDeclarationParser();
+        final var runtimeRegistry = new p.packer.services.PackerRuntimeRegistry(new p.packer.services.PackerRuntimeLoader(foundation, parser));
+        final var detailsService = new p.packer.services.PackerAssetDetailsService(runtimeRegistry);
+        final var writeCoordinator = new p.packer.services.PackerProjectWriteCoordinator();
+        return new FileSystemPackerWorkspaceService(foundation, detailsService, runtimeRegistry, writeCoordinator, eventSink);
+    }
+
+    private Path copyFixture(String relativePath, Path targetRoot) throws Exception {
+        final Path sourceRoot = PackerFixtureLocator.fixtureRoot(relativePath);
+        try (var stream = Files.walk(sourceRoot)) {
+            for (Path source : stream.sorted(Comparator.naturalOrder()).toList()) {
+                final Path target = targetRoot.resolve(sourceRoot.relativize(source).toString());
+                if (Files.isDirectory(source)) {
+                    Files.createDirectories(target);
+                } else {
+                    Files.createDirectories(target.getParent());
+                    Files.copy(source, target);
+                }
+            }
+        }
+        return targetRoot;
+    }
+}
```
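The `serializesConcurrentCreateAssetRequestsPerProject` test above depends on writes to the same project being serialized. The diff does not show `PackerProjectWriteCoordinator` itself, so this is only a hedged sketch of one way such a coordinator could work (a single-threaded executor lane per project key; the real class's API may differ):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class WriteCoordinatorSketch implements AutoCloseable {
    // One single-threaded executor per project key: same-project writes run
    // in submission order, different projects proceed in parallel.
    private final ConcurrentMap<String, ExecutorService> lanes = new ConcurrentHashMap<>();

    <T> Future<T> submit(String projectKey, Callable<T> write) {
        final ExecutorService lane =
                lanes.computeIfAbsent(projectKey, k -> Executors.newSingleThreadExecutor());
        return lane.submit(write);
    }

    @Override
    public void close() {
        lanes.values().forEach(ExecutorService::shutdown);
    }

    public static void main(String[] args) throws Exception {
        final List<String> trace = new CopyOnWriteArrayList<>();
        try (WriteCoordinatorSketch coordinator = new WriteCoordinatorSketch()) {
            final Future<?> a = coordinator.submit("main", () -> trace.add("first"));
            final Future<?> b = coordinator.submit("main", () -> trace.add("second"));
            a.get();
            b.get();
        }
        System.out.println(trace); // same-project submission order preserved
    }
}
```

Serializing per project rather than globally keeps the filesystem-first model safe (no interleaved registry writes) without making unrelated projects wait on each other.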
```diff
@@ -1,8 +1,10 @@
-package p.packer.declarations;
+package p.packer.services;
 
 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.api.io.TempDir;
-import p.packer.api.diagnostics.PackerDiagnosticCategory;
+import p.packer.assets.AssetFamilyCatalog;
+import p.packer.assets.OutputCodecCatalog;
+import p.packer.diagnostics.PackerDiagnosticCategory;
 import p.packer.testing.PackerFixtureLocator;
 
 import java.nio.file.Files;
@@ -23,10 +25,11 @@ final class PackerAssetDeclarationParserTest {
         assertTrue(result.valid());
         assertNotNull(result.declaration());
         assertEquals(1, result.declaration().schemaVersion());
+        assertEquals("fixture-uuid-1", result.declaration().assetUuid());
         assertEquals("ui_atlas", result.declaration().name());
-        assertEquals("image_bank", result.declaration().type());
+        assertEquals(AssetFamilyCatalog.IMAGE_BANK, result.declaration().assetFamily());
         assertEquals("TILES/indexed_v1", result.declaration().outputFormat());
-        assertEquals("RAW", result.declaration().outputCodec());
+        assertEquals(OutputCodecCatalog.NONE, result.declaration().outputCodec());
         assertTrue(result.declaration().preloadEnabled());
     }
 
@@ -45,6 +48,7 @@ final class PackerAssetDeclarationParserTest {
 
         assertFalse(result.valid());
         assertTrue(result.diagnostics().stream().anyMatch(diagnostic -> diagnostic.message().contains("name")));
+        assertTrue(result.diagnostics().stream().anyMatch(diagnostic -> diagnostic.message().contains("asset_uuid")));
         assertTrue(result.diagnostics().stream().anyMatch(diagnostic -> diagnostic.message().contains("format")));
     }
 
@@ -62,10 +66,11 @@ final class PackerAssetDeclarationParserTest {
         Files.writeString(manifest, """
             {
               "schema_version": 1,
+              "asset_uuid": "uuid-outside",
               "name": "bad_asset",
               "type": "image_bank",
               "inputs": { "sprites": ["../outside.png"] },
-              "output": { "format": "TILES/indexed_v1", "codec": "RAW" },
+              "output": { "format": "TILES/indexed_v1", "codec": "NONE" },
              "preload": { "enabled": true }
             }
             """);
@@ -75,4 +80,25 @@ final class PackerAssetDeclarationParserTest {
         assertFalse(result.valid());
         assertTrue(result.diagnostics().stream().anyMatch(diagnostic -> diagnostic.message().contains("untrusted path")));
     }
+
+    @Test
+    void rejectsUnknownAssetFamily() throws Exception {
+        final Path manifest = tempDir.resolve("asset.json");
+        Files.writeString(manifest, """
+            {
+              "schema_version": 1,
+              "asset_uuid": "uuid-video",
+              "name": "bad_asset",
+              "type": "video_bank",
+              "inputs": { "sprites": ["atlas.png"] },
+              "output": { "format": "TILES/indexed_v1", "codec": "NONE" },
+              "preload": { "enabled": true }
+            }
+            """);
+
+        final var result = parser.parse(manifest);
+
+        assertFalse(result.valid());
+        assertTrue(result.diagnostics().stream().anyMatch(diagnostic -> diagnostic.message().contains("Field 'type' must be one of")));
+    }
 }
```
```diff
@@ -1,18 +1,20 @@
-package p.packer.declarations;
+package p.packer.services;
 
 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.api.io.TempDir;
-import p.packer.api.PackerOperationStatus;
+import p.packer.PackerOperationStatus;
-import p.packer.api.PackerProjectContext;
+import p.packer.PackerProjectContext;
-import p.packer.api.assets.PackerBuildParticipation;
+import p.packer.assets.AssetReference;
-import p.packer.api.assets.PackerAssetState;
+import p.packer.assets.PackerBuildParticipation;
-import p.packer.api.workspace.GetAssetDetailsRequest;
+import p.packer.assets.OutputCodecCatalog;
+import p.packer.assets.PackerAssetState;
+import p.packer.messages.GetAssetDetailsRequest;
 import p.packer.testing.PackerFixtureLocator;
 
-import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.util.Comparator;
+import java.util.List;
 
 import static org.junit.jupiter.api.Assertions.*;
 
@@ -23,37 +25,43 @@ final class PackerAssetDetailsServiceTest {
     @Test
     void returnsRegisteredDetailsForRegisteredAssetReferenceById() throws Exception {
         final Path projectRoot = copyFixture("workspaces/managed-basic", tempDir.resolve("managed"));
-        final PackerAssetDetailsService service = new PackerAssetDetailsService();
+        final PackerAssetDetailsService service = service();
 
-        final var result = service.getAssetDetails(new GetAssetDetailsRequest(project(projectRoot), "1"));
+        final var result = service.getAssetDetails(new GetAssetDetailsRequest(project(projectRoot), AssetReference.forAssetId(1)));
 
         assertEquals(PackerOperationStatus.SUCCESS, result.status());
         assertEquals(PackerAssetState.REGISTERED, result.details().summary().state());
         assertEquals(PackerBuildParticipation.INCLUDED, result.details().summary().buildParticipation());
+        assertEquals("fixture-uuid-1", result.details().summary().identity().assetUuid());
         assertEquals("ui_atlas", result.details().summary().identity().assetName());
         assertEquals("TILES/indexed_v1", result.details().outputFormat());
+        assertEquals(List.of(OutputCodecCatalog.NONE), result.details().availableOutputCodecs());
+        assertEquals(List.of(), result.details().codecConfigurationFieldsByCodec().get(OutputCodecCatalog.NONE));
         assertTrue(result.diagnostics().isEmpty());
     }
 
     @Test
     void returnsUnregisteredDetailsForValidUnregisteredRootReference() throws Exception {
         final Path projectRoot = copyFixture("workspaces/orphan-valid", tempDir.resolve("orphan"));
-        final PackerAssetDetailsService service = new PackerAssetDetailsService();
+        final PackerAssetDetailsService service = service();
 
-        final var result = service.getAssetDetails(new GetAssetDetailsRequest(project(projectRoot), "orphans/ui_sounds"));
+        final var result = service.getAssetDetails(new GetAssetDetailsRequest(project(projectRoot), AssetReference.forRelativeAssetRoot("orphans/ui_sounds")));
 
         assertEquals(PackerOperationStatus.SUCCESS, result.status());
         assertEquals(PackerAssetState.UNREGISTERED, result.details().summary().state());
         assertEquals(PackerBuildParticipation.EXCLUDED, result.details().summary().buildParticipation());
+        assertEquals("orphan-uuid-1", result.details().summary().identity().assetUuid());
         assertEquals("ui_sounds", result.details().summary().identity().assetName());
+        assertEquals(List.of(OutputCodecCatalog.NONE), result.details().availableOutputCodecs());
+        assertEquals(List.of(), result.details().codecConfigurationFieldsByCodec().get(OutputCodecCatalog.NONE));
     }
 
     @Test
     void returnsInvalidDetailsForInvalidDeclaration() throws Exception {
         final Path projectRoot = copyFixture("workspaces/invalid-missing-fields", tempDir.resolve("invalid"));
-        final PackerAssetDetailsService service = new PackerAssetDetailsService();
+        final PackerAssetDetailsService service = service();
 
-        final var result = service.getAssetDetails(new GetAssetDetailsRequest(project(projectRoot), "bad"));
+        final var result = service.getAssetDetails(new GetAssetDetailsRequest(project(projectRoot), AssetReference.forRelativeAssetRoot("bad")));
 
         assertEquals(PackerOperationStatus.FAILED, result.status());
         assertEquals(PackerAssetState.UNREGISTERED, result.details().summary().state());
@@ -63,9 +71,9 @@ final class PackerAssetDetailsServiceTest {
 
     @Test
     void returnsFailureWhenReferenceCannotBeResolved() {
-        final PackerAssetDetailsService service = new PackerAssetDetailsService();
+        final PackerAssetDetailsService service = service();
 
-        final var result = service.getAssetDetails(new GetAssetDetailsRequest(project(tempDir.resolve("empty")), "missing/root"));
+        final var result = service.getAssetDetails(new GetAssetDetailsRequest(project(tempDir.resolve("empty")), AssetReference.forRelativeAssetRoot("missing/root")));
 
         assertEquals(PackerOperationStatus.FAILED, result.status());
         assertEquals(PackerAssetState.UNREGISTERED, result.details().summary().state());
@@ -77,6 +85,12 @@ final class PackerAssetDetailsServiceTest {
         return new PackerProjectContext("main", root);
     }
 
+    private PackerAssetDetailsService service() {
+        final var foundation = new p.packer.services.PackerWorkspaceFoundation();
+        final var parser = new PackerAssetDeclarationParser();
+        return new PackerAssetDetailsService(new PackerRuntimeRegistry(new PackerRuntimeLoader(foundation, parser)));
+    }
+
     private Path copyFixture(String relativePath, Path targetRoot) throws Exception {
         final Path sourceRoot = PackerFixtureLocator.fixtureRoot(relativePath);
         try (var stream = Files.walk(sourceRoot)) {
```
@ -0,0 +1,53 @@
package p.packer.services;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;
import p.packer.PackerProjectContext;

import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

final class PackerProjectWriteCoordinatorTest {

    @TempDir
    Path tempDir;

    private final PackerProjectWriteCoordinator coordinator = new PackerProjectWriteCoordinator();

    @AfterEach
    void cleanup() {
        coordinator.close();
    }

    @Test
    void serializesSameProjectWritesInSubmissionOrder() throws Exception {
        final PackerProjectContext project = new PackerProjectContext("main", tempDir.resolve("main"));
        final List<String> trace = new CopyOnWriteArrayList<>();
        final CountDownLatch firstStarted = new CountDownLatch(1);

        final Future<String> first = coordinator.submit(project, () -> {
            trace.add("first-start");
            firstStarted.countDown();
            Thread.sleep(100L);
            trace.add("first-end");
            return "first";
        });

        assertTrue(firstStarted.await(2, TimeUnit.SECONDS));
        final Future<String> second = coordinator.submit(project, () -> {
            trace.add("second");
            return "second";
        });

        assertEquals("first", first.get(2, TimeUnit.SECONDS));
        assertEquals("second", second.get(2, TimeUnit.SECONDS));
        assertEquals(List.of("first-start", "first-end", "second"), trace);
    }
}
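The contract this test pins down (one writer lane per project, tasks executed strictly in submission order, `close()` stops the workers) can be sketched with a per-project single-thread executor. This is a minimal illustration under assumptions, not the repository's implementation: the `ProjectWriteCoordinator` name, the `String` project key, and the executor-map layout are all hypothetical stand-ins for the real `PackerProjectWriteCoordinator`.

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of a per-project write serializer.
final class ProjectWriteCoordinator implements AutoCloseable {

    // One single-threaded executor per project key; tasks submitted for the
    // same key run one at a time, in submission order. Different projects
    // get independent lanes and may run concurrently.
    private final Map<String, ExecutorService> executors = new ConcurrentHashMap<>();

    <T> Future<T> submit(String projectKey, Callable<T> task) {
        return executors
                .computeIfAbsent(projectKey, key -> Executors.newSingleThreadExecutor())
                .submit(task);
    }

    @Override
    public void close() {
        // Let already-queued tasks finish, then stop the worker threads.
        executors.values().forEach(ExecutorService::shutdown);
    }
}
```

A single-thread executor gives ordering for free: its internal queue is FIFO, so no explicit locking is needed in the tasks themselves.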
@ -0,0 +1,104 @@
package p.packer.services;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;
import p.packer.PackerProjectContext;
import p.packer.testing.PackerFixtureLocator;

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;

import static org.junit.jupiter.api.Assertions.*;

final class PackerRuntimeRegistryTest {

    @TempDir
    Path tempDir;

    @Test
    void bootstrapsOneRuntimePerProjectAndReusesItUntilRefresh() throws Exception {
        final Path projectRoot = copyFixture("workspaces/managed-basic", tempDir.resolve("managed"));
        final PackerRuntimeRegistry registry = runtimeRegistry();
        final PackerProjectContext project = project(projectRoot);

        final PackerProjectRuntime first = registry.getOrLoad(project);
        final PackerProjectRuntime second = registry.getOrLoad(project);

        assertSame(first, second);
        assertEquals(1L, first.snapshot().generation());
        assertEquals(1, first.snapshot().registry().assets().size());
        assertEquals(1, first.snapshot().assets().size());
    }

    @Test
    void refreshRebuildsSnapshotWithoutReplacingRuntimeInstance() throws Exception {
        final Path projectRoot = copyFixture("workspaces/managed-basic", tempDir.resolve("refresh"));
        final PackerRuntimeRegistry registry = runtimeRegistry();
        final PackerProjectContext project = project(projectRoot);
        final PackerProjectRuntime runtime = registry.getOrLoad(project);

        Files.createDirectories(projectRoot.resolve("assets/audio/ui_sounds"));
        Files.writeString(projectRoot.resolve("assets/audio/ui_sounds/asset.json"), """
                {
                  "schema_version": 1,
                  "asset_uuid": "runtime-refresh-uuid",
                  "name": "ui_sounds",
                  "type": "audio.bank",
                  "inputs": {},
                  "output": {
                    "format": "SOUND/bank_v1",
                    "codec": "none"
                  },
                  "preload": {
                    "enabled": false
                  }
                }
                """);

        final PackerProjectRuntime refreshed = registry.refresh(project);

        assertSame(runtime, refreshed);
        assertTrue(refreshed.snapshot().generation() > 1L);
        assertEquals(2, refreshed.snapshot().assets().size());
    }

    @Test
    void disposeMarksRuntimeInactiveAndRemovesItFromRegistry() throws Exception {
        final Path projectRoot = copyFixture("workspaces/managed-basic", tempDir.resolve("dispose"));
        final PackerRuntimeRegistry registry = runtimeRegistry();
        final PackerProjectContext project = project(projectRoot);
        final PackerProjectRuntime runtime = registry.getOrLoad(project);

        registry.dispose(project);

        assertTrue(runtime.disposed());
        assertTrue(registry.find(project).isEmpty());
        assertThrows(IllegalStateException.class, runtime::snapshot);
    }

    private PackerRuntimeRegistry runtimeRegistry() {
        final var foundation = new PackerWorkspaceFoundation();
        final var parser = new PackerAssetDeclarationParser();
        return new PackerRuntimeRegistry(new PackerRuntimeLoader(foundation, parser));
    }

    private PackerProjectContext project(Path root) {
        return new PackerProjectContext("main", root);
    }

    private Path copyFixture(String relativePath, Path targetRoot) throws Exception {
        final Path sourceRoot = PackerFixtureLocator.fixtureRoot(relativePath);
        try (var stream = Files.walk(sourceRoot)) {
            for (Path source : stream.sorted(Comparator.naturalOrder()).toList()) {
                final Path target = targetRoot.resolve(sourceRoot.relativize(source).toString());
                if (Files.isDirectory(source)) {
                    Files.createDirectories(target);
                } else {
                    Files.createDirectories(target.getParent());
                    Files.copy(source, target);
                }
            }
        }
        return targetRoot;
    }
}
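The lifecycle these tests exercise (a stable runtime instance per project, `refresh` rebuilding the snapshot in place while bumping a generation counter, `dispose` invalidating the instance and removing it) can be sketched as a concurrent map of runtimes. This is a hedged sketch only: the `RuntimeRegistrySketch` and `Runtime` names, the `String` project key, and the generation-only snapshot are assumptions standing in for the real `PackerRuntimeRegistry` and `PackerProjectRuntime`.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a runtime registry with in-place snapshot refresh.
final class RuntimeRegistrySketch {

    static final class Runtime {
        private final AtomicLong generation = new AtomicLong(1);
        private volatile boolean disposed;

        long snapshotGeneration() {
            // A disposed runtime must fail loudly rather than serve stale state.
            if (disposed) throw new IllegalStateException("runtime disposed");
            return generation.get();
        }

        void rebuild() { generation.incrementAndGet(); } // a real rebuild re-scans the workspace here
        boolean disposed() { return disposed; }
        void dispose() { disposed = true; }
    }

    private final Map<String, Runtime> runtimes = new ConcurrentHashMap<>();

    Runtime getOrLoad(String projectKey) {
        // computeIfAbsent guarantees exactly one runtime per project key.
        return runtimes.computeIfAbsent(projectKey, key -> new Runtime());
    }

    Runtime refresh(String projectKey) {
        final Runtime runtime = getOrLoad(projectKey);
        runtime.rebuild(); // same instance, new snapshot generation
        return runtime;
    }

    void dispose(String projectKey) {
        final Runtime removed = runtimes.remove(projectKey);
        if (removed != null) removed.dispose();
    }

    Optional<Runtime> find(String projectKey) {
        return Optional.ofNullable(runtimes.get(projectKey));
    }
}
```

Keeping the runtime instance stable across refreshes means callers can hold a reference and observe new snapshots through it, instead of re-resolving the runtime after every workspace change.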
@ -1,9 +1,11 @@
-package p.packer.foundation;
+package p.packer.services;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;
-import p.packer.api.PackerProjectContext;
+import p.packer.PackerProjectContext;
-import p.packer.api.workspace.InitWorkspaceRequest;
+import p.packer.messages.InitWorkspaceRequest;
+import p.packer.models.PackerRegistryEntry;
+import p.packer.models.PackerRegistryState;

import java.nio.file.Files;
import java.nio.file.Path;
@ -89,7 +91,7 @@ final class PackerWorkspaceFoundationTest {
                """);
        final FileSystemPackerRegistryRepository repository = new FileSystemPackerRegistryRepository();

-       final PackerRegistryException exception = assertThrows(PackerRegistryException.class, () -> repository.load(project(projectRoot)));
+       final p.packer.exceptions.PackerRegistryException exception = assertThrows(p.packer.exceptions.PackerRegistryException.class, () -> repository.load(project(projectRoot)));

        assertTrue(exception.getMessage().contains("Duplicate asset root"));
    }
@ -101,7 +103,7 @@ final class PackerWorkspaceFoundationTest {
        Files.writeString(projectRoot.resolve("assets/.prometeu/index.json"), "{ nope ");
        final FileSystemPackerRegistryRepository repository = new FileSystemPackerRegistryRepository();

-       final PackerRegistryException exception = assertThrows(PackerRegistryException.class, () -> repository.load(project(projectRoot)));
+       final p.packer.exceptions.PackerRegistryException exception = assertThrows(p.packer.exceptions.PackerRegistryException.class, () -> repository.load(project(projectRoot)));

        assertTrue(exception.getMessage().contains("Unable to load registry"));
    }
@ -119,7 +121,7 @@ final class PackerWorkspaceFoundationTest {
                """);
        final FileSystemPackerRegistryRepository repository = new FileSystemPackerRegistryRepository();

-       final PackerRegistryException exception = assertThrows(PackerRegistryException.class, () -> repository.load(project(projectRoot)));
+       final p.packer.exceptions.PackerRegistryException exception = assertThrows(p.packer.exceptions.PackerRegistryException.class, () -> repository.load(project(projectRoot)));

        assertTrue(exception.getMessage().contains("Unsupported registry schema_version"));
    }
@ -139,7 +141,7 @@ final class PackerWorkspaceFoundationTest {
                """);
        final FileSystemPackerRegistryRepository repository = new FileSystemPackerRegistryRepository();

-       final PackerRegistryException exception = assertThrows(PackerRegistryException.class, () -> repository.load(project(projectRoot)));
+       final p.packer.exceptions.PackerRegistryException exception = assertThrows(p.packer.exceptions.PackerRegistryException.class, () -> repository.load(project(projectRoot)));

        assertTrue(exception.getMessage().contains("trusted assets boundary"));
    }
@ -164,7 +166,7 @@ final class PackerWorkspaceFoundationTest {
        assertEquals(projectRoot.resolve("assets/ui/atlas").toAbsolutePath().normalize(), lookup.resolveExistingRoot(project, state.assets().getFirst()));

        Files.delete(projectRoot.resolve("assets/ui/atlas"));
-       final PackerRegistryException exception = assertThrows(PackerRegistryException.class, () -> lookup.resolveExistingRoot(project, state.assets().getFirst()));
+       final p.packer.exceptions.PackerRegistryException exception = assertThrows(p.packer.exceptions.PackerRegistryException.class, () -> lookup.resolveExistingRoot(project, state.assets().getFirst()));
        assertTrue(exception.getMessage().contains("does not exist"));
    }
@ -1,5 +1,6 @@
 {
   "schema_version": 9,
+  "asset_uuid": "future-uuid-1",
   "name": "future_asset",
   "type": "image_bank",
   "inputs": {
@ -7,7 +8,7 @@
   },
   "output": {
     "format": "TILES/indexed_v1",
-    "codec": "RAW"
+    "codec": "NONE"
   },
   "preload": {
     "enabled": true
@ -1,5 +1,6 @@
 {
   "schema_version": 1,
+  "asset_uuid": "fixture-uuid-1",
   "name": "ui_atlas",
   "type": "image_bank",
   "inputs": {
@ -7,7 +8,7 @@
   },
   "output": {
     "format": "TILES/indexed_v1",
-    "codec": "RAW"
+    "codec": "NONE"
   },
   "preload": {
     "enabled": true
@ -1,5 +1,6 @@
 {
   "schema_version": 1,
+  "asset_uuid": "orphan-uuid-1",
   "name": "ui_sounds",
   "type": "sound_bank",
   "inputs": {
@ -7,7 +8,7 @@
   },
   "output": {
     "format": "SOUND/bank_v1",
-    "codec": "RAW"
+    "codec": "NONE"
   },
   "preload": {
     "enabled": false
@ -1,5 +1,6 @@
 {
   "schema_version": 1,
+  "asset_uuid": "orphan-uuid-2",
   "name": "ui_sounds",
   "type": "sound_bank",
   "inputs": {
@ -7,7 +8,7 @@
   },
   "output": {
     "format": "SOUND/bank_v1",
-    "codec": "RAW"
+    "codec": "NONE"
   },
   "preload": {
     "enabled": false
Some files were not shown because too many files have changed in this diff.