PR-3.8 — GC Smoke and Stress Tests
Briefing
We need confidence that the GC behaves correctly under simple and stressed conditions.
Target
- Add deterministic smoke and stress tests for the GC.
Work items
- Add tests:
  - Simple allocation and collection cycle.
  - Many short-lived objects.
  - Cyclic references.
- Ensure tests are deterministic.
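The scenarios above can be sketched as deterministic tests. The `Arena` type below is a stand-in for the real GC handle — `alloc`, `collect`, and the counters are assumptions, not the actual API — but the test shape is the point: fixed allocation counts, explicit collection points, exact assertions, no randomness or timing.

```rust
// Stand-in for the real GC handle; names and counters are hypothetical.
pub struct Arena {
    live: usize,
    collected: usize,
}

impl Arena {
    pub fn new() -> Self {
        Arena { live: 0, collected: 0 }
    }
    pub fn alloc(&mut self) {
        self.live += 1;
    }
    /// Collect everything not in the reachable set.
    pub fn collect(&mut self, reachable: usize) {
        let dead = self.live.saturating_sub(reachable);
        self.collected += dead;
        self.live = reachable;
    }
    pub fn live(&self) -> usize { self.live }
    pub fn collected(&self) -> usize { self.collected }
}

#[cfg(test)]
mod tests {
    use super::*;

    // Smoke: one allocation, one collection cycle, exact counts.
    #[test]
    fn smoke_alloc_and_collect() {
        let mut gc = Arena::new();
        gc.alloc();
        gc.collect(0);
        assert_eq!(gc.live(), 0);
        assert_eq!(gc.collected(), 1);
    }

    // Stress: many short-lived objects with a fixed, deterministic count.
    #[test]
    fn stress_short_lived_objects() {
        let mut gc = Arena::new();
        for _ in 0..10_000 {
            gc.alloc();
        }
        gc.collect(0);
        assert_eq!(gc.collected(), 10_000);
    }
}
```

Every input is a compile-time constant, so repeated runs cannot diverge — which is exactly the property the acceptance checklist asks for.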
Acceptance checklist
- Smoke tests pass.
- Stress tests pass.
- No nondeterministic failures.
- `cargo test` passes.
Tests
- New GC-specific tests.
Junie instructions
You MAY:
- Add deterministic tests.
You MUST NOT:
- Introduce random or timing-dependent tests.
- Modify GC semantics to satisfy tests.
If unclear:
- Ask before changing test scenarios.
PR-4.1 — Define Canonical Stack Effect Table for Core ISA
Briefing
The verifier must rely on a single canonical definition of stack effects for every opcode. This PR introduces the formal stack effect table for the new core ISA.
Target
- Provide a single authoritative stack effect definition per opcode.
- Ensure verifier and tooling use the same source of truth.
Work items
- Introduce a stack effect table or equivalent structure:
  - Input slot count.
  - Output slot count.
- Attach stack effect metadata to each opcode.
- Ensure the table is used by the verifier entry points.
- Document stack effects in code comments (English).
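One possible shape for the table, sketched with a hypothetical opcode set (the real ISA's opcodes will differ): an exhaustive `match` makes the compiler reject any opcode that lacks a declared effect, which gives the "every opcode has a defined stack effect" guarantee for free.

```rust
/// Hypothetical core opcodes; the real ISA defines its own set.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Opcode {
    Push,
    Add,
    Dup,
    Pop,
    Ret,
}

/// Canonical stack effect of an opcode:
/// (input slots consumed, output slots produced).
/// The exhaustive match forces every newly added opcode to declare its effect.
pub const fn stack_effect(op: Opcode) -> (u8, u8) {
    match op {
        Opcode::Push => (0, 1), // pushes one value
        Opcode::Add  => (2, 1), // consumes two operands, produces one result
        Opcode::Dup  => (1, 2), // duplicates the top slot
        Opcode::Pop  => (1, 0), // discards the top slot
        Opcode::Ret  => (0, 0), // return; slot shape is checked separately
    }
}
```

Because the function is `const`, tooling can evaluate effects at compile time, and there is exactly one place to edit when an opcode's effect changes.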
Acceptance checklist
- Every opcode has a defined stack effect.
- Verifier uses the canonical table.
- No duplicated or ad-hoc stack effect logic remains.
- `cargo test` passes.
Tests
- Add unit test ensuring all opcodes have stack effects defined.
Junie instructions
You MAY:
- Introduce new metadata tables for opcodes.
- Refactor verifier code to use the canonical table.
You MUST NOT:
- Change opcode semantics.
- Introduce new instructions.
If unclear:
- Ask before assigning stack effects.
PR-4.2 — Basic Stack Safety Verification (Underflow/Overflow)
Briefing
The verifier must ensure the stack never underflows or exceeds defined limits during execution.
Target
- Implement stack safety checks across the control-flow graph.
Work items
- Simulate stack depth across instructions.
- Detect:
  - Stack underflow.
  - Stack overflow beyond declared limits.
- Emit appropriate verifier errors.
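A minimal sketch of the simulation over a straight-line sequence (the real verifier applies the same step function along every path of the control-flow graph; error and function names here are assumptions). Each instruction contributes its `(inputs, outputs)` pair from the canonical stack effect table:

```rust
#[derive(Debug, PartialEq, Eq)]
pub enum StackError {
    Underflow { at: usize },
    Overflow { at: usize },
}

/// Simulate stack depth over a straight-line instruction sequence.
/// `effects` holds (inputs, outputs) per instruction, taken from the
/// canonical stack effect table; `max_depth` is the declared limit.
pub fn simulate_depth(effects: &[(u8, u8)], max_depth: u32) -> Result<u32, StackError> {
    let mut depth: u32 = 0;
    for (i, &(inputs, outputs)) in effects.iter().enumerate() {
        // Popping below zero is an underflow at instruction index `i`.
        depth = depth
            .checked_sub(inputs as u32)
            .ok_or(StackError::Underflow { at: i })?;
        depth += outputs as u32;
        // Exceeding the declared limit is an overflow at instruction `i`.
        if depth > max_depth {
            return Err(StackError::Overflow { at: i });
        }
    }
    Ok(depth)
}
```

Reporting the instruction index in the error keeps failures deterministic and easy to assert on in tests.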
Acceptance checklist
- Underflow cases are rejected.
- Overflow cases are rejected.
- Valid programs pass.
- `cargo test` passes.
Tests
- Add tests:
  - Program with underflow → verifier error.
  - Program exceeding stack limit → verifier error.
  - Valid program → passes.
Junie instructions
You MAY:
- Add stack depth simulation.
- Introduce verifier error types.
You MUST NOT:
- Change runtime stack implementation.
- Introduce dynamic stack resizing.
If unclear:
- Ask before choosing stack limit rules.
PR-4.3 — Control Flow and Jump Target Verification
Briefing
The verifier must ensure all control flow transfers are valid and do not jump into the middle of instructions or outside function boundaries.
Target
- Validate all jump targets.
- Reject invalid or unsafe control flow.
Work items
- Use canonical layout utilities to identify instruction boundaries.
- Verify:
  - Jump targets land on valid instruction boundaries.
  - Targets are within the function range.
- Reject invalid targets with a verifier error.
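The two checks reduce to a range test plus a membership test against the boundary list. A sketch, assuming the layout utilities can produce a sorted list of instruction start offsets (the function and error names here are illustrative):

```rust
#[derive(Debug, PartialEq, Eq)]
pub enum JumpError {
    OutsideFunction { target: usize },
    MidInstruction { target: usize },
}

/// Check one jump target against a function's instruction boundaries.
/// `boundaries` is the sorted list of valid instruction start offsets
/// (from the canonical layout utilities); `func_len` is the function's
/// byte length.
pub fn check_jump_target(
    target: usize,
    boundaries: &[usize],
    func_len: usize,
) -> Result<(), JumpError> {
    // Targets at or past the end of the function are out of range.
    if target >= func_len {
        return Err(JumpError::OutsideFunction { target });
    }
    // In-range targets must start exactly on an instruction boundary.
    if boundaries.binary_search(&target).is_err() {
        return Err(JumpError::MidInstruction { target });
    }
    Ok(())
}
```

`binary_search` keeps the per-jump check at O(log n) against the sorted boundary list, and both failure modes surface as verifier errors rather than runtime traps.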
Acceptance checklist
- Invalid jump targets are rejected.
- Valid programs pass verification.
- No reliance on runtime traps for these cases.
- `cargo test` passes.
Tests
- Add tests:
  - Jump to middle of instruction → verifier error.
  - Jump outside function → verifier error.
  - Valid jump → passes.
Junie instructions
You MAY:
- Reuse layout utilities for boundary checks.
- Add verifier error cases.
You MUST NOT:
- Modify instruction encoding.
- Introduce new trap codes.
If unclear:
- Ask before defining jump rules.
PR-4.4 — Function Boundary and Terminator Verification
Briefing
Functions must follow canonical entry and exit rules. The verifier must enforce valid terminators and boundaries.
Target
- Ensure functions start and end correctly.
- Ensure terminator instructions are valid.
Work items
- Verify function entry points are valid instruction boundaries.
- Verify function exit instructions:
  - Proper `RET` usage.
  - Proper `FRAME_SYNC` placement (as per spec).
- Reject functions without valid termination.
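The core termination rule can be sketched as a check on the last instruction of the body. This covers only the "ends in `RET`" part; the `FRAME_SYNC` placement rules from the spec are elided, and the opcode and error names are assumptions:

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Opcode {
    Nop,
    FrameSync,
    Ret,
}

#[derive(Debug, PartialEq, Eq)]
pub enum TerminatorError {
    EmptyFunction,
    MissingTerminator,
}

/// Minimal termination rule: a function body must be non-empty and end
/// in `Ret`. The real verifier additionally enforces `FRAME_SYNC`
/// placement as per the spec.
pub fn check_termination(body: &[Opcode]) -> Result<(), TerminatorError> {
    match body.last() {
        None => Err(TerminatorError::EmptyFunction),
        Some(Opcode::Ret) => Ok(()),
        Some(_) => Err(TerminatorError::MissingTerminator),
    }
}
```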
Acceptance checklist
- All functions have valid entry and exit.
- Invalid termination is rejected.
- `cargo test` passes.
Tests
- Add tests:
  - Function without terminator → verifier error.
  - Properly terminated function → passes.
Junie instructions
You MAY:
- Add boundary validation logic.
You MUST NOT:
- Redefine function semantics.
- Change terminator opcode behavior.
If unclear:
- Ask before enforcing termination rules.
PR-4.5 — Multi-Return (ret_slots) Verification
Briefing
The new ABI supports multi-return via `ret_slots`. The verifier must ensure the stack shape matches the declared return slot count.
Target
- Validate return slot counts at call and return sites.
Work items
- For each function:
  - Read declared `ret_slots`.
- At `RET`:
  - Ensure stack contains exactly the required number of slots.
- At call sites:
  - Ensure caller expects correct number of return slots.
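Both checks are equality comparisons between the declared `ret_slots` and an observed count. A sketch with illustrative names (the real checks would pull `ret_slots` from the function metadata and the depth from the stack simulation of PR-4.2):

```rust
#[derive(Debug, PartialEq, Eq)]
pub enum RetSlotError {
    /// Stack at a RET site does not match the declared ret_slots.
    ReturnShapeMismatch { declared: usize, found: usize },
    /// Caller expects a different return slot count than the callee declares.
    CallSiteMismatch { declared: usize, expected: usize },
}

/// At a RET site: the simulated stack depth must equal declared ret_slots.
pub fn check_ret_site(declared_ret_slots: usize, stack_depth: usize) -> Result<(), RetSlotError> {
    if stack_depth != declared_ret_slots {
        return Err(RetSlotError::ReturnShapeMismatch {
            declared: declared_ret_slots,
            found: stack_depth,
        });
    }
    Ok(())
}

/// At a call site: the caller's expected count must match the callee's.
pub fn check_call_site(declared_ret_slots: usize, caller_expects: usize) -> Result<(), RetSlotError> {
    if caller_expects != declared_ret_slots {
        return Err(RetSlotError::CallSiteMismatch {
            declared: declared_ret_slots,
            expected: caller_expects,
        });
    }
    Ok(())
}
```

Carrying both counts in the error makes "too few" and "too many" directly distinguishable in test assertions.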
Acceptance checklist
- Mismatched return slot counts are rejected.
- Correct programs pass.
- `cargo test` passes.
Tests
- Add tests:
  - Too few return slots → verifier error.
  - Too many return slots → verifier error.
  - Correct return slots → passes.
Junie instructions
You MAY:
- Use function metadata to validate returns.
You MUST NOT:
- Change calling convention semantics.
- Modify function metadata layout.
If unclear:
- Ask before implementing slot rules.
PR-4.6 — Verifier Error Model Consolidation
Briefing
Verifier errors must be deterministic, structured, and clearly separated from runtime traps.
Target
- Introduce a coherent verifier error model.
Work items
- Define a `VerifierError` enum covering:
  - Stack underflow.
  - Stack overflow.
  - Invalid jump target.
  - Invalid function boundary.
  - Return slot mismatch.
- Ensure verifier returns structured errors.
- Update tests to expect structured errors.
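One possible shape for the consolidated enum. The variant names follow the work items; the offset and count fields are assumptions about what context is worth reporting, not a fixed design:

```rust
/// Structured, deterministic verifier errors, kept deliberately separate
/// from runtime traps: a verifier failure is a rejected program, never a
/// trap during execution.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum VerifierError {
    StackUnderflow { offset: usize },
    StackOverflow { offset: usize, limit: u32 },
    InvalidJumpTarget { offset: usize, target: usize },
    InvalidFunctionBoundary { function: usize },
    ReturnSlotMismatch { declared: usize, found: usize },
}

/// Shared result type for all verifier entry points.
pub type VerifyResult = Result<(), VerifierError>;
```

Deriving `PartialEq`/`Eq` lets tests assert on the exact error value, which is what "update tests to expect structured errors" needs.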
Acceptance checklist
- Verifier errors are structured and deterministic.
- No reliance on runtime traps for verifier failures.
- `cargo test` passes.
Tests
- Update existing tests to assert specific verifier errors.
Junie instructions
You MAY:
- Introduce a new verifier error enum.
- Refactor error returns.
You MUST NOT:
- Map verifier errors to runtime traps.
- Change runtime trap behavior.
If unclear:
- Ask before merging or renaming error categories.
PR-4.7 — Verifier Golden Test Suite
Briefing
We need a stable suite of valid and invalid bytecode samples to ensure verifier correctness over time.
Target
- Introduce golden tests for the verifier.
Work items
- Add a set of small bytecode samples:
  - Valid programs.
  - Invalid programs for each verifier error.
- Implement golden tests asserting:
  - Successful verification.
  - Specific error kinds for invalid programs.
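The suite can follow a table-driven pattern: each golden case names a sample and pins the exact expected outcome. The `verify` function below is a toy stand-in (each `i32` is a net stack delta); the real suite would feed actual bytecode samples to the real verifier:

```rust
#[derive(Debug, PartialEq, Eq)]
pub enum VerifierError {
    StackUnderflow,
}

/// Toy stand-in verifier over a trivial "program" encoding; the real
/// golden tests call the real verifier on real bytecode samples.
pub fn verify(program: &[i32]) -> Result<(), VerifierError> {
    let mut depth: i32 = 0;
    for &delta in program {
        depth += delta;
        if depth < 0 {
            return Err(VerifierError::StackUnderflow);
        }
    }
    Ok(())
}

#[cfg(test)]
mod golden {
    use super::*;

    // Each entry pins a named sample to an exact outcome, so any behavior
    // change surfaces as a readable expectation diff, never flakiness.
    #[test]
    fn golden_cases() {
        let cases: &[(&str, &[i32], Result<(), VerifierError>)] = &[
            ("valid_push_pop", &[1, -1], Ok(())),
            ("underflow_on_entry", &[-1], Err(VerifierError::StackUnderflow)),
        ];
        for (name, program, expected) in cases {
            assert_eq!(&verify(program), expected, "golden case {name}");
        }
    }
}
```

Growing the suite then means adding one table row per new sample: one valid program plus one invalid program per `VerifierError` category.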
Acceptance checklist
- Golden tests exist for all verifier error categories.
- Tests are deterministic.
- `cargo test` passes.
Tests
- New golden tests only.
Junie instructions
You MAY:
- Add deterministic golden tests.
You MUST NOT:
- Modify verifier logic to fit tests without understanding the cause.
- Introduce random or timing-based tests.
If unclear:
- Ask before changing test expectations.