Conversation

@aaronvg (Contributor) commented Jan 13, 2026

  • Add error recovery in the parser in a couple more spots.
  • Fix duplicate diagnostics.

Note

Improves correctness and DX across parsing and HIR diagnostics.

  • Parser:
      • Handle nested generics by splitting `>>` via `expect_greater`.
      • Add recovery for invalid blocks (`classs Foo { ... }`) and invalid type aliases (`typpe Name = ...`).
      • Improve tuple/type-expression recovery with clearer messages (e.g., for path identifiers in types).
      • Detect old-style function syntax and emit a single helpful error.
      • Integrate the recovery paths into the top-level parsing flows.
  • HIR: duplicate-name validation now derives precise name spans from the CST (`get_item_name_span`) and tracks the item kind; it emits an error at both the first definition and each duplicate, with correct locations. Tests/snapshots were updated and a focused repro added.
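The `>>`-splitting behavior described above can be sketched as follows. This is a hypothetical minimal model, not the PR's actual code: the `Tok` enum, the token-stream fields, and the `Parser` shape are invented for illustration; only the names `expect_greater` and `pending_greaters` come from the PR description and review comments.

```rust
// When closing nested generics like `map<string, list<int>>`, the lexer may
// have produced a single `>>` token. `expect_greater` splits it: it consumes
// the `>>` once and remembers that one more `>` is still "owed".

#[derive(Clone, Copy, PartialEq, Debug)]
enum Tok {
    Greater,    // `>`
    ShiftRight, // `>>`
    Other,
}

struct Parser {
    tokens: Vec<Tok>,
    pos: usize,
    // Count of `>` tokens still owed after splitting a `>>`.
    pending_greaters: u32,
}

impl Parser {
    fn peek(&self) -> Option<Tok> {
        self.tokens.get(self.pos).copied()
    }

    /// Consume one closing `>`, splitting `>>` if necessary.
    /// Returns false when no `>` is available (a real parser would
    /// report something like `error_unexpected_token("'>'")` here).
    fn expect_greater(&mut self) -> bool {
        if self.pending_greaters > 0 {
            // A previously split `>>` already supplies this `>`.
            self.pending_greaters -= 1;
            return true;
        }
        match self.peek() {
            Some(Tok::Greater) => {
                self.pos += 1;
                true
            }
            Some(Tok::ShiftRight) => {
                // `>>` closes two generic levels: consume it, owe one `>`.
                self.pos += 1;
                self.pending_greaters = 1;
                true
            }
            _ => false,
        }
    }
}

fn main() {
    // Closing the two levels of `map<string, list<int>>`:
    let mut p = Parser { tokens: vec![Tok::ShiftRight], pos: 0, pending_greaters: 0 };
    assert!(p.expect_greater());  // splits and consumes the `>>`
    assert!(p.expect_greater());  // satisfied by the pending `>`
    assert!(!p.expect_greater()); // nothing left to consume
    println!("ok");
}
```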

Written by Cursor Bugbot for commit 81efa88. This will update automatically on new commits.

vercel bot commented Jan 13, 2026

The latest updates on your projects.

| Project | Deployment | Review | Updated (UTC) |
| --- | --- | --- | --- |
| beps | Ready | Preview, Comment | Jan 13, 2026 2:48am |
| promptfiddle | Ready | Preview, Comment | Jan 13, 2026 2:48am |

codspeed-hq bot commented Jan 13, 2026

Merging this PR will improve performance by 10.72%

⚠️ Unknown Walltime execution environment detected

Using the Walltime instrument on standard Hosted Runners will lead to inconsistent data.

For the most accurate results, we recommend using CodSpeed Macro Runners: bare-metal machines fine-tuned for performance measurement consistency.

⚡ 1 improved benchmark
✅ 14 untouched benchmarks
⏩ 35 skipped benchmarks¹

Performance Changes

| Mode | Benchmark | BASE | HEAD | Efficiency |
| --- | --- | --- | --- | --- |
| WallTime | bench_incremental_rename_type | 243.8 µs | 220.2 µs | +10.72% |

Comparing aaron/parser5 (81efa88) with canary (70dc924)


Footnotes

  1. 35 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, archive them to remove them from the performance reports.

```rust
first_path: path.clone(),
second: existing.span,
second_path: existing.path.clone(),
});
```

Swapped spans produce semantically incorrect duplicate error labels

Medium Severity

In the first error emitted when a duplicate is detected, the `first` and `second` fields are swapped. The error format uses `second` for the "redefined here" label and `first` for the "first defined in..." label. The code sets `first: span` (the duplicate) and `second: existing.span` (the original), causing the original definition to be labeled as "redefined here" and the duplicate to be labeled as "first defined", which is semantically backwards. The test expectations confirm this: line 1 (the original) shows "redefined here" while line 5 (the duplicate) shows "first defined".
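A minimal sketch of the assignment the reviewer is asking for, assuming a hypothetical `DuplicateName` diagnostic in which `first` feeds the "first defined in..." label and `second` feeds "redefined here". The struct, `Span`, and the helper function are illustrative stand-ins, not the repository's actual types:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Span { start: usize, end: usize }

// Hypothetical diagnostic: `first` must point at the ORIGINAL definition,
// `second` at the duplicate, so the rendered labels come out the right way
// around.
struct DuplicateName {
    first: Span,         // labeled "first defined in..."
    first_path: String,
    second: Span,        // labeled "redefined here"
    second_path: String,
}

fn make_duplicate_error(
    existing_span: Span, existing_path: &str, // the original definition
    dup_span: Span, dup_path: &str,           // the newly seen duplicate
) -> DuplicateName {
    DuplicateName {
        // The original definition populates `first`, not `second`.
        first: existing_span,
        first_path: existing_path.to_string(),
        second: dup_span,
        second_path: dup_path.to_string(),
    }
}

fn main() {
    let orig = Span { start: 0, end: 10 };
    let dup = Span { start: 50, end: 60 };
    let e = make_duplicate_error(orig.clone(), "a.baml", dup.clone(), "b.baml");
    assert_eq!(e.first, orig);  // "first defined in..." points at the original
    assert_eq!(e.second, dup);  // "redefined here" points at the duplicate
    println!("ok");
}
```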


```rust
self.error_unexpected_token("'>'".to_string());
false
}
}
```

Stale pending_greaters state corrupts subsequent generic parsing

Medium Severity

The `pending_greaters` counter tracks pending `>` tokens from split `>>` tokens for nested generics. However, it is never reset at type or field boundaries. If a user writes a malformed type like `map<K, V>>` (with an extra `>`), the `>>` is consumed and `pending_greaters` is set to 1. Since there is no outer generic to consume the pending count, it persists. When parsing the next generic type (e.g., in another field), `expect_greater()` sees `pending_greaters > 0`, decrements it, and returns true without consuming the actual `>` token. This leaves the `>` in the token stream, causing cascading parse errors.
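One way to address this, sketched under the assumption of a hypothetical boundary hook: reset the counter whenever a new type expression begins, so a stale split `>` from a malformed type cannot satisfy a later `expect_greater()`. Only the field name `pending_greaters` comes from the comment; `begin_type_expr` and the stripped-down `Parser` are invented for illustration:

```rust
struct Parser {
    // Count of `>` tokens still owed after splitting a `>>`.
    pending_greaters: u32,
}

impl Parser {
    /// Called at the start of every type expression (and at field
    /// boundaries). Any pending `>` belonged to the previous, malformed
    /// type and must not leak into this one's generic parsing.
    fn begin_type_expr(&mut self) {
        self.pending_greaters = 0;
    }
}

fn main() {
    // Stale state left behind by a malformed `map<K, V>>`:
    let mut p = Parser { pending_greaters: 1 };
    // Starting the next field's type clears it, so `expect_greater()`
    // will have to consume a real `>` token again.
    p.begin_type_expr();
    assert_eq!(p.pending_greaters, 0);
    println!("ok");
}
```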

