
Add a cache for active orders #615


Open · wants to merge 2 commits into base: v28.x

Conversation

ValarDragon (Member) commented Feb 14, 2025

Add a primitive cache for active orders:

  • Keyed by pool ID + user address.
  • Invalidates on pool update (can optimize in the future).
  • Used a detailed struct at first; should switch to a more raw struct. EDIT: now uses a raw struct.

Summary by CodeRabbit

  • New Features
    • Introduced a caching mechanism to speed up the retrieval of active orders.
    • Enhanced performance by first checking cached data and automatically refreshing the cache when market states change.

coderabbitai bot (Contributor) commented Feb 14, 2025

Walkthrough

The pull request introduces a new file that implements a thread-safe Least Recently Used (LRU) cache tailored for active orders in the order book system. It defines composite key types, cache entries, and methods (get, set, invalidate, clear) to handle caching. Additionally, the order book use case is updated to integrate this caching mechanism, checking the cache before fetching orders from the chain and ensuring cache invalidation when pool states change.

Changes

File(s) and change summary:

  • orderbook/.../active_orders_cache.go: New file implementing a thread-safe LRU cache for active orders. Adds types (cacheKey, activeOrdersCacheEntry, activeOrdersCache) and methods (newActiveOrdersCache, get, set, invalidatePool, clear).
  • orderbook/.../orderbook_usecase.go: Integrates the cache into the use case by adding a new field (activeOrdersCache) and constant (defaultCacheSize). Modifies New, GetActiveOrders, and ProcessPool to utilize caching and invalidate stale entries. Adds new method getRawActiveOrders.
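For illustration, the cache described above can be sketched roughly as follows. This is a simplified sketch, not the PR's code: it uses a plain map guarded by a sync.RWMutex instead of the LRU backing store, and string payloads instead of the real orderbookdomain.Order type.

```go
package main

import (
	"fmt"
	"sync"
)

// cacheKey mirrors the composite key described in the walkthrough:
// pool ID plus user address uniquely identify a cache entry.
type cacheKey struct {
	poolID      uint64
	userAddress string
}

// activeOrdersCache is a toy stand-in for the PR's LRU-backed cache.
type activeOrdersCache struct {
	mu          sync.RWMutex
	entries     map[cacheKey][]string            // payload type is a placeholder
	poolEntries map[uint64]map[cacheKey]struct{} // tracks keys per pool for bulk invalidation
}

func newActiveOrdersCache() *activeOrdersCache {
	return &activeOrdersCache{
		entries:     make(map[cacheKey][]string),
		poolEntries: make(map[uint64]map[cacheKey]struct{}),
	}
}

func (c *activeOrdersCache) get(poolID uint64, user string) ([]string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	orders, ok := c.entries[cacheKey{poolID, user}]
	return orders, ok
}

func (c *activeOrdersCache) set(poolID uint64, user string, orders []string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	k := cacheKey{poolID, user}
	c.entries[k] = orders
	if c.poolEntries[poolID] == nil {
		c.poolEntries[poolID] = make(map[cacheKey]struct{})
	}
	c.poolEntries[poolID][k] = struct{}{}
}

// invalidatePool removes every entry belonging to the given pool,
// matching the PR's "invalidate on pool update" behavior.
func (c *activeOrdersCache) invalidatePool(poolID uint64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for k := range c.poolEntries[poolID] {
		delete(c.entries, k)
	}
	delete(c.poolEntries, poolID)
}

func main() {
	c := newActiveOrdersCache()
	c.set(1, "user-a", []string{"order-1"})
	if _, ok := c.get(1, "user-a"); ok {
		fmt.Println("hit")
	}
	c.invalidatePool(1)
	if _, ok := c.get(1, "user-a"); !ok {
		fmt.Println("invalidated")
	}
}
```

The per-pool key index is what makes bulk invalidation O(entries-in-pool) rather than a full scan of the cache.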

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant C as Client
    participant OUC as OrderbookUseCase
    participant AOC as ActiveOrdersCache
    participant Chain as Blockchain

    C->>OUC: Request active orders (poolID, userAddress)
    OUC->>AOC: get(poolID, userAddress)
    alt Cache Hit
        AOC-->>OUC: Return cached active orders
        OUC-->>C: Return active orders from cache
    else Cache Miss
        AOC-->>OUC: No cache entry found
        OUC->>Chain: Fetch active orders
        Chain-->>OUC: Return active orders
        OUC->>AOC: set(poolID, userAddress, activeOrders)
        OUC-->>C: Return active orders from chain
    end
```
```mermaid
sequenceDiagram
    participant OUC as OrderbookUseCase
    participant AOC as ActiveOrdersCache

    OUC->>AOC: invalidatePool(poolID)
    AOC-->>OUC: Confirm removal of pool cache entries
```

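The cache-hit/miss flow in the diagrams above boils down to a cache-aside read path: consult the cache, fall back to the chain on a miss, and populate the cache only on a successful fetch. A minimal sketch (all names here are illustrative placeholders, not the PR's actual identifiers):

```go
package main

import "fmt"

// cache is a toy stand-in for the real cache; the key format is arbitrary.
type cache map[string][]string

func key(poolID uint64, user string) string { return fmt.Sprintf("%d/%s", poolID, user) }

// getActiveOrders implements the cache-aside pattern from the diagram:
// check the cache, fall back to the chain, populate only on success.
func getActiveOrders(c cache, poolID uint64, user string, fetchFromChain func() ([]string, error)) ([]string, error) {
	if orders, ok := c[key(poolID, user)]; ok {
		return orders, nil // cache hit
	}
	orders, err := fetchFromChain()
	if err != nil {
		return nil, err // never cache failed fetches
	}
	c[key(poolID, user)] = orders // populate on success only
	return orders, nil
}

func main() {
	c := cache{}
	calls := 0
	fetch := func() ([]string, error) { calls++; return []string{"order-1"}, nil }
	getActiveOrders(c, 1, "alice", fetch)
	getActiveOrders(c, 1, "alice", fetch) // second call served from cache
	fmt.Println("chain calls:", calls)    // prints "chain calls: 1"
}
```

Caching only after a successful chain fetch is what keeps errors from being served back to later callers.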
Poem

I'm a rabbit on the run,
Hopping through the code for fun.
Caching orders with a clever pace,
Leaving stale data no trace.
With each hop, improvements shine—
A code garden so divine!
🐰🌟 Happy hops in every line!


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
orderbook/usecase/active_orders_cache.go (2)

31-42: Consider validating cache size inputs.

Currently, newActiveOrdersCache doesn’t validate that the size parameter is positive or within a sensible range. If someone accidentally passes a very large or negative size, it might cause memory or runtime issues.

You could add a quick sanity check:

```diff
 func newActiveOrdersCache(size int) (*activeOrdersCache, error) {
+	if size <= 0 {
+		return nil, fmt.Errorf("cache size should be a positive integer, got %d", size)
+	}
```

86-93: Add a safety check or event for cache clearing.

While clear() neatly purges the LRU and resets poolEntries, consider logging or signaling that the cache has been wiped. This could help you diagnose unexpected flushes in production or track usage metrics.

orderbook/usecase/orderbook_usecase.go (2)

30-41: Monitor memory usage for large cache sizes.

Defining defaultCacheSize at 100,000 entries may become memory-intensive under high load. Keep an eye on your system’s memory usage in production and consider offering a configurable cache size through environment variables or config.


247-270: Concurrently fetching active orders with caching.

The concurrency model here leverages the channel-based architecture effectively. However, if the number of orderbook pools grows large, this could spawn many goroutines. Consider a worker group to limit concurrency if you anticipate thousands of pools.
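A worker group as suggested above can be sketched with a buffered channel acting as a semaphore. This is an illustrative pattern, not the PR's code; fetchPool is a placeholder for the real per-pool order fetch:

```go
package main

import (
	"fmt"
	"sync"
)

// fetchAllPools fans out one fetch per pool but caps in-flight goroutines
// at maxWorkers: each goroutine must acquire a slot from the buffered
// channel before starting and releases it when done.
func fetchAllPools(poolIDs []uint64, maxWorkers int, fetchPool func(uint64) int) []int {
	sem := make(chan struct{}, maxWorkers)
	results := make([]int, len(poolIDs))
	var wg sync.WaitGroup
	for i, id := range poolIDs {
		wg.Add(1)
		sem <- struct{}{} // blocks when maxWorkers fetches are in flight
		go func(i int, id uint64) {
			defer wg.Done()
			defer func() { <-sem }()
			results[i] = fetchPool(id) // each goroutine writes a distinct index
		}(i, id)
	}
	wg.Wait()
	return results
}

func main() {
	pools := []uint64{1, 2, 3, 4, 5}
	// Placeholder fetch: pretend each pool returns an "order count".
	out := fetchAllPools(pools, 2, func(id uint64) int { return int(id) * 10 })
	fmt.Println(out) // all five pools fetched, but never more than 2 at once
}
```

golang.org/x/x/sync's errgroup with SetLimit would be an equivalent, more idiomatic choice if an external dependency is acceptable.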

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8a77240 and ba99d49.

📒 Files selected for processing (2)
  • orderbook/usecase/active_orders_cache.go (1 hunks)
  • orderbook/usecase/orderbook_usecase.go (5 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: build
  • GitHub Check: Run linter
  • GitHub Check: Summary
🔇 Additional comments (4)
orderbook/usecase/active_orders_cache.go (2)

23-29: Good use of mutex to ensure thread-safety.

The activeOrdersCache struct properly synchronizes access using mu. The approach of storing poolEntries in a dedicated map is straightforward and allows bulk invalidation by pool. This design choice nicely addresses concurrency needs.


44-84: Well-structured read/write locking.

The get, set, and invalidatePool methods consistently use RLock/RUnlock for reads and Lock/Unlock for writes. This aligns with best practices for highly concurrent caches, minimizing contention while maintaining correctness.

orderbook/usecase/orderbook_usecase.go (2)

52-57: Graceful fallback to nil cache.

Excellent handling of cache creation errors. Logging the failure and setting the cache to nil ensures the application continues functioning even if caching fails. This fallback strategy is user-friendly.


94-98: Bulk invalidation on pool updates is appropriate.

Calling o.activeOrdersCache.invalidatePool(poolID) upon state changes is a clear way to ensure stale data doesn’t linger. This approach preserves correctness without excessive complexity.

```diff
@@ -549,7 +551,7 @@ func (s *OrderbookUsecaseTestSuite) TestGetActiveOrders() {
 	poolsUsecase.GetAllCanonicalOrderbookPoolIDsFunc = s.GetAllCanonicalOrderbookPoolIDsFunc(
 		nil,
 		s.NewCanonicalOrderBooksResult(1, "A"),
-		s.NewCanonicalOrderBooksResult(1, "B"),
+		s.NewCanonicalOrderBooksResult(2, "B"),
```
ValarDragon (Member, Author) commented:

Pre-existing error; it started causing issues post-cache since the cache is keyed by pool ID.


```diff
 	// Call the method under test
 	orders := usecase.GetActiveOrdersStream(ctx, tc.address)

 	// Wait for the ticker to push the orders
 	if tc.expectedCallCount > 1 {
 		usecase.SetFetchActiveOrdersEveryDuration(tc.tickerDuration)
-		time.Sleep(tc.tickerDuration)
+		time.Sleep(tc.tickerDuration + time.Millisecond*10)
```
ValarDragon (Member, Author) commented:

We always had a race condition here before.

```diff
@@ -697,7 +703,7 @@ func (s *OrderbookUsecaseTestSuite) TestProcessOrderBookActiveOrders() {
 			order:          newLimitOrder().WithOrderbookAddress("A"),
 			ownerAddress:   "osmo1h5la3t4y8cljl34lsqdszklvcn053u4ryz9qr78v64rsxezyxwlsdelsdr",
 			expectedError:  nil,
-			expectedOrders: nil,
+			expectedOrders: []orderbookdomain.LimitOrder{},
```
ValarDragon (Member, Author) commented:

It's an internal method, so it doesn't matter.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (5)
orderbook/usecase/active_orders_cache.go (1)

85-92: Consider removing or documenting the unused clear method.

Static analysis reports that the clear method is unused. Consider either:

  1. Removing it if it is not needed
  2. Adding a comment explaining its intended future use
  3. Adding tests that exercise it

🧰 Tools
🪛 GitHub Check: Run linter

[failure] 86-86:
func (*activeOrdersCache).clear is unused (unused)
orderbook/usecase/orderbook_usecase.go (4)

305-325: Add error logging for cache operations.

While the cache usage is correct, consider adding error logging when cache operations fail to help with debugging and monitoring cache behavior.

```diff
 func (o *OrderbookUseCaseImpl) getRawActiveOrders(ctx context.Context, orderbook domain.CanonicalOrderBooksResult, ownerAddress string) ([]orderbookdomain.Order, error) {
 	if o.activeOrdersCache != nil {
 		if entry, found := o.activeOrdersCache.get(orderbook.PoolID, ownerAddress); found {
+			o.logger.Debug("Cache hit for active orders",
+				zap.Uint64("pool_id", orderbook.PoolID),
+				zap.String("user_address", ownerAddress))
 			return entry.Orders, nil
 		}
+		o.logger.Debug("Cache miss for active orders",
+			zap.Uint64("pool_id", orderbook.PoolID),
+			zap.String("user_address", ownerAddress))
 	}

 	rawOrders, err := o.getRawActiveOrdersFromChain(ctx, orderbook, ownerAddress)
 	if err != nil {
 		return nil, err
 	}

 	// Cache the raw orders if successful
 	if o.activeOrdersCache != nil {
 		o.activeOrdersCache.set(orderbook.PoolID, ownerAddress, activeOrdersCacheEntry{
 			Orders: rawOrders,
 		})
+		o.logger.Debug("Cached active orders",
+			zap.Uint64("pool_id", orderbook.PoolID),
+			zap.String("user_address", ownerAddress),
+			zap.Int("order_count", len(rawOrders)))
 	}

 	return rawOrders, nil
 }
```

52-57: Enhance error logging with more context.

Consider adding the cache size to the error message for better debugging.

```diff
-		logger.Error("failed to create active orders cache", zap.Error(err))
+		logger.Error("failed to create active orders cache", zap.Int("size", defaultCacheSize), zap.Error(err))
```

94-98: Consider optimizing cache invalidation strategy.

The current implementation invalidates all cache entries for a pool when its state changes. As noted in the comment, if active orders time is important, consider implementing a more granular invalidation strategy.


327-349: Document the nil return case for zero count.

Consider adding a comment explaining that nil is returned when count is 0, as this is a valid case rather than an error condition.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ba99d49 and d8abd6b.

⛔ Files ignored due to path filters (2)
  • orderbook/usecase/export_test.go is excluded by !**/*_test.go
  • orderbook/usecase/orderbook_usecase_test.go is excluded by !**/*_test.go
📒 Files selected for processing (2)
  • orderbook/usecase/active_orders_cache.go (1 hunks)
  • orderbook/usecase/orderbook_usecase.go (8 hunks)
🧰 Additional context used
🪛 GitHub Check: Run linter
orderbook/usecase/active_orders_cache.go

[failure] 86-86:
func (*activeOrdersCache).clear is unused (unused)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Summary
🔇 Additional comments (20)
orderbook/usecase/active_orders_cache.go (13)

11-20: LGTM! Well-designed composite key and cache entry types.

The composite key design effectively combines pool ID and user address for unique identification, while the cache entry type maintains a clean and focused structure.


22-41: LGTM! Thread-safe cache implementation with efficient bulk invalidation support.

The implementation includes:

  • Proper thread safety using RWMutex
  • Efficient tracking of pool entries for bulk invalidation
  • Appropriate error handling during initialization

43-66: LGTM! Thread-safe get and set operations with proper pool entry tracking.

The implementation correctly handles thread safety and maintains the pool entries map for efficient invalidation.


68-83: LGTM! Efficient pool-wide cache invalidation.

The implementation correctly handles thread safety and efficiently removes all entries associated with a specific pool.


11-15: LGTM!

The composite key type is well-designed with appropriate field types and documentation.


17-20: LGTM!

The cache entry type is well-designed with appropriate field type and documentation.


22-28: LGTM!

The cache type is well-designed with appropriate thread safety mechanisms and efficient bulk invalidation support.


30-83: LGTM!

The cache methods are well-implemented with proper thread safety and error handling.


11-20: LGTM! Well-designed cache key and entry types.

The composite key structure efficiently combines pool ID and user address, while the entry type maintains a clean, focused design.


22-41: LGTM! Thread-safe cache implementation with efficient bulk invalidation support.

The implementation includes:

  • Thread safety using RWMutex
  • Efficient pool-based invalidation tracking
  • Proper error handling during initialization

43-66: LGTM! Thread-safe get/set operations with proper tracking.

The implementation correctly:

  • Uses appropriate read/write locks
  • Maintains pool entry tracking for invalidation

68-83: LGTM! Efficient pool-based invalidation.

The implementation correctly handles bulk invalidation of all entries associated with a pool.


85-92: Consider if the clear method is needed.

The static analysis indicates this method is currently unused. If it's intended for future use or testing, consider documenting its purpose.

🧰 Tools
🪛 GitHub Check: Run linter

[failure] 86-86:
func (*activeOrdersCache).clear is unused (unused)

orderbook/usecase/orderbook_usecase.go (3)

30-30: LGTM! Robust cache integration with graceful fallback.

The implementation properly handles cache initialization failures by continuing without the cache, ensuring the system remains operational even if caching is unavailable.

Also applies to: 41-41, 52-57

94-98: LGTM! Appropriate cache invalidation on pool state changes.

The implementation correctly invalidates cache entries when pool state changes, with appropriate nil checks and placement, maintaining cache consistency.

305-325: LGTM! Well-implemented cache interaction pattern.

The implementation follows best practices:

  • Checks the cache before expensive chain operations
  • Updates the cache only after successful fetches