
feat: Add tests for in app rating scenarios#4128

Open
amit429 wants to merge 19 commits into FYLE-in-app-rating from FYLE-in-app-rating-tests

Conversation

@amit429
Contributor

@amit429 amit429 commented Mar 10, 2026

Clickup

https://app.clickup.com

Code Coverage

Please add code coverage here

UI Preview

Please add screenshots for UI changes

Summary by CodeRabbit

  • Tests
    • Added a comprehensive test suite covering the app rating flow: eligibility checks, age and expense validations, prompt history retrieval/persistence, cooldown logic for native prompts and dismissals, pre-prompt popover behavior and UI configuration, full prompt attempt/show flows, tracking/timing of notifications, and resilient handling of failures to ensure stability.

@github-actions github-actions bot added the size/L Large PR label Mar 10, 2026
@github-actions

PR description must contain a link to a ClickUp (case-insensitive)

@coderabbitai

coderabbitai bot commented Mar 10, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

Walkthrough

A comprehensive Jasmine/Karma test suite was added for AppRatingService, covering service instantiation, eligibility checks, age and expense logic, prompt history retrieval and cooldowns, full prompt flows (pre-prompt popover and native prompt), interaction recording, and error propagation with extensive mocked dependencies. (48 words)

Changes

Cohort / File(s) Summary
AppRatingService Test Suite
src/app/core/services/app-rating.service.spec.ts
Adds an extensive set of unit tests validating service creation, checkEligibility, isUserOldEnough, hasEnoughExpenses, getPromptHistory, cooldown checks (isNativePromptCooldownMet, isDismissalCooldownMet), full prompt flows (attemptRatingPrompt, pre-prompt popover, native prompt), interaction recording, error propagation handling, and integrations with multiple mocked services and a mocked PopupAlertComponent.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

I test with style, a thunderous swing,
Mocks line up — they dance, they sing,
Prompts bow down, histories align,
Jasmine chants, the green tests shine,
AppRating stands tall — Rajni approves.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Description check (⚠️ Warning): The PR description contains only template placeholders with no actual content. Required sections such as the ClickUp link, code coverage details, and UI preview remain as placeholder text. Resolution: fill in the template with real content: provide the actual ClickUp issue link, add code coverage metrics or results, and clarify whether a UI preview applies to test-only changes.

✅ Passed checks (2 passed)

  • Title check (✅ Passed): The PR title clearly summarizes the main change: adding comprehensive tests for in-app rating scenarios, which directly matches the raw summary showing a new test suite for AppRatingService.
  • Docstring Coverage (✅ Passed): No functions were found in the changed files to evaluate, so the docstring coverage check was skipped.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch FYLE-in-app-rating-tests
📝 Coding Plan
  • Generate coding plan for human review comments

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@amit429 amit429 requested review from mvaishnavi and removed request for Chethan-Fyle and arjunaj5 March 10, 2026 06:38

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/app/core/services/app-rating.service.spec.ts`:
- Around line 85-91: The test currently asserts a hardcoded true for
service.checkEligibility() instead of exercising its real logic; update the spec
to mock the collaborator/helper methods that checkEligibility() depends on (use
the same mocked services/spies used elsewhere in the suite) and drive those
mocks to produce both eligible and ineligible scenarios, then assert the
observable emits true or false accordingly; replace the single hardcoded
expect(result).toBeTrue() with separate test cases that set the collaborators'
return values (via spies/mocks) and verify checkEligibility() emits the correct
boolean for each predicate combination.
- Around line 339-401: The tests never stub or assert the native API call; add a
spy for AppReview.requestReview() in the test setup and use it in each relevant
spec: for the "rate" cases have spyOn(AppReview,
'requestReview').and.returnValue(Promise.resolve()) and assert
AppReview.requestReview was called when popover resolves to { action: 'rate' }
(to cover triggerNativeReview()/attemptRatingPrompt()), and for the failure spec
replace the async test with fakeAsync, stub AppReview.requestReview to return
Promise.reject(new Error('fail')), call service.attemptRatingPrompt(), then use
tick() to flush the promise and assert the error path still records
nativePrompts; reference triggerNativeReview(), attemptRatingPrompt(), and
AppReview.requestReview() when making these changes.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: f3d933a0-d7a7-41cb-b823-e2ade1c9c6bb

📥 Commits

Reviewing files that changed from the base of the PR and between d10dcb3 and 0830f29.

📒 Files selected for processing (1)
  • src/app/core/services/app-rating.service.spec.ts

Comment on lines +339 to +401
it('should trigger native review and record nativePrompts when user taps "Leave a rating"', fakeAsync(() => {
  popoverSpy.onWillDismiss.and.resolveTo({ data: { action: 'rate' }, role: undefined });
  featureConfigService.getConfiguration.and.returnValue(
    of({ value: { nativePrompts: [], dismissals: [] } } as FeatureConfig<AppRatingHistory>),
  );

  service.attemptRatingPrompt();
  tick();

  expect(popoverSpy.present).toHaveBeenCalledTimes(1);
  expect(trackingService.eventTrack).toHaveBeenCalledWith('In App Rating Accepted', {});
  expect(featureConfigService.saveConfiguration).toHaveBeenCalledTimes(1);
  const savedValue = featureConfigService.saveConfiguration.calls.mostRecent().args[0].value as AppRatingHistory;
  expect(savedValue.nativePrompts.length).toBe(1);
  expect(savedValue.dismissals.length).toBe(0);
}));

it('should record dismissal when user taps "Not now"', fakeAsync(() => {
  popoverSpy.onWillDismiss.and.resolveTo({ data: { action: 'dismiss' }, role: undefined });
  featureConfigService.getConfiguration.and.returnValue(
    of({ value: { nativePrompts: [], dismissals: [] } } as FeatureConfig<AppRatingHistory>),
  );

  service.attemptRatingPrompt();
  tick();

  expect(popoverSpy.present).toHaveBeenCalledTimes(1);
  expect(trackingService.eventTrack).toHaveBeenCalledWith('In App Rating Dismissed', {});
  expect(featureConfigService.saveConfiguration).toHaveBeenCalledTimes(1);
  const savedValue = featureConfigService.saveConfiguration.calls.mostRecent().args[0].value as AppRatingHistory;
  expect(savedValue.nativePrompts.length).toBe(0);
  expect(savedValue.dismissals.length).toBe(1);
}));

it('should record dismissal when popover is dismissed without data', fakeAsync(() => {
  popoverSpy.onWillDismiss.and.resolveTo({ data: undefined, role: undefined });
  featureConfigService.getConfiguration.and.returnValue(
    of({ value: { nativePrompts: [], dismissals: [] } } as FeatureConfig<AppRatingHistory>),
  );

  service.attemptRatingPrompt();
  tick();

  expect(trackingService.eventTrack).toHaveBeenCalledWith('In App Rating Dismissed', {});
  expect(featureConfigService.saveConfiguration).toHaveBeenCalledTimes(1);
  const savedValue = featureConfigService.saveConfiguration.calls.mostRecent().args[0].value as AppRatingHistory;
  expect(savedValue.dismissals.length).toBe(1);
}));

it('should handle native review failure gracefully and still record nativePrompts', async () => {
  popoverSpy.onWillDismiss.and.resolveTo({ data: { action: 'rate' }, role: undefined });
  featureConfigService.getConfiguration.and.returnValue(
    of({ value: { nativePrompts: [], dismissals: [] } } as FeatureConfig<AppRatingHistory>),
  );

  service.attemptRatingPrompt();
  await new Promise((resolve) => setTimeout(resolve, 500));

  expect(trackingService.eventTrack).toHaveBeenCalledWith('In App Rating Accepted', {});
  expect(featureConfigService.saveConfiguration).toHaveBeenCalledTimes(1);
  const savedValue = featureConfigService.saveConfiguration.calls.mostRecent().args[0].value as AppRatingHistory;
  expect(savedValue.nativePrompts.length).toBe(1);
});


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Native review trigger used inside AppRatingService =="
rg -n -C2 '\b(requestReview|requestAppReview|openStoreListing|StoreReview|AppRate)\b' \
  src/app/core/services/app-rating.service.ts

echo
echo "== Existing spies/mocks for that trigger in the spec =="
rg -n -C2 'spyOn|createSpyObj|rejectWith|Promise\.reject|throwError' \
  src/app/core/services/app-rating.service.spec.ts

echo
echo "== Real wall-clock waits in the spec =="
rg -n -C2 'setTimeout\(resolve,\s*500\)' \
  src/app/core/services/app-rating.service.spec.ts

Repository: fylein/fyle-mobile-app

Length of output: 4176


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== AppReview import and usage in the spec file =="
rg -n -C3 'AppReview|appReview' src/app/core/services/app-rating.service.spec.ts

echo
echo "== Full beforeEach setup in spec to check for AppReview spy =="
sed -n '33,80p' src/app/core/services/app-rating.service.spec.ts

echo
echo "== The failure test case in full context =="
sed -n '380,405p' src/app/core/services/app-rating.service.spec.ts

echo
echo "== triggerNativeReview implementation in service =="
sed -n '201,210p' src/app/core/services/app-rating.service.ts

Repository: fylein/fyle-mobile-app

Length of output: 115


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== beforeEach setup (lines 33-80) =="
sed -n '33,80p' src/app/core/services/app-rating.service.spec.ts

echo
echo "== The failure test case (lines 380-405) =="
sed -n '380,405p' src/app/core/services/app-rating.service.spec.ts

echo
echo "== triggerNativeReview implementation (lines 201-210) =="
sed -n '201,210p' src/app/core/services/app-rating.service.ts

echo
echo "== Check imports at top of spec file =="
head -20 src/app/core/services/app-rating.service.spec.ts

Repository: fylein/fyle-mobile-app

Length of output: 6085


The native review API, brother—it dances on stage but takes no bow.

The villain here is simple: AppReview.requestReview() walks the halls of triggerNativeReview() (line 203), yet your test chamber holds no spy to witness its grandeur. Line 387 announces a "failure" test with style—should handle native review failure gracefully—but then betrays that promise. No mock rejection flows through AppReview.requestReview(), and line 395's setTimeout(resolve, 500) is mere wall-clock theater, not the deterministic command that fakeAsync and tick() demand.

The earlier tests (lines 339–375) perform their duty to tracking and history, but they never confirm the native review call itself was invoked. The error handler in the service (lines 204–208) waits for a test to prove it, yet none arrives.

Stub AppReview.requestReview() in your setup, assert it fires on "rate" action, and make it reject under fakeAsync/tick() to vindicate the failure test. Until then, these specs are shadows dancing on the wall—impressive in form, hollow in substance.
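The failure path the review describes can be reduced to a small, deterministic sketch. This is not the service's actual implementation; `requestReviewSketch` and `recordPrompt` are illustrative stand-ins for the Capacitor AppReview call and the history write, shown only to make concrete why a rejected promise should still record the attempt:

```typescript
// Sketch: a triggerNativeReview-style helper that records the prompt attempt
// whether the native review call resolves or rejects. The injected functions
// are hypothetical; the real service calls the AppReview plugin directly.
async function triggerNativeReviewSketch(
  requestReviewSketch: () => Promise<void>,
  recordPrompt: () => void,
): Promise<void> {
  try {
    await requestReviewSketch();
  } catch {
    // Swallow the failure: a broken store dialog should not crash the app.
  }
  recordPrompt(); // recorded on both the success and the failure path
}

// Both outcomes can be driven without wall-clock waits, which is the same
// determinism fakeAsync/tick() gives a spec.
async function demo(): Promise<[number, number]> {
  let recorded = 0;
  await triggerNativeReviewSketch(() => Promise.resolve(), () => recorded++);
  await triggerNativeReviewSketch(() => Promise.reject(new Error('fail')), () => recorded++);
  return [recorded, 2];
}
```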

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/core/services/app-rating.service.spec.ts` around lines 339 - 401, The
tests never stub or assert the native API call; add a spy for
AppReview.requestReview() in the test setup and use it in each relevant spec:
for the "rate" cases have spyOn(AppReview,
'requestReview').and.returnValue(Promise.resolve()) and assert
AppReview.requestReview was called when popover resolves to { action: 'rate' }
(to cover triggerNativeReview()/attemptRatingPrompt()), and for the failure spec
replace the async test with fakeAsync, stub AppReview.requestReview to return
Promise.reject(new Error('fail')), call service.attemptRatingPrompt(), then use
tick() to flush the promise and assert the error path still records
nativePrompts; reference triggerNativeReview(), attemptRatingPrompt(), and
AppReview.requestReview() when making these changes.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (1)
src/app/core/services/app-rating.service.spec.ts (1)

85-91: ⚠️ Potential issue | 🟠 Major

Don't codify the temporary checkEligibility() bypass.

This spec blesses today's "all checks commented out" shortcut, boss. Please drive the real gates through their spies and assert both eligible and ineligible outcomes, otherwise this suite breaks the moment the production checks return.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/core/services/app-rating.service.spec.ts` around lines 85 - 91, The
current spec hardcodes a passing "testing mode" bypass for
service.checkEligibility(); remove that temporary test and replace it with two
focused tests that drive the real eligibility logic via spies/mocks: one test
should spy on the service's dependencies (e.g., any injected services used by
checkEligibility()) to simulate conditions that should make checkEligibility()
emit true and assert expect(result).toBeTrue(), and a second test should spy
those same dependencies to simulate ineligible conditions and assert
expect(result).toBeFalse(); ensure both tests subscribe to
service.checkEligibility() (or use async/await with firstValueFrom) and properly
restore or reset spies between tests.
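The two-sided testing the comment asks for can be reasoned about as a pure predicate combination. A sketch, assuming checkEligibility() ANDs the results of its collaborators (the predicate names mirror the suite's methods but are used here as plain functions, not the real service):

```typescript
// Sketch: eligibility as the conjunction of independently mockable predicates.
// In the real spec each function would be a Jasmine spy on an injected service;
// plain functions suffice to drive both outcomes here.
interface EligibilityChecks {
  isUserOldEnough: () => boolean;
  hasEnoughExpenses: () => boolean;
  isNativePromptCooldownMet: () => boolean;
  isDismissalCooldownMet: () => boolean;
}

function checkEligibilitySketch(c: EligibilityChecks): boolean {
  return (
    c.isUserOldEnough() &&
    c.hasEnoughExpenses() &&
    c.isNativePromptCooldownMet() &&
    c.isDismissalCooldownMet()
  );
}

const allTrue: EligibilityChecks = {
  isUserOldEnough: () => true,
  hasEnoughExpenses: () => true,
  isNativePromptCooldownMet: () => true,
  isDismissalCooldownMet: () => true,
};

// One spec per direction: all gates open, and any single gate closed.
const eligible = checkEligibilitySketch(allTrue);
const ineligible = checkEligibilitySketch({ ...allTrue, hasEnoughExpenses: () => false });
```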
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/app/core/services/app-rating.service.spec.ts`:
- Around line 107-118: Add a deterministic test for the 30-day cutoff: create a
test in app-rating.service.spec.ts that freezes the clock (e.g.,
jasmine.clock()/equivalent), builds an ExtendedOrgUser based on apiEouRes with
ou.created_at set to new Date() minus 30 days, asserts
service.isUserOldEnough(eou) is true, and then unfreezes/restores the clock;
reference the existing isUserOldEnough function and apiEouRes so the new test
mirrors the 29/31-day cases but validates the exact 30-day boundary.
- Around line 219-230: Add two edge-case specs that assert exact-boundary
behavior: for isNativePromptCooldownMet add a test that sets a nativePrompts
entry to exactly 180 days ago (new Date().setDate(new Date().getDate() - 180))
and expect service.isNativePromptCooldownMet(history) to be true; for
isDismissalCooldownMet (or the dismissal-related test block around the 257-268
region) add a test that sets a dismissals entry to exactly 60 days ago and
expect service.isDismissalCooldownMet(history) to be true. Reference the
existing AppRatingHistory type and the methods isNativePromptCooldownMet and
isDismissalCooldownMet when adding these specs so the tests cover the >=
boundary behavior precisely.
- Around line 428-440: The test currently only asserts that no exception is
thrown but doesn't prove the code reached the failing saveConfiguration path;
after calling service.attemptRatingPrompt() and advancing timers (tick()), add
an assertion that featureConfigService.saveConfiguration was invoked (e.g.
expect(featureConfigService.saveConfiguration).toHaveBeenCalled() or
toHaveBeenCalledTimes(1)) so the spec verifies the swallowed-error branch of
attemptRatingPrompt() actually executed.
- Around line 133-145: The tests currently only assert count mapping for
service.hasEnoughExpenses() but do not ensure the correct expense filter is
used; update the specs that stub expensesService.getExpenseStats so they both
(a) return the mocked Observable for the exact filtered query you expect the
production code to request (e.g. include the same filter object used by
hasEnoughExpenses) and (b) assert that expensesService.getExpenseStats was
called with that specific filter (use
expect(expensesService.getExpenseStats).toHaveBeenCalledWith(expectedFilter))
before asserting the boolean result; reference the
expensesService.getExpenseStats spy and the hasEnoughExpenses method to locate
where to add the pinned filter and call assertion.

---

Duplicate comments:
In `@src/app/core/services/app-rating.service.spec.ts`:
- Around line 85-91: The current spec hardcodes a passing "testing mode" bypass
for service.checkEligibility(); remove that temporary test and replace it with
two focused tests that drive the real eligibility logic via spies/mocks: one
test should spy on the service's dependencies (e.g., any injected services used
by checkEligibility()) to simulate conditions that should make
checkEligibility() emit true and assert expect(result).toBeTrue(), and a second
test should spy those same dependencies to simulate ineligible conditions and
assert expect(result).toBeFalse(); ensure both tests subscribe to
service.checkEligibility() (or use async/await with firstValueFrom) and properly
restore or reset spies between tests.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 7155d305-b98d-481c-8183-41ff668c8358

📥 Commits

Reviewing files that changed from the base of the PR and between 0830f29 and 2f4e4ae.

📒 Files selected for processing (1)
  • src/app/core/services/app-rating.service.spec.ts

Comment on lines +107 to +118
it('should return false when user was created exactly 29 days ago', () => {
  const date = new Date();
  date.setDate(date.getDate() - 29);
  const eou = { ...apiEouRes, ou: { ...apiEouRes.ou, created_at: date } } as ExtendedOrgUser;
  expect(service.isUserOldEnough(eou)).toBeFalse();
});

it('should return true when user was created exactly 31 days ago', () => {
  const date = new Date();
  date.setDate(date.getDate() - 31);
  const eou = { ...apiEouRes, ou: { ...apiEouRes.ou, created_at: date } } as ExtendedOrgUser;
  expect(service.isUserOldEnough(eou)).toBeTrue();


🧹 Nitpick | 🔵 Trivial

Cover the exact 30-day cutoff.

The contract here is >= 30 days, Thalaiva. Testing 29 and 31 days misses the real edge, so a > regression can slip through; please freeze the clock and add an exact 30-day case.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/core/services/app-rating.service.spec.ts` around lines 107 - 118, Add
a deterministic test for the 30-day cutoff: create a test in
app-rating.service.spec.ts that freezes the clock (e.g.,
jasmine.clock()/equivalent), builds an ExtendedOrgUser based on apiEouRes with
ou.created_at set to new Date() minus 30 days, asserts
service.isUserOldEnough(eou) is true, and then unfreezes/restores the clock;
reference the existing isUserOldEnough function and apiEouRes so the new test
mirrors the 29/31-day cases but validates the exact 30-day boundary.
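The boundary concern above comes down to plain date arithmetic. A sketch, assuming the service floors the day difference from created_at and compares with >= 30 (the helper names are illustrative, and fixed UTC timestamps replace the frozen clock to keep the arithmetic deterministic):

```typescript
// Sketch: day-difference helper and the >= 30 eligibility predicate.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function daysBetween(from: Date, to: Date): number {
  return Math.floor((to.getTime() - from.getTime()) / MS_PER_DAY);
}

function isOldEnoughSketch(createdAt: Date, now: Date): boolean {
  return daysBetween(createdAt, now) >= 30;
}

const now = new Date('2026-03-10T00:00:00Z');
// Exactly 30 days: true under >=, but false under a buggy > comparison,
// which is the regression the exact-boundary spec is meant to catch.
const exact = isOldEnoughSketch(new Date('2026-02-08T00:00:00Z'), now);
const under = isOldEnoughSketch(new Date('2026-02-09T00:00:00Z'), now); // 29 days
```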

Comment on lines +133 to +145
it('should return true when expense count is >= 5', (done) => {
  expensesService.getExpenseStats.and.returnValue(of({ data: { count: 10, total_amount: 5000 } }));
  service.hasEnoughExpenses().subscribe((result) => {
    expect(result).toBeTrue();
    done();
  });
});

it('should return true when expense count is exactly 5', (done) => {
  expensesService.getExpenseStats.and.returnValue(of({ data: { count: 5, total_amount: 1000 } }));
  service.hasEnoughExpenses().subscribe((result) => {
    expect(result).toBeTrue();
    done();


🧹 Nitpick | 🔵 Trivial

Pin the getExpenseStats() filter too.

These specs only verify count-to-boolean mapping. If the state query changes, the suite still passes while the rating prompt starts counting the wrong expense set.

🧪 Suggested assertion
       service.hasEnoughExpenses().subscribe((result) => {
         expect(result).toBeTrue();
+        expect(expensesService.getExpenseStats).toHaveBeenCalledOnceWith({
+          state: 'in.(COMPLETE,APPROVER_PENDING,APPROVED,PAYMENT_PENDING,PAYMENT_PROCESSING,PAID)',
+        });
         done();
       });
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

it('should return true when expense count is >= 5', (done) => {
  expensesService.getExpenseStats.and.returnValue(of({ data: { count: 10, total_amount: 5000 } }));
  service.hasEnoughExpenses().subscribe((result) => {
    expect(result).toBeTrue();
    expect(expensesService.getExpenseStats).toHaveBeenCalledOnceWith({
      state: 'in.(COMPLETE,APPROVER_PENDING,APPROVED,PAYMENT_PENDING,PAYMENT_PROCESSING,PAID)',
    });
    done();
  });
});

it('should return true when expense count is exactly 5', (done) => {
  expensesService.getExpenseStats.and.returnValue(of({ data: { count: 5, total_amount: 1000 } }));
  service.hasEnoughExpenses().subscribe((result) => {
    expect(result).toBeTrue();
    expect(expensesService.getExpenseStats).toHaveBeenCalledOnceWith({
      state: 'in.(COMPLETE,APPROVER_PENDING,APPROVED,PAYMENT_PENDING,PAYMENT_PROCESSING,PAID)',
    });
    done();
  });
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/core/services/app-rating.service.spec.ts` around lines 133 - 145, The
tests currently only assert count mapping for service.hasEnoughExpenses() but do
not ensure the correct expense filter is used; update the specs that stub
expensesService.getExpenseStats so they both (a) return the mocked Observable
for the exact filtered query you expect the production code to request (e.g.
include the same filter object used by hasEnoughExpenses) and (b) assert that
expensesService.getExpenseStats was called with that specific filter (use
expect(expensesService.getExpenseStats).toHaveBeenCalledWith(expectedFilter))
before asserting the boolean result; reference the
expensesService.getExpenseStats spy and the hasEnoughExpenses method to locate
where to add the pinned filter and call assertion.

Comment on lines +219 to +230
it('should return true when last native prompt was more than 180 days ago', () => {
  const oldDate = new Date();
  oldDate.setDate(oldDate.getDate() - 200);
  const history: AppRatingHistory = { nativePrompts: [oldDate.toISOString()], dismissals: [] };
  expect(service.isNativePromptCooldownMet(history)).toBeTrue();
});

it('should return false when last native prompt was less than 180 days ago', () => {
  const recentDate = new Date();
  recentDate.setDate(recentDate.getDate() - 30);
  const history: AppRatingHistory = { nativePrompts: [recentDate.toISOString()], dismissals: [] };
  expect(service.isNativePromptCooldownMet(history)).toBeFalse();


🧹 Nitpick | 🔵 Trivial

Add the exact cooldown edges as separate specs.

Right now you only test both sides of the window, boss. Exact 180-day and exact 60-day cases are the real contract boundaries, and they catch > vs >= slips cleanly.

Also applies to: 257-268

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/core/services/app-rating.service.spec.ts` around lines 219 - 230, Add
two edge-case specs that assert exact-boundary behavior: for
isNativePromptCooldownMet add a test that sets a nativePrompts entry to exactly
180 days ago (new Date().setDate(new Date().getDate() - 180)) and expect
service.isNativePromptCooldownMet(history) to be true; for
isDismissalCooldownMet (or the dismissal-related test block around the 257-268
region) add a test that sets a dismissals entry to exactly 60 days ago and
expect service.isDismissalCooldownMet(history) to be true. Reference the
existing AppRatingHistory type and the methods isNativePromptCooldownMet and
isDismissalCooldownMet when adding these specs so the tests cover the >=
boundary behavior precisely.
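The >= cooldown contract the comment describes can be sketched as a single helper over the history's ISO timestamps. This assumes the service compares whole elapsed days against the threshold; the function name and the fixed "now" are illustrative, not the real implementation:

```typescript
// Sketch: generic cooldown check over an AppRatingHistory-style ISO string array.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function isCooldownMetSketch(isoDates: string[], cooldownDays: number, now: Date): boolean {
  // No prior event means no cooldown to wait out.
  if (isoDates.length === 0) return true;
  const latest = Math.max(...isoDates.map((d) => new Date(d).getTime()));
  const elapsedDays = Math.floor((now.getTime() - latest) / MS_PER_DAY);
  return elapsedDays >= cooldownDays; // >= : exactly 180 (or 60) days passes
}

const now = new Date('2026-03-10T00:00:00Z');
// 2025-09-11 is exactly 180 days before 2026-03-10: the boundary case.
const exactly180 = isCooldownMetSketch(['2025-09-11T00:00:00Z'], 180, now);
const recent = isCooldownMetSketch(['2026-02-08T00:00:00Z'], 180, now); // 30 days ago
const empty = isCooldownMetSketch([], 180, now);
```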


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/app/core/services/app-rating.service.spec.ts`:
- Around line 133-147: The first test for hasEnoughExpenses() duplicates the
"exactly 10" case; update the test that intends to verify the ">= 10" behavior
by changing the mocked expensesService.getExpenseStats return value to use a
count greater than 10 (e.g., count: 15) and keep the expectation
service.hasEnoughExpenses() toBeTrue(), referencing the existing test that calls
expensesService.getExpenseStats and subscribes to service.hasEnoughExpenses() so
the two specs exercise distinct scenarios.
- Around line 409-424: Remove the redundant stub of
featureConfigService.saveConfiguration in the test: the call to
featureConfigService.saveConfiguration.and.returnValue(of(undefined)) inside the
'should create popover with correct component and props' spec is already defined
in the beforeEach setup, so delete that line to avoid duplicate stubbing; keep
the rest of the test that calls service.attemptRatingPrompt(), inspects
popoverController.create, and asserts component/componentProps.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 3b334b53-7780-4f01-8252-8c597bdb7224

📥 Commits

Reviewing files that changed from the base of the PR and between 2f4e4ae and 19b9501.

📒 Files selected for processing (1)
  • src/app/core/services/app-rating.service.spec.ts

Comment on lines +133 to +147
it('should return true when expense count is >= 10', (done) => {
expensesService.getExpenseStats.and.returnValue(of({ data: { count: 10, total_amount: 5000 } }));
service.hasEnoughExpenses().subscribe((result) => {
expect(result).toBeTrue();
done();
});
});

it('should return true when expense count is exactly 10', (done) => {
expensesService.getExpenseStats.and.returnValue(of({ data: { count: 10, total_amount: 1000 } }));
service.hasEnoughExpenses().subscribe((result) => {
expect(result).toBeTrue();
done();
});
});


🧹 Nitpick | 🔵 Trivial

Same punch delivered twice, Thalaiva - style waste!

Both tests use count: 10. The first one (line 133-139) claims to test >= 10 but uses exactly 10, making it identical to the second test (lines 141-147). For the "greater than or equal" scenario, use a count like 15 to differentiate.

♻️ Proposed differentiation
   describe('hasEnoughExpenses', () => {
-    it('should return true when expense count is >= 10', (done) => {
-      expensesService.getExpenseStats.and.returnValue(of({ data: { count: 10, total_amount: 5000 } }));
+    it('should return true when expense count is greater than 10', (done) => {
+      expensesService.getExpenseStats.and.returnValue(of({ data: { count: 15, total_amount: 5000 } }));
       service.hasEnoughExpenses().subscribe((result) => {
         expect(result).toBeTrue();
         done();
       });
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/core/services/app-rating.service.spec.ts` around lines 133 - 147, The
first test for hasEnoughExpenses() duplicates the "exactly 10" case; update the
test that intends to verify the ">= 10" behavior by changing the mocked
expensesService.getExpenseStats return value to use a count greater than 10
(e.g., count: 15) and keep the expectation service.hasEnoughExpenses()
toBeTrue(), referencing the existing test that calls
expensesService.getExpenseStats and subscribes to service.hasEnoughExpenses() so
the two specs exercise distinct scenarios.


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
src/app/core/services/app-rating.service.spec.ts (1)

143-179: 🧹 Nitpick | 🔵 Trivial

One more step for full glory, boss!

For consistency with isUserOldEnough tests, add an exact 30-day boundary test here too. The >= 30 contract should be validated at the precise edge.

🧪 Suggested test addition
it('should return true when first mobile login was exactly 30 days ago', (done) => {
  const exactDate = new Date();
  exactDate.setDate(exactDate.getDate() - 30);
  deviceService.getDeviceInfo.and.returnValue(of({ operatingSystem: 'ios' } as any));
  appVersionService.getFirstMobileLoginDate.and.returnValue(of(exactDate));

  service.isUserOldEnoughOnMobile().subscribe((result) => {
    expect(result).toBeTrue();
    done();
  });
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/core/services/app-rating.service.spec.ts` around lines 143 - 179, Add
a boundary test for isUserOldEnoughOnMobile that verifies a first mobile login
exactly 30 days ago returns true: create a new spec (it block) that sets
deviceService.getDeviceInfo to return an OS (e.g., 'ios'), sets
appVersionService.getFirstMobileLoginDate to return a Date instance with
setDate(now - 30), subscribes to service.isUserOldEnoughOnMobile(), and asserts
the result is true (and calls done()); this mirrors the existing tests that use
deviceService.getDeviceInfo and appVersionService.getFirstMobileLoginDate.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: fbfdf305-2062-4276-a999-723fedcd9fc6

📥 Commits

Reviewing files that changed from the base of the PR and between 19b9501 and bd43ebd.

📒 Files selected for processing (1)
  • src/app/core/services/app-rating.service.spec.ts

@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (5)
src/app/core/services/app-rating.service.spec.ts (5)

155-167: 🧹 Nitpick | 🔵 Trivial

En vazhi thani vazhi ("my way is a unique way"), but the exact 30-day boundary is missing, Thalaiva!

The contract is >= 30 days. Testing 29 days (false) and 31 days (true) is good, but the exact 30-day case — the real boundary — is not tested. A > vs >= regression can slip through undetected.

🎯 Add exact 30-day boundary test
+    it('should return true when user was created exactly 30 days ago', () => {
+      const date = new Date();
+      date.setDate(date.getDate() - 30);
+      const eou = { ...apiEouRes, ou: { ...apiEouRes.ou, created_at: date } } as ExtendedOrgUser;
+      expect(service.isUserOldEnough(eou)).toBeTrue();
+    });
+
     it('should return true when user was created exactly 31 days ago', () => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/core/services/app-rating.service.spec.ts`, around lines 155-167, add
a unit test for the exact 30-day boundary to ensure the contract (>= 30 days) is
enforced: create a new test case in app-rating.service.spec.ts that constructs a
date exactly 30 days ago, builds an ExtendedOrgUser (reuse apiEouRes and set
ou.created_at = that date), and assert service.isUserOldEnough(eou) returns
true; reference the existing tests for 29 and 31 days and the service method
isUserOldEnough to place and model the new test consistently.

527-540: 🧹 Nitpick | 🔵 Trivial

Mind it! The save-failure spec doesn't prove it reached saveConfiguration(), Thalaiva!

This test only checks "no throw". If the flow bails out before the failing call, the test still passes. Add an assertion that saveConfiguration() was invoked so the swallowed-error path is truly under test.

🎯 Proposed fix
       expect(() => {
         service.attemptRatingPrompt();
         tick();
       }).not.toThrow();
+      expect(featureConfigService.saveConfiguration).toHaveBeenCalled();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/core/services/app-rating.service.spec.ts`, around lines 527-540, the
test currently only verifies no exception but doesn't ensure the flow reached
the failing call; update the spec for 'recordInteraction error handling' to
assert that featureConfigService.saveConfiguration was actually invoked when
running service.attemptRatingPrompt (which triggers the save path). After
configuring spies (spyOn(service, 'checkEligibility')...,
featureConfigService.getConfiguration...,
featureConfigService.saveConfiguration...), call service.attemptRatingPrompt()
and tick(), then add an expectation like
expect(featureConfigService.saveConfiguration).toHaveBeenCalled() to prove the
swallowed-error branch was exercised.

218-233: 🧹 Nitpick | 🔵 Trivial

One test, two names, same punch — style waste, Thalaiva!

Both tests use count: 10 making them functionally identical. The first test (lines 219-225) claims to test >= 10 but should use a count greater than 10 (e.g., 15) to differentiate from the "exactly 10" case.

Additionally, these specs only verify count-to-boolean mapping but never assert that getExpenseStats was called with the correct filter. If the state query changes, the suite still passes while the rating prompt starts counting the wrong expense set.

♻️ Proposed fix
   describe('hasEnoughExpenses', () => {
-    it('should return true when expense count is >= 10', (done) => {
-      expensesService.getExpenseStats.and.returnValue(of({ data: { count: 10, total_amount: 5000 } }));
+    it('should return true when expense count is greater than 10', (done) => {
+      expensesService.getExpenseStats.and.returnValue(of({ data: { count: 15, total_amount: 5000 } }));
       service.hasEnoughExpenses().subscribe((result) => {
         expect(result).toBeTrue();
+        expect(expensesService.getExpenseStats).toHaveBeenCalledOnceWith({
+          state: 'in.(COMPLETE,APPROVER_PENDING,APPROVED,PAYMENT_PENDING,PAYMENT_PROCESSING,PAID)',
+        });
         done();
       });
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/core/services/app-rating.service.spec.ts`, around lines 218-233,
update the first spec in hasEnoughExpenses so it tests a count greater than 10
(e.g., 15) instead of 10 to differentiate the ">= 10" case from the "exactly 10"
case; keep the second spec at exactly 10. Also add an assertion that the spy
expensesService.getExpenseStats was called with the expected filter used by
service.hasEnoughExpenses (use toHaveBeenCalledWith on the getExpenseStats spy
and the same filter object/shape that hasEnoughExpenses supplies) so the test
verifies both the count logic and the correct query parameters.

305-316: 🧹 Nitpick | 🔵 Trivial

Mind it! Exact cooldown boundaries still missing, Thalaiva!

The tests cover values on both sides of the window (200 vs 30 days for native, 90 vs 15 days for dismissal), but the exact boundaries — 180 days and 60 days — are not tested. These edge cases catch > vs >= slip-ups cleanly.

🎯 Add exact boundary tests
+    it('should return true when last native prompt was exactly 180 days ago', () => {
+      const exactDate = new Date();
+      exactDate.setDate(exactDate.getDate() - 180);
+      const history: AppRatingHistory = { nativePrompts: [exactDate.toISOString()], dismissals: [] };
+      expect(service.isNativePromptCooldownMet(history)).toBeTrue();
+    });
+
     it('should return false when last native prompt was less than 180 days ago', () => {
+    it('should return true when last dismissal was exactly 60 days ago', () => {
+      const exactDate = new Date();
+      exactDate.setDate(exactDate.getDate() - 60);
+      const history: AppRatingHistory = { nativePrompts: [], dismissals: [exactDate.toISOString()] };
+      expect(service.isDismissalCooldownMet(history)).toBeTrue();
+    });
+
     it('should return false when last dismissal was less than 60 days ago', () => {

Also applies to: 343-355

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/core/services/app-rating.service.spec.ts`, around lines 305-316, add
precise boundary unit tests for the cooldown logic: create one test that sets a
native prompt timestamp exactly 180 days ago and assert
service.isNativePromptCooldownMet(history) returns true, and another that sets a
dismissal timestamp exactly 60 days ago and assert
service.isDismissalCooldownMet(history) returns true (use AppRatingHistory with
nativePrompts/dismissals populated by date.toISOString()); this catches
off-by-one (> vs >=) errors while referencing the isNativePromptCooldownMet and
isDismissalCooldownMet methods and the AppRatingHistory shape.
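
All of these boundary specs hand-roll the same date arithmetic. A small helper (hypothetical, not part of the spec file) would keep the 180-day and 60-day edges declarative:

```typescript
// Hypothetical helper: ISO timestamp exactly `n` days before now,
// so each boundary spec states its offset once.
function daysAgo(n: number): string {
  const d = new Date();
  d.setDate(d.getDate() - n); // Date handles month/year rollover
  return d.toISOString();
}

// History objects shaped like the AppRatingHistory used in the specs.
const atNativeBoundary = { nativePrompts: [daysAgo(180)], dismissals: [] };
const atDismissalBoundary = { nativePrompts: [], dismissals: [daysAgo(60)] };
```

Each cooldown spec then reads as a `daysAgo(180)` / `daysAgo(179)` pair, which makes a > vs >= regression obvious at a glance.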

509-524: 🧹 Nitpick | 🔵 Trivial

Duplicate stub at line 514, boss — style overload!

The featureConfigService.saveConfiguration.and.returnValue(of(undefined)) at line 514 is redundant — it's already set up in the beforeEach block at line 421. Remove it for cleaner specs.

🧹 Proposed cleanup
     it('should create popover with correct component and props', fakeAsync(() => {
       popoverSpy.onWillDismiss.and.resolveTo({ data: { action: 'dismiss' }, role: undefined });
       featureConfigService.getConfiguration.and.returnValue(
         of({ value: { nativePrompts: [], dismissals: [] } } as FeatureConfig<AppRatingHistory>),
       );
-      featureConfigService.saveConfiguration.and.returnValue(of(undefined));

       service.attemptRatingPrompt();
       tick();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/core/services/app-rating.service.spec.ts`, around lines 509-524,
remove the duplicate stub for featureConfigService.saveConfiguration in the test
"should create popover with correct component and props": delete the line
featureConfigService.saveConfiguration.and.returnValue(of(undefined)) inside
that it(...) block since the same stub is already configured in the beforeEach;
leave the rest of the test (popoverSpy setup,
featureConfigService.getConfiguration, service.attemptRatingPrompt(), and
assertions referencing popoverController.create and PopupAlertComponent)
unchanged.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 661f955e-edd9-4f44-ba19-c9296498af14

📥 Commits

Reviewing files that changed from the base of the PR and between bd43ebd and 8c53ed7.

📒 Files selected for processing (1)
  • src/app/core/services/app-rating.service.spec.ts

@github-actions

Unit Test Coverage % values
Statements : 95.06% (21575 / 22696)
Branches   : 91.01% (7992 / 8781)
Functions  : 94.37% (6310 / 6686)
Lines      : 95.08% (20891 / 21971)

Labels

size/L Large PR

1 participant