
Feature/smp granular locks v4 #1154

Draft · wants to merge 11 commits into base: main

Conversation

@sudeep-mohanty (Contributor) commented Oct 9, 2024

This PR adds support for granular locking to the FreeRTOS kernel.

Description

Granular locking introduces localized locks per kernel data group for SMP configurations. It is an optional replacement for the existing kernel locks and is controlled by a new port layer configuration, viz., portUSING_GRANULAR_LOCKS. More details about the approach can be found here.
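As a rough illustration of the idea, the sketch below models a data group that carries its own pair of locks, using a C11 atomic_flag as a stand-in spinlock. All names and types here are illustrative, not the PR's actual implementation:

```c
#include <stdatomic.h>

/* Stand-in spinlock; a real port supplies portSPINLOCK_TYPE with
 * hardware-specific acquire/release operations. */
typedef struct { atomic_flag xFlag; } Spinlock_t;

static void vLock( Spinlock_t * pxLock )
{
    /* Spin until the flag was previously clear (i.e. we acquired it). */
    while( atomic_flag_test_and_set_explicit( &( pxLock->xFlag ),
                                              memory_order_acquire ) )
    {
    }
}

static void vUnlock( Spinlock_t * pxLock )
{
    atomic_flag_clear_explicit( &( pxLock->xFlag ), memory_order_release );
}

/* With granular locking, each kernel data group (queue, event group,
 * stream buffer, timers, tasks) owns its own locks, instead of all
 * groups sharing one global kernel lock. */
typedef struct
{
    Spinlock_t xTaskSpinlock;  /* taken from task context */
    Spinlock_t xISRSpinlock;   /* taken from ISR context  */
    int iGroupState;           /* the data the locks guard */
} DataGroup_t;

static void vGroupCriticalUpdate( DataGroup_t * pxGroup, int iNewState )
{
    vLock( &( pxGroup->xTaskSpinlock ) );
    pxGroup->iGroupState = iNewState;
    vUnlock( &( pxGroup->xTaskSpinlock ) );
}
```

Because each group guards only its own state, two cores operating on unrelated groups no longer contend on a single global kernel lock.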

Test Steps

The implementation has been tested on Espressif SoC targets, viz., the ESP32, using the ESP-IDF framework.

1. Testing on an esp32 target

  • To test the implementation of the granular locking scheme on esp32, set up the ESP-IDF environment on your local machine. The steps to follow are listed in the Getting Started Guide.
  • Instead of the main ESP-IDF repository, the granular locks implementation resides in this forked repository - https://github.com/sudeep-mohanty/esp-idf.
  • Once you have cloned the forked repository and set up the ESP-IDF environment, change your branch to feat/granular_locks_tests.
  • To run the FreeRTOS unit tests, change the directory to components/freertos/test_apps/freertos where all the test cases are located.
  • Performance tests are located in the performance subfolder at the same location.
  • Set up the target device with the command idf.py set-target esp32.
  • Select the Amazon FreeRTOS SMP kernel using the menuconfig options. To do this, run idf.py menuconfig and navigate to Component config -> FreeRTOS -> Kernel -> Run the Amazon SMP FreeRTOS kernel instead (FEATURE UNDER DEVELOPMENT). Save the configuration and exit menuconfig.
  • Now, build and flash the unit test application with the command idf.py build flash monitor.
  • Once the app is up, you can enter the number of a test case to run that Unity test case.

TODO

  • Test setup for other targets (Raspberry Pi Pico)
  • Generic target tests to be uploaded to the FreeRTOS repository.

Checklist:

  • I have tested my changes. No regression in existing tests.
  • I have modified and/or added unit-tests to cover the code changes in this Pull Request.

Related Issue

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@sudeep-mohanty (Contributor, Author)

@chinglee-iot @aggarg This PR introduces the granular locking changes to the FreeRTOS kernel. Please have a look; we can discuss and iterate on changes in the context of this PR. Thank you.

cc: @ESP-Marius @Dazza0

@rawalexe (Member) commented Oct 9, 2024

Thank you for your contribution; I'll forward this request to the team. There are a few errors in the PR. Can you please try fixing them?

@rawalexe (Member)

Hello @sudeep-mohanty, I am just following up to ask whether you have had time to fix the build issues.

@sudeep-mohanty sudeep-mohanty marked this pull request as draft October 15, 2024 08:25
@sudeep-mohanty (Contributor, Author)

@rawalexe Yes! I shall work on the failures and also do some refactoring for an easier review process. For now, I've put this PR in draft.

@sudeep-mohanty sudeep-mohanty force-pushed the feature/smp_granular_locks_v4 branch 9 times, most recently from 0159a4a to 5bf8c33 Compare October 30, 2024 10:26
@sudeep-mohanty sudeep-mohanty marked this pull request as ready for review October 30, 2024 10:34
@sudeep-mohanty sudeep-mohanty requested a review from a team as a code owner October 30, 2024 10:34
@ActoryOu (Member)

Hi @sudeep-mohanty,
Could you help check the CI failures?

@sudeep-mohanty sudeep-mohanty force-pushed the feature/smp_granular_locks_v4 branch from 5bf8c33 to 9f8acc7 Compare October 31, 2024 09:25
@sudeep-mohanty (Contributor, Author)

Hi @sudeep-mohanty, Could you help check CI failing?

Hi @ActoryOu, I've made some updates which should fix the CI failures. However, I could not understand why the link-verifier action fails; it seems more like a script failure to me, so that action will still fail. If you have more information on what is causing it, could you let me know and I shall fix it. Thanks.

@sudeep-mohanty (Contributor, Author)

sudeep-mohanty commented Nov 1, 2024

Created a PR to fix the CI failure - FreeRTOS/FreeRTOS#1292

@sudeep-mohanty sudeep-mohanty force-pushed the feature/smp_granular_locks_v4 branch 4 times, most recently from 70eb466 to edc1c98 Compare November 4, 2024 15:38

sonarqubecloud bot commented Nov 4, 2024

@sudeep-mohanty sudeep-mohanty marked this pull request as draft November 15, 2024 14:25
@sudeep-mohanty sudeep-mohanty force-pushed the feature/smp_granular_locks_v4 branch from c9eabd1 to 97a6841 Compare January 14, 2025 10:50
@sudeep-mohanty sudeep-mohanty marked this pull request as ready for review January 14, 2025 14:21
@sudeep-mohanty sudeep-mohanty force-pushed the feature/smp_granular_locks_v4 branch from 97a6841 to 9975cb2 Compare January 15, 2025 10:42
@chinglee-iot (Member) left a comment

Early feedback for this PR.

Some nits can be improved separately:

  • Review the SMP comments and use the following terms where the granular lock feature has a different implementation:
    • SMP with granular lock
    • SMP without granular lock

For example,

#ifndef portRELEASE_TASK_LOCK
    #if ( ( portUSING_GRANULAR_LOCKS == 1 ) || ( configNUMBER_OF_CORES == 1 ) )
        #define portRELEASE_TASK_LOCK( xCoreID )
    #else
        #error portRELEASE_TASK_LOCK is required in SMP without granular lock feature
    #endif
#endif /* portRELEASE_TASK_LOCK */
  • egLOCK/UNLOCK and egENTER/EXIT_CRITICAL macros are used. I am in favor of not using the abbreviation "eg" here. We can consider using full names, for example the following macros or other macros with full names.
#define event_groupLOCK
#define event_groupUNLOCK
#define event_groupENTER_CRITICAL
#define event_groupEXIT_CRITICAL

I will continue to review the following files and update my review suggestions again.

  • queue.c
  • stream_buffer.c
  • tasks.c

@sudeep-mohanty sudeep-mohanty force-pushed the feature/smp_granular_locks_v4 branch from 9975cb2 to 7dae269 Compare January 21, 2025 07:52
@chinglee-iot (Member) left a comment

Updated review suggestions for the following files:

  • queue.c
  • event_groups.c

Overall review suggestions:

  • portGET/RELEASE_SPINLOCK should be implemented in a thread-safe manner.
  • Consider adding the core ID as a parameter to the following port macros for performance reasons:
    • portENTER_CRITICAL_DATA_GROUP
    • portEXIT_CRITICAL_DATA_GROUP
    • portENTER_CRITICAL_DATA_GROUP_FROM_ISR
    • portEXIT_CRITICAL_DATA_GROUP_FROM_ISR
  • The critical nesting count is used across all the data groups, including tasks.c. Can we move the portXXX_DATA_GROUP implementation back into the kernel and use portXXX_CRITICAL_NESTING_COUNT in the implementation?
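The shared-nesting-count point above can be sketched as follows. The names (uxCriticalNesting, the fake interrupt flag) are assumptions for illustration only; per-core indexing and the data group's spinlock handling are omitted:

```c
/* Illustrative per-core state; real code would index these by core ID
 * and use portDISABLE/ENABLE_INTERRUPTS() instead of a flag. */
static unsigned int uxCriticalNesting = 0U;
static int xInterruptsEnabled = 1;

static void vDataGroupEnterCritical( void )
{
    /* Only the outermost enter disables interrupts. */
    if( uxCriticalNesting == 0U )
    {
        xInterruptsEnabled = 0;
    }
    uxCriticalNesting++;
    /* ...take the data group's spinlock here... */
}

static void vDataGroupExitCritical( void )
{
    /* ...release the data group's spinlock here... */
    uxCriticalNesting--;
    /* Only the outermost exit re-enables interrupts, so nested
     * critical sections across different data groups stay balanced. */
    if( uxCriticalNesting == 0U )
    {
        xInterruptsEnabled = 1;
    }
}
```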

@sudeep-mohanty sudeep-mohanty force-pushed the feature/smp_granular_locks_v4 branch from 7dae269 to 4cf3837 Compare January 23, 2025 10:05
@chinglee-iot (Member) left a comment

Updated with partial review suggestions for the tasks.c file, and a correction to a previous review suggestion:

  • Adding the core ID to the portENTER/EXIT_CRITICAL_DATA_GROUP parameters doesn't make sense, since the core ID is only valid while the task's executing core remains unchanged.

@sudeep-mohanty sudeep-mohanty force-pushed the feature/smp_granular_locks_v4 branch from bf1d964 to d3784c3 Compare February 4, 2025 09:50
Dazza0 and others added 10 commits February 26, 2025 12:08
…herit()

xTaskPriorityInherit() is called inside a critical section from queue.c. This
commit moves the critical section into xTaskPriorityInherit().

Co-authored-by: Sudeep Mohanty <[email protected]>
Changed xPreemptionDisable to be a count rather than a pdTRUE/pdFALSE flag. This
allows nested calls to vTaskPreemptionEnable(), where a yield only occurs when
xPreemptionDisable is 0.

Co-authored-by: Sudeep Mohanty <[email protected]>
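The counted scheme described in this commit message can be sketched as below; MiniTCB_t and the function names are illustrative stand-ins, not the kernel's real code:

```c
/* Sketch of xPreemptionDisable as a nesting count. */
typedef struct
{
    unsigned int xPreemptionDisable; /* 0 means preemption is allowed   */
    int xYieldPending;               /* a yield arrived while disabled  */
} MiniTCB_t;

static void vPreemptionDisable( MiniTCB_t * pxTCB )
{
    /* Nested calls simply increment the count. */
    pxTCB->xPreemptionDisable++;
}

/* Returns 1 when the outermost enable should trigger the deferred yield. */
static int xPreemptionEnable( MiniTCB_t * pxTCB )
{
    pxTCB->xPreemptionDisable--;

    if( ( pxTCB->xPreemptionDisable == 0U ) && ( pxTCB->xYieldPending != 0 ) )
    {
        pxTCB->xYieldPending = 0;
        return 1; /* yield only when the count has dropped back to 0 */
    }

    return 0;
}
```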
Adds the required checks for granular locking port macros.

Port Config:

- portUSING_GRANULAR_LOCKS to enable granular locks
- portCRITICAL_NESTING_IN_TCB should be disabled

Granular Locking Port Macros:

- Spinlocks
        - portSPINLOCK_TYPE
        - portINIT_SPINLOCK( pxSpinlock )
        - portINIT_SPINLOCK_STATIC
- Locking
        - portGET_SPINLOCK()
        - portRELEASE_SPINLOCK()

Co-authored-by: Sudeep Mohanty <[email protected]>
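A hypothetical port-layer fragment satisfying the macros listed above might look like the following, using a C11 test-and-set lock. Real ports use architecture-specific instructions and may also track the owning core; this is only a sketch:

```c
#include <stdatomic.h>

/* Hypothetical port spinlock type backing portSPINLOCK_TYPE. */
typedef struct { atomic_flag xLock; } PortSpinlock_t;

#define portUSING_GRANULAR_LOCKS      1
#define portCRITICAL_NESTING_IN_TCB   0
#define portSPINLOCK_TYPE             PortSpinlock_t

/* Runtime and static initialization leave the lock in the clear state. */
#define portINIT_SPINLOCK( pxSpinlock )    atomic_flag_clear( &( pxSpinlock )->xLock )
#define portINIT_SPINLOCK_STATIC           { ATOMIC_FLAG_INIT }

/* Busy-wait until the flag was previously clear; xCoreID is unused in
 * this sketch but a real port may record it for recursion checks. */
#define portGET_SPINLOCK( xCoreID, pxSpinlock ) \
    do { while( atomic_flag_test_and_set( &( pxSpinlock )->xLock ) ) { } } while( 0 )

#define portRELEASE_SPINLOCK( xCoreID, pxSpinlock ) \
    atomic_flag_clear( &( pxSpinlock )->xLock )
```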
- Updated prvCheckForRunStateChange() for granular locks
- Updated vTaskSuspendAll() and xTaskResumeAll()
    - Now holds the xTaskSpinlock during kernel suspension
    - Increments/decrements xPreemptionDisable. Only yields when 0, thus allowing
    for nested suspensions across different data groups

Co-authored-by: Sudeep Mohanty <[email protected]>
Updated critical section macros with granular locks.

Some tasks.c API relied on their callers to enter critical sections. This
assumption no longer works under granular locking. Critical sections added to
the following functions:

- `vTaskInternalSetTimeOutState()`
- `xTaskIncrementTick()`
- `vTaskSwitchContext()`
- `xTaskRemoveFromEventList()`
- `vTaskInternalSetTimeOutState()`
- `eTaskConfirmSleepModeStatus()`
- `xTaskPriorityDisinherit()`
- `pvTaskIncrementMutexHeldCount()`

Added missing suspensions to the following functions:

- `vTaskPlaceOnEventList()`
- `vTaskPlaceOnUnorderedEventList()`
- `vTaskPlaceOnEventListRestricted()`

Fixed the locking in vTaskSwitchContext()

vTaskSwitchContext() must acquire both kernel locks, viz., the task lock and
the ISR lock. This is because vTaskSwitchContext() can be called from
either task context or ISR context. Also, vTaskSwitchContext() must not
alter the interrupt state prematurely.

Co-authored-by: Sudeep Mohanty <[email protected]>
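The both-locks rule in the commit message above can be illustrated with a fixed acquisition order, the standard way to avoid deadlock when two locks must be held together. The names and the locking primitive below are stand-ins, not the kernel's:

```c
#include <stdatomic.h>

typedef struct { atomic_flag xFlag; } KLock_t;

static KLock_t xTaskLock, xISRLock;
static int iSwitchCount = 0;

static void vGet( KLock_t * pxLock )
{
    while( atomic_flag_test_and_set_explicit( &( pxLock->xFlag ),
                                              memory_order_acquire ) )
    {
    }
}

static void vRelease( KLock_t * pxLock )
{
    atomic_flag_clear_explicit( &( pxLock->xFlag ), memory_order_release );
}

/* Sketch: a context switch takes both kernel locks in a fixed order
 * (task lock first, then ISR lock) so task-context and ISR-context
 * callers can never deadlock against each other. Interrupt state is
 * left to the caller, as the commit notes. */
static void vSwitchContextSketch( void )
{
    vGet( &xTaskLock );      /* always first  */
    vGet( &xISRLock );       /* always second */
    iSwitchCount++;          /* ...select the next task here... */
    vRelease( &xISRLock );
    vRelease( &xTaskLock );
}
```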
Updated queue.c to use granular locking

- Added xTaskSpinlock and xISRSpinlock
- Replaced critical section macros with data group critical section macros
such as taskENTER/EXIT_CRITICAL/_FROM_ISR() with queueENTER/EXIT_CRITICAL_FROM_ISR().
- Added vQueueEnterCritical/FromISR() and vQueueExitCritical/FromISR()
  which map to the data group critical section macros.
- Added prvLockQueueForTasks() and prvUnlockQueueForTasks() as the granular locking equivalents
to prvLockQueue() and prvUnlockQueue() respectively

Co-authored-by: Sudeep Mohanty <[email protected]>
Updated event_groups.c to use granular locking

- Added xTaskSpinlock and xISRSpinlock
- Replaced critical section macros with data group critical section macros
such as taskENTER/EXIT_CRITICAL/_FROM_ISR() with event_groupsENTER/EXIT_CRITICAL/_FROM_ISR().
- Added vEventGroupsEnterCritical/FromISR() and
  vEventGroupsExitCritical/FromISR() functions that map to the data group
critical section macros.
- Added prvLockEventGroupForTasks() and prvUnlockEventGroupForTasks() to suspend the event
group when executing non-deterministic code.
- xEventGroupSetBits() and vEventGroupDelete() access the kernel data group
directly. Thus, added vTaskSuspendAll()/xTaskResumeAll() to these functions.

Co-authored-by: Sudeep Mohanty <[email protected]>
Updated stream_buffer.c to use granular locking

- Added xTaskSpinlock and xISRSpinlock
- Replaced critical section macros with data group critical section macros
such as taskENTER/EXIT_CRITICAL/_FROM_ISR() with sbENTER/EXIT_CRITICAL_FROM_ISR().
- Added vStreambuffersEnterCritical/FromISR() and
  vStreambuffersExitCritical/FromISR() to map to the data group critical
section macros.
- Added prvLockStreamBufferForTasks() and prvUnlockStreamBufferForTasks() to suspend the stream
buffer when executing non-deterministic code.

Co-authored-by: Sudeep Mohanty <[email protected]>
Updated timers.c to use granular locking

- Added xTaskSpinlock and xISRSpinlock
- Replaced critical section macros with data group critical section macros
such as taskENTER/EXIT_CRITICAL() with tmrENTER/EXIT_CRITICAL().
- Added vTimerEnterCritical() and vTimerExitCritical() to map to the
  data group critical section macros.

Co-authored-by: Sudeep Mohanty <[email protected]>
@sudeep-mohanty sudeep-mohanty force-pushed the feature/smp_granular_locks_v4 branch from d3784c3 to 0847cd0 Compare February 26, 2025 11:22
Comment on lines +5302 to +5304
#if ( configUSE_TASK_PREEMPTION_DISABLE == 1 )
|| ( ( taskTASK_IS_RUNNING( pxCurrentTCBs[ xCoreID ] ) ) && ( pxCurrentTCBs[ xCoreID ]->xPreemptionDisable > 0U ) )
#endif
Member:

Please help me clarify the following sequence:

  • Task A is running with a data group lock held.
  • Task B suspends Task A; prvYieldCore( xTaskACore ) is called.
  • Task A is requested to yield.
  • Task A's xTaskRunState is set to taskTASK_SCHEDULED_TO_YIELD.
  • Task A runs xTaskSwitchContext(). In this function, the run state is taskTASK_SCHEDULED_TO_YIELD, which results in taskTASK_IS_RUNNING being false.

Contributor Author:

Hi @chinglee-iot,
After reviewing the design, I believe there’s a potential issue, as you’ve rightly pointed out: when a task disables preemption, it may still be susceptible to state changes initiated from the other core. The current design does not appear to account for this scenario.

To address this, we should consider reverting to the earlier approach where the scheduler is suspended entirely, rather than just disabling task preemption. This would offer stronger guarantees against concurrent state modifications across cores.

That said, I'll also explore whether the current design can be adapted to address this limitation. If you have any other ideas or suggestions, please feel free to share them; happy to discuss further.

Member:

I would like to discuss the following approach:

  1. Check the preemption disable status in prvYieldCore(). If the core is running a task with preemption disabled, the yield request is delayed until preemption is enabled.
  2. Add an assertion in vTaskSwitchContext(). We should not select a task for a core which is running a task with preemption disabled.

@@ -6609,7 +6798,9 @@ static void prvResetNextTaskUnblockTime( void )
else
{
#if ( configNUMBER_OF_CORES > 1 )
taskENTER_CRITICAL();
#if ( ( portUSING_GRANULAR_LOCKS == 1 ) )
Member:

It appears that entering the critical section is only required when granular locking is enabled, which changes the logic for non-granular lock SMP. Could you please help verify the intention behind this design?

@@ -4911,7 +4990,11 @@ BaseType_t xTaskIncrementTick( void )
#if ( configNUMBER_OF_CORES == 1 )
{
/* For single core the core ID is always 0. */
if( xYieldPendings[ 0 ] != pdFALSE )
if( xYieldPendings[ 0 ] != pdFALSE
#if ( configUSE_TASK_PREEMPTION_DISABLE == 1 )
Member:

configUSE_TASK_PREEMPTION_DISABLE is not available in single-core FreeRTOS. We can remove the condition here.


/* Lock the kernel data group as we are about to access its members */
UBaseType_t uxSavedInterruptStatus;

if( portCHECK_IF_IN_ISR() == pdTRUE )
Member:

Let's consider splitting this function into two functions:

  • xTaskRemoveFromEventListSafe()
  • xTaskRemoveFromEventListFromISRSafe()

Both of the functions above call xTaskRemoveFromEventList(). Then we don't have to depend on the portCHECK_IF_IN_ISR() function.

/* If the mutex is taken by an interrupt, the mutex holder is NULL. Priority
* inheritance is not applied in this scenario. */
if( pxMutexHolder != NULL )
kernelENTER_CRITICAL();
Member:

Would like to confirm that the enter/exit critical section here is only required for the granular locks configuration.

@@ -6842,88 +7049,95 @@ static void prvResetNextTaskUnblockTime( void )

traceENTER_vTaskPriorityDisinheritAfterTimeout( pxMutexHolder, uxHighestPriorityWaitingTask );

if( pxMutexHolder != NULL )
kernelENTER_CRITICAL();
Member:

Would like to confirm that the enter/exit critical section here is only required for the granular locks configuration.

@chinglee-iot (Member) left a comment

We still have a nested data group critical section problem that needs to be addressed in the implementation.

Comment on lines +287 to +292
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#define queueENTER_CRITICAL( pxQueue ) do { ( void ) pxQueue; taskENTER_CRITICAL(); } while( 0 )
#define queueENTER_CRITICAL_FROM_ISR( pxQueue ) do { ( void ) pxQueue; taskENTER_CRITICAL_FROM_ISR(); } while( 0 )
#define queueEXIT_CRITICAL( pxQueue ) do { ( void ) pxQueue; taskEXIT_CRITICAL(); } while( 0 )
#define queueEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxQueue ) do { ( void ) pxQueue; taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus ); } while( 0 )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
Member:

A do ... while block can't be used where a return value is needed. Let's remove the do while brackets here.

Suggested change
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#define queueENTER_CRITICAL( pxQueue ) do { ( void ) pxQueue; taskENTER_CRITICAL(); } while( 0 )
#define queueENTER_CRITICAL_FROM_ISR( pxQueue ) do { ( void ) pxQueue; taskENTER_CRITICAL_FROM_ISR(); } while( 0 )
#define queueEXIT_CRITICAL( pxQueue ) do { ( void ) pxQueue; taskEXIT_CRITICAL(); } while( 0 )
#define queueEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxQueue ) do { ( void ) pxQueue; taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus ); } while( 0 )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#define queueENTER_CRITICAL( pxQueue ) taskENTER_CRITICAL()
#define queueENTER_CRITICAL_FROM_ISR( pxQueue ) taskENTER_CRITICAL_FROM_ISR()
#define queueEXIT_CRITICAL( pxQueue ) taskEXIT_CRITICAL()
#define queueEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxQueue ) taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */

Comment on lines +83 to +88
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#define event_groupsENTER_CRITICAL( pxEventBits ) do { ( void ) pxEventBits; taskENTER_CRITICAL(); } while( 0 )
#define event_groupsENTER_CRITICAL_FROM_ISR( pxEventBits ) do { ( void ) pxEventBits; taskENTER_CRITICAL_FROM_ISR(); } while( 0 )
#define event_groupsEXIT_CRITICAL( pxEventBits ) do { ( void ) pxEventBits; taskEXIT_CRITICAL(); } while( 0 )
#define event_groupsEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxEventBits ) do { ( void ) pxEventBits; taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus ); } while( 0 )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
Member:

A do ... while block can't be used where a return value is needed. Let's remove the do while brackets here.

Suggested change
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#define event_groupsENTER_CRITICAL( pxEventBits ) do { ( void ) pxEventBits; taskENTER_CRITICAL(); } while( 0 )
#define event_groupsENTER_CRITICAL_FROM_ISR( pxEventBits ) do { ( void ) pxEventBits; taskENTER_CRITICAL_FROM_ISR(); } while( 0 )
#define event_groupsEXIT_CRITICAL( pxEventBits ) do { ( void ) pxEventBits; taskEXIT_CRITICAL(); } while( 0 )
#define event_groupsEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxEventBits ) do { ( void ) pxEventBits; taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus ); } while( 0 )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#define event_groupsENTER_CRITICAL( pxEventBits ) taskENTER_CRITICAL()
#define event_groupsENTER_CRITICAL_FROM_ISR( pxEventBits ) taskENTER_CRITICAL_FROM_ISR()
#define event_groupsEXIT_CRITICAL( pxEventBits ) taskEXIT_CRITICAL()
#define event_groupsEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxEventBits ) taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */

Comment on lines +70 to +74
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#define sbENTER_CRITICAL( pxEventBits ) do { ( void ) pxStreamBuffer; taskENTER_CRITICAL(); } while( 0 )
#define sbENTER_CRITICAL_FROM_ISR( pxEventBits ) do { ( void ) pxStreamBuffer; taskENTER_CRITICAL_FROM_ISR(); } while( 0 )
#define sbEXIT_CRITICAL( pxEventBits ) do { ( void ) pxStreamBuffer; taskEXIT_CRITICAL(); } while( 0 )
#define sbEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxStreamBuffer ) do { ( void ) pxStreamBuffer; taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus ); } while( 0 )
Member:

Suggested change
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#define sbENTER_CRITICAL( pxEventBits ) do { ( void ) pxStreamBuffer; taskENTER_CRITICAL(); } while( 0 )
#define sbENTER_CRITICAL_FROM_ISR( pxEventBits ) do { ( void ) pxStreamBuffer; taskENTER_CRITICAL_FROM_ISR(); } while( 0 )
#define sbEXIT_CRITICAL( pxEventBits ) do { ( void ) pxStreamBuffer; taskEXIT_CRITICAL(); } while( 0 )
#define sbEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxStreamBuffer ) do { ( void ) pxStreamBuffer; taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus ); } while( 0 )
#else /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#define sbENTER_CRITICAL( pxEventBits ) taskENTER_CRITICAL()
#define sbENTER_CRITICAL_FROM_ISR( pxEventBits ) taskENTER_CRITICAL_FROM_ISR()
#define sbEXIT_CRITICAL( pxEventBits ) taskEXIT_CRITICAL()
#define sbEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus, pxStreamBuffer ) taskEXIT_CRITICAL_FROM_ISR( uxSavedInterruptStatus )

Comment on lines +369 to +382
/* Macros to take and release kernel spinlocks */
#if ( configNUMBER_OF_CORES > 1 )
#if ( portUSING_GRANULAR_LOCKS == 1 )
#define taskGET_TASK_LOCK( xCoreID ) portGET_SPINLOCK( xCoreID, &xTaskSpinlock )
#define taskRELEASE_TASK_LOCK( xCoreID ) portRELEASE_SPINLOCK( xCoreID, &xTaskSpinlock )
#define taskGET_ISR_LOCK( xCoreID ) portGET_SPINLOCK( xCoreID, &xISRSpinlock )
#define taskRELEASE_ISR_LOCK( xCoreID ) portRELEASE_SPINLOCK( xCoreID, &xISRSpinlock )
#else
#define taskGET_TASK_LOCK( xCoreID ) portGET_TASK_LOCK( xCoreID )
#define taskRELEASE_TASK_LOCK( xCoreID ) portRELEASE_TASK_LOCK( xCoreID )
#define taskGET_ISR_LOCK( xCoreID ) portGET_ISR_LOCK( xCoreID )
#define taskRELEASE_ISR_LOCK( xCoreID ) portRELEASE_ISR_LOCK( xCoreID )
#endif /* #if ( portUSING_GRANULAR_LOCKS == 1 ) */
#endif /* #if ( configNUMBER_OF_CORES > 1 ) */
Member:

To maintain consistency in our granular lock schema, we can consider using the "kernel" prefix for related macros, such as kernelGET/RELEASE_TASK/ISR_LOCK.

@@ -4033,7 +4081,7 @@ BaseType_t xTaskResumeAll( void )
configASSERT( uxSchedulerSuspended != 0U );

uxSchedulerSuspended = ( UBaseType_t ) ( uxSchedulerSuspended - 1U );
portRELEASE_TASK_LOCK( xCoreID );
taskRELEASE_TASK_LOCK( xCoreID );
Member:

To align with the taskRELEASE_TASK_LOCK definition:

Suggested change
taskRELEASE_TASK_LOCK( xCoreID );
#if ( configNUMBER_OF_CORES > 1 )
taskRELEASE_TASK_LOCK( xCoreID );
#endif

Comment on lines +529 to +559
#ifndef portENTER_CRITICAL_DATA_GROUP

#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
#error portENTER_CRITICAL_DATA_GROUP is required for SMP with granular locking feature enabled
#endif

#endif

#ifndef portEXIT_CRITICAL_DATA_GROUP

#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
#error portEXIT_CRITICAL_DATA_GROUP is required for SMP with granular locking feature enabled
#endif

#endif

#ifndef portENTER_CRITICAL_DATA_GROUP_FROM_ISR

#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
#error portENTER_CRITICAL_DATA_GROUP_FROM_ISR is required for SMP with granular locking feature enabled
#endif

#endif

#ifndef portEXIT_CRITICAL_DATA_GROUP_FROM_ISR

#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
#error portEXIT_CRITICAL_DATA_GROUP_FROM_ISR is required for SMP with granular locking feature enabled
#endif

#endif
Member:

It looks like we don't need the data group macros now.

@@ -7258,6 +7468,28 @@ static void prvResetNextTaskUnblockTime( void )
#endif /* #if ( configNUMBER_OF_CORES > 1 ) */
/*-----------------------------------------------------------*/

#if ( configNUMBER_OF_CORES > 1 )
Member:

Suggested change
#if ( configNUMBER_OF_CORES > 1 )
#if ( configNUMBER_OF_CORES > 1 ) && ( portUSING_GRANULAR_LOCKS == 1 )

@sudeep-mohanty sudeep-mohanty marked this pull request as draft March 24, 2025 14:32
@chinglee-iot (Member) left a comment

Other review suggestions regarding preemption disable and entering a kernel critical section from within a nested critical section:

  • Consider setting the xYieldPendings flag in prvYieldCore if the core is running a task with preemption disabled.
  • In vTaskPreemptionEnable, the task should yield when xYieldPendings is set to 1.
  • In vTaskExitCritical, xYieldCurrentTask should take preemption disable into account. It should be set only when preemption is not disabled.

{
mtCOVERAGE_TEST_MARKER();
}
queueUNLOCK( pxQueue, pdTRUE );
@chinglee-iot (Member) commented Mar 26, 2025:

taskYIELD_WITHIN_API() is still required here. Otherwise, the scheduler won't switch out the task waiting for queue receive.
We will need to add the task yield in the prvUnlockQueueForTasks implementation.

/*-----------------------------------------------------------*/

#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
static void prvUnlockQueueForTasks( Queue_t * const pxQueue )
Member:

Is it possible to reuse prvUnlockQueue and do the following instead of creating a new function?

#define queueUNLOCK( pxQueue, xYieldAPI ) \
    do{ \
        prvUnlockQueue( pxQueue ); \
        /* Release the previously held task spinlock */ \
        portRELEASE_SPINLOCK( portGET_CORE_ID(), &( pxQueue->xTaskSpinlock ) ); \
        /* Re-enable preemption */ \
        vTaskPreemptionEnable( NULL ); \
    } while( 0 )

* When the tasks unlocks the queue, all pended access attempts are handled.
*/
#if ( ( portUSING_GRANULAR_LOCKS == 1 ) && ( configNUMBER_OF_CORES > 1 ) )
#define queueLOCK( pxQueue ) prvLockQueueForTasks( pxQueue )
Member:

Can we consider reusing prvLockQueue for the implementation here? Then we don't need the prvUnlockQueueForTasks function.

#define queueLOCK( pxQueue ) \
    do { \
        vTaskPreemptionDisable( NULL ); \
        prvLockQueue( pxQueue ); \
        portGET_SPINLOCK( portGET_CORE_ID(), &( pxQueue->xTaskSpinlock ) ); \
    } while( 0 )

Successfully merging this pull request may close these issues.

Granular Lock for SMP