High performance setup
This is the complete opposite of the low-memory setup. If you want to further increase ASF performance (in terms of CPU speed), follow the tips below, which may come at the cost of increased memory usage.
ASF already tries to prioritize performance in its default balance, so there isn't much room left for improvement, but it's not like you're completely out of options either. Keep in mind, however, that these options are disabled by default, which means they do not guarantee the best balance in the majority of cases; therefore, you should decide for yourself whether the memory increase they cause is acceptable to you.
The tricks below involve a serious increase in memory usage and startup time, and should therefore be used with caution.
The recommended way of applying these settings is by setting DOTNET_ environment properties. Of course, you could also use other methods, e.g. runtimeconfig.json, but some settings are impossible to set this way, and on top of that ASF will replace your custom runtimeconfig.json with its own on the next update, therefore we recommend environment properties, which you can easily set prior to launching the process.
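For illustration, here's a minimal shell sketch of setting such a property prior to launching the process, assuming the Linux build launched as ./ArchiSteamFarm (as in the examples further below); DOTNET_gcServer is just one of the properties described later on this page:
# Apply the property to a single ASF run only
DOTNET_gcServer=1 ./ArchiSteamFarm
# Or export it, so that it applies to every process started from this shell session
export DOTNET_gcServer=1
./ArchiSteamFarm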
.NET runtime allows you to tweak the garbage collector in a lot of ways, effectively fine-tuning the GC process according to your needs. We've documented below the properties that are especially important in our opinion.
DOTNET_gcServer configures whether the application uses workstation garbage collection or server garbage collection.
You can read the exact specifics of server GC in fundamentals of garbage collection.
ASF uses workstation garbage collection by default. This is mainly because of a good balance between memory usage and performance, which is more than enough for just a few bots, as usually a single concurrent background GC thread is fast enough to handle the entire memory allocated by ASF.
However, today we have a lot of CPU cores that ASF can greatly benefit from, by having a dedicated GC thread for each available CPU vCore. This can greatly improve performance during heavy ASF tasks such as parsing badge pages or the inventory, since every CPU vCore can help, as opposed to just 2 (main and GC). Server GC is recommended for machines with 3 or more CPU vCores, workstation GC is automatically forced if your machine has just 1 CPU vCore, and if you have exactly 2, you can consider trying both (results may vary).
Server GC itself does not result in a huge memory increase just by being active, but it has much bigger generation sizes, and is therefore far lazier when it comes to giving memory back to the OS. You may find yourself in a sweet spot where server GC increases performance significantly and you'd like to keep using it, but at the same time you can't afford the huge memory increase that comes with it. Luckily for you, there is a "best of both worlds" setting: using server GC with the GCLatencyLevel configuration property set to 0, which will still enable server GC, but limit generation sizes and focus more on memory. Alternatively, you might also experiment with another property, GCHeapHardLimitPercent, or even both of them at the same time.
However, if memory is not a problem for you (as the GC still takes your available memory into account and tweaks itself accordingly), it's a much better idea not to change those properties at all, achieving superior performance as a result.
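As a hedged sketch of the "best of both worlds" idea above (shell, Linux; the exact values are assumptions to experiment with rather than definitive recommendations, and .NET's GC-related environment variables are typically interpreted as hexadecimal numbers):
# Server GC, but with limited generation sizes and more focus on memory
export DOTNET_gcServer=1
export DOTNET_GCLatencyLevel=0
# Optionally also cap the GC heap at a percentage of total memory, e.g. 0x4B (hexadecimal) = 75%
export DOTNET_GCHeapHardLimitPercent=0x4B
./ArchiSteamFarm # or the build appropriate for your OS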
DOTNET_TieredPGO enables dynamic or tiered profile-guided optimization (PGO) in .NET 6 and later versions.
Disabled by default. In a nutshell, this will cause the JIT to spend more time analyzing ASF's code and its patterns in order to generate superior code optimized for your typical usage. If you want to learn more about this setting, visit performance improvements in .NET 6.
DOTNET_ReadyToRun configures whether the .NET Core runtime uses pre-compiled code for images with available ReadyToRun data. Disabling this option forces the runtime to JIT-compile framework code.
Enabled by default. Disabling this in combination with enabling DOTNET_TieredPGO allows you to extend tiered profile-guided optimization to the whole .NET platform, and not just ASF code.
DOTNET_TC_QuickJitForLoops configures whether the JIT compiler uses quick JIT on methods that contain loops. Enabling quick JIT for loops may improve startup performance. However, long-running loops can get stuck in less-optimized code for long periods.
Disabled by default. While the description doesn't make it obvious, enabling this will allow methods with loops to go through an additional compilation tier, which will allow DOTNET_TieredPGO to do a better job by analyzing their usage data.
You can enable selected properties by setting appropriate environment variables. For example, on Linux (shell):
export DOTNET_gcServer=1
export DOTNET_TieredPGO=1
export DOTNET_ReadyToRun=0
export DOTNET_TC_QuickJitForLoops=1
./ArchiSteamFarm # or the build appropriate for your OS
or on Windows (PowerShell):
$Env:DOTNET_gcServer=1
$Env:DOTNET_TieredPGO=1
$Env:DOTNET_ReadyToRun=0
$Env:DOTNET_TC_QuickJitForLoops=1
.\ArchiSteamFarm.exe # or the build appropriate for your OS
- Ensure that you're using the default value of OptimizationMode, which is MaxPerformance. This is by far the most important setting, as using the MinMemoryUsage value has dramatic effects on performance (a small verification sketch follows this list).
- Enable server GC. Server GC can be immediately seen as being active through a significant memory increase compared to workstation GC. This will spawn a GC thread for every CPU thread your machine has, in order to perform GC operations in parallel with maximum speed.
- If you can't afford the memory increase due to server GC, consider tweaking GCLatencyLevel and/or GCHeapHardLimitPercent to achieve "the best of both worlds". However, if your memory can afford it, then it's better to keep those at their defaults - server GC already tweaks itself during runtime and is smart enough to use less memory when your OS truly needs it.
- You can also consider trading longer startup time for increased optimization through additional tweaking of the other DOTNET_ properties explained above.
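Regarding the first point above, here's a minimal sketch for double-checking it, assuming the usual config/ASF.json location of the global config, a numeric enum value (0 being MaxPerformance, the default, and 1 being MinMemoryUsage) and that jq is available:
# Print the configured OptimizationMode, falling back to the default (0 = MaxPerformance) when it's not set
jq '.OptimizationMode // 0' config/ASF.json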
Applying the recommendations above allows you to achieve superior ASF performance that should be blazing fast even with hundreds or thousands of enabled bots. CPU should no longer be a bottleneck, as ASF is able to use your entire CPU power when needed, cutting the required time to the bare minimum. The next step would be CPU and RAM upgrades.







