Hi everyone,
I am currently facing a challenge in my WRF-Hydro research and would greatly appreciate your insights.
Recently, I have been studying soil moisture data assimilation to improve flood simulation accuracy in WRF-Hydro. After reading several papers on this topic, I attempted to replicate their methods. However, I observed that the difference in streamflow simulations before and after assimilation is minimal—typically only around 10 m³/s.
Here is my detailed workflow:
1. Update the soil moisture variables (SMC, SH2O, smc1, sh2ox1) in the two RESTART files, focusing solely on surface soil moisture, as suggested in the literature (see the sketch after this list).
2. In hydro.namelist, set rst_typ = 1 ("Overwrite LSM soil states from the high-res routing restart file").
3. Run the simulation, stop at fixed time intervals to output RESTART files, assimilate updates into these files, adjust the time settings in the namelist, and then restart the simulation. This cycle repeats until the flood event ends.
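For concreteness, here is a minimal sketch of how I apply step 1 with netCDF4-python. The file names, the `sm_analysis` array, and the `SMCMAX` cap are placeholders; the variable names are the ones listed above, and the soil-layer axis is located by dimension name rather than assumed:

```python
import numpy as np
from netCDF4 import Dataset

SMCMAX = 0.45  # assumed porosity cap; in practice take this from the soil parameter table

def overwrite_top_layer(path, var_names, analysis):
    """Overwrite the surface soil layer of each listed variable with the analysis field."""
    with Dataset(path, "r+") as nc:
        nc.set_auto_mask(False)  # work with plain arrays
        for name in var_names:
            var = nc.variables[name]
            data = var[:]
            soil_axes = [i for i, d in enumerate(var.dimensions) if "soil" in d.lower()]
            if soil_axes:
                # multi-layer variable (e.g. SMC, SH2O): index the soil axis at layer 0 (surface)
                idx = [slice(None)] * data.ndim
                idx[soil_axes[0]] = 0
                data[tuple(idx)] = np.minimum(analysis, SMCMAX)
            else:
                # per-layer variable (e.g. smc1, sh2ox1): the whole field is the surface layer
                data[:] = np.minimum(analysis, SMCMAX)
            var[:] = data

# sm_analysis: 2-D surface soil moisture analysis on the model grid (placeholder)
# overwrite_top_layer("RESTART.2020081600_DOMAIN1", ["SMC", "SH2O"], sm_analysis)
# overwrite_top_layer("HYDRO_RST.2020-08-16_00:00:00_DOMAIN1", ["smc1", "sh2ox1"], sm_analysis)
```

The idea is simply to replace the top-layer fields in both restart files with the same analysis, capped at porosity, while leaving the lower layers untouched.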
Despite these efforts, the assimilation appears ineffective. To debug, I ran two tests (a sketch of the perturbation is shown below):
- Increased all soil moisture variables by 0.1 in the RESTART files and ran a flood simulation without further updates. Result: a slight peak flow increase (~10 m³/s), but negligible changes elsewhere.
- Increased the soil moisture variables by 0.3. Result: a significant initial rise in simulated discharge, later converging with the baseline.
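The uniform-increment test looks roughly like this (again a sketch; file names are placeholders and `smcmax` is an assumed porosity). The printout is only a sanity check of how much of the field is already at the cap, since clipped cells cannot take the extra water:

```python
import numpy as np
from netCDF4 import Dataset

def perturb_soil_moisture(path, var_names, delta=0.1, smcmax=0.45):
    """Add `delta` to each listed variable and clip at `smcmax` (an assumed porosity)."""
    with Dataset(path, "r+") as nc:
        nc.set_auto_mask(False)  # work with plain arrays
        for name in var_names:
            var = nc.variables[name]
            bumped = np.clip(var[:] + delta, 0.0, smcmax)
            frac_capped = 100.0 * np.mean(bumped >= smcmax)
            print(f"{name}: {frac_capped:.1f}% of values clipped at the assumed porosity")
            var[:] = bumped

# perturb_soil_moisture("RESTART.2020081600_DOMAIN1", ["SMC", "SH2O"], delta=0.1)
# perturb_soil_moisture("HYDRO_RST.2020-08-16_00:00:00_DOMAIN1", ["smc1", "sh2ox1"], delta=0.1)
```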
However, real-world assimilation differences between observed and simulated soil moisture rarely exceed 0.1, which makes the results puzzling. Key questions:
1. Is my update method flawed?
2. Could the RESTART file update cycle (stop-modify-restart) introduce unintended resets?
3. Do the targeted variables (SMC, SH2O, etc.) truly affect runoff generation?
4. Is WRF-Hydro insensitive to realistic soil moisture changes?
5. Do soil hydraulic parameters (e.g., saturated hydraulic conductivity) or routing schemes (e.g., bucket vs. kinematic wave) limit sensitivity?
6. Are there overlooked thresholds (e.g., soil moisture saturation levels) that govern hydrological responses?
Any suggestions, references, or shared experiences would be invaluable. Thank you for your time and expertise!
Best regards,
li