Add SmearingType input #230

Open · mbercx opened this issue Oct 4, 2021 · 6 comments

mbercx commented Oct 4, 2021

One of the suggestions during the discussion of the DFT inputs subgroup was to add a new SmearingType input, through which the kind of smearing for the calculation can be specified. The current options from the Google Doc are (a minimal enum sketch follows the list):

  • Tetrahedron (so, no smearing -> Might not be considered a SmearingType?)
  • Gaussian
  • Fermi Dirac
  • Marzari-Vanderbilt-DeVita-Payne (aka "cold smearing")
  • Methfessel-Paxton
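
For concreteness, here is a minimal sketch of what such an enum could look like, mirroring the string-valued enums (e.g. ElectronicType) the package already uses. The member names and values below are assumptions for illustration, not a settled API:

```python
from enum import Enum

class SmearingType(Enum):
    """Possible smearing schemes for the electronic occupations (illustrative)."""
    NONE = 'none'                          # e.g. tetrahedron method / fixed occupations
    GAUSSIAN = 'gaussian'
    FERMI_DIRAC = 'fermi-dirac'
    COLD = 'cold'                          # Marzari-Vanderbilt-DeVita-Payne
    METHFESSEL_PAXTON = 'methfessel-paxton'
```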

A couple of remarks here:

  1. It's clear that the SmearingType is connected to the ElectronicType. E.g. for some codes, setting a SmearingType might only be sensible for ElectronicType.METAL. We might have to add some (code-specific) validation here (see the validation sketch after this list).
  2. As @sponce24's results have shown (and as one might expect), the smearing value and k-point density required to obtain a certain precision will depend on the SmearingType. This adds some more complexity in defining a protocol: should e.g. the "moderate" protocol define these settings for each smearing type?
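
A minimal sketch of what such validation could look like, assuming the SmearingType enum sketched above; the specific rule (insulators take no smearing) is only an example, not a decided policy:

```python
from aiida_common_workflows.common import ElectronicType  # existing enum in the package

def validate_smearing_type(electronic_type, smearing_type):
    """Raise if the requested smearing conflicts with the electronic type (illustrative rule)."""
    if electronic_type is ElectronicType.INSULATOR and smearing_type is not SmearingType.NONE:
        raise ValueError(
            f'smearing_type {smearing_type} is incompatible with electronic_type {electronic_type}'
        )
```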

sphuber commented Oct 4, 2021

My two cents:

  1. I wouldn't add tetrahedron as one of the enum values for SmearingType, as it quite literally means "no smearing"; that should simply be SmearingType.NONE. We have the NONE value for a few other enums as well, and it seems to make the most sense to me.
  2. Fully agree that we should add validation for arguments that are related and raise if they get conflicting values.
  3. I think a protocol should provide "coherent" defaults for all the different inputs, and the precise one should yield converged results. As you mention, this requires coordinating the smearing_type with the smearing_width and other parameters relevant for convergence (see the sketch after this list). If a user then overrides smearing_type with another type, I don't think it is the responsibility of the protocol to automatically adapt all the rest. If the user overrides certain parameters, we can no longer guarantee convergence, and conversely the user should no longer expect it. At most, we should document this clearly.
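
To illustrate the coupling described in point 3, a hedged sketch of protocol defaults; all keys, values and units are invented for illustration and not taken from any actual code plugin:

```python
PROTOCOLS = {
    'moderate': {
        'smearing_type': SmearingType.COLD,
        'smearing_width': 0.01,    # Ry; must be chosen together with the k-point density
        'kpoints_distance': 0.15,  # 1/Angstrom
    },
    'precise': {
        'smearing_type': SmearingType.COLD,
        'smearing_width': 0.005,   # a finer smearing generally needs a denser k-point mesh
        'kpoints_distance': 0.10,
    },
}
```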

d-wortmann commented

I would like to stress again my concern about exposing too many parameters in the commonWF interfaces.
I understand that such functionality can be useful for comparing the response of different codes to these parameters, and for making it simpler for "expert users" to modify parameters in different codes without looking into their details, but I think these advantages come with significant drawbacks:

  • My biggest concern is that parameters exposed in such a way are highlighted and might seem very important to users, even though they are not the most relevant convergence parameters.
  • If we add common WFs for more physical properties, we will encounter many more such "common" parameters. This will reduce the simplicity of the interface, and the sheer number of available parameters will give the strong impression that all relevant parameters of the calculations can actually be controlled through the common interface.
  • Many parameters should be varied consistently, and this can also be code dependent. E.g. different k-point integration schemes can interact with the smearing.
  • We already have parameters, such as the "ElectronicType", that are somewhat code specific. If I understood Alberto correctly, these also do not follow the basic idea of adjusting the smearing according to the expected system property, and this is also not something we usually do in FLEUR. We will end up with more such cases, which I fear will seriously reduce the usefulness of the commonWFs.

So in summary, I think that the very idea of having commonWFs as a tool to simplify the use of different codes (not to replace code-specific workflows) is somewhat orthogonal to exposing more than the required minimum of parameters.

On a positive note, I would like to push again the idea of extending the protocols instead. I think it would be really useful for users of DFT codes if we provided a hierarchy of protocols that lead to increasingly converged results in a code-specific manner. We could then perhaps augment these with the property to be calculated, to also handle this additional complexity, for example by defining a profile like "EOS@fast".
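
One conceivable shape for such property-qualified protocol names, purely as a toy sketch (the resolve_protocol helper and the fallback level are hypothetical):

```python
def resolve_protocol(name: str) -> tuple:
    """Split a qualified protocol name like 'EOS@fast' into (property, level)."""
    prop, _, level = name.partition('@')
    return prop, level or 'moderate'  # fall back to a default level if unqualified

assert resolve_protocol('EOS@fast') == ('EOS', 'fast')
```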


sphuber commented Oct 26, 2021

Thanks for the input @d-wortmann. I definitely see your point; it is clear that what we are trying to add is not trivial at all. If I understand you correctly, your point boils down to the fear that adding too many parameters will confuse the user as to what is really important, and will make the common workflows more difficult to use. If it is indeed mostly a question of appearance for you: if we were able to come up with a system where these more specific parameters are clearly marked as such and shown less prominently in the documentation, such that the main parameters get the main focus, would that improve things for you? I still have the nagging feeling that these kinds of extended common parameters are going to be useful in certain cases. The alternative of having to manually update the inputs of the builder is not going to be practical: the namespaces are potentially complex, and you would have to be an expert in all the codes to know exactly what to do (see the sketch below).
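
For contrast, a sketch of the two routes; the names follow aiida-common-workflows conventions, but the exact signatures and namespaces are assumptions here, not guaranteed to match the version under discussion:

```python
from aiida_common_workflows.common import ElectronicType
from aiida_common_workflows.plugins import WorkflowFactory

RelaxQuantumEspresso = WorkflowFactory('common_workflows.relax.quantum_espresso')

# Route 1: a common-interface parameter, translated consistently by the protocol.
builder = RelaxQuantumEspresso.get_input_generator().get_builder(
    structure=structure,  # a StructureData node, defined elsewhere
    engines=engines,      # dict specifying the code and job options, defined elsewhere
    protocol='moderate',
    electronic_type=ElectronicType.METAL,
)

# Route 2: manually editing the code-specific namespaces of the returned builder,
# which requires knowing each plugin's input structure in detail.
```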

d-wortmann commented

I think it is mostly a question of purpose and design. In my opinion, the common workflows should enable users to perform specific simulations with different codes for comparison purposes very easily. This implies that the parameters should be minimal. As soon as more complex studies beyond such a simple comparison are planned, code-specific settings will quickly become relevant, and thus I would urge not to give the impression that this can easily be done using the common workflows.

On the other hand, I of course see a clear value in the idea of having a common interface between codes to specify parameters. But I feel that this effort to "harmonize" input should be pushed forward by introducing conventions for the code-specific workflows and by agreeing on common ways to set these parameters, instead of introducing another layer in the form of the common workflows that does a "translation" from a "common" language into the code-specific requirements.


espenfl commented Oct 26, 2021

Yes, all in all I think this boils down to the general issue of scoping this project. For these things I think it is always a good approach to ask ourselves: what does the majority of the users want or need? But before asking that, we need to define what the user base is. We then quickly come to the discussion I have had many times when talking about AiiDA in the context of AiiDA-VASP: should we cater to the experienced or the novice user? Is it worthwhile to consider both at the same time? Maybe it would be best at some point to try to scope this project and come up with a clear definition and boundary conditions. And then a valid question to answer is: what is it not, more than what it is? Happy to participate in such a discussion.

One thing is for certain: we cannot do everything with the time we have, so limiting the scope will not only help users know what this project can do for them, but also make it easier for us developers.


bosonie commented Oct 26, 2021

Hi all, thanks a million for the discussion.
I want to make two points:

  1. The fact that we are using our common workflows for the verification project is maybe derailing the focus of our project a bit, but I can assure you that we will not leave behind the idea @d-wortmann expressed, which I believe is the real strong point of the project: providing a hierarchy of protocols that lead to increasingly converged results in a code-specific manner.
  2. In fact, also from the implementation point of view, we will create a separate class DFTCommonRelaxWorkChain that exposes more inputs (see the sketch below), while the current implementation of CommonRelaxWorkChain will remain and the protocols will be improved in the direction Daniel suggests.
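
A hedged sketch of the split described in point 2; the subclass, the exposed input, and the import path are assumptions about one possible implementation, not existing code:

```python
from aiida_common_workflows.workflows.relax.workchain import CommonRelaxWorkChain

class DFTCommonRelaxWorkChain(CommonRelaxWorkChain):
    """Expert-facing variant that exposes additional DFT-specific inputs."""

    @classmethod
    def define(cls, spec):
        super().define(spec)
        # Additional common DFT inputs would be declared here, for example:
        # spec.input('smearing_type', required=False)
```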

At least, this is my idea.
