Merge pull request #178 from JuDFTteam/release-0.11.3
🚀 Release `0.11.3`
janssenhenning authored Jul 14, 2022
2 parents 83b35b4 + 62a1b48 commit f0370ca
Showing 31 changed files with 2,140 additions and 304 deletions.
22 changes: 18 additions & 4 deletions .github/workflows/cd.yml
Original file line number Diff line number Diff line change
@@ -14,6 +14,19 @@ jobs:

runs-on: ubuntu-latest

strategy:
matrix:
include:
- name: docs
sphinx-options: ''
allow-failure: false
- name: docs-nitpicky
sphinx-options: '-nW'
allow-failure: true

name: ${{ matrix.name }}
continue-on-error: ${{ matrix.allow-failure }}

steps:
- uses: actions/checkout@v3

@@ -43,8 +56,9 @@ jobs:
- name: Build documentation
env:
READTHEDOCS: 'True'
+ SPHINXOPTS: ${{ matrix.sphinx-options }}
run: |
- SPHINXOPTS='-nW' make -C docs html
+ make -C docs html
pre-commit:
runs-on: ubuntu-latest
@@ -55,13 +69,13 @@ jobs:
include:
- name: pre-commit-errors
skip-hooks: pylint-warnings
- strict: false
+ allow-failure: false
- name: pre-commit-warnings
skip-hooks: pylint-errors
- strict: true
+ allow-failure: true

name: ${{ matrix.name }}
- continue-on-error: ${{ matrix.strict }}
+ continue-on-error: ${{ matrix.allow-failure }}

steps:
- uses: actions/checkout@v3
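For context, the combined effect of the matrix addition and the `SPHINXOPTS` change is a two-variant docs build where only the lenient variant is required and the nitpicky `-nW` build is advisory; the per-entry `continue-on-error` value is what keeps a nitpicky failure from failing the workflow. A condensed sketch of the resulting job, assembled from the hunks above (step details abbreviated):

```yaml
jobs:
  docs:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - name: docs
            sphinx-options: ''
            allow-failure: false
          - name: docs-nitpicky
            sphinx-options: '-nW'
            allow-failure: true
    name: ${{ matrix.name }}
    # entries with allow-failure: true may fail without failing the workflow
    continue-on-error: ${{ matrix.allow-failure }}
    steps:
      - uses: actions/checkout@v3
      - name: Build documentation
        env:
          READTHEDOCS: 'True'
          SPHINXOPTS: ${{ matrix.sphinx-options }}
        run: make -C docs html
```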
22 changes: 18 additions & 4 deletions .github/workflows/ci.yml
@@ -8,6 +8,19 @@ jobs:

runs-on: ubuntu-latest

strategy:
matrix:
include:
- name: docs
sphinx-options: ''
allow-failure: false
- name: docs-nitpicky
sphinx-options: '-nW'
allow-failure: true

name: ${{ matrix.name }}
continue-on-error: ${{ matrix.allow-failure }}

steps:
- uses: actions/checkout@v3

@@ -37,8 +50,9 @@ jobs:
- name: Build documentation
env:
READTHEDOCS: 'True'
+ SPHINXOPTS: ${{ matrix.sphinx-options }}
run: |
- SPHINXOPTS='-nW' make -C docs html
+ make -C docs html
pre-commit:
runs-on: ubuntu-latest
@@ -49,13 +63,13 @@ jobs:
include:
- name: pre-commit-errors
skip-hooks: pylint-warnings
- strict: false
+ allow-failure: false
- name: pre-commit-warnings
skip-hooks: pylint-errors
- strict: true
+ allow-failure: true

name: ${{ matrix.name }}
- continue-on-error: ${{ matrix.strict }}
+ continue-on-error: ${{ matrix.allow-failure }}

steps:
- uses: actions/checkout@v3
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -5,7 +5,7 @@ ci:

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
- rev: v4.2.0
+ rev: v4.3.0
hooks:
- id: double-quote-string-fixer
types: [python]
@@ -42,7 +42,7 @@ repos:
]

- repo: https://github.com/asottile/pyupgrade
- rev: v2.32.0
+ rev: v2.34.0
hooks:
- id: pyupgrade
args: [
15 changes: 15 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,20 @@
# Changelog

## latest
[full changelog](https://github.com/JuDFTteam/masci-tools/compare/v0.11.3...develop)

Nothing here yet

## v.0.11.3
[full changelog](https://github.com/JuDFTteam/masci-tools/compare/v0.11.2...v0.11.3)

### Improvements
- Changes to KKR plotting routine `dispersionplot` for compatibility with AiiDA v2.0
- Connecting vectors for intersite `GreensFunction` are now saved in Angstroem for better interoperability with ase, pymatgen, and AiiDA

### For Developers
- Relaxed CI requirements for docs build. Nitpicky mode is no longer required to pass but is treated as a hint to look into the warnings
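Fleur-side quantities are natively given in Bohr radii, so storing the connecting vectors in Angstroem amounts to scaling by the Bohr radius. A minimal sketch of such a conversion; the function name and the exact constant used by masci-tools are illustrative assumptions, not the actual implementation:

```python
BOHR_TO_ANGSTROM = 0.529177210903  # CODATA 2018 Bohr radius in Angstroem (assumed constant)

def vectors_to_angstrom(vectors_bohr):
    """Convert connecting vectors given in Bohr radii to Angstroem."""
    return [[component * BOHR_TO_ANGSTROM for component in vector]
            for vector in vectors_bohr]

converted = vectors_to_angstrom([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
```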

## v.0.11.2
[full changelog](https://github.com/JuDFTteam/masci-tools/compare/v0.11.1...v0.11.2)

2 changes: 1 addition & 1 deletion docs/source/user_guide/hdf5_parser.md
@@ -67,7 +67,7 @@ The recipe for extracting bandstructure information from the `banddos.hdf` looks
```{literalinclude} ../../../masci_tools/io/parsers/hdf5/recipes.py
:language: python
:linenos: true
- :lines: 170-323
+ :pyobject: bands_recipe_format
```

Each recipe can define the `datasets` and `attributes` entry (if one is not defined,
2 changes: 1 addition & 1 deletion masci_tools/__init__.py
@@ -21,7 +21,7 @@
__copyright__ = ('Copyright (c), Forschungszentrum Jülich GmbH, IAS-1/PGI-1, Germany. '
'All rights reserved.')
__license__ = 'MIT license, see LICENSE.txt file.'
- __version__ = '0.11.2'
+ __version__ = '0.11.3'
__authors__ = 'The JuDFT team. Also see AUTHORS.txt file.'

logging.getLogger(__name__).addHandler(logging.NullHandler())
2 changes: 1 addition & 1 deletion masci_tools/io/cif2inp_ase.py
@@ -28,7 +28,7 @@
Binv = np.linalg.inv(structure.cell)
frac_coordinates = structure.arrays['positions'].dot(Binv)

- with open(inpFilename, 'w+') as f:
+ with open(inpFilename, 'w+', encoding='utf-8') as f:
natoms = len(structure.arrays['numbers'])
f.write(structureFormula + '\r\n')
f.write('&input film=F /\r\n')
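The `encoding='utf-8'` addition pins the file encoding instead of relying on `open()`'s platform-dependent locale default (the behaviour pylint flags as `unspecified-encoding`). A small sketch of why this matters for files that may contain non-ASCII characters; the formula string is a made-up example:

```python
import os
import tempfile

# 'Fe₂O₃' contains non-ASCII subscript digits; with an explicit encoding the
# written bytes are identical on every platform, while the default encoding
# depends on the user's locale and can fail or differ on e.g. Windows.
formula = 'Fe₂O₃'  # hypothetical structure formula

path = os.path.join(tempfile.mkdtemp(), 'inp')
with open(path, 'w', encoding='utf-8') as f:
    f.write(formula + '\n')

with open(path, encoding='utf-8') as f:
    read_back = f.read()
```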
4 changes: 2 additions & 2 deletions masci_tools/io/parsers/fleur/__init__.py
@@ -15,7 +15,7 @@

from .fleur_inpxml_parser import inpxml_parser
from .fleur_outxml_parser import outxml_parser, register_migration, conversion_function
- from . import task_migrations #pylint: disable=unused-import
- from . import outxml_conversions #pylint: disable=unused-import
+ from . import task_migrations #pylint: disable=unused-import,cyclic-import
+ from . import outxml_conversions #pylint: disable=unused-import,cyclic-import

__all__ = ['inpxml_parser', 'outxml_parser', 'register_migration', 'conversion_function']
2 changes: 1 addition & 1 deletion masci_tools/io/parsers/fleur/default_parse_tasks.py
@@ -63,7 +63,7 @@
.. literalinclude:: ../../../../masci_tools/io/parsers/fleur/default_parse_tasks.py
:language: python
- :lines: 66-
+ :lines: 70-
:linenos:
"""
154 changes: 71 additions & 83 deletions masci_tools/io/parsers/fleur/fleur_inpxml_parser.py
@@ -128,104 +128,92 @@ def inpxml_todict(parent: etree._Element,
:return: a python dictionary
"""
#These keys have to never appear as an attribute/tag name
#The underscores should guarantee that
_TEXT_PLACEHOLDER = '__text__'
_OMIT_PLACEHOLDER = '__omit__'

#Check if this is the first call to this routine
if base_xpath is None:
base_xpath = f'/{parent.tag}'

return_dict: dict[str, Any] = {}
if list(parent.items()):
return_dict = {str(key): val for key, val in parent.items()}
# Now we have to convert lazy fortran style into pretty things for the Database
for key in return_dict:
if key in schema_dict['attrib_types']:
return_dict[key], suc = convert_from_xml(return_dict[key],
content: dict[str, Any] = {}
# Now we have to convert lazy fortran style into pretty things for the Database
for key, value in parent.items():
attrib_name, value = str(key), str(value)
if attrib_name in schema_dict['attrib_types']:
content[attrib_name], suc = convert_from_xml(value,
schema_dict,
key,
attrib_name,
text=False,
constants=constants,
logger=logger)
if not suc and logger is not None:
logger.warning("Failed to convert attribute '%s' Got: '%s'", key, return_dict[key])

if parent.text:
# has text, but we don't want all the '\n' s and empty strings in the database
if parent.text.strip() != '': # might not be the best solutions
if parent.tag not in schema_dict['text_tags']:
if logger is not None:
logger.error('Something is wrong in the schema_dict: %s is not in text_tags, but it has text',
parent.tag)
raise ValueError(
f'Something is wrong in the schema_dict: {parent.tag} is not in text_tags, but it has text')

converted_text, suc = convert_from_xml(str(parent.text),
schema_dict,
parent.tag,
text=True,
constants=constants,
logger=logger)

if not suc and logger is not None:
logger.warning("Failed to convert text of '%s' Got: '%s'", parent.tag, parent.text)
logger.warning("Failed to convert attribute '%s' Got: '%s'", attrib_name, value)

# has text, but we don't want all the '\n' s and empty strings in the database
if parent.text and parent.text.strip() != '':

if parent.tag not in schema_dict['text_tags']:
if logger is not None:
logger.error('Something is wrong in the schema_dict: %s is not in text_tags, but it has text',
parent.tag)
raise ValueError(
f'Something is wrong in the schema_dict: {parent.tag} is not in text_tags, but it has text')

if not return_dict:
return_dict = converted_text #type:ignore
else:
return_dict['text_value'] = converted_text
if 'label' in return_dict:
return_dict['text_label'] = return_dict['label']
return_dict.pop('label')
converted_text, suc = convert_from_xml(str(parent.text),
schema_dict,
parent.tag,
text=True,
constants=constants,
logger=logger)

if not suc and logger is not None:
logger.warning("Failed to convert text of '%s' Got: '%s'", parent.tag, parent.text)

content[_TEXT_PLACEHOLDER] = converted_text

tag_info = schema_dict['tag_info'].get(base_xpath, EMPTY_TAG_INFO)
for element in parent:

new_base_xpath = f'{base_xpath}/{element.tag}'
omitt_contained_tags = element.tag in schema_dict['omitt_contained_tags']
new_return_dict = inpxml_todict(element,
schema_dict,
constants,
base_xpath=new_base_xpath,
omitted_tags=omitt_contained_tags,
logger=logger)

if element.tag in tag_info['several']:
# make a list, otherwise the tag will be overwritten in the dict
if element.tag not in return_dict: # is this the first occurrence?
if omitted_tags:
if len(return_dict) == 0:
return_dict = [] #type:ignore
else:
return_dict[element.tag] = []
if omitted_tags:
return_dict.append(new_return_dict) #type:ignore
elif 'text_value' in new_return_dict:
for key, value in new_return_dict.items():
if key == 'text_value':
return_dict[element.tag].append(value)
elif key == 'text_label':
if 'labels' not in return_dict:
return_dict['labels'] = {}
return_dict['labels'][value] = new_return_dict['text_value']
else:
if key not in return_dict:
return_dict[key] = []
elif not isinstance(return_dict[key], list): #Key seems to be defined already
if logger is not None:
logger.error('%s cannot be extracted to the next level', key)
raise ValueError(f'{key} cannot be extracted to the next level')
return_dict[key].append(value)
for key in new_return_dict.keys():
if key in ['text_value', 'text_label']:
continue
if len(return_dict[key]) != len(return_dict[element.tag]):
child_content = inpxml_todict(element,
schema_dict,
constants,
base_xpath=f'{base_xpath}/{element.tag}',
omitted_tags=element.tag in schema_dict['omitt_contained_tags'],
logger=logger)

if _OMIT_PLACEHOLDER in child_content:
#We know that there is only one key here
child_content = child_content.pop(_OMIT_PLACEHOLDER)

tag_name = element.tag
if omitted_tags:
tag_name = _OMIT_PLACEHOLDER

if element.tag in tag_info['several']\
and _TEXT_PLACEHOLDER in child_content:
#The text is stored under the name of the tag
text_value = child_content.pop(_TEXT_PLACEHOLDER)
content.setdefault(tag_name, []).append(text_value)
child_tag_info = schema_dict['tag_info'].get(f'{base_xpath}/{element.tag}', EMPTY_TAG_INFO)
for key, value in child_content.items():
if key not in child_tag_info['optional_attribs']:
#All required attributes are stored as lists
if key in content and \
not isinstance(content[key], list): #Key seems to be defined already
if logger is not None:
logger.error(
'Extracted optional argument %s at the moment only label is supported correctly', key)
raise ValueError(
f'Extracted optional argument {key} at the moment only label is supported correctly')
else:
return_dict[element.tag].append(new_return_dict)
logger.error('%s cannot be extracted to the next level', key)
raise ValueError(f'{key} cannot be extracted to the next level')
content.setdefault(key, []).append(value)
else:
#All optional attributes are stored as dicts pointing to the text
content.setdefault(key, {})[value] = text_value
elif element.tag in tag_info['several']:
content.setdefault(tag_name, []).append(child_content)
elif _TEXT_PLACEHOLDER in child_content:
content[tag_name] = child_content.pop(_TEXT_PLACEHOLDER)
else:
return_dict[element.tag] = new_return_dict
content[tag_name] = child_content

return return_dict
return content
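A central simplification in the rewritten `inpxml_todict` is collecting repeated child tags with `dict.setdefault(tag, []).append(...)`, which removes the explicit is-this-the-first-occurrence branching of the old version. An isolated sketch of the pattern; the tag names and values are made up:

```python
def group_children(pairs, several=('kPoint',)):
    """Group (tag, value) pairs into a dict; tags in `several` become lists."""
    content = {}
    for tag, value in pairs:
        if tag in several:
            # setdefault creates the list on the first occurrence, so no
            # explicit "first time seen?" branching is needed
            content.setdefault(tag, []).append(value)
        else:
            content[tag] = value
    return content

grouped = group_children([('comment', 'bulk Si'),
                          ('kPoint', [0.0, 0.0, 0.0]),
                          ('kPoint', [0.5, 0.0, 0.0])])
```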
10 changes: 4 additions & 6 deletions masci_tools/io/parsers/fleur/fleur_outxml_parser.py
@@ -218,14 +218,12 @@ def outxml_parser(outxmlfile: XMLFileLike,
if not list_return:
#Convert one item lists to simple values
for key, value in out_dict.items():
- if isinstance(value, list):
- if len(value) == 1:
- out_dict[key] = value[0]
+ if isinstance(value, list) and len(value) == 1:
+ out_dict[key] = value[0]
elif isinstance(value, dict):
for subkey, subvalue in value.items():
- if isinstance(subvalue, list):
- if len(subvalue) == 1:
- out_dict[key][subkey] = subvalue[0]
+ if isinstance(subvalue, list) and len(subvalue) == 1:
+ out_dict[key][subkey] = subvalue[0]

if parser_log_handler is not None:
if logger is not None:
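The `outxml_parser` change above only merges the nested `if` statements into single conditions; behaviour is unchanged: one-element lists collapse to plain values, one level down into sub-dicts as well. A self-contained sketch of that logic with the merged conditions; the example dictionary is invented:

```python
def collapse_single_items(out_dict):
    """Collapse one-element lists to plain values, one level into sub-dicts."""
    for key, value in out_dict.items():
        if isinstance(value, list) and len(value) == 1:
            out_dict[key] = value[0]
        elif isinstance(value, dict):
            for subkey, subvalue in value.items():
                if isinstance(subvalue, list) and len(subvalue) == 1:
                    out_dict[key][subkey] = subvalue[0]
    return out_dict

result = collapse_single_items({'energy': [-4204.7],
                                'charges': {'total': [28.0]},
                                'iteration': [1, 2]})
```

Multi-element lists such as `'iteration'` are left untouched, matching the `list_return` guard in the parser above.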