NTDS.dit Plugin #1347
Conversation
Force-pushed from 5f6c452 to 0be940c
Regarding this PR, I've managed to get it to work with my own PR (fox-it/dissect.database#8) by changing a few things.

In the …:

    ntds_path_key = target.registry.value(
        key="HKLM\\SYSTEM\\CurrentControlSet\\Services\\NTDS\\Parameters",
        value="DSA Database file"
    )
    self.ntds = NTDS(target.fs.open(ntds_path_key.value))

Changing all the …:

    def _collect_pek_and_user_records(self) -> tuple[bytes, Generator[User]]:
        return next(self.ntds.lookup(objectCategory="domainDNS")).pekList, self.ntds.users()

and some other minor adjustments. Even on a small dataset that already seemed to result in a performance boost, since the …

Note: not all accounts seem to have entries for …

Please let me know what you think of this and if you need any help with refactoring & editing the code. You can also DM me on the Dissect discord.
Hi @joost-j, …

Cool! Good to hear - a quick check from my side also confirms that it works well. When running your …

Hi @joost-j, thanks for the insight! I’ve fixed that import as well. By the way, do you plan to test your database parser against any older domain controllers? The extraction seems to be a bit different — especially the hash extraction — and I currently don’t have good test data from older DC versions to compare with.

That is the goal, but we will incrementally improve the parser.
CodSpeed Performance Report: Merging #1347 will not alter performance.
Makes sense. Once you feel the parser is ready for those older versions, could you let me know? I’d be happy to run a few tests on my end and give you some feedback on how it handles the extraction.
PR fox-it/dissect.database#8 is now ready to be reviewed, and probably will be reviewed somewhere in the upcoming weeks. The PR now also includes most of the decryption routines for PEK lists etc. Once that PR is merged, you can edit your plugin once more to focus mainly on the plugin functionality and record publishing parts, leaving out most of the internals, because you can now mostly wrap around those functions from …

If you have any test sets of old domains that are suitable for open-source sharing, that would be greatly appreciated!
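For illustration, once that is possible the plugin could be reduced to roughly the following shape. This is only a sketch: the NTDS import path, the sAMAccountName attribute and the record fields are assumptions, and the hard-coded sysvol path stands in for the registry lookup shown earlier.

    from dissect.database import NTDS  # import path assumed, based on the usage in the earlier comment

    from dissect.target.exceptions import UnsupportedPluginError
    from dissect.target.helpers.record import TargetRecordDescriptor
    from dissect.target.plugin import Plugin, export

    # Illustrative record layout; the real plugin will likely expose more fields
    # (LM hash, hash history, supplemental credentials, ...).
    NtdsUserRecord = TargetRecordDescriptor(
        "windows/ntds/user",
        [
            ("string", "username"),
            ("string", "nt_hash"),
        ],
    )

    NTDS_DIT = "sysvol/windows/ntds/ntds.dit"  # simplified; the real lookup goes via the registry


    class NtdsPlugin(Plugin):
        """Sketch: wrap a dissect.database-style NTDS parser and only publish records."""

        def check_compatible(self) -> None:
            if not self.target.fs.path(NTDS_DIT).exists():
                raise UnsupportedPluginError("No NTDS.dit found on target")

        @export(record=NtdsUserRecord)
        def ntds_users(self):
            ntds = NTDS(self.target.fs.path(NTDS_DIT).open())
            for user in ntds.users():
                yield NtdsUserRecord(
                    username=user.sAMAccountName,  # attribute name assumed
                    nt_hash=None,                  # decryption delegated to the library
                    _target=self.target,
                )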
Thanks for the heads-up! I’ve already converted the plugin to use the new functions from your PR, and the code is definitely much cleaner now. However, I’ve run into a couple of issues with the new routines. First, the hash decryption doesn't seem to be working—it’s returning the same buffer instead of the decrypted result. Second, the tool is now crashing when I try to decrypt the hash for the …

I'll keep digging into it, but let me know if there are any specific nuances in the new …

Unfortunately, I don't currently have any test data from old domains that would be suitable for open-source sharing.
@B0TAxy you need to “unlock” the PEK first now (…). If after that you still have some issues, please let me know and if possible provide some test data (PEK, syskey and encrypted blobs).
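For context, on older domain controllers that “unlock” step classically looks something like the sketch below, modelled on widely documented tooling such as impacket's secretsdump rather than on dissect.database's actual API (2016+ domain controllers use an AES-based variant instead of RC4).

    from hashlib import md5

    from Crypto.Cipher import ARC4  # pycryptodome


    def unlock_pek_list(encrypted_pek_list: bytes, syskey: bytes) -> bytes:
        """Strip the RC4 layer from the pekList blob using the SYSTEM hive bootkey (syskey)."""
        key_material = encrypted_pek_list[8:24]  # 16-byte salt after an 8-byte header
        encrypted = encrypted_pek_list[24:]

        # RC4 key = MD5(syskey || salt repeated 1000 times)
        digest = md5(syskey)
        for _ in range(1000):
            digest.update(key_material)

        return ARC4.new(digest.digest()).decrypt(encrypted)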
Yes, I'm aware of the issue—I had some trouble with Git and it didn't push the new code as expected. That’s fixed now and the latest version is pushed. However, I’m still running into the problem with the …

For test data, you can use the updated plugin along with the SYSTEM hive and the NTDS.dit file from …

Regarding the decryption, I noticed the output buffer remains unchanged, so I've left the function as-is for now. I'll merge it once I can re-test the plugin.
Can you add the NTDS.dit file you're using as a test file under …? That way, we can add a unit test. Maybe use the GOAD one as that one has credential history?
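A rough sketch of what such a test could look like, loosely following the structure of existing dissect.target plugin tests; the data paths, import path and exported function name below are placeholders.

    from dissect.target.plugins.os.windows.credential.ntds import NtdsPlugin  # hypothetical import path


    def test_ntds_plugin(target_win, fs_win):
        # Map the test artifacts into the virtual Windows filesystem
        # (paths are placeholders for wherever the fixtures end up living).
        fs_win.map_file("windows/system32/config/SYSTEM", "tests/_data/path/to/SYSTEM")
        fs_win.map_file("windows/ntds/ntds.dit", "tests/_data/path/to/ntds.dit")

        target_win.add_plugin(NtdsPlugin)

        records = list(target_win.ntds_users())  # hypothetical exported function name
        assert records, "expected at least one decrypted secret record"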
That’s a great idea. I’m adding both the GOAD data and my previous test samples to the repo so we can validate the logic against multiple sources and ensure coverage for credential history. I'm also finishing up the shared DES function now to ensure the SAM and NTDS plugins remain consistent. I'll update the PR as soon as the tests are ready.
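For reference, the RID-derived DES layer that SAM and NTDS hash decryption classically share looks like the sketch below (following the widely documented scheme, e.g. impacket; not necessarily how the shared helper in this PR is structured).

    from struct import pack

    from Crypto.Cipher import DES  # pycryptodome


    def expand_des_key(key7: bytes) -> bytes:
        """Expand a 7-byte key into an 8-byte DES key (parity bits left unset)."""
        k = bytearray(key7)
        return bytes(
            [
                ((k[0] >> 1) & 0x7F) << 1,
                (((k[0] & 0x01) << 6) | ((k[1] >> 2) & 0x3F)) << 1,
                (((k[1] & 0x03) << 5) | ((k[2] >> 3) & 0x1F)) << 1,
                (((k[2] & 0x07) << 4) | ((k[3] >> 4) & 0x0F)) << 1,
                (((k[3] & 0x0F) << 3) | ((k[4] >> 5) & 0x07)) << 1,
                (((k[4] & 0x1F) << 2) | ((k[5] >> 6) & 0x03)) << 1,
                (((k[5] & 0x3F) << 1) | ((k[6] >> 7) & 0x01)) << 1,
                (k[6] & 0x7F) << 1,
            ]
        )


    def rid_to_des_keys(rid: int) -> tuple[bytes, bytes]:
        """Derive the two DES keys from a user's RID."""
        r = pack("<L", rid)
        k1 = bytes([r[0], r[1], r[2], r[3], r[0], r[1], r[2]])
        k2 = bytes([r[3], r[0], r[1], r[2], r[3], r[0], r[1]])
        return expand_des_key(k1), expand_des_key(k2)


    def remove_des_layer(encrypted_hash: bytes, rid: int) -> bytes:
        """Remove the DES layer from a 16-byte hash that already had its PEK layer removed."""
        k1, k2 = rid_to_des_keys(rid)
        return (
            DES.new(k1, DES.MODE_ECB).decrypt(encrypted_hash[:8])
            + DES.new(k2, DES.MODE_ECB).decrypt(encrypted_hash[8:16])
        )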
Hello, apologies for the delay. I encountered several challenges with the tests, but I have made progress nonetheless. Here is a summary of the status and open items: …

Could you please look into both the hash history length issue and the …
I can take a look at the fixtures and hash length issue. What exactly do you mean by ground truth data? What would you need?

As for the exception, this is probably because I currently chose to throw an exception when trying to list the groups/members of objects that have been replicated from another DC (phantom objects), as that NTDS won't have that information available. My reasoning was that returning an empty list is not really accurate, since they probably do have groups/members, we just can't know about it. I wanted to experiment with how that would work out in dissect.target, if the exception is annoying (vs a boolean check on …).
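To make the two options concrete, consumer code in dissect.target could look roughly like the sketch below; PhantomObjectError and is_phantom are made-up names for illustration, not dissect.database's actual API.

    class PhantomObjectError(Exception):
        """Placeholder for whatever the parser raises for phantom (replicated-in) objects."""


    def collect_group_names(user):
        """Return a list of group names, or None when the local NTDS simply cannot know them."""
        # Option A: a boolean check up front.
        if getattr(user, "is_phantom", False):
            return None

        # Option B: let the parser raise and translate that into "unknown".
        try:
            return [group.name for group in user.groups()]
        except PhantomObjectError:
            return None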
Thanks for looking into the fixtures and hash length! … Knowing these allows me to confirm that I am extracting the correct hash.
Except for the changed passwords (which I unfortunately don't remember what I changed them to :)), all data is the same as here: https://github.com/Orange-Cyberdefense/GOAD/blob/main/ad/GOAD/data/config.json
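Since the GOAD passwords are known plaintexts, the extracted NT hashes can be checked against them directly; a small helper for that (the password below is a placeholder, not an actual GOAD value):

    from Crypto.Hash import MD4  # pycryptodome; hashlib's MD4 is often unavailable on OpenSSL 3 builds


    def nt_hash(password: str) -> str:
        """NT hash = MD4 over the UTF-16LE encoded password."""
        return MD4.new(password.encode("utf-16-le")).hexdigest()


    # Compare against the hash the plugin extracts for the same account.
    print(nt_hash("Password123!"))  # placeholder password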
I looked into this, and I believe this is a side-effect of the large NTDS.dit probably being "dirty" (can you confirm @joost-j?). This could probably be fixed if we implement transaction files: fox-it/dissect.database#18

We confirmed this is the case and updated the test data in …
Description
This pull request introduces an initial implementation of an NTDS.dit plugin.
The parser extracts and decrypts secret records (e.g., user hashes, supplemental credentials) from an Active Directory NTDS database.
At this stage, the parser is in a late development preview:
- Core functionality for parsing and decrypting secret records is available.
- Support for additional record types (e.g., groups, ACLs, domain data) is not yet implemented.
- Output support for BloodHound-compatible JSON is planned.
- Automated tests have not been added yet; these will be included in a follow-up update.
The goal of opening this PR now is to gather early feedback and design review before finalizing the remaining features.
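For illustration, a minimal usage sketch (the exported function name is a placeholder; the final name may differ):

    from dissect.target import Target

    # Placeholder path; any evidence format supported by dissect.target works here.
    target = Target.open("path/to/domain_controller.vmdk")

    for record in target.ntds_users():  # placeholder exported function name
        print(record)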
Checklist