
yatima1460/Drill


Drill

Very fast file searcher without indexing

Download latest release


TL;DR: What is this

  • Multithreaded
  • Uses as much RAM as possible to cache results
  • Heuristics to prioritize the most important folders
  • Intended for desktop users: no scans of obscure Linux system files
  • Betting on the future: tested only on SSDs/M.2 drives or fast RAID arrays

Run

Precompiled binary

Just grab the latest version and run the executable

From source

pip3 install -r requirements-run.txt
python3 src/main.py

What happened to the old code?

The old D code and other experimental versions are available in the other branches. This main branch started as an orphan branch to make a clean cut with the old Drill.

What is this in detail

I was frustrated on Linux because I couldn't find the files I needed. File searchers based on system indexing (updatedb) are prone to breaking and hard to configure for the average user, so I pulled an all-nighter and started this.

Drill is a modern file searcher for Linux that tries to fix the old problem of slow searching and indexing. Nowadays SSDs are common and nearly every PC has at least 8GB of RAM and a quad-core CPU; knowing this, it's time to design a future-proof file searcher that doesn't cater to weak systems and uses the machine's full multithreaded power in a clever way to find your files as fast as possible.

  • Heuristics: The first change is the algorithm. Many file searchers use depth-first traversal, which is a poor choice: normal users don't nest folders very deeply, so a depth-first crawl easily gets lost inside "black hole folders" or software-generated archives. A breadth-first algorithm that scans your disks level by level has a higher chance of finding the files you need early. Drill also excludes obviously uninteresting folders while crawling, such as Windows and node_modules: the average user doesn't care about .dlls and system files, and generally neither do devs; if you need to find a system file, you already know what you are doing and shouldn't be using a UI tool.

  • Clever multithreading: The second change is clever multithreading. I've never seen a file searcher that starts a thread per disk, and it's 2019. The bottleneck for file searchers is almost always disk speed, not CPU or RAM, so why does everyone scan the disks sequentially?

  • Use your goddamn RAM: The third change is caching everything. I don't care about your RAM; I will use even 8GB of it if that gives me a faster way to find your files. Unused RAM is wasted RAM, and that gets truer as time passes.
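The breadth-first crawl with folder exclusions described above can be sketched roughly like this. This is a minimal illustration, not Drill's actual code, and the exclusion list here is made up for the example:

```python
import os
from collections import deque

# Folders that rarely contain user files (illustrative, not Drill's real list)
EXCLUDED = {"node_modules", "Windows", ".git", "__pycache__"}

def bfs_search(root, needle):
    """Scan directories level by level, skipping 'black hole' folders,
    so shallow user files are found before deeply nested ones."""
    queue = deque([root])
    matches = []
    while queue:
        current = queue.popleft()
        try:
            with os.scandir(current) as entries:
                for entry in entries:
                    if needle.lower() in entry.name.lower():
                        matches.append(entry.path)
                    if entry.is_dir(follow_symlinks=False) and entry.name not in EXCLUDED:
                        queue.append(entry.path)
        except (PermissionError, FileNotFoundError):
            continue  # unreadable or vanished folders are simply skipped
    return matches
```

Because the queue is processed in FIFO order, every folder at depth 1 is scanned before any folder at depth 2, which is what gives shallow matches priority.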
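The thread-per-disk idea can be illustrated with a sketch like the following. It is not Drill's implementation: `roots` stands in for the detected mount points, and `search_fn` is a hypothetical per-disk search function passed in by the caller:

```python
import threading
from queue import Queue

def search_all_roots(roots, needle, search_fn):
    """Start one thread per root (e.g. one per mounted disk) so a slow
    drive never blocks results coming from a fast one."""
    results = Queue()

    def worker(root):
        for path in search_fn(root, needle):
            results.put(path)

    threads = [threading.Thread(target=worker, args=(r,)) for r in roots]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return list(results.queue)
```

Since each disk is its own I/O bottleneck, running the crawls in parallel costs little CPU and lets the total search time approach that of the slowest single disk rather than the sum of all of them.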
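The "unused RAM is wasted RAM" caching could look like a per-directory listing cache that only goes back to disk when a folder actually changed. Again, this is an illustrative sketch under my own assumptions, not Drill's implementation:

```python
import os

_listing_cache = {}  # directory path -> (mtime, list of entry names)

def cached_listdir(path):
    """Return a directory listing, re-reading from disk only when the
    directory's mtime changed since it was last cached."""
    mtime = os.stat(path).st_mtime
    cached = _listing_cache.get(path)
    if cached and cached[0] == mtime:
        return cached[1]  # cache hit: no disk I/O for the listing
    names = os.listdir(path)
    _listing_cache[path] = (mtime, names)
    return names
```

Trading RAM for repeated disk reads like this is what makes a second search over the same tree much faster than the first one.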

Contributing

Read the Issues and check the labels for high-priority ones

Contributors

Code Contributors

This project exists thanks to all the people who contribute. [Contribute].

Financial Contributors

Become a financial contributor and help us sustain our community. [Contribute]

Individuals

Organizations

Support this project with your organization. Your logo will show up here with a link to your website. [Contribute]
