This script removes duplicate records from formatted .xlsx files exported from Scopus, Web of Science, PubMed, PubMed Central, Dimensions, or Google Scholar (via Publish or Perish). At least two files from two different databases are required.
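Conceptually, the task boils down to stacking the exported tables and keeping one copy of each record. The pandas sketch below illustrates that general idea only; the file names and the "DOI" column are assumptions for the example and do not reflect the script's actual implementation.

```python
# Illustrative sketch of cross-database deduplication with pandas.
# File names and the "DOI" column are assumptions, not the script's real schema.
import pandas as pd

# Read two exports from different databases (hypothetical file names).
# Reading .xlsx files may additionally require the openpyxl package.
scopus = pd.read_excel("scopus.xlsx")
wos = pd.read_excel("wos.xlsx")

# Stack the records and drop duplicates, here keyed on a normalized DOI.
# A real tool also has to handle records without a DOI (e.g. by title matching).
merged = pd.concat([scopus, wos], ignore_index=True)
merged["doi_key"] = merged["DOI"].str.strip().str.lower()
unique = merged.drop_duplicates(subset="doi_key").drop(columns="doi_key")

# Write the deduplicated table (xlsxwriter is listed among the dependencies).
unique.to_excel("unique_records.xlsx", index=False, engine="xlsxwriter")
```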
$ sudo apt install -y python3-pip
$ sudo pip3 install --upgrade pip
$ sudo pip3 install argparse
$ sudo pip3 install xlsxwriter
$ sudo pip3 install numpy
$ sudo pip3 install pandas
$ sudo pip3 install crossrefapi
$ sudo pip3 install tqdm
$ sudo pip3 install colorama
To clone and run this application, you'll need Git installed on your computer. From your command line:
# Clone this repository
$ git clone https://github.com/glenjasper/remove-duplicates.git
# Go into the repository
$ cd remove-duplicates
# Run the app
$ python3 remove_duplicates.py --help
You can download the latest installable version of remove-duplicates.
$ python3 remove_duplicates.py --help
usage: remove_duplicates.py [-h] -f FILES [-o OUTPUT] [--version]
This script eliminates the duplicated records from formatted .xlsx files from Scopus,
Web of Science, PubMed, PubMed Central, Dimensions or Google Scholar (Publish or
Perish). Is mandatory that there be at least 2 different files from 2 different
databases.
optional arguments:
  -h, --help            show this help message and exit
  -f FILES, --files FILES
                        .xlsx files separated by comma
  -o OUTPUT, --output OUTPUT
                        Output folder
  --version             show program's version number and exit
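For example, a run over a Scopus export and a Web of Science export could look like this (the file and folder names are only placeholders):

$ python3 remove_duplicates.py -f scopus.xlsx,wos.xlsx -o output_folder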
Thank you!
- Molecular and Computational Biology of Fungi Laboratory (LBMCF, ICB - UFMG, Belo Horizonte, Brazil).
This project is licensed under the MIT License - see the LICENSE file for details.