
Commit

Merge pull request #139 from stephenhky/readthedocs
pin requirements for ReadTheDocs
stephenhky authored Aug 26, 2023
2 parents a3af85b + f587e91 commit bbab686
Showing 17 changed files with 28,425 additions and 18,376 deletions.
6 changes: 0 additions & 6 deletions .circleci/config.yml
@@ -28,11 +28,6 @@ shared: &shared
jobs:
py37:
<<: *shared
docker:
- image: cimg/python:3.7

py38:
<<: *shared
docker:
@@ -58,7 +53,6 @@ workflows:
version: 2
build:
jobs:
- py37
- py38
- py39
- py310
10 changes: 6 additions & 4 deletions .readthedocs.yml
@@ -9,6 +9,11 @@ version: 2
sphinx:
configuration: docs/conf.py

build:
os: ubuntu-22.04
tools:
python: "3.8"

# Build documentation with MkDocs
#mkdocs:
# configuration: mkdocs.yml
@@ -18,11 +23,8 @@ formats: all

# Optionally set the version of Python and requirements required to build your docs
python:
version: 3.7
install:
- requirements: requirements.txt
- method: pip
path: .
- requirements: docs/requirements.txt

# conda environment
#conda:
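Assembled from the hunks above, the resulting `.readthedocs.yml` would look roughly like this (a reconstruction for readability, not the verbatim file; commented-out sections are omitted):

```yaml
version: 2

sphinx:
  configuration: docs/conf.py

build:
  os: ubuntu-22.04
  tools:
    python: "3.8"

formats: all

python:
  install:
    - requirements: requirements.txt
    - method: pip
      path: .
    - requirements: docs/requirements.txt
```

Note that the `python.version` key is dropped: under the v2 build spec, the interpreter is selected via `build.tools.python` instead.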
11 changes: 2 additions & 9 deletions README.md
@@ -18,15 +18,7 @@ representation of the texts and documents are needed before they are put into
any classification algorithm. In this package, it facilitates various types
of these representations, including topic modeling and word-embedding algorithms.

Since release 1.5.2, it runs on Python 3.9.
Since release 1.5.0, support for Python 3.6 was decommissioned.
Since release 1.2.4, it runs on Python 3.8.
Since release 1.2.3, support for Python 3.5 was decommissioned.
Since release 1.1.7, support for Python 2.7 was decommissioned.
Since release 1.0.8, it runs on Python 3.7 with 'TensorFlow' being the backend for `keras`.
Since release 1.0.7, it runs on Python 3.7 as well, but the backend for `keras` cannot be `TensorFlow`.
Since release 1.0.0, `shorttext` runs on Python 2.7, 3.5, and 3.6.

The package `shorttext` runs on Python 3.8, 3.9, 3.10, and 3.11.
Characteristics:

- example data provided (including subject keywords and NIH RePORT);
@@ -92,6 +84,7 @@ If you would like to contribute, feel free to submit the pull requests. You can

## News

* 08/26/2023: `shorttext` 1.6.0 released.
* 06/19/2023: `shorttext` 1.5.9 released.
* 09/23/2022: `shorttext` 1.5.8 released.
* 09/22/2022: `shorttext` 1.5.7 released.
4 changes: 2 additions & 2 deletions docs/conf.py
@@ -56,9 +56,9 @@
# built documents.
#
# The short X.Y version.
version = u'1.5'
version = u'1.6'
# The full version, including alpha/beta/rc tags.
release = u'1.5.9'
release = u'1.6.0'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
9 changes: 2 additions & 7 deletions docs/intro.rst
@@ -8,12 +8,7 @@ representation of the texts and documents are needed before they are put into
any classification algorithm. In this package, it facilitates various types
of these representations, including topic modeling and word-embedding algorithms.

The package `shorttext` runs on Python 3.7, 3.8, and 3.9.

Since release 1.0.0, `shorttext` runs on Python 2.7, 3.5, and 3.6. Since release 1.0.7,
it runs also in Python 3.7. Since release 1.1.7, the support for Python 2.7 was decommissioned.
Since release 1.2.3, the support for Python 3.5 is decommissioned.
Since release 1.5.0, the support for Python 3.6 is decommissioned.
The package `shorttext` runs on Python 3.8, 3.9, 3.10, and 3.11.

Characteristics:

@@ -35,7 +30,7 @@ Before release 0.7.2, part of the package was implemented using C, and it is interfaced to
Python using SWIG_ (Simplified Wrapper and Interface Generator). Since 1.0.0, these implementations
were replaced with Cython_.

Author: Kwan-Yuet Ho (LinkedIn_, ResearchGate_, Twitter_)
Author: Kwan Yuet Stephen Ho (LinkedIn_, ResearchGate_, Twitter_)

Home: :doc:`index`

9 changes: 9 additions & 0 deletions docs/news.rst
@@ -1,6 +1,7 @@
News
====

* 08/26/2023: `shorttext` 1.6.0 released.
* 06/19/2023: `shorttext` 1.5.9 released.
* 09/23/2022: `shorttext` 1.5.8 released.
* 09/22/2022: `shorttext` 1.5.7 released.
@@ -78,6 +79,14 @@ News
What's New
----------

Release 1.6.0 (August 26, 2023)
-------------------------------

* Pinned requirements for ReadTheDocs documentation;
* Fixed bugs in word-embedding model mean pooling classifiers;
* Updated package requirements.


Release 1.5.9 (June 19, 2023)
-----------------------------

14 changes: 14 additions & 0 deletions docs/requirements.txt
@@ -0,0 +1,14 @@
Cython==3.0.0
numpy==1.23.3
scipy==1.10.1
joblib==1.3.0
scikit-learn==1.2.0
tensorflow==2.13.0
keras==2.13.1
gensim==4.0.0
pandas==1.2.4
snowballstemmer==2.1.0
transformers==4.32.0
torch==2.0.1
python-Levenshtein==0.21.1
numba==0.57.1
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,2 +1,2 @@
[build-system]
requires = ["setuptools", "wheel", "Cython>=0.29", "numpy >= 1.16"]
requires = ["setuptools", "wheel", "Cython>=3.0.0", "numpy >= 1.23.3"]
25 changes: 12 additions & 13 deletions requirements.txt
@@ -1,15 +1,14 @@
Cython>=0.29.0
numpy>=1.16.0
scipy>=1.6.0
joblib>=0.14
scikit-learn>=0.22.0
tensorflow>=2.5.0
keras>=2.4.0
Cython>=3.0.0
numpy>=1.23.3
scipy>=1.10.0
joblib>=1.3.0
scikit-learn>=1.2.0
tensorflow>=2.13.0
keras>=2.13.0
gensim>=4.0.0
pandas>=1.0.0
Flask>=2.0.0
pandas>=1.2.0
snowballstemmer>=2.0.0
transformers>=4.1.0
torch>=1.5.0
python-Levenshtein>=0.12.0
numba>=0.52.0
transformers>=4.32.0
torch>=2.0.0
python-Levenshtein>=0.21.0
numba>=0.57.0
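The loosened lower bounds above can be sanity-checked against an installed environment with a stdlib-only sketch. The `MINIMUMS` dict (a small subset of the list above) and the naive `parse` helper are illustrative assumptions, not part of `shorttext`; real tooling would use `packaging.version` instead:

```python
# Illustrative check that installed packages meet the minimum versions
# pinned above. MINIMUMS is a hypothetical subset for demonstration.
from importlib.metadata import PackageNotFoundError, version

MINIMUMS = {"numpy": "1.23.3", "scipy": "1.10.0", "gensim": "4.0.0"}

def parse(v: str) -> tuple:
    # Naive numeric parse: keeps the first three dot-separated numeric
    # fields; tuple comparison then orders versions component-wise.
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

for pkg, minimum in MINIMUMS.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    status = "ok" if parse(installed) >= parse(minimum) else "too old"
    print(f"{pkg} {installed} (>= {minimum}): {status}")
```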
20 changes: 13 additions & 7 deletions setup.py
@@ -1,11 +1,17 @@

from setuptools import setup
import numpy as np
from Cython.Build import cythonize


ext_modules = cythonize(['shorttext/metrics/dynprog/dldist.pyx',
'shorttext/metrics/dynprog/lcp.pyx'])
try:
from Cython.Build import cythonize
ext_modules = cythonize(['shorttext/metrics/dynprog/dldist.pyx',
'shorttext/metrics/dynprog/lcp.pyx'])
except ImportError:
from setuptools import Extension
ext_modules = [
Extension('shorttext.metrics.dynprog.dldist', ['shorttext/metrics/dynprog/dldist.c']),
Extension('shorttext.metrics.dynprog.lcp', ['shorttext/metrics/dynprog/lcp.c'])
]


def package_description():
@@ -28,7 +34,7 @@ def test_requirements():


setup(name='shorttext',
version='1.5.9',
version='1.6.0',
description="Short Text Mining",
long_description=package_description(),
long_description_content_type='text/markdown',
@@ -37,10 +43,10 @@ def test_requirements():
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Text Processing :: Linguistic",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Cython",
"Programming Language :: C",
"Natural Language :: English",
@@ -52,7 +58,7 @@ def test_requirements():
],
keywords="shorttext natural language processing text mining",
url="https://github.com/stephenhky/PyShortTextCategorization",
author="Kwan-Yuet Ho",
author="Kwan Yuet Stephen Ho",
author_email="[email protected]",
license='MIT',
ext_modules=ext_modules,
@@ -78,7 +78,7 @@ def convert_trainingdata_matrix(self, classdict):
for i in range(len(phrases)):
for j in range(min(self.maxlen, len(phrases[i]))):
train_embedvec[i, j] = self.word_to_embedvec(phrases[i][j])
indices = np.array(indices, dtype=np.int)
indices = np.array(indices, dtype=np.int_)

return classlabels, train_embedvec, indices

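The one-line change in this hunk swaps the `np.int` alias (deprecated in NumPy 1.20 and removed in NumPy 1.24, where it raises `AttributeError`) for the supported `np.int_` scalar type. A minimal sketch of the fix:

```python
import numpy as np

# On NumPy >= 1.24, np.array(..., dtype=np.int) raises AttributeError
# because the np.int alias was removed; np.int_ remains supported.
indices = [3, 0, 2]
arr = np.array(indices, dtype=np.int_)
assert arr.dtype.kind == "i"  # signed integer dtype
```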