This repository was archived by the owner on Apr 15, 2022. It is now read-only.

Commit ea6779e

DBAAS-2571 (#23)
* DBAAS-2387: Added a createTable function to create a new schema.table from a DataFrame (see the sketch after this list)
* DBAAS-2387: Added an optional upper param to insert, upsert, and update for mixed-case columns in SQL tables
* Added a replaceDataframeSchema function to properly set the case of each column name of the DataFrame
* DBAAS-2387: Fixed schema_table_name in params
* DBAAS-2387: Added 26 unit tests; refactored the createTable function; refactored tests
* Removed .pyc files and __pycache__ via .gitignore
* Removed pytest cache
* DBAAS-2387: Removed changes from .gitignore
* DBAAS-2571: Updated MLManager to include the new features released in MLflow 1.0
* DBAAS-2571: More features for the MLflow 1.0 upgrade; deprecated SpliceMLContext; fixed specificity in the binary classification evaluator
* DBAAS-2571: Added deployment to SageMaker programmatically
* DBAAS-2571: Fixed minor bugs and renamed functions
* DBAAS-2571: Fixed a typo in the if-statement checking the validity of the MLflow REST tracking endpoint URL
* DBAAS-2571: Added governance for new runs and support for overriding the username
* DBAAS-2571: Made the get-user function less verbose
* DBAAS-2571: Added more checks
* DBAAS-2571: Removed unused dependencies
* DBAAS-2571: Removed a mutable default argument
* DBAAS-2571: Fixed user governance
* DBAAS-2571: Added database store support via py4j for models and artifacts
* DBAAS-2571: Fixed a typo in reset run
* DBAAS-2571: Updated the JVM
* DBAAS-2571: Added Basic Auth to job initiation and added Azure deployment
* DBAAS-2571: Added OneVsRest support for model hyperparameter extraction
* Added a missing comma in dependencies
* Update setup.py
* Update setup.py
* Update context.py
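A minimal sketch of how the new DataFrame helpers listed above might be used. The import path, the PySpliceContext constructor, and the exact method signatures are assumptions inferred from the commit message, not confirmed against this version of the library.

    # Hypothetical usage sketch of createTable and the optional `upper` flag
    # described above. Import path and signatures are assumed, not verified.
    from pyspark.sql import SparkSession
    from splicemachine.spark.context import PySpliceContext  # assumed module path

    spark = SparkSession.builder.getOrCreate()
    splice = PySpliceContext(spark)  # assumed constructor; a JDBC URL may also be required

    df = spark.createDataFrame([(1, "a"), (2, "b")], ["Id", "Value"])

    # DBAAS-2387: create a new schema.table directly from a DataFrame
    splice.createTable(df, "SPLICE.EXAMPLE_TABLE")

    # DBAAS-2387: `upper` upper-cases mixed-case DataFrame column names so they
    # line up with the upper-case column names stored by the SQL table
    splice.insert(df, "SPLICE.EXAMPLE_TABLE", upper=True)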
1 parent 8e037f8 commit ea6779e

6 files changed: 847 additions, 137 deletions


.gitignore

Lines changed: 3 additions & 0 deletions
@@ -1,6 +1,9 @@
 /.idea
 /.vscode
 .DS_Store
+splicemachine/.DS_Store
 *.pyc
+*.pyo
+**/__pycache__
 splicemachine/ml/test/
 splicemachine/ml/utilities.pyc

setup.py

Lines changed: 9 additions & 18 deletions
@@ -17,22 +17,11 @@
 from setuptools import setup, find_packages

 dependencies = [
-    "atomicwrites==1.1.5",
-    "attrs==18.1.0",
-    "more-itertools==4.2.0",
-    "pluggy==0.6.0",
-    "py==1.5.3",
-    "py4j==0.10.7",
-    "pytest==3.6.1",
-    "six==1.11.0",
-    "mlflow==0.8.0",
-    "graphviz==0.8.4",
-    "numpy==1.15.0",
-    "h2o_pysparkling_2.2",
-    "pandas==0.22.0",
-    "pyspark-dist-explore==0.1.7",
-    "tqdm==4.32.2",
-    "statsmodels==0.9.0"
+    "py4j==0.10.8.1",
+    "pytest==5.1.3",
+    "mlflow==1.1.0",
+    "graphviz==0.13",
+    "future"
 ]
 setup(
     name="splicemachine",
@@ -42,7 +31,9 @@
     license='Apache License, Version 2.0',
     long_description=open('README.md').read(),
     author="Splice Machine, Inc.",
-    author_email="[email protected]",
-    description="This package contains all of the classes and functions you need to interact with Splice Machine's scale out, Hadoop on SQL RDBMS from Python. It also contains several machine learning utilities for use with Apache Spark.",
+    author_email="[email protected]",
+    description="This package contains all of the classes and functions you need to interact "
+                "with Splice Machine's scale out, Hadoop on SQL RDBMS from Python. It also contains"
+                " several machine learning utilities for use with Apache Spark.",
     url="https://github.com/splicemachine/pysplice/"
 )
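With the dependency list trimmed to the handful of packages above, one quick way to confirm that an environment matches the new pins is to compare installed versions against them. A small sketch, not part of the repo, assuming Python 3.8+ for importlib.metadata:

    # Hypothetical sanity check comparing installed versions to the pins in the
    # updated dependency list above (requires Python 3.8+).
    from importlib.metadata import PackageNotFoundError, version

    pins = {"py4j": "0.10.8.1", "pytest": "5.1.3", "mlflow": "1.1.0", "graphviz": "0.13"}
    for pkg, pinned in pins.items():
        try:
            print(f"{pkg}: installed {version(pkg)}, setup.py pins {pinned}")
        except PackageNotFoundError:
            print(f"{pkg}: not installed (setup.py pins {pinned})")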

0 commit comments
