1. OptiNiSt Environment Setup Procedure
- v1.0 (2024-03) … Initial version
- Target Environment Information
- OS … Docker on (Ubuntu 22.x / Mac / Windows 11, WSL2)
Perform the following steps to set up the environment.
Get the latest version of optinist-for-server:
git clone https://github.com/arayabrain/optinist-for-server.git -b release/v2.0.0
Rename the .env example files:
cd {OPTINIST_SETUP_DIR}
mv studio/config/.env.example studio/config/.env
mv frontend/.env.example frontend/.env
Edit studio/config/.env and set the following:
EXPDB_DIR="/app/experiments_datasets"
PUBLIC_EXPDB_DIR="/app/experiments_public"
Start the containers:
docker compose -f docker-compose.dev.yml up
Note: At this point, the multi-user mode setup is still in progress, so accessing http://localhost:3000 will show a blank page. (You will be able to access it after the setup is complete.)
If the host is Linux, perform the following additional tasks:
- Set permissions to allow access from within the container (non-root user)
- Set optinist(uid:500):www-data(gid:33)
These can be performed using the following commands:
cd {OPTINIST_SETUP_DIR}
sudo chown 500:33 ../optinist-docker-volumes/* ../optinist-docker-volumes/.snakemake/
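As a quick sanity check on the ownership step, the uid:gid pair can be read back with stat. This is an illustrative sketch on a temp directory (which is owned by the current user); on the real volume directories you would point stat at them and expect 500:33. `stat -c` is GNU coreutils, which matches the Linux host this step targets.

```shell
# Illustrative: read back the owner uid:gid of a directory with GNU stat.
# A temp directory stands in for ../optinist-docker-volumes/ here.
d=$(mktemp -d)
stat -c '%u:%g %n' "$d"
```

On the actual setup, run the same stat invocation against the volume paths used in the chown command above.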
The most up-to-date multi-user Firebase setup is specified here; please check it as well. An outline of the procedure is given below.
The following procedure can be carried out by individual developers. However, it is also possible to use the shared development firebase configs which can be found here. In that case, copy firebase_config.json and firebase_private.json to studio/config/auth.
- If you want to use the shared development firebase configs, ask the administrator of the shared development Firebase Project to create your worker account.
- Otherwise, follow the steps below to create your own Firebase config credentials.
- Go to https://console.firebase.google.com/
- Click "Add project"
- Enter your project name and click "Continue"
- Choose whether to enable Google Analytics (optional)
- Click "Continue" when project is ready
- Select "Build > Authentication" from the left menu
- Click "Get started"
- Select "Sign-in method" tab
- Click "Add new provider" in "Sign-in providers" section
- Enable "Email/Password"
- Click "Save"
- Select "Authentication" from the left menu
- Select "Users" tab
- Click "Add user"
- Fill in the form:
  - Email address: your email address
  - Password: your password
- Important: Save the "User UID" - you'll need this later for database setup
- Click settings icon (next to Project Overview)
- Select "Project settings"
- Select "General" tab
- Select "web app" in "Your apps" section
- Enter your app name and click "Register app"
  - "Firebase Hosting" setup is not required
- Copy the configuration values into studio/config/auth/firebase_config.json. The console displays a JavaScript snippet (`const firebaseConfig = {…}`), but this file must be valid JSON: keep only the object, with double-quoted keys and no comments. measurementId is optional for Firebase JS SDK v7.20.0 and later.
{
  "apiKey": "xxxxxxxxxxx",
  "authDomain": "xxxxxxxxxx.firebaseapp.com",
  "projectId": "xxxxxxxxxxxx",
  "storageBucket": "xxxxxxxxxxx.firebasestorage.app",
  "messagingSenderId": "xxxxxxxxxxxxxxxx",
  "appId": "1:xxxxxxxxxx:web:xxxxxxxxxxxxxxxxxxx",
  "measurementId": "G-xxxxxxxxxx"
}
- Select "Service accounts" tab
- Click "Generate new private key"
- Save the downloaded private key file as studio/config/auth/firebase_private.json
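Since both files under studio/config/auth have a .json extension, a common failure mode is saving content that is not strict JSON. A minimal self-contained check using python3's stdlib json.tool, shown on a sample file with placeholder values (swap in studio/config/auth/firebase_config.json for the real check):

```shell
# Write a sample config (placeholder values), then verify it parses as JSON.
cat > /tmp/firebase_config_sample.json <<'EOF'
{
  "apiKey": "xxxxxxxxxxx",
  "authDomain": "xxxxxxxxxx.firebaseapp.com",
  "projectId": "xxxxxxxxxxxx"
}
EOF
python3 -m json.tool /tmp/firebase_config_sample.json > /dev/null && echo "valid JSON"
```

If the file still contains the `const firebaseConfig = …` wrapper or single-quoted keys, json.tool exits non-zero and nothing is printed.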
Obtain the test data from the following: https://drive.google.com/drive/folders/1-oY2yYl2QuteA13o2SvAAxKljhwKsVJY
Check the Readme file for information on the data organisation. Save the data in a directory on your computer within the same parent directory as your optinist repo (e.g. optinist-for-server), then unzip it. Then set permissions:
The file structure should be:
/app/experiments_datasets/
└── M000024/
└── M000024_ori017/
├── M000024_ori017.nd2
├── M000024_ori017_metadata.json
└── M000024_ori017_trialstructure.mat
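The tree above can be scaffolded (or checked against) with a short shell sketch. Paths and filenames are taken from the example; /tmp stands in for the real EXPDB_DIR:

```shell
# Create the expected dataset skeleton under a stand-in EXPDB_DIR.
base=/tmp/experiments_datasets            # replace with your EXPDB_DIR
sess="$base/M000024/M000024_ori017"
mkdir -p "$sess"
touch "$sess/M000024_ori017.nd2" \
      "$sess/M000024_ori017_metadata.json" \
      "$sess/M000024_ori017_trialstructure.mat"
find "$base" | sort                        # lists the tree created above
```

The batch process expects this dataset/session nesting, so a mismatch here (e.g. files placed directly under M000024/) is worth ruling out before debugging further.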
cd optinist-docker-volumes
sudo chmod 777 ./ logs/ .snakemake/ ../experiments_datasets/ ../experiments_public/
If that is not sufficient, try:
sudo chmod -R a+rw ../experiments_datasets
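To see what the recursive a+rw actually grants, here is a self-contained sketch on a throwaway tree: symbolic mode a+rw ORs the read and write bits for user, group, and other into each entry's existing mode.

```shell
# Demonstrate chmod -R a+rw on a temp tree.
tree=$(mktemp -d)
mkdir -p "$tree/sub"
touch "$tree/sub/file"
chmod -R a+rw "$tree"
stat -c '%a %n' "$tree/sub/file"   # file mode ends up 666 (rw for all)
```

Note this is more permissive than the chown-based approach below; it trades precision for convenience on a development host.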
If the host is Linux, perform the following additional tasks:
- Set permissions to allow access from within the container (non-root user)
- Set optinist(uid:500):www-data(gid:33)
These can be performed using the following commands:
cd {OPTINIST_SETUP_DIR} # most probably optinist-for-server
sudo chown -R 500:33 ../experiments_datasets/ ../experiments_public/
studio/config/.env should contain the variables below. Replace the placeholder values with your own details. Examples are given for the Docker setup (studio/config/.env.docker) and the local setup (studio/config/.env.local).
SECRET_KEY='XXXXXXX'
USE_FIREBASE_TOKEN=True # for log in with multiple users
REFRESH_TOKEN_EXPIRE_MINUTES=1440 # 24 hours
ECHO_SQL=True
MYSQL_SERVER=db # for docker
MYSQL_ROOT_PASSWORD=MYSQL_ROOT_PASSWORD
MYSQL_DATABASE=OPTINIST_FOR_SERVER
MYSQL_USER=MY_NAME
MYSQL_PASSWORD=MYSQL_PASSWORD
PMA_ARBITRARY=1
PMA_HOST=db # for docker
PMA_USER=root
PMA_PASSWORD=docker
# EXPDB_DIR is the directory path of the unpacked data.
# PUBLIC_EXPDB_DIR is an optional directory path (where external publication data is generated).
# These paths differ depending on whether you are running locally or via Docker.
EXPDB_DIR="/app/experiments_datasets" # for when docker
PUBLIC_EXPDB_DIR="/app/experiments_public" # for when docker
EXPDB_DIR="/MY_DATA_PATH/experiments_datasets" # for local development
PUBLIC_EXPDB_DIR="/MY_DATA_PATH/experiments_public" # for local development
GRAPH_HOST="http://localhost:8000/datasets" # URL for publishing data externally
SELFHOST_GRAPH=True
To set up frontend/.env.development, make a copy of frontend/.env.example. frontend/.env.development should contain these variables:
# (default: location.hostname)
REACT_APP_SERVER_HOST=localhost
# (default: location.port)
REACT_APP_SERVER_PORT=8000
# (default: location.protocol)
REACT_APP_SERVER_PROTO=http
# configs
REACT_APP_EXPDB_METADATA_EDITABLE=true
- You can set up your own MySQL server, or you can use the DB container defined in docker-compose(.dev).yml. To use the DB container, follow these instructions:
# Start the database container
docker compose -f docker-compose.dev.yml up -d
Wait a few seconds for the database to initialize, then verify it is running:
docker ps # the database container should be listed; you'll use its name in the next step
# Verify database connection (find DB_CONTAINER_NAME using 'docker ps')
docker exec -it [DB_CONTAINER_NAME] mysql -u studio_db_user -p studio
# Example with the dev compose defaults:
docker exec -it optinist-for-server-db-1 mysql -u docker -pdocker docker
INSERT INTO organization (name) VALUES ('Your Organization');
INSERT INTO roles (id, role) VALUES (1, 'admin'), (20, 'operator');
INSERT INTO roles (id, role) VALUES (10, 'data manager'), (30, 'guest operator'); -- Additions in optinist-for-server
INSERT INTO users (uid, organization_id, name, email, active) VALUES ('YOUR_FIREBASE_UID', 1, 'YOUR_NAME', '[email protected]', true);
INSERT INTO user_roles (user_id, role_id) VALUES (1, 1);
After performing the above setup procedures, perform the following operation checks:
docker compose -f docker-compose.dev.yml up
# or
# docker compose -f docker-compose.yml up
Access http://localhost:3000 and confirm that the TOP page is displayed.
Confirm that you can log in with the registered Firebase account from the SignIn page.
At this point, the number of data items on the TOP page is 0 because no Database records have been created.
- First, place the command file that specifies the data to be processed by the batch analysis.
- Create a file whose name ends in the extension .proc
- Example: /{YOUR_DATA_DEPLOYMENT_PATH}/M000024/M000024_ori001.proc
- Add this line to the file:
command: regist
- Be careful to include the space between "command:" and "regist"
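The steps above can be done from the shell; printf writes the single expected line predictably. The path below is the example one, with /tmp standing in for {YOUR_DATA_DEPLOYMENT_PATH}:

```shell
# Create the command file for batch registration.
proc=/tmp/M000024/M000024_ori001.proc   # stand-in for the real deployment path
mkdir -p "$(dirname "$proc")"
printf 'command: regist\n' > "$proc"
cat "$proc"                             # → command: regist
```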
Then start the containers. The -d flag runs them in the background and lets you keep using commands from the same window:
docker compose -f docker-compose.dev.yml up -d
In another terminal window login to the docker container:
docker exec -it optinist-for-server-studio-dev-be-1 bash # Enter container
conda activate expdb_batch # Activate conda environment
alembic upgrade head # Run migrations
echo "command: regist" > /app/experiments_datasets/M000024/M000024_ori017.proc # reset .proc file
python run_expdb_batch.py -o 1 # Run the batch process
Note: On Docker on Mac, there are cases where generate_plots() stops the process. This is to be resolved in the following PR (as of 12/2024): https://github.com/arayabrain/barebone-studio/pull/531
- If the processing is successful, the data to be processed will be listed on the Database screen of the management console.
- [Operation manual](https://drive.google.com/drive/folders/1mjMRVf9tROlCX90bhIrTmxKe3Mg7OGrn)
- [Batch Analysis Processing Specification Document](https://drive.google.com/drive/folders/1zJkTyh3L4maJ8Xysa08LB6j_R859mJxW)
- [Data Storage Folder Structure_vx.x.pptx](https://drive.google.com/drive/folders/1mjMRVf9tROlCX90bhIrTmxKe3Mg7OGrn)
- [About the Analysis Process_vx.pdf](https://drive.google.com/drive/folders/1mjMRVf9tROlCX90bhIrTmxKe3Mg7OGrn)