Spartify is a crowd-voting app that allows users to create and join a party to vote for the next song in a queue.
| Andy Zeff | Kathy Daniels | Rafael Ning | Rohan Khajuria |
| --- | --- | --- | --- |
| @apzsfo | @danielskathyd | @rning | @Rohanator9000 |
We will use k6 primarily and tsung secondarily, and we'll ignore this project's `server/src/loadtest/runner.ts` and `server/src/loadtest/userScript.ts`.
For any scaling optimization changes, we need to load test with k6 and tsung before and after each change to generate data for our final paper and presentation. The data will be written to the console by the load testing tool and to Spartify's Honeycomb.
The load tests will presumably differ based on the local machine it's being run on, so we'll probably have to just keep track of the last commit for each change and run all the load testing once we're done making the scaling optimization changes. Put the load test data results in this Google Doc.
First, install k6. E.g. on macOS, run `brew install k6`.
Then, be sure to run `docker-compose down` and `docker-compose up` before and between each test below, since the tests for the API make mutations that can affect performance. For example, the party creation and joining test has a phase that creates many parties before a phase that joins those parties, so failing to clear the database before running the test may cause errors and will affect the performance results.
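The reset-then-run sequence can be sketched as a small helper (a hypothetical illustration, not code from the repo) that returns the commands for one clean run:

```typescript
// Hypothetical helper: the ordered shell commands for one clean load test run.
// Running "docker-compose down" then "docker-compose up" resets MySQL/Redis
// state, so parties and votes created by earlier runs don't skew the results.
function cleanRunCommands(testCommand: string): string[] {
  return [
    "docker-compose down", // tear down containers, discarding mutated state
    "docker-compose up",   // start fresh MySQL/Redis instances
    testCommand,           // e.g. "npm run lt:k6:party"
  ];
}
```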
Run the load tests against http://localhost:3000:
- `npm run lt:k6:home`, which runs `k6 run server/src/loadtest/k6_script_home_page.js`
- `npm run lt:k6:party`, which runs `k6 run server/src/loadtest/k6_script_party_create_join.js`
- `npm run lt:k6:vote`, which runs `k6 run server/src/loadtest/k6_script_vote_next.js`
First, install tsung. E.g. on macOS, run `brew install tsung` and then `sudo cpan Template` (why: 1, 2).
Then, as above, be sure to run `docker-compose down` and `docker-compose up` before and between each test.
Run the load test against http://localhost:3000:
- `npm run lt:tsung:gen`, which runs `tsung -f server/src/loadtest/tsung_gen.xml -k start`
View metrics at `localhost:8091`.
If that isn't working, generate a static report (report.html, graph.html, images directory):
cd /Users/<username>/.tsung/log/<loadtest_time>
/usr/local/lib/tsung/bin/tsung_stats.pl
For the Quickstart, you will need:
Later, to deploy your app you will also need:
- terraform ^0.12
- AWS CLI version 2
- `jq`
- `zip` (Windows only)
If you've already installed Node but aren't running Node.js 12.x (check with `node --version`), use nvm to install an appropriate version. Follow their instructions.
nvm install 12
nvm alias default 12
First, install the Quickstart dependencies.
Choose a short slug for your project, using only letters and digits. The slug will be used to name all of the AWS resources created for your project, and it determines your project's public URL: once you finish the Quickstart, your app will be available at https://yourteamslug.cloudcity.computer.
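A slug check might look like this (a hypothetical helper, not part of the init script):

```typescript
// Hypothetical validity check for a project slug: letters and digits only.
// The docs also ask for a short slug, since it names AWS resources and forms
// the public URL; length enforcement is left to your judgment here.
function isValidSlug(slug: string): boolean {
  return /^[a-z0-9]+$/i.test(slug);
}
```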
Clone and initialize the starter project. You'll need to have `node` and `npm` installed first. See dependencies.
source <(curl -s https://cs188.cloudcity.computer/app/script/init-project.sh)
This will create a directory with the name of your project slug and install the project dependencies.
If you run into an error sourcing the init script above, you may run the steps manually:
git clone https://github.com/rothfels/spartify.git <your project slug>
cd <your project slug>
rm -rf .git
<find/replace "spartify" with your project slug>
git init
npm install
Open the project directory in VS Code. Install the recommended extensions, then reload VS Code.
Your appserver can use a MySQL database and a Redis instance. Start these on your local machine in the background:
docker-compose up -d
You must compile TypeScript before it is runnable in a browser. Start a "watch" mode process that compiles your TypeScript as soon as you change it.
npm run watch:web
Open the Run/Debug tab in VS Code and choose one of the `server.ts` run configurations (either works; one will auto-restart your server when you edit code), then hit play.
Open http://localhost:3000 to see your app.
Open http://localhost:3000/graphql to see your interactive GraphQL API explorer.
Open the Debug Console in VS Code to see console output.
Set breakpoints in the gutter to the left of your code. Note: these only work on code executing on the server, not in the browser.
The fastest way to develop React components is in an isolated environment using Storybook. The project ships with an example storybook; see `Login.stories.tsx` or `Survey.stories.tsx`.
npm run storybook:web
Then, go to http://localhost:6006 to see your stories. Any changes you make to code will automatically refresh the browser.
If you are rendering React components that require a backend (e.g. because the component makes GraphQL API requests), then you should also run `server.ts` in VS Code before running Storybook. It is recommended to use the `server.ts (no restart)` run configuration.
- `web`: runs code in the browser (React application). In production, this code is "bundled" into a single `bundle.js` file and served by the backend. It is sourced by the HTML served at `/app`.
- `server`: runs code on Node.js (Express server, GraphQL API). In production, this code may run in ECS or on AWS Lambda, depending on how you deploy it. Serves:
  - `/app`: React (client & server rendered) application, static assets
  - `/graphql`: GraphQL API
  - `/graphqlsubscription`: GraphQL API for subscriptions (over websocket)
  - `/api/:function`: non-graphql/REST APIs (e.g. RPCs)
- `common`: code that may be imported by either the `web` or `server` projects. Must be runnable in both server and browser contexts.
- `public`: static assets bundled with the server and served at `/app`. Destination directory for build assets (e.g. `bundle.js`).
- `stories`: React components for Storybook
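The split above can be sketched as a small table (a hypothetical illustration, not code from the repo) of which runtime context each top-level project targets:

```typescript
// Sketch of where code from each top-level project may run, per the layout above.
type RuntimeContext = "browser" | "server" | "both";

const runsIn: Record<string, RuntimeContext> = {
  web: "browser",   // React application, bundled into bundle.js
  server: "server", // Express server, GraphQL API
  common: "both",   // imported by web and server, so must run in both
};

// Code that may reach the browser must avoid server-only APIs (e.g. fs);
// code in `common` must also avoid browser-only APIs (e.g. window).
function mayRunInBrowser(project: string): boolean {
  return runsIn[project] === "browser" || runsIn[project] === "both";
}
```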
The project ships with an ORM and a migration manager. You may use the ORM or write raw SQL statements or both.
Define your ORM models in `server/src/entities`. Tables will automatically get created. See the TypeORM docs for details.
Define migrations in `server/src/db/migrations`. They will automatically get run before your server starts. The starter project ships with an initial migration. Add new migrations by checking in additional migration files using the naming convention `VX.X__Description.sql`. The server only runs migrations which haven't already been run successfully. The server will fail before accepting connections if any migrations fail. You must manually correct the failed migrations to get the server into a healthy state.
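A sketch of how the naming convention might be parsed and ordered (a hypothetical helper, not the migration manager's actual code):

```typescript
// Hypothetical sketch of the VX.X__Description.sql convention: pull a
// (major, minor) version out of each filename so migrations run in order.
function migrationVersion(filename: string): [number, number] | null {
  const m = /^V(\d+)\.(\d+)__.+\.sql$/.exec(filename);
  return m ? [Number(m[1]), Number(m[2])] : null;
}

// Sort migration files by version before running them; ignore non-migrations.
function sortMigrations(files: string[]): string[] {
  return [...files]
    .filter((f) => migrationVersion(f) !== null)
    .sort((a, b) => {
      const [aMajor, aMinor] = migrationVersion(a)!;
      const [bMajor, bMinor] = migrationVersion(b)!;
      return aMajor - bMajor || aMinor - bMinor;
    });
}
```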
Create a free Honeycomb account. Save your API key.
The `terraform` commands to set up your AWS infrastructure require credentials. Your `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` will be provided during the first lab session. Set these values in your environment (e.g. in `.bashrc` or `.zshrc`) or follow the instructions for managing AWS credentials.
export AWS_ACCESS_KEY_ID=<insert your key>
export AWS_SECRET_ACCESS_KEY=<insert your key>
export AWS_REGION=us-west-2
Your terraform configuration is in `terraform/main.tf`. The default configuration includes:
- a MySQL database
- a Redis instance
- an appserver to run your Node.js server code
- an API Gateway routing traffic through a load balancer to your appserver
Open `terraform/main.tf` and set your Honeycomb API key (look for `<insert key here>`), then deploy your terraform resources:
cd terraform
terraform init
terraform apply
The `terraform apply` step will run code to create all the resources required to run your application. It will generate a `terraform.tfstate` file which it uses to track and manage the resources it has created. Don't lose this file, and make sure to check it into your repo with git.
When you're done with a resource, simply delete or comment out the code and re-run `terraform apply`. :)
After provisioning your `terraform` resources, run:
npm run deploy:init
This will package your server code and deploy it to your appserver. After a minute, go to https://yourteamslug.cloudcity.computer to see your app.
Initially, your server will be deployed as a single ECS task running with minimally provisioned CPU and memory.
Over the quarter, you will be able to:
- horizontally scale appserver tasks (set desired run count)
- vertically scale appserver tasks (set desired CPU/memory)
- decompose services
- via additional appservers running on ECS
- via AWS lambda function(s)
You may add websockets to your app to allow publishing data from your server to clients. Add this to your `main.tf`:
module "websocket_api" {
  source         = "./modules/websocket_api"
  appserver_host = module.webserver.host
}
Provision it with `terraform apply`. Then, tell your appserver how to communicate with your websocket API by setting the `ws_url` variable on your appserver:
# uncomment to add graphql subscriptions, must make a websocket api first
# ws_url = module.websocket_api.url
Unfortunately, `terraform` can't currently trigger deployments of websocket APIs. You must manually log in to the AWS console to trigger a deployment of your websocket API.
You may use lambda to decompose services from your appserver. You will also need to provision a lambda to run distributed load tests. Add this to your `main.tf`:
module "lambda" {
  source        = "./modules/lambda"
  honeycomb_key = <insert your key>
  mysql_host    = module.mysql.host
  redis_host    = module.redis.host
}
Then provision it with `terraform apply`. You should also modify your `deploy-local.sh` and uncomment the section which deploys code to your newly provisioned lambda.
The project includes a load test runner which you may run from the `loadtest.ts` launch configuration:
A load test is a sequence of `ArrivalPhase`s, each consisting of a period of time during which some number of users per second run a `UserScript`. A user script is a TypeScript function you write which simulates real user behavior.
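The arrival-phase model can be sketched as follows; the field and function names here are assumptions for illustration, not the repo's actual types:

```typescript
// Sketch of the arrival-phase model (field names are assumptions): each phase
// starts `usersPerSecond` new user scripts every second for `durationSeconds`.
interface ArrivalPhase {
  durationSeconds: number;
  usersPerSecond: number;
}

// Total number of user scripts started over the whole load test.
function totalUsers(phases: ArrivalPhase[]): number {
  return phases.reduce((sum, p) => sum + p.durationSeconds * p.usersPerSecond, 0);
}
```

For example, a 60 s warm-up at 5 users/s followed by a 30 s burst at 20 users/s starts 900 user scripts in total.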
The default script in `loadtest.ts` makes 3 GET requests to your appserver. Because your app is server rendered, your server will make GraphQL requests to itself to fetch the data necessary to render your app.
You may modify the script in `loadtest.ts` to make arbitrary GET or POST (e.g. GraphQL) requests to any endpoint of your server, using the `fetch` interface or `apolloClient`.
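A sketch of a user-script helper that builds a GraphQL POST for `fetch`; the helper and the mutation shown are illustrative assumptions, not Spartify's actual schema:

```typescript
// Hypothetical helper for a user script: build the fetch options for a
// GraphQL POST to the /graphql endpoint.
function graphqlRequest(query: string, variables: Record<string, unknown>) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  };
}

// A user script might then do (mutation name is illustrative):
//   await fetch("http://localhost:3000/graphql",
//     graphqlRequest(
//       "mutation ($name: String!) { createParty(name: $name) { id } }",
//       { name: "test-party" }));
```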
Your local computer can only put so much load on your server, because Node.js limits how many TCP connections you can make concurrently.
You may execute your user scripts locally or using a distributed executor (AWS lambda). By default, the load test is set up for local execution.
Your Honeycomb instrumentation will provide all the data visualizations you need. Login to Honeycomb to view your server metrics.