
tcurl benchmark mode: sending a request many times #71

Merged: 5 commits into master from benchmark, Oct 19, 2015

Conversation

ShanniLi (Contributor)

r: @Raynos @kriskowal
cc @breerly

ref: #68

```diff
     logger: DebugLogtron('tcurl')
 });

-var subChan = client.makeSubChannel({
+self.subChannel = self.client.makeSubChannel({
```

Please add subChannel as a nulled default property in the constructor.
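A minimal sketch of that convention, assuming a hypothetical TCurl constructor (the hunk above does not show the constructor itself):

```js
function TCurl(options) {
    if (!(this instanceof TCurl)) {
        return new TCurl(options);
    }
    var self = this;
    // Declare every lazily-assigned property up front, nulled, so the
    // object's full shape is visible in the constructor.
    self.client = null;
    self.subChannel = null;
}
```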

@kriskowal (Contributor)

I hope that these comments clarify the intent of the delegate object. The delegate stands in place of a callback as a collector of multiple errors or results instead of a single error or response. Totally a great tool for collecting results for the benchmark. There should just be one instead of two.
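A rough sketch of the delegate shape being described here, with hypothetical method and property names (the real interface lives in the tcurl source):

```js
// A delegate collects many outcomes where a plain callback could only
// report a single (err, response) pair.
function BenchmarkDelegate() {
    this.errors = [];
    this.responses = [];
}

BenchmarkDelegate.prototype.handleError = function handleError(err) {
    this.errors.push(err);
};

BenchmarkDelegate.prototype.handleResponse = function handleResponse(res) {
    this.responses.push(res);
};

// Called once every request has resolved one way or the other.
BenchmarkDelegate.prototype.exit = function exit() {
    console.log('%d succeeded, %d failed',
        this.responses.length, this.errors.length);
};
```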

@kriskowal (Contributor)

That concludes my review. Looking forward to seeing this in master.

@blampe (Contributor) commented Oct 15, 2015

What does the user experience look like?

@@ -103,8 +104,32 @@ a tchannel service. It supports thrift, JSON, and raw request format.

    tcurl -p 127.0.0.1:8080 serviceName endpoint --timeout 1000

`--shardKey`
    Ringpop only. Send ringpop shardKey transport header.

`--rate value`

@blampe You can find the user experience documented here in the man page :)

@blampe (Contributor) commented Oct 16, 2015

I was using tcurl.py with a batch mode like this to benchmark Python initially, but it didn't give me the numbers I actually wanted. Knowing how long it takes to complete 1000 requests just tells you how long the longest request took, which can be anomalous.

I suggest something like --bench, which just fires off as many requests as possible (with some ceiling, and maybe ramping up initially) and then continuously displays how many responses were received in the last second. I think this is similar to @prashantv's benchmark for Go.
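Roughly what that loop could look like, using a stub sendRequest in place of the real tcurl request path:

```js
// Stub standing in for the real tcurl request path.
function sendRequest(callback) {
    setTimeout(callback, Math.random() * 50);
}

var MAX_IN_FLIGHT = 100; // the ceiling on concurrent requests
var inFlight = 0;
var completedThisSecond = 0;

// Keep the request window full: fire until we hit the ceiling,
// and refill as each response comes back.
function pump() {
    while (inFlight < MAX_IN_FLIGHT) {
        inFlight++;
        sendRequest(function onDone() {
            inFlight--;
            completedThisSecond++;
            pump();
        });
    }
}

// Continuously report how many responses arrived in the last second.
setInterval(function report() {
    console.log('%d responses/sec', completedThisSecond);
    completedThisSecond = 0;
}, 1000);

pump();
```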

@Raynos (Contributor) commented Oct 16, 2015

@blampe

Very good point. hyperbahn ( https://github.com/uber/hyperbahn/blob/master/test/lib/time-series-cluster.js#L588-L620 ) has a results object including latencies and error rate.

I think the expected use case is to send n req/s and see what the latency is. You can then manually increase n until the p99 is too big.

That's why we have a --rate for sending N req/s and we have --time or --requests to say how long to benchmark it.

It should definitely stream results, telling us every M seconds what the success rate and latencies are.
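A sketch of that shape, again with a stub sendRequest and hard-coded stand-ins for --rate and the reporting interval:

```js
var RATE = 100;       // requests per second (what --rate would control)
var REPORT_EVERY = 5; // report stats every M seconds

var latencies = [];
var okCount = 0;
var errCount = 0;

// Stub standing in for the real tcurl request path.
function sendRequest(callback) {
    setTimeout(function done() {
        callback(Math.random() < 0.99 ? null : new Error('failed'));
    }, Math.random() * 50);
}

function sendOne() {
    var start = Date.now();
    sendRequest(function onDone(err) {
        latencies.push(Date.now() - start);
        if (err) {
            errCount++;
        } else {
            okCount++;
        }
    });
}

// Send RATE requests every second, regardless of how fast they finish.
setInterval(function tick() {
    for (var i = 0; i < RATE; i++) {
        sendOne();
    }
}, 1000);

// Every REPORT_EVERY seconds, stream the success rate and p99 latency
// for the window, then reset the counters.
setInterval(function report() {
    latencies.sort(function (a, b) { return a - b; });
    var total = okCount + errCount;
    var p99 = latencies[Math.floor(latencies.length * 0.99)] || 0;
    console.log('success %d/%d, p99 %dms', okCount, total, p99);
    latencies = [];
    okCount = 0;
    errCount = 0;
}, REPORT_EVERY * 1000);
```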

@ShanniLi (Contributor, Author)

I have talked to @kriskowal, and we agreed to use a separate delegate, since the use case for benchmarking is quite different.

ShanniLi added a commit referencing this pull request on Oct 19, 2015: "tcurl benchmark mode: sending a request many times"
@ShanniLi merged commit 22f7260 into master on Oct 19, 2015.
@ShanniLi deleted the benchmark branch on October 19, 2015 at 23:03.