Description
Currently, when a RedisResult[] is returned from TS.RANGE or a similar query, we enumerate the entire result and copy it into a new IReadOnlyList<TimeSeriesTuple>. That allocation feels unnecessary; iteration should be left up to the user.
private static TimeStamp ParseTimeStamp(RedisResult result)
{
    if (result.Type == ResultType.None) return default;
    return new TimeStamp((long)result);
}

private static TimeSeriesTuple ParseTimeSeriesTuple(RedisResult result)
{
    RedisResult[] redisResults = (RedisResult[])result;
    if (redisResults.Length == 0) return null;
    return new TimeSeriesTuple(ParseTimeStamp(redisResults[0]), (double)redisResults[1]);
}

private static IReadOnlyList<TimeSeriesTuple> ParseTimeSeriesTupleArray(RedisResult result)
{
    RedisResult[] redisResults = (RedisResult[])result;
    var list = new List<TimeSeriesTuple>(redisResults.Length);
    if (redisResults.Length == 0) return list;
    // Every sample is parsed and materialized up front, even if the caller never reads it.
    Array.ForEach(redisResults, tuple => list.Add(ParseTimeSeriesTuple(tuple)));
    return list;
}
I propose wrapping the RedisResult in a TsTimeSeriesCollection object, something like:
public class TsTimeSeriesCollection
{
    private readonly RedisResult[] _redisResults;

    public TsTimeSeriesCollection(RedisResult redisResult)
    {
        _redisResults = (RedisResult[])redisResult;
    }

    // Each sample's RedisResult is only cast when it's accessed.
    public (long TimeStamp, double Value) this[int index] =>
        ((long)((RedisResult[])_redisResults[index])[0], (double)((RedisResult[])_redisResults[index])[1]);

    public int Count => _redisResults.Length;
}
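
For illustration, here's roughly how it could be consumed against a raw TS.RANGE reply (db, the key name, and the range arguments below are placeholders, not part of the proposal):

// Hypothetical usage; db is a StackExchange.Redis IDatabase and "sensor:temp" is a placeholder key.
RedisResult rangeReply = db.Execute("TS.RANGE", "sensor:temp", "-", "+");
var samples = new TsTimeSeriesCollection(rangeReply);

for (int i = 0; i < samples.Count; i++)
{
    // The cast from RedisResult happens here, per accessed sample.
    var (timeStamp, value) = samples[i];
    Console.WriteLine($"{timeStamp}: {value}");
}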
The wrapper only casts a sample's RedisResult when it is accessed, which should help performance when large sample sets are returned. I haven't extended this to implement IEnumerable<(long, double)> or started optimizing yet (see the sketch below), but the general idea is to wrap the RedisResult[] so that time series results allocate less and are easier to work with.
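
For reference, a minimal sketch of what the IEnumerable<(long, double)> extension might look like, keeping the same lazy-cast approach; the enumerator shape here is just my assumption, not a finished design:

using System;
using System.Collections;
using System.Collections.Generic;
using StackExchange.Redis;

public class TsTimeSeriesCollection : IEnumerable<(long TimeStamp, double Value)>
{
    private readonly RedisResult[] _redisResults;

    public TsTimeSeriesCollection(RedisResult redisResult)
    {
        _redisResults = (RedisResult[])redisResult;
    }

    public int Count => _redisResults.Length;

    public (long TimeStamp, double Value) this[int index]
    {
        get
        {
            var sample = (RedisResult[])_redisResults[index];
            return ((long)sample[0], (double)sample[1]);
        }
    }

    public IEnumerator<(long TimeStamp, double Value)> GetEnumerator()
    {
        // Samples are cast lazily as the enumerator advances; nothing is copied up front.
        for (int i = 0; i < _redisResults.Length; i++)
        {
            yield return this[i];
        }
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

With that in place, a foreach over the collection wouldn't need to allocate a List<TimeSeriesTuple> at all.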
Since this is a breaking change, it might make sense to do it prior to the next release. Feedback welcome!