
Quantile regression after parameter estimation  #1

@tblazina

Description


Hi @sachinruk, I'm very happy to have come across your quantile regression notebook. I'm a data scientist in Switzerland working on fresh-food demand forecasting, and I've recently been trying to implement the model described in https://arxiv.org/pdf/1704.04110.pdf. There, an RNN estimates the mean and dispersion of a negative binomial distribution, and those estimates are then used to draw samples from the resulting distribution in order to compute what they refer to as the p-quantile loss. Their formulation of the loss function is different from yours, though.
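
In case it helps to pin down what I mean, here is a minimal sketch of how I currently read that evaluation step (draw samples from the predicted negative binomial, then take an empirical quantile as the forecast). The mean/dispersion parameterisation and the function name are my own assumptions, not taken from the paper's code or your notebook:

```python
import numpy as np

def sample_quantile_forecast(mu, alpha, q=0.9, n_samples=1000, seed=0):
    """Draw samples from a negative binomial with mean `mu` and dispersion
    `alpha` (this parameterisation is my assumption), then return the
    empirical q-quantile of the samples as the point forecast."""
    rng = np.random.default_rng(seed)
    # variance = mu + alpha * mu**2  =>  n = 1 / alpha, p = n / (n + mu)
    n = 1.0 / alpha
    p = n / (n + mu)
    samples = rng.negative_binomial(n, p, size=n_samples)
    return np.quantile(samples, q)

# e.g. a predicted mean of 20 units with dispersion 0.1:
print(sample_quantile_forecast(mu=20.0, alpha=0.1, q=0.9))
```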

I have a somewhat general question, since I have no experience with quantile regression. Intuitively, if you train an RNN on, say, the 90% quantile loss, its predictions should be inherently more conservative (i.e. higher) than if you trained on the median (50% quantile) or the 25% quantile. So in my world of fresh-food forecasting, if we trained the model on the 90% quantile loss, we would say our risk of a stock-out is 10% (i.e. only a 10% chance of demand being higher than the forecast). Does my intuition hold up here?
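
To make the asymmetry I'm describing concrete, here is a rough sketch of the pinball (quantile) loss as I understand it; the function name and the numbers are just illustrative, not from your notebook:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: under-prediction is weighted by q,
    over-prediction by (1 - q)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# At q = 0.9, under-forecasting costs 9x more than over-forecasting,
# so the minimiser is pushed towards higher, more conservative predictions.
y = np.array([10.0, 12.0, 8.0, 15.0])
print(pinball_loss(y, y - 2, q=0.9))  # under-forecast by 2: loss = 1.8
print(pinball_loss(y, y + 2, q=0.9))  # over-forecast by 2:  loss = 0.2
```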

Thanks again for the great notebook!
