I would like to use the coref processor on dialogues where the speaker of each sentence is known. This should help eliminate spurious I/you coreference chains that are obviously wrong once we know who is speaking.
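For concreteness, here is a minimal sketch of the kind of input I have. The data structure is just my own representation of a dialogue, not any existing API; it only illustrates why the speaker labels matter for first/second-person pronouns.

```python
# Dialogue turns with known speakers (my own ad-hoc representation,
# not a real coref-processor input format).
dialogue = [
    {"speaker": "Alice", "text": "I left my keys at your place."},
    {"speaker": "Bob", "text": "I will bring them to you tomorrow."},
]

# With speakers known, "I" in turn 0 refers to Alice while "I" in turn 1
# refers to Bob, so a coref chain linking the two "I" mentions (or linking
# Alice's "I" with Bob's "you") would be spurious.
first_person_speakers = [turn["speaker"] for turn in dialogue
                         if turn["text"].startswith("I ")]
print(first_person_speakers)
```

Ideally the processor would accept something like these per-sentence speaker labels alongside the text.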
I can see in the code that the model is aware of speakers during training, but I can't figure out how to supply speaker information in the input at inference time. Is there a way to do this?