Waiting for an LLM's response is more bearable when it is streamed: the user sees text appear immediately instead of staring at a blank screen. A visibly in-progress response also discourages users from typing anything else while the request is running.
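As a minimal sketch of what this looks like on the backend, here is streaming with the OpenAI Python client (the model name and prompt are illustrative; any provider with a streaming API follows the same pattern):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you target
    messages=[{"role": "user", "content": "Explain streaming in one paragraph."}],
    stream=True,  # ask the API to send tokens as they are generated
)

for chunk in stream:
    # Each chunk carries a small delta of the response; flush it to the
    # user right away instead of waiting for the full completion.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```

In a UI, the same loop is a natural place to hook the input state: disable the text box when the stream opens and re-enable it once the stream closes, reinforcing that a request is still in progress.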