I haven't participated in any projects here yet, so I'm unsure whether this is the appropriate place to raise this point. If it's not, feel free to delete it, and I apologize.
I was heading to bed, stumbled across this repo, and had to get this thought out before I forgot it. I spent an hour and a half on Friday explaining this exact phenomenon to a workmate. I'm interested in LMs, though obviously quite skeptical of claims about the state of progress, and I've been wrestling with the ideas of alignment in general for some 22 years (since I was in secondary school), as you'd be silly not to.
I can continue tomorrow if you like, but my limited / anecdotal data absolutely matches your findings. I've tried prompting the models to be mindful before long sessions, and during them when I notice drift: asking them to course-correct, refocus, summarize and reference the initial goals, open up the working space, present ideas more cogently, and track progress against stated goals rather than expanding the scope endlessly, only to end up not finishing a single initial goal after 16 hours of grinding out a VM / Docker deploy.
Obviously the models are weighted to be helpful but firm when pushed toward uncomfortable topics, but it seems to me that for wide-scale functional interactions outside narrow scientific uses, the models may need to slow down the race a little and take on more context from the dialogue.
Good work.