Add custom prompt functionality #15

Open
wants to merge 1 commit into develop

Conversation

Aatif-Dawawala
Collaborator

Description

These changes implement passing custom prompts to models as shown in the README.
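
For illustration, a minimal sketch of what such a request might look like, assuming a multipart payload with a `prompt` field and a placeholder endpoint URL (both the field name and the path are assumptions, not taken from this repository):

```python
import requests

# Hypothetical example: request a caption with a custom prompt.
# Endpoint path and field names are placeholders for illustration only.
payload = {
    "model": "qwen2.5vl",
    "prompt": "Describe the main subject of this photo in one sentence.",
}
with open("photo.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:5000/caption",  # placeholder endpoint
        data=payload,
        files={"image": f},
        timeout=60,
    )
response.raise_for_status()
print(response.json())
```

If no prompt is sent with the request, the service falls back to its built-in default (see the discussion below).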

Related Issues

Vision Service: Improve caption prompt and allow submitting a custom prompt with the request data

Acceptance Criteria

  • New features or enhancements are fully implemented and do not break existing functionality, so that they can be released at any time without requiring additional work
  • [N/A] Automated unit and/or acceptance tests are included to ensure that changes work as expected and to reduce repetitive manual work
  • [N/A] Documentation has been / will be updated, especially as it relates to new configuration options or potentially disruptive changes

Contribution Agreement

When contributing code or other intellectual property for the first time, we ask that you read and agree to the following so that we can safely use it in all of our projects without risking unexpected legal disputes or having to repeatedly ask for permission:

  • I grant PhotoPrism a non-exclusive, perpetual, irrevocable, worldwide, fully-paid, royalty-free, and transferable right to use, copy, modify, merge, publish, distribute, sublicense and/or sell my Contributions without any restrictions. This includes a non-exclusive, perpetual, irrevocable, worldwide, fully-paid, royalty-free, and transferable patent license to use, offer for sale, sell, import, export, and otherwise transfer or make available my Contributions. PhotoPrism may, in its sole discretion, apply any license to my Contributions that it deems appropriate for the particular purpose, including other open source, copyleft, and proprietary licenses. PhotoPrism may assign this Agreement and all of its rights, obligations and licenses hereunder.
  • I confirm that I am the sole author of this Contribution and am legally authorized to grant the above licenses and waivers with respect to this Contribution and any other of my Contributions. If your Contributions were created in the course of your employment with your former or current employer, you represent that this employer has authorized you to make your Contributions on its behalf or has waived all rights, claims, or interests in your Contributions.

PhotoPrism UG ("PhotoPrism", "we" or "us") hereby confirms to you that, to the fullest extent permitted by applicable law, this Contribution is provided "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, ANY IMPLIED WARRANTIES OR CONDITIONS OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. You have no obligation to provide support, maintenance, or other services for your Contribution.

This agreement is solely for our protection and that of our users. It does not grant us exclusive rights to your code.

Thank you very much! 🌈💎✨

@derneuere
Collaborator

Great to have you back on this project! 🚀

```python
        return self._generate_with_prompt(model_name, model_version, [image], caption_prompt)

    def generate_caption(self, model_name: str, model_version: str, image: Image, prompt) -> tuple[str, str]:
        if prompt == 'default':
```
Member

What if prompt == '' (empty)?

Member

Also, please check / confirm whether the following environment variables can still be used to set a default prompt for the corresponding endpoints:

  • OLLAMA_NSFW_PROMPT
  • OLLAMA_LABELS_PROMPT
  • OLLAMA_CAPTION_PROMPT

If they are set to an empty string or "default", the built-in default should be used. This default must never be "default" or empty.
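
A minimal sketch of how that environment-based fallback could be resolved, assuming a module-level helper; the built-in default text below is a placeholder, not the prompt actually shipped with the service:

```python
import os

# Placeholder text; the real built-in default is defined by the vision service.
DEFAULT_CAPTION_PROMPT = "Describe this image in one concise sentence."

def resolve_default_prompt(env_var: str, built_in: str) -> str:
    """Read a default prompt from the environment.

    An unset variable, an empty string, or the literal value "default"
    all resolve to the built-in prompt, which must never be empty itself.
    """
    value = os.getenv(env_var, "").strip()
    if not value or value.lower() == "default":
        return built_in
    return value

caption_prompt = resolve_default_prompt("OLLAMA_CAPTION_PROMPT", DEFAULT_CAPTION_PROMPT)
```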

Collaborator Author

> What if prompt == '' (empty)?

If the user doesn't supply a prompt or the prompt is an empty string, the default prompt will be used.
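
A rough sketch of that fallback inside generate_caption, reusing the names from the reviewed snippet above; the surrounding logic and the default-prompt attribute are assumptions, not code copied from this PR:

```python
def generate_caption(self, model_name: str, model_version: str, image: Image, prompt) -> tuple[str, str]:
    # Fall back to the built-in caption prompt when no usable prompt is given:
    # covers None, an empty or whitespace-only string, and the literal 'default'.
    if not prompt or not prompt.strip() or prompt == "default":
        prompt = self.caption_prompt  # assumed attribute holding the built-in default
    return self._generate_with_prompt(model_name, model_version, [image], prompt)
```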

@lastzero
Member

Note that I changed the caption default prompt to work well with the qwen2.5vl model.

@lastzero
Member

I manually applied your changes for the caption prompt so that we can start testing it while you finalize the pull request :)

@Aatif-Dawawala self-assigned this on Jul 16, 2025