[demo]: Inferring Multiple-Choice Questions #337

Open
@appledora

Description

Before you open an issue, please check if a similar issue already exists or has been closed before.

When you open an issue, please be sure to include the following:

  • A descriptive title: [xxx] XXXX
  • A detailed description
  • Assign an issue type tag (label):
    • dataset (mimic-it download, usage, etc.),
    • demo (online demo),
    • doc (readme, wiki, paper, video, etc.),
    • evaluation (evaluation results, performance of Otter, etc.),
    • model (model configuration, components, etc.),
    • train (training configuration, process, code, etc.)

Thank you for your contributions!

Hello, I am trying to use the in-context model (luodian/OTTER-9B-LA-InContext) to generate an answer to a multiple-choice question. My primary instruction looks something like this:

prompt = "<image>User: Can you pick one of the following options that best describes the image? Choose ONLY from the given two options. <options>1: cat 2: dog GPT:<answer> 1: cat<|endofchunk|><image>User: Can you pick one of the following options that best describes the image? Choose ONLY from the given two options.  <options>1: kitchen table 2: bathroom sink GPT:<answer> 2: bathroom sink<|endofchunk|><image>User: Can you pick one of the following options that best describes the image? Choose ONLY from the given two options. <options>1: chicken_wings 2: salad GPT:<answer> " 

However, judging by the outputs, I suspect I am not structuring this correctly or am not using the right model for this task. I am looking for suggestions to improve my instruction, and for advice on whether I should try different weights.
