
v0.14.0

@PawelPeczek-Roboflow PawelPeczek-Roboflow released this 12 Jul 14:59
· 5111 commits to main since this release
f7698dd

🚀 Added

inference is ready for Florence-2 🤩

Thanks to @probicheaux, the inference package is ready for Florence-2. It is a Large Multimodal Model capable of processing both image and text input and handling a wide range of generic vision and language-vision tasks.

We are excited to add it to the collection of models offered by inference. Due to the complexity of the build, the model is shipped only within the
docker image 🐋 - everything is within our official inference server build for GPU 🤯. To fully utilise the new model you need to wait for support to be released in the Roboflow platform.

You should be able to spin up your container via inference-cli:

inference server start
❗ What is required to run the container and what has changed in the build?

We needed to bump the required CUDA version in the docker build for the GPU server from 11.7 to 11.8, which means you may no longer be able to run
the container on servers with older CUDA installations. We ran the server experimentally on a machine with CUDA 11.6 and it worked, but we cannot guarantee it will work on older builds.

🤔 How to run new model?
import requests

server_url = "http://127.0.0.1:9001"  # address of your inference server

payload = {
    "api_key": "<YOUR-ROBOFLOW-API-KEY>",
    "image": {
        "type": "url",
        "value": "https://media.roboflow.com/dog.jpeg",
    },
    "prompt": "<CAPTION>",
    "model_id": "<model-id-available-when-roboflow-platform-starts-the-support>"
}

response = requests.post(
    f"{server_url}/infer/lmm",
    json=payload,
)

print(response.json())
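The request above can be wrapped in a small helper. The sketch below uses only the standard library; the function names and the payload-building split are ours, not part of the inference API:

```python
import json
from urllib import request


def build_lmm_payload(api_key, image_url, prompt, model_id):
    # Mirrors the payload shape from the example above.
    return {
        "api_key": api_key,
        "image": {"type": "url", "value": image_url},
        "prompt": prompt,
        "model_id": model_id,
    }


def infer_lmm(server_url, payload):
    # POST the payload to the server's /infer/lmm route and parse the JSON reply.
    req = request.Request(
        f"{server_url}/infer/lmm",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Splitting payload construction from the HTTP call makes it easy to reuse the same helper for other prompts or models.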

New blocks in workflows 🥹


We have added the following blocks to the workflows ecosystem:

  • Property Definition, which lets you use a specific attribute of the data as an input for the next step or as an output
  • Detections Classes Replacement, which replaces the classes of bounding boxes in the scenario where you first run a general object-detection model, crop the image based on its predictions, and then apply a secondary classification model - the results of the secondary model replace the originally predicted classes
  • and a few others - explore our collection of blocks ✨
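Conceptually, Detections Classes Replacement does something like the following. This is a plain-Python illustration of the idea, not the actual block implementation, and the dictionary keys are ours:

```python
def replace_detection_classes(detections, crop_classifications):
    """For each detection, overwrite its class with the top class predicted
    by a secondary classifier run on the cropped region."""
    updated = []
    for detection, classification in zip(detections, crop_classifications):
        new_detection = dict(detection)  # keep the original untouched
        new_detection["class"] = classification["top"]
        new_detection["confidence"] = classification["confidence"]
        updated.append(new_detection)
    return updated
```

The bounding-box geometry stays the same; only the class labels (and their confidences) are swapped for the secondary model's output.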

The newly added blocks are still being refined - we may improve them over time - so stay tuned!

🌱 Changed

🔐 Mitigation for security vulnerabilities ❗ BREAKING 🚧

To mitigate two security vulnerabilities:

  • unsafe deserialisation of pickled inputs enabled by default for self-hosted inference
  • Server-side request forgery (SSRF)

we needed to make a couple of changes, one of which is breaking. From now on, the default value for the env variable ALLOW_NUMPY_INPUT is False.

Implications:

  • if you rely on pickled numpy images passed to the inference Python package or sent to the inference server - you need to explicitly set ALLOW_NUMPY_INPUT=true in your environment or start the server with this env variable (see how)
  • there are also other settings which you can optionally tune to run the inference server more safely - see our docs 📖
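Opting back in looks roughly like this. The export is the documented variable; the commented docker invocation is a sketch that assumes the official GPU image name and the server's standard port:

```shell
# Opt back in to pickled numpy inputs (disabled by default since v0.14.0):
export ALLOW_NUMPY_INPUT=true

# Or pass the variable to the server container on startup (sketch):
# docker run -e ALLOW_NUMPY_INPUT=True -p 9001:9001 roboflow/roboflow-inference-server-gpu

echo "ALLOW_NUMPY_INPUT=$ALLOW_NUMPY_INPUT"
```

Only set this if you actually need pickled numpy inputs - the new default exists to mitigate the unsafe-deserialisation vulnerability described above.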

🔨 Fixed

❗ Fixed bug in inference post-processing

Some models trained on the Roboflow platform experienced problems with predictions post-processing when padding was selected as
an option while creating the dataset. Thanks to @grzegorz-roboflow, this was fixed in #495

Other minor fixes

🏅 New Contributors

Full Changelog: v0.13.0...v0.14.0