
Commit 5ff2948

maryamtahhan authored and benoitf committed
chore: fix trailing whitespace + EOL issues
Signed-off-by: Maryam Tahhan <[email protected]>
1 parent 0087337 commit 5ff2948

12 files changed: +30 -41 lines

Diff for: .gitattributes

+1 -1

@@ -1 +1 @@
-* text=auto eol=lf 
+* text=auto eol=lf
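The `* text=auto eol=lf` rule tells Git to normalize line endings to LF when files are checked in and out. As a rough sketch (the files below are hypothetical, not part of this commit), the CRLF endings that this rule would normalize can be spotted by grepping for a carriage return:

```shell
# Hypothetical files, for illustration only: one LF file, one CRLF file.
printf 'unix line\n'  > lf.txt
printf 'dos line\r\n' > crlf.txt

# Report which file carries the CRLF endings that `* text=auto eol=lf`
# would normalize to plain LF on checkin.
cr=$(printf '\r')
for f in lf.txt crlf.txt; do
  if grep -q "$cr" "$f"; then
    echo "$f: CRLF"
  else
    echo "$f: LF"
  fi
done
# output:
#   lf.txt: LF
#   crlf.txt: CRLF
```

After adding such a rule to an existing repository, files already committed with CRLF endings are not rewritten until they are re-staged.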

Diff for: .github/PULL_REQUEST_TEMPLATE.md

+3 -3

@@ -2,14 +2,14 @@
 
 ### Screenshot / video of UI
 
-<!-- If this PR is changing UI, please include 
+<!-- If this PR is changing UI, please include
 screenshots or screencasts showing the difference -->
 
 ### What issues does this PR fix or reference?
 
-<!-- Include any related issues from Podman Desktop 
+<!-- Include any related issues from Podman Desktop
 repository (or from another issue tracker). -->
 
 ### How to test this PR?
 
-<!-- Please explain steps to reproduce --> 
+<!-- Please explain steps to reproduce -->

Diff for: .github/workflows/release.yaml

-1

@@ -161,4 +161,3 @@ jobs:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         with:
           id: ${{ needs.tag.outputs.releaseId}}
-

Diff for: .npmrc

-1

@@ -1,2 +1 @@
 node-linker=hoisted
-

Diff for: MIGRATION.md

+5 -5

@@ -5,7 +5,7 @@
 Before **Podman AI Lab** `v1.2.0` the [user-catalog](./PACKAGING-GUIDE.md#applicationcatalog) was not versioned.
 Starting from `v1.2.0` the user-catalog require to have a `version` property.
 
-> [!NOTE] 
+> [!NOTE]
 > The `user-catalog.json` file can be found in `~/.local/share/containers/podman-desktop/extensions-storage/redhat.ai-lab`.
 
 The list of catalog versions can be found in [packages/backend/src/utils/catalogUtils.ts](https://github.com/containers/podman-desktop-extension-ai-lab/blob/main/packages/backend/src/utils/catalogUtils.ts)

@@ -14,28 +14,28 @@ The catalog has its own version number, as we may not require to update it with
 
 ## `None` to Catalog `1.0`
 
-`None` represents any catalog version prior to the first versioning. 
+`None` represents any catalog version prior to the first versioning.
 
 Version `1.0` of the catalog adds an important property to models `backend`, defining the type of framework required by the model to run (E.g. LLamaCPP, WhisperCPP).
 
 ### 🛠️ How to migrate
 
 You can either delete any existing `user-catalog` by deleting the `~/.local/share/containers/podman-desktop/extensions-storage/redhat.ai-lab/user-catalog.json`.
 
-> [!WARNING] 
+> [!WARNING]
 > This will remove the models you have imported from the catalog. You will be able to import it again afterward.
 
 If you want to keep the data, you can migrate it by updating certain properties within the recipes and models fields.
 
 ### Recipes
 
-The recipe object has a new property `backend` which defines which framework is required. 
+The recipe object has a new property `backend` which defines which framework is required.
 Value accepted are `llama-cpp`, `whisper-cpp` and `none`.
 
 Moreover, the `models` property has been changed to `recommended`.
 
 > [!TIP]
-> Before Podman AI Lab version v1.2 recipes uses the `models` property to list the models compatible. Now all models using the same `backend` could be used. We introduced `recommended` to highlight certain models. 
+> Before Podman AI Lab version v1.2 recipes uses the `models` property to list the models compatible. Now all models using the same `backend` could be used. We introduced `recommended` to highlight certain models.
 
 **Example**
 

Diff for: PACKAGING-GUIDE.md

+4 -7

@@ -41,7 +41,7 @@ A model has the following attributes:
 - ```license```: the license under which the model is available
 - ```url```: the URL used to download the model
 - ```memory```: the memory footprint of the model in bytes, as computed by the workflow `.github/workflows/compute-model-sizes.yaml`
-- ```sha256```: the SHA-256 checksum to be used to verify the downloaded model is identical to the original. It is optional and it must be HEX encoded 
+- ```sha256```: the SHA-256 checksum to be used to verify the downloaded model is identical to the original. It is optional and it must be HEX encoded
 
 #### Recipes
 

@@ -65,7 +65,7 @@ The configuration file is called ```ai-lab.yaml``` and follows the following syn
 
 The root elements are called ```version``` and ```application```.
 
-```version``` represents the version of the specifications that ai-lab adheres to (so far, the only accepted value here is `v1.0`). 
+```version``` represents the version of the specifications that ai-lab adheres to (so far, the only accepted value here is `v1.0`).
 
 ```application``` contains an attribute called ```containers``` whose syntax is an array of objects containing the following attributes:
 - ```name```: the name of the container

@@ -102,15 +102,12 @@ application:
   - name: chatbot-model-servicecuda
     contextdir: model_services
     containerfile: cuda/Containerfile
-    model-service: true 
+    model-service: true
     gpu-env:
       - cuda
-    arch: 
+    arch:
       - amd64
     ports:
       - 8501
     image: quay.io/redhat-et/model_services:latest
 ```
-
-
-

Diff for: RELEASE.md

+1 -4

@@ -14,7 +14,7 @@ Below is what a typical release week may look like:
 
 - **Monday (Notify):** 48-hour notification. Communicate to maintainers and public channels a release will be cut on Wednesday and to merge any pending PRs. Inform QE team. Start work on blog post as it is usually the longest part of the release process.
 - **Tuesday (Staging, Testing & Blog):** Stage the release (see instructions below) to create a new cut of the release to test. Test the pre-release (master branch) build briefly. Get feedback from committers (if applicable). Push the blog post for review (as it usually takes a few back-and-forth reviews on documentation).
-- **Wednesday (Release):** Publish the new release on the catalog using the below release process. 
+- **Wednesday (Release):** Publish the new release on the catalog using the below release process.
 - **Thursday (Post-release Testing & Blog):** Test the post-release build briefly for any critical bugs. Confirm that new release has been pushed to the catalog. Push the blog post live. Get a known issues list together from QE and publish to the Podman Desktop Discussions, link to this from the release notes.
 - **Friday (Communicate):** Friday is statistically the best day for new announcements. Post on internal channels. Post on reddit, hackernews, twitter, etc.
 

@@ -58,6 +58,3 @@ Pre-requisites:
 #### Catalog
 
 Create and submit a PR to the catalog (https://github.com/containers/podman-desktop-catalog on branch gh-pages). This is manual and will be automated in the future.
-
-
-

Diff for: api/openapi.yaml

+3 -3

@@ -56,7 +56,7 @@ paths:
       operationId: pullModel
       tags:
         - models
-      description: | 
+      description: |
        Download a model from the Podman AI Lab catalog.
       summary: |
        Download a model from the Podman AI Lab Catalog.

@@ -139,9 +139,9 @@ components:
         stream:
           type: boolean
           description: |
-            If false the response will be returned as a single response object, 
+            If false the response will be returned as a single response object,
             rather than a stream of objects
-      required: 
+      required:
         - model
 
     ProgressResponse:

Diff for: docs/proposals/ai-studio.md

+7 -7

@@ -34,20 +34,20 @@ application:
     contextdir: model_services
     containerfile: base/Containerfile
     model-service: true
-    backend: 
+    backend:
       - llama
     arch:
       - arm64
       - amd64
   - name: chatbot-model-servicecuda
     contextdir: model_services
     containerfile: cuda/Containerfile
-    model-service: true 
-    backend: 
+    model-service: true
+    backend:
       - llama
     gpu-env:
       - cuda
-    arch: 
+    arch:
       - amd64
 ```
 

@@ -74,7 +74,7 @@ application:
     exec: # added
       command: # added
         - curl -f localhost:7860 || exit 1 # added
-    backend: 
+    backend:
       - llama
     arch:
       - arm64

@@ -87,11 +87,11 @@ application:
     exec: # added
       command: # added
         - curl -f localhost:7860 || exit 1 # added
-    backend: 
+    backend:
       - llama
     gpu-env:
       - cuda
-    arch: 
+    arch:
       - amd64
 ```
 

Diff for: docs/proposals/state-management.md

+4 -5

@@ -1,9 +1,9 @@
 # State management
 
-The backend manages and persists the State. The backend pushes new state to the front-end 
+The backend manages and persists the State. The backend pushes new state to the front-end
 when changes happen, and the front-end can ask for the current value of the state.
 
-The front-end uses `readable` stores to expose the state to the different pages. The store 
+The front-end uses `readable` stores to expose the state to the different pages. The store
 listens for new states pushed by the backend (`onMessage`), and asks for the current state
 at initial time.
 

@@ -14,7 +14,7 @@ The pages of the front-end subscribe to the store to get the value of the state
 The catalog is persisted as a file in the user's filesystem. The backend reads the file at startup,
 and watches the file for changes. The backend updates the state as soon as changes it detects changes.
 
-The front-end uses a `readable` store, which waits for changes on the Catalog state 
+The front-end uses a `readable` store, which waits for changes on the Catalog state
 (using `onMessage('new-catalog-state', data)`),
 and asks for the current state at startup (with `postMessage('ask-catalog-state')`).
 

@@ -23,7 +23,7 @@ of the Catalog state in a reactive manner.
 
 ## Pulled applications
 
-The front-end initiates the pulling of an application (using `postMessage('pull-application', app-id)`). 
+The front-end initiates the pulling of an application (using `postMessage('pull-application', app-id)`).
 
 The backend manages and persists the state of the pulled applications and pushes every update
 on the state (progression, etc.) (using `postMessage('new-pulled-application-state, app-id, data)`).

@@ -49,4 +49,3 @@ and asks for the current state at startup (using `postMessage('ask-error-state)`
 The interested pages of the front-end subscribe to the store to display the errors related to the page.
 
 The user can acknowledge an error (using a `postMessage('ack-error', id)`).
-

Diff for: packages/backend/src/templates/python-langchain.mustache

+2 -2

@@ -1,6 +1,6 @@
 pip
 =======
-pip install langchain langchain-openai 
+pip install langchain langchain-openai
 
 AiService.py
 ==============

@@ -10,7 +10,7 @@ from langchain_core.prompts import ChatPromptTemplate
 
 model_service = "{{{ endpoint }}}"
 
-llm = OpenAI(base_url=model_service, 
+llm = OpenAI(base_url=model_service,
              api_key="sk-no-key-required",
              streaming=True)
 prompt = ChatPromptTemplate.from_messages([

Diff for: packages/backend/src/templates/quarkus-langchain4j.mustache

-2

@@ -32,5 +32,3 @@ String request(String question);
 
 ======
 Inject AIService into REST resource or other CDI resource and use the request method to call the LLM model. That's it
-
-
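Every hunk in this commit is the same class of fix: trailing blanks stripped from line ends, plus empty lines removed at end of file. As a sketch (the sample file below is hypothetical), the remaining trailing whitespace in a tree can be listed with a POSIX grep before cutting a chore commit like this one:

```shell
# Hypothetical sample file: two of its three lines end in trailing blanks.
printf 'clean\ntrailing \ntabbed\t\n' > sample.txt

# -n prints line numbers; [[:blank:]] matches spaces and tabs at end of line.
grep -nE '[[:blank:]]+$' sample.txt
```

On a clean file the same command prints nothing and exits non-zero, which makes it straightforward to wire into a CI check or pre-commit hook.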
