Commit a8d111e

Authored by Raman Yelianevich (yelianevich)
Docs: Fix list rendering and typos (#13214) (#13267)
Co-authored-by: Raman Yelianevich <[email protected]>
1 parent e141ff7 commit a8d111e

File tree

5 files changed: +16 −9 lines

docs/docs/aws.md

Lines changed: 2 additions & 0 deletions
@@ -382,7 +382,9 @@ s3://my-table-data-bucket/my_ns.db/my_table/0101/0110/1001/10110010/category=ord
 ```
 
 Note, the path resolution logic for `ObjectStoreLocationProvider` is `write.data.path` then `<tableLocation>/data`.
+
 However, for the older versions up to 0.12.0, the logic is as follows:
+
 - before 0.12.0, `write.object-storage.path` must be set.
 - at 0.12.0, `write.object-storage.path` then `write.folder-storage.path` then `<tableLocation>/data`.
 - at 2.0.0 `write.object-storage.path` and `write.folder-storage.path` will be removed
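For context, the `write.data.path` property that this resolution logic consults is an ordinary table property; a minimal Spark SQL sketch (namespace, table, and bucket names are placeholders):

```sql
-- Hypothetical names; write.data.path is checked before the
-- <tableLocation>/data fallback when the object-storage layout is enabled.
ALTER TABLE my_ns.my_table SET TBLPROPERTIES (
    'write.object-storage.enabled' = 'true',
    'write.data.path' = 's3://my-table-data-bucket'
);
```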

docs/docs/flink-writes.md

Lines changed: 5 additions & 3 deletions
@@ -391,6 +391,8 @@ SET table.exec.iceberg.use-v2-sink = true;
 ## Writing with DataStream
 
 To use SinkV2 based implementation, replace `FlinkSink` with `IcebergSink` in the provided snippets.
-Warning: There are some slight differences between these implementations:
-- The `RANGE` distribution mode is not yet available for the `IcebergSink`
-- When using `IcebergSink` use `uidSuffix` instead of the `uidPrefix`
+!!! warning
+    There are some slight differences between these implementations:
+
+    - The `RANGE` distribution mode is not yet available for the `IcebergSink`
+    - When using `IcebergSink` use `uidSuffix` instead of the `uidPrefix`

docs/docs/hive.md

Lines changed: 2 additions & 0 deletions
@@ -386,6 +386,7 @@ ALTER TABLE orders REPLACE COLUMNS (remaining string);
 
 #### Partition evolution
 You change the partitioning schema using the following commands:
+
 * Change the partitioning schema to new identity partitions:
 ```sql
 ALTER TABLE default.customers SET PARTITION SPEC (last_name);
@@ -394,6 +395,7 @@ ALTER TABLE default.customers SET PARTITION SPEC (last_name);
 ```sql
 ALTER TABLE order SET PARTITION SPEC (month(ts));
 ```
+
 #### Table migration
 You can migrate Avro / Parquet / ORC external tables to Iceberg tables using the following command:
 ```sql

docs/docs/spark-queries.md

Lines changed: 4 additions & 3 deletions
@@ -327,9 +327,10 @@ SELECT * FROM prod.db.table.files;
 
 !!! info
     Content refers to type of content stored by the data file:
-    * 0 Data
-    * 1 Position Deletes
-    * 2 Equality Deletes
+
+    - 0 - Data
+    - 1 - Position Deletes
+    - 2 - Equality Deletes
 
 To show only data files or delete files, query `prod.db.table.data_files` and `prod.db.table.delete_files` respectively.
 To show all files, data files and delete files across all tracked snapshots, query `prod.db.table.all_files`, `prod.db.table.all_data_files` and `prod.db.table.all_delete_files` respectively.
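The `content` values listed above can be filtered on directly when querying the files metadata table; a brief sketch (the table name is illustrative):

```sql
-- content codes: 0 = data, 1 = position deletes, 2 = equality deletes
SELECT content, file_path, record_count
FROM prod.db.table.files
WHERE content = 0;  -- keep only data files
```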

site/README.md

Lines changed: 3 additions & 3 deletions
@@ -115,7 +115,7 @@ make clean
 
 #### Testing local changes on versioned docs
 
-When you build the docs as described above, by default the versioned docs are mounted from the upstream remote repositiory called `iceberg_docs`. One exception is the `nightly` version that is a soft link to the local `docs/` folder.
+When you build the docs as described above, by default the versioned docs are mounted from the upstream remote repository called `iceberg_docs`. One exception is the `nightly` version that is a soft link to the local `docs/` folder.
 
 When you make changes to some of the historical versioned docs in a local git branch you can mount this git branch instead of the remote one by setting the following environment variables:
 
@@ -125,7 +125,7 @@ When you make changes to some of the historical versioned docs in a local git br
 
 #### Offline mode
 
-One of the great advantages to the MkDocs material plugin is the [offline feature](https://squidfunk.github.io/mkdocs-material/plugins/offline). You can view the Iceberg docs without the need of a server. To enable OFFLINE builds, add theOFFLINE environment variable to either `build` or `serve` recipes.
+One of the great advantages to the MkDocs material plugin is the [offline feature](https://squidfunk.github.io/mkdocs-material/plugins/offline). You can view the Iceberg docs without the need of a server. To enable OFFLINE builds, add the OFFLINE environment variable to either `build` or `serve` recipes.
 
 ```sh
 make build OFFLINE=true
@@ -136,7 +136,7 @@ make build OFFLINE=true
 
 ## Release process
 
-Deploying the docs is a two step process:
+Deploying the docs is a two-step process:
 
 > [!WARNING]
 > The `make release` directive is currently unavailable as we wanted to discuss the best way forward on how or if we should automate the release. It involves taking an existing snapshot of the versioned documentation, and potentially automerging the [`docs` branch](https://github.com/apache/iceberg/tree/docs) and the [`javadoc` branch](https://github.com/apache/iceberg/tree/javadoc) which are independent from the `main` branch. Once this is complete, we can create a pull request with an offline build of the documentation to verify everything renders correctly, and then have the release manager merge that PR to finalize the docs release. So the real process would be manually invoking a docs release action, then merging a pull request.
