docs/en/FAQs.md
+5 -5
@@ -13,12 +13,12 @@ Try searching for your problems in the top right corner, using different keyword
## How to seamlessly remount JuiceFS file system? {#seamless-remount}

-If you can accept downtime, simply delete the Mount Pod and JuiceFS is remounted when the Mount Pod is re-created (note that if [automatic mount point recovery](./guide/configurations.md#automatic-mount-point-recovery) isn't enabled, you'll need to restart or re-create application pods to bring mount point back into service). But in Kubernetes, we often wish a seamless remount. You can achieve a seamless remount by the following process:
+If you can accept downtime, simply delete the Mount Pod and JuiceFS is remounted when the Mount Pod is re-created (note that if [automatic mount point recovery](./guide/configurations.md#automatic-mount-point-recovery) isn't enabled, you'll need to restart or re-create application Pods to bring mount point back into service). But in Kubernetes, we often wish a seamless remount. You can achieve a seamless remount by the following process:

-* When [upgrading or downgrading CSI Driver](./administration/upgrade-csi-driver.md), if the Mount Pod image is changed along the way, CSI Driver will create a new Mount Pod when you perform a rolling upgrade on application pods.
-* Modify [mount options](./guide/configurations.md#mount-options) at PV level, and perform a rolling upgrade on application pods. Note that for dynamic provisioning, although you can modify mount options in [StorageClass](./guide/pv.md#create-storage-class), but the changes made will not be reflected on existing PVs, a rolling upgrade thereafter will not trigger Mount Pod re-creation.
-* Modify [volume credentials](./guide/pv.md#volume-credentials), and perform a rolling upgrade on application pods.
-* If no configuration has been modified, but a seamless remount is still in need, you can make some trivial, ineffective changes to mount options (e.g. increase `cache-size` by 1), and then perform a rolling upgrade on application pods.
+* When [upgrading or downgrading CSI Driver](./administration/upgrade-csi-driver.md), if the Mount Pod image is changed along the way, CSI Driver will create a new Mount Pod when you perform a rolling upgrade on application Pods.
+* Modify [mount options](./guide/configurations.md#mount-options) at PV level, and perform a rolling upgrade on application Pods. Note that for dynamic provisioning, although you can modify mount options in [StorageClass](./guide/pv.md#create-storage-class), but the changes made will not be reflected on existing PVs, a rolling upgrade thereafter will not trigger Mount Pod re-creation.
+* Modify [volume credentials](./guide/pv.md#volume-credentials), and perform a rolling upgrade on application Pods.
+* If no configuration has been modified, but a seamless remount is still in need, you can make some trivial, ineffective changes to mount options (e.g. increase `cache-size` by 1), and then perform a rolling upgrade on application Pods.

To learn about the CSI Driver implementation and find out when new Mount Pods will be created to achieve seamless remount, see the `GenHashOfSetting()` function in [`pkg/juicefs/mount/pod_mount.go`](https://github.com/juicedata/juicefs-csi-driver/blob/master/pkg/juicefs/mount/pod_mount.go).
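
As a concrete illustration of the mount-option route described above, here is a minimal PV sketch where `cache-size` is bumped by 1 to force a new Mount Pod on the next rolling upgrade. The PV name, capacity, volume handle and Secret reference are hypothetical placeholders; `spec.mountOptions` and the `csi.juicefs.com` driver name follow the linked PV guide.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: juicefs-pv                 # hypothetical PV name
spec:
  capacity:
    storage: 10Pi                  # placeholder capacity
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - cache-size=10241             # bumped from 10240 by 1: a trivial, ineffective change
  csi:
    driver: csi.juicefs.com
    volumeHandle: juicefs-pv       # hypothetical; must match your existing volume
    fsType: juicefs
    nodePublishSecretRef:
      name: juicefs-secret         # hypothetical Secret holding volume credentials
      namespace: default
```

After editing the PV, a rolling upgrade of the application (e.g. `kubectl rollout restart deployment/<name>`) lets the hash computed by `GenHashOfSetting()` change, so the CSI Driver creates a fresh Mount Pod.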
docs/en/administration/going-production.md
+5 -5
@@ -185,7 +185,7 @@ Once metrics data is collected, refer to the following documents to set up Grafa
## Collect Mount Pod logs using EFK {#collect-mount-pod-logs}

-Troubleshooting CSI Driver usually involves reading Mount Pod logs, if [checking Mount Pod logs in real time](./troubleshooting.md#check-mount-pod) isn't enough, consider deploying an EFK (Elasticsearch + Fluentd + Kibana) stack (or other suitable systems) in Kubernetes Cluster to collect pod logs for query. Taking EFK for example:
+Troubleshooting CSI Driver usually involves reading Mount Pod logs, if [checking Mount Pod logs in real time](./troubleshooting.md#check-mount-pod) isn't enough, consider deploying an EFK (Elasticsearch + Fluentd + Kibana) stack (or other suitable systems) in Kubernetes Cluster to collect Pod logs for query. Taking EFK for example:

- Elasticsearch: index logs and provide a complete full-text search engine, which can facilitate users to retrieve the required data from the log. For installation, refer to the [official documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html).
- Fluentd: fetch container log files, filter and transform log data, and then deliver the data to the Elasticsearch cluster. For installation, refer to the [official documentation](https://docs.fluentd.org/installation).

-Kubelet comes with [different authentication modes](https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz), and default `AlwaysAllow` mode effectively disables authentication. But if kubelet uses other authentication modes, CSI Node will run into error when listing pods (this is however, a issue fixed in newer versions, continue reading for more):
+Kubelet comes with [different authentication modes](https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz), and default `AlwaysAllow` mode effectively disables authentication. But if kubelet uses other authentication modes, CSI Node will run into error when listing Pods (this is however, a issue fixed in newer versions, continue reading for more):
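
For reference, a kubelet that is not in `AlwaysAllow` mode typically carries a configuration along these lines. This is a generic Kubernetes `KubeletConfiguration` sketch, not something the diff prescribes; the config file location and CA file path are common defaults, not guaranteed values.

```yaml
# Typical kubelet config file (often /var/lib/kubelet/config.yaml): anonymous access
# disabled, requests authenticated via webhook and authorized against the APIServer.
# Against such a kubelet, older CSI Node releases could hit the listing error above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
```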
@@ -335,9 +335,9 @@ If however, a configuration file isn't used, then kubelet is configured purely v
## Large scale clusters {#large-scale}

-"Large scale"is not precisely defined in this context, if you're using a Kubernetes cluster over 100 worker nodes, or pod number exceeds 1000, or a smaller cluster but with unusual high load for the APIServer, refer to this section for performance recommendations.
+"Large scale"is not precisely defined in this context, if you're using a Kubernetes cluster over 100 worker nodes, or Pod number exceeds 1000, or a smaller cluster but with unusual high load for the APIServer, refer to this section for performance recommendations.

-* Enable `ListPod` cache: CSI Driver needs to obtain the pod list, when faced with a large number of pods, APIServer and the underlying etcd can suffer performance issues. Use the `ENABLE_APISERVER_LIST_CACHE="true"` environment variable to enable this cache, which can be defined as follows inside Helm values:
+* Enable `ListPod` cache: CSI Driver needs to obtain the Pod list, when faced with a large number of Pods, APIServer and the underlying etcd can suffer performance issues. Use the `ENABLE_APISERVER_LIST_CACHE="true"` environment variable to enable this cache, which can be defined as follows inside Helm values:

```yaml title="values-mycluster.yaml"
controller:
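# The diff hunk truncates the values file at this point. A hedged sketch of how the
# environment variable might be wired up, assuming the chart accepts extra env vars
# under `controller.envs` and `node.envs` (verify against your chart version):
  envs:
    - name: ENABLE_APISERVER_LIST_CACHE
      value: "true"
node:
  envs:
    - name: ENABLE_APISERVER_LIST_CACHE
      value: "true"
```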
@@ -384,7 +384,7 @@ Under the premise of fully understanding the risks of `--writeback`, if your sce
* Configure cache persistence to ensure that the cache directory will not be lost when the container is destroyed. For specific configuration methods, read [Cache settings](../guide/cache.md#cache-settings);
* Choose one of the following methods (you can also adopt both) to ensure that the JuiceFS client has enough time to complete the data upload when the application container exits:

-* Enable [Delayed Mount Pod deletion](../guide/resource-optimization.md#delayed-mount-pod-deletion). Even if the application pod exits, the Mount Pod will wait for the specified time before being destroyed by the CSI Node. Set a reasonable delay to ensure that data is uploaded in a timely manner;
+* Enable [Delayed Mount Pod deletion](../guide/resource-optimization.md#delayed-mount-pod-deletion). Even if the application Pod exits, the Mount Pod will wait for the specified time before being destroyed by the CSI Node. Set a reasonable delay to ensure that data is uploaded in a timely manner;

* Since v0.24, the CSI Driver supports [customizing](../guide/configurations.md#customize-mount-pod) all aspects of the Mount Pod, so you can modify `terminationGracePeriodSeconds`. By using [`preStop`](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks), you can ensure that the Mount Pod waits for data uploads to finish before exiting, as demonstrated below:
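
The demonstration itself falls outside this hunk. Purely as a hedged sketch: the `mountPodPatch` key follows the v0.24+ customization guide linked above, while the 600-second grace period, the `/var/jfsCache` cache path and the `rawstaging` staging directory are assumptions to check against your own deployment.

```yaml
mountPodPatch:
  - terminationGracePeriodSeconds: 600   # assumed upper bound for pending --writeback uploads
    lifecycle:
      preStop:
        exec:
          command:
            - sh
            - -c
            - |
              # Hypothetical wait loop: block until the writeback staging directory
              # (assumed layout: <cache-dir>/<volume-UUID>/rawstaging) has drained.
              while [ -n "$(find /var/jfsCache/*/rawstaging -type f 2>/dev/null | head -n 1)" ]; do
                sleep 5
              done
```

Pair the grace period with the preStop wait so the kubelet does not kill the Mount Pod before the loop finishes.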