From 74882664806a40ec61c668c7ce0c137e8dac496a Mon Sep 17 00:00:00 2001
From: Michael McCune
Date: Fri, 27 Jul 2018 17:40:42 -0400
Subject: [PATCH] add menu entry items for how do i articles

---
 _howdoi/how-do-i-launch-a-jupyter-notebook.adoc | 1 +
 _howdoi/how-do-i-recognize-version-clash.adoc   | 1 +
 _howdoi/how-to-connect-to-cluster.adoc          | 1 +
 _howdoi/how-to-connect-to-kafka.adoc            | 1 +
 _howdoi/how-to-use-spark-configs.adoc           | 1 +
 _howdoi/use-python-packages.adoc                | 1 +
 6 files changed, 6 insertions(+)

diff --git a/_howdoi/how-do-i-launch-a-jupyter-notebook.adoc b/_howdoi/how-do-i-launch-a-jupyter-notebook.adoc
index f508a14a..962efa09 100644
--- a/_howdoi/how-do-i-launch-a-jupyter-notebook.adoc
+++ b/_howdoi/how-do-i-launch-a-jupyter-notebook.adoc
@@ -1,5 +1,6 @@
 = launch a Jupyter notebook on OpenShift
 :page-layout: howdoi
+:page-menu_entry: How do I?
 
 There are multiple ways to launch a Jupyter notebook on OpenShift with the
 radanalytics.io tooling. You can use the OpenShift console or the `oc` command
diff --git a/_howdoi/how-do-i-recognize-version-clash.adoc b/_howdoi/how-do-i-recognize-version-clash.adoc
index b84ce052..7f5b5b4e 100644
--- a/_howdoi/how-do-i-recognize-version-clash.adoc
+++ b/_howdoi/how-do-i-recognize-version-clash.adoc
@@ -1,5 +1,6 @@
 = recognize Spark version mismatch between driver, master and/or workers?
 :page-layout: howdoi
+:page-menu_entry: How do I?
 
 It's important that the Spark version running on your driver, master, and
 worker pods all match. Although some versions _might actually_ interoperate,
diff --git a/_howdoi/how-to-connect-to-cluster.adoc b/_howdoi/how-to-connect-to-cluster.adoc
index 34beb698..7a67a619 100644
--- a/_howdoi/how-to-connect-to-cluster.adoc
+++ b/_howdoi/how-to-connect-to-cluster.adoc
@@ -1,5 +1,6 @@
 = connect to a cluster to debug / develop?
 :page-layout: howdoi
+:page-menu_entry: How do I?
 
 [source,bash]
 oc run -it --rm dev-shell --image=radanalyticsio/openshift-spark -- spark-shell
diff --git a/_howdoi/how-to-connect-to-kafka.adoc b/_howdoi/how-to-connect-to-kafka.adoc
index dab3511b..21168341 100644
--- a/_howdoi/how-to-connect-to-kafka.adoc
+++ b/_howdoi/how-to-connect-to-kafka.adoc
@@ -1,5 +1,6 @@
 = connect to Apache Kafka?
 :page-layout: howdoi
+:page-menu_entry: How do I?
 
 You need to add `--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.0`
 when running `spark-shell`, `spark-submit` or to `SPARK_OPTIONS` for S2I. For
diff --git a/_howdoi/how-to-use-spark-configs.adoc b/_howdoi/how-to-use-spark-configs.adoc
index 737acadf..aaad914e 100644
--- a/_howdoi/how-to-use-spark-configs.adoc
+++ b/_howdoi/how-to-use-spark-configs.adoc
@@ -1,5 +1,6 @@
 = use custom Spark configuration files with my cluster?
 :page-layout: howdoi
+:page-menu_entry: How do I?
 
 Create custom versions of standard Spark configuration files such as `spark-defaults.conf`
 or `spark-env.sh` and put them together in a subdirectory, then create a configmap
diff --git a/_howdoi/use-python-packages.adoc b/_howdoi/use-python-packages.adoc
index 6fc6e232..7e03921e 100644
--- a/_howdoi/use-python-packages.adoc
+++ b/_howdoi/use-python-packages.adoc
@@ -1,5 +1,6 @@
 = install Python packages in Jupyter notebooks on OpenShift
 :page-layout: howdoi
+:page-menu_entry: How do I?
 :source-highlighter: coderay
 :coderay-css: style