Commit 4e9e05a

temp test to see if we can publish to gh packages from a branch
1 parent a0cf372 commit 4e9e05a

File tree

2 files changed: +8 / -14 lines changed


.github/workflows/publish_dev_version.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -26,7 +26,7 @@ jobs:
       - name: Build with Maven
         run: ./mvnw -B package --file pom.xml -Pscala-2.12 -Dkotest.tags="!Kafka"
       - name: Deploy to GH Packages
-        run: mvn --batch-mode deploy
+        run: ./mvnw --batch-mode deploy
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

```

README.md

Lines changed: 7 additions & 13 deletions
````diff
@@ -88,22 +88,16 @@ To define a certain version of Spark or the API itself, simply add it like this:
 ```
 
 Inside the notebook a Spark session will be initiated automatically. This can be accessed via the `spark` value.
-`sc: JavaSparkContext` can also be accessed directly.
-
-One limitation of the notebooks is that the `SparkSession` context cannot be applied
-implicitly to function calls. This means that instead of writing:
-```kotlin
-val ds = listOf(...).toDS()
-```
-you'll need to write:
-```kotlin
-val ds = listOf(...).toDS(spark)
-```
-
-Other than that, the API operates pretty similarly.
+`sc: JavaSparkContext` can also be accessed directly. The API operates pretty similarly.
 
 There is also support for HTML rendering of Datasets and simple (Java)RDDs.
 
+To use Spark Streaming abilities, instead use
+```jupyterpython
+%use kotlin-spark-api-streaming
+```
+This does not start a Spark session right away, meaning you can call `withSparkStreaming(batchDuration) {}`
+in whichever cell you want.
 
 ## Kotlin for Apache Spark features
 
````
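The streaming entry point that the added README lines describe could look roughly like this as a notebook cell. This is a minimal sketch, not standalone-runnable code: it assumes the `kotlin-spark-api-streaming` kernel extension from the diff, that `withSparkStreaming` exposes a streaming context as `ssc`, and that `Durations` is Spark's `org.apache.spark.streaming.Durations`; the socket source and batch duration are illustrative choices, not part of the commit.

```kotlin
// Notebook cell sketch (runs only inside a Kotlin Jupyter kernel):
%use kotlin-spark-api-streaming

// withSparkStreaming(batchDuration) {} starts the streaming session lazily,
// so per the README this block can live in any later cell.
// ssc and socketTextStream are illustrative assumptions.
withSparkStreaming(batchDuration = Durations.seconds(1)) {
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.print() // print each micro-batch to the notebook output
}
```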
