
Commit 50521ff
committed: referring to wiki in readme
1 parent 79a596a

File tree

1 file changed: +1 addition, −43 deletions


README.md

Lines changed: 1 addition & 43 deletions
````diff
@@ -189,49 +189,7 @@ dataset.where( col("colA") `===` 6 )
 dataset.where( col("colA") eq 6)
 ```
 
-In short, all supported operators are:
-
-- `==`,
-- `!=`,
-- `eq` / `` `===` ``,
-- `neq` / `` `=!=` ``,
-- `-col(...)`,
-- `!col(...)`,
-- `gt`,
-- `lt`,
-- `geq`,
-- `leq`,
-- `or`,
-- `and` / `` `&&` ``,
-- `+`,
-- `-`,
-- `*`,
-- `/`,
-- `%`
-
-Secondly, there are some quality of life additions as well:
-
-In Kotlin, Ranges are often
-used to solve inclusive/exclusive situations for a range. So, you can now do:
-```kotlin
-dataset.where( col("colA") inRangeOf 0..2 )
-```
-
-Also, for columns containing map- or array like types:
-
-```kotlin
-dataset.where( col("colB")[0] geq 5 )
-```
-
-Finally, thanks to Kotlin reflection, we can provide a type- and refactor safe way
-to create `TypedColumn`s and with those a new Dataset from pieces of another using the `selectTyped()` function, added to the API:
-```kotlin
-val dataset: Dataset<YourClass> = ...
-val newDataset: Dataset<Pair<TypeA, TypeB>> = dataset.selectTyped(col(YourClass::colA), col(YourClass::colB))
-
-// Alternatively, for instance when working with a Dataset<Row>
-val typedDataset: Dataset<Pair<String, Int>> = otherDataset.selectTyped(col("a").`as`<String>(), col("b").`as`<Int>())
-```
+To read more, check the [wiki](https://github.com/JetBrains/kotlin-spark-api/wiki/Column-functions).
 
 ### Overload resolution ambiguity
 
````
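The README text removed in this commit (now moved to the linked wiki) describes the library's infix column operators. As a rough sketch of how they are used, assuming the `org.jetbrains.kotlinx:kotlin-spark-api` dependency and a local Spark session are available (`withSpark` and `dsOf` are helpers from that API; the data class `Record` here is made up for illustration):

```kotlin
// Sketch only: requires the kotlin-spark-api artifact and Apache Spark on the
// classpath; not runnable standalone.
import org.jetbrains.kotlinx.spark.api.*

data class Record(val colA: Int)

fun main() = withSpark {
    // Build a small typed Dataset from a hypothetical data class.
    val dataset = dsOf(Record(1), Record(2), Record(6))

    // Infix comparison operator from the (former) README section:
    dataset.where(col("colA") eq 6).show()

    // Kotlin-range quality-of-life addition described in the removed text:
    dataset.where(col("colA") inRangeOf 0..2).show()
}
```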
0 commit comments