docs/en/connector-v2/sink/OssJindoFile.md (4 additions, 1 deletion)
```diff
@@ -8,6 +8,9 @@ Output data to oss file system using jindo api.
 
 :::tip
 
+You need to download [jindosdk-4.6.1.tar.gz](https://jindodata-binary.oss-cn-shanghai.aliyuncs.com/release/4.6.1/jindosdk-4.6.1.tar.gz)
+and then unzip it, copy jindo-sdk-4.6.1.jar and jindo-core-4.6.1.jar from lib to ${SEATUNNEL_HOME}/lib.
+
 If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 2.x.
 
 If you use SeaTunnel Engine, It automatically integrated the hadoop jar when you download and install SeaTunnel Engine. You can check the jar package under ${SEATUNNEL_HOME}/lib to confirm this.
```
```diff
@@ -237,7 +240,7 @@ For orc file format simple config
```
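The tip added above amounts to a fetch-unpack-copy sequence. A minimal sketch of those steps follows; the extracted directory name `jindosdk-4.6.1/` and the `SEATUNNEL_HOME` value are assumptions, not verified against the actual tarball layout:

```shell
# Fetch the JindoSDK release referenced in the tip
wget https://jindodata-binary.oss-cn-shanghai.aliyuncs.com/release/4.6.1/jindosdk-4.6.1.tar.gz

# Unpack it (assumed to extract to a jindosdk-4.6.1/ directory)
tar -xzf jindosdk-4.6.1.tar.gz

# Copy the two required jars from its lib/ into SeaTunnel's lib/
cp jindosdk-4.6.1/lib/jindo-sdk-4.6.1.jar \
   jindosdk-4.6.1/lib/jindo-core-4.6.1.jar \
   "${SEATUNNEL_HOME}/lib/"
```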
docs/en/connector-v2/source/OssJindoFile.md (5 additions, 2 deletions)
```diff
@@ -8,6 +8,9 @@ Read data from aliyun oss file system using jindo api.
 
 :::tip
 
+You need to download [jindosdk-4.6.1.tar.gz](https://jindodata-binary.oss-cn-shanghai.aliyuncs.com/release/4.6.1/jindosdk-4.6.1.tar.gz)
+and then unzip it, copy jindo-sdk-4.6.1.jar and jindo-core-4.6.1.jar from lib to ${SEATUNNEL_HOME}/lib.
+
 If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 2.x.
 
 If you use SeaTunnel Engine, It automatically integrated the hadoop jar when you download and install SeaTunnel Engine. You can check the jar package under ${SEATUNNEL_HOME}/lib to confirm this.
```
````diff
@@ -257,7 +260,7 @@ Filter pattern, which used for filtering files.
 
 ```hocon
 
-OssFile {
+OssJindoFile {
   path = "/seatunnel/orc"
   bucket = "oss://tyrantlucifer-image-bed"
   access_key = "xxxxxxxxxxxxxxxxx"
@@ -270,7 +273,7 @@ Filter pattern, which used for filtering files.
````
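The rename from `OssFile` to `OssJindoFile` matches the source connector's actual block name. A fuller sketch of such a source config is below; the values are the placeholders visible in the surrounding diff, and `access_secret`, `endpoint`, and `file_format_type` are assumed option names drawn from the connector's option set rather than from this diff:

```hocon
OssJindoFile {
  # Placeholder values from the diff above; replace with real ones
  path = "/seatunnel/orc"
  bucket = "oss://tyrantlucifer-image-bed"
  access_key = "xxxxxxxxxxxxxxxxx"
  # Assumed additional options, not shown in the captured hunk
  access_secret = "xxxxxxxxxxxxxxxxx"
  endpoint = "oss-cn-beijing.aliyuncs.com"
  file_format_type = "orc"
}
```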