Commit ac6e4c0

liugddxgnehil authored and committed

[Feature][Connector-V2][Oss jindo] Fix the problem of jindo driver download failure. (apache#5511)

1 parent 831c250 · commit ac6e4c0

File tree: 3 files changed, +11 −3 lines

docs/en/connector-v2/sink/OssJindoFile.md (4 additions, 1 deletion)

````diff
@@ -8,6 +8,9 @@ Output data to oss file system using jindo api.
 
 :::tip
 
+You need to download [jindosdk-4.6.1.tar.gz](https://jindodata-binary.oss-cn-shanghai.aliyuncs.com/release/4.6.1/jindosdk-4.6.1.tar.gz)
+and then unzip it, copy jindo-sdk-4.6.1.jar and jindo-core-4.6.1.jar from lib to ${SEATUNNEL_HOME}/lib.
+
 If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 2.x.
 
 If you use SeaTunnel Engine, It automatically integrated the hadoop jar when you download and install SeaTunnel Engine. You can check the jar package under ${SEATUNNEL_HOME}/lib to confirm this.
@@ -237,7 +240,7 @@ For orc file format simple config
 
 ```bash
 
-OssFile {
+OssJindoFile {
     path="/seatunnel/sink"
     bucket = "oss://tyrantlucifer-image-bed"
     access_key = "xxxxxxxxxxx"
````
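The manual step this doc change adds (download the SDK tarball, unzip it, copy the two jars into `${SEATUNNEL_HOME}/lib`) can be sketched as a small shell snippet. The URL and jar names are taken verbatim from the diff; the `install_jindo_jars` function name and the requirement that `SEATUNNEL_HOME` be set are illustrative assumptions, not part of the commit:

```shell
#!/usr/bin/env sh
# Sketch of the manual install step the updated docs describe.
# Assumption: SEATUNNEL_HOME points at your SeaTunnel installation;
# the function name is illustrative. URL and jar names come from the diff.
JINDO_VERSION="4.6.1"
TARBALL="jindosdk-${JINDO_VERSION}.tar.gz"
DOWNLOAD_URL="https://jindodata-binary.oss-cn-shanghai.aliyuncs.com/release/${JINDO_VERSION}/${TARBALL}"

install_jindo_jars() {
    : "${SEATUNNEL_HOME:?set SEATUNNEL_HOME before running}"
    curl -fLO "$DOWNLOAD_URL"      # fetch the SDK tarball
    tar -xzf "$TARBALL"            # unpack into ./jindosdk-4.6.1/
    # copy only the two jars the connector needs into SeaTunnel's lib dir
    cp "jindosdk-${JINDO_VERSION}/lib/jindo-sdk-${JINDO_VERSION}.jar" \
       "jindosdk-${JINDO_VERSION}/lib/jindo-core-${JINDO_VERSION}.jar" \
       "${SEATUNNEL_HOME}/lib/"
}
```

Running `install_jindo_jars` once per installation is enough; spark/flink clusters additionally need the hadoop integration the tip mentions.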

docs/en/connector-v2/source/OssJindoFile.md (5 additions, 2 deletions)

````diff
@@ -8,6 +8,9 @@ Read data from aliyun oss file system using jindo api.
 
 :::tip
 
+You need to download [jindosdk-4.6.1.tar.gz](https://jindodata-binary.oss-cn-shanghai.aliyuncs.com/release/4.6.1/jindosdk-4.6.1.tar.gz)
+and then unzip it, copy jindo-sdk-4.6.1.jar and jindo-core-4.6.1.jar from lib to ${SEATUNNEL_HOME}/lib.
+
 If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 2.x.
 
 If you use SeaTunnel Engine, It automatically integrated the hadoop jar when you download and install SeaTunnel Engine. You can check the jar package under ${SEATUNNEL_HOME}/lib to confirm this.
@@ -257,7 +260,7 @@ Filter pattern, which used for filtering files.
 
 ```hocon
 
-OssFile {
+OssJindoFile {
     path = "/seatunnel/orc"
     bucket = "oss://tyrantlucifer-image-bed"
     access_key = "xxxxxxxxxxxxxxxxx"
@@ -270,7 +273,7 @@ Filter pattern, which used for filtering files.
 
 ```hocon
 
-OssFile {
+OssJindoFile {
     path = "/seatunnel/json"
     bucket = "oss://tyrantlucifer-image-bed"
    access_key = "xxxxxxxxxxxxxxxxx"
````

seatunnel-connectors-v2/connector-file/connector-file-jindo-oss/pom.xml (2 additions, 0 deletions)

```diff
@@ -46,12 +46,14 @@
             <groupId>com.aliyun.jindodata</groupId>
             <artifactId>jindo-core</artifactId>
             <version>${jindo-sdk.version}</version>
+            <scope>provided</scope>
         </dependency>
 
         <dependency>
             <groupId>com.aliyun.jindodata</groupId>
             <artifactId>jindosdk</artifactId>
             <version>${jindo-sdk.version}</version>
+            <scope>provided</scope>
         </dependency>
 
         <dependency>
```

0 commit comments