Directory layout:
elk  hadoop-3.3.3  kafka_2.12-3.2.1  spark-3.3.0-bin-hadoop3  spark-demo.jar
Deployment directory: /home
Start/stop Hadoop:
/home/hadoop-3.3.3/sbin/start-all.sh
/home/hadoop-3.3.3/sbin/stop-all.sh
Start/stop Spark:
/home/spark-3.3.0-bin-hadoop3/sbin/start-all.sh
/home/spark-3.3.0-bin-hadoop3/sbin/stop-all.sh
Start ZooKeeper, then Kafka:
/home/kafka_2.12-3.2.1/bin/zookeeper-server-start.sh /home/kafka_2.12-3.2.1/config/zookeeper.properties
/home/kafka_2.12-3.2.1/bin/kafka-server-start.sh /home/kafka_2.12-3.2.1/config/server.properties
ELK: the latest version is deployed with Docker; the /home/elk directory is mounted into the container to hold the configuration files and certificate material.
Elasticsearch 8.3.3 connections use HTTPS with password authentication.
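As a sketch, the Docker deployment described above might look like the following (the container name, single-node discovery setting, and mount paths are assumptions inferred from the /home/elk layout, not taken from the original notes):

```shell
# Hypothetical sketch: run Elasticsearch 8.3.3 in Docker with the
# config directory (including the certs/ subdirectory) mounted from /home/elk.
docker run -d --name elasticsearch \
  -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -v /home/elk/elasticsearch/config:/usr/share/elasticsearch/config \
  docker.elastic.co/elasticsearch/elasticsearch:8.3.3
```

Mounting the whole config directory is what later makes the http.p12 keystore visible on the host at /home/elk/elasticsearch/config/certs/http.p12, the path used in the SparkConf settings below.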
Submit the Spark job to YARN:
/home/spark-3.3.0-bin-hadoop3/bin/spark-submit --class org.example.KafkaSparkEsDemo --master yarn --deploy-mode client --driver-memory 512m --executor-memory 512m --executor-cores 1 spark-demo.jar
With remote debugging enabled:
/home/spark-3.3.0-bin-hadoop3/bin/spark-submit --class org.example.KafkaSparkEsDemo --master yarn --deploy-mode client --driver-memory 512m --executor-memory 512m --executor-cores 1 --driver-java-options "-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8899" spark-demo.jar
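With suspend=y the driver JVM blocks on port 8899 until a debugger attaches. One way to attach from the command line is JDK's jdb (the host address here is an assumption; an IDE "remote JVM debug" configuration pointed at the same host and port works equally well):

```shell
# Attach the JDK command-line debugger to the suspended driver JVM.
jdb -attach 10.10.10.99:8899
```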
8. Problem log
Issue 1:
Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
Three properties were added:
conf.set("es.internal.es.cluster.name","docker-cluster")
conf.set("es.internal.es.cluster.uuid","i033q5N4QuqsPAJs9-nCKQ")
conf.set("es.internal.es.version","8.3.3")
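The cluster name, UUID, and version used in these three properties can be read from the cluster's root endpoint; a likely query (the IP, user, and password shown match the SparkConf settings later in this note, and -k is needed because the certificate is self-signed):

```shell
# The root endpoint returns cluster_name, cluster_uuid, and version.number.
# -k skips certificate verification; -u supplies basic-auth credentials.
curl -k -u 'elastic:T0e*QUGWRt05*F-2PLFP' https://10.10.10.99:9200
```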
Later, the Spark Streaming job still failed when writing to Elasticsearch: while fetching the ES version information, the response from ip:9200 was empty.
The cause is that Elasticsearch 8.3.3 uses HTTPS, so the corresponding configuration has to be added.
As a temporary workaround (Elasticsearch here is deployed with Docker), enter the Docker container,
edit the configuration file elasticsearch.yml,
and set the two security properties in it to false.
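The notes do not name the two properties. Assuming they are the X-Pack security switches that Elasticsearch 8.x auto-enables, the change in elasticsearch.yml would look like this (property names are an assumption, not confirmed by the original):

```yaml
# Assumed properties: disable security, and HTTPS on the REST layer,
# so clients can reach port 9200 over plain HTTP without certificates.
xpack.security.enabled: false
xpack.security.http.ssl:
  enabled: false
```

This disables authentication entirely, so it is only suitable as a temporary measure in a test environment; the proper fix (configuring the connector for HTTPS) is described under Issue 2.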
Official reference:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Issue 2:
sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors); no other nodes left - aborting...
The file path was not specified correctly: there are two files with the .p12 suffix, http.p12 and transport.p12.
Setting the es.net.ssl.keystore.location property to transport.p12 produces the error above; the official documentation describes what each file is for.
The correct file is http.p12, and the path must carry the file:// prefix.
How do you obtain the password for es.net.ssl.truststore.pass if you also deploy with Docker?
Enter the container and retrieve it with a command (see the screenshot in the official documentation).
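A likely form of that command, using the elasticsearch-keystore tool to print the auto-generated password of the HTTP keystore (the container name is an assumption from the earlier deployment):

```shell
# Print the secure password Elasticsearch generated for http.p12.
docker exec -it elasticsearch \
  bin/elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password
```

The value it prints is what goes into both es.net.ssl.keystore.pass and es.net.ssl.truststore.pass below, since both point at the same http.p12 file.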
SparkConf settings:
- conf.set("es.net.ssl", "true")
- conf.set("es.net.ssl.keystore.location", "file:///home/elk/elasticsearch/config/certs/http.p12")
- conf.set("es.net.ssl.keystore.pass", "O2_d33WTSGGCq4T2eB28HA")
- conf.set("es.net.ssl.keystore.type", "PKCS12")
- conf.set("es.net.ssl.truststore.location", "file:///home/elk/elasticsearch/config/certs/http.p12")
- conf.set("es.net.ssl.truststore.pass", "O2_d33WTSGGCq4T2eB28HA")
- conf.set("es.net.ssl.cert.allow.self.signed", "true")
- conf.set("es.index.auto.create", "true")
- conf.set("es.scroll.size", "200")
- conf.set("es.read.metadata", "true")
- conf.set("es.nodes.wan.only", "true")
- conf.set("es.nodes", "10.10.10.99")
- conf.set("es.port", "9200")
- conf.set("es.index.read.missing.as.empty", "true")
- conf.set("es.net.http.auth.user", "elastic")
- conf.set("es.net.http.auth.pass", "T0e*QUGWRt05*F-2PLFP")
- conf.set("es.internal.es.cluster.name", "docker-cluster")
- conf.set("es.internal.es.cluster.uuid", "i033q5N4QuqsPAJs9-nCKQ")
- conf.set("es.internal.es.version", "8.3.3")
- conf.set("es.nodes.client.only", "false")
- conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
- conf.set("spark.executor.memoryOverhead", "1024")
- conf.set("spark.streaming.backpressure.enabled", "true")
- conf.set("spark.streaming.kafka.maxRatePerPartition", "1000")
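As a sketch of how these settings feed into the streaming job, the skeleton below wires the SparkConf into a Kafka direct stream and writes to Elasticsearch with the elasticsearch-hadoop saveToEs API. The bootstrap server, topic, group id, and index name are assumptions for illustration; it needs the spark-streaming-kafka-0-10 and elasticsearch-spark dependencies on the classpath and a live cluster, so it is a sketch rather than a tested program:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._
import org.elasticsearch.spark.streaming._   // adds saveToEs on DStreams

object KafkaSparkEsDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaSparkEsDemo")
    // ... all the es.* and spark.* settings listed above go here ...
    conf.set("es.nodes", "10.10.10.99")
    conf.set("es.port", "9200")

    val ssc = new StreamingContext(conf, Seconds(5))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",   // assumption
      "key.deserializer"  -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"          -> "spark-demo")       // assumption

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("demo-topic"), kafkaParams))

    // Turn each Kafka record into a document and index it into ES.
    stream.map(r => Map("message" -> r.value())).saveToEs("spark-demo-index")

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Because es.index.auto.create is set to true above, the target index is created on first write; spark.streaming.backpressure.enabled and maxRatePerPartition then cap how fast Kafka records are pulled so the ES cluster is not overwhelmed.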