CDH Error Log

Application Log Type: stderr

Log Upload Time: Wed Dec 01 15:58:12 +0900 2021

Log Length: 46599

21/12/01 15:57:25 INFO util.SignalUtils: Registered signal handler for TERM

21/12/01 15:57:25 INFO util.SignalUtils: Registered signal handler for HUP

21/12/01 15:57:25 INFO util.SignalUtils: Registered signal handler for INT

21/12/01 15:57:27 INFO yarn.ApplicationMaster: Preparing Local resources

21/12/01 15:57:28 WARN hdfs.DFSUtil: Namenode for ark-hadoop-ns1 remains unresolved for ID nn1. Check your hdfs-site.xml file to ensure namenodes are configured properly.
21/12/01 15:57:28 WARN hdfs.DFSUtil: Namenode for ark-hadoop-ns1 remains unresolved for ID nn2. Check your hdfs-site.xml file to ensure namenodes are configured properly.
21/12/01 15:57:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/12/01 15:57:31 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
21/12/01 15:57:32 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.a
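
The two DFSUtil warnings mean the hostnames behind HA namenode IDs nn1 and nn2 of nameservice ark-hadoop-ns1 did not resolve to addresses when the container started; the ipc.Client warning right after is commonly just the HA client probing a standby namenode and is often benign. A minimal sketch for checking what the container's configuration actually resolves (assumes hadoop-common and the cluster's hdfs-site.xml on the classpath; the property names are the standard HDFS HA keys):

    // Dump the HA namenode addresses configured for nameservice "ark-hadoop-ns1".
    import org.apache.hadoop.conf.Configuration

    object CheckNameservice {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration()
        conf.addResource("hdfs-site.xml") // picked up from the classpath
        val ids = conf.get("dfs.ha.namenodes.ark-hadoop-ns1", "")
        println(s"namenode IDs: $ids")
        ids.split(",").filter(_.nonEmpty).foreach { id =>
          val addr = conf.get(s"dfs.namenode.rpc-address.ark-hadoop-ns1.$id")
          println(s"$id -> $addr") // a host here that DNS cannot resolve reproduces the warning
        }
      }
    }
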
21/12/01 15:57:33 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1630575922010_14328501_000001

21/12/01 15:57:33 INFO spark.SecurityManager: Changing view acls to: tito-park

21/12/01 15:57:33 INFO spark.SecurityManager: Changing modify acls to: tito-park

21/12/01 15:57:33 INFO spark.SecurityManager: Changing view acls groups to:

21/12/01 15:57:33 INFO spark.SecurityManager: Changing modify acls groups to:

21/12/01 15:57:33 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tito-park); groups with view permissions: Set(); users with modify permissions: Set(tito-park); groups with modify permissions: Set()
21/12/01 15:57:34 INFO security.AMCredentialRenewer: Scheduling login from keytab in 64770887 millis.
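
The renewal delay is easier to read in hours: 64,770,887 ms is just under 18 hours, about 75% of a 24-hour ticket lifetime, consistent with the AM scheduling its next keytab login at a fixed fraction of the lifetime. The 24-hour lifetime is an assumption (it is not shown in this log); the arithmetic is:

    // Back-of-the-envelope check on the scheduled keytab re-login.
    val renewalHours  = 64770887L / 3600000.0 // ≈ 17.99 hours
    val fractionOfDay = renewalHours / 24.0   // ≈ 0.75 of an assumed 24 h ticket lifetime
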

21/12/01 15:57:34 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread

21/12/01 15:57:34 INFO yarn.ApplicationMaster: Waiting for spark context initialization...

21/12/01 15:57:34 INFO spark.SparkContext: Running Spark version 2.2.0

21/12/01 15:57:35 INFO spark.SparkContext: Submitted application: OpenChatLabeledCoverImageLoaderApp

21/12/01 15:57:35 INFO spark.SecurityManager: Changing view acls to: tito-park

21/12/01 15:57:35 INFO spark.SecurityManager: Changing modify acls to: tito-park

21/12/01 15:57:35 INFO spark.SecurityManager: Changing view acls groups to:

21/12/01 15:57:35 INFO spark.SecurityManager: Changing modify acls groups to:

21/12/01 15:57:35 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(tito-park); groups with view permissions: Set(); users with modify permissions: Set(tito-park); groups with modify permissions: Set()
21/12/01 15:57:36 INFO util.Utils: Successfully started service 'sparkDriver' on port 44116.

21/12/01 15:57:36 INFO spark.SparkEnv: Registering MapOutputTracker

21/12/01 15:57:36 INFO spark.SparkEnv: Registering BlockManagerMaster

21/12/01 15:57:36 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information

21/12/01 15:57:36 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up

21/12/01 15:57:36 INFO storage.DiskBlockManager: Created local directory at /data1/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:36 INFO storage.DiskBlockManager: Created local directory at /data10/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:36 INFO storage.DiskBlockManager: Created local directory at /data11/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:36 INFO storage.DiskBlockManager: Created local directory at /data12/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:36 INFO storage.DiskBlockManager: Created local directory at /data2/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:36 INFO storage.DiskBlockManager: Created local directory at /data3/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:36 INFO storage.DiskBlockManager: Created local directory at /data4/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:36 INFO storage.DiskBlockManager: Created local directory at /data5/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:36 INFO storage.DiskBlockManager: Created local directory at /data6/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:36 INFO storage.DiskBlockManager: Created local directory at /data7/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:36 INFO storage.DiskBlockManager: Created local directory at /data8/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:36 INFO storage.DiskBlockManager: Created local directory at /data9/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:36 INFO memory.MemoryStore: MemoryStore started with capacity 2.2 GB
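
The 2.2 GB is Spark's unified storage/execution region, not the whole driver heap. Under the Spark 2.2 defaults (300 MB reserved system memory, spark.memory.fraction = 0.6) it back-calculates to roughly a 4 GB driver heap; the heap size is an inference, not something this log states:

    // Spark 2.x unified memory: usable = (heap - reserved) * spark.memory.fraction
    val heapMb   = 4096.0 // assumed driver heap (-Xmx4g); inferred, not shown in this log
    val reserved = 300.0  // reserved system memory (MB), Spark default
    val fraction = 0.6    // spark.memory.fraction, Spark default
    val storeMb  = (heapMb - reserved) * fraction // = 2277.6 MB ≈ 2.2 GB, matching the log
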

21/12/01 15:57:37 INFO spark.SparkEnv: Registering OutputCommitCoordinator

21/12/01 15:57:38 INFO util.log: Logging initialized @15846ms

21/12/01 15:57:38 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter

21/12/01 15:57:38 INFO server.Server: jetty-9.3.z-SNAPSHOT

21/12/01 15:57:38 INFO server.Server: Started @16090ms

21/12/01 15:57:39 INFO server.AbstractConnector: Started ServerConnector@3a8a78e3{HTTP/1.1,[http/1.1]}{0.0.0.0:34739}

21/12/01 15:57:39 INFO util.Utils: Successfully started service 'SparkUI' on port 34739.

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@57c4d194{/jobs,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2895d7a0{/jobs/json,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3080e0c3{/jobs/job,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@696edf1e{/jobs/job/json,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@b5a2ee2{/stages,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@305bac6d{/stages/json,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@49b0e826{/stages/stage,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7995bae5{/stages/stage/json,null,AVAILABLE,@Spar

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b91883e{/stages/pool,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@ce1b42e{/stages/pool/json,null,AVAILABLE,@Spark

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2b064d5c{/storage,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@49a98fdb{/storage/json,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@439acf12{/storage/rdd,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4e39555e{/storage/rdd/json,null,AVAILABLE,@Spark

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@792e5b94{/environment,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@68a245d7{/environment/json,null,AVAILABLE,@Spark

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@407b20e5{/executors,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7d8aa7ab{/executors/json,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3e302d87{/executors/threadDump,null,AVAILABLE,@S

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7701902e{/executors/threadDump/json,null,AVAILAB
21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@45d62c47{/static,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@316b805c{/,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@438027df{/api,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@77cfa04d{/jobs/job/kill,null,AVAILABLE,@Spark}

21/12/01 15:57:39 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@56e2eb17{/stages/stage/kill,null,AVAILABLE,@Spar

21/12/01 15:57:39 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.90.23.20:34739

21/12/01 15:57:39 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler

21/12/01 15:57:40 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1630575922010_143285
21/12/01 15:57:40 INFO util.Utils: Using initial executors = 0, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
21/12/01 15:57:40 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 45261.
21/12/01 15:57:40 INFO netty.NettyBlockTransferService: Server created on 10.90.23.20:45261

21/12/01 15:57:40 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
21/12/01 15:57:40 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.90.23.20, 45261, None)

21/12/01 15:57:40 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.90.23.20:45261 with 2.2 GB RAM, BlockManagerId(driver, 10.90.23.20, 45261, None)
21/12/01 15:57:40 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.90.23.20, 45261, None)

21/12/01 15:57:40 INFO storage.BlockManager: external shuffle service port = 7337
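
"Using initial executors = 0" together with an external shuffle service on port 7337 is the dynamic-allocation setup: executors are requested on demand, and shuffle files survive executor removal because the NodeManager-side service owns them. A sketch of the configuration that produces these lines (the property names are standard Spark ones; treating these exact values as this job's settings is an inference from the log):

    // Dynamic allocation backed by the YARN external shuffle service (sketch).
    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.dynamicAllocation.initialExecutors", "0") // matches "initial executors = 0"
      .set("spark.shuffle.service.enabled", "true")         // required by dynamic allocation
      .set("spark.shuffle.service.port", "7337")            // matches "external shuffle service port = 7337"
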

21/12/01 15:57:40 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.90.23.20, 45261, None)

21/12/01 15:57:41 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6f50ec45{/metrics/json,null,AVAILABLE,@Spark}

21/12/01 15:57:42 INFO scheduler.EventLoggingListener: Logging events to hdfs://doopey/user/spark/spark2ApplicationHistory/applicatio

21/12/01 15:57:42 INFO util.Utils: Using initial executors = 0, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
21/12/01 15:57:42 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
21/12/01 15:57:42 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegis
21/12/01 15:57:42 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done

21/12/01 15:57:42 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark

21/12/01 15:57:42 INFO internal.SharedState: loading hive config file: file:/data11/yarn/nm/usercache/tito-park/appcache/application_
21/12/01 15:57:43 INFO internal.SharedState: spark.sql.warehouse.dir is not set, but hive.metastore.warehouse.dir is set. Setting spark.sql.warehouse.dir to the value of hive.metastore.warehouse.dir ('/user/hive/warehouse').
21/12/01 15:57:43 INFO internal.SharedState: Warehouse path is '/user/hive/warehouse'.
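
In Spark 2.2, when spark.sql.warehouse.dir is unset, SharedState falls back to hive.metastore.warehouse.dir from the loaded hive-site.xml, which is exactly what the two lines above record. Setting it explicitly when the session is built avoids relying on the fallback; a minimal sketch (app name taken from the log, everything else standard API):

    // Pin the warehouse location instead of inheriting it from hive-site.xml.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("OpenChatLabeledCoverImageLoaderApp")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .enableHiveSupport() // wires up the Hive client initialized later in this log
      .getOrCreate()
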
21/12/01 15:57:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@e27e70{/SQL,null,AVAILABLE,@Spark}

21/12/01 15:57:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@cbb7d50{/SQL/json,null,AVAILABLE,@Spark}

21/12/01 15:57:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1788fb0c{/SQL/execution,null,AVAILABLE,@Spark}

21/12/01 15:57:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@a08945e{/SQL/execution/json,null,AVAILABLE,@Spar

21/12/01 15:57:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1b30c018{/static/sql,null,AVAILABLE,@Spark}

21/12/01 15:57:43 INFO yarn.ApplicationMaster:

===============================================================================

YARN executor launch context:

env:

CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CLIENT_CONF_DIR<CPS>$HADOOP_CONF_DIR<CPS

SPARK_DIST_CLASSPATH -> /hadoop/etc/hadoop-cluster:/hadoop/share/hadoop/common/lib/*:/hadoop/share/hadoop/common/*:/hadoop/share
SPARK_YARN_STAGING_DIR -> hdfs://doopey/user/tito-park/.sparkStaging/application_1630575922010_14328501

SPARK_USER -> tito-park

SPARK_YARN_MODE -> true

SPARK_LIBRARY_PATH -> /opt/cloudera/parcels/GPLEXTRAS-5.10.0.bm-1.cdh5.10.0.p0.41/lib/hadoop/lib/native

command:

LD_LIBRARY_PATH="/opt/cloudera/parcels/CDH-5.10.0.bm2-1.cdh5.10.0.p0.41/lib/hadoop/lib/native:/opt/cloudera/parcels/GPLEXTRAS-5.
{{JAVA_HOME}}/bin/java \

-server \

-Xmx1024m \

'-XX:+UseG1GC' \

'-XX:+UnlockDiagnosticVMOptions' \

'-XX:+G1SummarizeConcMark' \

'-XX:InitiatingHeapOccupancyPercent=35' \

'-verbose:gc' \

'-XX:+PrintGCDetails' \

'-XX:+PrintGCDateStamps' \

'-XX:OnOutOfMemoryError=kill -9 %p' \

-Djava.io.tmpdir={{PWD}}/tmp \

'-Dspark.authenticate=false' \

'-Dspark.shuffle.service.port=7337' \

-Dspark.yarn.app.container.log.dir=<LOG_DIR> \

org.apache.spark.executor.CoarseGrainedExecutorBackend \

--driver-url \

spark://CoarseGrainedScheduler@10.90.23.20:44116 \

--executor-id \

<executorId> \

--hostname \

<hostname> \

--cores \

1 \

--app-id \

application_1630575922010_14328501 \

--user-class-path \

file:$PWD/__app__.jar \

1><LOG_DIR>/stdout \

2><LOG_DIR>/stderr

resources:

__spark_libs__/xbean-asm5-shaded-4.4.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without
__spark_libs__/scala-library-2.11.8.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/netty-all-4.0.43.Final.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-withou
__spark_libs__/spark-unsafe_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-witho
__spark_libs__/hk2-locator-2.4.0-b34.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without
__spark_libs__/paranamer-2.6.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop
__spark_libs__/javax.ws.rs-api-2.0.1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without
__spark_libs__/parquet-encoding-1.8.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-withou
__spark_libs__/compress-lzf-1.0.3.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-ha
__spark_libs__/parquet-jackson-1.8.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without
__spark_libs__/metrics-core-3.1.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-ha
__spark_libs__/janino-3.0.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop/
__spark_libs__/javax.annotation-api-1.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-with
__spark_libs__/json4s-ast_2.11-3.2.11.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-withou
__spark_libs__/spark-mllib-local_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-
__spark_libs__/stream-2.7.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop/
__spark_libs__/commons-math3-3.4.1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-h
__spark_libs__/hk2-api-2.4.0-b34.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-had
__spark_libs__/spark-sql_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/spark-repl_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without
__spark_libs__/jersey-client-2.22.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/antlr4-runtime-4.5.3.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/macro-compat_2.11-1.1.1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-witho
__spark_libs__/spark-launcher_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-wit
__spark_libs__/commons-compiler-3.0.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-withou
__spark_libs__/spark-streaming_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-wi
__spark_libs__/commons-net-2.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoo
__spark_libs__/spark-graphx_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-witho
__spark_libs__/commons-lang3-3.5.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-had
__spark_libs__/oro-2.0.8.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop/jars
__spark_libs__/json-serde-1.3.7-jar-with-dependencies.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-
__spark_libs__/kryo-shaded-3.0.3.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-had
__spark_libs__/avro-mapred-1.7.7-hadoop2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-wit
__spark_libs__/breeze-macros_2.11-0.13.1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-wit
__spark_libs__/scala-reflect-2.11.8.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/json4s-core_2.11-3.2.11.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-witho
__spark_libs__/validation-api-1.1.0.Final.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-wi
__spark_libs__/breeze_2.11-0.13.1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-ha
__spark_libs__/spark-sketch_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-witho
__spark_libs__/py4j-0.10.4.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop/ja
__spark_libs__/minlog-1.3.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop/
__spark_libs__/spark-hive-exec_2.11-2.2.0.cloudera1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2
__spark_libs__/pmml-model-1.2.15.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-had
__spark_libs__/pmml-schema-1.2.15.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-ha
__spark_libs__/spark-network-common_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-b
__spark_libs__/jackson-core-2.6.5.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-ha
__spark_libs__/jline-2.12.1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop/
__spark_libs__/scala-compiler-2.11.8.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without
__spark_libs__/javax.servlet-api-3.1.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-witho
__spark_libs__/parquet-common-1.8.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/hk2-utils-2.4.0-b34.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-h
__spark_libs__/metrics-jvm-3.1.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-had
__spark_libs__/scala-parser-combinators_2.11-1.0.4.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2
__spark_libs__/RoaringBitmap-0.5.11.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/spark-hive_2.11-2.2.0.cloudera1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-b
__spark_libs__/jersey-container-servlet-2.22.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-b
__app__.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/user/tito-park/.sparkStaging/application_1630575922010_
__spark_libs__/core-1.1.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop/jar
__spark_libs__/spark-network-shuffle_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-
__spark_conf__ -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/user/tito-park/.sparkStaging/application_16305759220
__spark_libs__/objenesis-2.1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop
__spark_libs__/commons-crypto-1.0.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/spark-yarn_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without
__spark_libs__/aopalliance-repackaged-2.4.0-b34.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-
__spark_libs__/json4s-jackson_2.11-3.2.11.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-wi
__spark_libs__/parquet-format-2.3.1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/jersey-media-jaxb-2.22.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-with
__spark_libs__/spire_2.11-0.13.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-had
__spark_libs__/metrics-graphite-3.1.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-withou
__spark_libs__/machinist_2.11-0.6.1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/mesos-1.0.0-shaded-protobuf.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-w
__spark_libs__/scalap-2.11.8.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop
__spark_libs__/javax.inject-2.4.0-b34.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-withou
__spark_libs__/spire-macros_2.11-0.13.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-with
__spark_libs__/univocity-parsers-2.2.1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-witho
__spark_libs__/spark-core_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without
__spark_libs__/arpack_combined_all-0.1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-witho
__spark_libs__/netty-3.9.9.Final.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-had
__spark_libs__/commons-codec-1.10.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-ha
__spark_libs__/jersey-guava-2.22.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-h
__spark_libs__/javassist-3.18.1-GA.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-h
__spark_libs__/jcl-over-slf4j-1.7.16.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without
__spark_libs__/jersey-container-servlet-core-2.22.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2
__spark_libs__/lz4-1.3.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop/jars
__spark_libs__/jackson-annotations-2.6.5.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-wit
__spark_libs__/jsr305-1.3.9.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop/
__spark_libs__/jersey-common-2.22.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/opencsv-2.3.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop/ja
__spark_libs__/mysql-connector-java-5.1.35.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-w
__spark_libs__/ivy-2.4.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop/jars
__spark_libs__/osgi-resource-locator-1.0.1.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-w
__spark_libs__/spark-hive-thriftserver_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2
__spark_libs__/jersey-server-2.22.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/parquet-hadoop-1.8.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/spark-tags_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without
__spark_libs__/spark-mllib_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-withou
__spark_libs__/leveldbjni-all-1.8.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-ha
__spark_libs__/spark-catalyst_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-wit
__spark_libs__/pyrolite-4.13.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hadoop
__spark_libs__/jul-to-slf4j-1.7.16.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-h
__spark_libs__/jackson-databind-2.6.5.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-withou
hive-site.xml -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/conf/hive-site.xml" } size: 7068 timestamp: 15130
__spark_libs__/jackson-module-paranamer-2.6.5.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bi
__spark_libs__/metrics-json-3.1.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-ha
__spark_libs__/chill-java-0.8.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hado
__spark_libs__/parquet-column-1.8.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/spark-mesos_2.11-2.2.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-withou
__spark_libs__/jackson-module-scala_2.11-2.6.5.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-b
__spark_libs__/scala-xml_2.11-1.0.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/jtransforms-2.4.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-had
__spark_libs__/shapeless_2.11-2.3.2.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-
__spark_libs__/chill_2.11-0.8.0.jar -> resource { scheme: "hdfs" host: "doopey" port: -1 file: "/app/spark-2.2.0-bin-without-hado

===============================================================================
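
The launch context above is a direct rendering of the executor-side configuration: -Xmx1024m corresponds to spark.executor.memory=1g, --cores 1 to spark.executor.cores=1, and the quoted JVM flags to spark.executor.extraJavaOptions. A reconstruction of the equivalent submit-time settings (inferred from the context, not copied from the job's actual configuration):

    // Executor settings implied by the YARN launch context (reconstruction).
    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.executor.memory", "1g") // -Xmx1024m
      .set("spark.executor.cores", "1")   // --cores 1
      .set("spark.authenticate", "false") // -Dspark.authenticate=false
      .set("spark.shuffle.service.port", "7337")
      .set("spark.executor.extraJavaOptions",
        "-XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark " +
        "-XX:InitiatingHeapOccupancyPercent=35 -verbose:gc -XX:+PrintGCDetails " +
        "-XX:+PrintGCDateStamps -XX:OnOutOfMemoryError='kill -9 %p'")
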

21/12/01 15:57:44 INFO yarn.YarnRMClient: Registering the ApplicationMaster

21/12/01 15:57:44 INFO util.Utils: Using initial executors = 0, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
21/12/01 15:57:44 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
21/12/01 15:57:45 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 1.1.0 using Spark classes.

21/12/01 15:57:47 INFO client.HiveClientImpl: Attempting to login to Kerberos using principal: tito-park@KAKAO.HADOOP and keytab: tit
21/12/01 15:57:50 INFO hive.metastore: Trying to connect to metastore with URI thrift://doopey-app1.dakao.io:9083

21/12/01 15:57:50 INFO hive.metastore: Connected to metastore.
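
The keytab login (principal tito-park@KAKAO.HADOOP) followed by the Thrift connection to doopey-app1.dakao.io:9083 is the usual secured-metastore handshake; in cluster mode the credentials normally arrive via spark-submit's --principal and --keytab flags. The explicit form of that login, as a sketch (the keytab path is hypothetical, since the log line is truncated):

    // Explicit Kerberos login from a keytab, as the AM does on the user's behalf.
    import org.apache.hadoop.security.UserGroupInformation

    UserGroupInformation.loginUserFromKeytab(
      "tito-park@KAKAO.HADOOP",    // principal from the log
      "/path/to/tito-park.keytab") // hypothetical path; the actual one is truncated in the log
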

21/12/01 15:57:55 INFO session.SessionState: Created local directory: /data11/yarn/nm/usercache/tito-park/appcache/application_16305

21/12/01 15:57:55 INFO session.SessionState: Created local directory: /data11/yarn/nm/usercache/tito-park/appcache/application_16305
21/12/01 15:57:56 INFO session.SessionState: Created HDFS directory: /tmp/hive/tito-park/41213c09-726d-464d-a7b1-1981bbb929eb

21/12/01 15:57:56 INFO session.SessionState: Created local directory: /data11/yarn/nm/usercache/tito-park/appcache/application_16305

21/12/01 15:57:56 INFO session.SessionState: Created HDFS directory: /tmp/hive/tito-park/41213c09-726d-464d-a7b1-1981bbb929eb/_tmp_sp
21/12/01 15:57:56 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.1.0) is /user/hive/warehouse

21/12/01 15:58:01 INFO client.HiveClientImpl: Attempting to login to Kerberos using principal: tito-park@KAKAO.HADOOP and keytab: tit
21/12/01 15:58:02 INFO session.SessionState: Created local directory: /data11/yarn/nm/usercache/tito-park/appcache/application_16305
21/12/01 15:58:02 INFO session.SessionState: Created HDFS directory: /tmp/hive/tito-park/8cb07ac7-c039-41e5-a6ce-adfec0b99131

21/12/01 15:58:02 INFO session.SessionState: Created local directory: /data11/yarn/nm/usercache/tito-park/appcache/application_16305

21/12/01 15:58:02 INFO session.SessionState: Created HDFS directory: /tmp/hive/tito-park/8cb07ac7-c039-41e5-a6ce-adfec0b99131/_tmp_sp
21/12/01 15:58:03 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.1.0) is /user/hive/warehouse

21/12/01 15:58:03 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
