How to tell whether Spark is running in single-machine mode or cluster mode (spark-shell)

1. Create a user
# useradd spark
# passwd spark
2. Download the software
JDK, Scala, SBT, and Maven
Version information:
JDK    jdk-7u79-linux-x64.gz
Scala  scala-2.10.5.tgz
SBT    sbt-0.13.7.zip
Maven  apache-maven-3.2.5-bin.tar.gz
Note: if you only want a Spark environment, the JDK and Scala are enough; SBT and Maven are only needed for building from source later.
3. Unpack the files and configure environment variables
# cd /usr/local/
# tar xvf /root/jdk-7u79-linux-x64.gz
# tar xvf /root/scala-2.10.5.tgz
# tar xvf /root/apache-maven-3.2.5-bin.tar.gz
# unzip /root/sbt-0.13.7.zip
Edit the system-wide environment file:
# vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export SCALA_HOME=/usr/local/scala-2.10.5
export MAVEN_HOME=/usr/local/apache-maven-3.2.5
export SBT_HOME=/usr/local/sbt
export PATH=$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin:$MAVEN_HOME/bin:$SBT_HOME/bin
Apply the configuration:
# source /etc/profile
Verify that the environment variables took effect:
# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
# scala -version
Scala code runner version 2.10.5 -- Copyright 2002-2013, LAMP/EPFL
# mvn -version
Apache Maven 3.2.5 (12a6b3acbb81fd8cea1; 2014-12-15T01:29:23+08:00)
Maven home: /usr/local/apache-maven-3.2.5
Java version: 1.7.0_79, vendor: Oracle Corporation
Java home: /usr/local/jdk1.7.0_79/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-229.el7.x86_64", arch: "amd64", family: "unix"
# sbt --version
sbt launcher version 0.13.7
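Rather than running each version command by hand, the PATH checks above can be scripted. A minimal sketch (not part of the original setup; the tool list simply mirrors the four commands configured in /etc/profile):

```python
import shutil


def check_tools(tools):
    """Map each tool name to its resolved path on the PATH, or None if missing."""
    return {tool: shutil.which(tool) for tool in tools}


if __name__ == "__main__":
    # The four commands configured in /etc/profile above.
    for tool, path in check_tools(["java", "scala", "mvn", "sbt"]).items():
        status = path if path else "NOT FOUND -- check /etc/profile and PATH"
        print("%-6s %s" % (tool, status))
```

Any tool reported as missing means either the archive was unpacked somewhere else or the PATH export was not sourced.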
4. Bind the hostname
[root@spark01 ~]# vim /etc/hosts
192.168.244.147 spark01
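Editing /etc/hosts by hand is error-prone when repeated across nodes. A small hypothetical helper that appends a mapping only if the hostname is not already bound (the function name and behavior are my own sketch, not from the original post):

```python
def bind_hostname(hosts_path, ip, hostname):
    """Append an 'ip hostname' line to a hosts-format file.

    Returns False without writing if the hostname is already mapped,
    True after appending otherwise.
    """
    try:
        with open(hosts_path) as f:
            lines = f.read().splitlines()
    except FileNotFoundError:
        lines = []
    for line in lines:
        fields = line.split()
        # Skip comments; a hostname may appear in any column after the IP.
        if fields and not line.lstrip().startswith("#") and hostname in fields[1:]:
            return False
    with open(hosts_path, "a") as f:
        f.write("%s %s\n" % (ip, hostname))
    return True
```

Running it twice with the same hostname leaves the file unchanged the second time, which makes it safe to rerun on every node.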
5. Configure Spark
Switch to the spark user.
Download Hadoop and Spark (wget works fine):
spark-1.4.0
Unpack the archives and configure the environment variables.
Edit the spark user's environment file:
[spark@spark01 ~]$ vim .bash_profile
export SPARK_HOME=$HOME/spark-1.4.0-bin-hadoop2.6
export HADOOP_HOME=$HOME/hadoop-2.6.0
export HADOOP_CONF_DIR=$HOME/hadoop-2.6.0/etc/hadoop
export PATH=$PATH:$SPARK_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Apply the configuration:
[spark@spark01 ~]$ source .bash_profile
Edit the Spark configuration files:
[spark@spark01 ~]$ cd spark-1.4.0-bin-hadoop2.6/conf/
[spark@spark01 conf]$ cp spark-env.sh.template spark-env.sh
[spark@spark01 conf]$ vim spark-env.sh
Append the following:
export SCALA_HOME=/usr/local/scala-2.10.5
export SPARK_MASTER_IP=spark01
export SPARK_WORKER_MEMORY=1500m
export JAVA_HOME=/usr/local/jdk1.7.0_79
If you have memory to spare, set SPARK_WORKER_MEMORY higher; my VM only has 2 GB of RAM, so I gave it 1500m.
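A rough rule of thumb for sizing SPARK_WORKER_MEMORY on a single box is "total RAM minus a reserve for the OS and the master process". This heuristic is my own, not from the Spark docs:

```python
def suggest_worker_memory(total_mb, reserve_mb=512):
    """Suggest a SPARK_WORKER_MEMORY value for spark-env.sh.

    Leaves reserve_mb for the OS and the master JVM, with a 512m floor
    so the worker can still start on small machines.
    """
    return "%dm" % max(512, total_mb - reserve_mb)


# e.g. on a 2 GB VM:
print(suggest_worker_memory(2048))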
Configure slaves:
[spark@spark01 conf]$ cp slaves.template slaves
[spark@spark01 conf]$ vim slaves
Change localhost to spark01.
Start the master:
[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /home/spark/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-spark-org.apache.spark.deploy.master.Master-1-spark01.out
Check the log output:
[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ cd logs/
[spark@spark01 logs]$ cat spark-spark-org.apache.spark.deploy.master.Master-1-spark01.out
Spark Command: /usr/local/jdk1.7.0_79/bin/java -cp /home/spark/spark-1.4.0-bin-hadoop2.6/sbin/../conf/:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/spark-assembly-1.4.0-hadoop2.6.0.jar:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/home/spark/hadoop-2.6.0/etc/hadoop/ -Xms512m -Xmx512m -XX:MaxPermSize=128m org.apache.spark.deploy.master.Master --ip spark01 --port 7077 --webui-port 8080
========================================
16/01/16 15:12:30 INFO master.Master: Registered signal handlers for [TERM, HUP, INT]
16/01/16 15:12:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/16 15:12:32 INFO spark.SecurityManager: Changing view acls to: spark
16/01/16 15:12:32 INFO spark.SecurityManager: Changing modify acls to: spark
16/01/16 15:12:32 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/01/16 15:12:33 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/01/16 15:12:33 INFO Remoting: Starting remoting
16/01/16 15:12:33 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@spark01:7077]
16/01/16 15:12:33 INFO util.Utils: Successfully started service 'sparkMaster' on port 7077.
16/01/16 15:12:34 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:12:34 INFO server.AbstractConnector: Started SelectChannelConnector@spark01:6066
16/01/16 15:12:34 INFO util.Utils: Successfully started service on port 6066.
16/01/16 15:12:34 INFO rest.StandaloneRestServer: Started REST server for submitting applications on port 6066
16/01/16 15:12:34 INFO master.Master: Starting Spark master at spark://spark01:7077
16/01/16 15:12:34 INFO master.Master: Running Spark version 1.4.0
16/01/16 15:12:34 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:12:34 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:8080
16/01/16 15:12:34 INFO util.Utils: Successfully started service 'MasterUI' on port 8080.
16/01/16 15:12:34 INFO ui.MasterWebUI: Started MasterWebUI at http://192.168.244.147:8080
16/01/16 15:12:34 INFO master.Master: I have been elected leader! New state: ALIVE
The log confirms that the master started normally.
Now open the master's web UI, which listens on port 8080 by default.
Start the worker:
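Besides eyeballing the log and the web UI, the "master is up" check can be scripted by scanning the log for the two lines shown above. A stdlib-only sketch (the function names are hypothetical):

```python
import re


def master_url_from_log(log_text):
    """Return the spark:// master URL from Spark master log output, or None."""
    m = re.search(r"Starting Spark master at (spark://\S+)", log_text)
    return m.group(1) if m else None


def master_is_alive(log_text):
    """True once the master has logged its ALIVE leader-election state."""
    return "New state: ALIVE" in log_text
```

Feed it the contents of the Master-1-spark01.out file; a non-None URL plus an ALIVE state means the master came up cleanly.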
[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ sbin/start-slaves.sh spark://spark01:7077
spark01: Warning: Permanently added 'spark01,192.168.244.147' (ECDSA) to the list of known hosts.
spark@spark01's password:
spark01: starting org.apache.spark.deploy.worker.Worker, logging to /home/spark/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-spark-org.apache.spark.deploy.worker.Worker-1-spark01.out
Enter the spark user's password on spark01 when prompted.
The worker log confirms whether the worker started normally; it is too long to reproduce here.
[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ cd logs/
[spark@spark01 logs]$ cat spark-spark-org.apache.spark.deploy.worker.Worker-1-spark01.out
Start the Spark shell:
[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ bin/spark-shell --master spark://spark01:7077
16/01/16 15:33:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/16 15:33:18 INFO spark.SecurityManager: Changing view acls to: spark
16/01/16 15:33:18 INFO spark.SecurityManager: Changing modify acls to: spark
16/01/16 15:33:18 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/01/16 15:33:18 INFO spark.HttpServer: Starting HTTP Server
16/01/16 15:33:18 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:33:18 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:42300
16/01/16 15:33:18 INFO util.Utils: Successfully started service 'HTTP class server' on port 42300.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.4.0
      /_/
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
16/01/16 15:33:30 INFO spark.SparkContext: Running Spark version 1.4.0
16/01/16 15:33:30 INFO spark.SecurityManager: Changing view acls to: spark
16/01/16 15:33:30 INFO spark.SecurityManager: Changing modify acls to: spark
16/01/16 15:33:30 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/01/16 15:33:31 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/01/16 15:33:31 INFO Remoting: Starting remoting
16/01/16 15:33:31 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.244.147:43850]
16/01/16 15:33:31 INFO util.Utils: Successfully started service 'sparkDriver' on port 43850.
16/01/16 15:33:31 INFO spark.SparkEnv: Registering MapOutputTracker
16/01/16 15:33:31 INFO spark.SparkEnv: Registering BlockManagerMaster
16/01/16 15:33:31 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-7b7bd4bd-ff20-4e3d-a354-61a4ca7c4b2f/blockmgr-0e855210-3609-4204-b5e3-151e0c096c15
16/01/16 15:33:31 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
16/01/16 15:33:31 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-7b7bd4bd-ff20-4e3d-a354-61a4ca7c4b2f/httpd-56ac16d2-dd82-41cb-99d7-4d11ef36b42e
16/01/16 15:33:31 INFO spark.HttpServer: Starting HTTP Server
16/01/16 15:33:31 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:33:31 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:47633
16/01/16 15:33:31 INFO util.Utils: Successfully started service 'HTTP file server' on port 47633.
16/01/16 15:33:31 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/01/16 15:33:31 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:33:31 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/01/16 15:33:31 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/01/16 15:33:31 INFO ui.SparkUI: Started SparkUI at http://192.168.244.147:4040
16/01/16 15:33:32 INFO client.AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark01:7077/user/Master...
16/01/16 15:33:33 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-32-0000
16/01/16 15:33:33 INFO client.AppClient$ClientActor: Executor added: app-32-0000/0 on worker-14-192.168.244.147-58914 (192.168.244.147:58914) with 2 cores
16/01/16 15:33:33 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-32-0000/0 on hostPort 192.168.244.147:58914 with 2 cores, 512.0 MB RAM
16/01/16 15:33:33 INFO client.AppClient$ClientActor: Executor updated: app-32-0000/0 is now LOADING
16/01/16 15:33:33 INFO client.AppClient$ClientActor: Executor updated: app-32-0000/0 is now RUNNING
16/01/16 15:33:34 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33146.
16/01/16 15:33:34 INFO netty.NettyBlockTransferService: Server created on 33146
16/01/16 15:33:34 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/01/16 15:33:34 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.244.147:33146 with 265.4 MB RAM, BlockManagerId(driver, 192.168.244.147, 33146)
16/01/16 15:33:34 INFO storage.BlockManagerMaster: Registered BlockManager
16/01/16 15:33:34 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/01/16 15:33:34 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
16/01/16 15:33:38 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
16/01/16 15:33:43 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/01/16 15:33:43 INFO metastore.ObjectStore: ObjectStore, initialize called
16/01/16 15:33:44 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/01/16 15:33:44 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/01/16 15:33:44 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.244.147:46741/user/Executor#-]) with ID 0
16/01/16 15:33:44 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/01/16 15:33:45 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.244.147:33017 with 265.4 MB RAM, BlockManagerId(0, 192.168.244.147, 33017)
16/01/16 15:33:46 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/01/16 15:33:48 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/01/16 15:33:48 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5.
Encountered: "@" (64), after : "".
16/01/16 15:33:52 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:52 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:54 INFO metastore.ObjectStore: Initialized ObjectStore
16/01/16 15:33:54 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.1aa
16/01/16 15:33:55 INFO metastore.HiveMetaStore: Added admin role in metastore
16/01/16 15:33:55 INFO metastore.HiveMetaStore: Added public role in metastore
16/01/16 15:33:56 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
16/01/16 15:33:56 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
16/01/16 15:33:56 INFO repl.SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.
With the Spark shell open, try a trivial program — say hello to the world:
scala> println("helloworld")
helloworld
Look at the Spark web UI again: it now shows a Worker and a Running Application.
The pseudo-distributed Spark environment is now complete.
A few points worth noting:
1. Maven and SBT are optional; they are only needed to build Spark from source later. If you just want a working Spark environment, skip them.
2. This pseudo-distributed setup is the foundation of a real cluster: only a few settings need changing before copying everything to the slave nodes. Space is limited, so that is left for a later post.
In this example, you will use Kubernetes and Docker to create a functional Apache Spark cluster.
You will use Spark standalone mode to install a Spark master service and a set of Spark workers.
Readers already familiar with this material can skip straight to the tl;dr section.
The Docker images are primarily based on .
The source code is hosted at
Step Zero: Prerequisites
This example assumes that you have:
- A Kubernetes cluster installed and running.
- The kubectl command-line tool installed somewhere on your PATH.
- A running spark-master Kubernetes service, discoverable under the 'spark-master' DNS name through the kube DNS instance.
More details can be found in the Dockerfile in the source tree.
Step One: Create a namespace
$ kubectl create -f examples/spark/namespace-spark-cluster.yaml
Now list all namespaces:
$ kubectl get namespaces
spark-cluster name=spark-cluster Active
To make the kubectl client use this namespace, define a context and switch to it:
$ kubectl config set-context spark --namespace=spark-cluster --cluster=${CLUSTER_NAME} --user=${USER_NAME}
$ kubectl config use-context spark
The cluster name and user name can be found in your Kubernetes config file, ~/.kube/config.
Step Two: Start your master service
The master service is the master of the Spark cluster. Use the examples/spark/spark-master-controller.yaml file to create a replication controller running the Spark master service.
$ kubectl create -f examples/spark/spark-master-controller.yaml
replicationcontroller "spark-master-controller" created
Then use the examples/spark/spark-master-service.yaml file to create a logical service endpoint that Spark workers can use to reach the master pod.
$ kubectl create -f examples/spark/spark-master-service.yaml
service "spark-master" created
You can then create a service for the Spark master web UI:
$ kubectl create -f examples/spark/spark-webui.yaml
service "spark-webui" created
Check that the master is running and reachable:
$ kubectl get pods
NAME                            AGE
spark-master-controller-5u0q5
Check the log to see the master's status (use the pod name from the previous command):
$ kubectl logs spark-master-controller-5u0q5
starting org.apache.spark.deploy.master.Master, logging to /opt/spark-1.5.1-bin-hadoop2.6/sbin/../logs/spark--org.apache.spark.deploy.master.Master-1-spark-master-controller-g0oao.out
Spark Command: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -cp /opt/spark-1.5.1-bin-hadoop2.6/sbin/../conf/:/opt/spark-1.5.1-bin-hadoop2.6/lib/spark-assembly-1.5.1-hadoop2.6.0.jar:/opt/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar -Xms1g -Xmx1g org.apache.spark.deploy.master.Master --ip spark-master --port 7077 --webui-port 8080
========================================
15/10/27 21:25:05 INFO Master: Registered signal handlers for [TERM, HUP, INT]
15/10/27 21:25:05 INFO SecurityManager: Changing view acls to: root
15/10/27 21:25:05 INFO SecurityManager: Changing modify acls to: root
15/10/27 21:25:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/10/27 21:25:06 INFO Slf4jLogger: Slf4jLogger started
15/10/27 21:25:06 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@spark-master:7077]
15/10/27 21:25:06 INFO Utils: Successfully started service 'sparkMaster' on port 7077.
15/10/27 21:25:07 INFO Master: Starting Spark master at spark://spark-master:7077
15/10/27 21:25:07 INFO Master: Running Spark version 1.5.1
15/10/27 21:25:07 INFO Utils: Successfully started service 'MasterUI' on port 8080.
15/10/27 21:25:07 INFO MasterWebUI: Started MasterWebUI at http://spark-master:8080
15/10/27 21:25:07 INFO Utils: Successfully started service on port 6066.
15/10/27 21:25:07 INFO StandaloneRestServer: Started REST server for submitting applications on port 6066
15/10/27 21:25:07 INFO Master: I have been elected leader! New state: ALIVE
Once the master is confirmed running, you can reach the Spark web UI through the Kubernetes cluster proxy:
kubectl proxy --port=8001
The UI is then reachable through the proxy.
Step Three: Start the Spark workers
Spark workers play a vital role in a Spark cluster: they provide execution resources and data caching for programs.
The workers need the master service to be running.
Use the examples/spark/spark-worker-controller.yaml file to create a replication controller that manages the worker pods.
$ kubectl create -f examples/spark/spark-worker-controller.yaml
replicationcontroller "spark-worker-controller" created
Check that the workers are running.
If you have the Spark web UI open, the workers should appear in it once ready. (Pulling the images and starting the pods can take a little while.) You can also query their status directly:
$ kubectl get pods
spark-master-controller-5u0q5
spark-worker-controller-e8otp
spark-worker-controller-fiivl
spark-worker-controller-ytc7o
$ kubectl logs spark-master-controller-5u0q5
15/10/26 18:20:14 INFO Master: Registering worker 10.244.1.13:53567 with 2 cores, 6.3 GB RAM
15/10/26 18:20:14 INFO Master: Registering worker 10.244.2.7:46195 with 2 cores, 6.3 GB RAM
15/10/26 18:20:14 INFO Master: Registering worker 10.244.3.8:39926 with 2 cores, 6.3 GB RAM
If the kubectl proxy from the previous section is still running, you should also see the workers in the UI. Note: the UI contains hyperlinks to each worker's own web UI. Those links do not work (each link tries to connect to a cluster IP, which Kubernetes does not automatically proxy).
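The "Registering worker" lines above are also easy to summarize programmatically, which is handy when many workers register. A sketch (the regex matches the log format shown above; the function name is hypothetical):

```python
import re

WORKER_RE = re.compile(
    r"Registering worker (\S+):(\d+)\s+with (\d+) cores, ([\d.]+) GB RAM"
)


def summarize_workers(log_text):
    """Extract (host, port, cores, ram_gb) tuples from a Spark master log
    and return them with total cores and total RAM across workers."""
    workers = [(h, int(p), int(c), float(r))
               for h, p, c, r in WORKER_RE.findall(log_text)]
    total_cores = sum(w[2] for w in workers)
    total_ram_gb = sum(w[3] for w in workers)
    return workers, total_cores, total_ram_gb
```

For the three registrations shown above this reports 3 workers totalling 6 cores and roughly 18.9 GB of RAM.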
Step Four: Start the Zeppelin UI to launch jobs on your Spark cluster
The Zeppelin UI pod can be used to launch jobs into the Spark cluster, either from a web notebook or through the traditional Spark command line. See the Zeppelin and Spark architecture documentation for more details.
$ kubectl create -f examples/spark/zeppelin-controller.yaml
replicationcontroller "zeppelin-controller" created
Zeppelin needs the master service to be running.
Check that Zeppelin is running:
$ kubectl get pods -l component=zeppelin
zeppelin-controller-ja09s
Step Five: Do something with the cluster
Now you have two choices: you can work with the Spark cluster through a graphical UI, or you can stay on the CLI.
A quick start with pyspark
Use kubectl exec to connect to the Zeppelin driver and run a pipeline:
$ kubectl exec zeppelin-controller-ja09s -it pyspark
Python 2.7.9 (default, Mar
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more
information.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.5.1
      /_/
Using Python version 2.7.9 (default, Mar
SparkContext available as sc, HiveContext available as sqlContext.
>>> sc.textFile("gs://dataflow-samples/shakespeare/*").map(lambda s: len(s.split())).sum()
Congratulations — you just counted the words in all of Shakespeare's plays.
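The pipeline above is just a per-line map followed by a sum. The same logic in plain Python, using a local list of lines to stand in for the GCS sample files (the sample text here is my own illustration):

```python
def count_words(lines):
    """Mimic sc.textFile(...).map(lambda s: len(s.split())).sum()
    on a local iterable of text lines."""
    return sum(len(s.split()) for s in lines)


sample = [
    "To be, or not to be, that is the question:",
    "Whether 'tis nobler in the mind to suffer",
]
print(count_words(sample))
```

The Spark version distributes exactly this computation across the workers; str.split() with no arguments splits on runs of whitespace in both cases.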
Go graphical: use the UI!
Take the Zeppelin pod created earlier and set up port forwarding for its web UI:
$ kubectl port-forward zeppelin-controller-ja09s 8080:8080
This forwards requests to port 8080 on localhost into port 8080 of the container. You can then reach Zeppelin through it.
Create a "New Notebook". In it, type:
print sc.textFile("gs://dataflow-samples/shakespeare/*").map(lambda s: len(s.split())).sum()
You now have services and replication controllers for the Spark master, the Spark workers, and the Spark driver. You can take this example further and start using the Apache Spark cluster you just created; see the Spark documentation for more information.
kubectl create -f examples/spark
kubectl get pods # Make sure everything is running
kubectl proxy --port=8001 # Start an application proxy, if you want to see the Spark Master WebUI
kubectl get pods -lcomponent=zeppelin # Get the driver pod to interact with
The master UI is then reachable through the proxy.
You can interact with the Spark cluster using the traditional spark-shell / spark-submit / pyspark command lines via kubectl exec, or, if you want to interact with Zeppelin:
kubectl port-forward zeppelin-controller-abc123 8080:8080
Known issues with Spark
This setup provides a Spark configuration restricted to the cluster network, meaning the Spark master is only reachable through a cluster service. If you need to submit jobs from an external client other than Zeppelin or spark-submit inside the zeppelin pod, you will have to provide that client with a way to reach the examples/spark/spark-master-service.yaml service. See the services documentation for more information.
Known issues with Zeppelin
- The Zeppelin pod is large, so pulling its image may take a while depending on your network. The size of the Zeppelin pod is something we are working to reduce; see issue #17231.
- The first time you run Zeppelin, the pipeline may take a long time (about a minute); it seems to need considerable time to load.
- On GKE, kubectl port-forward does not appear to stay stable over long periods. If Zeppelin shows as disconnected, the port-forward has probably failed and needs restarting. See #12179.
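Since the port-forward is known to drop, it can be kept alive with a small restart loop. This is a generic sketch of that idea, not an official workaround; the kubectl command in the comment is illustrative, and the `run` hook exists so the policy can be exercised without spawning processes:

```python
import subprocess
import time


def supervise(argv, max_restarts=5, backoff_s=2.0, run=None):
    """Run a command and restart it whenever it exits non-zero,
    up to max_restarts times, sleeping backoff_s between attempts.

    Returns the number of restarts on a clean (exit 0) finish;
    raises RuntimeError once the restart budget is exhausted.
    """
    if run is None:
        run = lambda: subprocess.call(argv)
    restarts = 0
    while True:
        code = run()
        if code == 0:
            return restarts
        restarts += 1
        if restarts > max_restarts:
            raise RuntimeError("%s keeps failing (exit %d)" % (argv[0], code))
        time.sleep(backoff_s)


# Example (not executed here): keep the Zeppelin port-forward alive.
# supervise(["kubectl", "port-forward", "zeppelin-controller-ja09s", "8080:8080"])
```

On a clean exit the loop stops, so quitting kubectl with Ctrl-C ends supervision as well.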