Download spark-2.2.0-bin-hadoop2.7
Spark has multiple APIs (Spark Scala, Spark Java, Spark Python, Spark R) with which it can be set up. Download spark-2.2.0-bin-hadoop2.7.
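If the archive is not already on the node, it can be fetched with wget; the URL below assumes the standard Apache archive layout and uses the 2.1.0 build, since that is the archive unpacked in the next step (substitute 2.2.0 if that is the build you downloaded):

hadoop]$ wget https://archive.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz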
Un-compress the downloaded file:

hadoop]$ tar -xvf spark-2.1.0-bin-hadoop2.7.tgz

Rename the extracted directory to a short name:

hadoop]$ mv spark-2.1.0-bin-hadoop2.7 spark-2.1.0
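Optionally, list the renamed directory to confirm the paths used later (bin, sbin and examples/jars) are in place; these checks are just a suggestion:

hadoop]$ ls spark-2.1.0
hadoop]$ ls spark-2.1.0/examples/jars | grep spark-examples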
Set the Spark environment variables:

export SPARK_HOME=/mnt/oracle/hadoop/spark-2.1.0
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$SQOOP_HOME/bin:$SPARK_HOME/bin:$PATH
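These exports only apply to the current shell session. One common way to make them permanent, assuming a bash login shell where ~/.bashrc is read, is to append them there and re-source the file:

hadoop]$ cat >> ~/.bashrc <<'EOF'
export SPARK_HOME=/mnt/oracle/hadoop/spark-2.1.0
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$SQOOP_HOME/bin:$SPARK_HOME/bin:$PATH
EOF
hadoop]$ source ~/.bashrc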
Validate the setup by querying the Spark path and version:

~]$ echo $SPARK_HOME
~]$ which spark-shell
sbin]$ spark-shell
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
To adjust logging level use sc.setLogLevel(newLevel).
Spark context available as 'sc' (master = local, app id = local-1489331022269).
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_80)
Type in expressions to have them evaluated.
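From the scala> prompt, a quick smoke test confirms the context is usable; the expressions below are only a suggestion (sc.version should print 2.1.0 and the count should return 100):

scala> sc.version
scala> sc.parallelize(1 to 100).count()
scala> :quit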
Start the Master and Slave in Spark and view their logs under /mnt/oracle/hadoop/spark-2.1.0/logs/.
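The exact start command is not shown above; on a standalone cluster the launcher scripts under $SPARK_HOME/sbin are the usual way to bring up the daemons, for example (a sketch, not necessarily the exact command used here):

sbin]$ ./start-all.sh                         # master plus the workers listed in conf/slaves
sbin]$ ./start-master.sh                      # or start the master alone...
sbin]$ ./start-slave.sh spark://node1:7077    # ...and each worker against it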
Submit a sample program to validate the environment. This program checks the value of 'pi' (22/7); the memory allocated for the job is 1G and the total executor cores are 1.

jars]$ spark-submit --class org.apache.spark.examples.SparkPi --master spark://node1:7077 --executor-memory 1G --total-executor-cores 1 /mnt/oracle/hadoop/spark-2.1.0/examples/jars/spark-examples_2.11-2.1.0.jar 10
17/03/12 11:20:41 INFO SparkContext: Running Spark version 2.1.0
17/03/12 11:20:43 INFO Utils: Successfully started service 'sparkDriver' on port 59659.
17/03/12 11:20:43 INFO SparkEnv: Registering MapOutputTracker
17/03/12 11:20:43 INFO SparkEnv: Registering BlockManagerMaster
17/03/12 11:20:43 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/03/12 11:20:43 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/03/12 11:20:43 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-7fbe65f7-b525-4ef3-b85b-ded22d85e476
17/03/12 11:20:43 INFO MemoryStore: MemoryStore started with capacity 408.9 MB
17/03/12 11:20:43 INFO SparkEnv: Registering OutputCommitCoordinator
17/03/12 11:20:43 WARN Utils: Service 'SparkUI' could not bind on port 4040.
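The final WARN only means port 4040 was already in use (for example by another running driver), and Spark falls back to the next free port for the UI. If a fixed UI port is preferred, it can be set with the standard spark.ui.port property; 4041 below is an arbitrary choice:

jars]$ spark-submit --conf spark.ui.port=4041 --class org.apache.spark.examples.SparkPi --master spark://node1:7077 --executor-memory 1G --total-executor-cores 1 /mnt/oracle/hadoop/spark-2.1.0/examples/jars/spark-examples_2.11-2.1.0.jar 10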