How to Set Up Amazon EMR (Amazon Elastic MapReduce)

Amazon EMR is a web service for processing enormous amounts of data easily and cost-effectively. It runs a hosted Hadoop framework on the web-scale infrastructure of Amazon EC2 and Amazon S3.

Step 1 − Sign in to AWS account and select Amazon EMR on management console.

Step 2 − Create an Amazon S3 bucket for cluster logs and output data. (The procedure is explained in detail in the Amazon S3 section.)
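The same bucket can be created from the command line. A minimal sketch, assuming the AWS CLI is installed and configured with valid credentials; the bucket name and region below are placeholders (S3 bucket names must be globally unique):

```shell
# Create an S3 bucket to hold EMR cluster logs and query output.
# "my-emr-tutorial-bucket" is a hypothetical name -- choose your own.
aws s3 mb s3://my-emr-tutorial-bucket --region us-east-1
```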

Step 3 − Launch Amazon EMR cluster.

– Use this link to open Amazon EMR console − https://console.aws.amazon.com/elasticmapreduce/home

– Select Create cluster and provide the required details on the Cluster Configuration page.

– Leave the Tags section options as default and proceed.

– On the Software Configuration section, leave the options as default.

– On the File System Configuration section, leave the EMRFS options as set by default. EMRFS is an implementation of HDFS that allows Amazon EMR clusters to store data on Amazon S3.

– On the Hardware Configuration section, select m3.xlarge in the EC2 instance type field and leave the other settings as default. Click the Next button.

– On the Security and Access section, select a key pair from the list in the EC2 key pair field and leave the other settings as default.

– On the Bootstrap Actions section, leave the fields as set by default and click the Add button. Bootstrap actions are scripts that run on every cluster node during setup, before Hadoop starts.

– On the Steps section, leave the settings as default and proceed.

– Click the Create Cluster button; the Cluster Details page opens. From here we can run the Hive script as a cluster step and use the Hue web interface to query the data.
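The console choices above can also be expressed as a single AWS CLI call. A hedged sketch only: the cluster name, release label, key pair name, and bucket are placeholders, and the release label should be checked against the versions that still support the m3.xlarge instance type:

```shell
# Launch an EMR cluster matching the console walkthrough:
# default software config plus Hive and Hue, m3.xlarge instances,
# an EC2 key pair for SSH access, and logs written to the S3 bucket.
aws emr create-cluster \
  --name "Hive tutorial cluster" \
  --release-label emr-5.36.0 \
  --applications Name=Hadoop Name=Hive Name=Hue \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --ec2-attributes KeyName=my-key-pair \
  --use-default-roles \
  --log-uri s3://my-emr-tutorial-bucket/logs/
```

The command prints the new cluster ID (of the form `j-XXXXXXXXXXXXX`), which later commands use to address the cluster.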

Step 4 − Run the Hive script using the following steps.

– Open the Amazon EMR console and select the desired cluster.

– Move to the Steps section and expand it. Then click the Add step button.

– The Add Step dialog box opens. Fill in the required fields, then click the Add button.
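The Add Step dialog can likewise be driven from the command line. A sketch under assumptions: the cluster ID, script path, and bucket name are hypothetical, and the Hive script is assumed to have been uploaded to S3 beforehand:

```shell
# Submit a Hive script to the running cluster as a step.
# Replace the cluster ID and the S3 path with your own values.
aws emr add-steps \
  --cluster-id j-XXXXXXXXXXXXX \
  --steps 'Type=HIVE,Name=Run Hive script,ActionOnFailure=CONTINUE,Args=[-f,s3://my-emr-tutorial-bucket/scripts/sample.q]'
```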

– To view the output of the Hive script, use the following steps −

– Open the Amazon S3 console and select the S3 bucket used for the output data.

– Select the output folder.

– The query writes the results into a separate folder. Select os_requests.

– The output is stored in a text file, which can be downloaded.
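Instead of downloading through the console, the output files can be listed and fetched with the CLI. The bucket and folder names below follow the hypothetical paths used earlier:

```shell
# List the result files the query wrote, then download them
# into the current directory.
aws s3 ls s3://my-emr-tutorial-bucket/output/os_requests/
aws s3 cp s3://my-emr-tutorial-bucket/output/os_requests/ . --recursive
```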

