Our client, a large conglomerate, is looking for a Big Data Hadoop Operations Administrator for their office in Mumbai. The candidate should have a broad set of technology skills to build and support robust Hadoop solutions for big data problems, and should learn quickly as the industry evolves.
Primary responsibilities of this role include owning, tracking, and resolving Hadoop-related incidents and requests, fulfilling requests and resolving incidents within SLAs, and reviewing service-related reports on a daily basis.
- Must be proficient in Apache Atlas, Knox, Ranger, Kerberos, Active Directory/OpenAM, and cluster configuration
- Experience administering large, high-performance Hadoop clusters
- Install and maintain platform-level Hadoop infrastructure, including supporting tools.
- Strong knowledge of in-memory databases and Apache Hadoop distributions (e.g., HDFS, MapReduce, Hive, Pig, Flume, Oozie, Spark)
- Strong troubleshooting skills across Hadoop technologies, including HDFS, MapReduce2, YARN, Hive, Pig, Flume, HBase, Cassandra, Accumulo, Tez, Sqoop, ZooKeeper, Spark, Kafka, and Storm.
- Hands-on expertise in ingesting and processing data on a Hadoop cluster using up-to-date tools and techniques.
- Proficient in technical configuration, administration and monitoring of Hadoop clusters.
- Support technical team members for automation, installation and configuration tasks.
- Suggest improvements to process automation scripts and tasks.
- Ability to grasp the problem at hand and recognize the appropriate approach, tools, and technologies to solve it.
- In-depth understanding of system-level resource consumption (memory, CPU, OS, storage, and networking) for Hadoop clusters.
- Familiarity with version control, job scheduling, and configuration management tools such as GitHub, Puppet, and UC4.