7+ years of overall sysadmin experience, including 3+ years of relevant experience in the Hadoop ecosystem.
Hands-on experience deploying & maintaining Big Data stack solutions. Good exposure to Hadoop production deployment, monitoring, and the relevant tools & techniques.
Excellent knowledge of current application/production support practices & tools.
Must have exposure to Hadoop ecosystem projects (Flume, Sqoop, Ambari, Knox, Ranger, Hive, etc.) and Hadoop connectors for either Cloudera or Hortonworks distributions.
Familiar with Hadoop operations best practices, including but not limited to installation & setup, cluster configuration, security governance, and data governance
Advanced Linux commands
Deep knowledge of DevOps tools; proficient in at least one of SaltStack (preferred), Ansible, or Chef
Must have independently managed distributed application deployment & production performance monitoring
Exposure to technical skills such as:
Command line interfaces for various flavors of Linux/Unix systems
Monitoring tools such as Nagios, Ganglia, or HP Operations Manager (HP OM)
Databases (Oracle, HBase, MongoDB)
Scripting (Shell, Perl, etc.)
Build scripting (Make, Ant)
Networking (DNS, TCP/IP, HTTP), etc.
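As a concrete illustration of the scripting & monitoring skills listed above, a minimal shell sketch of a disk-usage check of the kind a Hadoop admin might plug into Nagios. All names here (check_disk_usage, THRESHOLD, the sample df output) are illustrative, not part of any particular toolchain:

```shell
#!/bin/sh
# Illustrative sketch: flag filesystems whose usage exceeds a threshold,
# parsing `df -P`-style output with awk.

THRESHOLD=80

check_disk_usage() {
    # Expects `df -P`-style lines: device size used avail use% mount.
    # Skips the header, strips the % sign, prints mounts over the limit.
    awk -v limit="$THRESHOLD" 'NR > 1 {
        gsub(/%/, "", $5)
        if ($5 + 0 > limit) print $6 " at " $5 "%"
    }'
}

# Canned sample output so the sketch runs anywhere (real use: df -P | check_disk_usage)
df_sample='Filesystem 1024-blocks Used Available Capacity Mounted
/dev/sda1 100 90 10 90% /data
/dev/sda2 100 40 60 40% /logs'

printf '%s\n' "$df_sample" | check_disk_usage
```

In real use the canned sample would be replaced by live `df -P` output, and the print statement by a Nagios-style exit code.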
Ability to clearly document & report problems encountered & tasks completed
Self-starter who needs minimal supervision; confident & detail-oriented
Manage the application deployment process based on configurations & instructions from architecture team (including Hadoop cluster deployment & management)
Responsible for independently managing upkeep of deployed software and continually monitoring production performance to ensure that production SLAs are met consistently
Act as the direct interface with various stakeholders and triage production issues with engineering & other teams
Continuously identify opportunities to automate production monitoring & application deployment
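One common building block of such automation is a retrying health probe wrapped around a deployment or monitoring step. A minimal sketch, assuming nothing about the actual services involved; retry, MAX_RETRIES, and probe_service are hypothetical names, and the stand-in probe exists only so the script runs anywhere:

```shell
#!/bin/sh
# Illustrative sketch: retry a command up to MAX_RETRIES times,
# reporting success or failure -- a typical post-deploy health loop.

MAX_RETRIES=3

retry() {
    attempt=1
    while [ "$attempt" -le "$MAX_RETRIES" ]; do
        if "$@"; then
            echo "ok after attempt $attempt"
            return 0
        fi
        attempt=$((attempt + 1))
        sleep 1
    done
    echo "failed after $MAX_RETRIES attempts" >&2
    return 1
}

# Stand-in probe that succeeds on the 2nd call.
# In real use this would be e.g. a curl against a service health URL.
count_file=$(mktemp)
echo 0 > "$count_file"
probe_service() {
    n=$(cat "$count_file")
    n=$((n + 1))
    echo "$n" > "$count_file"
    [ "$n" -ge 2 ]
}

retry probe_service
rm -f "$count_file"
```

A cron entry or deployment pipeline step would typically call such a wrapper and alert on a non-zero exit status.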
Follow & implement the methods & procedures suggested
Qualification: BE / MCA with profile-relevant certifications