Expertise in computing infrastructure (servers, storage, networking) and in Linux cluster provisioning, management, and tuning.
Experience in building and tuning Hadoop clusters.
Experience in configuring, running, and tuning Big Data benchmarks.
Proficiency in Linux automation using scripting languages such as Bash and Python (see the sketch after this list).
Strong knowledge of configuration management tools such as Puppet, Chef, or Ansible (preferred).
Deep understanding of Hadoop ecosystem components.
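
As a rough illustration of the Linux automation skills listed above, here is a minimal Python sketch that runs a health-check command across cluster nodes over SSH. The host names and the command are placeholder assumptions, not part of this posting; in practice the inventory would come from a configuration management tool such as Ansible.

    #!/usr/bin/env python3
    """Minimal sketch: run a health-check command on every node in a cluster."""
    import subprocess

    NODES = ["node01", "node02", "node03"]      # assumed host names (placeholders)
    CHECK = "df -h /data | tail -1"             # example check: data-disk usage

    def run_on_node(node: str, command: str) -> str:
        # Relies on passwordless SSH being configured for the admin user.
        result = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", node, command],
            capture_output=True, text=True, timeout=30,
        )
        return result.stdout.strip() or result.stderr.strip()

    if __name__ == "__main__":
        for node in NODES:
            print(f"{node}: {run_on_node(node, CHECK)}")
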
Roles and Responsibilities:
Continuously evaluate the performance of Hadoop infrastructure components such as MapReduce, YARN, HBase, Hive, and Spark.
Design and implement features and enhancements for tools that automate the deployment, operation, and monitoring of Hadoop infrastructure (a brief monitoring sketch follows this list).
Participate in the full software development cycle, from design through implementation, testing, release, and maintenance.
Excellent problem-solving and analytical skills.
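
As a rough illustration of the monitoring automation referenced above, here is a minimal Python sketch that polls the YARN ResourceManager REST API for cluster metrics. The host name, port, and alert threshold are assumptions, not part of this posting.

    #!/usr/bin/env python3
    """Minimal sketch: poll the YARN ResourceManager for basic cluster health."""
    import json
    import urllib.request

    # Assumed ResourceManager web address; adjust to the actual cluster.
    RM_METRICS_URL = "http://rm.example.com:8088/ws/v1/cluster/metrics"

    def fetch_cluster_metrics(url: str = RM_METRICS_URL) -> dict:
        # The ResourceManager returns a JSON document with a top-level
        # "clusterMetrics" object (activeNodes, lostNodes, appsPending, ...).
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)["clusterMetrics"]

    def main() -> None:
        metrics = fetch_cluster_metrics()
        print(f"active nodes: {metrics.get('activeNodes')}")
        print(f"lost nodes:   {metrics.get('lostNodes')}")
        print(f"apps pending: {metrics.get('appsPending')}")
        # A lost NodeManager is a simple red flag worth alerting on.
        if metrics.get("lostNodes", 0) > 0:
            raise SystemExit("WARNING: cluster has lost nodes")

    if __name__ == "__main__":
        main()
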