- 4+ years' experience operating Hadoop services, preferably in a large-scale production environment
- Knowledge of capacity planning, management, and cluster operations
- Experience monitoring, troubleshooting, and tuning services and applications
- Familiarity with standard Hadoop ecosystem features and applications such as MapReduce, HBase, and Hive
- Experience running and troubleshooting Java applications; a Java programming background is required
- Linux administration and troubleshooting skills are a plus
- Own and manage several Hadoop clusters and other Hadoop ecosystem services in development and production environments
- Work closely with engineering teams and participate in infrastructure and framework development as required
- Automate the deployment and management of Hadoop services, including implementing monitoring
- Contribute to the evolving architecture of our services to meet changing requirements for scaling, reliability, performance, manageability, and cost
- Plan capacity of Hadoop clusters based on application requirements