The candidate should understand the components of the Hadoop/Big Data ecosystem and be able to design solutions based on them to solve industry big data problems.
Previous experience designing, developing, and delivering applications that leverage the Hadoop/Big Data ecosystem is required.
1. Hands-on experience with the Cloudera or Hortonworks Hadoop distributions
2. Experience writing MapReduce jobs
3. Excellent skills in Java/J2EE, Web Services, Spring, XML, Linux, and shell scripting
4. Experience with Hive Query Language (Hive/HQL)
5. Experience writing Linux/Unix shell scripts
Good to Have:
1. Hands-on experience or architecting skills with NoSQL technologies such as HBase, Cassandra, MongoDB, or DataStax is an added advantage.
2. Experience with XML, Cassandra, Flume, and Sqoop.