Roles and Responsibilities:
- The position supports data creation, ingestion, management, and client consumption.
- The individual must be well versed in Big Data fundamentals such as HDFS and YARN.
- More than a working knowledge of Sqoop and Hive is required, including partitioning, data formats, compression, and performance tuning.
- Strong knowledge of Spark in either Python or Scala is preferred; hands-on Spark experience is a must.
- SQL for Teradata and Hive queries is required. Knowledge of other big-data tools, including NoSQL stores and query engines such as Cassandra, Drill, and Impala, is a plus.
- A data science background, to collaborate with the Data Science (DS) team.
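As a minimal illustration of the Hive partitioning, data-format, and compression knowledge the posting calls for, a candidate might be expected to read and write DDL like the following sketch (the `sales` and `staging_sales` table and column names are hypothetical):

```sql
-- Hypothetical Hive DDL: a partitioned table stored as ORC with Snappy compression.
CREATE TABLE IF NOT EXISTS sales (
  order_id BIGINT,
  amount   DECIMAL(10,2)
)
PARTITIONED BY (order_date STRING)       -- enables partition pruning on date filters
STORED AS ORC                            -- columnar format suited to analytic scans
TBLPROPERTIES ('orc.compress' = 'SNAPPY');

-- Dynamic-partition load from a staging table (requires nonstrict mode).
SET hive.exec.dynamic.partition.mode = nonstrict;
INSERT INTO TABLE sales PARTITION (order_date)
SELECT order_id, amount, order_date
FROM staging_sales;
```

Choosing the partition column, file format, and compression codec as above is exactly the kind of performance-tuning decision the role involves.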