We have a great opportunity with one of our insurance clients in Bangalore for a Big Data Developer (2 to 6 years), a permanent role with the client. Please go through the detailed JD below and share your CV if suitable, and only if you are available to attend an F2F interview on Saturday, 6th October.

Data Engineer (2 to 6 Years)
Shift: 1:00 to 9:30 PM IST
Allstate, Bengaluru, Karnataka

We are looking for a Data Engineer (Big Data Developer) with 2-6 years of experience. If you are passionate about writing code that crunches data from different source systems, building rock-solid technology, and not giving up until the code is best in class, then Allstate is the place for you.

Responsibilities:
- Implement and maintain data extraction, processing, and storage processes in large-scale data systems (data pipelines, data warehouses) for analytics and reporting features.
- Research opportunities for data acquisition and new uses for existing data.
- Design, build, and launch new data models and datasets in production.
- Implement and maintain data on big data platform (HDFS) systems, primarily using Spark, for our data scientists.
- Work closely with engineers, product managers, data scientists, and data analysts to understand needs and requirements.
- Recommend ways to improve data reliability, efficiency, and quality.
- Maintain and improve existing systems and processes in production.
- Employ a variety of languages and tools (e.g. scripting languages) for automation.

Qualifications:
- B.Tech / Engineering in Computer Science (or equivalent) in data/data analytics with 2+ years of relevant work experience.
- Strong intuition for data and a keen aptitude for large-scale data analysis.
- Strong communication and collaboration skills.
- Strong analytical skills for working with a variety of datasets.
Competencies and Skills Required

Primary Skills: Big data technologies, Spark (Python/Scala), Unix shell scripting, OOP concepts, Hive
Secondary Skills: Reporting tools (e.g. Tableau), Kafka, AWS, Ansible

Skill Sets:
- Strong knowledge of building the optimal process for integration, extraction, transformation, and loading of data from a wide variety of data sources, using tools like Spark (PySpark/Scala) and big data platforms (Cloudera / Hortonworks).
- Working knowledge of Kafka and stream processing (Spark Streaming), connecting to different data sources.
- Strong working SQL knowledge and experience with relational and non-relational databases (Oracle / Cassandra / others).
- Strong shell scripting skills.
- Good knowledge of big data querying tools such as Pig, Hive, and Impala.
- Good knowledge of Git and version control tools.
- Knowledge of continuous deployment and tools like Jenkins.
- Experience working in agile methodology.
- Working knowledge of object-oriented concepts and languages: Python/Scala/Java.
- Experience performing root cause analysis on data to provide insights and identify opportunities for process improvement.
- Knowledge of data structures and data analysis tools (e.g. pandas) desirable.
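To illustrate the integration/extraction/transformation/loading flow the role centres on, here is a minimal, hedged sketch in plain stdlib Python. In the actual job this logic would run as Spark (PySpark) DataFrame transformations against HDFS or a warehouse; the column names (policy_id, premium) and the in-memory source/sink here are purely hypothetical stand-ins.

```python
import csv
import io

# Hypothetical ETL sketch: each function stands in for one stage of a
# Spark pipeline (read -> transform -> write). All names are illustrative.

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse rows from a CSV source (stand-in for reading from HDFS)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: drop rows with a missing premium and derive a new column."""
    out = []
    for r in rows:
        if r["premium"]:  # filter out invalid records
            out.append({
                "policy_id": r["policy_id"],
                "premium": float(r["premium"]),
                "annual_premium": float(r["premium"]) * 12,
            })
    return out

def load(rows: list[dict]) -> dict:
    """Load: aggregate into a summary (stand-in for writing to a warehouse)."""
    return {
        "count": len(rows),
        "total_annual_premium": sum(r["annual_premium"] for r in rows),
    }

raw = "policy_id,premium\nP1,100\nP2,\nP3,50\n"
summary = load(transform(extract(raw)))
```

In PySpark the same shape would be a `spark.read` call, a chain of `filter`/`withColumn` transformations, and a write to the target store; the sketch only shows the stage boundaries a data engineer is expected to design.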