Data Engineer (AWS, ETL, Kafka, Redshift, Glue, Python, PySpark)

Experience: 7-10 years
Type: 10 days WFO (work from office)
Location: Bengaluru/Pune/Chennai
Notice period: Immediate / 15 days
Technology: IT

Responsibilities

  1. Lead and mentor the data engineering team
  2. Hands-on experience in PySpark
  3. Hands-on experience with AWS data engineering services
  4. Deep understanding of designing data pipelines and data lakes
  5. Familiarity with data governance principles

Job Description:


1. Professional experience as a data engineer/architect: minimum 7 years
2. Experience designing and deploying end-to-end data lake solutions in AWS: minimum 7 years
3. Experience migrating data from Db2 to the cloud
4. Experience with AWS ETL tools
5. Experience with AWS databases (Redshift, DynamoDB)
6. Experience with AWS public cloud
7. Experience with on-premises (private) cloud
8. Experience with Python, Kafka, Spark, and advanced SQL
9. Technical mentoring experience and strong communication skills
10. Experience working with the Atlassian suite (Confluence and Jira)
11. Familiarity with data governance and various data-related regulations and compliance requirements

Apply for this position

Allowed type(s): .pdf, .doc, .docx