
Data Engineer

Primary Skills:
• AWS or Azure
• Data pipeline experience
• Python, PySpark
• SQL (advanced)

6+ years of experience
Onsite Location – Gurugram or Bangalore
Budget – 2 to 2.3 lakh per month + GST (negotiable)
Contract Duration – minimum 6 months, extendable

• Enhancements, new development, defect resolution, and production support of ETL development using AWS native services
• Integration of data sets using AWS services such as Glue and Lambda functions
• Use of AWS SNS to send emails and alerts
• Authoring ETL processes using Python and PySpark
• ETL process monitoring using CloudWatch events
• Connecting to data sources such as S3 and validating data using Athena
• Experience in CI/CD using GitHub Actions
• Proficiency in Agile methodology
• Extensive working experience with advanced SQL and complex queries

Secondary Skills:
• Experience working with Snowflake and understanding of Snowflake architecture, including concepts like internal and external tables, stages, and masking policies.
Competencies / Experience:
• Deep technical skills in AWS Glue (Crawler, Data Catalog)
• Hands-on experience with Python and PySpark
• PL/SQL experience
• CloudFormation and Terraform
• CI/CD with GitHub Actions
• Experience with BI systems (Power BI, Tableau)
• Good understanding of AWS services like S3, SNS, Secrets Manager, Athena, and Lambda
• Additionally, familiarity with any of the following is highly desirable: Jira, GitHub, Snowflake

Location – Work from Gurugram or Bangalore office

Job Summary

Published On: 25 Jul, 2025

Job Nature: Full Time

Salary: As per industry standards

Location: Bengaluru & Gurugram
