• Enhancements, new development, defect resolution, and production support of big data ETL pipelines built on AWS native services.
• Create data pipeline architecture by designing and implementing data ingestion solutions.
• Integrate data sets using AWS services such as Glue and Lambda functions
• Configure and use DynamoDB to store, process, and analyze NoSQL data (see the first sketch after this list)
• Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, and Athena
• Author ETL processes using Python and PySpark (see the second sketch after this list)
• Build Redshift Spectrum direct transformations and data models over data in S3 (see the third sketch after this list)
• Monitor ETL processes using CloudWatch Events
• You will work in close collaboration with other teams, so strong communication skills are a must.
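To give a feel for the DynamoDB work above, here is a minimal sketch using boto3; the table name, key schema, and item fields are hypothetical placeholders, not details from this posting.

```python
# Minimal DynamoDB read/write sketch with boto3.
# The table "example-orders" and its "order_id" key are assumptions
# for illustration only.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-orders")  # hypothetical table

# Store a record.
table.put_item(Item={"order_id": "o-123", "status": "NEW"})

# Read it back by primary key.
response = table.get_item(Key={"order_id": "o-123"})
print(response.get("Item"))
```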
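The following is a minimal sketch of the kind of Glue/PySpark ETL job the role involves, assuming a raw JSON source and a curated Parquet target in S3; the bucket names, column names, and transformation are hypothetical.

```python
# Minimal AWS Glue PySpark job: ingest raw JSON from S3, apply a light
# transformation, and write partitioned Parquet back to S3.
# s3://example-raw-bucket/ and s3://example-curated-bucket/ are
# placeholder locations, not from this posting.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Ingest raw JSON records from S3.
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Light transformation: drop malformed rows and add a partition column.
curated = (
    raw.dropna(subset=["order_id"])
       .withColumn("ingest_date", F.current_date())
)

# Write curated Parquet back to S3, partitioned for Athena/Spectrum queries.
(curated.write
        .mode("append")
        .partitionBy("ingest_date")
        .parquet("s3://example-curated-bucket/orders/"))

job.commit()
```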
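And a minimal sketch of exposing that curated S3 data to Redshift Spectrum via the Redshift Data API, assuming an external schema named "spectrum" already exists; the cluster, database, user, and column definitions are hypothetical placeholders.

```python
# Minimal Redshift Spectrum sketch: define an external table over the
# curated Parquet written by the Glue job above. Spectrum queries the
# data in place in S3, so nothing is loaded into the cluster.
import boto3

client = boto3.client("redshift-data")

ddl = """
CREATE EXTERNAL TABLE spectrum.orders (
    order_id   VARCHAR(64),
    amount     DECIMAL(12, 2)
)
PARTITIONED BY (ingest_date DATE)
STORED AS PARQUET
LOCATION 's3://example-curated-bucket/orders/';
"""

# Note: new partitions are registered separately, e.g. with
# ALTER TABLE ... ADD PARTITION.
client.execute_statement(
    ClusterIdentifier="example-cluster",   # hypothetical
    Database="analytics",                  # hypothetical
    DbUser="etl_user",                     # hypothetical
    Sql=ddl,
)
```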
Qualifications & Experience
• Must have 4+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment
• Expertise in Redshift, Kinesis, and EC2 clusters is highly desired
• Past or current experience working with US clients preferred.
Timezone: US, 7am-3pm CT
*** Not a part-time position ***