
Closed
Posted on
Pays on delivery
I need an experienced AWS data engineer to design and build production-ready ETL pipelines in AWS Glue. The work centres on moving and transforming data from several source systems (relational databases such as PostgreSQL and MySQL, NoSQL stores, real-time streams arriving via Kafka/Kinesis, and a handful of internal/external REST APIs) into a clean, query-friendly layout in S3 and, ultimately, Redshift.

Requirements:
- Strong AWS data engineering experience across a wide range of AWS services
- Experience building end-to-end data pipelines (schema discovery, ingestion, transformation, orchestration, monitoring)
- Experience working with relational databases such as Oracle, MySQL, and SQL Server
- Experience with data ingestion from on-prem systems to the cloud
- Experience with streaming platforms such as Kafka or AWS Kinesis
- Strong skills in Python, PySpark, SQL, and Terraform
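The schema discovery step called out in the requirements is what an AWS Glue crawler automates: sample records from a source and infer column types before ingestion. A minimal pure-Python sketch of that idea, with illustrative type-widening rules and a hypothetical sample dataset:

```python
# Sketch of schema inference over sampled source records, the kind of work
# an AWS Glue crawler automates. Type names and widening rules are illustrative.
from collections import OrderedDict

def infer_schema(rows):
    """Infer a column -> type mapping from a sample of dict-shaped records."""
    schema = OrderedDict()
    for row in rows:
        for column, value in row.items():
            if isinstance(value, bool):       # check bool before int: bool is an int subclass
                inferred = "boolean"
            elif isinstance(value, int):
                inferred = "bigint"
            elif isinstance(value, float):
                inferred = "double"
            else:
                inferred = "string"
            # Widen to string when samples disagree on a column's type.
            if schema.get(column, inferred) != inferred:
                inferred = "string"
            schema[column] = inferred
    return schema

sample = [
    {"order_id": 1, "amount": 19.99, "shipped": True},
    {"order_id": 2, "amount": 5.00, "shipped": False},
]
print(infer_schema(sample))  # order_id -> bigint, amount -> double, shipped -> boolean
```

In a real pipeline the inferred schema would be registered in the Glue Data Catalog rather than returned in-process, but the widening logic (conflicting samples fall back to string) mirrors how crawlers reconcile inconsistent source data.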
Project ID: 40288346
162 proposals
Remote project
Active 1 day ago
Set your budget and timeframe
Get paid for your work
Outline your proposal
It's free to sign up and bid on jobs
162 freelancers are bidding an average of $1,128 USD for this job

⭐⭐⭐⭐⭐ Build Efficient ETL Pipelines in AWS Glue for Your Data Needs ❇️

Hi, I hope you're doing well. I've reviewed your project requirements and see you're looking for an experienced AWS data engineer. You don't need to look any further; Zohaib is here to help! My team has successfully completed 50+ similar projects focused on creating production-ready ETL pipelines. I will design and build efficient data pipelines to move and transform data from various sources into a clean format in S3 and Redshift.

➡️ Why me? I have 5 years of experience in AWS data engineering, specializing in building end-to-end data pipelines, schema discovery, and data transformation. My expertise covers relational databases like PostgreSQL and MySQL as well as streaming platforms like Kafka and AWS Kinesis, and I have a strong grip on Python, PySpark, SQL, and Terraform.

➡️ Let's have a quick chat to discuss your project in detail; I can provide samples of my previous work.

➡️ Skills & experience: ✅ AWS Glue ✅ ETL Pipeline Design ✅ Data Transformation ✅ Schema Discovery ✅ PostgreSQL ✅ MySQL ✅ AWS Kinesis ✅ Kafka ✅ Python ✅ PySpark ✅ SQL ✅ Terraform

Waiting for your response! Best regards, Zohaib
$900 USD in 2 days
7.9

Your pipeline requirements align closely with the type of data engineering work I typically handle on AWS. I’ve built ETL pipelines that ingest data from relational databases, APIs, and streaming sources, transform it using Python/PySpark, and land it in S3 for analytics workloads and warehouse systems like Redshift. I’m comfortable working with AWS Glue jobs, schema discovery, and designing clean data layouts that are efficient for querying and downstream reporting. I also focus on making pipelines production-ready: proper orchestration, monitoring, error handling, and infrastructure setup with tools like Terraform so the pipeline remains reliable as data volume grows. Happy to help design and implement an end-to-end pipeline that moves data from your source systems into a clean, analytics-ready structure in AWS.
$2,000 USD in 10 days
7.9

⭐⭐⭐⭐⭐ CnELIndia and Raman Ladhani bring extensive AWS data engineering expertise, ensuring robust design and deployment of production-ready ETL pipelines in AWS Glue. We will:
- Leverage experience with relational databases (PostgreSQL, MySQL, Oracle) and NoSQL stores to efficiently ingest, transform, and consolidate data
- Implement real-time data ingestion from Kafka/Kinesis and REST APIs into S3, maintaining data integrity and performance
- Use Python, PySpark, and SQL for transformation logic, ensuring scalable and optimized processing
- Employ Terraform for infrastructure-as-code, enabling reproducible and automated AWS deployments
- Design end-to-end pipeline orchestration with monitoring, logging, and error handling for reliable production operations
- Facilitate seamless migration from on-prem systems to the AWS cloud, adhering to best practices for security and compliance
- Deliver clean, query-friendly datasets in S3 and Redshift, enabling fast analytics and reporting for business stakeholders
$1,125 USD in 7 days
7.0

With over 10 years of experience in web and mobile development, including extensive expertise in AWS services, I understand the importance of designing and building production-ready ETL pipelines in AWS Glue for your project. Your need for moving and transforming data from multiple source systems, including relational databases, NoSQL stores, real-time streams, and REST APIs, aligns perfectly with my skill set. In the realm of data engineering, I have successfully built end-to-end data pipelines, worked with various relational databases like MySQL, and handled data ingestion from on-prem systems to the cloud. My proficiency in Python, PySpark, SQL, and Terraform ensures that I can deliver a solution that meets your requirements effectively and efficiently. I am excited about the opportunity to work on your AWS ETL Pipeline Development project and deliver exceptional results within your budget and timeframe. Feel free to message me to discuss how we can move forward with your project seamlessly.
$1,200 USD in 20 days
6.7

With 13+ years of experience, I, Steven, am well-versed in all aspects of your AWS ETL pipeline development project. I have worked extensively with various AWS data engineering services including AWS Glue and have built numerous end-to-end data pipelines from scratch. From schema discovery to ingestion, transformation, orchestration, and monitoring, I have you covered on every front. My career, which includes projects like sports data extraction from FlashScore, business license scraping and validation, and data extraction from the VFS Global visa booking site, attests to my ability to deliver results that exceed expectations. My technical proficiency spans AWS services and Python web automation and scraping, making me a strong match for your project. Partner with me to transform and move your data efficiently while ensuring its integrity and security throughout the process. Let's discuss how we can make your project a success!
$750 USD in 2 days
7.0

Hello and greetings! After reviewing your project description, I feel confident and excited to work on this project for you, but I have a few important questions to clarify first. Please leave a message in chat so we can discuss them, and I can share recent work similar to your requirements. Thanks for your time! I look forward to hearing from you soon. Best regards.
$1,125 USD in 7 days
6.8

Hello, I have 10 years of experience designing and implementing scalable data solutions. I am well-versed in building ETL pipelines using AWS services like Glue and Redshift, with extensive experience in relational databases and data ingestion from on-prem systems. I am proficient with streaming platforms such as Kafka and AWS Kinesis, and my expertise includes Python, PySpark, SQL, and Terraform. I am confident in delivering a production-ready solution for your needs. Regards, VishnuLal NB.
$1,000 USD in 7 days
6.5

Hi there, I'm excited about the opportunity to design and build the AWS ETL pipelines you need. With my extensive experience as a top California freelancer, I have successfully completed numerous AWS data engineering projects, earning 5-star reviews along the way. I understand the importance of transforming data from diverse sources such as PostgreSQL, MySQL, and Kafka, and I can efficiently develop production-ready pipelines to streamline your data workflow. My expertise in AWS Glue, coupled with strong skills in Python, PySpark, and Terraform, enables me to create end-to-end solutions tailored to your specific requirements. I will ensure seamless data ingestion and transformation, resulting in a clean, query-friendly layout in S3 and Redshift. I would love to discuss your project in further detail. Please message me at your earliest convenience if you have any immediate questions or want to start the collaboration! Could you share what specific challenges you expect during the ETL process?
$1,375 USD in 15 days
6.2

Hello, can we discuss your AWS Glue ETL pipeline project? I have built a small data flow where mixed sources stream into S3 and structured tables then power analytics. Glue with PySpark, Redshift, and Terraform can handle ingestion, schema mapping, and monitoring cleanly. A few questions: Are the Kafka streams continuous, or batch snapshots? Do the APIs need rate-limit handling? Should Redshift use partitioned S3 staging tables? Poor schema-evolution handling often breaks pipelines later. Best regards, Devendra S.
$1,500 USD in 14 days
6.4

Hello, I’m excited about the opportunity to contribute to your project. With my expertise in AWS data engineering, Glue-based ETL architecture, PySpark, Python, SQL, Terraform, and cloud data platform design, I can deliver production-ready pipelines that ingest from relational databases, NoSQL sources, streaming systems like Kafka or Kinesis, and REST APIs into S3 and Redshift with strong reliability and observability. I’ll tailor the work to your exact requirements, ensuring clean schema discovery, scalable ingestion from on-prem and cloud systems, efficient transformations, orchestration, monitoring, and a query-friendly data layout that supports long-term maintainability and analytics use cases. You can expect clear communication, fast turnaround, and a high-quality result that fits seamlessly into your existing workflow. Best regards, Juan
$1,125 USD in 3 days
5.8

Hello, I can design and implement production-ready ETL pipelines on AWS Glue to ingest, transform, and load data from multiple sources into S3 and Redshift with strong reliability and observability. My approach is to build a modular pipeline architecture covering schema discovery, ingestion, transformation, orchestration, and monitoring. Data from relational databases (PostgreSQL/MySQL/Oracle/SQL Server), NoSQL stores, REST APIs, and streaming platforms such as Kafka or Kinesis will be ingested using Glue connectors or streaming ingestion, then processed with PySpark transformations and stored in structured S3 layers (raw → processed → curated) before optimized datasets are loaded into Redshift.

Key implementation points:
- AWS Glue jobs (PySpark) for scalable transformations
- Streaming ingestion from Kafka/Kinesis where required
- Data catalog and schema management via the AWS Glue Data Catalog
- Partitioned S3 data lake layout for efficient queries
- Redshift loading and performance tuning
- Infrastructure as code using Terraform
- Monitoring and alerting via CloudWatch

Estimated timeline: 1–2 weeks depending on the number of sources and transformations. Happy to review your current architecture and data sources to propose the most efficient pipeline design.
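The raw → processed → curated layering with date partitioning that this bid describes can be sketched as a key-building helper; a common convention, not this bidder's actual code, and the bucket layer, source, and table names are hypothetical:

```python
# Sketch of a partitioned raw -> processed -> curated S3 data lake layout.
# Layer, source, and table names are illustrative placeholders.
from datetime import date

LAYERS = ("raw", "processed", "curated")

def s3_key(layer, source, table, run_date, filename):
    """Build a partitioned S3 key, e.g. raw/postgres/orders/dt=2024-01-15/part-0.parquet."""
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    return f"{layer}/{source}/{table}/dt={run_date.isoformat()}/{filename}"

key = s3_key("curated", "postgres", "orders", date(2024, 1, 15), "part-0.parquet")
print(key)  # curated/postgres/orders/dt=2024-01-15/part-0.parquet
```

The `dt=` Hive-style partition prefix is what makes the layout "efficient for queries": Glue, Athena, and Redshift Spectrum can all prune partitions by date instead of scanning the whole prefix.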
$950 USD in 14 days
5.9

Hi, Iosif here - I'll build a self-contained Trust Wallet plugin adding full USDT (ERC-20) support using Trust Wallet Core, Swift for iOS and Kotlin for Android, wiring into the existing balance sync and push-notification flow. This is my speciality. I bring 15+ years as a senior security-first full-stack and DevOps engineer. Approach - I will: review your wallet fork/architecture, implement ERC-20 handlers (balance, token metadata), implement robust send/receive flows with gas estimation and nonce handling, add a history UI showing confirmations/timestamps/hash links, run static-analysis fixes against Trust Wallet Core, and deliver unit/UI tests plus signed TestFlight/Play internal builds. Risk mitigation: safe gas defaults, replay protection, CI static checks. What you get - source modules for iOS and Android; an integration guide plus build scripts; basic unit/UI tests proving send/receive/history; signed internal releases for end-to-end checks. How you'll know it's done - mainnet transactions confirm with correct gas handling; history lists 50+ USDT transfers with live status; no Trust Wallet Core static-analysis warnings, and platform-native code structure for maintainability. Estimate - delivered in 2-3 weeks. I recently delivered a similar ERC-20 mobile wallet integration and can start once the open points are clarified.
$5,600 USD in 21 days
6.0

Hello, I understand you need production-ready ETL pipelines in AWS Glue to ingest, transform, and store data from relational databases, NoSQL stores, Kafka/Kinesis streams, and REST APIs into S3 and Redshift. I will design end-to-end pipelines including schema discovery, data ingestion, transformation with PySpark, orchestration, and monitoring. The pipelines will be modular, scalable, and maintainable, with strong data validation and error handling. I also provide infrastructure setup with Terraform, ensuring reproducibility and cloud best practices.

Clarification questions:
- Should the pipelines include near real-time streaming ingestion, or is batch processing sufficient?
- Are there specific data retention or partitioning policies required in S3/Redshift?

Thanks, Asif
$1,500 USD in 11 days
5.6

Hello client, I'm Denis Redzepovic, an experienced developer with expertise in Hadoop, Python, PySpark, SQL, ETL, Terraform and Amazon Web Services. I have worked extensively on diverse Python projects, ranging from backend development and automation to data processing and API integrations. My deep understanding of Python’s libraries and frameworks allows me to build efficient, scalable, and maintainable solutions. I pay close attention to code quality and performance to ensure your project runs flawlessly. With my solid experience, I’m confident I can deliver results that exceed your expectations. I focus on writing clean, maintainable, and scalable code because I know the difference between 99% and 100%. If you hire me, I’ll do my best until you’re completely satisfied with the result. Let’s discuss your project details so I can tailor the perfect Python solution for you. Thanks, Denis
$750 USD in 10 days
5.7

As an AWS-certified professional with versatile experience in backend development, DevOps engineering, and Kubernetes orchestration, I am confident in my ability to deliver exceptional results for your project. My deep understanding of AWS data services paired with my hands-on work with streaming platforms like Kafka and data ingestion from on-prem systems to the cloud make me a strong fit for this ETL pipeline project. In my 5+ years of experience, I've successfully built end-to-end data pipelines, while ensuring seamless schema discovery, ingestion, transformation, orchestration, and monitoring. With my skills in Python, PySpark, SQL, and Terraform complemented by relational database expertise (Oracle, MySQL, SQL Server), I bring a comprehensive toolkit to tackle various aspects of the job. Moreover, my passion for secure and scalable infrastructures aligns perfectly with your need for a production-ready solution. Additionally, having integrated AI/ML solutions into many projects before has given me an extra edge when dealing with advanced requirements that may arise. I look forward to discussing how I can put all these skills to work for you.
$1,500 USD in 7 days
5.4

Hi there, good morning. I am Talha. I have read your project details and see you need help with SQL, ETL, PySpark, Python, Hadoop, Amazon Web Services, and Terraform. I am writing to propose an innovative approach to your project. Our proposal centers on delivering creative and effective solutions that will set your project apart, with fresh, out-of-the-box ideas aligned with your objectives. Please note that the initial bid is an estimate; the final quote will be provided after a thorough discussion of the project requirements or upon reviewing any detailed documentation you can share. Could you please share any available documentation? I'm also open to further discussion of specific aspects of the project. Thanks and regards, Talha Ramzan
$750 USD in 10 days
5.2

Hello! "AWS ETL Pipeline Development & Data Engineering": I have closely matching expertise and work experience. With more than 10 years of programming experience, I believe I can work step by step and achieve the project goal in a short time frame.

Features:
-->> End-to-end ETL pipeline design and implementation in AWS Glue
-->> Data ingestion from relational, NoSQL, streaming, and API sources
-->> Transformation, orchestration, and monitoring with Python/PySpark
-->> Query-ready data stored in S3 and Redshift for analytics

I will provide 2 years of free ongoing support and complete source code. We will work with an agile methodology, and I will assist you from start to delivery. I am interested in this project; let's connect to discuss it in detail so we can proceed. Thanks, Julian
$750 USD in 7 days
5.8

Hello, My name is Jiayin, and I’m an experienced data engineer with strong AWS and Python experience. I can design and build production-ready ETL pipelines using AWS Glue to ingest and transform data from relational databases, NoSQL systems, streaming sources like Kafka/Kinesis, and REST APIs. The pipeline will organize the processed data into a clean, query-optimized structure in S3 and load it efficiently into Redshift for analytics. I will implement reliable ingestion, transformation, and orchestration using Python, PySpark, and SQL while leveraging AWS services for monitoring and scalability. I also work with infrastructure-as-code tools like Terraform to ensure the pipelines are maintainable and reproducible. The final solution will include robust logging, monitoring, and documentation so your data workflows run reliably in production. Best regards, Jiayin
$1,500 USD in 7 days
4.9

Dear Client, I hope this message finds you well. I am an experienced AWS data engineer with a robust background in designing and building production-ready ETL pipelines using AWS Glue and other AWS services. My expertise includes transforming data from various sources, such as PostgreSQL, MySQL, and NoSQL stores, into structured formats in S3 and Redshift. I have successfully implemented end-to-end data pipelines, focusing on schema discovery, ingestion, transformation, and orchestration. My proficiency with real-time streaming platforms like Kafka and AWS Kinesis, combined with strong skills in Python, PySpark, SQL, and Terraform, positions me well to deliver on your project requirements. I am committed to client satisfaction and will ensure a seamless integration of your data systems into a clean, query-friendly layout. I am available to start immediately and can work collaboratively to meet your project goals. Looking forward to the opportunity to work together. Best regards, Abdulhamid
$750 USD in 7 days
4.9

Hello there, your requirement for **production-grade ETL pipelines using AWS Glue** fits well with my experience building scalable data workflows across AWS analytics stacks.

Approach:
• Design **AWS Glue ETL jobs (PySpark)** for ingestion and transformation
• Ingest data from **relational DBs (PostgreSQL/MySQL/Oracle), NoSQL stores, APIs, and Kafka/Kinesis streams**
• Store curated datasets in **S3 using a structured data lake layout**
• Load optimized datasets into **Amazon Redshift** for analytics
• Implement orchestration using **Glue workflows / Step Functions**
• Add monitoring, logging, and error handling with **CloudWatch**

Infrastructure:
• Use **Terraform** to provision Glue jobs, IAM roles, S3 buckets, Redshift resources, and networking
• Implement **schema discovery and cataloging with the AWS Glue Data Catalog**
• Ensure secure ingestion from **on-prem systems to AWS**

Deliverables:
• Production-ready **AWS Glue ETL pipelines**
• Structured **S3 data lake + Redshift analytics layer**
• Terraform infrastructure setup
• Monitoring and logging configuration
• Documentation for pipeline deployment and maintenance

The goal is **reliable, scalable pipelines with clean data models ready for analytics and reporting**. I’d be happy to review your source systems and outline the ingestion architecture.
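The "monitoring, logging, and error handling" step in a bid like the one above usually amounts to wrapping each ingestion stage with retries and structured logging before alerts ever reach CloudWatch. A minimal sketch under those assumptions; `run_with_retries` and `flaky_extract` are illustrative names, and a real pipeline would ship the log output to CloudWatch rather than stdout:

```python
# Sketch of the retry-and-log wrapper an orchestration layer adds around each
# pipeline step. Function names are illustrative; logging here goes to stdout,
# where a production setup would route it to CloudWatch.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_retries(step, attempts=3, backoff_seconds=0.0):
    """Run a pipeline step, retrying on failure and logging each attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise  # surface the error to the orchestrator after the last try
            time.sleep(backoff_seconds)

# Simulated flaky source: fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("source unavailable")
    return "42 rows"

print(run_with_retries(flaky_extract))  # succeeds on the third attempt
```

Glue workflows and Step Functions provide retry policies declaratively, but the same failure semantics apply: transient errors retry with backoff, and the final failure propagates so monitoring can alert on it.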
$1,125 USD in 7 days
4.7

Elmont, United States
Member since Mar 9, 2026