Data Engineer – Build Scalable Data Pipelines
The Chefz · Amman
Job description
About the role
We are looking for a Data Engineer to design, build, and maintain scalable data pipelines that support both batch and real-time processing. You will work closely with product, engineering, and analytics teams to ensure data reliability and enable data-driven decision making.
Key responsibilities
- Design, develop, and operate ETL/ELT pipelines on AWS.
- Build and optimise data lakes, warehouses and data models (Redshift, PostgreSQL/Aurora, MySQL/MariaDB).
- Ensure data quality, reliability and integrity across all systems.
- Collaborate with cross-functional teams to define data requirements and deliver solutions.
- Implement real-time streaming solutions and batch processing workflows using tools such as Kafka and Airflow.
- Monitor and improve database performance and query efficiency.
- Apply data governance, security and best-practice standards.
- Mentor junior engineers and promote engineering excellence.
Required profile
- 5+ years of experience in data engineering or a related field.
- Strong problem-solving skills in a fast-moving product environment.
- Ability to design reusable, tested pipeline code.
Required skills
- Python
- SQL
- AWS services: Redshift, S3, Athena, ECS, EventBridge
- Workflow orchestration: Airflow (or equivalent)
- Relational databases: PostgreSQL/Aurora, MySQL/MariaDB
- Data modelling, dimensional design and schema evolution
- Kafka (streaming) – a plus