Full Time

Data Engineer

We are looking for an experienced Data Engineer with strong expertise in data engineering and data warehousing, particularly using Apache Spark and PySpark. The ideal candidate has experience with the AWS and Azure data ecosystems, building scalable data pipelines, and participating in system design discussions. This role requires someone who can work in a fast-paced environment, collaborate with cross-functional teams, and contribute to designing robust, efficient data architectures.

1 Opening

Apply by April 2, 2026

Key Responsibilities

  • Design, build, and maintain scalable data pipelines and data warehouse solutions.

  • Develop and optimize data processing workflows using Apache Spark and PySpark.

  • Build and manage data integration pipelines using Azure Data Factory (ADF).

  • Work with AWS and Azure storage services, including data ingestion, processing, and storage optimization.

  • Implement and maintain data storage layers and structured data architectures.

  • Participate actively in architecture and design sessions to define scalable data solutions.

  • Collaborate with engineering, analytics, and business teams to deliver reliable data solutions.

  • Monitor, troubleshoot, and optimize data pipelines for performance and reliability.

  • Work effectively under pressure and manage priorities in a fast-paced environment.

Required Skills & Qualifications

  • 4+ years of experience in Data Engineering or Data Warehousing.

  • Strong experience with Apache Spark and PySpark.

  • Strong analytical skills and retail domain knowledge.

  • Hands-on experience with AWS services for data engineering.

  • Experience working with Azure Data Factory (ADF).

  • Familiarity with Azure Storage and structured storage layers.

  • Solid understanding of data pipeline architecture and ETL/ELT processes.

  • Ability to participate in technical design discussions and architectural planning.

  • Strong problem-solving skills and ability to work under pressure.

Preferred Qualifications

  • Experience working with Databricks.

  • Experience designing dynamic and parameterized pipelines in Azure Data Factory.

  • Familiarity with modern data lake or lakehouse architectures.

  • Experience optimizing large-scale data processing workflows.

Soft Skills

  • Strong communication and collaboration skills.

  • Ability to handle multiple tasks and tight deadlines.

  • Proactive mindset and ownership of deliverables.

For further details and updates, follow Ycotek on LinkedIn.

How to Apply

Interested candidates are requested to send a cover letter and resume, detailing their work experience and expected salary, to hrd@ycotek.com by April 2, 2026.