Data Engineer (Hadoop, Spark) – Contract

Key Responsibilities

  • Design, develop, and maintain robust data pipelines and ETL processes to support analytics and reporting needs (see the sketch after this list).
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions.
  • Implement data integration solutions across structured and unstructured data sources.
  • Ensure data quality, integrity, and security across all stages of the data lifecycle.
  • Optimize data workflows for performance and scalability in cloud and on-premises environments.
  • Support data migration and transformation initiatives for client projects.
  • Monitor and troubleshoot data pipeline issues and provide timely resolutions.
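
For illustration only, the following is a minimal PySpark sketch of the kind of batch ETL pipeline described above: extract raw files, apply basic cleaning, and load partitioned output for analytics. The paths, column names, and dataset are hypothetical placeholders, not part of any actual client codebase.

    # Minimal ETL sketch (PySpark). All names and paths are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: read raw CSV files (hypothetical source path).
    raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

    # Transform: drop rows missing the key, fix types, stamp the load date.
    clean = (
        raw.dropna(subset=["order_id"])
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("load_date", F.current_date())
    )

    # Load: write partitioned Parquet for downstream reporting (hypothetical target path).
    clean.write.mode("overwrite").partitionBy("load_date").parquet(
        "s3://example-bucket/curated/orders/"
    )

    spark.stop()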


Required Qualifications

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
  • 3+ years of experience in data engineering or related roles.
  • Proficiency in SQL and in either Python or Scala.
  • Experience with data pipeline tools such as Apache Spark, Kafka, Airflow, or similar (a brief Airflow scheduling sketch follows this list).
  • Familiarity with cloud platforms (AWS, Azure, or GCP).
  • Strong understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery).
  • Knowledge of data governance, security, and compliance standards.
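
As a further illustration, here is a minimal Airflow DAG sketch showing how a Spark ETL job like the one above might be scheduled daily. It assumes Airflow 2.4 or later (for the schedule parameter); the DAG id, job path, and spark-submit command are hypothetical.

    # Minimal scheduling sketch (Airflow 2.4+). All names are illustrative assumptions.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="orders_etl_daily",      # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",              # run once per day
        catchup=False,                  # skip backfilling past runs
    ) as dag:
        run_etl = BashOperator(
            task_id="run_spark_etl",
            # Hypothetical job path; submits the PySpark script sketched above.
            bash_command="spark-submit /opt/jobs/orders_etl.py",
        )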