Senior Data Engineer

We have an exciting opportunity for a Senior Data Engineer based in London, UK (Hybrid).

Job Type: Contract (Inside IR35)

Note: Active SC Clearance required

Job Description:

Job Summary:

  • We are seeking a highly skilled and experienced Senior Data Engineer to join our team and contribute to the development and maintenance of our cutting-edge Azure Databricks platform for economic data. This platform is critical for our Monetary Analysis, Forecasting, and Modelling activities.
  • The Senior Data Engineer will be responsible for building and optimising data pipelines, implementing data transformations, and ensuring data quality and reliability.
  • This role requires a strong understanding of data engineering principles, big data technologies, and cloud computing (specifically Azure), as well as experience working with large datasets.

Key Responsibilities:

Data Pipeline Development & Optimisation:

  • Design, develop, and maintain robust and scalable data pipelines for ingesting, transforming, and loading data from various sources (e.g., APIs, databases, financial data providers) into the Azure Databricks platform.
  • Optimise data pipelines for performance, efficiency, and cost-effectiveness.
  • Implement data quality checks and validation rules within data pipelines.
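As a purely illustrative sketch of the kind of in-pipeline quality check described above (all field names and rules here are hypothetical, not taken from this posting), a validation step might look like:

```python
# Illustrative sketch only: a minimal data-quality validation step of the
# kind a pipeline might run before loading records. The fields "series_id"
# and "value" and the rules below are hypothetical examples.

def validate_records(records):
    """Split records into (valid, rejected) lists using simple quality rules."""
    valid, rejected = [], []
    for rec in records:
        errors = []
        if rec.get("series_id") is None:
            errors.append("missing series_id")
        if not isinstance(rec.get("value"), (int, float)):
            errors.append("value is not numeric")
        if errors:
            rejected.append({"record": rec, "errors": errors})
        else:
            valid.append(rec)
    return valid, rejected

good, bad = validate_records([
    {"series_id": "GDP", "value": 2.1},
    {"series_id": None, "value": "n/a"},
])
```

In a production pipeline, rejected records would typically be routed to a quarantine table and surfaced through monitoring rather than silently dropped.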

Data Transformation & Processing:

  • Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies.
  • Develop and maintain data processing logic for cleaning, enriching, and aggregating data.
  • Ensure data consistency and accuracy throughout the data lifecycle.
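To make the cleaning and aggregation work above concrete, here is a hedged, plain-Python illustration (hypothetical field names; on the platform itself this logic would normally be expressed as PySpark DataFrame transformations):

```python
# Illustrative sketch: clean and aggregate a small batch of observations.
# The "series"/"value" fields are hypothetical; in practice this would be
# implemented as Spark transformations on the Databricks platform.
from collections import defaultdict

def clean_and_aggregate(rows):
    """Drop rows with missing values, then average values by series."""
    totals = defaultdict(lambda: [0.0, 0])  # series -> [sum, count]
    for row in rows:
        value = row.get("value")
        if value is None:
            continue  # cleaning step: skip incomplete observations
        series = row["series"].strip().upper()  # normalise the grouping key
        totals[series][0] += float(value)
        totals[series][1] += 1
    return {s: total / count for s, (total, count) in totals.items()}

result = clean_and_aggregate([
    {"series": "cpi ", "value": 2.0},
    {"series": "CPI", "value": 4.0},
    {"series": "gdp", "value": None},
])
```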

Azure Databricks Implementation:

  • Work extensively with Azure Databricks, including Unity Catalog, Delta Lake, Spark SQL, and other relevant services.
  • Implement best practices for Databricks development and deployment.
  • Optimise Databricks workloads for performance and cost.
  • Work with languages such as SQL, Python, R, YAML, and JavaScript.

Data Integration:

  • Integrate data from various sources, including relational databases, APIs, and streaming data sources.
  • Implement data integration patterns and best practices.
  • Work with API developers to ensure seamless data exchange.

Data Quality & Governance:

  • Apply hands-on experience with Azure Purview for data quality and data governance.
  • Implement data quality monitoring and alerting processes.
  • Work with data governance teams to ensure compliance with data governance policies and standards.
  • Implement data lineage tracking and metadata management processes.

Collaboration & Communication:

  • Collaborate closely with data scientists, economists, and other technical teams to understand data requirements and translate them into technical solutions.
  • Communicate technical concepts effectively to both technical and non-technical audiences.
  • Participate in code reviews and knowledge sharing sessions.

Automation & DevOps:

  • Implement automation for data pipeline deployments and other data engineering tasks.
  • Work with DevOps teams to build CI/CD pipelines for environment deployments.
  • Promote and implement DevOps best practices.

Essential Skills & Experience:

  • 10+ years of experience in data engineering, with at least 3 years of hands-on experience with Azure Databricks.
  • Strong proficiency in Python and Spark (PySpark) or Scala.
  • Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns.
  • Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and Azure SQL Database.
  • Experience working with large datasets and complex data pipelines.
  • Experience with data architecture design and data pipeline optimisation.
  • Proven expertise with Databricks, including hands-on implementation experience and certifications.
  • Experience with SQL and NoSQL databases.
  • Experience with data quality and data governance processes.
  • Experience with version control systems (e.g., Git).
  • Experience with Agile development methodologies.
  • Excellent communication, interpersonal, and problem-solving skills.
  • Experience with streaming data technologies (e.g., Kafka, Azure Event Hubs).
  • Experience with data visualisation tools (e.g., Tableau, Power BI).
  • Experience with DevOps tools and practices (e.g., Azure DevOps, Jenkins, Docker, Kubernetes).
  • Experience working in a financial services or economic data environment.
  • Azure certifications related to data engineering (e.g., Azure Data Engineer Associate).

For more info, please contact shameena@Lsarecruit.co.uk

Company
LSA Recruit
Location
Leeds, UK
Posted