London (City of London), South East England, United Kingdom
Mastek
platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure … Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Ability to program in languages such as SQL, Python, R, YAML, and JavaScript. Data Integration: Integrate data from various sources … practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage …
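For illustration, the data quality gate and Delta Lake write described in this listing might look like the following minimal PySpark sketch. It assumes a Databricks notebook (where the SparkSession is predefined as spark); the table names raw.orders and curated.orders, the columns, and the 5% threshold are hypothetical placeholders.

# Minimal sketch of a data quality gate in a PySpark pipeline.
# Assumes a Databricks notebook with `spark` predefined; table and
# column names are hypothetical.
from pyspark.sql import functions as F

df = spark.table("raw.orders")

# Validation rules: non-null business key, non-negative amounts.
valid = df.filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))

# subtract() is set-based: rows in df that do not appear in valid.
rejected = df.subtract(valid)

# Fail fast if the rejection rate breaches the quality threshold,
# rather than silently dropping rows downstream.
total = df.count()
if total > 0 and rejected.count() / total > 0.05:
    raise ValueError("More than 5% of rows failed validation")

# On Databricks, Delta Lake is the default table format; append the
# validated rows to a curated table registered in Unity Catalog.
valid.write.format("delta").mode("append").saveAsTable("curated.orders")

Failing the pipeline on a breached threshold keeps bad data out of curated tables while the rejected set remains available for auditing.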
analysts and stakeholders to translate business needs into technical solutions. Maintain clear documentation and contribute to internal best practices. Requirements: Strong hands-on experience with PySpark (RDDs, DataFrames, Spark SQL). Proven ability to build and optimise ETL pipelines and dataflows. Familiar with Microsoft Fabric or similar lakehouse/data platform environments. Experience with Git, CI …
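As a rough sketch of the ETL dataflows this role describes, the following PySpark skeleton reads a landed file, cleans it, and writes a lakehouse table. The Files/landing path follows the Microsoft Fabric lakehouse convention, and the file, table, and column names are all hypothetical.

# Minimal extract-transform-load sketch in PySpark; names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read a raw CSV landed in the lakehouse Files area.
raw = spark.read.option("header", True).csv("Files/landing/customers.csv")

# Transform: normalise strings, deduplicate, derive a load date.
clean = (
    raw.withColumn("email", F.lower(F.trim(F.col("email"))))
       .dropDuplicates(["customer_id"])
       .withColumn("load_date", F.current_date())
)

# Load: write a managed lakehouse table (Delta by default in Fabric).
clean.write.mode("overwrite").saveAsTable("silver_customers")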
Reports Developer (Curam Products). Location: Durham, NC (Onsite). Rate: Best competitive rate. Position Overview: The client is seeking an experienced Medicaid Reports Developer with strong expertise in Oracle PL/SQL, Azure Synapse, and Curam Products to support the NC FAST Medicaid Projects. This role focuses on designing, developing, and maintaining advanced reporting and analytics solutions that drive data-driven decisions … requests and defect resolutions. Key Responsibilities: Design, build, and maintain data warehouses, semantic models, and reporting solutions for NC FAST Medicaid initiatives. Develop and optimize Oracle PL/SQL code for data processing, transformation, and reporting. Create and manage Azure Synapse pipelines, serverless SQL pools, and Spark pools for large-scale data analytics. Develop … and user management. Hands-on experience with Azure Synapse Analytics, including pipelines, SQL pools, and Spark pools. Experience developing analytics solutions using Python, PySpark, and Spark SQL on Azure Cloud. Proficiency with Power BI for dashboard design and development. Experience with structured system development methodologies. Excellent analytical, organizational, and communication skills. Preferred Qualifications: 4+ years of experience with IBM …
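A minimal sketch of the Synapse Spark-pool analytics work this posting describes: reading claims data from ADLS Gen2, aggregating it into a reporting extract, and writing the result back. The storage account, container, paths, and column names are all hypothetical placeholders.

# Sketch of a Synapse Spark-pool aggregation job; the ADLS account,
# container, paths, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-report-sketch").getOrCreate()

claims = spark.read.parquet(
    "abfss://data@examplelake.dfs.core.windows.net/medicaid/claims/"
)

# Monthly claim counts and paid totals per program, as a reporting extract.
summary = (
    claims.groupBy(
        "program_code",
        F.date_trunc("month", F.col("service_date")).alias("month"),
    )
    .agg(
        F.count("*").alias("claim_count"),
        F.sum("paid_amount").alias("total_paid"),
    )
)

summary.write.mode("overwrite").parquet(
    "abfss://data@examplelake.dfs.core.windows.net/medicaid/reporting/claims_summary/"
)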
amount of diverse data streams and organize them into our database; knowledge of data ingestion strategy for the Azure platform. Analysing ETL or ELT mapping logic and writing complex SQL queries to recreate logic. Experience in data platform deployment using MS Azure Synapse/Fabric. Create CI/CD pipelines for Azure infrastructure, configuration, and app deployments. Demonstrate excellent … Azure Cloud; Fabric platform knowledge is preferred. Experience in data deployments in Cloud. Knowledge of Network protocols preferred. Experience in Big Data components such as Azure Synapse, ADLS, Spark SQL, DB, etc. Excellent programming skills. Excellent communication skills. Technology Stack: Microsoft Fabric, Data Factory, Data Lake Store (Gen 2), Databricks, Synapse Analytics, Cosmos DB, Azure SQL …
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
Strong understanding of data models and analytics; exposure to predictive modelling and machine learning is a plus. Proficient in SQL and Python, with bonus points for PySpark, Spark SQL, and Git. Skilled in data visualisation with tools such as Tableau or Power BI. Confident writing efficient code and troubleshooting sophisticated queries. Clear and adaptable communicator, able to explain technical …
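As an illustration of the query troubleshooting and visualisation handoff mentioned in this profile, a short PySpark sketch follows. The claims table and its columns are hypothetical, and the sketch assumes the table is already registered in the Spark session.

# Sketch of inspecting and handing off a Spark SQL query; names hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("query-debug-sketch").getOrCreate()

query = """
    SELECT policy_type, COUNT(*) AS n_claims, AVG(claim_amount) AS avg_amount
    FROM claims
    WHERE claim_date >= '2024-01-01'
    GROUP BY policy_type
"""
result = spark.sql(query)

# Inspect the physical plan to spot full scans or heavy shuffles
# before running the query at scale.
result.explain()

# Small aggregates can be pulled into pandas for plotting in a notebook,
# or exported to feed a Tableau / Power BI dashboard.
pdf = result.toPandas()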
business stakeholders to translate requirements into technical solutions. Create, maintain, and update documentation and internal knowledge repositories. Your Profile. Essential Skills/Knowledge/Experience: Ability to write Spark code for large-scale data processing, including RDDs, DataFrames, and Spark SQL. Hands-on experience with lakehouses, dataflows, pipelines, and semantic models. Ability to build ETL workflows. …
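For reference, the same aggregation expressed in each of the three Spark APIs this listing names (RDDs, DataFrames, and Spark SQL); the inline data is hypothetical.

# One aggregation, three Spark APIs; data is inline and hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("api-comparison").getOrCreate()
rows = [("books", 12.0), ("books", 8.0), ("games", 30.0)]

# RDD API: low-level key/value aggregation.
rdd_totals = spark.sparkContext.parallelize(rows).reduceByKey(lambda a, b: a + b)

# DataFrame API: declarative, optimised by the Catalyst planner.
df = spark.createDataFrame(rows, ["category", "amount"])
df_totals = df.groupBy("category").agg(F.sum("amount").alias("total"))

# Spark SQL: the same aggregation over a temporary view.
df.createOrReplaceTempView("sales")
sql_totals = spark.sql(
    "SELECT category, SUM(amount) AS total FROM sales GROUP BY category"
)

print(sorted(rdd_totals.collect()), df_totals.collect(), sql_totals.collect())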