and turning data into powerful insights that drive transformation? We're looking for Data Engineers who have a broad range of data engineering skills, with a focus on Databricks. Experience with Microsoft Fabric or Snowflake is also highly desirable. ABOUT THE ROLE As a Data Engineer, you will play a key role in delivering high-quality data solutions … data services. ESSENTIAL TECHNICAL SKILLS REQUIRED We are looking for a technically skilled data professional with a strong foundation in modern data platforms and engineering practices. Key competencies include: Databricks Platform Expertise: Proven experience designing and delivering data solutions using Databricks on Azure or AWS. Databricks Components: Proficient in Delta Lake, Unity Catalog, MLflow, and other core Databricks tools. Programming … with Kafka, Snowflake, Azure Data Factory, Azure Synapse, or Microsoft Fabric (highly desirable). Data Architecture Frameworks: Knowledge of Inmon, Kimball, and Data Vault methodologies. Nice-to-have certifications: Databricks Certified Data Engineer Associate, Databricks Certified Data Engineer Professional. ABOUT YOU You're not just a data expert; you're a problem solver, collaborator, and forward-thinker. You will be …
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
new, innovative and sustainable products and solutions, both for today and for a new era of building. To support our progress, we are currently recruiting for a Data Engineer (Databricks & BI) to come and join our team at Ibstock Head Office, LE67 6HS. (Applicants must have the Right to Work in the UK - we are unable to provide sponsorship for this … role.) Job Purpose: The Data Engineer/BI Developer will play a critical role in developing and refining our modern data lakehouse platform, with a primary focus on Databricks for data engineering, transformation, and analytics. The role will involve designing, developing, and maintaining scalable data solutions to support business intelligence and reporting needs. This platform integrates Databricks, Power BI, and … JDE systems to provide near real-time insights for decision-making. Key Accountabilities: Ensure the existing design, development, and expansion of a near real-time data platform integrating AWS, Databricks and on-prem JDE systems. Develop ETL processes to integrate and transform data from multiple sources into a centralised data platform. Develop and optimise complex queries and data pipelines. Optimise …
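The near real-time integration described above hinges on upserting changed source records (e.g. from JDE) into a centralised table. On Databricks this is typically a Delta Lake MERGE; the core semantics can be sketched framework-free in plain Python (table shape and field names are illustrative, not from the posting):

```python
def upsert(target, changes, key="order_id"):
    """Merge changed source records into the target table, keyed by primary key.

    Mirrors the semantics of a Delta Lake MERGE: rows whose key already
    exists are updated, rows with a new key are inserted.
    """
    merged = dict(target)  # copy so the caller's table is not mutated
    for row in changes:
        merged[row[key]] = row  # insert or overwrite by key
    return merged

# Current state of the centralised table, keyed by order_id
target = {
    1: {"order_id": 1, "status": "open", "qty": 10},
    2: {"order_id": 2, "status": "open", "qty": 5},
}
# A micro-batch of changes arriving from the source system
changes = [
    {"order_id": 2, "status": "shipped", "qty": 5},  # update existing row
    {"order_id": 3, "status": "open", "qty": 7},     # insert new row
]
result = upsert(target, changes)
```

In a real Databricks pipeline the same logic would be expressed as `MERGE INTO target USING changes ON target.order_id = changes.order_id`, with the engine handling concurrency and transaction guarantees.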
Falls Church, Virginia, United States Hybrid / WFH Options
Epsilon Inc
Jira, ensuring timely and high-quality deliverables. Strong written and verbal communication skills for effective team collaboration. One or more of the following certifications are desired: AWS Certified Developer, Databricks, Agile/Scrum, Python Programmer. Preferred Qualifications: Familiarity with Apache Spark or comparable distributed data processing frameworks, preferably for large-scale data transformations and analytics. Working knowledge of data governance … platforms (e.g., Collibra) and cloud-based analytics solutions (e.g., Databricks). Hands-on experience with modern BI tools (e.g., Qlik, Power BI, Tableau) for data visualization and insight generation. Awareness of data access control best practices (e.g., Immuta) and familiarity with role- or attribute-based access control frameworks. Other: Must hold an Active DOD Secret, Top Secret or TS/…
Falls Church, Virginia, United States Hybrid / WFH Options
Epsilon Inc
leadership skills, with the ability to convey complex technical concepts to both technical and non-technical stakeholders. One or more of the following certifications are desired: AWS Certified Developer, Databricks, Agile/Scrum, Python Programmer. Preferred Qualifications: DOD 8570 IAT Level II Certification may be required (GSEC, GICSP, CND, CySA+, Security+ CE, SSCP or CCNA-Security). Familiarity with Apache … Spark or comparable distributed data processing frameworks, preferably for large-scale data transformations and analytics. Working knowledge of data governance platforms (e.g., Collibra) and cloud-based analytics solutions (e.g., Databricks). Hands-on experience with modern BI tools (e.g., Qlik, Power BI, Tableau) for data visualization and insight generation. Awareness of data access control best practices (e.g., Immuta) and familiarity …
to assess their data needs and recommend Azure-based solutions to streamline data processing and storage. Data Ecosystem Implementation: Design and implement solutions leveraging Azure Synapse Analytics, Data Factory, Databricks, Data Lake Storage, and related tools. Pipeline Optimization: Develop and maintain robust ETL/ELT pipelines to ensure efficient data ingestion and transformation. Technical Implementation Data Modeling and Warehousing: Design …
Manassas, Virginia, United States Hybrid / WFH Options
Centene
years of related experience, or equivalent experience acquired through accomplishments of applicable knowledge, duties, scope and skill reflective of the level of this position. Technical Skills: proficiency in the Databricks platform; advanced data pipeline design and development; data quality and governance; machine learning model development and maintenance; data integration processes; data security and privacy regulations; data visualization tools development; data warehouse …
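"Data quality and governance" in a pipeline context usually means rule-based validation applied to rows before they reach downstream consumers. A minimal, framework-free sketch of such checks (the rules and column names are invented for illustration):

```python
def validate(rows, rules):
    """Split rows into (valid, rejected) according to per-column rules.

    Each rule is a (column, predicate) pair; a row is rejected if any
    predicate returns False for that row's column value.
    """
    valid, rejected = [], []
    for row in rows:
        if all(pred(row.get(col)) for col, pred in rules):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

rules = [
    ("member_id", lambda v: v is not None),                 # no null keys
    ("claim_amount", lambda v: v is not None and v >= 0),   # non-negative amounts
]
rows = [
    {"member_id": "A1", "claim_amount": 120.0},
    {"member_id": None, "claim_amount": 50.0},   # fails the null-key check
    {"member_id": "A2", "claim_amount": -5.0},   # fails the range check
]
valid, rejected = validate(rows, rules)
```

In Databricks the same rules would typically be expressed as Delta Live Tables expectations or as filter conditions in a quality-gate notebook, with rejected rows routed to a quarantine table for governance review.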
Salary: 50.000 - 60.000 € per year Requirements: • 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark • Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) • Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables • Solid understanding of data warehousing principles, ETL … and German (min. B2 levels) • Ability to work independently as well as part of a team in an agile environment Responsibilities: As a Data Engineer with a focus on Databricks, you will play a key role in building modern, scalable, and high-performance data solutions for our clients. You'll be part of our growing Data & AI team and work … hands-on with the Databricks platform, supporting clients in solving complex data challenges. • Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python • Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based data lakes and warehouses • Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog …
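A staple of the "data manipulation" work named above is deduplication: keeping only the latest record per key, which in Spark SQL is a `row_number()` over a window partitioned by key and ordered by timestamp descending. The same logic, sketched in plain Python with invented field names:

```python
def latest_per_key(rows, key="customer_id", order_by="updated_at"):
    """Keep the most recent row per key; equivalent to keeping rows where
    row_number() OVER (PARTITION BY key ORDER BY order_by DESC) = 1."""
    best = {}
    for row in rows:
        k = row[key]
        if k not in best or row[order_by] > best[k][order_by]:
            best[k] = row  # this row is newer than the one we kept
    return list(best.values())

rows = [
    {"customer_id": "c1", "updated_at": "2024-01-01", "tier": "bronze"},
    {"customer_id": "c1", "updated_at": "2024-03-01", "tier": "silver"},
    {"customer_id": "c2", "updated_at": "2024-02-01", "tier": "gold"},
]
deduped = latest_per_key(rows)
```

ISO-8601 timestamp strings sort lexicographically, which is why plain string comparison suffices here; with heterogeneous formats you would parse to `datetime` first.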
in the power of agile and cross-functional collaboration, bringing together people from various backgrounds to create outstanding digital solutions. Responsibilities: As a Data Engineer with a focus on Databricks, you will play a key role in building modern, scalable, and high-performance data solutions for our clients. You'll be part of our growing Data & AI team and work … hands-on with the Databricks platform, supporting clients in solving complex data challenges. Your Job's Key Responsibilities Are: Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based data lakes and warehouses Leveraging the Databricks … validation processes and adherence to data governance policies Collaborating with data scientists and analysts to understand data needs and deliver actionable solutions Staying up to date with advancements in Databricks, data engineering, and cloud technologies to continuously improve our tools and approaches Profile - Essential Skills: 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark …
what your job looks like: You are part of the IT Data & AI Team and play a key role in shaping, implementing, and enhancing our Data Lake using Azure Databricks. Source Integration: Establish connections to diverse systems via APIs, ODBC, Event Hubs, and sFTP protocols. ETL/ELT Pipelines: Design and optimize data pipelines using Azure Data Factory and Databricks … role in driving our digital transformation. The experience and knowledge you bring Experience: At least 5+ years as an Azure Data Engineer. Technical Expertise: Proficiency with Azure Data Factory, Databricks and Azure Storage. Strong skills in SQL, Python, and data modeling techniques. Familiarity with data formats like Parquet and JSON. Experience with AI/ML model management on Azure … Databricks. Education: Bachelor's degree in IT, Computer Science, or a related field. Microsoft Certified: Azure Data Engineer Associate. Language Skills: Fluency in Dutch and English is mandatory. French is a plus. Knowledge of PowerBI is a plus. The key competences we are looking for: Analytical mindset with attention to detail. Pragmatic and customer-oriented approach to problem-solving. …
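Ingesting API or Event Hub payloads (JSON) into Parquet-backed lake tables usually requires flattening nested structures into columns first. A small stdlib-only sketch of that step (the payload shape is hypothetical):

```python
import json

def flatten(obj, prefix=""):
    """Flatten nested JSON objects into dot-separated column names,
    the flat shape that Parquet-backed tables typically expect."""
    out = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, name + "."))  # recurse into nested objects
        else:
            out[name] = value
    return out

# A hypothetical Event Hub message body
payload = json.loads(
    '{"event_id": 7, "device": {"id": "d-42", "location": {"site": "B1"}}}'
)
row = flatten(payload)
```

In Databricks the equivalent is usually done with `from_json` plus nested-column selection, but the transformation being performed is the same.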
engineers and scientists working across Africa, Europe, the UK and the US. ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, DBT, Docker, Python, SQL, and PySpark, to name a few. We work across a variety of data architectures such as Data Mesh, lakehouse, data vault and data warehouses. Our data … RESPONSIBILITIES Data Pipeline Development: Lead the design, implementation, and maintenance of scalable data pipelines for ingesting, processing, and transforming large volumes of data from various sources using tools such as Databricks, Python and PySpark. Data Architecture: Architect scalable and efficient data solutions using the appropriate architecture design, opting for modern architectures where possible. Data Modeling: Design and optimize data models and … or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink. Experience in using modern data architectures, such as lakehouse. Experience with CI/CD pipelines, version control systems like Git, and containerization (e.g., Docker). Experience …
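The "dimensional modelling" named above means, concretely, splitting raw events into fact and dimension tables joined on surrogate keys (the Kimball star schema). A compact, framework-free illustration with an invented schema:

```python
def build_star(events, dim_attr="product"):
    """Split raw events into a dimension table (one surrogate key per
    distinct attribute value) and a fact table referencing those keys."""
    dim, facts = {}, []
    for event in events:
        value = event[dim_attr]
        if value not in dim:
            dim[value] = len(dim) + 1  # assign the next surrogate key
        fact = {k: v for k, v in event.items() if k != dim_attr}
        fact[f"{dim_attr}_key"] = dim[value]  # replace attribute with its key
        facts.append(fact)
    dim_table = [{f"{dim_attr}_key": sk, dim_attr: v} for v, sk in dim.items()]
    return dim_table, facts

events = [
    {"order_id": 1, "product": "widget", "qty": 2},
    {"order_id": 2, "product": "gadget", "qty": 1},
    {"order_id": 3, "product": "widget", "qty": 5},
]
dim_table, fact_table = build_star(events)
```

Real warehouses add slowly-changing-dimension handling and stable key generation on top, but the fact/dimension split and key substitution shown here are the core of the pattern.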
Falls Church, Virginia, United States Hybrid / WFH Options
Epsilon Inc
implement effective solutions. Excellent communication, teamwork, and organizational skills, with a focus on innovation and continuous improvement. One or more of the following certifications are desired: AWS Certified Developer, Databricks, Agile/Scrum, Python Programmer. Preferred Qualifications: Familiarity with Apache Spark or comparable distributed data processing frameworks, preferably for large-scale data transformations and analytics. Working knowledge of data governance … platforms (e.g., Collibra) and cloud-based analytics solutions (e.g., Databricks). Hands-on experience with modern BI tools (e.g., Qlik, Power BI, Tableau) for data visualization and insight generation. Awareness of data access control best practices (e.g., Immuta) and familiarity with role- or attribute-based access control frameworks. Other: Must hold an Active DOD Secret, Top Secret or TS/…
Techyard is supporting a growing Databricks-partnered consultancy in securing a Databricks Champion Solution Architect to lead the design and delivery of advanced data and AI solutions on the Databricks Lakehouse Platform. This strategic, high-impact role is ideal for a seasoned professional who can operate as both a hands-on architect and a trusted advisor, bridging business vision with … the consultancy's data engineering capabilities as they scale their team and client base. Key Responsibilities: Architect and implement end-to-end, scalable data and AI solutions using the Databricks Lakehouse (Delta Lake, Unity Catalog, MLflow). Design and lead the development of modular, high-performance data pipelines using Apache Spark and PySpark. Champion the adoption of Lakehouse architecture (bronze … Collaborate with stakeholders, analysts, and data scientists to translate business needs into clean, production-ready datasets. Promote CI/CD, DevOps, and data reliability engineering (DRE) best practices across Databricks environments. Integrate with cloud-native services and orchestrate workflows using tools such as dbt, Airflow, and Databricks Workflows. Drive performance tuning, cost optimisation, and monitoring across data workloads. Mentor engineering …
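The bronze/silver/gold (medallion) layering referenced above refines data in stages: raw as-landed (bronze), cleaned and conformed (silver), aggregated to the grain consumers query (gold). A framework-free sketch of the flow, with invented field names:

```python
def to_silver(bronze):
    """Bronze -> silver: reject malformed rows, normalise types and casing."""
    silver = []
    for row in bronze:
        if row.get("amount") is None:
            continue  # quarantine malformed records at the silver boundary
        silver.append({
            "region": row["region"].strip().upper(),  # conform casing
            "amount": float(row["amount"]),           # conform type
        })
    return silver

def to_gold(silver):
    """Silver -> gold: aggregate per region for reporting consumers."""
    totals = {}
    for row in silver:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

bronze = [
    {"region": " emea ", "amount": "100.5"},
    {"region": "EMEA", "amount": "49.5"},
    {"region": "apac", "amount": None},   # malformed: dropped at silver
    {"region": "APAC", "amount": "20"},
]
gold = to_gold(to_silver(bronze))
```

On Databricks each stage would be a Delta table (often a Delta Live Tables pipeline), with bronze kept immutable so silver and gold can always be rebuilt from it.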
engineers and scientists working across Africa, Europe, the UK and the US. ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, DBT, Docker, Python, SQL, and PySpark, to name a few. We work across a variety of data architectures such as Data Mesh, lakehouse, data vault and data warehouses. Our data … data solutions. RESPONSIBILITIES Data Pipeline Development: Design, implement, and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of data from various sources using tools such as Databricks, Python and PySpark. Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data. ETL Processes: Develop and automate ETL workflows … or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink. Experience in using modern data architectures, such as lakehouse. Experience with CI/CD pipelines and version control systems like Git. Knowledge of ETL tools and …
SAS Viya). An ability to write complex SQL queries. Project experience using one or more of the following technologies: Tableau, Python, Power BI, Cloud (Azure, AWS, GCP, Snowflake, Databricks). Project lifecycle experience, having played a leading role in the delivery of end-to-end projects, as well as a familiarity with different development methodologies …
We work with some of the UK's biggest companies and government departments to provide a pragmatic approach to technology, delivering bespoke software solutions and expert advice. Our clients are increasingly looking to us to help them make the best …
This is an exciting contract opportunity for an SC Cleared Azure Data Engineer with a strong focus on Databricks to join an experienced team in a new customer engagement, working at the forefront of data analytics and AI. This role offers the chance to take a key role in the design and delivery of advanced Databricks solutions within the Azure … ecosystem. Responsibilities: Design, build, and optimise end-to-end data pipelines using Azure Databricks, including Delta Live Tables. Collaborate with stakeholders to define technical requirements and propose Databricks-based solutions. Drive best practices for data engineering. Help clients realise the potential of data science, machine learning, and scaled data processing within the Azure/Databricks ecosystem. Mentor junior engineers and support … Take ownership for the delivery of core solution components. Support with planning, requirements refinement, and work estimation. Skills & Experience: Proven experience designing and implementing data solutions in Azure using Databricks as a core platform. Hands-on expertise in Delta Lake, Delta Live Tables and Databricks Workflows. Strong coding skills in Python and SQL, with experience in developing modular, reusable code …
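Delta Live Tables, named above, lets you declare quality "expectations" on a table and have failing rows dropped or quarantined automatically. The declarative pattern can be mimicked in plain Python with a decorator; this is an analogy to illustrate the idea, not the actual DLT API:

```python
def expect_or_drop(column, predicate):
    """Decorator mimicking a DLT drop expectation: rows returned by the
    decorated table function are filtered out if they fail the predicate."""
    def wrap(table_fn):
        def wrapped(*args, **kwargs):
            return [r for r in table_fn(*args, **kwargs)
                    if predicate(r.get(column))]
        return wrapped
    return wrap

@expect_or_drop("reading", lambda v: v is not None and v >= 0)
def sensor_readings():
    # In DLT this function would read from an upstream live table;
    # here it just returns sample rows (values are invented).
    return [
        {"sensor": "s1", "reading": 21.5},
        {"sensor": "s2", "reading": -1.0},   # dropped by the expectation
        {"sensor": "s3", "reading": None},   # dropped by the expectation
    ]

clean = sensor_readings()
```

The appeal of the declarative style is that quality rules live next to the table definition, so the pipeline's contract is visible in one place rather than scattered through imperative filter code.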
Job Summary: We are seeking a highly skilled and experienced Senior Data Engineer to join our team and contribute to the development and maintenance of our cutting-edge Azure Databricks platform for economic data. This platform is critical for our Monetary Analysis, Forecasting, and Modelling activities. The Senior Data Engineer will be responsible for building and optimising data pipelines, implementing … Development & Optimisation: Design, develop, and maintain robust and scalable data pipelines for ingesting, transforming, and loading data from various sources (e.g., APIs, databases, financial data providers) into the Azure Databricks platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark … or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks, including Unity Catalog, Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and …
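Optimising pipelines "for performance, efficiency, and cost-effectiveness" usually starts with incremental loads: each run processes only rows newer than the last stored high-watermark instead of re-reading the full source. A minimal sketch (timestamps and field names are illustrative):

```python
def incremental_load(source_rows, watermark, ts_field="updated_at"):
    """Return only rows newer than the stored watermark, plus the new
    watermark value to persist for the next pipeline run."""
    fresh = [r for r in source_rows if r[ts_field] > watermark]
    # Advance the watermark; keep the old one if nothing new arrived
    new_watermark = max((r[ts_field] for r in fresh), default=watermark)
    return fresh, new_watermark

source = [
    {"id": 1, "updated_at": "2024-05-01T10:00"},
    {"id": 2, "updated_at": "2024-05-02T09:30"},
    {"id": 3, "updated_at": "2024-05-03T08:15"},
]
# Only rows after the last run's watermark are processed
fresh, new_wm = incremental_load(source, watermark="2024-05-01T23:59")
```

On Databricks this responsibility is often delegated to Auto Loader or Structured Streaming checkpoints, but the watermark idea is the same: persist how far you got, and resume from there.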
design. Experience architecting and building data applications using Azure, specifically a Data Warehouse and/or Data Lake. Technologies: Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, Azure Databricks and Power BI. Experience with creating low-level designs for data platform implementations. ETL pipeline development for the integration with data sources and data transformations, including the creation of supplementary … with APIs and integrating them into data pipelines. Strong programming skills in Python. Experience of data wrangling such as cleansing, quality enforcement and curation, e.g. using Azure Synapse notebooks, Databricks, etc. Experience of data modelling to describe the data landscape, entities and relationships. Experience with data migration from legacy systems to the cloud. Experience with Infrastructure as Code (IaC), particularly …