e.g., Airflow, dbt) and version control (Git). Solid understanding of data governance, security, and compliance frameworks. Nice to Have: Experience with data lake architectures (Delta Lake, Lakehouse). Familiarity with BI/visualization tools (Tableau, Power BI, Looker). Knowledge of streaming data tools (Kafka …
hands-on technical role responsible for designing, developing, and maintaining data pipelines within the IT department. The pipelines will be realised in a modern data lake environment and the engineer will collaborate in cross-functional teams to gather requirements and develop the conceptual data models. This role plays a crucial … scalability, and efficiency. Highly Desirable: Experience with Informatica ETL, Hyperion Reporting, and intermediate/advanced PL/SQL. Desirable: Experience in a financial corporation; Lakehouse/Delta Lake and Snowflake; experience with Spark clusters, both elastic permanent and transitory clusters; familiarity with data governance, data security …
ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based data lakes and warehouses Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog) to deliver reliable and performant data workflows Integrating with cloud services such as Azure, AWS, or GCP to enable … Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables Solid understanding of data warehousing principles, ETL/ELT processes, data modeling and techniques, and database systems Proven experience with at least …
Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) • Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables • Solid understanding of data warehousing principles, ETL/ELT processes, data modeling and techniques, and database systems • Proven experience with at least … ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based data lakes and warehouses • Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog) to deliver reliable and performant data workflows • Integrating with cloud services such as Azure, AWS, or GCP to enable …
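To illustrate the ingest-transform-load pattern this listing describes, here is a minimal PySpark sketch of one Databricks-style ETL step. The JDBC connection details, table names, and columns are placeholder assumptions, and a Databricks workspace with Delta Lake and Unity Catalog is assumed.

```python
# Minimal sketch: ingest from a JDBC source, transform with PySpark,
# and load into a Delta Lake table. All names below are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Extract: read a source table over JDBC (connection details are placeholders)
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://source-db:5432/sales")
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Transform: basic cleansing and enrichment
clean = (
    orders.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

# Load: append into a governed Delta table (three-part Unity Catalog name)
clean.write.format("delta").mode("append").saveAsTable("main.sales.orders_clean")
```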
enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Need …
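One common way the "data consistency and accuracy" requirement above is addressed on Azure Databricks is a Delta Lake MERGE (upsert), so re-delivered records do not duplicate. A hedged sketch; the landing path, table, and column names are illustrative assumptions:

```python
# Stage an incoming batch as a view, then MERGE (upsert) it into a
# curated Delta table with Spark SQL. Names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Stage the incoming batch (here, a Parquet drop; the path is a placeholder)
spark.read.parquet("/mnt/landing/customers/").createOrReplaceTempView("updates_batch")

# Upsert: update matching rows, insert new ones
spark.sql("""
    MERGE INTO silver.customers AS t
    USING updates_batch AS s
    ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```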
Greater London, England, United Kingdom Hybrid / WFH Options
Aventum Group
to influence others Skills and Abilities Platforms & Tools: Cloud Computing platforms (ADLS Gen2), Microsoft Stack (Synapse, Databricks, Fabric, Profisee), Azure Service Bus, Power BI, Delta Lake, Azure DevOps, Azure Monitor, Azure Data Factory, SQL Server, Azure Data Lake Storage, Azure App Service, Azure ML is a plus Languages: Python, SQL, T-SQL …
we do Passion for data and experience working within a data-driven organization Hands-on experience with architecting, implementing, and performance tuning of: Data Lake technologies (e.g. Delta Lake, Parquet, Spark, Databricks) API & Microservices Message queues, streaming technologies, and event-driven architecture NoSQL databases and query languages …
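For the streaming and event-driven items above, a minimal sketch of consuming a Kafka topic with Spark Structured Streaming and landing it in a Delta Lake table; the broker address, topic, and paths are placeholder assumptions:

```python
# Event-driven ingestion sketch: Kafka topic -> Delta Lake, continuously.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read the topic as an unbounded stream (broker and topic are placeholders)
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders-events")
    .load()
    .select(F.col("key").cast("string"), F.col("value").cast("string"), "timestamp")
)

# Append each micro-batch to a Delta path; the checkpoint enables exactly-once recovery
(
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders-events")
    .outputMode("append")
    .start("/mnt/lake/bronze/orders_events")
)
```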
Databricks, PySpark, and SQL. Experience with big data technologies, data lakes, and cloud computing. Proficiency in Azure Data Factory, Azure Synapse Analytics, and Delta Lake. Hands-on experience in machine learning, AI, and predictive analytics. Knowledge of CI/CD pipelines, DevOps practices, and infrastructure …
they scale their team and client base. Key Responsibilities: Architect and implement end-to-end, scalable data and AI solutions using the Databricks Lakehouse (Delta Lake, Unity Catalog, MLflow). Design and lead the development of modular, high-performance data pipelines using Apache Spark and PySpark. Champion the …
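For the MLflow component of the Lakehouse stack named above, a small illustrative sketch of a tracked training run; the synthetic dataset, model, and metric are stand-ins, not anything specific to this role:

```python
# Train a model and record parameters, metrics, and the model artifact
# in an MLflow run. The data here is synthetic for illustration.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # stores the artifact with the run
```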
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data sources. Experience in gathering, documenting, and refining requirements from key business …
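As a concrete example of the "Delta Lake optimisation" mentioned above, a short sketch of routine table maintenance; the table and column names are assumptions:

```python
# Routine Delta Lake maintenance: compact small files, co-locate rows on a
# common filter column, and remove stale files. Names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and Z-order on the column most queries filter by
spark.sql("OPTIMIZE sales.fact_orders ZORDER BY (order_date)")

# Remove files no longer referenced by the table, keeping 7 days of history
spark.sql("VACUUM sales.fact_orders RETAIN 168 HOURS")
```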
Strong experience designing and delivering data solutions in the Databricks Data Intelligence Platform, either on Azure or AWS. Good working knowledge of Databricks components: Delta Lake, Unity Catalog, MLflow, etc. Expertise in SQL, Python and Spark (Scala or Python). Experience working with relational SQL databases either on premises or …
services and understands the processes and execution models in financial services is an asset • Experience in Azure SQL, Azure Data Factory, Azure DevOps, Databricks, Delta Lake is a must, certification is recommended • Experience in working with high-volume heterogeneous data sets • Experience in agile software processes and …
domains fully but should be able to show strong capability in their core areas: Cloud Data Platforms Azure Synapse Analytics, Microsoft Fabric, Azure Data Lake, Azure SQL Amazon Redshift, AWS Athena, AWS Glue Google BigQuery, Google Cloud Storage, Dataproc Artificial Intelligence & Machine Learning Azure OpenAI, Azure Machine Learning Studio … Foundry AWS SageMaker, Amazon Bedrock Google Vertex AI, TensorFlow, scikit-learn, Hugging Face Data Engineering & Big Data Azure Data Factory, Azure Databricks, Apache Spark, Delta Lake AWS Glue ETL, AWS EMR Google Dataflow, Apache Beam Business Intelligence & Analytics Power BI, Amazon QuickSight, Looker Studio Embedded analytics and interactive …
complex data concepts to non-technical stakeholders. Preferred Skills: Experience with insurance platforms such as Guidewire, Duck Creek, or legacy PAS systems. Knowledge of Delta Lake, Apache Spark, and data pipeline orchestration tools. Exposure to Agile delivery methodologies and tools like JIRA, Confluence, or Azure DevOps. Understanding of …
Wilmslow, England, United Kingdom Hybrid / WFH Options
The Citation Group
or CloudFormation. Experience with workflow orchestration tools (e.g., Airflow, Dagster). Good understanding of Cloud providers – AWS, Microsoft Azure, Google Cloud. Familiarity with dbt, Delta Lake, Databricks. Experience working in Agile environments with tools like Jira and Git. About Us We are Citation. We are far from your …
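For the workflow orchestration tools this listing names, a minimal Airflow DAG sketch (assuming Airflow 2.x); the task bodies and schedule are placeholders:

```python
# A daily two-step pipeline: extract, then transform. Task logic is stubbed.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source")  # placeholder for real extraction logic

def transform():
    print("run dbt / Spark job")  # placeholder for real transformation logic

with DAG(
    dag_id="daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform runs only after extract succeeds
```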
performance, scalability, and security. Collaborate with business stakeholders, data engineers, and analytics teams to ensure solutions are fit for purpose. Implement and optimise Databricks Delta Lake, Medallion Architecture, and Lakehouse patterns for structured and semi-structured data. Ensure best practices in Azure networking, security, and federated data access …
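To make the Medallion Architecture pattern above concrete, a compact bronze/silver/gold sketch on Delta Lake; the paths, table names, and columns are illustrative assumptions:

```python
# Medallion pattern sketch: raw ingest (bronze), cleansed/conformed (silver),
# business-level aggregate (gold). All names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw source data as-is for auditability
raw = spark.read.json("/mnt/landing/claims/")
raw.write.format("delta").mode("append").saveAsTable("bronze.claims_raw")

# Silver: deduplicate and conform types for downstream use
silver = (
    spark.table("bronze.claims_raw")
    .dropDuplicates(["claim_id"])
    .withColumn("claim_date", F.to_date("claim_date"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.claims")

# Gold: business-level aggregate ready for reporting
gold = silver.groupBy("claim_date").agg(F.sum("amount").alias("total_claimed"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_claims")
```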