OneLake). Advanced proficiency in Power BI, including DAX, Power Query (M), and data modelling. Deep understanding of data warehousing, ETL, and data lakehouse concepts. Strong working knowledge of Databricks, including Delta Lake and notebooks. Strong interpersonal skills with the ability to influence and communicate complex data topics clearly. Excellent analytical, organisational, and problem-solving abilities. Experience leading or mentoring …
ELT pipelines. Understanding of data governance, security, and access control frameworks. Knowledge of batch and real-time data integration and experience with ODBC connectors or REST APIs. Familiarity with Databricks and/or Microsoft Fabric is a bonus. Experience: 3+ years in a Data Architect or senior data engineering role. Proven record of designing and delivering cloud-based data platforms …
City of London, London, United Kingdom Hybrid/Remote Options
LHH
and public sector clients (or internal clients within large organisations) through RFI/RFP responses, bid documentation, and client presentations. Hands-on experience with data science platforms such as Databricks, Dataiku, AzureML, or SageMaker, and machine learning frameworks such as TensorFlow, Keras, PyTorch, and scikit-learn. Expertise in cloud platforms (AWS, Azure, Google Cloud) and experience deploying solutions using …
design and long-term operation. Help refine and evolve engineering principles, development standards, and best practice. Skills & Experience: Hands-on experience with Azure data services such as Data Factory, Databricks, Data Lake, Azure SQL, Synapse, Data Catalog, and Purview. Strong Python and SQL skills for transformation and pipeline development. Solid understanding of Azure data storage/warehouse technologies including SQL …
with Snowflake Secure Data Sharing and Snowflake Marketplace. Familiarity with Snowpark for Python/Java-based transformations. Understanding of role-based access control, data masking, and time travel features. Databricks: Hands-on experience with Apache Spark and Databricks Runtime. Proficiency in Delta Lake for ACID-compliant data lakes. Experience with Structured Streaming and Auto Loader. Familiarity with MLflow, Feature Store … and Model Registry. Use of Databricks notebooks for collaborative development in Python, SQL, or Scala. Successful applicants should also possess: Bachelor's degree in Data Science, Analytics, Information Technology, Computer Science, Statistics, Mathematics, Quantitative Economics, Engineering, or equivalent professional education. Minimum of 3 years' experience in a Data Engineering role with a blue-chip consulting firm or in the Data Office …
managing developer platforms, tools, and services. Hands-on experience with DevOps and engineering tools (GitLab, GitHub, Azure DevOps, Docker, Kubernetes). Proficiency with AI/ML and MLOps platforms (Databricks, Google Cloud Vertex AI, SageMaker). Familiarity with generative AI technologies and frameworks (OpenAI, Google Gemini, Hugging Face Transformers). Demonstrated success in developing and executing product strategies. Ability to …
Understanding of UI/UX principles. Contributions to open-source projects. Knowledge or experience working with text-to-code language models. Knowledge or experience with big data analytics platforms (Databricks being a plus). Knowledge of data processing file formats such as Parquet, Avro, or CSV. Who You Are: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent …
experience in leading client data engineering and integration projects for major clients. Hands-on experience of designing and implementing Quantexa solutions for clients. Technical excellence in Scala, Python and Databricks. Skills we’d love to see/Amazing Extras: Experience delivering Quantexa in Financial Services, Fraud Detection, AML, or KYC domains. Exposure to DevOps and CI/CD pipelines, including …
City of London, London, United Kingdom Hybrid/Remote Options
KPMG UK
Uxbridge, England, United Kingdom Hybrid/Remote Options
Pepper Advantage
Engineering, Data Science, or a related field. 5+ years of experience in data architecture, solution design, or similar roles. Strong experience with modern data platforms and technologies (e.g., Snowflake, Databricks, BigQuery, Azure/AWS/GCP data services). Deep knowledge of data modeling, APIs, event-driven architectures, and cloud-native data architectures. Proven ability to design and implement scalable …
experience with AI/ML and AI-enabled analytics (LLMs, RAG, agents). Strong hands-on coding skills with Spark (PySpark/Scala), SQL, dbt, and modern data platforms (Databricks, Snowflake, BigQuery). Experience with cloud platforms (AWS preferred). Proven expertise in BI and semantic modeling using tools such as Power BI, Tableau, Looker, or Mode. Strong understanding of …
Swindon, Wiltshire, South West, United Kingdom Hybrid/Remote Options
Neptune (Europe) Ltd
Familiarity with SSIS, SSMS, and reporting tools such as Power BI or equivalent platforms. Experience working with cloud and on-premise databases, including solutions like Google Cloud Platform, Snowflake, Databricks, or Amazon Redshift. A meticulous attention to detail and a logical, structured approach to work. The ability to prioritise effectively and manage multiple tasks under time pressure. Strong communication skills …
pandas), with the ability to select the right tools for complex problems and set technical standards for the team. Advanced, hands-on expertise in SQL and big data platforms like Databricks, used for sophisticated data manipulation, feature engineering, and optimizing complex data workflows. Extensive, proven experience in MLOps: owning the end-to-end lifecycle of production models, including designing scalable and …
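The feature-engineering work described above would normally be done in SQL or PySpark on a platform like Databricks. As a purely illustrative, dependency-free sketch of one very common pattern named there (a per-key rolling-window feature), with all names hypothetical:

```python
from collections import defaultdict, deque

def rolling_mean_features(events, window=3):
    """For each (key, value) event, emit the mean of the last `window`
    values seen for that key -- a typical time-window feature used as
    model input."""
    buffers = defaultdict(lambda: deque(maxlen=window))  # per-key ring buffer
    features = []
    for key, value in events:
        buf = buffers[key]
        buf.append(value)  # deque drops the oldest value once full
        features.append((key, sum(buf) / len(buf)))
    return features

events = [("u1", 10), ("u1", 20), ("u2", 5), ("u1", 30), ("u1", 40)]
print(rolling_mean_features(events))
# [('u1', 10.0), ('u1', 15.0), ('u2', 5.0), ('u1', 20.0), ('u1', 30.0)]
```

In a production MLOps setting the same logic would live in a feature store and be computed with windowed SQL or Spark aggregations rather than in-memory Python.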
engineering roles, preferably for a customer-facing data product. Expertise in designing and implementing large-scale data processing systems with data tooling such as Spark, Kafka, Airflow, dbt, Snowflake, Databricks, or similar. Strong programming skills in languages such as SQL, Python, Go or Scala. Demonstrable, effective use of AI tooling in your development process. A …
Newark, Nottinghamshire, East Midlands, United Kingdom Hybrid/Remote Options
The Wildlife Trust
people-focused data engineering and analysis. You will have experience in a data engineering role, ideally with practical experience or the ability to upskill in cloud services like Azure, Databricks and ESRI, as well as excellent proven proficiency in SQL and Python. Ideally you would have familiarity with developing pipelines which support Analysts who use RStudio, Power BI and/…
Job Title: Azure Databricks Data Engineer
Primary skills: Advanced SQL, Azure Databricks, Azure Data Factory, Azure Data Lake. Secondary skills: Azure SQL, PySpark, Azure Synapse.
Experience: 10+ years. Hybrid, 3 days/week onsite is a must.
About the job: We are looking for an experienced Databricks Data Engineer to design, develop, and manage data pipelines using Azure services such as Databricks, Data Factory, and Data Lake. The role involves building scalable ETL solutions, collaborating with cross-functional teams, and processing large volumes of data. You will work closely with business and technical teams to deliver robust data models and transformations in support of analytics and reporting needs.
Responsibilities:
• Design and develop ETL pipelines using ADF for data ingestion and transformation.
… status.
Requirements - Must Have:
• 8+ years of experience in data ingestion, data processing, and analytical pipelines for big data and relational databases.
• Extensive hands-on experience with Azure services: Databricks, Data Factory, ADLS, Synapse, and Azure SQL.
• Experience in SQL, Python, and PySpark for data transformation and processing.
• Strong understanding of DevOps, CI/CD deployments, and Agile methodologies.
• Strong …
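The core responsibility in this listing is building ETL pipelines that extract, transform, and load data. As a platform-agnostic sketch only (in the role itself this would be ADF plus Databricks/PySpark against ADLS and Azure SQL, not the standard library), with all table and column names hypothetical:

```python
import csv
import io
import sqlite3

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw source data (here, inline CSV) into records."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: drop incomplete rows, cast types, derive a column."""
    out = []
    for r in rows:
        if not r.get("amount"):
            continue  # data-quality rule: skip rows with a missing amount
        amount = float(r["amount"])
        out.append((r["order_id"], amount, round(amount * 0.2, 2)))  # derived VAT
    return out

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: idempotent upsert into the target table, so re-runs are safe."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT PRIMARY KEY, amount REAL, vat REAL)"
    )
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?) "
        "ON CONFLICT(order_id) DO UPDATE SET amount=excluded.amount, vat=excluded.vat",
        rows,
    )
    conn.commit()

raw = "order_id,amount\nA1,100\nA2,\nA3,50"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # (2, 150.0)
```

The upsert in `load` is the step that makes the pipeline re-runnable without duplicating rows, which is the same idempotency property a Delta Lake `MERGE INTO` provides on Databricks.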
APIs. Proficiency with Kafka and distributed streaming systems. Solid understanding of SQL and data modeling. Experience with containerization (Docker) and orchestration (Kubernetes). Working knowledge of Flink, Spark, or Databricks for data processing. Familiarity with AWS services (ECS, EKS, S3, Lambda, etc.). Basic scripting in Python for automation or data manipulation. Secondary Skills: Experience with Datadog, Prometheus, or other …
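The streaming stack this listing names (Kafka feeding Flink, Spark, or Databricks) centres on windowed aggregation over event streams. A minimal, framework-free sketch of the tumbling-window pattern, for illustration only (event shapes and names are hypothetical):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms=1000):
    """Group (timestamp_ms, key) events into fixed, non-overlapping
    (tumbling) windows and count occurrences per key -- the basic
    aggregation behind Kafka Streams / Flink windowing."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_ms) * window_ms  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(100, "click"), (950, "click"), (1100, "view"), (1700, "click")]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (1000, 'view'): 1, (1000, 'click'): 1}
```

Real engines add what this sketch omits: watermarks for late events, state checkpointing, and incremental emission as windows close.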
The Data Engineer will have demonstrated experience in the role, with a strong track record of success. Mastery of data management and processing tools, including Power BI, Data Factory, Databricks, SQL Server, and Oracle. Proficient in SQL and experienced in database administration. Familiarity with cloud platforms such as Azure and AWS. Excellent problem-solving and analytical skills, with strong attention …
City of London, London, United Kingdom Hybrid/Remote Options
Robert Half
City of London, London, United Kingdom Hybrid/Remote Options
twentyAI
in C#, SQL, REST APIs, and single-page web applications. Hands-on experience with Azure, Git, CI/CD, and DevOps tooling. Exposure to or interest in Python and Databricks as part of a modern data ecosystem. Familiarity with tools such as Jira, Docker, Kubernetes, Octopus, Artifactory, Oracle, and SQL Server. A problem-solver with a strong understanding of software …
data pipelines and warehouses, ensuring reliability, scalability, and top-tier data quality. The position involves hands-on work with technologies such as Azure, SQL/T-SQL, SSIS, and Databricks, alongside exposure to data modeling, integration, and DevOps processes. You'll collaborate closely with analytics, IT, and security teams to deliver secure, high-performing data environments and support advanced business …