managing developer platforms, tools, and services. Hands-on experience with DevOps and engineering tools (GitLab, GitHub, Azure DevOps, Docker, Kubernetes). Proficiency with AI/ML and MLOps platforms (Databricks, Google Cloud Vertex AI, SageMaker). Familiarity with generative AI technologies and frameworks (OpenAI, Google Gemini, Hugging Face Transformers). Demonstrated success in developing and executing product strategies. Ability to More ❯
Understanding of UI/UX principles. Contributions to open-source projects. Knowledge or experience working with text-to-code language models. Knowledge or experience with big data analytics platforms (Databricks being a plus). Knowledge of data processing file formats (such as Parquet, Avro, CSV). Who You Are: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent More ❯
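As a small illustration of the file formats mentioned above, a pandas sketch (file names are hypothetical; Parquet I/O assumes pyarrow or fastparquet is installed, and Avro would need a separate library such as fastavro):

```python
import pandas as pd

# Hypothetical raw extract in CSV form (file name is a placeholder).
df = pd.read_csv("transactions.csv")

# Columnar Parquet is usually preferred over row-oriented CSV for analytics:
# smaller files, faster scans, and dtypes preserved on round-trips.
df.to_parquet("transactions.parquet", index=False)

# Reading it back keeps the original column types, unlike a CSV round-trip.
df2 = pd.read_parquet("transactions.parquet")
print(df2.dtypes)
```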
City of London, London, United Kingdom Hybrid/Remote Options
KPMG UK
experience in leading client data engineering and integration projects for major clients. Hands-on experience of designing and implementing Quantexa solutions for clients. Technical excellence in Scala, Python and Databricks. Skills we’d love to see/Amazing Extras: Experience delivering Quantexa in Financial Services, Fraud Detection, AML, or KYC domains. Exposure to DevOps and CI/CD pipelines, including More ❯
Uxbridge, England, United Kingdom Hybrid/Remote Options
Pepper Advantage
Engineering, Data Science, or a related field. 5+ years of experience in data architecture, solution design, or similar roles. Strong experience with modern data platforms and technologies (e.g., Snowflake, Databricks, BigQuery, Azure/AWS/GCP data services). Deep knowledge of data modeling, APIs, event-driven architectures, and cloud-native data architectures. Proven ability to design and implement scalable More ❯
experience with AI/ML and AI-enabled analytics (LLMs, RAG, agents). Strong hands-on coding skills with Spark (PySpark/Scala), SQL, dbt, and modern data platforms (Databricks, Snowflake, BigQuery). Experience with cloud platforms (AWS preferred). Proven expertise in BI and semantic modeling using tools such as Power BI, Tableau, Looker, or Mode. Strong understanding of More ❯
Swindon, Wiltshire, South West, United Kingdom Hybrid/Remote Options
Neptune (Europe) Ltd
Familiarity with SSIS, SSMS, and reporting tools such as Power BI or equivalent platforms. Experience working with cloud and on-premise databases, including solutions like Google Cloud Platform, Snowflake, Databricks, or Amazon Redshift. A meticulous attention to detail and a logical, structured approach to work. The ability to prioritise effectively and manage multiple tasks under time pressure. Strong communication skills More ❯
pandas), with the ability to select the right tools for complex problems and set technical standards for the team. Advanced, hands-on expertise in SQL and big data platforms like Databricks, used for sophisticated data manipulation, feature engineering, and optimizing complex data workflows. Extensive, proven experience in MLOps: owning the end-to-end lifecycle of production models, including designing scalable and More ❯
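By way of illustration, a minimal pandas feature-engineering sketch of the kind of work described above (the dataframe and column names are hypothetical; in practice the data would come from SQL or Databricks):

```python
import pandas as pd

# Hypothetical transaction data standing in for a real source table.
df = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount": [120.0, 80.0, 15.0, 300.0, 45.0],
    "ts": pd.to_datetime(["2024-01-01", "2024-01-15", "2024-01-03", "2024-02-01", "2024-02-10"]),
})

# Simple per-customer aggregate features for a downstream model.
features = (
    df.groupby("customer_id")
      .agg(
          txn_count=("amount", "size"),
          total_spend=("amount", "sum"),
          avg_spend=("amount", "mean"),
          last_txn=("ts", "max"),
      )
      .reset_index()
)
print(features)
```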
engineering roles, preferably for a customer-facing data product. Expertise in designing and implementing large-scale data processing systems with data tooling such as Spark, Kafka, Airflow, dbt, Snowflake, Databricks, or similar. Strong programming skills in languages such as SQL, Python, Go or Scala. Demonstrable use, and an understanding of the effective use, of AI tooling in your development process. A More ❯
Job Title: Azure Databricks Data Engineer. Primary skills: Advanced SQL, Azure Databricks, Azure Data Factory, Azure Data Lake. Secondary skills: Azure SQL, PySpark, Azure Synapse. Experience: 10+ Years of Experience. Hybrid - 3 Days/Week onsite is a MUST. About the job: We are looking for an experienced Databricks Data Engineer to design, develop, and manage data pipelines using Azure services such … as Databricks, Data Factory, and Data Lake. The role involves building scalable ETL solutions, collaborating with cross-functional teams, and processing large volumes of data. You will work closely with business and technical teams to deliver robust data models and transformations in support of analytics and reporting needs. • Responsibilities: • Design and develop ETL pipelines using ADF for data ingestion and transformation. … status. • Requirements - Must Have: • 8+ years of experience in data ingestion, data processing, and analytical pipelines for big data and relational databases. • Extensive hands-on experience with Azure services: Databricks, Data Factory, ADLS, Synapse, and Azure SQL. • Experience in SQL, Python, and PySpark for data transformation and processing. • Strong understanding of DevOps, CI/CD deployments, and Agile methodologies. • Strong More ❯
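As a rough sketch of the kind of Databricks/PySpark ETL step such a pipeline might contain (paths, table and column names are hypothetical; Data Factory would orchestrate the job rather than appear in the code):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest raw CSV landed in the data lake (the path is a placeholder).
raw = (
    spark.read.option("header", True)
         .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
)

# Basic cleansing and typing before publishing a curated table.
curated = (
    raw.withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
)

# On Databricks this would typically be a Delta table used by analytics and reporting.
curated.write.format("delta").mode("overwrite").saveAsTable("curated.orders")
```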
APIs. Proficiency with Kafka and distributed streaming systems. Solid understanding of SQL and data modeling. Experience with containerization (Docker) and orchestration (Kubernetes). Working knowledge of Flink, Spark, or Databricks for data processing. Familiarity with AWS services (ECS, EKS, S3, Lambda, etc.). Basic scripting in Python for automation or data manipulation. Secondary Skills Experience with Datadog, Prometheus, or other More ❯
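A minimal Spark Structured Streaming sketch of the Kafka-to-Spark pattern referenced above (broker address, topic, and payload schema are assumptions; the spark-sql-kafka connector package must be on the classpath):

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Hypothetical JSON payload schema for the incoming events.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("value", DoubleType()),
])

# Broker address and topic name are placeholders.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Write the parsed stream to the console purely for demonstration.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```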
City of London, London, United Kingdom Hybrid/Remote Options
Robert Half
The Data Engineer will have demonstrated experience as a Data Engineer, with a strong track record of success. Mastery of data management and processing tools, including Power BI, Data Factory, Databricks, SQL Server, and Oracle. Proficient in SQL and experienced in database administration. Familiarity with cloud platforms such as Azure and AWS. Excellent problem-solving and analytical skills, with strong attention More ❯
City of London, London, United Kingdom Hybrid/Remote Options
twentyAI
in C#, SQL, REST APIs, and single-page web applications. Hands-on experience with Azure, Git, CI/CD, and DevOps tooling. Exposure to or interest in Python and Databricks as part of a modern data ecosystem. Familiarity with tools such as Jira, Docker, Kubernetes, Octopus, Artifactory, Oracle, and SQL Server. A problem-solver with a strong understanding of software More ❯
data pipelines and warehouses, ensuring reliability, scalability, and top-tier data quality. The position involves hands-on work with technologies such as Azure, SQL/T-SQL, SSIS, and Databricks, alongside exposure to data modeling, integration, and DevOps processes. You'll collaborate closely with analytics, IT, and security teams to deliver secure, high-performing data environments and support advanced business More ❯
and Copilot Studio. Exposure to AI technologies in the Microsoft ecosystem (e.g., Azure ML, AI Foundry, Copilot 365, OpenAI Service). Desirable: Working understanding of Azure Data Factory, Azure Databricks and Azure Data Lake. Familiarity with Microsoft Purview. Proven success in a technical/functional Pre-sales role within a consulting environment. The Person: Strong commercial skills. Experience in delivering More ❯
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom Hybrid/Remote Options
Reed
Required Skills & Qualifications: Demonstrable experience in building data pipelines using Spark or Pandas. Experience with major cloud providers (AWS, Azure, or Google). Familiarity with big data platforms (EMR, Databricks, or DataProc). Knowledge of data platforms such as Data Lakes, Data Warehouses, or Data Meshes. Drive for self-improvement and eagerness to learn new programming languages. Ability to solve More ❯
Sheffield, South Yorkshire, England, United Kingdom Hybrid/Remote Options
Vivedia Ltd
in SQL and Python. Strong grasp of ETL/ELT pipelines, data modeling, and data warehousing. Experience with cloud platforms (AWS, Azure, GCP) and tools like Snowflake, Databricks, or BigQuery. Familiarity with streaming technologies (Kafka, Spark Streaming, Flink) is a plus. Tools & Frameworks: Airflow, dbt, Prefect, CI/CD pipelines, Terraform. Mindset: Curious, data-obsessed, and driven More ❯
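A minimal Airflow sketch of the ETL/ELT orchestration pattern these roles describe (DAG id, schedule, and task bodies are hypothetical; the schedule argument assumes a recent Airflow 2.x release, and the transform step would often be delegated to dbt):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extract: pull raw data from a source system")


def transform():
    print("transform: clean and model the data")


def load():
    print("load: publish to the warehouse, e.g. Snowflake/Databricks/BigQuery")


with DAG(
    dag_id="example_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)

    extract_t >> transform_t >> load_t
```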
scikit-learn, pandas, seaborn, and matplotlib. Significant experience writing and optimising complex SQL queries for data retrieval and manipulation in large-scale databases. Strong experience of the Azure Data Stack (Databricks/Data Factory). Desirable Experience: Exceptional ability to communicate complex insights effectively to stakeholders at all levels. A strong portfolio of Data Science projects demonstrating the ability to solve complex More ❯
of proficient experience with Python, strong experience in ML frameworks (TensorFlow, PyTorch, Scikit-learn). Experience with distributed training and optimization on GPUs (CUDA, RAPIDS). Familiarity with data pipelines (Spark, Databricks, Kafka). Hands-on experience with CI/CD for ML workflows and container orchestration (Docker, Kubernetes). Strong knowledge of algorithms, data structures, and ML system design. Practical experience deploying on More ❯
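Finally, a minimal scikit-learn training sketch of the workflow such ML roles involve (synthetic data stands in for real features; distributed GPU training and deployment are out of scope here):

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature matrix and labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train and evaluate a simple baseline model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Persist the artifact; a CI/CD pipeline would version and deploy this file.
joblib.dump(model, "model.joblib")
```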