web analytics, content management systems (CMS), subscription platforms, ad tech, and social media. Ability to automate and optimise data workflows using modern ETL/ELT tools (e.g., Airflow, dbt, Apache Spark) to ensure timely and reliable delivery of data. Experience building robust data models and reporting layers to support performance dashboards, user engagement analytics, ad revenue tracking, and … with 2+ years of hands-on experience in a data engineering role. Tools & Technologies: Databases: Proficient in relational SQL databases. Workflow Management Tools: Exposure to orchestration platforms such as Apache Airflow. Programming Languages: Skilled in one or more of the following: Python, Java, Scala. Cloud Infrastructure: Understanding of cloud infrastructure such as GCP and tools within the …
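For illustration, a minimal sketch of the kind of Airflow orchestration this listing describes; the DAG id, schedule, and task bodies are hypothetical placeholders, and the `schedule` argument assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events():
    # Placeholder: pull raw web-analytics events from a source system.
    print("extracting events")


def load_to_warehouse():
    # Placeholder: load transformed records into the reporting layer.
    print("loading to warehouse")


with DAG(
    dag_id="web_analytics_daily",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_events)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load  # run the extract before the load
```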
using RDBMS, NoSQL, and big data technologies. Data visualization – tools like Tableau. Big data – the Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig, and Hive. Data processing frameworks – Spark and Spark Streaming. Hands-on experience with multiple databases like PostgreSQL, Snowflake, Oracle, MS SQL Server, and NoSQL stores (HBase/Cassandra, MongoDB). Experience in the cloud data ecosystem – AWS, Azure …
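As a rough sketch of the Spark Structured Streaming experience called for here, assuming a local Kafka broker, a hypothetical topic name, and the spark-sql-kafka connector on the classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read a stream of raw events from Kafka (requires the spark-sql-kafka package).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")  # hypothetical topic
    .load()
)

# Kafka values arrive as bytes; cast to string for downstream parsing.
parsed = events.selectExpr("CAST(value AS STRING) AS json_payload")

# Console sink for demonstration; a real job would write to a table or lake.
query = parsed.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```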
and BI. Advanced skills in SQL and Python, with hands-on experience in relational databases across cloud and on-prem environments. Familiarity with modern data technologies such as Apache Spark, Kafka, or Snowflake. A comprehensive understanding of the data engineering lifecycle, including Agile delivery, DevOps, Git, APIs, containers, microservices, and pipeline orchestration. Nice to have …
City of London, London, United Kingdom Hybrid / WFH Options
Fortice
between the data warehouse and other systems. Create deployable data pipelines that are tested and robust, using a variety of techniques depending on the available technologies (NiFi, Spark). Build analytics tools that utilise the data pipeline to provide actionable insights into client requirements, operational efficiency, and other key business performance metrics. Complete onsite client visits and provide …
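A small sketch of the analytics step such a pipeline might feed, assuming records already landed by ingestion (e.g. via NiFi); the path and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("client-metrics").getOrCreate()

# Load records already landed by the ingestion pipeline.
orders = spark.read.parquet("/data/warehouse/orders")  # hypothetical path

# Aggregate a simple operational-efficiency metric per client.
summary = (
    orders.groupBy("client_id")
    .agg(
        F.count("*").alias("order_count"),
        F.avg("processing_seconds").alias("avg_processing_seconds"),
    )
    .orderBy(F.desc("order_count"))
)

summary.show(20)
```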
Central London, London, England, United Kingdom Hybrid / WFH Options
Reed
align tech strategy with business objectives and cost efficiency. Security & Compliance: Strong understanding of GDPR, API authentication, and observability. Big Data: Experience with data lakes, warehouses, and tools like Spark, Kafka, and Airflow. ETL Expertise: Ability to evaluate and optimize data ingestion and transformation pipelines. DevOps & CI/CD: Hands-on experience with Jenkins, GitHub Actions, Terraform, and CloudFormation.
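To make the ingestion side concrete, a minimal consumer sketch using the kafka-python client, assuming a local broker; the topic, group id, and load step are hypothetical:

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "raw-events",  # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="etl-loader",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    record = message.value
    # Placeholder load step; in practice this would write to a staging
    # table in the warehouse rather than print.
    print(record)
```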
Employment Type: Full-Time
Salary: £120,000 - £150,000 per annum, Inc benefits
City of London, London, United Kingdom Hybrid / WFH Options
Anson Mccade
knowledge of Kafka, Confluent, Databricks, Unity Catalog, and cloud-native architecture. Skilled in Data Mesh, Data Fabric, and product-led data strategy design. Experience with big data tools (e.g., Spark), ETL/ELT, SQL/NoSQL, and data visualisation. Confident communicator with a background in consultancy, stakeholder management, and Agile delivery. Want to hear more? Message me anytime on LinkedIn …
Strong track record delivering production-grade ML models. Solid grasp of MLOps best practices. Confident speaking to technical and non-technical stakeholders. 🛠️ Tech you’ll be using: Python, SQL, Spark, R; MLflow, vector databases; GitHub/GitLab/Azure DevOps; Jira, Confluence. 🎓 Bonus points for: MSc/PhD in ML or AI; Databricks ML Engineer (Professional) certified.
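For flavour, a minimal MLflow tracking sketch of the kind of workflow this stack implies; the experiment name, parameters, and metric value are hypothetical:

```python
import mlflow

mlflow.set_experiment("churn-model")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("max_depth", 6)
    mlflow.log_param("learning_rate", 0.1)
    # ... model training would happen here ...
    mlflow.log_metric("auc", 0.87)  # illustrative value
```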
or AI, including leadership roles. Deep expertise in machine learning, NLP, and predictive modelling. Proficient in Python or R, cloud platforms (AWS, GCP, Azure), and big data tools (e.g., Spark). Strong business acumen, communication skills, and stakeholder engagement. If this role is of interest, please apply here. Please note: this role cannot offer visa sponsorship.
priorities aimed at maximizing value through data utilization. Knowledge/Experience: Expertise in Commercial/Procurement Analytics. Experience in SAP (S/4HANA). Experience with Spark, Databricks, or similar data processing tools. Strong technical proficiency in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL … processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery). Experience with DataOps practices and tools, including CI/CD for data pipelines.
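One way to picture the DataOps requirement is a data-quality gate run from CI before a load is promoted; this sketch assumes a pandas-readable staged extract, and the file path, columns, and rules are hypothetical:

```python
import sys

import pandas as pd


def validate(df: pd.DataFrame) -> list:
    """Return a list of rule violations for a staged extract."""
    errors = []
    if df["spend_gbp"].lt(0).any():
        errors.append("negative spend values found")
    if df["supplier_id"].isna().any():
        errors.append("missing supplier ids")
    return errors


if __name__ == "__main__":
    staged = pd.read_parquet("staged/procurement.parquet")  # hypothetical path
    problems = validate(staged)
    if problems:
        print("\n".join(problems))
        sys.exit(1)  # non-zero exit fails the CI step
```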
City of London, London, United Kingdom Hybrid / WFH Options
Omnis Partners
deployment. 🛠️ Key Responsibilities Build and maintain high-performance data pipelines to power AI/ML use cases. Architect cloud-native data platforms using tools like Databricks, Airflow, Snowflake, and Spark. Collaborate with AI/ML teams to align data processing with model requirements. Develop ETL/ELT workflows to support feature engineering, model training, and inference. Optimise data workflows … with Scala or Java. Experience supporting AI/ML workflows and working with Data Scientists. Exposure to cloud platforms: AWS, Azure, or GCP. Hands-on with modern data tooling: Spark, Databricks, Snowflake, Airflow. Solid grasp of data modelling, orchestration, and infrastructure-as-code (Terraform, Docker, CI/CD). Excellent communication and client-facing skills; comfortable leading on technical delivery …
recommendation engines, NLP, and Computer Vision. Responsibilities: Design, develop, and productionize machine learning models across various applications. Work with Python (ideally production-level code) and other tools like SQL, Spark, and Databricks. Apply clustering, classification, regression, time series modelling, NLP, and deep learning. Develop recommendation engines and leverage third-party data enhancements. Implement MLOps/DevOps practices in cloud … to translate business challenges into data-driven solutions. Requirements: MSc or PhD in Computer Science, Artificial Intelligence, Mathematics, Statistics, or related fields. Strong Python skills (bonus: C++, SQL, Spark). Experience in ML algorithms (XGBoost, clustering, regression). Expertise in Time Series, NLP, Computer Vision, MLOps. Knowledge of AWS/Azure/GCP, CI/CD, and Agile development. Ability …
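As a short, self-contained example of the XGBoost classification work named above, trained on scikit-learn's synthetic data; all hyperparameters are illustrative rather than a prescribed setup:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic binary-classification data stands in for a real problem.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# AUC is a common evaluation choice for binary classifiers.
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, probs):.3f}")
```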
Cloudera platform is used with Hue, Hive, HDFS, and Spark. Key Skills: • Terraform, Shell/PowerShell scripting (strong skill expected) • IaC, GitLab, CI/CD pipelines • Azure Data Factory/Azure Databricks is needed as well. Role Requirements: • Core …
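To ground the Cloudera stack mentioned here, a minimal sketch of querying a Hive table from Spark with Hive support enabled; the database and table names are hypothetical:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hive-report")
    .enableHiveSupport()  # resolve tables via the Hive metastore
    .getOrCreate()
)

# Query a Hive table stored on HDFS.
daily = spark.sql(
    "SELECT event_date, COUNT(*) AS events "
    "FROM analytics.raw_events "
    "GROUP BY event_date"
)
daily.show()
```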