South West London, London, United Kingdom Hybrid/Remote Options
ARC IT Recruitment Ltd
… AWS Platform Build: Demonstrable experience designing and building modern data platforms in AWS. ETL/Orchestration Expertise: Expertise in ETL/ELT design and data orchestration, specifically with Apache Airflow. SQL Mastery: Strong SQL skills with significant experience in query tuning and performance optimisation. Programming Proficiency: Proficiency in Python and Bash (for data processing, scripting, and automation). …
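As context for the ETL and SQL-tuning skills this listing asks for, here is a minimal extract-transform-load sketch in plain Python with SQLite. Table and column names are hypothetical; a real platform for this role would target AWS services and orchestrate the steps with Airflow rather than calling them inline.

```python
import sqlite3

def extract():
    # Stand-in for pulling raw records from a source system (e.g. S3 objects).
    return [("2024-01-01", "alpha", "12.5"), ("2024-01-02", "beta", "7.25")]

def transform(rows):
    # Parse and validate raw strings into typed records.
    return [(day, name, float(amount)) for day, name, amount in rows]

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (day TEXT, product TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    # Indexing the lookup column is a basic query-tuning step.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_sales_day ON sales (day)")
    conn.commit()

def run_pipeline(conn):
    load(transform(extract()), conn)
    return conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]

if __name__ == "__main__":
    print(run_pipeline(sqlite3.connect(":memory:")))
```

The separation into extract/transform/load functions is what lets an orchestrator schedule, retry, and monitor each stage independently.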
AWS (S3, Lambda, Glue, Redshift) and/or Azure (Data Lake, Synapse). Programming & Scripting: Proficiency in Python, SQL, PySpark, etc. ETL/ELT & Streaming: Expertise in technologies like Apache Airflow, Glue, Kafka, Informatica, EventBridge, etc. Industrial Data Integration: Familiarity with OT data schemas originating from OSIsoft PI, SCADA, MES, and Historian systems. Information Modeling: Experience in defining semantic …
have framework experience within either Flask, Tornado or Django, Docker Experience working with ETL pipelines is desirable e.g. Luigi, Airflow or Argo Experience with big data technologies, such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality Preparing data for predictive and prescriptive modelling Hands-on coding experience, such as Python …
with NoSQL systems (e.g. MongoDB, DynamoDB, Elasticsearch) Experience with front-end web development technologies (React, Angular) Experience with data streaming technologies (Kafka) Experience/awareness of data engineering technologies (Apache Iceberg, Trino, Flink, Python) Key Behaviours Demonstrated leadership skills, including team mentoring and project ownership Strong collaborative ethos Strong creative and innovative problem-solving skills Experience leading development teams …
at least one cloud data platform (e.g. AWS, Azure, Google Cloud) and big data technologies (e.g. Hadoop, Spark). Strong knowledge of data workflow solutions like Azure Data Factory, Apache NiFi, Apache Airflow, etc. Good knowledge of stream and batch processing solutions like Apache Flink and Apache Kafka. Good knowledge of log management, monitoring, and analytics …
City Of Westminster, London, United Kingdom Hybrid/Remote Options
Additional Resources
of Kubernetes, Docker, and cloud-native data ecosystems. Demonstrable experience with Infrastructure as Code tools (Terraform, Ansible). Hands-on experience with PostgreSQL and familiarity with lakehouse technologies (e.g. Apache Parquet, Delta Tables). Exposure to Spark, Databricks, and data lake/lakehouse environments. Understanding of Agile development methods, CI/CD pipelines, GitHub, and automated testing. Practical experience …
have framework experience within either Flask, Tornado or Django, Docker Experience working with ETL pipelines is desirable e.g. Luigi, Airflow or Argo Experience with big data technologies, such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality Preparing data for predictive and prescriptive modelling Reporting tools (e.g. Tableau, PowerBI, Qlik) GDPR …
Terraform, CloudFormation) and CI/CD workflows. Previous exposure to geospatial data would be advantageous but is not a requirement for the position. Familiarity with Apache Spark or Databricks. Excellent communication and collaboration skills. Benefits: About Prevail Partners: Prevail Partners delivers strategic advice, intelligence, specialist capabilities, and managed services to clients ranging from governments and …
of automation IT WOULD BE NICE FOR THE SENIOR SOFTWARE ENGINEER TO HAVE.... Cloud based experience Microservice architecture or server-less architecture Big Data/Messaging technologies such as Apache NiFi/MiNiFi/Kafka TO BE CONSIDERED.... Please either apply by clicking online or emailing me directly to For further information please call me on 07704 152 640. …
London, South East, England, United Kingdom Hybrid/Remote Options
Lorien
data storytelling and operational insights. Optimise data workflows across cloud and on-prem environments, ensuring performance and reliability. Skills & Experience: Strong experience in ETL pipeline development using tools like Apache Airflow, Informatica, or similar. Advanced SQL skills and experience with large-scale relational and cloud-based databases. Hands-on experience with Tableau for data visualisation and dashboarding. Exposure to …
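Several of these listings centre on DAG-based orchestration with Airflow. As a minimal plain-Python sketch of the underlying concept, tasks with declared dependencies run in a valid order; task names here are hypothetical, and Airflow itself adds scheduling, retries, and operators on top of this idea.

```python
from graphlib import TopologicalSorter

def build_order(dependencies):
    """Return a task execution order satisfying the dependency graph."""
    return list(TopologicalSorter(dependencies).static_order())

# Each key runs only after everything in its dependency set has run.
etl_graph = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
    "report": {"load"},
}

if __name__ == "__main__":
    print(build_order(etl_graph))
```

`graphlib.TopologicalSorter` (standard library, Python 3.9+) does the ordering; in Airflow the equivalent graph is declared with operators and `>>` dependencies rather than a dict.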
London, Oxford Circus, United Kingdom Hybrid/Remote Options
Datatech
GCP/BigQuery or other cloud data warehouses (e.g. Snowflake, Redshift). Familiarity with data orchestration tools (e.g. Airflow). Experience with data visualisation platforms (such as Preset.io/Apache Superset or other). Exposure to CI/CD pipelines, ideally using GitLab CI. Background working with media, marketing, or advertising data. The Opportunity: Work alongside smart, supportive teammates …
MySQL) to modern NoSQL solutions (e.g. MongoDB, Cassandra). Focus on strategies that enhance data accessibility, integrity, and performance. Big Data Processing & Analytics: Utilise big data frameworks such as Apache Spark and Apache Flink to address challenges associated with large-scale data processing and analysis. These technologies are crucial for managing vast datasets and performing complex data transformations … as Databricks and Snowflake. Well-versed in various storage technologies including AWS S3, Google Cloud BigQuery, Cassandra, MongoDB, Neo4j, and HDFS. Adept in pipeline orchestration tools like AWS Glue, Apache Airflow, and dbt, as well as streaming technologies like Kafka, AWS Kinesis, Google Cloud Pub/Sub, and Azure Event Hubs. Data Storage Expertise: Knowledgeable in data warehousing technologies …
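The large-scale transformations this listing attributes to Spark and Flink are, logically, keyed aggregations and joins. A sketch of one such operation in plain Python follows; Spark's `reduceByKey` distributes exactly this pattern across a cluster. The record fields are hypothetical.

```python
from collections import defaultdict

def total_by_key(records):
    """Sum amounts per key, analogous to rdd.reduceByKey(operator.add)."""
    totals = defaultdict(float)
    for key, amount in records:
        totals[key] += amount
    return dict(totals)

# Toy event stream: (region, amount) pairs.
events = [("uk", 10.0), ("fr", 4.0), ("uk", 2.5)]

if __name__ == "__main__":
    print(total_by_key(events))
```

The framework's value is not the aggregation itself but partitioning the keys so this loop runs in parallel on many machines.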
… Solid understanding of DevOps principles and agile delivery. Excellent problem-solving skills and a proactive, team-oriented approach. Confident client-facing communication skills. Desirable Skills & Experience: Experience with Apache NiFi and Node.js. Familiarity with JSON, XML, XSD, and XSLT. Knowledge of Jenkins, Maven, BitBucket, and Jira. Exposure to AWS and cloud technologies. Experience working within …
teams to build scalable data pipelines and contribute to digital transformation initiatives across government departments. Key Responsibilities: Design, develop and maintain robust data pipelines using PostgreSQL and Airflow or Apache Spark Collaborate with frontend/backend developers using Node.js or React Implement best practices in data modelling, ETL processes and performance optimisation Contribute to containerised deployments (Docker/Kubernetes …
London, South East, England, United Kingdom Hybrid/Remote Options
Harnham - Data & Analytics Recruitment
Proven experience designing and implementing end-to-end MLOps processes in a production environment. Cloud ML Stack: Expert proficiency with Databricks and MLflow. Big Data/Coding: Expert Apache Spark and Python engineering experience on large datasets. Core Engineering: Strong experience with Git for version control and building CI/CD/release pipelines. Data Fundamentals: Excellent SQL …
implementing and consuming DTOs, Service/Repository patterns, etc. Performance: Proven track record of identifying and fixing performance bottlenecks. Linux & DevOps: Comfortable with Ubuntu; experience with Docker, Nginx/Apache, CI/CD tools. Version Control: Git fluency; experience collaborating via Bitbucket workflows. Communication: Strong written and verbal English; ability to explain technical concepts clearly. Nice-to-Have: Experience …
of containerisation and orchestration (e.g. Docker, Kubernetes, OpenShift). Experience with CI/CD pipelines (e.g. Jenkins, TeamCity, Concourse). Familiarity with web/application servers such as NGINX, Apache, or JBoss. Exposure to monitoring and logging tools (ELK, Nagios, Splunk, DataDog, New Relic, etc.). Understanding of security and identity management (OAuth2, SSO, ADFS, Keycloak, etc.). Experience …
to learn new technologies IT WOULD BE NICE FOR THE DATA ENGINEER TO HAVE.... Cloud based architectures Microservice architecture or server-less architecture Messaging/routing technologies such as Apache NiFi/RabbitMQ TO BE CONSIDERED.... Please either apply by clicking online or emailing me directly to For further information please call me on . I can make myself …
integration of software IT WOULD BE NICE FOR THE BIG DATA ENGINEER TO HAVE.... Cloud based architectures Microservice architecture or server-less architecture Messaging/routing technologies such as Apache NiFi/RabbitMQ Experience of DevSecOps automated deployment tools such as Jenkins, Ansible, Docker TO BE CONSIDERED.... Please either apply by clicking online or emailing me directly to For …