London (City of London), South East England, United Kingdom Hybrid / WFH Options
Medialab Group
data warehouses (e.g. Snowflake, Redshift). Familiarity with data pipelining and orchestration systems (e.g. Airflow). Understanding of modern analytics architectures and data visualisation tools (we use Preset.io/Apache Superset). Exposure to CI/CD pipelines (GitLab CI preferred). Experience with advertising or media data is a plus. …
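For orientation, a minimal sketch of the Airflow orchestration pattern this listing names; the DAG id, schedule, and loader callable are hypothetical, not from the posting (`schedule` is the Airflow 2.4+ spelling of the older `schedule_interval`):

```python
# Hypothetical daily DAG: extract media data and load it into a warehouse.
# All names here are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_to_warehouse():
    # Placeholder for e.g. a Snowflake COPY INTO or Redshift COPY call.
    print("loading extracted media data into the warehouse")


with DAG(
    dag_id="media_spend_daily",      # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # schedule_interval on older Airflow
    catchup=False,
) as dag:
    load = PythonOperator(
        task_id="load_to_warehouse",
        python_callable=load_to_warehouse,
    )
```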
City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
to create, test and validate data models and outputs. Set up monitoring and ensure data health for outputs. What we are looking for: Proficiency in Python, with experience in Apache Spark and PySpark. Previous experience with data analytics software. Ability to scope new integrations and translate user requirements into technical specifications. What’s in it for you? Base salary …
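A minimal PySpark sketch of the "create, test and validate" work described above, assuming invented column names and health rules:

```python
# Minimal PySpark data-validation sketch; schema and rules are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("model-validation").getOrCreate()

df = spark.createDataFrame(
    [(1, "2024-01-01", 120.0), (2, None, -5.0)],
    ["id", "event_date", "amount"],
)

# Simple output-health checks: null keys and out-of-range values.
null_dates = df.filter(F.col("event_date").isNull()).count()
negative_amounts = df.filter(F.col("amount") < 0).count()

print(f"rows missing event_date: {null_dates}")
print(f"rows with negative amount: {negative_amounts}")
```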
City of London, London, United Kingdom Hybrid / WFH Options
Hunter Bond
in Python for data pipelines, transformation, and orchestration. Deep understanding of the Azure ecosystem (e.g., Data Factory, Blob Storage, Synapse). Proficiency in Databricks (or strong equivalent experience with Apache Spark). Proven ability to work within enterprise-level environments with a focus on clean, scalable, and secure data solutions. If you are the right fit, contact me directly. …
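A hedged sketch of the kind of clean, scalable PySpark transformation this role implies, as it might run on Databricks; the ABFS paths and schema are placeholders, not the client's:

```python
# Illustrative PySpark transformation as might run on Databricks.
# The ABFS paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-clean").getOrCreate()

raw = spark.read.parquet(
    "abfss://raw@examplelake.dfs.core.windows.net/orders/"  # hypothetical
)

cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("amount") > 0)
)

cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"  # hypothetical
)
```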
operations teams. Strong knowledge of Linux systems and SQL. Hands-on experience with modern observability and logging tools such as the ELK Stack (Elasticsearch, Logstash, Kibana), Grafana, Prometheus, and ideally Apache Spark. A Computer Science degree from a top-tier university (or equivalent academic background). If this sounds like you, then apply now.
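As one concrete slice of the observability stack above, a minimal sketch using the Python prometheus_client library; the metric names and port are illustrative:

```python
# Expose a counter and a gauge for Prometheus to scrape; names are illustrative.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Current work queue depth")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics
    while True:
        REQUESTS.inc()
        QUEUE_DEPTH.set(random.randint(0, 50))
        time.sleep(1)
```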
Central London, London, United Kingdom Hybrid / WFH Options
Singular Recruitment
applications and high proficiency in SQL for complex querying and performance tuning. ETL/ELT Pipelines: Proven experience designing, building, and maintaining production-grade data pipelines using Google Cloud Dataflow (Apache Beam) or similar technologies. GCP Stack: Hands-on expertise with BigQuery, Cloud Storage, Pub/Sub, and orchestrating workflows with Composer or Vertex Pipelines. Data Architecture & Modelling: Ability to …
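A minimal Apache Beam pipeline of the sort that Dataflow executes; this runs locally on the DirectRunner, and the word-count logic is illustrative rather than from the posting:

```python
# Minimal Apache Beam pipeline; runs locally on the DirectRunner.
# On Dataflow you would pass --runner=DataflowRunner plus project/region options.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["alpha", "beta", "alpha"])
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```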
Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet …
management disciplines, including data integration, modelling, optimisation, data quality and Master Data Management. Experience with database technologies such as RDBMS (SQL Server, Oracle) or NoSQL (MongoDB). Knowledge of Apache technologies such as Spark, Kafka and Airflow to build scalable and efficient data pipelines. Have worked on migration projects, with some experience of management systems such as SAP, ERP …
Employment Type: Contract
Rate: £700 - £750 per day (Inside IR35)
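Since the listing above names Spark, Kafka and Airflow pipelines, a minimal kafka-python producer sketch; the broker address, topic, and payload are invented:

```python
# Minimal Kafka producer using kafka-python; broker and topic are illustrative.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("customer-updates", {"id": 42, "source": "SAP"})  # hypothetical topic
producer.flush()
```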
data architectures, such as lakehouse. Experience with CI/CD pipelines, version control systems like Git, and containerization (e.g., Docker). Experience with ETL tools and technologies such as Apache Airflow, Informatica, or Talend. Strong understanding of data governance and best practices in data management. Experience with cloud platforms and services such as AWS, Azure, or GCP for deploying … and managing data solutions. Strong problem-solving and analytical skills with the ability to diagnose and resolve complex data-related issues.
- SQL (for database management and querying)
- Apache Spark (for distributed data processing)
- Apache Spark Streaming, Kafka or similar (for real-time data streaming)
- Experience using data tools in at least one cloud service - AWS, Azure or GCP …
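A short sketch of the real-time streaming requirement above, using Spark Structured Streaming over Kafka; the broker and topic names are placeholders:

```python
# Spark Structured Streaming from Kafka; requires the spark-sql-kafka package
# on the classpath. Broker and topic names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical
    .option("subscribe", "events")                        # hypothetical topic
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```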
to learn new technologies. IT WOULD BE NICE FOR THE DATA ENGINEER TO HAVE.... Cloud-based architectures. Microservice or serverless architecture. Messaging/routing technologies such as Apache NiFi/RabbitMQ. TO BE CONSIDERED.... Please either apply by clicking online or emailing me directly to . For further information please call me on . I can make myself …
integration of software. IT WOULD BE NICE FOR THE BIG DATA ENGINEER TO HAVE.... Cloud-based architectures. Microservice or serverless architecture. Messaging/routing technologies such as Apache NiFi/RabbitMQ. Experience of DevSecOps automated deployment tools such as Jenkins, Ansible, Docker. TO BE CONSIDERED.... Please either apply by clicking online or emailing me directly to . For …
backend language. Front-end development utilising JavaScript, HTML and CSS, ideally with experience of a modern front-end JavaScript framework (ideally Vue.js). Databases (MySQL, DynamoDB) and web servers (e.g., Apache, Nginx). Code versioning tools, such as Git. Working in an Agile environment (we use Kanban) within cross-functional teams. Analysing, contributing to and refining requirements. Behaviour-Driven Development (BDD) …
Linux Infrastructure Engineering experience (Red Hat Enterprise Linux preferred). Strong background in complex IT environments, ideally within financial services. Hands-on experience with enterprise Linux applications, including: web servers (Apache, Nginx), databases (MySQL/MariaDB), containerisation (Docker, Kubernetes), automation tools (Ansible). Solid understanding of system security, patching, and performance optimisation. Ability to work both independently and collaboratively in cross …
front-end skills (HTML/CSS/JS/TS, React/Preact) plus PHP/Node.js. Experience with WordPress and modern CMS platforms. Knowledge of server management (Linux, Apache/Nginx, Docker, AWS). Design awareness and proficiency with Figma/Adobe Creative Suite. A problem-solver who thrives at the intersection of code, design, and infrastructure. A bit …
familiarity with configuration management/deployment tools such as Ansible and Red Hat Satellite. Experience with firewall and switch configuration, virtualization technologies like VMware, and software technologies such as Apache, Docker, Hadoop, MySQL, and network services (DHCP, DNS, LDAP) is essential. Experience working within governance frameworks like the National Cyber Security Centre guidance and the Government Digital Service Technology …
optimise resources. Essential Skills:
- Experience in a lead data engineering or similar senior role, with strong leadership and architecture design experience
- Advanced Python and SQL skills
- Hands-on experience with Apache Kafka, Apache Spark, and Delta tables
- Proficient in Airflow and message brokers (e.g. RabbitMQ, AWS SQS)
- Expert-level knowledge of Docker and Kubernetes …
systems, with a focus on data quality and reliability. Design and manage data storage solutions, including databases, warehouses, and lakes. Leverage cloud-native services and distributed processing tools (e.g., Apache Flink, AWS Batch) to support large-scale data workloads.
Operations & Tooling: Monitor, troubleshoot, and optimize data pipelines to ensure performance and cost efficiency. Implement data governance, access controls, and … ELT pipelines and data architectures. Hands-on expertise with cloud platforms (e.g., AWS) and cloud-native data services. Comfortable with big data tools and distributed processing frameworks such as Apache Flink or AWS Batch. Strong understanding of data governance, security, and best practices for data quality. Effective communicator with the ability to work across technical and non-technical teams. …
Additional Strengths: Experience with orchestration tools like Apache Airflow. Knowledge of real-time data processing and event-driven architectures. Familiarity with observability tools and anomaly detection for production systems. Exposure to data visualization platforms such as Tableau or Looker. Relevant cloud or data engineering certifications.
What we offer: A collaborative and transparent company culture founded on Integrity, Innovation and …
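As one hedged illustration of the AWS Batch workloads mentioned above, a boto3 job-submission sketch; the region, queue, and job-definition names are hypothetical:

```python
# Submit a containerised data job to AWS Batch with boto3.
# Queue and job-definition names are hypothetical placeholders.
import boto3

batch = boto3.client("batch", region_name="eu-west-2")

response = batch.submit_job(
    jobName="nightly-backfill",         # illustrative
    jobQueue="data-processing-queue",   # must exist in the account
    jobDefinition="flink-batch-job:1",  # must exist in the account
    containerOverrides={"command": ["python", "run_backfill.py"]},
)
print(response["jobId"])
```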
What you'll do: Lead the design of scalable, secure data architectures on AWS. Build and optimise ETL/ELT pipelines for batch and streaming data. Deploy and manage Apache Spark jobs on Databricks and Delta Lake. Write production-grade Python and SQL for large-scale data transformations. Drive data quality, governance, and automation through CI/CD and … scientists, analysts, and business stakeholders. Mentor and guide data engineering teams.
What we're looking for: Proven experience in senior/principal data engineering roles. Expertise in AWS, Databricks, Apache Spark, Python, and SQL. Strong background in cloud-native data platforms, real-time processing, and data lakes. Hands-on experience with tools such as Airflow, Kafka, Docker, GitLab …
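A brief sketch of the Spark-on-Delta-Lake pattern this role centres on; the table path and schema are illustrative:

```python
# Write a batch of records as a Delta table and read it back.
# Requires the delta-spark package; the table path is illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
    .getOrCreate()
)

df = spark.createDataFrame([(1, "ok"), (2, "late")], ["id", "status"])
df.write.format("delta").mode("overwrite").save("/tmp/events_delta")

spark.read.format("delta").load("/tmp/events_delta").show()
```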