Contract Senior Data Engineer (OUTSIDE IR35) - Databricks/Apache Spark
NEW CONTRACT VACANCY AVAILABLE - HYBRID, NORTH WEST
Contract position available for UK-based candidates
UK-based organisation - 2 days on-site in the North West
Contract Senior Data Engineer
3 months (extensions likely)
Outside IR35
Day rate: £450-500
To apply, please email.
WHO ARE WE?
We are …
WHAT WILL YOU BE DOING?
As a Senior Data Engineer, you will be responsible for designing, developing, and optimising real-time streaming data platforms. You will be highly experienced with Apache Spark and Databricks and will have used these in your most recent roles. As a senior engineer, you will be a key figure in taking control of and leading the … project. You will need to be in our North West office twice per week.
WE NEED YOU TO HAVE...
Databricks
Apache Spark
Databricks certification (preferable)
TO BE CONSIDERED...
Please either apply online or email me directly at james.gambino@searcability.com. By applying for this role, you give express consent for us to process and submit (subject to required …
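The role centres on real-time streaming with Apache Spark and Databricks. As a purely illustrative sketch of that kind of work (broker address, topic name, schema, and target table are all invented, not taken from the listing), a minimal Structured Streaming job might look like:

```python
# Hypothetical sketch: consume a Kafka topic with Spark Structured Streaming
# and append parsed events to a Delta table on Databricks. Requires the
# spark-sql-kafka package (bundled on Databricks runtimes).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Assumed event schema; a real pipeline would derive this from the producer contract.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("payload", StringType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
    .option("subscribe", "events")                     # assumed topic name
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Checkpointing lets the stream resume with exactly-once Delta writes.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .outputMode("append")
    .toTable("events_bronze")  # assumed target table name
)
```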
… production issues. Optimize applications for performance and responsiveness.
Stay Up to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements.
Documentation: Contribute to clear and concise documentation for software, processes …
Experience with cloud platforms like AWS, GCP, or Azure.
DevOps Tools: Familiarity with containerization (Docker) and infrastructure automation tools like Terraform or Ansible.
Real-time Data Streaming: Experience with Apache Pulsar or similar systems for real-time messaging and stream processing is a plus.
Data Engineering: Experience with Apache Spark, Databricks, or similar big data platforms for … processing large datasets, building data pipelines, and machine learning workflows.
Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems.
Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus.
Desired Skills
Asynchronous Programming: Familiarity with asynchronous programming tools like Celery or asyncio.
Frontend Knowledge …
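Given the workflow-orchestration tools named above, a minimal Apache Airflow DAG gives a feel for that part of the stack. This is a hedged sketch, not the employer's codebase: the DAG id, schedule, and task bodies are assumptions, written against Airflow 2.x.

```python
# Hypothetical two-step DAG: extract then transform, run daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    ...  # pull data from a source system (placeholder)


def transform() -> None:
    ...  # clean and reshape the extracted data (placeholder)


with DAG(
    dag_id="example_pipeline",          # assumed name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+ argument style
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task      # transform runs only after extract succeeds
```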
… of our client's data platform. This role is ideal for someone who thrives on building scalable data solutions and is confident working with modern tools such as Azure Databricks, Apache Kafka, and Spark. In this role, you'll play a key part in designing, delivering, and optimising data pipelines and architectures. Your focus will be on enabling … and want to make a meaningful impact in a collaborative, fast-paced environment, we want to hear from you!
Role and Responsibilities
Designing and building scalable data pipelines using Apache Spark in Azure Databricks
Developing real-time and batch data ingestion workflows, ideally using Apache Kafka
Collaborating with data scientists, analysts, and business stakeholders to build high …
Skills and Experience
We're seeking candidates who bring strong technical skills and a hands-on approach to modern data engineering. You should have:
Proven experience with Azure Databricks and Apache Spark
Working knowledge of Apache Kafka and real-time data streaming
Strong proficiency in SQL and Python
Familiarity with Azure Data Services and CI/CD pipelines …
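To make the Databricks/Spark responsibilities above concrete, here is a minimal sketch of a batch promotion from a raw ("bronze") table to a cleaned ("silver") Delta table. Table and column names are invented for illustration; this is not the client's actual pipeline.

```python
# Hypothetical bronze-to-silver batch step in Azure Databricks.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.table("bronze.events")  # assumed raw ingestion table

silver = (
    bronze.dropDuplicates(["event_id"])                    # de-duplicate on the business key
    .filter(col("event_id").isNotNull())                   # basic data-quality gate
    .withColumn("event_date", to_date(col("event_time")))  # derive a partition-friendly date
)

silver.write.mode("overwrite").saveAsTable("silver.events")
```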
… extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes, or lakehouses.
Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics.
Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging … SQL for data manipulation and scripting.
Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling.
Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink.
Experience in using modern data architectures, such as lakehouse.
Experience with CI/CD pipelines, version control systems like Git, and containerization (e.g., Docker).
Experience with ETL tools and technologies such as Apache Airflow, Informatica, or Talend.
Strong understanding of data governance and best practices in data management.
Experience with cloud platforms and services such as AWS, Azure, or GCP for deploying and managing data solutions.
Strong problem-solving and analytical skills with the ability to diagnose and resolve complex data-related issues.
SQL …
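The extract-transform-load flow described above can be illustrated with a deliberately small pandas/SQLAlchemy sketch. Connection strings, tables, and columns are all assumptions; at the scale this listing describes, the same flow would more likely run on Spark or an orchestrated ETL tool such as Airflow.

```python
# Hypothetical ETL: operational database -> simple fact table in a warehouse.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("postgresql://user:pass@source-db/sales")   # assumed source
warehouse = create_engine("postgresql://user:pass@dwh/analytics")  # assumed target

# Extract: pull raw rows from the operational system.
orders = pd.read_sql("SELECT * FROM orders", source)

# Transform: derive a dimensional-style fact table.
fact_orders = (
    orders.dropna(subset=["order_id"])
    .assign(order_date=lambda df: pd.to_datetime(df["order_ts"]).dt.date)
    .loc[:, ["order_id", "customer_id", "order_date", "amount"]]
)

# Load: append into the warehouse schema.
fact_orders.to_sql("fact_orders", warehouse, if_exists="append", index=False)
```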
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
… Be Doing
You'll be a key contributor to the development of a next-generation data platform, with responsibilities including:
Designing and implementing scalable data pipelines using Python and Apache Spark
Building and orchestrating workflows using AWS services such as Glue, Lambda, S3, and EMR Serverless
Applying best practices in software engineering: CI/CD, version control, automated testing, and modular design
Supporting the development of a lakehouse architecture using Apache Iceberg
Collaborating with product and business teams to deliver data-driven solutions
Embedding observability and quality checks into data workflows
Participating in code reviews, pair programming, and architectural discussions
Gaining domain knowledge in financial data and sharing insights with the team
What They're Looking For
… for experience with type hints, linters, and testing frameworks like pytest)
Solid understanding of data engineering fundamentals: ETL/ELT, schema evolution, batch processing
Experience or strong interest in Apache Spark for distributed data processing
Familiarity with AWS data tools (e.g., S3, Glue, Lambda, EMR)
Strong communication skills and a collaborative mindset
Comfortable working in Agile environments and …
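As a flavour of the software-engineering practices this listing calls out (type hints, pytest, modular design), here is a tiny hypothetical example: a typed transformation with its unit test beside it. The function and its inputs are invented.

```python
# Hypothetical typed helper plus a pytest test, illustrating the listing's
# emphasis on type hints and testing frameworks.
from __future__ import annotations

from datetime import date, datetime


def to_trade_date(raw_timestamp: str) -> date:
    """Parse an ISO-8601 timestamp string and return just the date part."""
    return datetime.fromisoformat(raw_timestamp).date()


def test_to_trade_date() -> None:
    assert to_trade_date("2024-01-15T09:30:00") == date(2024, 1, 15)
```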
… two of the following: Python, SQL, Java
Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams
Deep knowledge of database technologies:
Distributed systems (e.g., Spark, Hadoop, EMR)
RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL)
NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j)
Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD …
… Cleared: Required
Essential Skills & Experience:
10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks.
Strong proficiency in Python and Spark (PySpark) or Scala.
Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns.
Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage …
… enterprise environment (Airflow), infrastructure automation (Terraform), CI/CD platform (GitHub Actions & admin), password/secret management (HashiCorp Vault).
Strong data-related programming skills: SQL/Python/Spark/Scala.
Experience in database technologies in relation to data warehousing/data lake/lakehouse patterns.
Relevant experience when handling structured and unstructured data (Information Modeler …
… /medical devices preferred but not required)
Strong Python programming and data engineering skills (Pandas, PySpark, Dask)
Proficiency with databases (SQL/NoSQL), ETL processes, and modern data frameworks (Apache Spark, Airflow, Kafka)
Solid experience with cloud platforms (AWS, GCP, or Azure) and CI/CD for data pipelines
Understanding of data privacy and healthcare compliance (GDPR, HIPAA …
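Given this listing's emphasis on data privacy and healthcare compliance, one common technique is pseudonymising direct identifiers before analysis. A hedged pandas sketch follows; the column names and the salting approach are assumptions, not the employer's actual scheme.

```python
# Hypothetical pseudonymisation: replace patient identifiers with salted
# SHA-256 digests so records stay joinable without exposing the raw ID.
import hashlib

import pandas as pd

SALT = "load-me-from-a-secret-store"  # never hard-code a real salt


def pseudonymise(value: str) -> str:
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()


df = pd.DataFrame({"patient_id": ["p-001", "p-002"], "reading": [7.2, 6.8]})
df["patient_id"] = df["patient_id"].map(pseudonymise)
```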
… For further details or to enquire about other roles, please contact Nick Mandella at Harnham.
KEYWORDS: Python, SQL, AWS, GCP, Azure, Cloud, Databricks, Docker, Kubernetes, CI/CD, Terraform, PySpark, Spark, Kafka, machine learning, statistics, Data Science, Data Scientist, Big Data, Artificial Intelligence, private equity, finance.
… able to work across the full data cycle.
- Proven experience working with AWS data technologies (S3, Redshift, Glue, Lambda, Lake Formation, CloudFormation), GitHub, CI/CD
- Coding experience in Apache Spark, Iceberg, or Python (Pandas)
- Experience in change and release management
- Experience in data warehouse design and data modelling
- Experience managing data migration projects
- Cloud data platform development … the AWS services like Redshift, Lambda, S3, Step Functions, Batch, CloudFormation, Lake Formation, CodeBuild, CI/CD, GitHub, IAM, SQS, SNS, Aurora DB
- Good experience with dbt, Apache Iceberg, Docker, Microsoft BI stack (nice to have)
- Experience in data warehouse design (Kimball and lakehouse, medallion and data vault) is a definite preference, as is knowledge of other data tools and programming languages such as Python & Spark, and strong SQL experience
- Experience in building data lakes and CI/CD data pipelines
- A candidate is expected to understand and demonstrate experience across the delivery lifecycle and understand both Agile and Waterfall methods and when to apply them
Experience: This position requires several years of …
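For the AWS services listed above, a small boto3 sketch shows the kind of orchestration glue involved: starting a Glue job and polling until it reaches a terminal state. The job name is hypothetical, and a real deployment would more likely drive this from Step Functions, which the listing also names.

```python
# Hypothetical: trigger an AWS Glue job and wait for a terminal state.
import time

import boto3

glue = boto3.client("glue")
JOB_NAME = "nightly-ingest"  # assumed job name

run_id = glue.start_job_run(JobName=JOB_NAME)["JobRunId"]

while True:
    state = glue.get_job_run(JobName=JOB_NAME, RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)  # poll every 30 seconds

print(f"Glue run {run_id} finished with state {state}")
```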
… data ecosystem (e.g., Pandas, NumPy) and deep expertise in SQL for building robust data extraction, transformation, and analysis pipelines. Hands-on experience with big data processing frameworks such as Apache Spark, Databricks, or Snowflake, with a focus on scalability and performance optimization.
PREFERRED QUALIFICATIONS: Solid understanding of cloud infrastructure, particularly AWS, with practical experience using Docker, Kubernetes, and …
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Forward Role
… databases (SQL Server, MySQL) and NoSQL solutions (MongoDB, Cassandra)
Hands-on knowledge of AWS S3 and associated big data services
Extensive experience with big data technologies, including Hadoop and Spark, for large-scale dataset processing
Deep understanding of data security frameworks, encryption protocols, access management, and regulatory compliance
Proven track record building automated, scalable ETL frameworks and data pipeline …
… optimising large-scale data systems
Expertise in cloud-based data platforms (AWS, Azure, Google Cloud) and distributed storage solutions
Proficiency in Python, PySpark, SQL, NoSQL, and data processing frameworks (Spark, Databricks)
Expertise in ETL/ELT design and orchestration in Azure, as well as pipeline performance tuning & optimisation
Competent in integrating relational, NoSQL, and streaming data sources
Management of …
… Passion for data, with extensive knowledge and experience in Machine Learning techniques.
Expertise in key technologies related to Data Management.
Proficiency in Python is required; knowledge of SQL and Spark is a plus.
Experience with cloud platforms, specifically Azure and Databricks.
In-depth knowledge and experience in Data Analytics Architecture.
Understanding of Data Governance processes and platforms.
Experience with …
… in AWS. Strong expertise with AWS services, including Glue, Redshift, Data Catalog, and large-scale data storage solutions such as data lakes.
Proficiency in ETL/ELT tools (e.g., Apache Spark, Airflow, dbt).
Skilled in data processing languages such as Python, Java, and SQL.
Strong knowledge of data warehousing, data lakes, and data lakehouse architectures.
Excellent analytical …
… of professional experience in data engineering roles, preferably for a customer-facing data product
Expertise in designing and implementing large-scale data processing systems with data tooling such as Spark, Kafka, Airflow, dbt, Snowflake, Databricks, or similar
Strong programming skills in languages such as SQL, Python, Go, or Scala
Demonstrable use and understanding of effective use of AI …
Portsmouth, Hampshire, England, United Kingdom Hybrid / WFH Options
Computappoint
… /Starburst Enterprise/Galaxy administration and CLI operations
Container Orchestration: Proven track record with Kubernetes/OpenShift in production environments
Big Data Ecosystem: Strong background in Hadoop, Hive, Spark, and cloud platforms (AWS/Azure/GCP)
Systems Architecture: Understanding of distributed systems, high availability, and fault-tolerant design
Security Protocols: Experience with LDAP, Active Directory, OAuth2, and …
… (e.g., LLM-based code assistants, retrieval-augmented generation).
Experience with observability tools (Prometheus, Grafana, ELK, OpenTelemetry) and applying AI for intelligent alerting.
Knowledge of big data frameworks (Kafka, Spark, Flink) for data-driven AI use cases.
Background in finance, justice, or enterprise-scale digital transformation projects.
What We Offer
Opportunity to lead high-impact engineering teams delivering innovative …
Reigate, Surrey, England, United Kingdom Hybrid / WFH Options
esure Group
… exposure to cloud-native data infrastructures (Databricks, Snowflake), especially in AWS environments, is a plus
Experience in building and maintaining batch and streaming data pipelines using Kafka, Airflow, or Spark
Familiarity with governance frameworks, access controls (RBAC), and implementation of pseudonymisation and retention policies
Exposure to enabling GenAI and ML workloads by preparing model-ready and vector-optimised datasets …
… with excellent collaboration skills.
Grit in the face of technical obstacles.
Nice to have
Building SDKs or client libraries to support API consumption.
Knowledge of distributed data processing frameworks (Spark, Dask).
Understanding of GPU orchestration and optimization in Kubernetes.
Familiarity with MLOps and ML model lifecycle pipelines.
Experience with AI model training and fine-tuning.
Familiarity with event …