Contract Senior Data Engineer (OUTSIDE IR35) - Databricks/Apache Spark
NEW CONTRACT VACANCY AVAILABLE - HYBRID NORTH WEST
Contract position available for UK-based candidates
UK-based organisation - 2 days on-site in the North West
Contract Senior Data Engineer
3 months (extensions likely)
Outside IR35
Day rate: £450-500
To apply please email …
WHO ARE WE?
We are …
WHAT WILL YOU BE DOING?
As a Senior Data Engineer, you will be responsible for designing, developing and optimising real-time streaming data platforms. You will be highly experienced with Apache Spark and Databricks and will have used these in your most recent roles. As a Senior, you will play a key role in taking ownership of and leading the … project. You will need to be in our North West office twice per week.
WE NEED YOU TO HAVE...
Databricks
Apache Spark
Databricks certification (preferable)
TO BE CONSIDERED...
Please either apply online or email me directly at james.gambino@searcability.com. By applying for this role, you give express consent for us to process & submit (subject to required …
production issues. Optimize applications for performance and responsiveness. Stay Up to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and concise documentation for software, processes … Experience with cloud platforms like AWS, GCP, or Azure. DevOps Tools: Familiarity with containerization (Docker) and infrastructure automation tools like Terraform or Ansible. Real-time Data Streaming: Experience with Apache Pulsar or similar systems for real-time messaging and stream processing is a plus. Data Engineering: Experience with Apache Spark, Databricks, or similar big data platforms for … processing large datasets, building data pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems. Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus. Desired Skills Asynchronous Programming: Familiarity with asynchronous programming tools like Celery or asyncio. Frontend Knowledge …
London, South East, England, United Kingdom Hybrid / WFH Options
FDM Group
of the functions, leveraging cutting-edge technologies to deliver meaningful insights supporting Artificial Intelligence (AI) and Machine Learning (ML) driven solutions. Responsibilities Design, build, and optimize data pipelines using Apache Spark and Snowflake Collaborate with data scientists and analysts to support AI/ML model development and deployment Work closely with stakeholders to understand business requirements and translate … analytical solutions Manage data warehouses ensuring data organisation and optimisation Monitor data systems for failures, enhancing database performance Requirements Minimum of 5 years’ experience as a Data Engineer with Apache Spark and Snowflake in a production environment Strong understanding of AI/ML concepts, with demonstrable experience in supporting or implementing ML models Proficiency in Python or Scala …
years of experience in data engineering or a related field, with a focus on building scalable data systems and platforms. Expertise in modern data tools and frameworks such as Spark, dbt, Airflow, Kafka, Databricks, and cloud-native services (AWS, GCP, or Azure) Understanding of data modeling, distributed systems, ETL/ELT pipelines, and streaming architectures Proficiency in SQL and …
further details or to enquire about other roles, please contact Nick Mandella at Harnham. KEYWORDS Python, SQL, AWS, GCP, Azure, Cloud, Databricks, Docker, Kubernetes, CI/CD, Terraform, PySpark, Spark, Kafka, machine learning, statistics, Data Science, Data Scientist, Big Data, Artificial Intelligence, private equity, finance.
able to work across the full data cycle. - Proven experience working with AWS data technologies (S3, Redshift, Glue, Lambda, Lake Formation, CloudFormation), GitHub, CI/CD - Coding experience in Apache Spark, Iceberg or Python (Pandas) - Experience in change and release management. - Experience in data warehouse design and data modelling - Experience managing data migration projects. - Cloud data platform development … the AWS services like Redshift, Lambda, S3, Step Functions, Batch, CloudFormation, Lake Formation, CodeBuild, CI/CD, GitHub, IAM, SQS, SNS, Aurora DB - Good experience with DBT, Apache Iceberg, Docker, Microsoft BI stack (nice to have) - Experience in data warehouse design (Kimball and lakehouse, medallion and data vault) is a definite preference, as is knowledge of … other data tools and programming languages such as Python & Spark, and strong SQL experience. - Experience in building data lakes and CI/CD data pipelines - Candidates are expected to demonstrate experience across the delivery lifecycle and to understand both Agile and Waterfall methods and when to apply each. Experience: This position requires several years of …
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Forward Role
databases (SQL Server, MySQL) and NoSQL solutions (MongoDB, Cassandra) Hands-on knowledge of AWS S3 and associated big data services Extensive experience with big data technologies including Hadoop and Spark for large-scale dataset processing Deep understanding of data security frameworks, encryption protocols, access management and regulatory compliance Proven track record building automated, scalable ETL frameworks and data pipeline …
optimising large-scale data systems Expertise in cloud-based data platforms (AWS, Azure, Google Cloud) and distributed storage solutions Proficiency in Python, PySpark, SQL, NoSQL, and data processing frameworks (Spark, Databricks) Expertise in ETL/ELT design and orchestration in Azure, as well as pipeline performance tuning & optimisation Competent in integrating relational, NoSQL, and streaming data sources Management of …
in AWS. Strong expertise with AWS services, including Glue, Redshift, Data Catalog, and large-scale data storage solutions such as data lakes. Proficiency in ETL/ELT tools (e.g., Apache Spark, Airflow, dbt). Skilled in data processing languages such as Python, Java, and SQL. Strong knowledge of data warehousing, data lakes, and data lakehouse architectures. Excellent analytical …
Wilmslow, England, United Kingdom Hybrid / WFH Options
The Citation Group
we’d love you to have... Understanding of cloud computing security concepts Experience in relational cloud-based database technologies like Snowflake, BigQuery, Redshift Experience in open-source technologies like Spark, Kafka, Beam Good understanding of cloud providers – AWS, Microsoft Azure, Google Cloud Familiarity with DBT, Delta Lake, Databricks Experience working in an agile environment Here’s a taste of …
Reading, England, United Kingdom Hybrid / WFH Options
HD TECH Recruitment
e.g., Azure Data Factory, Synapse, Databricks, Fabric) Data warehousing and lakehouse design ETL/ELT pipelines SQL, Python for data manipulation and machine learning Big Data frameworks (e.g., Hadoop, Spark) Data visualisation (e.g., Power BI) Understanding of statistical analysis and predictive modelling Experience: 5+ years working with Microsoft data platforms 5+ years in a customer-facing consulting or professional …
Portsmouth, Hampshire, England, United Kingdom Hybrid / WFH Options
Computappoint
/Starburst Enterprise/Galaxy administration and CLI operations Container Orchestration: Proven track record with Kubernetes/OpenShift in production environments Big Data Ecosystem: Strong background in Hadoop, Hive, Spark, and cloud platforms (AWS/Azure/GCP) Systems Architecture: Understanding of distributed systems, high availability, and fault-tolerant design Security Protocols: Experience with LDAP, Active Directory, OAuth2, and …
Reigate, Surrey, England, United Kingdom Hybrid / WFH Options
esure Group
exposure to cloud-native data infrastructures (Databricks, Snowflake) especially in AWS environments is a plus Experience in building and maintaining batch and streaming data pipelines using Kafka, Airflow, or Spark Familiarity with governance frameworks, access controls (RBAC), and implementation of pseudonymisation and retention policies Exposure to enabling GenAI and ML workloads by preparing model-ready and vector-optimised datasets …
Bedford, Bedfordshire, England, United Kingdom Hybrid / WFH Options
Reed Talent Solutions
source systems into our reporting solutions. Pipeline Development: Develop and configure metadata-driven data pipelines using data orchestration tools such as Azure Data Factory and engineering tools like Apache Spark to ensure seamless data flow. Monitoring and Failure Recovery: Implement monitoring procedures to detect failures or unusual data profiles and establish recovery processes to maintain data integrity. … in Azure data tooling such as Synapse Analytics, Microsoft Fabric, Azure Data Lake Storage/OneLake, and Azure Data Factory. Understanding of data extraction from vendor REST APIs. Spark/PySpark or Python skills a bonus, or a willingness to develop these skills. Experience with monitoring and failure recovery in data pipelines. Excellent problem-solving skills and attention to detail …
london (city of london), south east england, united kingdom
Capgemini
and Data Practice. You will have the following experience: 8+ years of experience in data engineering or cloud development. Strong hands-on experience with AWS services. Proficiency in Databricks, Apache Spark, SQL, and Python. Experience with data modeling, data warehousing, and DevOps practices. Familiarity with Delta Lake, Unity Catalog, and Databricks REST APIs. Excellent problem-solving and communication …
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Fruition Group
best practices for data security and compliance. Collaborate with stakeholders and external partners. Skills & Experience: Strong experience with AWS data technologies (e.g., S3, Redshift, Lambda). Proficient in Python, Apache Spark, and SQL. Experience in data warehouse design and data migration projects. Cloud data platform development and deployment. Expertise across data warehouse and ETL/ELT development in …
Oxford, England, United Kingdom Hybrid / WFH Options
Akrivia Health
cloud technologies and modern engineering practices.
● Experience with the following technologies:
o Cloud Provider: AWS
o Languages: Python, PHP, Rust & SQL
o Hosting: Kubernetes
o Tooling & Analytics: Airflow, RabbitMQ, Apache Spark, Power BI
● Proven ability to complete projects according to outlined scope, budget, and timeline
● Experience with industry-standard tools such as Microsoft products, Jira, Confluence, project management tools …
Strong capability in documenting high-level designs, system workflows, and technical specifications. Deep understanding of architectural patterns (e.g., microservices, layered, serverless). Expert knowledge of big data tools (Kafka, Spark, Hadoop) and major cloud platforms (AWS, Azure, GCP) to inform technology recommendations. Proficiency in Java, C#, Python, or JavaScript/TypeScript. Experience with cloud platforms (AWS, Azure, Google Cloud) …
observability. Preferred Qualifications Exposure to machine learning workflows, model lifecycle management, or data engineering platforms. Experience with distributed systems, event-driven architectures (e.g., Kafka), and big data platforms (e.g., Spark, Databricks). Familiarity with banking or financial domain use cases, including data governance and compliance-focused development. Knowledge of platform security, monitoring, and resilient architecture patterns.
architecture, integration, governance frameworks, and privacy-enhancing technologies Experience with databases (SQL & NoSQL - Oracle, PostgreSQL, MongoDB), data warehousing, and ETL/ELT tools Familiarity with big data technologies (Hadoop, Spark, Kafka), cloud platforms (AWS, Azure, GCP), and API integrations Desirable: Data certifications (TOGAF, DAMA), government/foundational data experience, cloud-native platforms knowledge, AI/ML data requirements understanding …