Contract Senior Data Engineer (OUTSIDE IR35) - Databricks/Apache Spark NEW CONTRACT VACANCY AVAILABLE - HYBRID NORTH WEST Contract position available for UK-based candidates UK-based organisation - 2 days on-site in the North West Contract Senior Data Engineer 3 months (extensions likely) Outside IR35 Day rate: £450-500 To apply please email WHO ARE WE? We are … WHAT WILL YOU BE DOING? As a Senior Data Engineer, you will be responsible for designing, developing and optimising real-time streaming data platforms. You will be highly experienced with Apache Spark and Databricks and will have used these in your most recent roles. As a Senior, you will play a key role in leading the … project. You will need to be in our North West office twice per week. WE NEED YOU TO HAVE... Databricks Apache Spark Databricks certified (preferable) TO BE CONSIDERED... Please either apply online or email me directly at james.gambino@searcability.com By applying for this role, you give express consent for us to process & submit (subject to required …
robust way possible! Diverse training opportunities and social benefits (e.g. UK pension scheme) What do you offer? Strong hands-on experience working with modern Big Data technologies such as Apache Spark, Trino, Apache Kafka, Apache Hadoop, Apache HBase, Apache NiFi, Apache Airflow, OpenSearch Proficiency in cloud-native technologies such as containerization and Kubernetes …
production issues. Optimize applications for performance and responsiveness. Stay Up to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and concise documentation for software, processes … Experience with cloud platforms like AWS, GCP, or Azure. DevOps Tools: Familiarity with containerization (Docker) and infrastructure automation tools like Terraform or Ansible. Real-time Data Streaming: Experience with Apache Pulsar or similar systems for real-time messaging and stream processing is a plus. Data Engineering: Experience with Apache Spark, Databricks, or similar big data platforms for … processing large datasets, building data pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems. Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus. Desired Skills Asynchronous Programming: Familiarity with asynchronous programming tools like Celery or asyncio. Frontend Knowledge …
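For illustration of the asynchronous-programming familiarity this listing asks for, here is a minimal, hedged asyncio sketch; the coroutine names and delays are hypothetical stand-ins for real I/O-bound calls such as HTTP requests.

```python
import asyncio


async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (e.g. an HTTP request or DB query)
    await asyncio.sleep(delay)
    return f"{name} done"


async def main() -> None:
    # Run both "requests" concurrently rather than sequentially
    results = await asyncio.gather(
        fetch("orders", 0.2),
        fetch("customers", 0.1),
    )
    print(results)  # ['orders done', 'customers done']


asyncio.run(main())
```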
extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes or lakehouses. Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics. Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging … SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink. Experience in using modern data architectures, such as lakehouse. Experience with CI/CD pipelines, version control systems like Git, and containerization (e.g., Docker). Experience with ETL tools and technologies such as Apache Airflow, Informatica, or Talend. Strong understanding of data governance and best practices in data management. Experience with cloud platforms and services such as AWS, Azure, or GCP for deploying and managing data solutions. Strong problem-solving and analytical skills with the ability to diagnose and resolve complex data-related issues. SQL …
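As a rough illustration of the extract-transform-load pattern described above, here is a minimal PySpark sketch; the S3 paths, column names and formats are hypothetical, not taken from the listing.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV files from a (hypothetical) landing zone
raw = spark.read.option("header", "true").csv("s3://landing-zone/orders/")

# Transform: cast types, drop malformed rows, derive a partition column
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "amount"])
       .withColumn("order_date", F.to_date("created_at"))
)

# Load: write partitioned Parquet into the curated lake/warehouse layer
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://lake/curated/orders/"
)
```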
to ensure alignment and impact. Champion a culture of learning, innovation, and continuous improvement within the team. Tech Stack: Python | SQL | Snowflake | AWS (S3, EC2, Terraform, Docker) | Airflow | dbt | Apache Spark | Apache Iceberg | Postgres Requirements: Proven experience in a hands-on data engineering leadership role. Strong background in modern data engineering (pipelines, modelling, transformations, governance). Experience …
supporting multi-tenant SaaS data platforms with strategies for data partitioning, tenant isolation, and cost management Exposure to real-time data processing technologies such as Kafka, Kinesis, Flink, or Spark Streaming, alongside batch processing capabilities Strong knowledge of SaaS compliance practices and security frameworks Core Competencies Excellent problem-solving abilities with the capacity to translate requirements into production-grade …
two of the following: Python, SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD …
Cleared: Required Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage …
field. Technical Skills Required Hands-on software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). Experience with Apache Spark or any other distributed data programming frameworks. Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake. Experience with cloud infrastructure like AWS … CloudFormation. Hands-on development experience in an airline, e-commerce or retail industry. Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. Experience implementing end-to-end monitoring, quality checks, lineage tracking and automated alerts to ensure reliable and trustworthy data across the platform. Experience of …
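To make the real-time ingestion requirement concrete, a hedged Spark Structured Streaming sketch reading from Kafka follows; the broker address, topic, schema and paths are hypothetical, and the job assumes the spark-sql-kafka package (and Delta Lake, if that sink is used) is available.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# Hypothetical event payload schema
schema = StructType().add("event_id", StringType()).add("value", DoubleType())

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
         .option("subscribe", "events")                     # hypothetical topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream.format("delta")  # or "parquet" outside Databricks
          .option("checkpointLocation", "/chk/events")
          .outputMode("append")
          .start("/lake/bronze/events")
)
query.awaitTermination()
```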
West London, London, United Kingdom Hybrid / WFH Options
Young's Employment Services Ltd
a Senior Data Engineer, Tech Lead, Data Engineering Manager etc. Proven success with modern data infrastructure: distributed systems, batch and streaming pipelines Hands-on knowledge of tools such as Apache Spark, Kafka, Databricks, dbt or similar Experience building, defining, and owning data models, data lakes, and data warehouses Programming proficiency in Python, PySpark, Scala or Java. Experience operating …
/medical devices preferred but not required) Strong Python programming and data engineering skills (Pandas, PySpark, Dask) Proficiency with databases (SQL/NoSQL), ETL processes, and modern data frameworks (Apache Spark, Airflow, Kafka) Solid experience with cloud platforms (AWS, GCP, or Azure) and CI/CD for data pipelines Understanding of data privacy and healthcare compliance (GDPR, HIPAA) …
further details or to enquire about other roles, please contact Nick Mandella at Harnham. KEYWORDS Python, SQL, AWS, GCP, Azure, Cloud, Databricks, Docker, Kubernetes, CI/CD, Terraform, PySpark, Spark, Kafka, machine learning, statistics, Data Science, Data Scientist, Big Data, Artificial Intelligence, private equity, finance.
able to work across the full data cycle. - Proven experience working with AWS data technologies (S3, Redshift, Glue, Lambda, Lake Formation, CloudFormation), GitHub, CI/CD - Coding experience in Apache Spark, Iceberg or Python (Pandas) - Experience in change and release management. - Experience in data warehouse design and data modelling - Experience managing data migration projects. - Cloud data platform development … the AWS services like Redshift, Lambda, S3, Step Functions, Batch, CloudFormation, Lake Formation, CodeBuild, CI/CD, GitHub, IAM, SQS, SNS, Aurora DB - Good experience with dbt, Apache Iceberg, Docker, Microsoft BI stack (nice to have) - Experience in data warehouse design (Kimball and lakehouse, medallion and data vault) is a definite preference, as is knowledge of other data tools and programming languages such as Python & Spark, and strong SQL experience. - Experience in building data lakes and CI/CD data pipelines - Candidates are expected to understand and demonstrate experience across the delivery lifecycle and understand both Agile and Waterfall methods and when to apply them. Experience: This position requires several years of …
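As a sketch of the Spark-plus-Iceberg experience this listing names, the following shows one common way to register an Iceberg catalog and write a table from PySpark; it assumes the iceberg-spark-runtime package is on the classpath, and the catalog name, warehouse path and table identifier are hypothetical.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://my-warehouse/")  # hypothetical
    .getOrCreate()
)

df = spark.createDataFrame([(1, "2024-01-01"), (2, "2024-01-02")], ["id", "ds"])

# DataFrameWriterV2: create or replace an Iceberg table in the 'lake' catalog
df.writeTo("lake.db.events").createOrReplace()
```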
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Forward Role
databases (SQL Server, MySQL) and NoSQL solutions (MongoDB, Cassandra) Hands-on knowledge of AWS S3 and associated big data services Extensive experience with big data technologies including Hadoop and Spark for large-scale dataset processing Deep understanding of data security frameworks, encryption protocols, access management and regulatory compliance Proven track record building automated, scalable ETL frameworks and data pipeline …
optimising large-scale data systems Expertise in cloud-based data platforms (AWS, Azure, Google Cloud) and distributed storage solutions Proficiency in Python, PySpark, SQL, NoSQL, and data processing frameworks (Spark, Databricks) Expertise in ETL/ELT design and orchestration in Azure, as well as pipeline performance tuning & optimisation Competent in integrating relational, NoSQL, and streaming data sources Management of …
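By way of example, the pipeline performance tuning mentioned above often comes down to a few recurring PySpark moves, sketched here with hypothetical table paths and sizes: broadcasting the small side of a join, repartitioning by the write key, and caching only reused intermediates.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

facts = spark.read.parquet("s3://lake/facts/")  # large fact table (hypothetical)
dims = spark.read.parquet("s3://lake/dims/")    # small dimension table

# Broadcast the small side to avoid a shuffle-heavy sort-merge join
joined = facts.join(F.broadcast(dims), "dim_id")

# Repartition by the downstream write key to avoid many small output files
joined = joined.repartition(200, "event_date")

# Cache only when the result feeds multiple downstream actions
joined.cache()
joined.count()  # materialise the cache before reuse
```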
Passion for data with extensive knowledge and experience in Machine Learning techniques. Expertise in key technologies related to Data Management. Proficiency in Python is required; knowledge of SQL and Spark is a plus. Experience with Cloud platforms, specifically Azure and Databricks. In-depth knowledge and experience in Data Analytics Architecture. Understanding of Data Governance processes and platforms. Experience with …
in AWS. Strong expertise with AWS services, including Glue, Redshift, Data Catalog, and large-scale data storage solutions such as data lakes. Proficiency in ETL/ELT tools (e.g. Apache Spark, Airflow, dbt). Skilled in data processing languages such as Python, Java, and SQL. Strong knowledge of data warehousing, data lakes, and data lakehouse architectures. Excellent analytical …
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
Ingeus
you require: Degree Qualification in Computer Science, Engineering or related field Proven experience as a Data Engineer Strong proficiency in Python and experience with data manipulation frameworks such as Apache Spark In-depth knowledge of relational and non-relational databases, data modelling, and SQL Experience with cloud platforms including Azure Synapse and Fabric Proficiency in designing and implementing …
Wilmslow, England, United Kingdom Hybrid / WFH Options
The Citation Group
we’d love you to have... Understanding of cloud computing security concepts Experience in cloud-based relational database technologies like Snowflake, BigQuery, Redshift Experience in open-source technologies like Spark, Kafka, Beam Good understanding of Cloud providers – AWS, Microsoft Azure, Google Cloud Familiarity with dbt, Delta Lake, Databricks Experience working in an agile environment Here’s a taste of …
diverse set of ML and NLP models - Build and maintain batch and real-time feature computation pipelines capable of processing complex structured and unstructured data using technologies such as Spark, Apache Airflow, AWS SageMaker etc. - Contribute to the implementation of foundational ML infrastructure such as feature storage and engineering, asynchronous (batch) inference and evaluation - Apply your keen product … model training/monitoring and ML inference services - Proficiency in creating and optimizing high-throughput ETL/ELT pipelines using a Big Data processing engine such as Databricks Workflows, Spark, Flink, Dask, dbt or similar - Experience building software and/or data pipelines in the AWS cloud (SageMaker Endpoints, ECS/EKS, EMR, Glue) Why Proofpoint Protecting people is …
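For the batch feature-computation side described above, a minimal Apache Airflow DAG sketch follows (assuming Airflow 2.4+ for the `schedule` argument); the DAG id, schedule and task callable are hypothetical placeholders for a real Spark feature job.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def compute_features(**context):
    # Placeholder: trigger the Spark feature job for this execution date
    print("computing features for", context["ds"])


with DAG(
    dag_id="feature_pipeline_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="compute_features",
        python_callable=compute_features,
    )
```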
Reading, England, United Kingdom Hybrid / WFH Options
HD TECH Recruitment
e.g., Azure Data Factory, Synapse, Databricks, Fabric) Data warehousing and lakehouse design ETL/ELT pipelines SQL, Python for data manipulation and machine learning Big Data frameworks (e.g., Hadoop, Spark) Data visualisation (e.g., Power BI) Understanding of statistical analysis and predictive modelling Experience: 5+ years working with Microsoft data platforms 5+ years in a customer-facing consulting or professional …