In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a … significant impact, we encourage you to apply! Job Responsibilities. ETL/ELT Pipeline Development: Design, develop, and optimize efficient and scalable ETL/ELT pipelines using Python, PySpark, and Apache Airflow. Implement batch and real-time data processing solutions using Apache Spark. Ensure data quality, governance, and security throughout the data lifecycle. Cloud Data Engineering: Manage and optimize … effectiveness. Implement and maintain CI/CD pipelines for data workflows to ensure smooth and reliable deployments. Big Data & Analytics: Develop and optimize large-scale data processing pipelines using Apache Spark and PySpark. Implement data partitioning, caching, and performance tuning techniques to enhance Spark-based workloads. Work with diverse data formats (structured and unstructured) to support advanced analytics and More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
Requirements: 3+ years' data engineering experience; Snowflake experience; proficiency across an AWS tech stack; DevOps experience building and deploying using Terraform. Nice to Have: dbt, Data Modelling, Data Vault, Apache Airflow. Benefits: Up to 10% Bonus, Up to 14% Pension Contribution, 29 Days Annual Leave + Bank Holidays, Free Company Shares. Interviews ongoing: don't miss your chance to More ❯
Bracknell, England, United Kingdom Hybrid / WFH Options
Circana, LLC
In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a … data processing workloads. Implement CI/CD pipelines for data workflows to ensure smooth and reliable deployments. Big Data & Analytics: Build and optimize large-scale data processing pipelines using Apache Spark and PySpark. Implement data partitioning, caching, and performance tuning for Spark-based workloads. Work with diverse data formats (structured and unstructured) to support advanced analytics and machine learning More ❯
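The listing above repeatedly asks for Spark partitioning, caching, and performance tuning. For candidates gauging the level, here is a minimal PySpark sketch of that kind of work; the storage paths, column names, and partition count are hypothetical, not taken from the role.

```python
# A minimal sketch of partitioning, caching, and partitioned output in
# PySpark. All paths and column names below are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Read a hypothetical raw dataset from cloud storage.
events = spark.read.parquet("abfss://raw@example.dfs.core.windows.net/events/")

# Repartition on the aggregation key so the groupBy below shuffles evenly.
events = events.repartition(200, "customer_id")

# Cache a reused intermediate result instead of recomputing it per action.
daily = (
    events.withColumn("event_date", F.to_date("event_ts"))
    .groupBy("customer_id", "event_date")
    .agg(F.count("*").alias("event_count"))
).cache()

# Write output partitioned by date so downstream readers can prune files.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "abfss://curated@example.dfs.core.windows.net/daily_events/"
)
```

Repartitioning on the aggregation key ahead of the groupBy limits shuffle skew, and caching only pays off when the intermediate result is reused across more than one action.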
products or platforms. Strong knowledge of SQL and experience with large-scale relational and/or NoSQL databases. Experience testing data pipelines (ETL/ELT), preferably with tools like Apache Airflow, dbt, Spark, or similar. Proficiency in Python or similar scripting language for test automation. Experience with cloud platforms (AWS, GCP, or Azure), especially in data-related services. Familiarity More ❯
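This listing centres on testing ETL/ELT pipelines with Python automation. A minimal sketch of what such a test can look like with pytest and pandas; the transformation under test and its column names are invented for illustration.

```python
# A minimal data-pipeline test, runnable with pytest. The transform
# (dedupe_orders) and its columns are hypothetical examples.
import pandas as pd

def dedupe_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical transform under test: keep one row per order_id."""
    return df.drop_duplicates(subset=["order_id"]).reset_index(drop=True)

def test_dedupe_orders_removes_duplicates():
    raw = pd.DataFrame({"order_id": [1, 1, 2], "amount": [10.0, 10.0, 5.0]})
    out = dedupe_orders(raw)
    # Exactly one row per order should survive, with no nulls introduced.
    assert list(out["order_id"]) == [1, 2]
    assert out["amount"].notna().all()
```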
in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management. Proficiency in Python, SQL, Scala, or Java. Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory. Strong understanding of data architecture principles, data modelling, and data governance. Experience with cloud-based data platforms, including Azure and/or More ❯
technologies – Azure, AWS, GCP, Snowflake, Databricks. Must Have: Hands-on experience with at least 2 hyperscalers (GCP/AWS/Azure platforms), specifically in Big Data processing services (Apache Spark, Beam, or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs, Kafka … skills. A minimum of 5 years' experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Desirable Skills: Designing Databricks-based solutions More ❯
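For the Big Data processing services this listing names (Apache Spark, Beam, or equivalent), here is a minimal Apache Beam pipeline in the Python SDK. It runs locally on the DirectRunner; on GCP the same code would typically be submitted to Dataflow. The input records are invented.

```python
# A minimal Apache Beam word-count-style pipeline: parse, key, aggregate.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["alpha,1", "beta,2", "alpha,3"])
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "ToKV" >> beam.Map(lambda parts: (parts[0], int(parts[1])))
        | "SumPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```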
ideally with experience using data processing technologies such as Kafka, NoSQL databases, Airflow, TensorFlow, or Spark. Finally, experience with cloud platforms like AWS or Azure, including data services such as Apache Airflow, Athena, or SageMaker, is essential. This is a fantastic opportunity for a Data Engineer to join a rapidly expanding start-up at an important time when you will More ❯
in Python with libraries like TensorFlow, PyTorch, or Scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL, and data modeling. Familiarity with cloud platforms (AWS, Azure, GCP) for deploying ML and data solutions. Knowledge of MLOps practices More ❯
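Since this role pairs ML work with pipeline orchestration via Apache Airflow, a minimal DAG sketch may help gauge the expected level. It assumes Airflow 2.x; the DAG id, schedule, and task bodies are placeholders, not anything from the ad.

```python
# A minimal Airflow 2.x DAG: extract, then train, run daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from a hypothetical source")

def train():
    print("fit a hypothetical model on the extracted data")

with DAG(
    dag_id="ml_pipeline_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    train_task = PythonOperator(task_id="train", python_callable=train)
    extract_task >> train_task  # extract must finish before training runs
```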
of CI/CD tools and technologies (e.g., Git, GitLab, Jenkins, GCP, AWS). Knowledge of containerisation and microservice architecture. Ability to develop dashboard UIs for publishing performance (e.g., Grafana, Apache Superset). Exposure to safety certification standards and processes. We provide: Competitive salary, benchmarked against the market and reviewed annually; company share programme; hybrid and/or flexible work More ❯
Bracknell, Berkshire, United Kingdom Hybrid / WFH Options
Perforce Software, Inc
of Zend products. Participate in an on-call rotation. Requirements: Experience using the AWS EC2 web console and APIs. Deep understanding of the HTTP protocol, including web security and troubleshooting. Apache or Nginx web server administration and configuration experience. Linux system administration experience (Red Hat, Rocky, Alma, Debian, Ubuntu, et al.). Experience maintaining production RDBMS servers such as MySQL/ More ❯
VMWare General/Usage: Technical Leadership & Design; DevSecOps tooling and practices; Application Security Testing; SAFe (Scaled Agile) processes. Data Integration Focused: Data pipeline orchestration and ELT tooling such as Apache Airflow, Spark, NiFi, Airbyte, and Singer. Message brokers and streaming data processors, such as Apache Kafka. Object storage, such as S3, MinIO, LakeFS. CI/CD Pipeline, Integration, ideally More ❯
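On the message-broker side of this listing, here is a minimal producer sketch using the kafka-python client; the broker address, topic name, and payload are hypothetical.

```python
# A minimal Kafka producer with kafka-python. Broker, topic, and payload
# are invented for illustration.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    # Serialise dicts to JSON bytes before they hit the wire.
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish one event; send() is asynchronous, so flush() forces delivery.
producer.send("pipeline-events", {"source": "nifi", "status": "ok"})
producer.flush()
```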
experience leading data or platform teams in a production environment. Proven success with modern data infrastructure: distributed systems, batch and streaming pipelines. Hands-on knowledge of tools such as Apache Spark, Kafka, Databricks, dbt, or similar. Familiarity with data warehousing, ETL/ELT processes, and analytics engineering. Programming proficiency in Python, Scala, or Java. Experience operating in a cloud More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Cathcart Technology
with new methodologies to enhance the user experience. Key skills: ** Senior Data Scientist experience ** Commercial experience in Generative AI and recommender systems ** Strong Python and SQL experience ** Spark/Apache Airflow ** LLM experience ** MLOps experience ** AWS Additional information: This role offers a strong salary of up to £95,000 (depending on experience/skill) with hybrid working (2 days More ❯
data engineering or a related field. Proficiency in programming languages such as Python, Spark, SQL. Strong experience with SQL databases. Expertise in data pipeline and workflow management tools (e.g., Apache Airflow, ADF). Experience with cloud platforms (Azure preferred) and related data services. There's no place quite like BFS and we're proud of that. And it's More ❯
Watford, Hertfordshire, South East, United Kingdom
ECS
skills in system testing, QA practices, and troubleshooting methodologies. Familiarity with integration tools (ESB, Web Services, WebSphere, SOAP). Knowledge of infrastructure: Windows Server/Client, RDS, IIS/Apache/WebSphere, web browsers. Experience in system upgrades, migrations, and CI/CD (Microsoft DevOps). Proficient in SQL, XML, and basic scripting (Python/JavaScript). Able to More ❯
Senior Software Engineer to join their team in London on a full-time basis. What You'll Do: Architect and implement high-performance data processing systems in Rust. Leverage Apache Arrow and Parquet for in-memory and on-disk data efficiency. Integrate and extend systems like DataFusion, ClickHouse, and DuckDB. Design low-latency pipelines for analytical workloads. Collaborate with More ❯
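This listing is Rust-focused, but the Arrow/Parquet interplay it describes works the same across language bindings. A minimal sketch in Python with pyarrow, under the assumption of a toy tick-data schema (the file name and columns are invented):

```python
# A minimal Arrow-in-memory / Parquet-on-disk round trip with pyarrow.
import pyarrow as pa
import pyarrow.parquet as pq

# Build an in-memory Arrow table: columnar, so analytics engines can
# operate on whole columns without row-by-row conversion.
table = pa.table({"ts": [1, 2, 3], "price": [10.5, 10.7, 10.6]})

# Persist to Parquet, which keeps the columnar layout on disk.
pq.write_table(table, "ticks.parquet")

# Read back a single column; only that column's bytes are touched.
prices = pq.read_table("ticks.parquet", columns=["price"])
print(prices.column("price").to_pylist())
```

That column-pruning property of Parquet's layout is what makes engines like DataFusion and DuckDB fast on analytical scans.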
Oxford, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
Engineering (open to professionals from various data eng. backgrounds — data pipelines, ML Eng, data warehousing, analytics engineering, big data, cloud etc.) Technical Exposure: Experience with tools like SQL, Python, Apache Spark, Kafka, Cloud platforms (AWS/GCP/Azure), and modern data stack technologies Formal or Informal Coaching Experience: Any previous coaching, mentoring, or training experience — formal or informal More ❯
Southampton, England, United Kingdom Hybrid / WFH Options
Leonardo
and adaptable attitude, capable of acquiring new skills. Objective and logical, with an enquiring and creative mind. Microsoft: C#, .NET, SharePoint, SQL. Data Engineering – experience of one or more: Apache ecosystem, SQL, Python. Experience with secure DevSecOps within an Agile/SAFe environment. Software development capability. Any form of prior development expertise would be beneficial, but the eagerness and More ❯