Experience in Microsoft Azure is mandatory, including components such as Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight and Azure ML Service. Good knowledge of Python and Spark is required. Experience in ETL and ELT. Good understanding of at least one scripting language. Good understanding of how to enable analytics using cloud technology and MLOps. Experience in Azure infrastructure. …
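For readers weighing up this stack, a minimal PySpark sketch of the read-transform-write ETL pattern the role describes. The ADLS container names, paths and columns are hypothetical placeholders, not taken from the advert.

```python
# Minimal PySpark ETL sketch for the Azure stack above; paths and
# column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV landed in Azure Data Lake Storage (abfss = ADLS Gen2)
raw = (spark.read
       .option("header", True)
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

# Transform: type the amount column and aggregate per day
daily = (raw
         .withColumn("amount", F.col("amount").cast("double"))
         .groupBy("order_date")
         .agg(F.sum("amount").alias("total_amount")))

# Load: write the curated output back to the lake in Parquet
daily.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/daily_orders/")
```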
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
… ability to explain technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: programming languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of two). Extensive hands-on Big Data experience across coding, configuration, automation, monitoring and security is necessary. Significant … MUST have the Right to Work in the UK long-term, as our client is NOT offering sponsorship for this role. KEYWORDS: Lead Data Engineer, Senior Lead Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, On-Prem, Cloud, ETL, Azure Data Fabric, ADF, Databricks, Azure Data, Delta Lake, Data Lake. Please note that due to a …
… Science, Computer Science, or a related field. 5+ years of experience in data engineering and data quality. Strong proficiency in Python/Java, SQL, and data processing frameworks including Apache Spark. Knowledge of machine learning and its data requirements. Attention to detail and a strong commitment to data integrity. Excellent problem-solving skills and ability to work in a …
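A small illustration of the data-quality side of such a role: assertion-style checks that stop a pipeline when integrity rules fail. The table path, key column and rules are assumptions for the sketch.

```python
# Illustrative data-quality gate in PySpark; path and key column are
# hypothetical, and the thresholds are an assumption for the sketch.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/data/curated/customers")  # placeholder path

total = df.count()
null_ids = df.filter(F.col("customer_id").isNull()).count()
duplicate_ids = total - df.select("customer_id").distinct().count()

# Fail the pipeline loudly rather than let bad data flow downstream
assert null_ids == 0, f"{null_ids} rows have a NULL customer_id"
assert duplicate_ids == 0, f"{duplicate_ids} duplicate customer_id values"
print(f"Data-quality checks passed on {total} rows")
```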
Slough, South East England, United Kingdom Hybrid / WFH Options
Widen the Net Limited
Senior Data Engineer/Data Analytics Engineer - fully remote, working from anywhere in the UK. SQL + Python + ETL + Apache Airflow. Our client is a global leading and fast-growing high-tech company: -Over 6,500 employees across 20+ offices; -300 million+ active users on some of the platforms they developed; -Cutting-edge AR and VR technologies, 3D printing, etc. They are … FinTech team! You will develop scalable data pipelines, ensure data quality, and support business decision-making with high-quality datasets. -Work across the technology stack: SQL, Python, ETL, BigQuery, Spark, Hadoop, Git, Apache Airflow, Data Architecture, Data Warehousing. -Design and develop scalable ETL pipelines to automate data processes and optimize delivery. -Implement and manage data warehousing solutions, ensuring … data integrity through rigorous testing and validation. -Lead, plan and execute workflow migration and data orchestration using Apache Airflow. -Focus on data engineering and data analytics. Requirements: -5+ years of experience in SQL. -5+ years of development in Python. -MUST have strong experience in Apache Airflow. -Experience with ETL tools, data architecture, and data warehousing solutions. …
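Since strong Apache Airflow experience is the headline requirement here, a minimal DAG sketch showing an extract-transform-load dependency chain; the task bodies and schedule are illustrative stand-ins, not the client's pipelines.

```python
# Minimal Airflow DAG sketch; tasks are placeholders for real ETL steps.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")        # e.g. query an upstream SQL database

def transform():
    print("clean and aggregate")     # e.g. pandas/Spark transformations

def load():
    print("write to warehouse")      # e.g. load into BigQuery

with DAG(
    dag_id="example_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3   # run the steps strictly in sequence
```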
… technologies – Azure, AWS, GCP, Snowflake, Databricks. Must have hands-on experience on at least two hyperscalers (GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs … skills. A minimum of 5 years' experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Designing Databricks-based solutions for …
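For the Big Data processing services mentioned (Apache Spark, Beam or equivalent), a toy Apache Beam pipeline in Python; it runs locally on the DirectRunner, and the same code can target Dataflow. The file names are placeholders.

```python
# Toy Beam word-count pipeline; input/output paths are placeholders.
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | "Read" >> beam.io.ReadFromText("events.txt")
     | "Split" >> beam.FlatMap(lambda line: line.split())      # words from lines
     | "Pair" >> beam.Map(lambda word: (word, 1))
     | "Count" >> beam.CombinePerKey(sum)
     | "Format" >> beam.MapTuple(lambda word, n: f"{word}\t{n}")
     | "Write" >> beam.io.WriteToText("counts"))
```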
… in Python with libraries like TensorFlow, PyTorch, or scikit-learn for ML, and pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL, and data modeling. Familiarity with cloud platforms (AWS, Azure, GCP) for deploying ML and data solutions. Knowledge of MLOps …
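A compact example of the ML tooling this ad lists, pairing pandas with scikit-learn; the synthetic dataset stands in for a real feature set.

```python
# Train/evaluate loop with pandas + scikit-learn; data is synthetic.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(10)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```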
Slough, South East England, United Kingdom Hybrid / WFH Options
Mars
… curious, keep learning, and help shape our digital platforms. 🧠 What we’re looking for: Proven experience as a Data Engineer in cloud environments (Azure ideal). Proficiency in Python, SQL, Spark, Databricks. Familiarity with Hadoop, NoSQL, Delta Lake. Bonus: Azure Functions, Logic Apps, Django, CI/CD tools. 💼 What you’ll get from Mars: A competitive salary & bonus. Hybrid working …
… delivery. Core Experience: Proven track record in data architecture, either from a delivery or enterprise strategy perspective. Deep experience with cloud platforms (Azure, AWS, GCP) and modern data ecosystems (Spark, Databricks, Kafka, Snowflake). Strong understanding of Data Mesh, Data Fabric, and data product-led approaches. Data modelling expertise (relational, dimensional) and familiarity with tools like Erwin, Sparx, Archi. Experience …
… and frameworks. Stakeholder management. Expertise in relational and dimensional modelling, including big data technologies. Exposure across the full SDLC, including testing and deployment. Good knowledge of Python and Spark is required. Experience in ETL and ELT. Good understanding of at least one scripting language. Good understanding of how to enable analytics using cloud technology and MLOps. Experience in Azure infrastructure. …
… architecture patterns for LLMs, NLP, MLOps, RAG, APIs, and real-time data integration. Strong background in working with cloud platforms (GCP, AWS, Azure) and big data technologies (e.g., Kafka, Spark, Snowflake, Databricks). Demonstrated ability to work across matrixed organizations and partner effectively with IT, security, and business stakeholders. Experience collaborating with third-party tech providers and managing outsourced solution …
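For the RAG pattern mentioned above, a toy retrieval step: embed a small corpus, embed the query, and return the closest document as context for an LLM. The model name and corpus here are illustrative assumptions, not anything specified by the role.

```python
# Toy RAG retrieval step; model choice and documents are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices are processed within five working days.",
    "Refund requests must include the original order number.",
    "Support is available Monday to Friday, 9am to 5pm.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(docs, normalize_embeddings=True)

query = "How long does invoice processing take?"
q_emb = model.encode([query], normalize_embeddings=True)

scores = doc_emb @ q_emb.T          # cosine similarity (vectors normalised)
best = int(np.argmax(scores))
print(docs[best])                   # context to prepend to the LLM prompt
```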
Key Responsibilities: Design and implement real-time data pipelines using tools like Apache Kafka, Apache Flink, or Spark Streaming. Develop and maintain event schemas using Avro, Protobuf, or JSON Schema. Collaborate with backend teams to integrate event-driven microservices. Ensure data quality, lineage, and observability across streaming systems. Optimize performance and scalability of streaming applications. Implement CI … development. Strong programming skills in Python, Java, or Scala. Hands-on experience with Kafka, Kinesis, or similar messaging systems. Familiarity with stream processing frameworks like Flink, Kafka Streams, or Spark Structured Streaming. Solid understanding of event-driven design patterns (e.g., event sourcing, CQRS). Experience with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code tools. Knowledge of …
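To make the event-driven requirements concrete, a bare-bones producer/consumer pair using the confluent-kafka Python client; the broker address, topic and payload are placeholders.

```python
# Minimal Kafka round trip with confluent-kafka; all names are placeholders.
import json
from confluent_kafka import Producer, Consumer

producer = Producer({"bootstrap.servers": "localhost:9092"})
event = {"type": "OrderPlaced", "order_id": 123, "amount": 9.99}
producer.produce("orders", value=json.dumps(event).encode("utf-8"))
producer.flush()                       # block until the broker acknowledges

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "order-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])
msg = consumer.poll(5.0)               # wait up to 5 s for one message
if msg is not None and msg.error() is None:
    print(json.loads(msg.value()))
consumer.close()
```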
… ability to explain complex data concepts to non-technical stakeholders. Preferred Skills: Experience with insurance platforms such as Guidewire, Duck Creek, or legacy PAS systems. Knowledge of Delta Lake, Apache Spark, and data pipeline orchestration tools. Exposure to Agile delivery methodologies and tools like JIRA, Confluence, or Azure DevOps. Understanding of regulatory data requirements such as Solvency II …
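As a sketch of the Delta Lake knowledge asked for, a PySpark upsert using Delta's MERGE; the table path, join key and updates frame are hypothetical, and the session would need the Delta Lake packages configured.

```python
# Illustrative Delta Lake upsert; path, key and data are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("policy-upsert").getOrCreate()

updates = spark.createDataFrame(
    [(1, "ACTIVE"), (2, "LAPSED")], ["policy_id", "status"])

target = DeltaTable.forPath(spark, "/delta/policies")
(target.alias("t")
 .merge(updates.alias("u"), "t.policy_id = u.policy_id")
 .whenMatchedUpdateAll()       # refresh rows that already exist
 .whenNotMatchedInsertAll()    # insert genuinely new policies
 .execute())
```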
Reading, England, United Kingdom Hybrid / WFH Options
Areti Group | B Corp™
… support for Data Analysts with efficient and performant queries. • Skilled in optimizing data ingestion and query performance for MSSQL or other RDBMS. • Familiar with data processing frameworks such as Apache Spark. • Highly analytical and tenacious in solving complex problems.
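One common way to approach the MSSQL query-performance work described here is to pull the most CPU-expensive statements from the standard DMVs via Python. Connection details are placeholders, and the DMV query is a stock pattern, not the employer's own tooling.

```python
# Find expensive MSSQL queries from Python; connection string is a placeholder.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;DATABASE=analytics;Trusted_Connection=yes;")

sql = """
SELECT TOP 5
       qs.total_worker_time / qs.execution_count AS avg_cpu,
       qs.execution_count,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu DESC;
"""
# Print the worst offenders by average CPU per execution
for row in conn.cursor().execute(sql):
    print(row.avg_cpu, row.execution_count, row.query_text)
```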
Slough, South East England, United Kingdom Hybrid / WFH Options
Fortice
… between the data warehouse and other systems. Create deployable data pipelines that are tested and robust, using a variety of technologies and techniques depending on the available technologies (NiFi, Spark). Build analytics tools that utilise the data pipeline to provide actionable insights into client requirements, operational efficiency, and other key business performance metrics. Complete onsite client visits and provide …
… Lambda). Strong background in data architecture, including data modeling, warehousing, real-time and batch processing, and big data frameworks. Proficiency with modern data tools and technologies such as Spark, Databricks, Kafka, or Snowflake (bonus). Knowledge of cloud security, networking, and cost optimization as it relates to data platforms. Experience in total cost of ownership estimation and managing …
… code. Experience working on distributed systems. Strong knowledge of Kubernetes and Kafka. Experience with Git and deployment pipelines. Having worked with at least one of the following stacks: Hadoop, Apache Spark, Presto; AWS Redshift, Azure Synapse or Google BigQuery. Experience profiling performance issues in database systems. Ability to learn and/or adapt quickly to complex issues. Happy …
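On profiling database performance from Python, a minimal plan-inspection sketch against a Postgres-compatible database (Redshift speaks the same wire protocol); the credentials and the query are illustrative.

```python
# Inspect a query plan from Python; connection details are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="localhost", dbname="analytics", user="dev", password="dev")

with conn.cursor() as cur:
    cur.execute("EXPLAIN SELECT user_id, COUNT(*) "
                "FROM events GROUP BY user_id;")
    for (line,) in cur.fetchall():   # EXPLAIN returns one plan line per row
        print(line)
```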
… modelling, machine-learning, clustering and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau). …
Slough, South East England, United Kingdom Hybrid / WFH Options
Atarus
… Partner with cross-functional teams to deliver robust data solutions. 💡 What You’ll Bring: Strong hands-on experience building streaming data platforms. Deep understanding of tools like Kafka, Flink, Spark Streaming, etc. Proficiency in Python, Java, or Scala. Cloud experience with AWS, GCP, or Azure. Familiarity with orchestration tools like Airflow, Kubernetes. Collaborative, solutions-focused mindset and a willingness …
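A minimal Spark Structured Streaming job of the kind such roles involve, reading from Kafka and printing to the console; the broker, topic and checkpoint path are placeholders, and the Kafka source needs the spark-sql-kafka package available.

```python
# Minimal Structured Streaming read-from-Kafka sketch; names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load()
          .select(F.col("value").cast("string").alias("payload")))

query = (events.writeStream
         .format("console")                           # stdout sink for the demo
         .option("checkpointLocation", "/tmp/stream-demo-ckpt")
         .start())
query.awaitTermination()
```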
… predictive modelling, machine learning, clustering and classification techniques. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau).
… Analysts with efficient and performant SQL. Performance optimisation of data ingestion and query performance for MSSQL (or transferable skills from another RDBMS). Familiar with data processing frameworks such as Apache Spark. Experience of working with terabyte data sets and managing rapid data growth. The benefits at APF: At AllPoints Fibre, we're all about looking after you. We offer …