packages. 8. Strong analytical, problem-solving, and critical thinking skills. 9. Experience with social media analytics and understanding of user behaviour. 10. Familiarity with big data technologies, such as Apache Hadoop, Apache Spark, or Apache Kafka. 11. Knowledge of AWS machine learning services, such as Amazon SageMaker and Amazon Comprehend. 12. Experience with data governance and More ❯
London, England, United Kingdom Hybrid / WFH Options
Luupli
tools, and statistical packages. Strong analytical, problem-solving, and critical thinking skills. 8. Experience with social media analytics and understanding of user behaviour. 9. Familiarity with big data technologies, such as Apache Hadoop, Apache Spark, or Apache Kafka. 10. Knowledge of AWS machine learning services, such as Amazon SageMaker and Amazon Comprehend. 11. Experience with data governance and security best More ❯
Functions, Azure SQL Database, HDInsight, and Azure Machine Learning Studio. Data Storage & Databases: SQL & NoSQL Databases: Experience with databases like PostgreSQL, MySQL, MongoDB, and Cassandra. Big Data Ecosystems: Hadoop, Spark, Hive, and HBase. Data Integration & ETL: Data Pipelining Tools: Apache NiFi, Apache Kafka, and Apache Flink. ETL Tools: AWS Glue, Azure Data Factory, Talend, and Apache More ❯
London, England, United Kingdom Hybrid / WFH Options
bigspark
exclusive features. Senior Python Software Engineer - UK Remote About Us bigspark, a UK-based consultancy delivering next-level data platforms and solutions with a focus on exciting technologies including Apache Spark and Apache Kafka, and working on projects within Machine Learning, Data Engineering, Streaming and Data Science, is looking for a Python Software Engineer to join our team More ❯
with DevOps practices for data engineering, including infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog). Familiarity with big data technologies like Apache Spark, Hadoop, or similar. ETL/ELT tools and creating common data sets across on-prem (IBM DataStage ETL) and cloud data stores Leadership & Strategy: Lead Data Engineering team More ❯
obtain UK security clearance. We do not sponsor visas. Preferred Skills and Experience Public sector experience Knowledge of cloud platforms (IBM Cloud, AWS, Azure) Experience with big data frameworks (Apache Spark, Hadoop) Data warehousing and BI tools (IBM Cognos, Tableau) Additional Details Seniority level: Mid-Senior level Employment type: Full-time Job function: Information Technology Industries: IT Services More ❯
Experience leading a small team of data engineers. Extensive knowledge as a Data Engineer. Proven success in designing and building data products on Databricks, Snowflake, GCP Big Data, Hadoop, Spark, etc. Excellent problem-solving, analytical, and troubleshooting skills. Strong communication and team collaboration abilities. Programming skills in Python (PySpark preferred), Scala, or SQL. Experience designing and implementing enterprise-level … knowledge of ETL processes. Ability to write production-grade, automated testing code. Experience deploying via CI/CD platforms like GitHub Actions or Jenkins. Proficiency with distributed frameworks like Apache Spark. Experience with cloud platforms (AWS, Azure, GCP) and services (S3, Redshift, BigQuery). Knowledge of data modelling, database systems, and SQL optimisation. Other key criteria Knowledge of UK More ❯
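Several of these listings ask for production-grade Python/PySpark. As a rough sketch of what that looks like in practice, the snippet below derives a daily aggregate with Spark and writes partitioned output; the bucket paths, column names, and aggregation logic are illustrative assumptions, not drawn from any posting.

```python
# Minimal PySpark transformation of the kind these roles describe.
# All paths and column names are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Read raw events, derive a per-user daily count, write partitioned output.
events = spark.read.parquet("s3://example-bucket/raw/events/")
daily = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "user_id")
    .agg(F.count("*").alias("event_count"))
)
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_user_events/"
)
```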
tuning skills. Preferred Qualifications Strong communication skills and demonstrated ability to engage with business stakeholders and product teams. Experience in data modeling, data warehousing (e.g., Snowflake, AWS Glue, EMR, Apache Spark), and working with data pipelines. Leadership experience—whether technical mentorship, team leadership, or managing critical projects. Familiarity with Infrastructure as Code (IaC) tools like Terraform, CloudFormation More ❯
Bristol, England, United Kingdom Hybrid / WFH Options
Lloyds Bank plc
working with relational and non-relational databases to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation: Good knowledge of containers … AWS, or Azure. Good understanding of cloud storage, networking, and resource provisioning. It would be great if you had... Certification in GCP “Professional Data Engineer”. Certification in Apache Kafka (CCDAK). Proficiency across the data lifecycle. Working for us: Our focus is to ensure we are inclusive every day, building an organisation that reflects modern society and More ❯
Python, or C# with Spring Boot or .NET Core. Data Platforms: Warehouses: Snowflake, Google BigQuery, or Amazon Redshift. Analytics: Tableau, Power BI, or Looker for client reporting. Big Data: Apache Spark or Hadoop for large-scale processing. AI/ML: TensorFlow or Databricks for predictive analytics. Integration Technologies: API Management: Apigee, AWS API Gateway, or MuleSoft. Middleware: Red More ❯
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
for data models, ETL processes, and BI solutions. Ensure data accuracy, integrity, and consistency across the data platform. Knowledge, Skills and Experience: Essential Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience More ❯
Ibstock, England, United Kingdom Hybrid / WFH Options
Ibstock Plc
comprehensive documentation for data models, ETL processes, and BI solutions. Ensure data accuracy, integrity, and consistency across the data platform. Knowledge, Skills and Experience: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience More ❯
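The Databricks and Delta Lake optimisation requirements above can be made concrete with a short sketch. The mount path, table, and clustering column below are hypothetical, and OPTIMIZE … ZORDER BY assumes a Databricks (or compatible Delta) runtime where a `spark` session is already available.

```python
# Illustrative Delta Lake ingest plus compaction on Databricks.
# Table and column names are hypothetical examples.
df = spark.read.json("/mnt/raw/sales/")  # `spark` is provided by Databricks
df.write.format("delta").mode("append").saveAsTable("lakehouse.sales_bronze")

# Compact small files and co-locate rows that are often filtered together.
spark.sql("OPTIMIZE lakehouse.sales_bronze ZORDER BY (customer_id)")
```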
London, England, United Kingdom Hybrid / WFH Options
Circadia Technologies Ltd
frameworks such as Boost.Test, Google Test, etc. Nice to Haves: Experience with Azure services for managing GPT pipelines and multi-cloud infrastructure. Familiarity with big data technologies such as Apache Spark, Kafka, and MSK for large-scale data processing. Experience with Boost libraries (asio, beast). Advanced experience in cost optimization strategies for cloud infrastructure and database performance More ❯
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
strategies. Strong experience in IaC and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks, or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes) Ability to create data pipelines on a More ❯
London, England, United Kingdom Hybrid / WFH Options
Trudenty
real-time data pipelines for processing large-scale data. Experience with ETL processes for data ingestion and processing. Proficiency in Python and SQL. Experience with big data technologies like Apache Hadoop and Apache Spark. Familiarity with real-time data processing frameworks such as Apache Kafka or Flink. MLOps & Deployment: Experience deploying and maintaining large-scale ML inference More ❯
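To illustrate the kind of real-time scoring pipeline this posting describes, here is a minimal consume-and-score loop using the kafka-python client; the topic name, broker address, and the predict() rule are placeholder assumptions standing in for a real deployed model.

```python
# Sketch of a streaming ML inference loop: consume events from Kafka
# and score each one. All names are hypothetical.
import json
from kafka import KafkaConsumer  # pip install kafka-python

def predict(features: dict) -> float:
    """Stand-in for a real model: flags large transaction amounts."""
    return 1.0 if float(features.get("amount", 0)) > 1000 else 0.0

consumer = KafkaConsumer(
    "transactions",                      # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    score = predict(message.value)       # one event scored per message
    print(message.value, "->", score)
```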
London, England, United Kingdom Hybrid / WFH Options
Foundever
and monitoring systems. Skills/Abilities/Knowledge Proficiency in data modeling and database management. Strong programming skills in Python and SQL. Knowledge of big data technologies like Hadoop, Spark, and NoSQL databases. Deep experience with ETL processes and data pipeline development. Strong understanding of data warehousing concepts and best practices. Experience with cloud platforms such as AWS and … Science or Engineering Languages Excellent command of English. French and Spanish language skills are a bonus. Tools and Applications Programming languages and tools: Python, SQL. Big data technologies: Hadoop, Spark, NoSQL databases. ETL and data pipeline tools: AWS Glue, Airflow. Cloud platforms: AWS, Azure. Data visualization tools and data modeling software. Version control systems and collaborative development platforms. Our More ❯
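The ETL and pipeline tooling named here (Airflow in particular) typically takes the shape of the following minimal sketch; the DAG id, schedule, and task bodies are illustrative assumptions only.

```python
# Minimal two-task Airflow DAG: extract then load, once per day.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pull source data")    # placeholder for a real extract step

def load() -> None:
    print("write to warehouse")  # placeholder for a real load step

with DAG(
    dag_id="example_daily_etl",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task        # run load only after extract succeeds
```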
priorities aimed at maximizing value through data utilization. Knowledge/Experience Expertise in Commercial/Procurement Analytics. Experience in SAP (S/4HANA). Experience with Spark, Databricks, or similar data processing tools. Strong technical proficiency in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL … processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery). Experience with DataOps practices and tools, including CI/CD for data pipelines. More ❯
of experience in data engineering or a related field, with a focus on building scalable data systems and platforms. Strong expertise with modern data tools and frameworks such as Spark, dbt, Airflow or Kafka, Databricks, and cloud-native services (AWS, GCP, or Azure). Deep understanding of data modeling, distributed systems, streaming architectures, and ETL/ELT pipelines. Proficiency More ❯
London, England, United Kingdom Hybrid / WFH Options
Methods
with data modelling, data warehousing, and lakehouse architectures. - Knowledge of DevOps practices, including CI/CD pipelines and version control (e.g., Git). - Understanding of big data technologies (e.g., Spark, Hadoop) is a plus. Seniority level: Mid-Senior level Employment type: Contract Job function: Information Technology More ❯
EMR, Glue). Familiarity with programming languages such as Python or Java. Understanding of data warehousing concepts and data modeling techniques. Experience working with big data technologies (e.g., Hadoop, Spark) is an advantage. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Benefits Enhanced leave - 38 days inclusive of 8 UK Public Holidays Private Health Care including More ❯
London, England, United Kingdom Hybrid / WFH Options
ZILO™
Redshift, EMR, Glue) Familiarity with programming languages such as Python or Java Understanding of data warehousing concepts and data modeling techniques Experience working with big data technologies (e.g., Hadoop, Spark) is an advantage Excellent problem-solving and analytical skills Strong communication and collaboration skills Benefits Enhanced leave - 38 days inclusive of 8 UK Public Holidays Private Health Care including More ❯
e.g., AWS, Azure, Google Cloud). Knowledge of machine learning techniques and frameworks. Experience with version control systems (e.g., Git). Familiarity with big data technologies (e.g., Snowflake, Hadoop, Spark) More ❯