Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Yelp USA
to the experimentation and development of new ad products at Yelp. Design, build, and maintain efficient data pipelines using large-scale processing tools like Apache Spark to transform ad-related data. Manage high-volume, real-time data streams using Apache Kafka and process them with frameworks like … Apache Flink. Estimate timelines for projects, feature enhancements, and bug fixes. Work with large-scale data storage solutions, including Apache Cassandra and various data lake systems. Collaborate with cross-functional teams, including engineers, product managers, and data scientists, to understand business requirements and translate them into effective system … a proactive approach to identifying opportunities and recommending scalable, creative solutions. Exposure to some of the following technologies: Python, AWS Redshift, AWS Athena/Apache Presto, Big Data technologies (e.g. S3, Hadoop, Hive, Spark, Flink, Kafka, etc.), NoSQL systems like Cassandra; DBT is nice to have. What you …
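To make the kind of pipeline work described in the Yelp listing above concrete, here is a minimal PySpark Structured Streaming sketch that consumes ad events from a Kafka topic and counts clicks per ad in one-minute windows. The topic name, broker address, and event schema are illustrative assumptions rather than details from the listing, and the job assumes the spark-sql-kafka connector is available on the classpath.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("ad-events-pipeline").getOrCreate()

# Hypothetical event schema for the ad stream.
schema = StructType([
    StructField("ad_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", LongType()),  # epoch seconds
])

# Read the raw stream from Kafka and parse the JSON payload in the value column.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "ad-events")
       .load())

events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("e"))
          .select("e.*"))

# Count clicks per ad in one-minute windows and print to the console.
clicks = (events.filter(F.col("event_type") == "click")
          .withColumn("ts", F.to_timestamp(F.from_unixtime("event_ts")))
          .groupBy(F.window("ts", "1 minute"), "ad_id")
          .count())

query = clicks.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()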
to Have: AWS Certified Data Engineer, or AWS Certified Data Analytics, or AWS Certified Solutions Architect. Experience with big data tools and technologies like Apache Spark, Hadoop, and Kafka. Knowledge of CI/CD pipelines and automation tools such as Jenkins or GitLab CI. About Adastra: For more …
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
and BI solutions. Ensure data accuracy, integrity, and consistency across the data platform. Knowledge, Skills and Experience: Essential: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development …
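As a small illustration of the SQL/PySpark transformation work this listing describes, the sketch below cleans a raw parquet dataset and writes it to a curated layer. The storage paths and column names are hypothetical; on Databricks the same pattern would typically target Delta tables.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate-sales").getOrCreate()

# Read the raw layer (illustrative path).
raw = spark.read.parquet("/mnt/raw/sales")

# Basic data-quality rules: de-duplicate on the business key, standardise the
# date column, and reject rows with negative quantities.
clean = (raw.dropDuplicates(["order_id"])
         .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
         .filter(F.col("quantity") >= 0))

# Write the curated layer, partitioned by date for downstream analytics.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/sales")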
AWS (S3, Glue, Redshift, SageMaker) or other cloud platforms. Familiarity with Docker, Terraform, GitHub Actions, and Vault for managing secrets. Proficiency in SQL, Python, Spark, or Scala to work with data. Experience with databases used in Data Warehousing, Data Lakes, and Lakehouse setups, including both structured and unstructured data.
HBase, Elasticsearch). Build, operate, maintain, and support cloud infrastructure and data services. Automate and optimize data engineering pipelines. Utilize big data technologies (Databricks, Spark). Develop custom security applications, APIs, AI/ML models, and advanced analytic technologies. Experience with threat detection in Azure Sentinel, Databricks, MPP Databases …
knowledge of data modelling, warehousing, and real-time analytics. Proficiency in SQL, Python, Java, or similar programming languages. Familiarity with big data technologies (e.g., Spark, Hadoop) and BI tools (e.g., Power BI, Tableau). Excellent problem-solving and stakeholder engagement skills. Desirable: Experience in research-driven or complex data …
Nottingham, Nottinghamshire, East Midlands, United Kingdom Hybrid / WFH Options
Profile 29
proposal development. Experience in Data & AI architecture and solution design. Experience working for a consultancy or agency. Experience with data engineering tools (SQL, Python, Spark). Hands-on experience with cloud platforms (Azure, AWS, GCP). Hands-on experience with data platforms (Azure Synapse, Databricks, Snowflake). Ability to translate clients' business …
Mansfield, Midlands, United Kingdom Hybrid / WFH Options
Profile 29
proposal development. Experience in Data & AI architecture and solution design. Experience working for a consultancy or agency. Experience with data engineering tools (SQL, Python, Spark). Hands-on experience with cloud platforms (Azure, AWS, GCP). Hands-on experience with data platforms (Azure Synapse, Databricks, Snowflake). Ability to translate clients' business …
Derby, Midlands, United Kingdom Hybrid / WFH Options
Profile 29
proposal development. Experience in Data & AI architecture and solution design. Experience working for a consultancy or agency. Experience with data engineering tools (SQL, Python, Spark). Hands-on experience with cloud platforms (Azure, AWS, GCP). Hands-on experience with data platforms (Azure Synapse, Databricks, Snowflake). Ability to translate clients' business …
Birmingham, England, United Kingdom Hybrid / WFH Options
Talent
Python, R, or SQL. • Experience with machine learning frameworks (e.g., Scikit-learn, TensorFlow, PyTorch). • Proficiency in data manipulation and analysis (e.g., Pandas, NumPy, Spark). • Knowledge of data visualization tools (e.g., Power BI, Tableau, Matplotlib). • Understanding of statistical modelling, hypothesis testing, and A/B testing. • Experience …
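Since this listing mentions hypothesis testing and A/B testing, here is a minimal sketch of a two-sample test on conversion rates using pandas and SciPy; the data is simulated and the column names are illustrative.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)

# Simulated experiment: variant B has a slightly higher true conversion rate.
df = pd.DataFrame({
    "group": ["A"] * 5000 + ["B"] * 5000,
    "converted": np.concatenate([rng.binomial(1, 0.10, 5000),
                                 rng.binomial(1, 0.12, 5000)]),
})

a = df.loc[df["group"] == "A", "converted"]
b = df.loc[df["group"] == "B", "converted"]

# Welch's t-test is a reasonable large-sample approximation for comparing proportions.
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
print(f"conversion A={a.mean():.3f}, B={b.mean():.3f}, p-value={p_value:.4f}")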
Knowledge of cloud platforms (e.g., Azure). Familiarity with containerization is a plus (e.g., Docker, Kubernetes). Knowledge of big data technologies (e.g., Hadoop, Spark). Knowledge of data lifecycle management. Strong problem-solving skills and attention to detail. Ability to work in an agile development environment. Excellent communication …
or development team. Strong hands-on experience and understanding of working in a cloud environment such as AWS. Experience with EMR (Elastic MapReduce) and Spark. Strong experience with CI/CD pipelines with Jenkins. Experience with the following technologies: SpringBoot, Gradle, Terraform, Ansible, GitHub/GitFlow, PCF/OCP …
Nottingham, Nottinghamshire, United Kingdom Hybrid / WFH Options
Experian Group
Significant experience of programming using Scala and Python. Experience of using Terraform to provision and deploy cloud services and components. Experience of developing on Apache Spark. Experience of developing with AWS cloud services including (but not limited to) AWS Glue, S3, Step Functions, Lambdas, EventBridge and SQS. BDD …
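As a rough sketch of the AWS Glue and Spark development mentioned here, the script below shows the shape of a Glue ETL job that reads raw CSV from S3 and writes partitioned parquet. It assumes it runs inside an AWS Glue job (where the awsglue modules are available); the bucket names and columns are illustrative, not taken from the listing.

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve job arguments and build the Spark session.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Read raw CSV events from S3 (illustrative bucket), drop rows missing the key,
# and write them back as parquet partitioned by event date.
df = spark.read.option("header", "true").csv("s3://example-raw-bucket/events/")
df = df.dropna(subset=["event_id"])
(df.write.mode("overwrite")
   .partitionBy("event_date")
   .parquet("s3://example-curated-bucket/events/"))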
Familiarity with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation. Knowledge of data engineering and experience with big data technologies like Hadoop, Spark, or Kafka. Experience with CI/CD pipelines and automation, such as using Jenkins, GitLab, or CircleCI. ABOUT BUSINESS UNIT IBM Consulting is IBM …
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Investigo
advanced visualisations, ML model interpretation, and KPI tracking. Deep knowledge of feature engineering, model deployment, and MLOps best practices. Experience with big data processing (Spark, Hadoop) and cloud-based data science environments. Other: Ability to integrate ML workflows into large-scale data pipelines. Strong experience in data preprocessing, feature …
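To illustrate the preprocessing and feature-engineering work this role calls for, below is a minimal scikit-learn sketch that bundles feature transforms and a model into a single pipeline, a pattern that also keeps deployment and MLOps packaging simpler. The dataset and column names are invented for the example.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Tiny illustrative dataset.
df = pd.DataFrame({
    "age": [34, 52, 29, 41, 47, 23],
    "region": ["north", "south", "north", "east", "south", "east"],
    "churned": [0, 1, 0, 1, 1, 0],
})

# Numeric features are scaled, categorical features are one-hot encoded.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["region"]),
])

# Bundling preprocessing with the estimator keeps training and serving consistent.
model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(df[["age", "region"]], df["churned"])
print(model.predict(df[["age", "region"]]))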
Birmingham, West Midlands, West Midlands (County), United Kingdom Hybrid / WFH Options
Investigo
advanced visualisations, ML model interpretation, and KPI tracking. Deep knowledge of feature engineering, model deployment, and MLOps best practices. Experience with big data processing (Spark, Hadoop) and cloud-based data science environments. Other: Ability to integrate ML workflows into large-scale data pipelines. Strong experience in data preprocessing, feature …
on a contract basis. You will help design, develop, and maintain secure and scalable data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. These roles support our client's team in Worcester (fully onsite) and require active UK DV clearance. Key Responsibilities: Design, develop, and maintain … secure and scalable data pipelines using the Elastic Stack (Elasticsearch, Logstash, Kibana) and Apache NiFi. Implement data ingestion, transformation, and integration processes, ensuring data quality and security. Collaborate with data architects and security teams to ensure compliance with security policies and data governance standards. Manage and monitor large-scale … Engineer in secure or regulated environments. Expertise in the Elastic Stack (Elasticsearch, Logstash, Kibana) for data ingestion, transformation, indexing, and visualization. Strong experience with Apache NiFi for building and managing complex data flows and integration processes. Knowledge of security practices for handling sensitive data, including encryption, anonymization, and access …
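As a small sketch of the ingestion side of such a pipeline, the snippet below bulk-indexes transformed records into Elasticsearch with the official Python client (elasticsearch-py 8.x). The host, credentials, index name, and record fields are illustrative assumptions; in the roles above this step would typically sit behind NiFi or Logstash flows.

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

# Illustrative connection details; real deployments would pull credentials from a secrets store.
es = Elasticsearch("https://localhost:9200",
                   basic_auth=("elastic", "changeme"),
                   verify_certs=True)

def doc_stream(records):
    # Light transformation before indexing: strip fields that should not be stored.
    for rec in records:
        rec.pop("raw_payload", None)
        yield {"_index": "logs", "_source": rec}

records = [
    {"host": "app-01", "level": "INFO", "message": "service started", "raw_payload": "..."},
    {"host": "app-02", "level": "ERROR", "message": "connection refused", "raw_payload": "..."},
]

success, errors = bulk(es, doc_stream(records))
print(f"indexed={success}, errors={errors}")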
Northampton, Northamptonshire, East Midlands, United Kingdom Hybrid / WFH Options
Data Inc. (UK) Ltd
a similar Data Engineering role before sharing their details with us. Keywords for Search: When reviewing CVs, please look for relevant technologies such as: Spark, Hadoop, Big Data, Scala, Spark-Scala, Data Engineer, ETL, AWS (S3, EMR, Glue ETL). Interview Process: The client will conduct an interview …
Leicester, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
IO Associates
data science techniques, including regression, classification, and machine learning. Experience with deep learning or generative AI is a plus but not essential. Proficiency in (Spark) SQL and Python. Experience with PySpark is beneficial but not required. Experience designing and implementing robust testing frameworks. Strong analytical skills with …
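Because this role pairs (Spark) SQL and Python with building testing frameworks, here is a minimal pytest sketch that unit-tests a Spark SQL transformation against a local SparkSession; the function, view, and expected labels are illustrative.

import pytest
from pyspark.sql import SparkSession

def classify_customers(spark):
    # Illustrative (Spark) SQL step: label customers by total spend.
    return spark.sql("""
        SELECT customer_id,
               CASE WHEN total_spend >= 1000 THEN 'high' ELSE 'standard' END AS segment
        FROM customers
    """)

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()

def test_classify_customers(spark):
    spark.createDataFrame(
        [("c1", 1500.0), ("c2", 200.0)],
        ["customer_id", "total_spend"],
    ).createOrReplaceTempView("customers")

    result = {row.customer_id: row.segment for row in classify_customers(spark).collect()}
    assert result == {"c1": "high", "c2": "standard"}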
workflows professional motorsports organization. Experience using simulation tools to optimize vehicle performance. Experience with machine learning libraries. Experience with big data tools (e.g. Hadoop, Spark, SQL, and NoSQL databases). About GM: Our vision is a world with Zero Crashes, Zero Emissions and Zero Congestion, and we embrace …