Manchester, North West, United Kingdom Hybrid / WFH Options
INFUSED SOLUTIONS LIMITED
culture. Key Responsibilities: Design, build, and maintain scalable data solutions to support business objectives. Work with Microsoft Fabric to develop robust data pipelines. Utilise Apache Spark and the Spark API to handle large-scale data processing. Contribute to data strategy, governance, and architecture best practices. Identify and … approaches. Collaborate with cross-functional teams to deliver projects on time. Key Requirements: Hands-on experience with Microsoft Fabric. Strong expertise in Apache Spark and the Spark API. Knowledge of data architecture, engineering best practices, and governance. DP-600 & DP-700 certifications are highly …
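None of the Spark work named above is shown concretely in the posting, so here is a minimal, illustrative PySpark sketch of the kind of large-scale aggregation such a pipeline might perform. The file path, table name, and column names are hypothetical placeholders, not details from the role.

```python
# Illustrative sketch only: source path, schema, and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fabric-pipeline-sketch").getOrCreate()

# Read raw event data (e.g. landed in a lakehouse Files area) into a DataFrame.
events = spark.read.parquet("Files/raw/events")

# Aggregate with the Spark DataFrame API so the work is distributed across executors.
daily_totals = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "customer_id")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Persist the result as a managed table for downstream reporting.
daily_totals.write.mode("overwrite").saveAsTable("daily_customer_totals")
```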
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Smart DCC
you be doing? Design and implement efficient ETL processes for data extraction, transformation, and loading. Build real-time data processing pipelines using platforms like Apache Kafka or cloud-native tools. Optimize batch processing workflows with tools like Apache Spark and Flink for scalable performance. Infrastructure Automation: Implement … Integrate cloud-based data services with data lakes and warehouses. Build and automate CI/CD pipelines with Jenkins, GitLab CI/CD, or Apache Airflow. Develop automated test suites for data pipelines, ensuring data quality and transformation integrity. Monitoring & Performance Optimization: Monitor data pipelines with tools like Prometheus …
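As a rough illustration of the real-time pattern described above (Kafka feeding Spark), the sketch below uses Spark Structured Streaming to consume a Kafka topic. The broker address, topic name, and payload schema are assumptions made for the example, and it requires the spark-sql-kafka connector package on the classpath.

```python
# Hedged sketch: broker, topic, schema, and checkpoint path are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-streaming-sketch").getOrCreate()

# Expected shape of each JSON message on the topic (assumed, not from the posting).
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("created_at", TimestampType()),
])

# Subscribe to a Kafka topic as an unbounded streaming source.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload into typed columns.
orders = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
    .select("o.*")
)

# Write the parsed stream out (console sink here, purely for illustration) with checkpointing.
query = (
    orders.writeStream
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .start()
)
query.awaitTermination()
```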
Thornton-Cleveleys, Lancashire, North West, United Kingdom
Victrex Manufacturing Ltd
architecture, data modelling, ETL/ELT processes and data pipeline development. Competency with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Hadoop, Spark, Kafka etc). Excellent communication and leadership skills, with the ability to engage and influence stakeholders at all levels. Insightful problem-solving skills and …
Utilise Azure Databricks and adhere to code-based deployment practices Essential Skills: Over 3 years of experience with Databricks (including Lakehouse, Delta Lake, PySpark, Spark SQL) Strong proficiency in SQL with 5+ years of experience Extensive experience with Azure Data Factory Proficiency in Python programming Excellent stakeholder/client …
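For context on the Databricks stack this role names (Lakehouse, Delta Lake, PySpark, Spark SQL), the following is a hedged sketch of a typical silver-to-gold step; the mount path, table names, and columns are invented for illustration and are not part of the posting.

```python
# Illustrative sketch: paths, schemas, and table names below are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-lakehouse-sketch").getOrCreate()

# Land cleansed data as a Delta table (the default table format on Databricks).
sales = spark.read.parquet("/mnt/raw/sales")
(
    sales
    .withColumn("load_date", F.current_date())
    .write.format("delta")
    .mode("append")
    .saveAsTable("silver.sales")
)

# Query the same table with Spark SQL to build a gold-layer aggregate.
summary = spark.sql("""
    SELECT region, SUM(net_amount) AS total_sales
    FROM silver.sales
    GROUP BY region
""")
summary.write.format("delta").mode("overwrite").saveAsTable("gold.sales_by_region")
```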
Knutsford Contract Role Job Description: AWS Services: Glue, Lambda, IAM, Service Catalogue, CloudFormation, Lake Formation, SNS, SQS, EventBridge Language & Scripting: Python and Spark ETL: DBT Good to Have: Airflow, Snowflake, Big Data (Hadoop), and Teradata Responsibilities: Serve as the primary point of contact for all AWS-related …
Cassandra, and Redis. In-depth knowledge of ETL/ELT pipelines, data transformation, and storage optimization. Skilled in working with big data frameworks like Spark, Flink, and Druid. Hands-on experience with both bare metal and AWS environments. Strong programming skills in Python, Java, and other relevant languages. Proficiency …
related field. - Proven experience (5+ years) in developing and deploying data engineering pipelines and products - Strong proficiency in Python - Experienced in Hadoop, Kafka or Spark - Experience leading/mentoring junior team members - Strong communication and interpersonal skills, with the ability to effectively communicate complex technical concepts to both technical …
Altrincham, Cheshire, United Kingdom Hybrid / WFH Options
INRIX, Inc
one or more of the following would also be beneficial: Scala; AWS services like Kinesis, RDS, Elasticache, S3, Athena, Data Pipeline, Glue, Lambda, EMR, Spark, EC2, ECS, CloudWatch or Elastic Beanstalk; Jenkins or similar tools would also be a plus. A team player - we highly regard collaboration. Knowledge or …
s degree/PhD in Computer Science, Machine Learning, Applied Statistics, Physics, Engineering or related field Strong mathematical and statistical skills Experience with Python, Spark and SQL Experience implementing and validating a range of machine learning and optimization techniques Effective scientific communication for varied audiences Autonomy and ownership of …
stakeholder relationship management. Ability to analyze large structured and unstructured datasets, including intelligence, fraud, and business data, using tools like Python, Jupyter Notebook, Hadoop, Spark, and REST APIs. Knowledge of descriptive and prescriptive analysis, understanding data distributions, machine learning algorithms, and building KPIs based on defined problems. Strong foundation …
or Development, Content Distribution/CDN, Scripting/Automation, Database Architecture, Cloud Architecture, Cloud Migrations, IP Networking, IT Security, Big Data/Hadoop/Spark, Operations Management, Service Oriented Architecture etc. Experience in a 24x7 operational services or support environment. Experience with AWS Cloud services and/or other …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Qodea
GCP). Proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle. Experience with big data technologies such as Hadoop, Spark, or Hive. Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow. Proficiency in at least one …
Our systems are self-healing, responding gracefully to extreme loads or unexpected input. We use modern languages like Kotlin and Scala; data technologies such as Kafka, Spark, MLflow, Kubeflow, VastStorage, and StarRocks; and agile development practices. Most importantly, we hire great people from around the world and empower them to be successful.
is ideal for someone who enjoys blending technical precision with innovation. You’ll: Build and manage ML pipelines in Databricks using MLflow, Delta Lake, Spark, and Mosaic AI. Train and deploy generative models (LLMs, GANs, VAEs) for NLP, content generation, and synthetic data. Architect scalable solutions using Azure, AWS …
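To make the MLflow responsibility above more concrete, here is a minimal, assumed sketch of experiment tracking with MLflow as it might be run in a Databricks notebook; the dataset path, features, and model choice are placeholders rather than anything specified by the role.

```python
# Hedged sketch: dataset path, label column, and hyperparameters are invented.
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_parquet("/dbfs/tmp/training_data.parquet")
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns=["label"]), df["label"], test_size=0.2, random_state=42
)

with mlflow.start_run(run_name="baseline-rf"):
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics, and the fitted model so runs are reproducible and comparable.
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, artifact_path="model")
```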
ECOM are pleased to be exclusively recruiting for a Senior Data Engineer here in Manchester. You'll join a team where your work reaches millions. This role is within a leading, forward-thinking company, offering a dynamic environment where you …
As a Senior BI Developer, you will be at the forefront of creating analytical solutions and insights into a wide range of business processes throughout the organisation, and playing a core role in our strategic initiatives to enhance data-driven …
to join its innovative team. This role requires hands-on experience with machine learning techniques and proficiency in data manipulation libraries such as Pandas, Spark, and SQL. As a Data Scientist at PwC, you will work on cutting-edge projects, using data to drive strategic insights and business decisions. … (e.g. Sklearn) and deep learning frameworks (such as Pytorch and Tensorflow). Understanding of machine learning techniques. Experience with data manipulation libraries (e.g. Pandas, Spark, SQL). Git for version control. Cloud experience (we use Azure/GCP/AWS). Skills we'd also like to hear about …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Yelp USA
Summary Yelp engineering culture is driven by our values: we're a cooperative team that values individual authenticity and encourages creative solutions to problems. All new engineers deploy working code their first week, and we strive to broaden individual impact