and Azure Machine Learning Studio. Data Storage & Databases: SQL & NoSQL Databases: Experience with databases like PostgreSQL, MySQL, MongoDB, and Cassandra. Big Data Ecosystems: Hadoop, Spark, Hive, and HBase. Data Integration & ETL: Data Pipelining Tools: Apache NiFi, Apache Kafka, and Apache Flink. ETL Tools: AWS Glue, Azure Data Factory, Talend, and Apache Airflow. AI & Machine Learning: Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, and MXNet. AI Services: AWS SageMaker, Azure Machine Learning, Google AI Platform. DevOps & Infrastructure as Code: Containerization: Docker and Kubernetes. Infrastructure Automation: Terraform, Ansible, and AWS CloudFormation. API & Microservices: API Development: RESTful API design
engineering, including infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog). Familiarity with big data technologies like Apache Spark, Hadoop, or similar. ETL/ELT tools and creating common data sets across on-prem (IBM Datastage ETL) and cloud data stores. Leadership
do not sponsor visas. Preferred Skills and Experience: Public sector experience. Knowledge of cloud platforms (IBM Cloud, AWS, Azure). Experience with big data frameworks (Apache Spark, Hadoop). Data warehousing and BI tools (IBM Cognos, Tableau). Additional Details: Seniority level: Mid-Senior level. Employment type: Full-time.
communication skills and demonstrated ability to engage with business stakeholders and product teams. Experience in data modeling, data warehousing (e.g., Snowflake, AWS Glue, EMR, Apache Spark), and working with data pipelines. Leadership experience, whether technical mentorship, team leadership, or managing critical projects. Familiarity with Infrastructure as Code
Boot or .NET Core. Data Platforms: Warehouses: Snowflake, Google BigQuery, or Amazon Redshift. Analytics: Tableau, Power BI, or Looker for client reporting. Big Data: Apache Spark or Hadoop for large-scale processing. AI/ML: TensorFlow or Databricks for predictive analytics. Integration Technologies: API Management: Apigee, AWS API
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
and BI solutions. Ensure data accuracy, integrity, and consistency across the data platform. Knowledge, Skills and Experience: Essential: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development
Ibstock, England, United Kingdom Hybrid / WFH Options
Ibstock Plc
ETL processes, and BI solutions. Ensure data accuracy, integrity, and consistency across the data platform. Knowledge, Skills and Experience: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development
London, England, United Kingdom Hybrid / WFH Options
Circadia Technologies Ltd
Test, etc. Nice to Haves: Experience with Azure services for managing GPT pipelines and multi-cloud infrastructure. Familiarity with big data technologies such as Apache Spark, Kafka, and MSK for large-scale data processing. Experience with Boost libraries (asio, beast). Advanced experience in cost optimization strategies for
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes). Ability to
plus Puppet, SaltStack), Terraform, CloudFormation; Programming Languages and Frameworks: Node.js, React/Material-UI (plus Angular), Python, JavaScript; Big Data Processing and Analysis: e.g., Apache Hadoop (CDH), Apache Spark; Operating Systems: Red Hat Enterprise Linux, CentOS, Debian, or Ubuntu.
or a related field, with a focus on building scalable data systems and platforms. Strong expertise with modern data tools and frameworks such as Spark, dbt, Airflow or Kafka, Databricks, and cloud-native services (AWS, GCP, or Azure). Deep understanding of data modeling, distributed systems, streaming architectures, and
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
JR United Kingdom
or a related field, with a focus on building scalable data systems and platforms. Strong expertise with modern data tools and frameworks such as Spark, dbt, Airflow, Kafka, Databricks, and cloud-native services (AWS, GCP, or Azure). Deep understanding of data modeling, distributed systems, streaming architectures, and ETL
London, England, United Kingdom Hybrid / WFH Options
Methods
and lakehouse architectures. - Knowledge of DevOps practices, including CI/CD pipelines and version control (e.g., Git). - Understanding of big data technologies (e.g., Spark, Hadoop) is a plus. Seniority level: Mid-Senior level. Employment type: Contract. Job function: Information Technology.
programming languages such as Python or Java. Understanding of data warehousing concepts and data modeling techniques. Experience working with big data technologies (e.g., Hadoop, Spark) is an advantage. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Benefits Enhanced leave - 38 days inclusive of 8 UK Public
of experience with data modeling, data warehousing, ETL/ELT pipelines and BI tools. - Experience with cloud-based big data technology stacks (e.g., Hadoop, Spark, Redshift, S3, Glue, SageMaker, etc.) - Knowledge of data management and data storage principles. - Experience in at least one modern object-oriented programming language (Python
London, England, United Kingdom Hybrid / WFH Options
Rein-Ton
solutions for quality assurance. Qualifications: Proven experience as a Data Engineer, especially with data pipelines. Proficiency in Python, Java, or Scala; experience with Hadoop, Spark, Kafka. Experience with Databricks, Azure AI Services, and cloud platforms (AWS, Google Cloud, Azure). Strong SQL and NoSQL database skills. Problem-solving skills
Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, Datastage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross and multi-platform experience. Team building and leading. You must be: Willing to work on client sites, potentially for
S3, Glue, Redshift, SageMaker) or other cloud platforms Familiarity with Docker, Terraform, GitHub Actions, and Vault for managing secrets Coding skills in SQL, Python, Spark, or Scala Experience with databases used in Data Warehousing, Data Lakes, and Lakehouse setups, and working with both structured and unstructured data Experience in
modeling, and ETL/ELT processes. Proficiency in programming languages such as Python, Java, or Scala. Experience with big data technologies such as Hadoop, Spark, and Kafka. Familiarity with cloud platforms like AWS, Azure, or Google Cloud. Excellent problem-solving skills and the ability to think strategically. Strong communication
Learning (ML): • Deep understanding of machine learning principles, algorithms, and techniques. • Experience with popular ML frameworks and libraries like TensorFlow, PyTorch, scikit-learn, or Apache Spark. • Proficiency in data preprocessing, feature engineering, and model evaluation. • Knowledge of ML model deployment and serving strategies, including containerization and microservices. • Familiarity with
and R, and ML libraries (TensorFlow, PyTorch, scikit-learn). Hands-on experience with cloud platforms (Azure ML) and big data ecosystems (e.g., Hadoop, Spark). Strong understanding of CI/CD pipelines, DevOps practices, and infrastructure automation. Familiarity with database systems (SQL Server, Snowflake) and API integrations. Strong
London, England, United Kingdom Hybrid / WFH Options
Morgan Advanced Materials
mixture of Enterprise and SME environments. Proficiency in Python, SQL, Azure Data Factory, Azure Synapse Analytics, Azure Data Lakes, and big data technologies like Apache Spark. Experience with DevOps practices and CI/CD pipelines in an Azure environment is a plus. Certification in Azure (e.g., Microsoft Certified: Azure
Familiarity with SQL and database management systems (e.g., PostgreSQL, MySQL). Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data tools (e.g., Spark, Hadoop) is a plus. Prior experience in financial data analysis is highly preferred. Understanding of financial datasets, metrics, and industry trends. Preferred Qualifications: Experience with