across multiple sources. Integrate services via RESTful APIs and manage structured/unstructured data formats. Data Visualisation & Automation Build interactive dashboards in BI tools such as Power BI or Apache Superset. Ensure dashboard performance and real-time accessibility for business users. Business Reporting & Data Insights Conduct data validation and quality assurance to ensure reliable insights. Translate complex data into …
techniques You might also have: Familiarity with data visualization tools and libraries (e.g., Power BI) Background in database administration or performance tuning Familiarity with data orchestration tools, such as Apache Airflow Previous exposure to big data technologies (e.g., Hadoop, Spark) for large data processing Experience with ServiceNow integration …
warehouse knowledge Redshift and Snowflake preferred Working with IaC - Terraform and CloudFormation Working understanding of scripting languages including Python and Shell Experience working with streaming technologies inc. Kafka, Apache Flink Experience working with ETL environments Experience working with the Confluent Cloud platform Disclaimer: This vacancy is being advertised by either Advanced Resource Managers Limited, Advanced Resource Managers …
Oozie) Experience deploying and operating Cloudera workloads on AWS (EC2, S3, IAM, CloudWatch) Strong proficiency in Scala, Java and HiveQL; Python or Bash scripting experience preferred Strong proficiency in Apache Spark & Scala programming for data processing and transformation. Hands-on experience with the Cloudera distribution of Hadoop. Hands-on experience implementing business-rules processing using Drools. Able to work with …
Advanced-level Python for data applications and high proficiency in SQL (query tuning, complex joins) Hands-on experience designing and deploying ETL/ELT pipelines using Google Cloud Dataflow (Apache Beam) or similar tools Proficiency in data architecture, data modeling, and scalable storage design Solid engineering practices: Git and CI/CD for data systems Highly Desirable Skills GCP …
regarding technical constraints, trade-offs, and delivery timelines Who you are Technical: Strong hands-on experience and expertise in the Python data science ecosystem. Experience with distributed computing frameworks like Apache Spark (PySpark) or similar is preferred Data Science/ML: Strong experience in exploratory data analysis, machine learning algorithms, statistical inference, and experimental design, with desirable experience in optimisation …
techniques You might also have: Familiarity with data visualization tools and libraries (e.g., Power BI) Background in database administration or performance tuning Familiarity with data orchestration tools, such as Apache Airflow Previous exposure to big data technologies (e.g., Hadoop, Spark) for large data processing Why join Genpact? Lead AI-first transformation – Build and scale AI solutions that redefine industries …
of application deployment tools such as Ansible Experience with DevSecOps and CI/CD pipelines using tools such as Jenkins or GitLab Runner Experience with JUnit/Mockito, Apache HTTPD Server, and Spring Boot Extensive knowledge of Unix/Linux Environments Expertise in Test Automation using Selenium, TestComplete, Cucumber, Apache JMeter or similar tools Familiarity with Cloud Computing …
tools, particularly Terraform. Experience with network design, administration, and troubleshooting. Knowledge of programming languages (e.g., JavaScript, Node.js, PHP). Experience with version control systems, ideally Git. Web server configuration (Apache, Nginx). Database management (MySQL, MongoDB), including high availability and backup solutions. Hands-on experience managing cloud providers, with significant experience in AWS and Google Cloud Platform (GCP). …
Git, Terraform, GitLab, Jira, etc. Solid experience developing in a Linux environment Experience with complex data structures and database and analytics technologies such as Redis, Postgres, MySQL, DynamoDB and Apache Druid Ideally have experience of: C/C++, systemd, monit, cgroups/runc/libcontainer or docker Experience in adapting and improving engineering best practices (e.g. test-driven development …
REST APIs and integration techniques Familiarity with data visualization tools and libraries (e.g., Power BI) Background in database administration or performance tuning Familiarity with data orchestration tools, such as Apache Airflow Previous exposure to big data technologies (e.g., Hadoop, Spark) for large data processing …
or another language such as Python Good knowledge of developing in a Linux environment Working knowledge of Git version control and GitLab CI/CD pipelines Experience working with Apache NiFi Some exposure to front-end elements like JavaScript, TypeScript or React Some data interrogation with ElasticSearch and Kibana Exposure to working with Atlassian products Looking for a role …
Winchester, Hampshire, England, United Kingdom Hybrid / WFH Options
Ada Meher
days a week – based on business need. To Be Considered: Demonstrable expertise and experience working on large-scale Data Engineering projects Strong experience in Python/PySpark, Databricks & Apache Spark Hands-on experience with both batch & streaming pipelines Strong experience in AWS and associated tooling (e.g. S3, Glue, Redshift, Lambda, Terraform) Experience designing Data Engineering platforms from scratch Alongside …
data integrity and compliance throughout. Key Requirements Active SC Clearance (used within the last 12 months) Proven experience with Databricks (including notebooks, clusters, and job orchestration) Strong knowledge of Apache Spark, PySpark, and distributed data processing Experience building and optimising ETL pipelines and data workflows Familiarity with Delta Lake, SQL, and data modelling best practices Ability to work with …
experience in a commercial environment, working on AI/ML applications Multi-cloud exposure (Azure/AWS/GCP). Some of the following - PyTorch, GPT/BERT, RAG, Apache Airflow, Power Automate, Azure Logic Apps, RPA/Zapier, HuggingFace, LangChain... Background in Data Science or Software Engineering The values and ethos of this business Innovation with real purpose …
and a solid understanding of CI/CD pipelines, DevSecOps workflows, and automated policy enforcement tools (e.g., Snyk, GitHub Actions, Jenkins, Sonatype, etc.). Knowledge of software licensing (MIT, Apache, GPL, etc.) and IP risk management. Open-source license-risk engineering and experience building & enforcing technology standards, risk frameworks, & software asset policies. Control the adoption, contribution, and distribution of …
Dev, QA, Production, and DR environments Contribute to SDLC automation using tools such as JIRA, Bamboo, and Ansible Qualifications & Experience Strong proficiency in Java; experience with Spring, Hibernate, and Apache Ignite is a plus Skilled in writing complex SQL queries Familiarity with Fidessa Equities platforms (ETP/CTAC) is advantageous Experience with Unix/Linux command-line and basic …