Deep understanding of software architecture, object-oriented design principles, and data structures. Extensive experience in developing microservices using Java and Python. Experience in distributed computing frameworks such as Hive/Hadoop and Apache Spark. Good experience in test-driven development and automating test cases using Java/Python. Experience in SQL/NoSQL (Oracle, Cassandra) database design. Demonstrated ability to be proactive … HR related applications. Experience with the following cloud services: AWS Elastic Beanstalk, EC2, S3, CloudFront, RDS, DynamoDB, VPC, ElastiCache, Lambda. Working experience with Terraform. Experience in creating workflows for Apache Airflow. About Roku: Roku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetize large audiences, and provide
/ec2), infrastructure automation (Terraform), CI/CD platform (GitHub Actions & Admin), and password/secret management (HashiCorp Vault). Strong data-related programming skills: SQL/Python/Spark/Scala. Database technologies in relation to Data Warehousing/Data Lake/Lakehouse patterns, and relevant experience handling structured and non-structured data. Machine Learning - Experience
Leeds, England, United Kingdom Hybrid / WFH Options
KPMG UK
having resided in the UK for at least the past 5 years and being a UK national or dual UK national. Experience in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, MongoDB etc. Experience with the design, build and maintenance of data pipelines and infrastructure
PhD degree in Computer Science, Engineering, Mathematics, Physics or a related field. Hands-on experience with LLMs, RAG, LangChain, or LlamaIndex. Experience with big data technologies such as Hadoop, Spark, or Kafka. The estimated total compensation range for this position is $75,000 - $90,000 (USD base plus bonus). Actual compensation for the position is based on a
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
BlackRock, Inc
experience we are looking for includes: DevOps automation, idempotent deployment testing, and continuous delivery pipelines. Networking and security protocols, load balancers, API gateways. ETL tooling and workflow engines (e.g., Spark, Airflow, Dagster, Flyte). Accelerated compute libraries and hardware (e.g., PyTorch, NVIDIA GPUs). Data modeling and strategies for cleaning and validating data at scale. Performance tuning on RDBMS or Big
DV (MOD) Cleared Data Engineer - Elastic Stack & Apache NiFi Location: Bristol | Contract Type: £430.00 pd (Outside IR35) | Working Pattern: Hybrid (3 - 4 days on-site) Are you a contract Data Engineer with a knack for designing secure, high-performance data solutions? We're on the lookout for a technical expert in the Elastic Stack and Apache NiFi to … impact - ideal for professionals with a strong track record in regulated sectors. What You'll Be Doing: Designing and deploying scalable, secure data pipelines using Elasticsearch, Logstash, Kibana, and Apache NiFi. Handling real-time data ingestion and transformation with an emphasis on integrity and availability. Collaborating with architects and cybersecurity stakeholders to align with governance and compliance needs. Monitoring … Minimum 3 years' experience as a Data Engineer in sensitive or regulated industries. Proficiency in the full Elastic Stack for data processing, analytics, and visualisation. Hands-on expertise with Apache NiFi in designing sophisticated data workflows. Solid scripting capabilities using Python, Bash, or similar. Familiarity with best practices in data protection (encryption, anonymisation, access control). Experience managing large-scale
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Noir
They're Looking For: Experience in a data-focused role, with a strong passion for working with data and delivering value to stakeholders. Strong proficiency in SQL, Python, and Apache Spark, with hands-on experience using these technologies in a production environment. Experience with Databricks and Microsoft Azure is highly desirable. Financial Services experience is a plus but
Bath, England, United Kingdom Hybrid / WFH Options
Noir
They’re Looking For: * Experience in a data-focused role, with a strong passion for working with data and delivering value to stakeholders. * Strong proficiency in SQL, Python, and Apache Spark, with hands-on experience using these technologies in a production environment. * Experience with Databricks and Microsoft Azure is highly desirable. * Financial Services experience is a plus but
Are A degree in computer science, engineering, mathematics or a related technical field. Experience with object-oriented programming preferred. General familiarity with some of the technologies we use: Python, Apache Spark/PySpark, Java/Spring, Amazon Web Services, SQL, relational databases. Understanding of data structures and algorithms. Interest in data modeling, visualisation, and ETL pipelines. Knowledge of
systems such as: Ansible, Terraform, Jenkins, Bamboo, Concourse etc. Monitoring utilising products such as: Prometheus, Grafana, ELK, Filebeat etc. Observability - SRE. Big Data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop ecosystem. Edge technologies e.g. NGINX, HAProxy etc. Excellent knowledge of YAML or similar languages. The following Technical Skills & Experience would be desirable for Data
of data points per day and create highly available data processing and REST services to distribute data to different consumers across PWM. Technologies used include: Data Technologies: Kafka, Spark, Hadoop, Presto, Alloy - a data management and data governance platform. Programming Languages: Java, Scala, scripting. Database Technologies: MongoDB, ElasticSearch, Cassandra, MemSQL, Sybase IQ/ASE. Micro Service Technologies: REST … new tech stacks. SKILLS AND EXPERIENCE WE ARE LOOKING FOR: Computer Science, Mathematics, Engineering or other related degree at bachelor's level. Java, Scala, scripting, REST, Spring Boot, Jersey. Kafka, Spark, Hadoop, MongoDB, ElasticSearch, MemSQL, Sybase IQ/ASE. 3+ years of hands-on experience with relevant technologies. ABOUT GOLDMAN SACHS: At Goldman Sachs, we commit our people, capital and
of experience in data engineering or a related field, with a focus on building scalable data systems and platforms. Strong expertise in modern data tools and frameworks such as Spark, dbt, Airflow, Kafka, Databricks, and cloud-native services (AWS, GCP, or Azure). Deep understanding of data modeling, distributed systems, ETL/ELT pipelines, and streaming architectures. Proficiency in SQL
diagnose issues. Requirements: Ideally a strong degree in computer science or a relevant area. Excellent coding skills, specifically in Python. Commercial technical experience with tools such as Spark, Databricks, Airflow, Docker etc. is highly desirable. Commercial containerisation and infrastructure-as-code experience. Previous work in a CI/CD environment. AWS is the preferred cloud platform - Azure and/or GCP
cloud-native environments. · Familiarity with containerization (Docker, Kubernetes) and DevOps pipelines. · Exposure to security operations center (SOC) tools and SIEM platforms. · Experience working with big data platforms such as Spark, Hadoop, or Elastic Stack.
Greater Manchester, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
need to apply for the role: Solid hands-on experience with Azure data tools, including Data Factory, Data Lake, Synapse Analytics, and Azure SQL. Strong proficiency with Databricks, including Spark, Delta Lake, and notebooks. Skilled in Python and SQL for data transformation and processing. Experience with Git and modern CI/CD workflows. Strong analytical mindset and effective communication
Peterborough, England, United Kingdom Hybrid / WFH Options
Compare the Market
practices. Familiarity with Agile collaboration tools such as Jira and Confluence. Comfort working with distributed teams. Effective team communication skills. Knowledge of Data Engineering practices and technologies, especially Kafka, Spark, and AWS in Java or Python. Experience with BDD, TDD, and CI/CD practices. Knowledge of GoCD, Jenkins, and GitHub is preferred. Strong SQL skills and familiarity with
Manchester, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
need to apply for the role: Solid hands-on experience with Azure data tools, including Data Factory, Data Lake, Synapse Analytics, and Azure SQL. Strong proficiency with Databricks, including Spark, Delta Lake, and notebooks. Skilled in Python and SQL for data transformation and processing. Experience with Git and modern CI/CD workflows. Strong analytical mindset and effective communication
containerisation (Docker). Strong knowledge of cloud platforms like Azure, AWS or GCP for deploying and managing ML models. Familiarity with data engineering tools and practices, e.g., distributed computing (e.g., Spark, Ray), cloud-based data platforms (e.g., Databricks) and database management (e.g., SQL). Strong communication skills, with the ability to present technical concepts to technical and non-technical stakeholders. Experience in developing
no oversight. Experience performing data analytics on AWS platforms. Experience in writing efficient SQL queries and implementing complex ETL transformations on big data platforms. Experience with Big Data technologies (Spark, Impala, Hive, Redshift, Kafka, etc.). Experience in data quality testing; adept at writing test cases and scripts, presenting and resolving data issues. Experience with Databricks, Snowflake, Iceberg are required
West Bromwich, England, United Kingdom Hybrid / WFH Options
ADLIB Recruitment | B Corp™
across both on-premise and cloud-based data systems. Clear communicator, able to translate complex data concepts to cross-functional teams. Bonus points for experience with: Big data tools (Spark, Hadoop), ETL workflows, or high-throughput data streams. Genomic data formats and tools. Cold and hot storage management, ZFS/RAID systems, or tape storage. AI/LLM tools
Leeds, England, United Kingdom Hybrid / WFH Options
KPMG United Kingdom
resided in the UK for at least the past 5 years and being a UK national or dual UK national. Extensive experience in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, MongoDB etc. Experience with the design, build and maintenance of data pipelines and infrastructure.
Easter Howgate, Midlothian, United Kingdom Hybrid / WFH Options
Leonardo UK Ltd
DataOps methodologies and tools, including experience with CI/CD pipelines, containerisation, and workflow orchestration. Familiar with ETL/ELT frameworks, and experienced with Big Data processing tools (e.g. Spark, Airflow, Hive, etc.). Knowledge of programming languages (e.g. Java, Python, SQL). Hands-on experience with SQL/NoSQL database design. Degree in STEM, or similar field; a Master's
projects. Requirements: Expertise in development languages including but not limited to: Java/J2EE, Scala/Python, XML, JSON, SQL, Spring/Spring Boot. Expertise with RESTful web services, Spark, Kafka, etc. Experience with relational SQL, NoSQL databases and cloud technologies such as AWS/Azure/Google Cloud Platform (GCP), Kubernetes, and Docker. Extensive knowledge of object-oriented
Ops. Ideally, you’ll also be technically skilled in most or all of the below: Expert knowledge of Python and SQL, including the following libraries: NumPy, pandas, PySpark and Spark SQL. Expert knowledge of ML Ops frameworks in the following categories: experiment tracking and model metadata management (e.g. MLflow), orchestration of ML workflows (e.g. Metaflow), data and pipeline versioning