experience working as a Software Engineer on large software applications. Proficient in many of the following technologies – Python, REST, PyTorch, TensorFlow, Docker, FastAPI, Selenium, React, TypeScript, Redux, GraphQL, Kafka, Apache Spark. Experience working with one or more of the following database systems – DynamoDB, DocumentDB, MongoDB. Demonstrated expertise in unit testing and tools – JUnit, Mockito, PyTest, Selenium. Strong working knowledge …
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
Requirements: * 3+ Years data engineering experience * Snowflake experience * Proficiency across an AWS tech stack * DevOps experience building and deploying using Terraform. Nice to Have: * DBT * Data Modelling * Data Vault * Apache Airflow. Benefits: * Up to 10% Bonus * Up to 14% Pensions Contribution * 29 Days Annual Leave + Bank Holidays * Free Company Shares. Interviews ongoing – don't miss your chance to …
City of London, England, United Kingdom Hybrid / WFH Options
Jefferson Frank
the business's data arm. Requirements: * 3+ Years data engineering experience * Snowflake experience * Proficiency across an AWS tech stack * DBT Expertise * Terraform Experience. Nice to Have: * Data Modelling * Data Vault * Apache Airflow. Benefits: * Up to 10% Bonus * Up to 14% Pensions Contribution * 29 Days Annual Leave + Bank Holidays * Free Company Shares. Interviews ongoing – don't miss your chance to …
London, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
Requirements: * 3+ Years data engineering experience * Snowflake experience * Proficiency across an AWS tech stack * DevOps experience building and deploying using Terraform. Nice to Have: * DBT * Data Modelling * Data Vault * Apache Airflow. Benefits: * Up to 10% Bonus * Up to 14% Pensions Contribution * 29 Days Annual Leave + Bank Holidays * Free Company Shares. Interviews ongoing – don't miss your chance to …
Science or a related field. Experience working on and shipping live service games. Experience working on Spring Boot projects. Experience deploying software/services on Kubernetes. Experience working with Apache Spark and Iceberg.
Gloucester, Gloucestershire, South West, United Kingdom Hybrid / WFH Options
Omega Resource Group
GitLab). Contributing across the software development lifecycle, from requirements to deployment. Tech Stack Includes: Java, Python, Linux, Git, JUnit, GitLab CI/CD, Oracle, MongoDB, JavaScript/TypeScript, React, Apache NiFi, Elasticsearch, Kibana, AWS, Hibernate, Atlassian Suite. What's on Offer: Hybrid working and flexible schedules (4xFlex), ongoing training and career development, exciting projects within the UK's secure …
the next generation of personalized generative voice products at scale. What You'll Do: Build large-scale speech and audio data pipelines using platforms and frameworks like Google Cloud Platform and Apache Beam. Work on machine learning projects powering new generative AI experiences and helping to build state-of-the-art text-to-speech models. Learn and contribute to the team …
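For illustration, a minimal Apache Beam sketch of the kind of audio-metadata pipeline described above; the input path and the "duration_sec"/"speaker_id" fields are hypothetical, and a production job would typically run on Dataflow rather than the local runner.

```python
# Minimal Apache Beam sketch: read newline-delimited JSON metadata for audio
# clips, keep only sufficiently long clips, and count clips per speaker.
# Input path and field names are hypothetical.
import json

import apache_beam as beam


def run():
    with beam.Pipeline() as p:
        (
            p
            | "ReadMetadata" >> beam.io.ReadFromText("audio_metadata.jsonl")
            | "ParseJson" >> beam.Map(json.loads)
            | "KeepLongClips" >> beam.Filter(lambda r: r.get("duration_sec", 0) >= 1.0)
            | "KeyBySpeaker" >> beam.Map(lambda r: (r["speaker_id"], 1))
            | "CountPerSpeaker" >> beam.CombinePerKey(sum)
            | "Format" >> beam.Map(lambda kv: f"{kv[0]}\t{kv[1]}")
            | "Write" >> beam.io.WriteToText("clips_per_speaker")
        )


if __name__ == "__main__":
    run()
```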
pipelines. Hands-on experience with Agile (Scrum) methodologies. Database experience with Oracle and/or MongoDB. Experience using the Atlassian suite: Bitbucket, Jira, and Confluence. Desirable Skills: Knowledge of Apache NiFi. Front-end development with React (JavaScript/TypeScript). Working knowledge of Elasticsearch and Kibana. Experience developing for cloud environments, particularly AWS (EC2, EKS, Fargate, IAM, S3, Lambda). Understanding …
Snowflake, Elastic, Redshift, Databricks, Splunk, etc.). Strong and demonstrable experience writing regular expressions and/or JSON parsing, etc. Strong experience in log processing (Cribl, Splunk, Elastic, Apache NiFi, etc.). Expertise in the production of dashboard/insight delivery. Be able to demonstrate a reasonable level of security awareness (an understanding of basic security best practices …
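As a rough illustration of the regular-expression and JSON-parsing skills mentioned above, a small Python sketch that extracts fields from a syslog-style line and from a JSON-formatted application log line; both log formats are invented for the example.

```python
# Illustrative only: extract fields from a syslog-style line with a regular
# expression, and parse a JSON-formatted log line. Formats are invented.
import json
import re

SYSLOG_PATTERN = re.compile(
    r"^(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<proc>[\w\-/]+)(\[\d+\])?: (?P<msg>.*)$"
)


def parse_syslog(line: str):
    match = SYSLOG_PATTERN.match(line)
    return match.groupdict() if match else None


def parse_json_log(line: str):
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return None


print(parse_syslog("Jan  5 10:15:32 web01 sshd[2212]: Accepted publickey for deploy"))
print(parse_json_log('{"level": "ERROR", "service": "auth", "msg": "token expired"}'))
```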
Cheltenham, England, United Kingdom Hybrid / WFH Options
Babcock
MongoDB. Building and testing with frameworks like JUnit or Jest and using Git for version control. Experience with CI/CD tools like GitLab, Jenkins, Concourse, or even Apache NiFi, Elasticsearch, or Kibana. Agile development experience (SCRUM, Kanban). Building software for the cloud (we're especially keen if you know AWS, or have worked with microservices …
skills. Proficiency in multiple programming languages. Technologies: Scala, Java, Python, Spark, Linux, shell scripting, TDD (JUnit), build tools (Maven/Gradle/Ant). Experience with process scheduling platforms like Apache Airflow. Willingness to work with proprietary technologies like Slang/SECDB. Understanding of compute resources and performance metrics. Knowledge of distributed computing frameworks like Dask and cloud processing. Experience …
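By way of example, a minimal Apache Airflow DAG (Airflow 2.x style) showing the kind of scheduled dependency chain such platforms manage; the DAG id, task names, and callables are placeholders.

```python
# Minimal Airflow DAG sketch: two dependent daily tasks. Task names and the
# work they do are placeholders for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from the upstream source")


def load():
    print("write transformed data to the warehouse")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```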
as well as partner and customer organisations. Requirements: Experience with ETL pipeline solutions for the ingestion, transformation, and serving of data utilising technologies such as AWS Step Functions or Apache Airflow. Good understanding of data modelling, algorithms, and data transformation techniques to work with data platforms. Good knowledge of common databases (RDBMS and NoSQL), Graph Databases (such as GraphDB …
London, England, United Kingdom Hybrid / WFH Options
Artefact
leading data projects in a fast-paced environment. Key Responsibilities: Design, build, and maintain scalable and robust data pipelines using SQL, Python, Databricks, Snowflake, Azure Data Factory, AWS Glue, Apache Airflow and PySpark. Lead the integration of complex data systems and ensure consistency and accuracy of data across multiple platforms. Implement continuous integration and continuous deployment (CI/CD …
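A small PySpark sketch of the sort of batch transformation such pipelines run: read raw events, deduplicate, aggregate, and write curated output. The bucket paths and column names ("event_id", "customer_id", "amount", "event_ts") are hypothetical.

```python
# PySpark batch sketch: deduplicate raw sales events, roll them up per
# customer per day, and write a partitioned curated table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_sales_rollup").getOrCreate()

raw = spark.read.parquet("s3://raw-zone/sales/")

daily_totals = (
    raw.dropDuplicates(["event_id"])
       .groupBy("customer_id", F.to_date("event_ts").alias("event_date"))
       .agg(F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://curated-zone/daily_sales/"
)
```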
of our core tech stack and the ability to oversee end-to-end solutions while leading projects are essential. Required Tools and Technologies: Microsoft Azure and cloud computing concepts. Apache Spark – Databricks, Microsoft Fabric, or other Spark engines. Python. SQL – complex high-performance queries. Azure Data Factory or other orchestration tools. Azure Data Lake storage and Delta Lake. Unity …
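As a sketch of working with Delta Lake from Spark, a hypothetical upsert (MERGE) using the delta-spark API, as might run on Databricks; the storage paths and the "customer_id" join key are invented.

```python
# Sketch of an idempotent upsert (MERGE) into a Delta Lake table. On
# Databricks the SparkSession comes preconfigured for Delta; elsewhere the
# delta-spark package must be set up. Paths and key column are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("customer_upsert").getOrCreate()

updates = spark.read.parquet("abfss://landing@account.dfs.core.windows.net/customers/")
target = DeltaTable.forPath(spark, "abfss://lake@account.dfs.core.windows.net/silver/customers")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```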
Familiarity with scientific data standards, ontologies, and best practices for metadata capture. Understanding of data science workflows in computational chemistry, bioinformatics, or AI/ML-driven research. Orchestration & ETL: Apache Airflow, Prefect. Scientific Libraries (Preferred): RDKit, Open Babel, CDK. Seniority level: Mid-Senior level. Employment type: Full-time. Job function: Engineering, Research, and …
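For a flavour of the scientific-library side, a tiny RDKit sketch computing basic descriptors from a SMILES string; the choice of molecule and descriptors is purely illustrative.

```python
# Illustrative RDKit usage: parse a SMILES string and compute simple
# molecular descriptors that might be captured as metadata.
from rdkit import Chem
from rdkit.Chem import Descriptors

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, as an example

print("molecular weight:", round(Descriptors.MolWt(mol), 2))
print("logP estimate:", round(Descriptors.MolLogP(mol), 2))
print("H-bond donors:", Descriptors.NumHDonors(mol))
```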
London, England, United Kingdom Hybrid / WFH Options
DATAPAO
industries) on some of our most complex projects - individually or by leading small delivery teams. Our projects are fast-paced, typically 2 to 4 months long, and primarily use Apache Spark/Databricks on AWS/Azure. You will manage customer relationships either alone or with a Project Manager, and support our pre-sales, mentoring, and hiring efforts. What …
driven performance analysis and optimizations. Strong communication skills and the ability to work in a team. Strong analytical and problem-solving skills. PREFERRED QUALIFICATIONS: Experience with Kubernetes deployment architectures. Apache NiFi experience. Experience building trading controls within an investment bank. ABOUT GOLDMAN SACHS: At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and …
Science, Computer Science, or a related field. 5+ years of experience in data engineering and data quality. Strong proficiency in Python/Java, SQL, and data processing frameworks including Apache Spark. Knowledge of machine learning and its data requirements. Attention to detail and a strong commitment to data integrity. Excellent problem-solving skills and ability to work in a …
What would be advantageous: Strong understanding of financial markets. Experience working with hierarchical reference data models. Proven expertise in handling high-throughput, real-time market data streams. Familiarity with distributed computing frameworks such as Apache Spark. Operational experience supporting real-time systems. Equal Opportunity Workplace: We are proud to be an equal opportunity workplace. We do not discriminate based upon race, religion, color, national …
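As an illustration of handling a real-time market data stream with Spark, a Structured Streaming sketch that consumes ticks from Kafka and computes short windowed average prices per symbol; the broker address, topic name, and message schema are hypothetical.

```python
# Spark Structured Streaming sketch: read JSON ticks from Kafka and compute
# 10-second average prices per symbol. Requires the spark-sql-kafka
# connector package; broker, topic, and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("market_data_rollup").getOrCreate()

tick_schema = StructType([
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
    StructField("event_time", TimestampType()),
])

ticks = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "market-ticks")
    .load()
    .select(F.from_json(F.col("value").cast("string"), tick_schema).alias("tick"))
    .select("tick.*")
)

windowed_prices = (
    ticks.withWatermark("event_time", "30 seconds")
    .groupBy(F.window("event_time", "10 seconds"), "symbol")
    .agg(F.avg("price").alias("avg_price"))
)

query = windowed_prices.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```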
Proven desire to expand your cloud/platform engineering capabilities. Experience working with Big Data. Experience of data storage technologies: Delta Lake, Iceberg, Hudi. Proven knowledge and understanding of Apache Spark, Databricks or Hadoop. Ability to take business requirements and translate these into tech specifications. Competence in evaluating and selecting development tools and technologies. Sound like the role you …
London, England, United Kingdom Hybrid / WFH Options
Solirius Reply
have framework experience within either Flask, Tornado or Django, Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies, such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands-on coding experience, such as Python …
Python, or C# with Spring Boot or .NET Core. Data Platforms: Warehouses: Snowflake, Google BigQuery, or Amazon Redshift. Analytics: Tableau, Power BI, or Looker for client reporting. Big Data: Apache Spark or Hadoop for large-scale processing. AI/ML: TensorFlow or Databricks for predictive analytics. Integration Technologies: API Management: Apigee, AWS API Gateway, or MuleSoft. Middleware: Red Hat …
in Python with libraries like TensorFlow, PyTorch, or Scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL, and data modeling. Familiarity with cloud platforms (AWS, Azure, GCP) for deploying ML and data solutions. Knowledge of MLOps practices …
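A brief scikit-learn/pandas sketch of the kind of model pipeline such a role involves: categorical encoding plus scaling feeding a simple classifier. The feature names and toy data are invented.

```python
# Illustrative scikit-learn pipeline over a small pandas DataFrame:
# one-hot encode a categorical column, scale a numeric column, fit a
# logistic regression, and report held-out accuracy.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "plan": ["basic", "pro", "basic", "pro", "basic", "pro"] * 10,
    "monthly_usage": [3.1, 9.4, 2.2, 11.0, 4.5, 8.7] * 10,
    "churned": [0, 0, 1, 0, 1, 0] * 10,
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["plan", "monthly_usage"]], df["churned"], test_size=0.25, random_state=0
)

model = Pipeline([
    ("prep", ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan"]),
        ("num", StandardScaler(), ["monthly_usage"]),
    ])),
    ("clf", LogisticRegression()),
])

model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```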
MySQL, PostgreSQL, or Oracle. Experience with big data technologies such as Hadoop, Spark, or Hive. Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow. Proficiency in Python and at least one other programming language such as Java or Scala. Willingness to mentor more junior members of the team. Strong analytical and problem …