work is largely down to you. It can be entirely Back End. Otherwise, the stack includes Redux Saga, Ag-Grid, Node, TypeScript, gRPC, protobuf, Apache Ignite, Apache Airflow and AWS. As the application suite grows and advances in complexity, there is a decent amount of interaction with the …
Java, Python, Spark, Linux and shell scripting, TDD (JUnit), build tools (Maven/Gradle/Ant). Experience working with process scheduling platforms like Apache Airflow. Open to working in GS proprietary technology like Slang/SECDB. An understanding of compute resources and the ability to interpret performance metrics …
factor app development standards. Experience building modern enterprise applications and deploying to public or private clouds including AWS. Experience in distributed cache systems like Apache Ignite or Redis. Experience in big data platforms and technologies such as Hadoop, Hive, HDFS, Presto/Starburst, Spark, and Kafka. Experience in Spring …
or similar. Experience with data warehouse technologies and relevant data modeling best practices. Experience building data pipelines/ETL (or ELT) and familiarity with design principles (Apache Airflow is a big plus). Experience with at least one major programming language (e.g. Python, Scala, Java, …). Experience with business requirements gathering …
Design and Design Patterns. Deep understanding of software architecture, in particular of distributed systems. Deep understanding of algorithms and data structures. Desirable skills: REST, Apache Beam, Docker, Kubernetes. Experience with valuation adjustments, in particular xVA. Experience with low-latency systems using gRPC and protobuf. Responsibilities: Design, build and implementation …
on data storage, processing, management and capturing. Data analytics and insights. Hands-on experience with ETL tools and processes, and orchestration platforms such as Apache Airflow or AWS Step Functions, with data pipeline tools like Airflow/Databricks. Proven experience with AWS data services, including S3, Glue, Redshift, RDS …
designing and building robust, scalable, distributed data systems and pipelines, using open source and public cloud technologies. Strong experience with data orchestration tools, e.g. Apache Airflow, Dagster. Experience with one or more big data storage and processing technologies, e.g. dbt, Spark (PySpark/Spark SQL), SQL, Athena/Trino. …
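Apache Airflow comes up as the orchestration layer in many of these listings. As a minimal sketch of what an Airflow pipeline definition looks like, assuming Airflow 2.4+; the DAG id, schedule, and task bodies are hypothetical illustrations and come from none of these postings:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Hypothetical extract step; a real task would pull from a source system.
    print("extracting")

def load():
    # Hypothetical load step; a real task would write to a warehouse.
    print("loading")

with DAG(
    dag_id="example_daily_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ spelling of schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load

The >> operator declares the dependency; the scheduler then runs one extract/load pair per daily interval.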
London, England, United Kingdom Hybrid / WFH Options
Darwin Recruitment
Scala or Java. SQL, MySQL, PostgreSQL, SQL Server. Big Data (Spark, Hadoop). Data Warehousing (Snowflake, Redshift). Data Modeling and Schema Design. Data Integration (Apache NiFi, Informatica, Talend). Cloud - AWS, Azure, GCP. Kafka, Airflow, FastAPI. Terraform, Docker, CI/CD. This opportunity will really suit those with a …
Experience working with popular front-end frameworks such as React and Vue.js. Knowledge of database technologies such as MySQL and platforms such as Airtable. Exposure to Apache and/or Nginx. Linux operating system knowledge is a bonus. Familiarity with version control systems such as Git. Strong problem-solving skills and …
Solid experience with Python or Java for data manipulation. Understanding of data modeling, data warehousing, and database optimization. Familiarity with tools like Apache Spark, Airflow, Kafka, or similar technologies. Strong problem-solving skills with a focus on data quality and performance. Experience working in Agile environments and …
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
aPriori
similar role. Strong understanding of data structures, algorithms, and software engineering principles. Expertise in SQL and NoSQL databases, and workflow management tools such as Apache Airflow. Experience with Data Catalogs, Data Governance, and Data Lineage (e.g., Dataplex, DataHub). Proficient in programming languages such as Python, Java, or Scala. …
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
aPriori
leadership role. Strong understanding of data structures, algorithms, and software engineering principles. Expertise in SQL and NoSQL databases, and workflow management tools such as Apache Airflow. Experience with Data Catalogs, Data Governance, and Data Lineage (e.g., Dataplex, DataHub). Proficient in programming languages such as Python, Java, or Scala. Preferred …
with cloud data platforms (Azure, AWS, GCP). Familiarity with machine learning and AI concepts. Experience in data platform migrations and integration tools (e.g., Apache Airflow). Knowledge of Python or other programming languages. Certification in relevant data engineering or cloud platforms. Personal Attributes: High level of accuracy and …
Python. Good knowledge of developing in a Linux environment. Working knowledge of Git version control and GitLab CI/CD pipelines. Experience working with Apache NiFi. Some exposure to front-end elements like JavaScript, TypeScript or React. Some data interrogation with Elasticsearch and Kibana. Exposure to working with Atlassian …
Experience with data orchestration tools (e.g., Airflow, Prefect, Dagster). Knowledge of Postgres, GraphQL, and other data manipulation tools. Familiarity with big data concepts, including Apache Iceberg and data lakes. Experience with geospatial data formats (Parquet/GeoParquet, GeoJSON, Shapefiles). Proficiency in DevOps tools (Git, Docker, Jenkins). Understanding of networking …
Fivetran. Strong knowledge of API integration and JSON transformation. Experience with event streaming (e.g., Azure Service Bus, Kafka, RabbitMQ). Familiarity with orchestration platforms (e.g., Apache Airflow, Dagster, Prefect). Proficient with CI/CD principles and Git-based workflows. Experience with cloud services, particularly Azure, and containerisation tools like Kubernetes …
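Event streaming recurs across these postings. As a minimal sketch of publishing to a Kafka topic, assuming the kafka-python package and a local broker; the broker address, topic name, and payload are hypothetical:

from kafka import KafkaProducer

# Connect to a hypothetical local broker.
producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Publish one message to a hypothetical "orders" topic.
producer.send("orders", b'{"order_id": 1}')
producer.flush()  # block until buffered messages are actually delivered
producer.close()

The send() call is asynchronous; flush() is what guarantees delivery before the script exits.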
Gloucester, Gloucestershire, United Kingdom Hybrid / WFH Options
Nicholas Howard Ltd
in mentoring junior team members. Experience in Oracle/relational databases and/or Mongo. Experience in GitLab CI/CD pipelines. Knowledge of Apache NiFi. Experience in JavaScript/TypeScript and React. Experience of Elasticsearch and Kibana. Knowledge of Hibernate. Proficiency in the use of the Atlassian Suite - Bitbucket, Jira …
in optimising data pipelines and enhancing the client's data capabilities. Technology Stack: Our client's tech stack includes Python, Docker, Dagster, dbt, Fivetran, Apache Iceberg, AWS Athena, S3, Glue, Redshift, ECS, and Looker. Quality assurance is a top priority, with rigorous testing methodologies applied throughout the data engineering …
have experience architecting data pipelines and are self-sufficient in getting the data you need to build and evaluate models, using tools like Dataflow, Apache Beam, or Spark. You have a good understanding of MLOps and the model lifecycle (A/B testing, experimentation, pushing and maintaining models in production …
data from a wide variety of corporate structured and unstructured sources, 3rd-party providers and publicly available data. Leverage cloud data processing frameworks like Apache Spark, Databricks, or similar for handling, processing and analysing complex, high-volume data, both structured and unstructured. Ability to script repetitive data tasks in …
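As a minimal sketch of the kind of high-volume Spark processing this listing describes, assuming PySpark is installed; the input path, column name, and output path are hypothetical:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("high_volume_example").getOrCreate()

# Read hypothetical structured event data from object storage.
df = spark.read.parquet("s3://example-bucket/events/")

# Aggregate high-volume rows down to one count per day.
daily = df.groupBy(F.to_date("event_ts").alias("day")).count()

# Write the summarised result back out.
daily.write.mode("overwrite").parquet("s3://example-bucket/daily_counts/")

The same pattern runs unchanged on Databricks, where getOrCreate() simply picks up the session the platform provides.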
data science and analytics concepts. Expert-level industry experience with data engineering tool usage, including but not limited to: Microsoft Azure, SQL, Python, key Apache tools (Spark, Kafka, etc.), Docker, industry-leading ETL tools, and practical data warehousing. Strong experience with AWS. Strong familiarity with GCP and Snowflake. Experience creating …