Strong understanding of AWS ecosystems like Lambdas, Step Functions and ECS services. Experience with Dremio is a nice-to-have. Experience with data stack technologies such as Apache Iceberg & Spark. Exposure to Apache Airflow, Prefect, Dagster, dbt. Expertise in data analysis with exposure to data services (such as Glue, Lake Formation more »
and transformation. 4. Develop and maintain ETL workflows, scripts, and data processing jobs using programming languages (e.g., Python, Java, Scala) and ETL tools (e.g., Apache Spark, Apache Airflow). 5. Identify and address data quality issues and implement data cleansing, validation, and enrichment processes. 6. Collaborate with software more »
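As a concrete illustration of steps 4 and 5, here is a minimal PySpark sketch of an ETL job with basic cleansing and validation; the bucket paths and column names (customer_id, email, signup_date, country) are hypothetical, not taken from the listing.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cleanse-example").getOrCreate()

# Extract: read raw records from a hypothetical bucket.
raw = spark.read.parquet("s3://example-bucket/raw/customers/")

# Transform: de-duplicate, validate, normalise, and enrich.
cleansed = (
    raw
    .dropDuplicates(["customer_id"])                              # de-duplicate on the key
    .filter(F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"))  # basic email validation
    .withColumn("signup_date", F.to_date("signup_date"))          # normalise types
    .na.fill({"country": "UNKNOWN"})                              # default missing values
)

# Load: write the cleansed data back out.
cleansed.write.mode("overwrite").parquet("s3://example-bucket/clean/customers/")
```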
that incorporate various data backends, query languages and ORM frameworks. Experience designing and building ETL pipelines around libraries and frameworks like Pandas and Apache Spark. Strong API design skills and familiarity with building web applications. A proponent of great testing, first-class observability and automating everything. Familiarity more »
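A short sketch of the kind of Pandas-based ETL pipeline mentioned above; the file names and columns (order_id, order_date, quantity, unit_price) are illustrative assumptions.

```python
import pandas as pd

# Extract: read raw orders from a hypothetical CSV file.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Transform: drop incomplete records, derive revenue, aggregate per day.
daily = (
    orders
    .dropna(subset=["order_id"])
    .assign(
        revenue=lambda df: df["quantity"] * df["unit_price"],
        day=lambda df: df["order_date"].dt.date,
    )
    .groupby("day", as_index=False)["revenue"]
    .sum()
    .rename(columns={"revenue": "daily_revenue"})
)

# Load: persist the aggregate for downstream consumers.
daily.to_parquet("daily_revenue.parquet", index=False)
```

Keeping each stage a small, pure step like this is also what makes the "great testing" expectation above tractable.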
engineers of varying levels of experience. Flexibility and willingness to adapt to new software and techniques. Nice to have: Experience working with projects in Apache Spark, Databricks or similar. Expert cloud platform knowledge, e.g. Azure. What will be your key responsibilities? A technical expert and leader on the Petcare more »
CMake. Proficiency in developing cross-platform SDKs for Windows, macOS, Linux, WebAssembly and embedded platforms. Knowledge of machine learning frameworks such as ONNXRuntime or Apache TVM. Experience deploying and optimising real-time embedded audio algorithms. Familiarity with audio codecs, audio formats and audio streaming protocols is preferred. Willingness to more »
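For the machine-learning-framework requirement, a minimal ONNX Runtime inference sketch in Python follows; the model file and input shape (one second of 16 kHz audio) are assumptions for illustration, not details from the listing.

```python
import numpy as np
import onnxruntime as ort

# Load a hypothetical exported model; in practice this might be an audio
# denoiser or classifier converted to ONNX.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

# One second of silence at 16 kHz, shaped (batch, samples) -- illustrative only.
audio = np.zeros((1, 16000), dtype=np.float32)

# Run inference; None means "return all model outputs".
outputs = session.run(None, {input_name: audio})
print(outputs[0].shape)
```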
development (ideally AWS) and container technologies. Strong communication and interpersonal skills. Experience managing projects and working with external third-party teams. Ideally experience with Apache Spark or Apache Flink (but not essential). Please note, this role is unable to provide sponsorship. If this role sounds of interest and more »
work is largely down to you. It can be entirely Back End. Otherwise, the stack includes Redux Saga, Ag-Grid, Node, TypeScript, gRPC, protobuf, Apache Ignite, Apache Airflow and AWS. As the application suite grows and advances in complexity, there is a decent amount of interaction with the more »
problem-solving and communication skills. Proficiency in scripting for automating deployment and maintenance tasks. Understanding of DAG (Directed Acyclic Graph) models and experience with Apache Airflow for managing complex data processing workflows. Solid understanding of software development best practices, including version control (Git), testing, and code review processes. more »
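To make the DAG model concrete, here is a minimal Airflow sketch, assuming Airflow 2.4+ and hypothetical task callables; the dependencies declared at the end are exactly the directed-acyclic-graph structure the listing refers to.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder task bodies; real tasks would do the actual work.
def extract(): ...
def transform(): ...
def load(): ...

with DAG(
    dag_id="example_etl",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # The >> operator declares the DAG edges: extract -> transform -> load.
    t_extract >> t_transform >> t_load
```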
learning management systems or content management systems). Strong knowledge of customer-centric service management processes. Experience with web hosting platforms and security standards (e.g. Apache). Demonstrated ability to adapt to an ever-changing technical landscape. Extensive experience of working with a diverse range of stakeholders and external partners to more »
London, England, United Kingdom Hybrid / WFH Options
Aventum Group
Profisee), Snowflake Data Integration, Azure Service Bus, Delta Lake, BigQuery, Azure DevOps, Azure Monitor, Azure Data Factory, SQL Server, Azure Data Lake Storage, Azure App Service, Apache Airflow, Apache Iceberg, Apache Spark, Apache Hudi, Apache Kafka, Power BI, Azure ML is a plus. Experience with Azure more »
field (STEM). Technical proficiency in cloud-based data solutions (AWS, Azure or GCP), engineering languages including Python, SQL, Java, and pipeline management tools, e.g., Apache Airflow. Familiarity with big data technologies such as Hadoop or Spark. If this opportunity is of interest, or you know anyone who would be interested in more »
multithreading, database access, performance tuning and design patterns. Experience in a diverse set of technologies including SQL, Spring, Spring Boot, Hibernate, JPA, JUnit, Mockito, Apache Spark, Storm and related technologies. Practical experience in developing software products/solutions that are deployed on cloud (as PaaS, SaaS) using a client more »
testing, and maintenance of data pipelines and data storage systems on Google Cloud Platform (GCP). You will be working with technologies such as Apache Airflow, BigQuery, Python, and SQL to transform and load large data sets, ensuring high data quality and accessibility for business intelligence and analytics purposes. more »
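A hedged sketch of the load step described above, using the google-cloud-bigquery client; the GCS path and table identifier are hypothetical stand-ins.

```python
from google.cloud import bigquery

client = bigquery.Client()  # credentials are picked up from the environment

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Load transformed Parquet files from GCS into a BigQuery table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/clean/events/*.parquet",  # hypothetical GCS path
    "example-project.analytics.events",            # hypothetical table id
    job_config=job_config,
)
load_job.result()  # block until the load job completes
```

In an Airflow deployment, a call like this would typically live inside a task so the DAG handles scheduling and retries rather than the script itself.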
in: Building a modular Kubernetes-centric platform, with Pulumi, Terraform, and Argo. Implementing service mesh and configuration management for microservices. Operating critical infrastructure like Apache Pulsar or Kafka and Keycloak. Developing a multi-cloud approach supporting Azure, Alibaba, and GCP. Implementing collection, dashboards, and alerts for logs and metrics. more »
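As one concrete example of the messaging-infrastructure work listed above, here is a minimal Kafka consumer sketch using the kafka-python library; the topic name, broker address, and group id are assumptions for illustration.

```python
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "platform-events",                 # hypothetical topic
    bootstrap_servers=["kafka:9092"],  # hypothetical broker address
    group_id="observability",
    auto_offset_reset="earliest",
)

# Each record exposes topic, partition, offset, and the raw payload bytes;
# a real service would deserialise these and feed dashboards or alerts.
for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)
```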
data engineering or a similar role. > Proficiency in programming languages such as Python, Java, or Scala. > Strong experience with data processing frameworks such as Apache Spark, Apache Flink, or Hadoop. > Hands-on experience with cloud platforms such as AWS, Google Cloud, or Azure. > Experience with data warehousing technologies more »