Flask, Tornado or Django; Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition, development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands-on …
and SaltStack. CI/CD: Jenkins, GitLab CI/CD. Data/Messaging: Amazon Aurora (Postgres), ElastiCache (Redis), Amazon MQ (RabbitMQ). API: Tyk API Gateway, Apache. Monitoring/Logging: Datadog and SumoLogic. Security: IAM (Identity and Access Management), Security Groups, mTLS. Other: VPC and general networking. In return, they would be …
London, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
Work with Hadoop, Spark, and other platforms for large-scale data processing. Real-Time Data Streaming: Develop and manage pipelines using CDC, Kafka, and Apache Spark. Database Management: Handle SQL databases such as Oracle, MySQL, or PostgreSQL. Data Governance: Ensure data quality, security, and compliance with best practices. Ideal Candidate …
Senior QA Automation Tester - Cypress Job Title: QA Automation Engineer Location: South West London Salary: £60,000 per year + Benefits Job type: Permanent Role: Hybrid A long-established fintech company based in London, renowned for its cutting-edge payment …
pipelines. Know your way around Unix-based operating systems. Experience working with any major cloud provider (AWS, GCP, Azure). Fluency in English. Experience using Apache Airflow, Docker and Apache Spark. Benefits: Salary £40-50K per annum, dependent on skills and experience; 25 days annual …
Linux environments. Knowledge of data modeling, database design, and query optimization techniques. Experience with real-time data processing, streaming, and analytics technologies (e.g., Kafka, Apache Flink). Familiarity with financial markets, trading systems, and quantitative analysis is a plus. Excellent problem-solving, analytical, and communication skills, with the ability …
Go (GoLang). - Significant experience with Hadoop, Spark and other distributed processing platforms and frameworks. - Experience working with open table/storage formats such as Delta Lake, Apache Iceberg or Apache Hudi. - Experience developing and managing real-time data streaming pipelines using Change Data Capture (CDC), Kafka and Apache …
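Several of the listings above ask for CDC-based streaming pipelines. As a hedged, stdlib-only sketch of one idea behind change data capture, the snippet below diffs two snapshots of a keyed table into insert/update/delete events; real CDC tooling (e.g. Debezium publishing to Kafka) instead reads the database's transaction log, and every name and record here is hypothetical.

```python
# Snapshot-based CDC sketch: compare a "before" and "after" view of a
# keyed table and emit the change events that turn one into the other.
# Illustrative only -- production CDC is log-based, not snapshot-based.

def diff_snapshots(before: dict, after: dict) -> list:
    """Return (op, key, row) events transforming `before` into `after`."""
    events = []
    for key, row in after.items():
        if key not in before:
            events.append(("insert", key, row))
        elif before[key] != row:
            events.append(("update", key, row))
    for key in before:
        if key not in after:
            events.append(("delete", key, None))
    return events

# Hypothetical customer table, captured at two points in time.
events = diff_snapshots(
    {1: {"name": "alice"}, 2: {"name": "bob"}},
    {1: {"name": "alice"}, 2: {"name": "bobby"}, 3: {"name": "carol"}},
)
# events now holds one update (key 2) and one insert (key 3)
```

Downstream, each event would typically be serialised and written to a topic keyed by the row's primary key, so consumers can replay changes in order.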
Recent and proven experience of using Red Hat Linux (or other Unix flavours), including scripting in a commercial environment. Experience supporting applications (Java, .NET, Apache, IIS). Desirable: Knowledge of Microsoft Windows Server; experience working within the financial industry; experience working within an ITIL framework; experience working with …
Central London, London, United Kingdom Hybrid / WFH Options
Hireful
the following: - Experience of working in an Agile product delivery framework - Experience with PHP 8+ and the Laravel framework - Experience with Linux, NGINX (or Apache) and MySQL server (LEMP/LAMP stack) - Experience of writing unit tests with test frameworks (PHPUnit, Codeception, etc.) Although we have a dedicated QA Test …
of atomic concepts; compiler technologies such as building interpreters, compilers and DSLs; relational databases and massively parallel database systems; big data technologies such as Apache Spark and Apache Arrow; software development in a commercial environment. Qualifications & Skills: Development tools and methodologies; experience of TDD and BDD in a commercial … Exposure to continuous build and deployment solutions such as Jenkins. Able to work within an agile environment delivering software incrementally. Nice to have: ClickHouse, Apache Spark, Kafka, Postgres, OLAP. Thank you for considering us …
engineers of varying levels of experience. Flexibility and willingness to adapt to new software and techniques. Nice to Have: Experience working with projects in Apache Spark, Databricks or similar. Expert cloud platform knowledge, e.g. Azure. What will be your key responsibilities? A technical expert and leader on the Petcare …
Platforms. Upskill business users in analytical standards and data science tools and techniques. Experienced using one or more analytical tools, e.g. R, Python, Tableau, Apache Spark, etc. Highly skilled in Azure Machine Learning and Azure Databricks. Knowledge and experience of how to maintain data, tools and processes to generate …
Key responsibilities: Develop robust architectures and designs for big data platforms and applications within the Apache Hadoop ecosystem. Implement and deploy big data platforms and solutions on-premises and in hybrid cloud environments. Read, understand, and modify open-source code to implement bug fixes and perform upgrades. Ensure all … applications. Your Profile Key Skills/Knowledge/Experience: Proven experience in architecting, designing, building, and deploying big data platforms and applications using the Apache Hadoop ecosystem in hybrid cloud and private cloud scenarios. Experience with hybrid cloud big data platform designs and deployments, especially in AWS, Azure, or … Google Cloud Platform. Experience in large-scale data platform builds and application migrations. Expert knowledge of the Apache Hadoop ecosystem and associated Apache projects (e.g. HDFS, Hive, HBase, Spark, Ranger, Kafka, YARN, etc.). Proficiency in Kubernetes for container orchestration. Strong understanding of security practices within big data environments. …
Hackney, Greater London, Shoreditch, United Kingdom
Talent Smart
role. Proven experience with the Snowflake data warehouse, including data loading, transformations, and performance tuning. Strong expertise in ETL tools and processes (e.g., Talend, Informatica, Apache NiFi). Experience with data visualization tools, particularly Power BI. Excellent problem-solving and analytical skills. Strong communication skills, with the ability to …
designing and building platforms, and supporting applications both in cloud environments and on-premises. These resources are expected to be open-source contributors to Apache projects, have an in-depth understanding of the code behind the Apache ecosystem, and be capable of identifying and fixing complex issues during …
developing and optimising ETL pipelines. Version Control: Experience with Git for code collaboration and change tracking. Data Pipeline Tools: Proficiency with tools such as Apache Airflow. Cloud Platforms: Familiarity with AWS, Azure, Snowflake, and GCP. Visualisation: Tableau or Power BI. Delivery Tools: Familiarity with agile backlogs, code repositories, automated builds …
pipeline solutions for the ingestion, transformation, and serving of data, as well as solutions for the orchestration of pipeline components (e.g. AWS Step Functions, Apache Airflow). Good understanding of data modelling, algorithms, and data transformation techniques to work with data platforms. Working knowledge of cloud development practices (AWS …
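The listing above mentions orchestrating pipeline components with tools like AWS Step Functions or Apache Airflow. The core idea those tools share is running tasks in dependency order; as a hedged, stdlib-only sketch (the task names and `deps` mapping are hypothetical, and real orchestrators add scheduling, retries, and state), Python's `graphlib` can express the same pattern:

```python
# Toy pipeline orchestrator: topologically sort a task dependency graph
# and run each task's callable once its predecessors have finished.
# Illustrative only -- Airflow/Step Functions add scheduling and retries.
from graphlib import TopologicalSorter

def run_pipeline(deps: dict, tasks: dict) -> list:
    """Execute callables in `tasks` respecting `deps`; return run order."""
    order = list(TopologicalSorter(deps).static_order())
    for name in order:
        tasks[name]()
    return order

# Hypothetical three-stage ETL DAG: extract -> transform -> load.
log = []
deps = {"transform": {"extract"}, "load": {"transform"}}
tasks = {name: (lambda n=name: log.append(n))
         for name in ("extract", "transform", "load")}
order = run_pipeline(deps, tasks)
```

In an Airflow DAG the same shape would be declared with operators and `>>` dependencies rather than a plain dict, but the topological execution order is the same.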
a production setting. Knowledge of developing real-time data stream systems (ideally Kafka). Proven track record in developing data systems using PySpark and Apache Spark for batch processing. Capable of managing data intake from various sources, including data streams, unstructured data, relational databases, and NoSQL databases. Extensive knowledge …
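The snippet above asks for batch processing with PySpark. As a hedged, stdlib-only illustration of the reduce-by-key pattern such jobs typically express (in PySpark roughly `rdd.map(...).reduceByKey(add)`), the records and field names below are hypothetical and no Spark cluster is involved:

```python
# Reduce-by-key batch aggregation sketch: sum amounts per key, the
# single-machine analogue of a PySpark reduceByKey over a partitioned RDD.
from collections import defaultdict

def total_by_key(records):
    """Group (key, amount) pairs and sum amounts per key."""
    totals = defaultdict(int)
    for key, amount in records:
        totals[key] += amount
    return dict(totals)

# Hypothetical transaction records keyed by currency.
records = [("gbp", 10), ("usd", 5), ("gbp", 7)]
totals = total_by_key(records)
```

Spark distributes the same computation by partitioning `records` across executors and merging per-partition totals, which is why the combining operation must be associative.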