… libraries.
• Knowledge of Azure or other cloud services and the ability to implement solutions using them.
• Familiarity with databases (e.g. MySQL, MongoDB), web servers (e.g. Apache) and UI/UX design.
• Building and scaling infrastructure services using Microsoft Azure.
• Experience of using core cloud application infrastructure services, including identity platforms …
… data components such as Azure Data Factory, Azure SQL DB, Azure Data Lake, etc.
• Strong Python and SQL skills for data manipulation.
• Experience with Apache Spark and/or Databricks.
• Familiarity with BI visualization tools like Power BI.
• Experience in managing end-to-end analytics pipelines (batch and streaming) …
• … such as Azure Data Engineer Associate are desirable.
• Knowledge of data ingestion methods for real-time and batch processing.
• Proficiency in PySpark and debugging Apache Spark workloads.
What's in it for you?
• Annual bonus scheme (up to 10%)
• Excellent pension scheme
• Flexible working
• Enhanced family-friendly policies
• Ongoing …
… business context.
• Commercially minded, thinking about ways to increase revenue and profitability.
• Proficiency in data manipulation tools (Python, Pandas, Spark, SQL), data visualization tools (Apache Superset, Tableau, Power BI, ggplot2) and MS Excel.
• A grasp of pricing strategies, market dynamics and consumer behaviour in the online space is a plus. …
…
• Troubleshooting network issues (tcpdump/Wireshark).
• Scripting capabilities (sh/Bash/Python/Perl).
• Configuration of common services (DNS/Apache/NGINX/Postfix/Squid/SSH/iptables).
• Understanding of clustering services, enabling High Availability failover.
• Experience with enterprise hardware and …
… engineer will supply advice to analytical users on how they can access and utilise the new datasets.
Qualities:
• Comfortable with Python; ideally experience with Apache Spark and PySpark.
• Previous data analytics software experience.
• Able to scope new integrations and translate analytical user needs into technical requirements.
• UK based; data …
… Terraform/Docker/Kubernetes.
• Write software using Java, Scala or Python.
The following are nice to have, but not required:
• Apache Spark jobs and pipelines.
• Experience with any functional programming language.
• Database design concepts.
• Writing and analysing SQL queries.
VIOOH: Our recruitment team will …
… and Saltstack
• CI/CD: Jenkins, GitLab CI/CD
• Data/Messaging: Amazon Aurora (Postgres), ElastiCache (Redis), AmazonMQ (RabbitMQ)
• API: Tyk API Gateway, Apache
• Monitoring/Logging: Datadog and SumoLogic
• Security: IAM (Identity Access Management), Security Groups, mTLS
• Other: VPC and general networking
In return, they would be …
… Flask, Tornado or Django; Docker.
• Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo.
• Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc.
• Data acquisition, development of data sets and improving data quality.
• Preparing data for predictive and prescriptive modelling.
• Hands-on …
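The extract/transform/load pattern that tools like Luigi, Airflow or Argo orchestrate can be sketched in plain Python. This is a minimal illustration only; the function names, record fields and cleaning rule are assumptions for the example, not drawn from any listing.

```python
# Minimal ETL sketch: the pattern ETL orchestrators schedule and retry.
# Stage names and sample records are illustrative assumptions.

def extract():
    # In practice this would read from an API, database or file store.
    return [
        {"user": "alice", "spend": "12.50"},
        {"user": "bob", "spend": "n/a"},   # dirty record
        {"user": "carol", "spend": "7.25"},
    ]

def transform(rows):
    # Clean and normalise: drop rows whose spend is not numeric.
    clean = []
    for row in rows:
        try:
            clean.append({"user": row["user"], "spend": float(row["spend"])})
        except ValueError:
            continue  # improving data quality: skip unparseable records
    return clean

def load(rows, target):
    # In practice this would write to a warehouse; here, an in-memory list.
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded)  # 2 rows survive cleaning
```

Orchestrators add what this sketch omits: scheduling, dependency tracking between stages, and retries on failure.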
London, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
• Work with Hadoop, Spark, and other platforms for large-scale data processing.
• Real-Time Data Streaming: Develop and manage pipelines using CDC, Kafka, and Apache Spark.
• Database Management: Handle SQL databases like Oracle, MySQL, or PostgreSQL.
• Data Governance: Ensure data quality, security, and compliance with best practices.
Ideal Candidate: …
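The core of a CDC-based streaming pipeline is applying a feed of row-level change events to keep a target in sync. A toy simulation in plain Python, with a generator standing in for the Kafka topic and a dictionary for the target table (in production Kafka would carry the events and Spark would process them; the event shapes here are illustrative assumptions):

```python
# Toy CDC simulation: replay a change feed to rebuild current state.
# A generator stands in for the Kafka change stream.

def change_stream():
    # Each event mirrors a row-level change captured from a source database.
    yield {"op": "insert", "key": "a", "value": 1}
    yield {"op": "insert", "key": "b", "value": 2}
    yield {"op": "update", "key": "a", "value": 5}
    yield {"op": "delete", "key": "b", "value": None}

def apply_changes(events):
    # Maintain the current state of the target table from the change feed.
    state = {}
    for e in events:
        if e["op"] == "delete":
            state.pop(e["key"], None)
        else:  # insert and update are both upserts keyed on the row key
            state[e["key"]] = e["value"]
    return state

snapshot = apply_changes(change_stream())
print(snapshot)  # {'a': 5}
```

The real systems add ordering guarantees, partitioning and fault tolerance, but the upsert/delete replay logic is the same.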
… step functions and ECS services.
• Strong understanding of AWS ecosystems like Lambdas, Step Functions and ECS services.
• Experience with data stack technologies such as Apache Iceberg & Spark.
• Exposure to Apache Airflow, Prefect, Dagster, DBT.
• Expertise in data analysis, with exposure to data services such as Glue, Lake Formation …
Basingstoke, England, United Kingdom Hybrid / WFH Options
Intec Select
… cross-functionally across the business to understand the requirements of the products.
• Designing and implementing performance-related data ingestion pipelines from multiple sources using Apache Spark.
• Integrating end-to-end data pipelines, ensuring a high level of quality is maintained.
• Working with an Agile delivery/DevOps methodology to …
… build, operate and manage a complex production environment.
• Familiarity with RedHat-based Linux versions.
• Experience of web application server architectures, security, protocols and technologies (Apache Web Server, HAProxy, Tomcat), including configuration and optimization.
• Understanding of DR/BCP business processes.
• Comfortable liaising with business users as well as technical teams …
… catalogue, data quality management platforms).
• Experience with structured and unstructured data models.
• Experience with one or more of the following web servers advantageous: Apache, Nginx, MS-IIS.
• Experience with one or more of the following databases advantageous: MySQL, MongoDB, Sybase.
• Experience with one or more of …
Greater London, England, United Kingdom Hybrid / WFH Options
CommuniTech Recruitment Group
… AWS ecosystems like Lambdas, step functions and ECS services.
• Experience of Dremio is a nice-to-have.
• Experience with data stack technologies such as Apache Iceberg & Spark.
• Exposure to Apache Airflow, Prefect, Dagster, DBT.
• Expertise in data analysis, with exposure to data services such as Glue, Lake Formation …
… comfortable designing and constructing bespoke solutions and components from scratch to solve the hardest problems. Adept in Java, Scala, and big data technologies like Apache Kafka and Apache Spark, they bring a deep understanding of engineering best practices. This role involves scoping and sizing, and indeed estimating and … be considered.
Key responsibilities of the role are summarised below:
• Design and implement large-scale data processing systems using distributed computing frameworks such as Apache Kafka and Apache Spark.
• Architect cloud-based solutions capable of handling petabytes of data.
• Lead the automation of CI/CD pipelines for …
… tools (e.g. Docker, Kubernetes).
• CI/CD pipelines and tools (e.g. DBT, Jenkins, GitLab CI).
Desirable:
• Experience with analytics tools and frameworks (e.g. Apache Spark, Hadoop).
• SQL
• SageMaker, DataRobot
• Google Cloud and Azure Data platform
• Metadata-driven frameworks to ingest, transform and manage data …
… Microservice frameworks
• Working knowledge of client-side web technologies (React, JavaScript)
• Experience with messaging frameworks (like Tibco, Kafka)
• Experience with web servers running Tomcat, Apache
• Exposure to Azure Cloud services (like Azure AKS, CI/CD)
• Knowledge of open-source market edge technologies like cache frameworks, monitoring tools, etc. …
… within the EU Fusion programme and connections to international HPC communities, showcasing contributions made to the field.
• Experience in workflow management systems such as Apache Airflow.
• Familiarity with Research Data Management methodologies, modern database technologies including SQL, NoSQL and graph databases, and parallel file access technologies such as MPI …
Greater London, England, United Kingdom Hybrid / WFH Options
Validis
… to leverage CI/CD tools to streamline data pipeline development and deployment.
• Proven expertise in designing and implementing ETL pipelines using tools like Apache Airflow, Luigi, Spark, or similar frameworks.
• Strong understanding of data warehousing concepts and data modelling techniques.
• Experience with SQL and proficiency in writing complex …
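A typical warehouse-style query joins a fact table to a dimension and aggregates per category. A minimal sketch, using Python's built-in sqlite3 as a stand-in for a real warehouse; the star-schema tables and rows are illustrative assumptions:

```python
# Warehouse-style query sketch: fact table joined to a dimension,
# aggregated per category. sqlite3 stands in for a real warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
    INSERT INTO fact_sales VALUES (1, 10.0), (1, 5.0), (2, 20.0);
""")

rows = conn.execute("""
    SELECT d.category, SUM(f.amount) AS total
    FROM fact_sales AS f
    JOIN dim_product AS d USING (product_id)
    GROUP BY d.category
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('games', 20.0), ('books', 15.0)]
```

Keeping measures in narrow fact tables and descriptive attributes in dimensions is the modelling idea the listings refer to; the join/group-by shape above is the query pattern it enables.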
… GoLang.
- Significant experience with Hadoop, Spark and other distributed processing platforms and frameworks.
- Experience working with open table/storage formats like Delta Lake, Apache Iceberg or Apache Hudi.
- Experience of developing and managing real-time data streaming pipelines using change data capture (CDC), Kafka and Apache …
… pipelines.
• Know your way around a Unix-based operating system.
• Experience working with any major cloud provider (AWS, GCP, Azure).
• Fluency in English.
• Experience using Apache Airflow.
• Experience using Docker.
• Experience using Apache Spark.
Benefits:
• Salary £40-50K per annum, dependent on skills and experience.
• 25 days annual …
4. Monitoring and Logging: Implement and maintain monitoring, logging, and alerting solutions.
Key technologies: AWS, VPN, VPC peering, EC2, S3, Lambda, Aurora, Docker/Kubernetes, Apache Airflow, AWS networking concepts such as VPN, VPC peering, subnets, security groups and NAT gateways, AWS CloudWatch or equivalent, Kafka or similar data streaming components. …
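The alerting logic behind a CloudWatch-style alarm is simple: fire when a metric breaches a threshold for N consecutive evaluation periods. A minimal sketch in plain Python; the threshold, period count and sample datapoints are illustrative assumptions, not any service's defaults:

```python
# Sketch of threshold alerting as a CloudWatch-style alarm evaluates it:
# ALARM only when the last `periods` datapoints all exceed the threshold.

def alarm_state(datapoints, threshold, periods):
    """Return the alarm state for a series of metric datapoints."""
    if len(datapoints) < periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"

cpu_percent = [42.0, 55.0, 91.3, 93.7, 95.1]
print(alarm_state(cpu_percent, threshold=90.0, periods=3))  # ALARM
print(alarm_state(cpu_percent, threshold=90.0, periods=5))  # OK
```

Requiring several consecutive breaches rather than a single datapoint is what keeps transient spikes from paging anyone.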