Northampton, Northamptonshire, East Midlands, United Kingdom
Experis
It, Express>It, Metadata Hub, and PDL. Hands-on experience with SQL, Unix/Linux shell scripting, and data warehouse concepts. Familiarity with big data ecosystems (Hadoop, Hive, Spark) and cloud platforms (AWS, Azure, GCP) is a plus. Proven ability to troubleshoot complex ETL jobs and resolve performance issues. Experience working with large-scale datasets and enterprise data More ❯
demonstrate the following experience: Commercial experience gained in a Data Engineering role on any major cloud platform (Azure, AWS or GCP) Experience in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, MongoDB, etc. Some experience with the design, build and maintenance of data pipelines and More ❯
Continuous Flows, Conduct>It, Express>It, Metadata Hub, and PDL. * Hands-on experience with SQL, Unix/Linux shell scripting, and data warehouse concepts. * Familiarity with big data ecosystems (Hadoop, Hive, Spark) and cloud platforms (AWS, Azure, GCP) is a plus. * Proven ability to troubleshoot complex ETL jobs and resolve performance issues. * Experience working with large-scale datasets and enterprise data More ❯
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Gemba Advantage
Java, TypeScript, Python, and Go Web libraries and frameworks such as React and Angular Designing, building, and maintaining CI/CD pipelines Big data technologies, such as NiFi, Hadoop, Spark Cloud and containerization technologies such as AWS, OpenShift, Kubernetes, Docker DevOps methodologies, such as infrastructure as code and GitOps Database technologies, e.g. relational databases, Elasticsearch, Mongo Why join Gemba More ❯
Research/Statistics or other quantitative fields. Experience in NLP, image processing and/or recommendation systems. Hands on experience in data engineering, working with big data framework like Spark/Hadoop. Experience in data science for e-commerce and/or OTA. We welcome both local and international applications for this role. Full visa sponsorship and relocation assistance More ❯
fine-tuning, or deployment. Technical Skills: Proficiency in Python and frameworks such as PyTorch, TensorFlow, or JAX. Strong familiarity with distributed computing and data engineering tools (e.g., SLURM, Apache Spark, Airflow). Hands-on experience with LLM training, fine-tuning, and deployment (e.g., Hugging Face, LLaMA-Factory, NVIDIA NeMo). Preferred Qualifications Advanced degree (MS/PhD) in More ❯
of SQL and Python You have strong hands-on experience of building scalable data pipelines in cloud-based environments using tools such as DBT, AWS Glue, AWS Lake Formation, Apache Spark and Amazon Redshift You have a good knowledge of data modelling, ELT design patterns, data governance and security best practices You're collaborative and pragmatic with great More ❯
testing of ETL (extract, transform, load) processes and data warehousing. 3. Strong understanding of SQL for data querying and validation. 4. Knowledge of big data technologies such as Hadoop, Spark, or Kafka is a plus. 5. Familiarity with scripting languages like Python, Java, or shell scripting. 6. Excellent analytical and problem-solving skills with a keen attention to detail. More ❯
london (city of london), south east england, united kingdom
HCLTech
testing of ETL (extract, transform, load) processes and data warehousing. 3. Strong understanding of SQL for data querying and validation. 4. Knowledge of big data technologies such as Hadoop, Spark, or Kafka is a plus. 5. Familiarity with scripting languages like Python, Java, or shell scripting. 6. Excellent analytical and problem-solving skills with a keen attention to detail. More ❯
and technology teams. Exposure to low-latency or real-time systems. Experience with cloud infrastructure (AWS, GCP, or Azure). Familiarity with data engineering tools such as Kafka, Airflow, Spark, or Dask. Knowledge of equities, futures, or FX markets. Company: Rapidly growing hedge fund with offices globally, including London. Salary & Benefits: The salary range/rates of pay is dependent More ❯
on leadership and communication, ensuring all key builds and improvements flow through this individual. Working with a modern tech stack including AWS, Snowflake, Python, SQL, DBT, Airflow, Spark, Kafka, and Terraform, you'll drive automation and end-to-end data solutions that power meaningful insights. Ideal for ambitious, proactive talent from scale-up or start-up environments, this position More ❯
Practical knowledge of infrastructure as code, CI/CD best practices, and cloud platforms (AWS, GCP, or Azure). Experience with relational databases and data processing and query engines (Spark, Trino, or similar). Familiarity with monitoring, observability, and alerting systems for production ML (Prometheus, Grafana, Datadog, or equivalent). Understanding of ML concepts. You don't need to More ❯
london (city of london), south east england, united kingdom
algo1
Practical knowledge of infrastructure as code, CI/CD best practices, and cloud platforms (AWS, GCP, or Azure). Experience with relational databases and data processing and query engines (Spark, Trino, or similar). Familiarity with monitoring, observability, and alerting systems for production ML (Prometheus, Grafana, Datadog, or equivalent). Understanding of ML concepts. You don't need to More ❯
e.g. AWS, Azure. Good knowledge of Linux, its development environments and tools Have experience in object-oriented methodologies, design patterns Understanding of Big Data technologies such as Hadoop, Spark Understanding of security implications and secure coding Proven grasp of software development lifecycle best-practices, agile methods, and conventions, including Source Code Management, Continuous Integration Practical experience with agile More ❯
Databricks platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure … Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Ability to program in languages such as SQL, Python, R, YAML, and JavaScript. Data Integration: Integrate data from various sources, including relational databases, APIs, and … best practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage More ❯
london (city of london), south east england, united kingdom
Mastek
Databricks platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure … Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Ability to program in languages such as SQL, Python, R, YAML, and JavaScript. Data Integration: Integrate data from various sources, including relational databases, APIs, and … best practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage More ❯