of the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR); RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL); NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices: code reviews, testing frameworks, CI/CD, and …
data modelling, warehousing, and performance optimisation. Proven experience with cloud platforms (AWS, Azure, or GCP) and their data services. Hands-on experience with big data frameworks (e.g. Apache Spark, Hadoop). Strong knowledge of data governance, security, and compliance. Ability to lead technical projects and mentor junior engineers. Excellent problem-solving skills and experience in agile environments. Desirable: Experience …
SQL and data modeling concepts. Experience with ETL tools (Informatica, Talend, Apache Airflow, or similar). Hands-on experience with Python or Scala for data processing. Familiarity with big data technologies like Hadoop, Spark, Hive, or Kafka. Experience with cloud data platforms (AWS Redshift, Azure Data Factory, Google BigQuery, Snowflake). Knowledge of data warehousing concepts and data architecture best practices. Good to Have: Experience …
ETL processes. Proficiency in Python. Experience with cloud platforms (AWS, Azure, or GCP). Knowledge of data modelling, warehousing, and optimisation. Familiarity with big data frameworks (e.g. Apache Spark, Hadoop). Understanding of data governance, security, and compliance best practices. Strong problem-solving skills and experience working in agile environments. Desirable: Experience with Docker/Kubernetes, streaming data (Kafka …
experience within either Flask, Tornado or Django; Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands-on coding experience, such as Python. Reporting tools …
City of London, London, United Kingdom Hybrid/Remote Options
Solirius Reply
Luton, England, United Kingdom Hybrid/Remote Options
easyJet
Understanding of ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming framework (e.g. Flink, Hadoop, Beam). Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. Experience with data quality and/or data lineage frameworks like Great Expectations, dbt …
communication skills. We're excited if you have 7+ years of experience delivering multi-tier, highly scalable, distributed web applications. Experience working with distributed computing frameworks: Hive/Hadoop, Apache Spark, Kafka, Airflow. Working with programming languages Python, Java, SQL. Working on building ETL (Extraction, Transformation and Loading) solutions using PySpark. Experience in SQL/NoSQL database design …
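For illustration only (not part of any listing): the ETL pattern the listing above spells out — extract, transform, load — can be sketched in plain Python. In practice the listing expects PySpark DataFrames rather than lists, but the three-stage shape is the same; every name below is hypothetical.

```python
# Minimal ETL sketch: extract -> transform -> load.
# Plain-Python stand-in for the PySpark pattern; all names are hypothetical.

def extract(rows):
    """Extract: read raw records (an in-memory list stands in for a source table)."""
    return list(rows)

def transform(records):
    """Transform: drop incomplete rows and normalise the name field."""
    return [
        {"id": r["id"], "name": r["name"].strip().lower()}
        for r in records
        if r.get("id") is not None and r.get("name")
    ]

def load(records, target):
    """Load: write cleaned rows into the target store (a dict keyed by id)."""
    for r in records:
        target[r["id"]] = r["name"]
    return target

raw = [
    {"id": 1, "name": "  Alice "},
    {"id": None, "name": "Bob"},   # dropped: missing id
    {"id": 2, "name": "Carol"},
]
warehouse = load(transform(extract(raw)), {})
print(warehouse)  # {1: 'alice', 2: 'carol'}
```

In PySpark the same stages would typically be `spark.read` (extract), `DataFrame` filters and column expressions (transform), and `DataFrame.write` (load).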
to support analytical and business goals. Monitor, troubleshoot, and enhance data performance and infrastructure. Key Skills & Experience: Strong experience with SQL/NoSQL databases, data warehousing, and big data (Hadoop, Spark). Proficient in Python, Java, or Scala with solid OOP and design pattern understanding. Expertise in ETL tools, DevOps and orchestration frameworks (Airflow, Apache NiFi). Hands-on …
Elizabeth, New Jersey, United States Hybrid/Remote Options
ALTA IT Services
of experience may be considered in lieu of a degree. Proficiency in programming languages like Python or Java, strong SQL skills, and knowledge of big data tools like Apache Hadoop, Spark, or Kafka. Experience with cloud platforms (AWS, Azure, GCP) and data warehousing solutions (Snowflake, Redshift, BigQuery). Self-driven, with a demonstrated ability to work independently with minimum …
tools and libraries (e.g., Power BI). Background in database administration or performance tuning. Familiarity with data orchestration tools, such as Apache Airflow. Previous exposure to big data technologies (e.g., Hadoop, Spark) for large data processing. Strong analytical skills, including a thorough understanding of how to interpret customer business requirements and translate them into technical designs and solutions. Strong communication …
in data engineering, with a strong emphasis on data design and architecture. Proven proficiency in SQL and experience with relational databases. Practical experience with big data technologies such as Hadoop or Spark. In-depth understanding of data warehousing concepts and ETL frameworks. Familiarity with cloud platforms including AWS, Azure, or GCP. Strong analytical and problem-solving skills, with the …
Azure Data Factory. Experience in Unix/Linux environments and shell scripting. Excellent problem-solving, analytical, and communication skills. Preferred Skills: Experience with Informatica Intelligent Cloud Services (IICS). Exposure to big data platforms (Hadoop, Spark) or data lake architectures. Familiarity with Agile or Scrum methodologies. Background in finance, healthcare, or retail data environments is a plus. Experience integrating data from cloud platforms (AWS, Azure, GCP …
highly desired) in CS, IT or a quantitative data field of study, OR equivalent experience; certifications (e.g., Microsoft Certified: Azure Database Administrator Associate), experience with Big Data technologies (Spark, Hadoop), and DevOps …
Manchester, Lancashire, United Kingdom Hybrid/Remote Options
CHEP UK Ltd
such as Python, R, and SQL for data analysis and model development. Experience working with cloud computing platforms including AWS and Azure, and familiarity with distributed computing frameworks like Hadoop and Spark. Deep understanding of supply chain operations and the ability to apply data science methods to solve real-world business problems effectively. Strong foundational knowledge in mathematics and …
Glasgow, Scotland, United Kingdom Hybrid/Remote Options
NLB Services
City of London, London, United Kingdom Hybrid/Remote Options
Areti Group | B Corp™
Security project experience. Experience with Palantir Foundry (full training provided). Familiarity with AI/ML Ops pipelines, real-time analytics, or edge deployments. Big Data stack knowledge (e.g., Hadoop, Spark, Kafka). GenAI/LLM experience (e.g., AWS Bedrock, LangChain). Why this is a great move 🌳 Mission & impact: Work on projects where data-driven decisions have real …
Garner, North Carolina, United States Hybrid/Remote Options
Butterball
predictions. 10. Implements best practices for the full lifecycle of machine learning models, including deployment, monitoring, and retraining. 11. Manages infrastructure of cloud and big data technologies (e.g., Spark, Hadoop) to ensure predictive systems are scalable and efficient. 12. Ensures the integrity and quality of the data that feeds into the predictive models. 13. Develops dashboards and self-service …
/GCP/Azure). Hands-on experience with Unix-based command line and DevOps tools (Git, Docker, Kubernetes). Hands-on experience with big data technologies (e.g. Spark, Hadoop, Databricks). Experience with coaching/mentoring other engineers. Prior experience in Management Consulting is a strong plus. Willingness to travel and work at local and international clients. Fluency …