ETL processes. Proficiency in Python. Experience with cloud platforms (AWS, Azure, or GCP). Knowledge of data modelling, warehousing, and optimisation. Familiarity with big data frameworks (e.g. Apache Spark, Hadoop). Understanding of data governance, security, and compliance best practices. Strong problem-solving skills and experience working in agile environments. Desirable: Experience with Docker/Kubernetes, streaming data (Kafka …
data modelling, warehousing, and performance optimisation. Proven experience with cloud platforms (AWS, Azure, or GCP) and their data services. Hands-on experience with big data frameworks (e.g. Apache Spark, Hadoop). Strong knowledge of data governance, security, and compliance. Ability to lead technical projects and mentor junior engineers. Excellent problem-solving skills and experience in agile environments. Desirable: Experience …
experience within either Flask, Tornado or Django; Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies, such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands-on coding experience, such as Python. Reporting tools …
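For illustration of the kind of ETL orchestration tooling several of these roles reference, here is a minimal sketch of an Airflow pipeline in Python. It assumes Airflow 2.4+ and the TaskFlow API; the DAG name, task logic, and data are hypothetical, not taken from any listing.

```python
# Minimal Airflow TaskFlow ETL sketch; DAG id, tasks and data are illustrative only.
from datetime import datetime
from airflow.decorators import dag, task


@dag(schedule=None, start_date=datetime(2024, 1, 1), catchup=False, tags=["example"])
def example_etl():
    @task
    def extract():
        # Placeholder for pulling rows from a source system (API, database, file drop).
        return [{"id": 1, "value": 10}, {"id": 2, "value": None}]

    @task
    def transform(rows):
        # Basic data-quality step: drop rows with missing values.
        return [r for r in rows if r["value"] is not None]

    @task
    def load(rows):
        # Placeholder for writing to a warehouse table.
        print(f"loading {len(rows)} clean rows")

    load(transform(extract()))


example_etl()
```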
data modelling tools, data warehousing, ETL processes, and data integration techniques. · Experience with at least one cloud data platform (e.g. AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark). · Strong knowledge of data workflow solutions like Azure Data Factory, Apache NiFi, Apache Airflow, etc. · Good knowledge of stream and batch processing solutions like Apache Flink, Apache …
experience within either Flask, Tornado or Django; Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies, such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Reporting tools (e.g. Tableau, PowerBI, Qlik). GDPR and Government …
London, South East, England, United Kingdom Hybrid/Remote Options
Michael Page Technology
Experience within the Insurance industry. Strong proficiency in programming languages such as Python, Java, or Scala. Experience with cloud platforms like Azure. Knowledge of big data technologies such as Hadoop, Spark, or Kafka. Proficiency in SQL and database management systems. Familiarity with data warehousing concepts and tools. Ability to work collaboratively with cross-functional teams. A solid understanding of …
IT consulting firm specializing in Big Data Engineers. Currently, we are in pursuit of a highly proficient Big Data Engineer. Requirements: Strong experience with big data technologies such as Hadoop, Spark, and Kafka. Experience building and optimizing data pipelines, architectures, and data sets. Proficiency in programming languages such as Java, Scala, or Python. In-depth knowledge of distributed file …
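As a sketch of the "building and optimizing data pipelines" work these Big Data Engineer listings describe, the following minimal PySpark batch step reads raw events, deduplicates them, and writes a partitioned, curated data set. The S3 paths, column names, and aggregation are hypothetical.

```python
# Minimal PySpark batch-pipeline sketch; paths, columns and logic are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# Read raw events (hypothetical path), deduplicate, and aggregate per user per day.
events = spark.read.parquet("s3://example-bucket/raw/events/")

daily_counts = (
    events
    .dropDuplicates(["event_id"])
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .count()
)

# Write a partitioned, query-friendly data set for downstream consumers.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)
```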
Leeds, West Yorkshire, Yorkshire and the Humber, United Kingdom Hybrid/Remote Options
Pharmacy2U
high-quality software features. Strong communication, organisational, and interpersonal skills. Ability to manage multiple priorities in a fast-paced environment. Experience with SQL, NoSQL, and big data platforms (e.g., Hadoop, Spark). Knowledge of cloud security (AWS, Azure, GCP) and data access controls. Proficiency in scripting languages (e.g., Python, Bash) for automation. Certifications such as OSCP, CEH, CISSP, or …
Lancashire, North West England, United Kingdom Hybrid/Remote Options
CHEP
such as Python, R, and SQL for data analysis and model development. Experience working with cloud computing platforms including AWS and Azure, and familiarity with distributed computing frameworks like Hadoop and Spark. Deep understanding of supply chain operations and the ability to apply data science methods to solve real-world business problems effectively. Strong foundational knowledge in mathematics and …
Leeds, West Yorkshire, Yorkshire and the Humber, United Kingdom
AND Digital
skills in languages such as Python, R, SQL or Scala, alongside experience of using modern and traditional data technologies including: ElasticSearch, MongoDB, PostgreSQL, MySQL/MariaDB, Oracle, SQL Server, Hadoop, Kafka, Splunk/ELK or other logging and monitoring tools, BI and Data Warehousing solutions, and ETL and migration technologies. This should be backed with strong cloud-native data …
Tadworth, Surrey, South East, United Kingdom Hybrid/Remote Options
Pfizer
management of secondary data with application of real-world data. Experience with both traditional SQL and modern NoSQL data stores, including large-scale distributed systems such as Hadoop, and/or working in Snowflake/Databricks. Ability to partner with cross-functional teams (Commercial, Medical, Operations) to execute brand tactics. Able to connect, integrate and synthesize analysis and …
Python) and other database applications; · Understanding of PC environment and related software, including Microsoft Office applications; · Knowledge of data engineering using data stores including MS SQL Server, Oracle, NoSQL, Hadoop or other distributed data technologies. Experience using data visualization tools is a plus; · Experienced with Excel to aggregate, model, and manage large data sets; · Familiar with Microsoft Power BI …
e.g. MS SQL, Oracle). NoSQL technologies skills (e.g. MongoDB, InfluxDB, Neo4J). Data exchange and processing skills (e.g. ETL, ESB, API). Development (e.g. Python) skills. Big data technologies knowledge (e.g. Hadoop stack). Knowledge in NLP (Natural Language Processing). Knowledge in OCR (Optical Character Recognition). Knowledge in Generative AI (Artificial Intelligence) would be advantageous. Experience in containerisation technologies (e.g. Docker) would …
utilising strong communication and stakeholder management skills when engaging with customers. Significant experience of coding in Python and Scala or Java. Experience with big data processing tools such as Hadoop or Spark. Cloud experience; GCP specifically in this case, including services such as Cloud Run, Cloud Functions, BigQuery, GCS, Secret Manager, Vertex AI, etc. Experience with Terraform. Prior experience …
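To illustrate the kind of GCP data work named in the listing above, here is a minimal sketch of querying BigQuery from Python with the google-cloud-bigquery client. The project, dataset, and table names are hypothetical placeholders.

```python
# Minimal google-cloud-bigquery sketch; project, dataset and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT user_id, COUNT(*) AS orders_last_7_days
    FROM `example-project.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY user_id
"""

# client.query() submits the job; .result() blocks until it completes and returns rows.
for row in client.query(sql).result():
    print(row.user_id, row.orders_last_7_days)
```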
languages (Python, Bash) and programming languages (Java). Hands-on experience with DevOps tools: GitLab, Ansible, Prometheus, Grafana, Nagios, Argo CD, Rancher, Harbour. Deep understanding of big data technologies: Hadoop, Spark, and NoSQL databases. Nice to Have: Familiarity with agile methodologies (Scrum or Kanban). Strong problem-solving skills and a collaborative working style. Excellent communication skills, with the …
commercial impact. Understanding of MLOps vs DevOps and broader software engineering standards. Cloud experience (any platform). Previous mentoring experience. Nice to have: Snowflake or Databricks; Spark, PySpark, Hadoop or similar big data tooling; BI exposure (PowerBI, Tableau, etc.). Interview Process: Video call - high-level overview and initial discussion. In-person technical presentation - based on a provided example …
Stevenage, Hertfordshire, South East, United Kingdom Hybrid/Remote Options
MBDA
e.g. MS SQL, Oracle...). NoSQL technologies skills (e.g. MongoDB, InfluxDB, Neo4J...). Data exchange and processing skills (e.g. ETL, ESB, API...). Development (e.g. Python) skills. Big data technologies knowledge (e.g. Hadoop stack). Knowledge in NLP (Natural Language Processing). Knowledge in OCR (Optical Character Recognition). Knowledge in Generative AI (Artificial Intelligence) would be advantageous. Experience in containerisation technologies (e.g. Docker) would …
Bolton, Greater Manchester, North West, United Kingdom Hybrid/Remote Options
MBDA
e.g. MS SQL, Oracle...). NoSQL technologies skills (e.g. MongoDB, InfluxDB, Neo4J...). Data exchange and processing skills (e.g. ETL, ESB, API...). Development (e.g. Python) skills. Big data technologies knowledge (e.g. Hadoop stack). Knowledge in NLP (Natural Language Processing). Knowledge in OCR (Optical Character Recognition). Knowledge in Generative AI (Artificial Intelligence) would be advantageous. Experience in containerisation technologies (e.g. Docker) would …
recommender systems at scale. Deep understanding of recent LLM and generative AI architectures, with experience fine-tuning and deploying them. Experience processing large-scale data via distributed systems (Spark, Hadoop, etc.). Excellent communication and collaboration across engineering, analytics, and product teams. Track record of impact through production ML systems and/or peer-reviewed publications. Accommodation requests: If …
London, South East England, United Kingdom Hybrid/Remote Options
take pride in the software you produce. Apache Maven. Desirable Skills and Experience: Industry experience of any of the following technologies/skills would be beneficial: Elasticsearch, Docker, Apache Hadoop, Kafka or Camel, JavaScript, knowledge of both Windows and Linux operating systems, Kubernetes, NiFi, NSQ, Apache Ignite, Arango. Why BAE Systems? This is a place where you'll be …
Data Solutions in Mission-Critical areas. WE NEED THE BIG DATA ENGINEER TO HAVE: Current DV clearance - Standard or Enhanced. Must have experience with big data tools such as Hadoop, Cloudera or Elasticsearch. Experience with Palantir Foundry is preferred but not essential. Experience working in an Agile Scrum environment. Experience in design, development, test and integration of software IT …