of platform components. Big Data Architecture: Build and maintain big data architectures and data pipelines to efficiently process large volumes of geospatial and sensor data. Leverage technologies such as Hadoop, Apache Spark, and Kafka to ensure scalability, fault tolerance, and speed. Geospatial Data Integration: Develop systems that integrate geospatial data from a variety of sources (e.g., satellite imagery, remote … data-driven applications. Familiarity with geospatial data formats (e.g., GeoJSON, Shapefiles, KML) and tools (e.g., PostGIS, GDAL, GeoServer). Technical Skills: Expertise in big data frameworks and technologies (e.g., Hadoop, Spark, Kafka, Flink) for processing large datasets. Proficiency in programming languages such as Python, Java, or Scala, with a focus on big data frameworks and APIs. Experience with cloud …
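The geospatial listing above centres on formats like GeoJSON alongside Spark and Kafka. As a minimal illustration of what GeoJSON handling involves at its simplest, here is a standard-library sketch that filters Point features by bounding box; the file name, property keys, and coordinates are hypothetical, and a production pipeline would more likely reach for the PostGIS or GDAL tooling the posting names.

```python
import json

# Hypothetical file name -- adjust to your dataset.
GEOJSON_PATH = "sensors.geojson"

def features_in_bbox(path, min_lon, min_lat, max_lon, max_lat):
    """Yield GeoJSON Point features whose coordinates fall inside a bounding box."""
    with open(path) as f:
        collection = json.load(f)
    for feature in collection.get("features", []):
        geom = feature.get("geometry") or {}
        if geom.get("type") != "Point":
            continue  # keep the sketch simple: points only
        lon, lat = geom["coordinates"][:2]  # GeoJSON orders [longitude, latitude]
        if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat:
            yield feature

if __name__ == "__main__":
    # Illustrative bounding box roughly around Bristol.
    for feat in features_in_bbox(GEOJSON_PATH, -2.7, 51.4, -2.5, 51.5):
        print(feat["properties"])
```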
engineering, architecture, or platform management roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level architecture design and implementation. Hands-on knowledge …
of the following: Python, SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD, and …
Statistics, Maths or similar Science or Engineering discipline Strong Python and other programming skills (Java and/or Scala desirable) Strong SQL background Some exposure to big data technologies (Hadoop, Spark, Presto, etc.) NICE TO HAVES OR EXCITED TO LEARN: Some experience designing, building and maintaining SQL databases (and/or NoSQL) Some experience with designing efficient physical data …
Azure Functions, Azure SQL Database, HDInsight, and Azure Machine Learning Studio. Data Storage & Databases: SQL & NoSQL Databases: Experience with databases like PostgreSQL, MySQL, MongoDB, and Cassandra. Big Data Ecosystems: Hadoop, Spark, Hive, and HBase. Data Integration & ETL: Data Pipelining Tools: Apache NiFi, Apache Kafka, and Apache Flink. ETL Tools: AWS Glue, Azure Data Factory, Talend, and Apache Airflow. AI …
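Several of the listings above name Apache Airflow for ETL orchestration. A minimal sketch of what an Airflow DAG looks like, assuming Airflow 2.4+ (the `schedule` argument); the DAG id, schedule, and task bodies are entirely illustrative, not any employer's actual pipeline:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source system.
    print("extracting")

def transform():
    # Placeholder: clean and reshape the extracted records.
    print("transforming")

def load():
    # Placeholder: write the transformed records to a warehouse.
    print("loading")

with DAG(
    dag_id="example_etl",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # run extract, then transform, then load
```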
Experience of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, Datastage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross and multi-platform experience. Team building and leading. You must be: Willing to work on client sites, potentially for extended periods. Willing to …
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
ADLIB Recruitment
Clear communicator, able to translate complex data concepts to cross-functional teams Bonus points for experience with: DevOps tools like Docker, Kubernetes, CI/CD Big data tools (Spark, Hadoop), ETL workflows, or high-throughput data streams Genomic data formats and tools Cold and hot storage management, ZFS/RAID systems, or tape storage AI/LLM tools to …
profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines. As a Data Engineer in Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working …
Docker Experience with NLP and/or computer vision Exposure to cloud technologies (e.g., AWS and Azure) Exposure to big data technologies Exposure to Apache products, e.g., Hive, Spark, Hadoop, NiFi Programming experience in other languages This is not an exhaustive list, and we are keen to hear from you even if you don't tick every box. The …
Location: Worcester Duration: 6-month initial contract Rate: (Outside IR35) Security: Active DV clearance required Role details: We are looking for 3 x Data Engineers to join our defence & security client on a contract basis. You will help design, develop, and …
South West London, London, United Kingdom Hybrid / WFH Options
TALENT INTERNATIONAL UK LTD
lead capacity Strong proficiency in Python for data processing and automation Deep knowledge of ETL/ELT frameworks and best practices Hands-on experience with Big Data tools (e.g. Hadoop, Spark, Kafka, Hive) Familiarity with cloud data platforms (e.g. AWS, Azure, GCP) Strong understanding of data architecture, pipelines, warehousing, and performance tuning Excellent communication and stakeholder engagement skills Desirables …
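Kafka appears across most of these postings as the streaming backbone. A small producer sketch using the third-party kafka-python client; the broker address, topic name, and event shape are assumptions for illustration, not anything a specific employer here uses:

```python
import json

from kafka import KafkaProducer  # pip install kafka-python

# Hypothetical broker address and topic name.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"sensor_id": 42, "reading": 17.3}
producer.send("sensor-readings", value=event)  # asynchronous send
producer.flush()  # block until buffered records are delivered
```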
in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google …
in a similar role. - 3+ years of experience with data modeling, data warehousing, ETL/ELT pipelines and BI tools. - Experience with cloud-based big data technology stacks (e.g., Hadoop, Spark, Redshift, S3, EMR, SageMaker, DynamoDB etc.) - Knowledge of data management and data storage principles. - Expert-level proficiency in writing and optimizing SQL. - Ability to write code in Python …
practices across the development life cycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience building data pipelines or automated ETL processes - Experience writing and optimizing SQL queries with large-scale, complex datasets Amazon is an equal opportunity employer and …
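Roles like the Amazon one above combine Spark-based pipelines with SQL over large datasets. A hedged PySpark sketch of a daily aggregation job; the S3 paths and column names are made up for illustration, and a real EMR job would add cluster configuration and error handling:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Hypothetical input path and columns (order_ts, amount, customer_id).
orders = spark.read.parquet("s3://example-bucket/orders/")

# Roll raw orders up to one row per calendar day.
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("customers"),
    )
)

# Partitioned output keeps downstream date-range scans cheap.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/daily_revenue/"
)
```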
flow diagrams, and process documentation. MINIMUM QUALIFICATIONS/SKILLS Proficiency in Python and SQL. Experience with cloud platforms like AWS, GCP, or Azure, and big data technologies such as Hadoop or Spark. Experience working with relational and NoSQL databases. Strong knowledge of data structures, data modeling, and database schema design. Experience supporting data science workloads with structured and unstructured …
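The schema-design requirement above can be illustrated with a tiny relational model. A self-contained sketch using Python's built-in sqlite3, where the tables, keys, and index are illustrative choices rather than a prescribed design:

```python
import sqlite3

# In-memory database; table and column names are examples only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT UNIQUE
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    amount      REAL NOT NULL,
    created_at  TEXT DEFAULT CURRENT_TIMESTAMP
);
-- Index the foreign key so per-customer lookups stay fast.
CREATE INDEX idx_orders_customer ON orders(customer_id);
""")
conn.execute(
    "INSERT INTO customer (name, email) VALUES (?, ?)",
    ("Ada", "ada@example.com"),
)
print(conn.execute("SELECT * FROM customer").fetchall())
```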
None. Preferred education: Bachelor's Degree. Required technical and professional expertise: Design, develop, and maintain Java-based applications for processing and analyzing large datasets, utilizing frameworks such as Apache Hadoop, Spark, and Kafka. Collaborate with cross-functional teams to define, design, and ship data-intensive features and services. Optimize existing data processing pipelines for efficiency, scalability, and reliability. Develop … s degree in Computer Science, Information Technology, or a related field, or equivalent experience. Experience in Big Data Java development. In-depth knowledge of Big Data frameworks, such as Hadoop, Spark, and Kafka, with a strong emphasis on Java development. Proficiency in data modeling, ETL processes, and data warehousing concepts. Experience with data processing languages like Scala, Python, or …
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Actica Consulting Limited
build scalable data infrastructure, develop machine learning models, and create robust solutions that enhance public service delivery. Working in classified environments, you'll tackle complex challenges using tools like Hadoop, Spark, and modern visualisation frameworks while implementing automation that drives government efficiency. You'll collaborate with stakeholders to transform legacy systems, implement data governance frameworks, and ensure solutions meet … R; Collaborative, team-based development; Cloud analytics platforms e.g. relevant AWS and Azure platform services; Data tools hands-on experience with Palantir ESSENTIAL; Data science approaches and tooling e.g. Hadoop, Spark; Data engineering approaches; Database management, e.g. MySQL, Postgres; Software development methods and techniques e.g. Agile methods such as Scrum; Software change management, notably familiarity with git; Public sector …
East Horsley, Surrey, United Kingdom Hybrid / WFH Options
Actica Consulting Limited
build scalable data infrastructure, develop machine learning models, and create robust solutions that enhance public service delivery. Working in classified environments, you'll tackle complex challenges using tools like Hadoop, Spark, and modern visualisation frameworks while implementing automation that drives government efficiency. You'll collaborate with stakeholders to transform legacy systems, implement data governance frameworks, and ensure solutions meet … R; Collaborative, team-based development; Cloud analytics platforms e.g. relevant AWS and Azure platform services; Data tools hands-on experience with Palantir ESSENTIAL; Data science approaches and tooling e.g. Hadoop, Spark; Data engineering approaches; Database management, e.g. MySQL, Postgres; Software development methods and techniques e.g. Agile methods such as Scrum; Software change management, notably familiarity with git; Public sector …
implementing data governance, security standards, and compliance practices. Strong understanding of metadata management, data lineage, and data quality frameworks. Preferred Skills & Knowledge: Familiarity with big data technologies such as Hadoop, Spark, or Kafka Excellent communication skills with the ability to explain complex data strategies to non-technical stakeholders. Outstanding problem-solving abilities and organizational skills. Certifications (Preferred/Desirable …
is important) Latest Data Science platforms (e.g., Databricks, Dataiku, AzureML, SageMaker) and frameworks (e.g., TensorFlow, MXNet, scikit-learn) Software engineering practices (coding standards, unit testing, version control, code review) Hadoop distributions (Cloudera, Hortonworks), NoSQL databases (Neo4j, Elastic), streaming technologies (Spark Streaming) Data manipulation and wrangling techniques Development and deployment technologies (virtualisation, CI tools like Jenkins, configuration management with Ansible …
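Postings like this one pair data-science frameworks such as scikit-learn with software-engineering practice. A compact scikit-learn sketch of a train/evaluate workflow on a bundled dataset; the model choice, preprocessing, and split are arbitrary examples rather than a recommended setup:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A pipeline bundles preprocessing and the model so both are fit together
# and versioned as one artifact.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```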
e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience with any ETL tool like Informatica, ODI, SSIS, BODI, Datastage, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If …
application Deep understanding of software architecture, object-oriented design principles, and data structures Extensive experience in developing microservices using Java, Python Experience in distributed computing frameworks like Hive/Hadoop and Apache Spark Good experience in test-driven development and automating test cases using Java/Python Experience in SQL/NoSQL (Oracle, Cassandra) database design Demonstrated ability to be …
Derby, Derbyshire, United Kingdom Hybrid / WFH Options
Cooper Parry
and algorithms such as clustering, regression, classification, forecasting, neural networks, hyperparameter tuning, NLP, and utilising LLMs. Proficiency in programming languages such as Python, R, SAS, SQL, Java, Spark, Apache Hadoop Experience across the Microsoft Fabric data analytics platform and suite of tools. Strong analytical and problem-solving skills We are looking for someone based in and around the East …