Job Title: AWS Data Engineer
Contract: 6 Months
Location: Remote, UK
Skills: Advanced knowledge of data management tools including SQL/DBMS, MongoDB, Hadoop and/or other big data technologies. Advanced programming skills in Java, Python, R, C++, C#, etc. Knowledge of statistical and data mining techniques (regression, decision trees, clustering, neural networks, etc.). Experience with data …
Agile working practices; CI/CD tooling; Scripting experience (Python, Perl, Bash, etc.); ELK (Elastic stack); JavaScript; Cypress; Linux experience; Search engine technology (e.g., Elasticsearch); Big Data Technology experience (Hadoop, Spark, Kafka, etc.); Microservice and cloud native architecture.
Desirable skills: Able to demonstrate experience of troubleshooting and diagnosis of technical issues. Able to demonstrate excellent team-working skills. Strong …
SonarQube; Familiarity with Docker for containerization; Experience with Microsoft-based infrastructures (not open-source).
Desirable Skills: Background in financial services, particularly Market Risk; Knowledge of Chef, Ansible, SQL, and Hadoop ecosystems (HDFS, Hive, Impala); Understanding of observability tools (Elasticsearch, Filebeat, Kibana); Familiarity with network protocols and certificate/SSH key management …
in software development with at least 2 server-side languages - Java being a must-have. Proven experience with microservices architecture and scalable, distributed systems. Proficient in data technologies like MySQL, Hadoop, or Cassandra. Experience with batch processing, data pipelines, and data integrity practices. Familiarity with AWS services (e.g., RDS, Step Functions, EC2, Kinesis) is a plus. Solid understanding of …
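For context on the batch processing and AWS skills this listing asks for, here is a minimal, hypothetical sketch of writing a small batch of records to a Kinesis data stream with boto3; the stream name, region, and record shape are placeholder assumptions, not details from the advert.

# Minimal sketch: push a small batch of records to an AWS Kinesis data stream with boto3.
# The stream name, region and record shape are hypothetical placeholders.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-2")

records = [
    {"order_id": i, "amount": round(10.5 * i, 2)}
    for i in range(1, 6)
]

response = kinesis.put_records(
    StreamName="orders-stream",  # hypothetical stream
    Records=[
        {
            "Data": json.dumps(r).encode("utf-8"),
            "PartitionKey": str(r["order_id"]),
        }
        for r in records
    ],
)

# put_records reports partial failures per record; a real pipeline would retry these,
# which is one of the data integrity practices the role mentions.
print("Failed records:", response["FailedRecordCount"])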
optimize PySpark and SQL queries to analyze, reconcile, and interrogate large datasets. Recommend improvements to reporting processes, data quality, and query performance. Contribute to the architecture and design of Hadoop environments. Translate architecture and requirements into scalable, production-ready code. Provide technical leadership and direction on complex, high-impact projects. Act as a subject matter expert (SME) to senior … to Hive, Impala, and Spark ecosystem technologies (e.g., HDFS, Apache Spark, Spark-SQL, UDF, Sqoop). Experience building and optimizing Big Data pipelines, architectures, and data sets. Familiarity with Hadoop and Big Data ecosystems. Strong knowledge of Data Warehouse and ETL design and development methodologies. Ability to work under pressure and adapt to changing requirements. Excellent verbal and written …
Belfast, County Antrim, Northern Ireland, United Kingdom
Hays
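As an illustration of the "analyze, reconcile, and interrogate large datasets" responsibility in the listing above, here is a minimal PySpark reconciliation sketch; the Hive table names, the trade_id key, and the notional column are hypothetical assumptions, not details from the advert.

# Minimal sketch: reconcile two hypothetical datasets (a source extract and a reporting table)
# in PySpark and summarise the mismatches.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("reconciliation").enableHiveSupport().getOrCreate()

source = spark.table("staging.trades_source")        # hypothetical Hive tables
reported = spark.table("reporting.trades_reported")

# Full outer join on the business key, then flag rows missing on either side
# or where the notional amounts disagree.
recon = (
    source.alias("s")
    .join(reported.alias("r"), on="trade_id", how="full_outer")
    .withColumn(
        "status",
        F.when(F.col("s.notional").isNull(), F.lit("missing_in_source"))
         .when(F.col("r.notional").isNull(), F.lit("missing_in_reporting"))
         .when(F.col("s.notional") != F.col("r.notional"), F.lit("notional_mismatch"))
         .otherwise(F.lit("matched")),
    )
)

# One row per reconciliation status with its count, suitable for a daily report.
recon.groupBy("status").count().show()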
based insights, collaborating closely with stakeholders. Passionately discover hidden solutions in large datasets to enhance business outcomes. Design, develop, and maintain data processing pipelines using Cloudera technologies, including Apache Hadoop, Apache Spark, Apache Hive, and Python. Collaborate with data engineers and scientists to translate data requirements into technical specifications. Develop and maintain frameworks for efficient data extraction, transformation, and … and verbal communication skills for effective team collaboration. Eagerness to learn and master new technologies and techniques. Experience with AutoSys is preferred. Experience with distributed data/computing tools: Hadoop, Hive, MySQL, etc. If you're a passionate Cloudera Developer eager to make a difference in the banking industry, we want to hear from you! Apply now to join …
Belfast, County Antrim, Northern Ireland, United Kingdom
McGregor Boyall
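To illustrate the kind of Cloudera-stack pipeline step the listing above describes (Spark reading from and writing to Hive tables in Python), here is a minimal sketch; the database, table, and column names are hypothetical assumptions.

# Minimal sketch: a Spark-on-Hive transformation of the kind a Cloudera pipeline might run.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily_positions_load")
    .enableHiveSupport()          # lets Spark read and write Hive tables
    .getOrCreate()
)

raw = spark.table("landing.positions_raw")   # hypothetical landing table

# Standardise types, drop obviously bad rows, and aggregate to one row per book and day.
daily = (
    raw.withColumn("business_date", F.to_date("business_date", "yyyy-MM-dd"))
       .filter(F.col("quantity").isNotNull())
       .groupBy("business_date", "book_id")
       .agg(F.sum("quantity").alias("total_quantity"))
)

# Overwrite the curated table used by downstream Hive/Impala reporting.
daily.write.mode("overwrite").format("parquet").saveAsTable("curated.positions_daily")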
work across regulatory and transformation initiatives that span multiple trading desks, functions, and stakeholders. You'll build PySpark and SQL queries to interrogate, reconcile and analyse data, contribute to Hadoop data architecture discussions, and help improve reporting processes and data quality. You'll be hands-on across technical delivery, documentation, testing, and stakeholder engagement. It's a technically rich … high-impact project work at one of the world's most complex financial institutions. Key Skills: Strong hands-on experience with SQL, Python, Spark; Background in Big Data/Hadoop environments; Solid understanding of ETL/Data Warehousing concepts; Strong communicator, with the ability to explain technical concepts to senior stakeholders. Details: Location: Belfast - 3 days/week onsite …
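A minimal sketch of the sort of data-quality interrogation mentioned above (null and duplicate checks in PySpark); the table name and the trade_id key are assumptions for illustration only.

# Minimal sketch: simple data-quality checks on a hypothetical reporting table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").enableHiveSupport().getOrCreate()

df = spark.table("reporting.regulatory_feed")   # hypothetical table

total = df.count()

# Null counts per column, computed in a single pass over the data.
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
)
null_counts.show()

# Duplicate business keys, assuming trade_id should be unique.
dupes = df.groupBy("trade_id").count().filter(F.col("count") > 1)
print(f"rows={total}, duplicate trade_ids={dupes.count()}")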
nature of the work, you must hold enhanced DV Clearance. WE NEED THE DATA ENGINEER TO HAVE: Current enhanced DV Security Clearance; Experience with big data tools such as Hadoop, Cloudera or Elasticsearch; Experience with Palantir Foundry; Experience working in an Agile Scrum environment with tools such as Confluence/Jira; Experience in design, development, test and integration of … /DEVELOPED VETTING/DEVELOPED VETTED/DEEP VETTING/DEEP VETTED/SC CLEARED/SC CLEARANCE/SECURITY CLEARED/SECURITY CLEARANCE/NIFI/CLOUDERA/HADOOP/KAFKA/ELASTIC SEARCH/LEAD BIG DATA ENGINEER/LEAD BIG DATA DEVELOPER …
/MOD or Enhanced DV Clearance. WE NEED THE PYTHON/DATA ENGINEER TO HAVE: Current DV Security Clearance (Standard or Enhanced); Experience with big data tools such as Hadoop, Cloudera or Elasticsearch; Python/PySpark experience; Experience with Palantir Foundry is nice to have; Experience working in an Agile Scrum environment with tools such as Confluence/Jira … /DEVELOPED VETTING/DEVELOPED VETTED/DEEP VETTING/DEEP VETTED/SC CLEARED/SC CLEARANCE/SECURITY CLEARED/SECURITY CLEARANCE/NIFI/CLOUDERA/HADOOP/KAFKA/ELASTIC SEARCH/LEAD BIG DATA ENGINEER/LEAD BIG DATA DEVELOPER …
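Since the clearance-required roles above list Elasticsearch alongside Python, here is a minimal sketch of querying an index from Python, assuming the elasticsearch 8.x client, a local placeholder endpoint, and a hypothetical index name.

# Minimal sketch: run a match query against a hypothetical Elasticsearch index.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

resp = es.search(
    index="ingest-events",                  # hypothetical index
    query={"match": {"message": "ingest failure"}},
    size=5,
)

# Print the id and message of each matching document.
for hit in resp["hits"]["hits"]:
    print(hit["_id"], hit["_source"].get("message"))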