and machine learning PREFERRED QUALIFICATIONS - Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. - Experience with large-scale distributed systems such as Hadoop, Spark, etc. Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Los Angeles County applicants More ❯
Experience as a Data Site Reliability Engineer or similar role, focusing on data infrastructure management Proficiency in data technologies, such as relational databases, data warehousing, big data platforms (e.g., Hadoop, Spark), data streaming (e.g., Kafka), and cloud services (e.g., AWS, GCP, Azure) Programming skills in Python, Java, or Scala, with automation and scripting experience Experience with containerization and orchestration More ❯
is open to all applicants, regardless of age. What you'll need to succeed Familiarity with UNIX, knowledge of CI toolsets, familiarity with SQL, Oracle DB, Postgres, ActiveMQ, Zabbix, Ambari, Hadoop, Jira, Confluence, Bitbucket, ActiviBPM, Oracle SOA, Azure, SQL Server, Jenkins, Puppet and other cloud technologies. If successful, you will undergo BPSS clearance and must be eligible for SC Clearance. More ❯
Employment Type: Contract
Rate: £300 - £315 per day (inside IR35)
/mathematical software (e.g. R, SAS, or MATLAB) - Experience with statistical models, e.g. multinomial logistic regression - Experience in data applications using large-scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive) - Experience working with data engineers and business intelligence engineers collaboratively - Demonstrated expertise in a wide range of ML techniques PREFERRED QUALIFICATIONS - Experience as a leader and More ❯
with some of the brightest technical minds in the industry today. BASIC QUALIFICATIONS - 10+ years of technical specialist, design and architecture experience - 10+ years of database (e.g. SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience - 10+ years of consulting, design and implementation of serverless distributed solutions experience - Australian citizen with ability to obtain security clearance. PREFERRED QUALIFICATIONS - AWS Professional level More ❯
internal customer facing, complex and large-scale project management experience 5+ years of continuous integration and continuous delivery (CI/CD) experience 5+ years of database (e.g. SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience 5+ years of software development with object-oriented language experience 3+ years of cloud-based solution (AWS or equivalent), system, network and operating system experience More ❯
mathematics or equivalent quantitative field - Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. - Experience with large-scale distributed systems such as Hadoop, Spark, etc. - Excellent technical publications and material contributions to the CV/ML/AI field as related to image and video processing Our inclusive culture empowers Amazonians to More ❯
modern data architectures, Lambda-type architectures - Proficiency in writing and optimizing SQL - Knowledge of AWS services including S3, Redshift, EMR, Kinesis and RDS. - Experience with Open Source Data Technologies (Hadoop, Hive, HBase, Pig, Spark, etc.) - Ability to write code in Python, Ruby, Scala or other platform-related big data technology - Knowledge of professional software engineering practices & best practices for More ❯
and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines. As a Data Engineer in Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working More ❯
e.g., Python, KornShell) - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience with an ETL tool like Informatica, ODI, SSIS, BODI, DataStage, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If More ❯
years of relevant work experience in solving real world business problems using machine learning, deep learning, data mining and statistical algorithms - Strong hands-on programming skills in Python, SQL, Hadoop/Hive. Additional knowledge of Spark, Scala, R, Java desired but not mandatory - Strong analytical thinking - Ability to creatively solve business problems, innovating new approaches where required and articulating More ❯
East London, London, United Kingdom Hybrid / WFH Options
McGregor Boyall Associates Limited
Strong knowledge of LLM algorithms and training techniques. Experience deploying models in production environments. Nice to Have: Experience in GenAI/LLMs Familiarity with distributed computing tools (Hadoop, Hive, Spark). Background in banking, risk management, or capital markets. Why Join? This is a unique opportunity to work at the forefront of AI innovation in financial More ❯
cloud services (AWS, Azure, GCP), especially around secure deployment Nice-to-Have: Background in government, defence, or highly regulated sectors Exposure to big data tools like Kafka, Spark, or Hadoop Understanding of containerisation and orchestration (e.g. Docker, Kubernetes) Familiarity with infrastructure as code tools (e.g. Terraform, Ansible) Experience building monitoring solutions with Prometheus, Grafana, or ELK Interest in or More ❯
English is required. Preferred Skills: Experience in commodities markets or broader financial markets. Knowledge of quantitative modeling, risk management, or algorithmic trading. Familiarity with big data technologies like Kafka, Hadoop, Spark, or similar. Why Work With Us? Impactful Work: Directly influence the profitability of the business by building technology that drives trading decisions. Innovative Culture: Be part of a More ❯
with SQL and database technologies (incl. various Vector Stores and more traditional technologies e.g. MySQL, PostgreSQL, NoSQL databases). - Hands-on experience with data tools and frameworks such as Hadoop, Spark, or Kafka - advantage - Familiarity with data warehousing solutions and cloud data platforms. - Background in building applications wrapped around AI/LLM/mathematical models - Ability to scale up More ❯
Role Overview: We are seeking a highly skilled and experienced Senior Azure Data Engineer to join our team. In this role, you will lead the design, development, and maintenance of scalable data solutions in Microsoft Azure. You will work closely More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Mars
help shape our digital platforms 🧠 What we’re looking for Proven experience as a Data Engineer in cloud environments (Azure ideal) Proficiency in Python, SQL, Spark, Databricks Familiarity with Hadoop, NoSQL, Delta Lake Bonus: Azure Functions, Logic Apps, Django, CI/CD tools 💼 What you’ll get from Mars A competitive salary & bonus Hybrid working with flexibility built in More ❯
lineage, and metadata practices Fluency in database technologies (both relational and NoSQL) Experience with Linux environments and data visualisation tools (e.g. Tableau, QuickSight, Looker) Bonus points for: Familiarity with Hadoop, Spark, or MapReduce Exposure to data APIs and microservice-based architectures AWS certifications (Solutions Architect Associate, Big Data Specialty) Experience or interest in machine learning data pipelines 🚀 Why Join More ❯
familiarity with DevOps tools and concepts – e.g. Kubernetes, Git-based CI/CD, cloud infrastructure (AWS/GCP/Azure). Bonus: Exposure to tools like Elasticsearch/Kibana, Hadoop/HBase, OpenSearch, or VPN/proxy architectures. Strong grasp of software security principles, system performance optimisation, and infrastructure reliability. Experience working on large-scale, production-grade systems with More ❯
Financial Services, Manufacturing, Life Sciences and Healthcare, Technology and Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues of $13+ billion. Location - London Skill - Apache Hadoop We are looking for open-source contributors to Apache projects, who have an in-depth understanding of the code behind the Apache ecosystem, should have experience in Cloudera or … hybrid cloud environment Ability to debug & fix code in the open source Apache code and should be an individual contributor to open source projects. Job description: The Apache Hadoop project requires up to 3 individuals with experience in designing and building platforms, and supporting applications both in cloud environments and on-premises. These resources are expected to be … to support all developers in migrating and debugging various RiskFinder critical applications. They need to be "Developers" who are experts in designing and building Big Data platforms using Apache Hadoop and support Apache Hadoop implementations both in cloud environments and on-premises. More ❯
implementing data governance, security standards, and compliance practices. Strong understanding of metadata management, data lineage, and data quality frameworks. Preferred Skills & Knowledge: Familiarity with big data technologies such as Hadoop, Spark, or Kafka Excellent communication skills with the ability to explain complex data strategies to non-technical stakeholders. Outstanding problem-solving abilities and organizational skills. Certifications (Preferred/Desirable More ❯