South East London, England, United Kingdom (Hybrid / WFH options)
Anson McCade
Knowledge of CI/CD processes and infrastructure-as-code. • Eligible for SC clearance (active clearance highly advantageous). Desirable Skills: • Exposure to large-scale data processing frameworks (e.g., Spark, Hadoop). • Experience deploying data via APIs and microservices. • AWS certifications (Solutions Architect Associate, Data Analytics Specialty, etc.). • Experience in public sector programmes or government frameworks. Package & Benefits …
such as Java, TypeScript, Python, and Go • Web libraries and frameworks such as React and Angular • Designing, building, and maintaining CI/CD pipelines • Big data technologies like NiFi, Hadoop, Spark • Cloud and containerization technologies such as AWS, OpenShift, Kubernetes, Docker • DevOps methodologies, including infrastructure as code and GitOps • Database technologies, e.g., relational databases, Elasticsearch, MongoDB • Why join Gemba …
non-traditional background, or alternative experiences, your application is welcome. Minimum qualifications: • 10+ years of technical specialist, design, and architecture experience • 10+ years of experience with databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) • 10+ years of consulting, design, and implementation of serverless distributed solutions • Australian citizen with the ability to obtain security clearance. Preferred qualifications: • AWS Professional-level certification …
frameworks (MXNet, Caffe2, TensorFlow, Theano, CNTK, Keras) and ML tools (SparkML, AML). 7+ years in IT platform implementation, consulting, and distributed solutions design. Experience with databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis), cloud solutions (AWS or equivalent), systems, networks, and operating systems. If you need accommodations during the application process, please visit this link.
expertise and technical acumen to ensure successful delivery of complex data projects on time and within budget. Key Responsibilities: Project Management: Lead and manage legacy data platform migration (Teradata, Hadoop), data lake build, and data analytics projects from initiation to completion. Develop comprehensive project plans, including scope, timelines, resource allocation, and budgets. Monitor project progress, identify risks, and implement …
to implement them through libraries. Experience with programming, ideally Python, and the ability to quickly pick up handling large data volumes with modern data processing tools (e.g., Hadoop, Spark, SQL). Experience with, or the ability to quickly learn, open-source software including machine learning packages such as Pandas and scikit-learn, along with data visualization technologies. Experience …
with some of the brightest technical minds in the industry today. BASIC QUALIFICATIONS - 10+ years of technical specialist, design and architecture experience - 10+ years of database (e.g. SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience - 10+ years of consulting, design and implementation of serverless distributed solutions experience - Australian citizen with ability to obtain security clearance. PREFERRED QUALIFICATIONS - AWS Professional-level …
Azure Functions. Strong knowledge of scripting languages (e.g., Python, Bash, PowerShell) for automation and data transformation. Proficient in working with databases, data warehouses, and data lakes (e.g., SQL, NoSQL, Hadoop, Redshift). Familiarity with APIs and web services for integrating external systems and applications into orchestration workflows. Hands-on experience with data transformation and ETL (Extract, Transform, Load) processes.
native tech stack in designing and building data & AI solutions • Experience with data modeling, ETL processes, and data warehousing • Knowledge of big data tools and frameworks such as Spark, Hadoop, or Kafka …
Growth Revenue Management, Marketing Analytics, CLM/CRM Analytics and/or Risk Analytics. Conduct analyses in typical analytical tools ranging from SAS, SPSS, EViews, R, Python, SQL, Teradata, Hadoop, Access, Excel, etc. Communicate analyses via compelling presentations. Solve problems, disaggregate issues, develop hypotheses, and derive actionable recommendations from data and analysis. Prepare and facilitate workshops. Manage stakeholders and … An ability to think analytically, decompose problem sets, and develop hypotheses and recommendations from data analysis. Strong technical skills in data analysis, statistics, and programming. Strong working knowledge of Python, Hadoop, SQL, and/or R. Working knowledge of Python data tools (e.g. Jupyter, Pandas, scikit-learn, Matplotlib). Ability to talk the language of statistics, finance, and economics a …
Data Scientist - skills in statistics, physics, mathematics, computer science, engineering, data mining, and big data (Hadoop, Hive, MapReduce). This is an exceptional opportunity to work as a Data Scientist within a global analytics team, utilizing various big data technologies to develop complex behavioral models, analyze customer uptake of products, and foster new product innovation. Responsibilities include: Generating and reviewing large …
real-time intelligence, BI, Copilot), Azure Databricks, Purview Data Governance, and Azure Databases: SQL DB, Cosmos DB, PostgreSQL. Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop & BI solutions. Represent Microsoft through thought leadership in cloud Database & Analytics communities and customer forums. Qualifications: Proven technical pre-sales or technical consulting experience OR Bachelor's Degree in …
familiarity with DevOps tools and concepts – e.g. Kubernetes, Git-based CI/CD, cloud infrastructure (AWS/GCP/Azure). Bonus: Exposure to tools like Elasticsearch/Kibana, Hadoop/HBase, OpenSearch, or VPN/proxy architectures. Strong grasp of software security principles, system performance optimisation, and infrastructure reliability. Experience working on large-scale, production-grade systems with …
security principles, system performance optimisation, and infrastructure reliability. Experience working on large-scale, production-grade systems with distributed architectures. Nice to Have: Exposure to tools like Elasticsearch/Kibana, Hadoop/HBase, OpenSearch, or VPN/proxy architectures. The ideal candidate will: Bring technical vision, initiative, and a passion for exploring and implementing emerging technologies. Be a natural technical leader …
mathematics or equivalent quantitative field - Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. - Experience with large-scale distributed systems such as Hadoop, Spark, etc. - Experience with deep learning for search and recommendation systems - Experience with NLP and LLM algorithms and tools a plus - Experience performing and interpreting A/B experiments …
or related language PREFERRED QUALIFICATIONS - Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. - Experience with large-scale distributed systems such as Hadoop, Spark, etc. Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Los Angeles County applicants …
protection directions. At least one successful cross-system, end-to-end data protection implementation. Familiarity with at least one language such as Java, Go, or Python, and knowledge of the Hadoop/Spark ecosystem in the big data domain. Preference for candidates with multinational corporate experience in data security and over 3 years of experience in international data security, encryption …
in around here. Tech Stack: • Microservices • Container platforms (OpenShift, Kubernetes, CRC, Docker) • NoSQL DBs (Cassandra, MongoDB, HBase, ZooKeeper, ArangoDB) • Serialization libraries (Thrift, Protocol Buffers) • Large-scale data processing (Hadoop, Kafka) • Dependency injection frameworks (Guice, Spring) • Text search engines (Lucene, Elasticsearch) • Splunk/Elastic • CI/CD • Build tools: Maven, Git, Jenkins • Frameworks: Vert.x, Spring Boot • Real-time communication … data volume distributed systems • 3rd-generation messaging systems • Backends for mobile messaging systems • SIP or XMPP • Soft real-time systems • Experience doing performance tuning • Big data technologies, such as Hadoop, Kafka, and Cassandra, to build applications that contain petabytes of data and process millions of transactions per day • Cloud computing, virtualization and containerization • Continuous integration systems • Deployment technology such …