cloud, preferably AWS, is preferred. Extensive background in software architecture and solution design, with deep expertise in microservices, distributed systems, and cloud-native architectures. Advanced proficiency in Python and Apache Spark, with a strong focus on ETL data processing and scalable data engineering workflows. In-depth technical knowledge of AWS data services, with hands-on experience implementing data …
Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that …
Strong track record delivering production-grade ML models Solid grasp of MLOps best practices Confident speaking to technical and non-technical stakeholders 🛠️ Tech you’ll be using: Python, SQL, Spark, R MLflow, vector databases GitHub/GitLab/Azure DevOps Jira, Confluence 🎓 Bonus points for: MSc/PhD in ML or AI Databricks ML Engineer (Professional) certified …
or AI, including leadership roles. Deep expertise in machine learning, NLP, and predictive modelling. Proficient in Python or R, cloud platforms (AWS, GCP, Azure), and big data tools (e.g. Spark). Strong business acumen, communication skills, and stakeholder engagement. If this role is of interest, please apply here. Please note: this role cannot offer visa sponsorship.
science ecosystem (e.g., pandas, scikit-learn, TensorFlow/PyTorch), statistical methods and machine learning (e.g., A/B testing, model validation), data pipelining tools like SQL, dbt, BigQuery, or Spark. A strong communicator, able to translate technical concepts into layman's terms for a non-technical audience. You're not afraid to challenge the status quo if …
variety of technical and executive audiences, both written and verbal. Preferred (but not required) to have: hands-on experience with Python; experience working with modern data technology (e.g. dbt, Spark, containers, DevOps tooling, orchestration tools, git, etc.); experience with data science and machine learning technology. People want to buy from people who understand them. Our Solutions Engineers build connections …
hold or be willing to gain a UK Security Clearance. Preferred (but not required) to have: hands-on experience with Python; experience working with modern data technology (e.g. dbt, Spark, containers, DevOps tooling, orchestration tools, git, etc.); experience with AI, data science and machine learning technologies. People want to buy from people who understand them. Our Solution Engineers build …
current cyber security threats, actors and their techniques Experience with data science, big data analytics technology stack, analytic development for endpoint and network security, and streaming technologies (e.g., Kafka, Spark Streaming, and Kinesis) Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting decisions based on your …
deploying models in production and adjusting model thresholds to improve performance Experience designing, running, and analyzing complex experiments or leveraging causal inference designs Experience with distributed tools such as Spark, Hadoop, etc. A PhD or MS in a quantitative field (e.g., Statistics, Engineering, Mathematics, Economics, Quantitative Finance, Sciences, Operations Research) Office-assigned Stripes spend at least 50% of the …
customer-facing, complex and large-scale project management experience - 5+ years of continuous integration and continuous delivery (CI/CD) experience - 5+ years of database (e.g. SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience - 5+ years of consulting, design and implementation of serverless distributed solutions experience - 3+ years of cloud-based solution (AWS or equivalent), system, network and operating system …
Relevant experience in delivery of AI design, build, deployment or management Proficiency or certification in Microsoft Office tools, as well as relevant technologies such as Python, TensorFlow, Jupyter Notebook, Spark, Azure Cloud, Git, Docker and/or any other relevant technologies Strong analytical and problem-solving skills, with the ability to work on complex projects and deliver actionable insights …
current cyber security threats, actors and their techniques. Experience with data science, big data analytics technology stack, analytic development for endpoint and network security, and streaming technologies (e.g., Kafka, Spark Streaming, and Kinesis). Strong sense of ownership combined with collaborative approach to overcoming challenges and influencing organizational change. Amazon is an equal opportunities employer. We believe passionately that …
mindset with ability to think strategically about business, product, and technical challenges in an enterprise environment - Extensive hands-on experience with data platform technologies, including at least three of: Spark, Hadoop ecosystem, orchestration frameworks, MPP databases, NoSQL, streaming technologies, data catalogs, BI and visualization tools - Proficiency in at least one programming language (e.g., Python, Java, Scala), infrastructure as code …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
WorksHub
that help us achieve our objectives. So each team leverages the technology that fits their needs best. You'll see us working with data processing/streaming like Kinesis, Spark and Flink; application technologies like PostgreSQL, Redis & DynamoDB; and breaking things using in-house chaos principles and tools such as Gatling to drive load all deployed and hosted on …
the latest tech, serious brain power, and deep knowledge of just about every industry. We believe a mix of data, analytics, automation, and responsible AI can do almost anything: spark digital metamorphoses, widen the range of what humans can do, and breathe life into smart products and services. Want to join our crew of sharp analytical minds? You'll …
priorities aimed at maximizing value through data utilization. Knowledge/Experience Expertise in Commercial/Procurement Analytics. Experience in SAP (S/4 Hana). Experience with Spark, Databricks, or similar data processing tools. Strong technical proficiency in data modelling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery). Experience with DataOps practices and tools, including CI/CD for data pipelines. …
listed below. AI techniques (supervised and unsupervised machine learning, deep learning, graph data analytics, statistical analysis, time series, geospatial analysis, NLP, sentiment analysis, pattern detection, etc.) Python, R, or Spark for data insights Databricks/Dataiku, SQL for data access and processing (PostgreSQL preferred, but general SQL knowledge is important) Latest Data Science platforms (e.g., Databricks, Dataiku, AzureML, SageMaker) and frameworks (e.g., TensorFlow, MXNet, scikit-learn) Software engineering practices (coding standards, unit testing, version control, code review) Hadoop distributions (Cloudera, Hortonworks), NoSQL databases (Neo4j, Elastic), streaming technologies (Spark Streaming) Data manipulation and wrangling techniques Development and deployment technologies (virtualisation, CI tools like Jenkins, configuration management with Ansible, containerisation with Docker, Kubernetes) Data visualization skills (JavaScript preferred) Experience …
researching new technologies and software versions Working with cloud technologies and different operating systems Working closely alongside Data Engineers and DevOps engineers Working with big data technologies such as Spark Demonstrating stakeholder engagement by communicating with the wider team to understand the functional and non-functional requirements of the data and the product in development and its relationship to … networks into production Experience with Docker Experience with NLP and/or computer vision Exposure to cloud technologies (e.g. AWS and Azure) Exposure to big data technologies Exposure to Apache products, e.g. Hive, Spark, Hadoop, NiFi Programming experience in other languages This is not an exhaustive list, and we are keen to hear from you even if you …
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
options Hybrid working - 1 day a week in a central London office High-growth scale-up with a strong mission and serious funding Modern tech stack: Python, SQL, Snowflake, Apache Iceberg, AWS, Airflow, dbt, Spark Work cross-functionally with engineering, product, analytics, and data science leaders What You'll Be Doing Lead, mentor, and grow a high-impact …