survey exchange platforms. Knowledge of dynamic pricing models. Experience with Databricks and using it for scalable data processing and machine learning workflows. Experience working with big data technologies (e.g., Spark, PySpark). Experience with online market research methods/products. Additional Information. Our Values: Collaboration is our superpower. We uncover rich perspectives across the world. Success happens together. We …
Agile delivery. Advanced knowledge of AWS data services (e.g. S3, Glue, EMR, Lambda, Redshift). Expertise in big data technologies and distributed systems. Strong coding and optimisation skills (e.g. Python, Spark, SQL). Data quality management and observability. Strategic thinking and solution architecture. Stakeholder and vendor management. Continuous improvement and innovation mindset. Excellent communication and mentoring abilities. Experience you'd be …
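To make the AWS-plus-Spark requirement above concrete, here is a minimal sketch of the kind of job it describes: reading raw data from S3, applying a basic quality gate, and writing partitioned Parquet back. Bucket names, paths, and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl-example").getOrCreate()

# Hypothetical raw zone; on EMR the s3:// scheme is handled by EMRFS.
raw = spark.read.csv("s3://example-raw-bucket/orders/",
                     header=True, inferSchema=True)

clean = (raw.filter(F.col("order_id").isNotNull())      # basic data-quality gate
            .withColumn("order_date", F.to_date("order_ts")))

# Partitioned Parquet in the curated zone, ready for Glue/Redshift Spectrum.
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```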
deep learning, GenAI, LLM, etc., as well as hands-on experience with AWS services like SageMaker and Bedrock, and programming skills such as Python, R, SQL, Java, Julia, Scala, Spark/NumPy/pandas/scikit-learn, JavaScript. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace …
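As an illustration of the Bedrock experience this Amazon listing asks for, below is a hedged sketch of invoking a foundation model through the Bedrock runtime with boto3. The model ID, request body format, and region are assumptions that vary by model family.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Anthropic-style request body; other model families use different shapes.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarise: ..."}],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model id
    body=body,
)

# The response body is a streaming object; read and decode it.
print(json.loads(response["body"].read()))
```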
or Google Cloud Platform (GCP). Strong proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle. Experience with big data technologies such as Hadoop, Spark, or Hive. Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow. Proficiency in Python and at least one other programming language …
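For the ETL-orchestration line above, a minimal Airflow DAG sketch of the extract-and-load pattern; the task bodies are placeholders and the schedule is an assumption (Airflow 2.4+ `schedule` argument; older versions use `schedule_interval`).

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # placeholder: pull rows from MySQL/PostgreSQL
    print("extracting")

def load():
    # placeholder: copy transformed data into Redshift/BigQuery
    print("loading")

with DAG(
    dag_id="example_daily_etl",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task     # run load only after extract succeeds
```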
and real-time streaming. Knowledge of developing and processing full and incremental loads. Experience of automated loads using Databricks Workflows and Jobs. Expertise in Azure Databricks, including Delta Lake, Spark optimizations, and MLflow. Strong experience with Azure Data Factory (ADF) for data integration and orchestration. Hands-on experience with Azure DevOps, including pipelines, repos, and infrastructure as code (IaC) … including monitoring, logging, and cost management. Knowledge of data security, compliance, and governance in Azure, including Azure Active Directory (AAD), RBAC, and encryption. Experience working with big data technologies (Spark, Python, Scala, SQL). Strong problem-solving and troubleshooting skills. Excellent communication skills with the ability to collaborate with cross-functional teams to understand requirements, data solutions, data models …
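The incremental-load requirement above usually means a Delta Lake MERGE on Databricks; a minimal sketch, assuming hypothetical table and column names:

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()   # provided on Databricks clusters

# Hypothetical incremental batch landed by ADF or a Databricks Job.
updates = spark.read.parquet("/mnt/landing/customers/latest/")

target = DeltaTable.forName(spark, "main.silver.customers")  # hypothetical table

# Upsert: update rows that match on the key, insert the rest.
(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```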
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom
IO Associates
flows to Databricks for improved traceability. Implement Unity Catalog for automated data lineage. Deliver backlog items through Agile sprint planning. Skills & Experience: Strong hands-on experience with Databricks, Fabric, Apache Spark, Delta Lake. Proficient in Python, SQL, and PySpark. Familiar with Azure Data Factory, Event Hub, Unity Catalog. Solid understanding of data governance and enterprise architecture. Effective communicator …
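Unity Catalog captures table-level lineage automatically when jobs read and write through its three-level namespace; a hedged PySpark sketch of that pattern, with hypothetical catalog, schema, and table names:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Reading and writing via catalog.schema.table lets Unity Catalog record
# lineage without any extra instrumentation.
orders = spark.read.table("prod.bronze.orders_raw")

daily = (orders.groupBy("order_date")
               .count()
               .withColumnRenamed("count", "order_count"))

daily.write.mode("overwrite").saveAsTable("prod.gold.orders_daily")
```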
CD pipelines. Familiar with observability tools, logging frameworks, and performance monitoring. Background in serverless technologies (e.g., Lambda, Step Functions, API Gateway). Experience with data tools like EMR, Glue, or Apache Spark. Understanding of event-driven architecture (EventBridge, SNS, SQS). Knowledge of AWS database offerings including DynamoDB and RDS. Familiarity with multi-region deployments and failover strategies. AWS certifications …
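A minimal sketch of the event-driven serverless pattern this listing names: a Python Lambda handler draining an SQS batch into DynamoDB. The table name and payload shape are assumptions, and the partial-batch return value applies only when ReportBatchItemFailures is enabled on the event source mapping.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-orders")   # hypothetical table name

def handler(event, context):
    # Lambda delivers SQS messages in batches under event["Records"].
    for record in event["Records"]:
        payload = json.loads(record["body"])   # assumed JSON message body
        table.put_item(Item={
            "order_id": payload["order_id"],
            "status": payload.get("status", "received"),
        })
    # Empty list = whole batch succeeded (ReportBatchItemFailures mode).
    return {"batchItemFailures": []}
```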
City of London, London, United Kingdom Hybrid / WFH Options
Anson Mccade
knowledge of Kafka, Confluent, Databricks, Unity Catalog, and cloud-native architecture. Skilled in Data Mesh, Data Fabric, and product-led data strategy design. Experience with big data tools (e.g., Spark), ETL/ELT, SQL/NoSQL, and data visualisation. Confident communicator with a background in consultancy, stakeholder management, and Agile delivery. Want to hear more? Message me anytime. LinkedIn …
London, South East, England, United Kingdom Hybrid / WFH Options
Method Resourcing
a plus). Experience with model lifecycle management (MLOps), including monitoring, retraining, and model versioning. Ability to work across data infrastructure, from SQL to large-scale distributed data tools (Spark, etc.). Strong written and verbal communication skills, especially in cross-functional contexts. Bonus Experience (Nice to Have): Exposure to large language models (LLMs) or foundational model adaptation. Previous …
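Model versioning in an MLOps workflow is commonly handled with MLflow's registry; a hedged sketch, assuming a tracking server with a model registry is configured and using a hypothetical model name:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)

with mlflow.start_run():
    model = RandomForestClassifier(random_state=0).fit(X, y)
    # Logging with registered_model_name creates a new registry version each
    # run; that version is the unit that gets promoted, monitored, retired.
    mlflow.sklearn.log_model(
        model,
        "model",                                   # artifact path in the run
        registered_model_name="churn-classifier",  # hypothetical name
    )
```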
would really make your application stand out: Implementation experience with Machine Learning models and applications. Knowledge of cloud-based Machine Learning engines (AWS, Azure, Google, etc.). Experience with large-scale data processing tools (Spark, Hadoop, etc.). Ability to query and program databases (SQL, NoSQL). Experience with distributed ML frameworks (TensorFlow, PyTorch, etc.). Familiarity with collaborative software tools (Git, Jira, etc.). Experience with user interface libraries/…
and committed to ongoing learning and mentoring colleagues. Key Responsibilities: Designing, prototyping, and implementing robust recommendation applications using best-practice agile development processes. Working with technologies including Java, Scala, Spark, EMR, Kubernetes, and Airflow. Building cloud infrastructure in AWS to host and monitor the applications, and automating common tasks mercilessly. Collaborating as part of a tight-knit, agile, quality …
Salford, Manchester, United Kingdom Hybrid / WFH Options
Manchester Digital
the ability to pivot strategies in response to innovative technologies, insights, or regulatory developments. Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and big data technologies (e.g., Snowflake, Spark). Strong communication skills, with the ability to distill complex data concepts into clear messages for non-technical stakeholders. Excellent stakeholder management and cross-functional collaboration skills, with the …
testing, mentoring junior scientists, and leading technical decisions. You are proficient in Python, Java, Scala, and ML frameworks (e.g., TensorFlow, PyTorch), with experience in cloud platforms (AWS), big data (Spark), and deployment tools (Kubernetes, Airflow, Docker). Accommodation requests: If you need assistance with any part of the application or recruiting process due to a disability, or other physical …
systems in modern cloud environments (e.g. AWS, GCP). Technologies and Tools: Python. ML and MLOps tooling (e.g. SageMaker, Databricks, TFServing, MLflow). Common ML libraries (e.g. scikit-learn, PyTorch, TensorFlow). Spark and Databricks. AWS services (e.g. IAM, S3, Redis, ECS). Shell scripting and related developer tooling. CI/CD tools and best practices. Streaming and batch data systems (e.g. Kafka …
Proficiency in a systems programming language (e.g., Go, C++, Java, Rust). Experience with deep learning frameworks like PyTorch or TensorFlow. Experience with large-scale data processing engines like Spark and Dataproc. Familiarity with data pipeline tools like dbt. Benefits: Flexible Working Hours & Remote-First Environment - Work when and where you're most productive, with flexibility and support. Comprehensive …
presentations. Strong organisational skills with experience in balancing multiple projects. Familiarity with Posit Connect, workflow orchestration tools (e.g., Airflow), AWS services (e.g., SageMaker, Redshift), or distributed computing tools (e.g., Spark, Kafka). Experience in a media or newsroom environment. Agile team experience. Advanced degree in Maths, Statistics, or a related field. What's in it for you? Our benefits: Our …
ad conversion data, targeting and measurement, as well as the changing privacy and compliance landscape. Have a data engineering background, or working experience with data technologies such as Databricks, Spark, Kafka, SQL and Airflow. Have a strong sense of ownership and a track record of delivery. You get huge satisfaction from tackling complex and ambitious problems, and delivering the highest …
City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
to create, test and validate data models and outputs. Set up monitoring and ensure data health for outputs. What we are looking for: Proficiency in Python, with experience in Apache Spark and PySpark. Previous experience with data analytics software. Ability to scope new integrations and translate user requirements into technical specifications. What’s in it for you? Base …
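As a hedged illustration of the validate-and-monitor responsibilities above, a small PySpark sketch that computes row-level data-quality counts; the input path and column names are hypothetical, and in practice the counts would feed a monitoring system rather than stdout:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("/data/models/output/")   # hypothetical model output

# Each check counts failing rows; zero means the check passes.
checks = {
    "null_ids": df.filter(F.col("id").isNull()).count(),
    "negative_amounts": df.filter(F.col("amount") < 0).count(),
    "duplicate_ids": df.count() - df.dropDuplicates(["id"]).count(),
}

for name, failures in checks.items():
    print(f"{name}: {failures} failing rows")  # ship to a dashboard in practice
```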
Disaster recovery process/tools. Experience in troubleshooting and problem resolution. Experience in System Integration. Knowledge of the following: Hadoop, Flume, Sqoop, MapReduce, Hive/Impala, HBase, Kafka, Spark Streaming. Experience of ETL tools incorporating Big Data. Shell Scripting, Python. Beneficial Skills: Understanding of LAN, WAN, VPN and SD Networks. Hardware and cabling set-up experience. Experience of …
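The Kafka and Spark Streaming items above typically combine as a Structured Streaming job; a minimal sketch, assuming a hypothetical broker, topic, and checkpoint path, with the spark-sql-kafka connector on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Subscribe to a Kafka topic; broker address and topic are hypothetical.
events = (spark.readStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")
               .option("subscribe", "events")
               .load()
               .select(F.col("value").cast("string").alias("payload")))

# Append each micro-batch as Parquet; the checkpoint enables safe restarts.
query = (events.writeStream
               .format("parquet")
               .option("path", "/data/streams/events/")
               .option("checkpointLocation", "/data/checkpoints/events/")
               .start())
query.awaitTermination()
```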
processes on their own Cloud estate. Responsibilities include: DevOps tooling/automation written with Bash/Python/Groovy/Jenkins/Golang. Provisioning software/frameworks (Elasticsearch/Spark/Hadoop/PostgreSQL). Infrastructure management via configuration as code and IaC (Ansible, Terraform, Packer). Log and metric aggregation with Fluentd, Prometheus, Grafana, Alertmanager. Public Cloud, primarily GCP, but also AWS and Azure …
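On the metrics-aggregation side, a hedged sketch of exposing a custom counter for Prometheus to scrape, using the official Python client; the metric name, port, and work loop are assumptions:

```python
import time
from prometheus_client import Counter, start_http_server

# Hypothetical counter for provisioning runs handled by the tooling.
JOBS_PROVISIONED = Counter(
    "provisioning_jobs_total",
    "Provisioning jobs handled by the automation tooling",
)

if __name__ == "__main__":
    start_http_server(8000)        # metrics exposed at :8000/metrics
    while True:
        JOBS_PROVISIONED.inc()     # stand-in for real provisioning work
        time.sleep(10)
```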