Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom
IO Associates
… flows to Databricks for improved traceability Implement Unity Catalog for automated data lineage Deliver backlog items through Agile sprint planning Skills & Experience Strong hands-on experience with Databricks, Fabric, Apache Spark, Delta Lake Proficient in Python, SQL, and PySpark Familiar with Azure Data Factory, Event Hub, Unity Catalog Solid understanding of data governance and enterprise architecture Effective communicator …
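A minimal sketch of the Databricks/Delta Lake work this listing describes (not from the posting itself): PySpark cleaning raw events and writing them to a Unity Catalog table, which is what enables the automated lineage capture mentioned above. The paths, column names, and table names are hypothetical.

```python
from pyspark.sql import SparkSession

# Minimal PySpark + Delta Lake sketch; assumes a Databricks-style
# environment where Delta support is preconfigured.
spark = SparkSession.builder.appName("lineage-demo").getOrCreate()

raw = spark.read.json("/mnt/landing/events/")  # hypothetical landing path
clean = raw.dropDuplicates(["event_id"]).filter("event_ts IS NOT NULL")

# Writing to a three-level Unity Catalog name (catalog.schema.table)
# lets Unity Catalog record lineage from source to target automatically.
clean.write.format("delta").mode("append").saveAsTable("main.analytics.events_clean")
```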
… CD pipelines Familiar with observability tools, logging frameworks, and performance monitoring Background in serverless technologies (e.g., Lambda, Step Functions, API Gateway) Experience with data tools like EMR, Glue, or Apache Spark Understanding of event-driven architecture (EventBridge, SNS, SQS) Knowledge of AWS database offerings including DynamoDB and RDS Familiarity with multi-region deployments and failover strategies AWS certifications …
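As a rough illustration of the event-driven serverless pattern named above (an assumption, not part of the posting) — a hypothetical AWS Lambda handler draining an SQS-delivered batch; the payload field names are invented for the example.

```python
import json

# Hypothetical Lambda handler for SQS-delivered events (event-driven
# architecture with SQS/EventBridge). Field names are illustrative.
def handler(event, context):
    for record in event.get("Records", []):
        payload = json.loads(record["body"])  # SQS wraps each message body as a string
        print(f"processing order {payload.get('order_id')}")
    # With ReportBatchItemFailures enabled, an empty list means the whole batch succeeded.
    return {"batchItemFailures": []}
```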
City of London, London, United Kingdom Hybrid / WFH Options
Anson Mccade
… knowledge of Kafka, Confluent, Databricks, Unity Catalog, and cloud-native architecture. Skilled in Data Mesh, Data Fabric, and product-led data strategy design. Experience with big data tools (e.g., Spark), ETL/ELT, SQL/NoSQL, and data visualisation. Confident communicator with a background in consultancy, stakeholder management, and Agile delivery. Want to hear more? Message me anytime. …
London, South East, England, United Kingdom Hybrid / WFH Options
Method Resourcing
… a plus). Experience with model lifecycle management (MLOps), including monitoring, retraining, and model versioning. Ability to work across data infrastructure, from SQL to large-scale distributed data tools (Spark, etc.). Strong written and verbal communication skills, especially in cross-functional contexts. Bonus Experience (Nice to Have) Exposure to large language models (LLMs) or foundational model adaptation. Previous …
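A loose sketch of the MLOps lifecycle skills listed above (monitoring, retraining, versioning), assuming an MLflow tracking server is configured; the experiment data and registered model name are hypothetical.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Minimal MLflow sketch of model versioning: each retraining run is
# logged, and registering under one name creates a new model version.
X, y = make_classification(n_samples=500, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))  # monitored metric
    mlflow.sklearn.log_model(
        model,
        "model",
        registered_model_name="churn-classifier",  # hypothetical registry name
    )
```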
… would really make your application stand out: Implementation experience with Machine Learning models and applications Knowledge of cloud-based Machine Learning engines (AWS, Azure, Google, etc.) Experience with large-scale data processing tools (Spark, Hadoop, etc.) Ability to query and program databases (SQL, NoSQL) Experience with distributed ML frameworks (TensorFlow, PyTorch, etc.) Familiarity with collaborative software tools (Git, Jira, etc.) Experience with user interface libraries/…
… and committed to ongoing learning and mentoring colleagues. Key Responsibilities: Designing, prototyping, and implementing robust recommendation applications using best-practice agile development processes Working with technologies including Java, Scala, Spark, EMR, Kubernetes, and Airflow Building cloud infrastructure in AWS to host and monitor the applications, and automating common tasks mercilessly. Collaborating as part of a tight-knit, agile, quality …
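A minimal sketch of the Airflow orchestration mentioned above, assuming Airflow 2.x; the DAG id, schedule, and task bodies are hypothetical stand-ins for a real retrain-and-publish pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical daily pipeline: retrain a recommender, then publish it.
def retrain():
    print("retraining recommendation model...")

def publish():
    print("publishing model artefact...")

with DAG(
    dag_id="recsys_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    retrain_task = PythonOperator(task_id="retrain", python_callable=retrain)
    publish_task = PythonOperator(task_id="publish", python_callable=publish)
    retrain_task >> publish_task  # publish only after a successful retrain
```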
Salford, Manchester, United Kingdom Hybrid / WFH Options
Manchester Digital
… the ability to pivot strategies in response to innovative technologies, insights, or regulatory developments. Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and big data technologies (e.g., Snowflake, Spark). Strong communication skills, with the ability to distil complex data concepts into clear messages for non-technical stakeholders. Excellent stakeholder management and cross-functional collaboration skills, with the …
… testing, mentoring junior scientists, and leading technical decisions. You are proficient in Python, Java, Scala, and ML frameworks (e.g., TensorFlow, PyTorch), with experience in cloud platforms (AWS), big data (Spark), and deployment tools (Kubernetes, Airflow, Docker). Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical …
… systems in modern cloud environments (e.g. AWS, GCP) Technologies and Tools Python ML and MLOps tooling (e.g. SageMaker, Databricks, TF Serving, MLflow) Common ML libraries (e.g. scikit-learn, PyTorch, TensorFlow) Spark and Databricks AWS services (e.g. IAM, S3, Redis, ECS) Shell scripting and related developer tooling CI/CD tools and best practices Streaming and batch data systems (e.g. Kafka …
Proficiency in a systems programming language (e.g., Go, C++, Java, Rust). Experience with deep learning frameworks like PyTorch or TensorFlow. Experience with large-scale data processing engines like Spark and Dataproc. Familiarity with data pipeline tools like dbt. Benefits Flexible Working Hours & Remote-First Environment - Work when and where you're most productive, with flexibility and support. Comprehensive …
… presentations Strong organisational skills with experience in balancing multiple projects Familiarity with Posit Connect, workflow orchestration tools (e.g., Airflow), AWS services (e.g., SageMaker, Redshift), or distributed computing tools (e.g., Spark, Kafka) Experience in a media or newsroom environment Agile team experience Advanced degree in Maths, Statistics, or a related field What's in it for you? Our benefits Our …
… ad conversion data, targeting and measurement, as well as the changing privacy and compliance landscape. Have a data engineering background, or working experience with data technologies such as Databricks, Spark, Kafka, SQL and Airflow. Have a strong sense of ownership and a track record of delivery. You get huge satisfaction from tackling complex and ambitious problems, and delivering the highest …
City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
… to create, test and validate data models and outputs Set up monitoring and ensure data health for outputs What we are looking for Proficiency in Python, with experience in Apache Spark and PySpark Previous experience with data analytics software Ability to scope new integrations and translate user requirements into technical specifications What’s in it for you? Base …
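A minimal PySpark sketch of the "validate data models and outputs" responsibility above — counting rows that break basic expectations before an output table is published. The path, column name, and thresholds are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical validation step for a modelled output table.
spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("/data/models/customer_scores/")  # hypothetical path
bad = df.filter(
    F.col("score").isNull() | (F.col("score") < 0) | (F.col("score") > 1)
)

n_bad = bad.count()
if n_bad > 0:
    # Fail fast so downstream consumers never see invalid scores.
    raise ValueError(f"{n_bad} rows failed data-quality checks")
```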
Disaster recovery process/tools Experience in troubleshooting and problem resolution Experience in System Integration Knowledge of the following: Hadoop, Flume, Sqoop, MapReduce, Hive/Impala, HBase, Kafka, Spark Streaming Experience of ETL tools incorporating Big Data; Shell scripting, Python Beneficial Skills: Understanding of: LAN, WAN, VPN and SD Networks Hardware and Cabling set-up experience Experience of …
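A sketch of the Kafka-to-Spark-Streaming combination this listing names, assuming Spark's Structured Streaming API; the broker address and topic are hypothetical.

```python
from pyspark.sql import SparkSession

# Minimal Spark Structured Streaming job reading from Kafka.
spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Kafka delivers values as bytes; cast to string before processing.
query = (
    events.selectExpr("CAST(value AS STRING) AS value")
    .writeStream.format("console")  # console sink for demonstration only
    .outputMode("append")
    .start()
)
query.awaitTermination()
```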
Belfast, County Antrim, Northern Ireland, United Kingdom
McGregor Boyall
… and strategically important role that involves high-impact project work at one of the world's most complex financial institutions. Key Skills: Strong hands-on experience with SQL, Python, Spark Background in Big Data/Hadoop environments Solid understanding of ETL/Data Warehousing concepts Strong communicator, with the ability to explain technical concepts to senior stakeholders Details: Location …
… processes on their own Cloud estate. Responsibilities include: DevOps tooling/automation written with Bash/Python/Groovy/Jenkins/Golang Provisioning software/frameworks (Elasticsearch/Spark/Hadoop/PostgreSQL) Infrastructure Management - Configuration as Code and Infrastructure as Code (Ansible, Terraform, Packer) Log and metric aggregation with Fluentd, Prometheus, Grafana, Alertmanager Public Cloud, primarily GCP, but also AWS and Azure …
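A small sketch of the metric-aggregation side mentioned above — exposing custom metrics for Prometheus to scrape via the official Python client; the metric names and port are hypothetical.

```python
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Hypothetical metrics for a provisioning service.
JOBS_TOTAL = Counter("provisioner_jobs_total", "Provisioning jobs processed")
QUEUE_DEPTH = Gauge("provisioner_queue_depth", "Jobs waiting in the queue")

start_http_server(8000)  # serves /metrics on port 8000 for Prometheus to scrape
while True:
    JOBS_TOTAL.inc()                          # one more job handled
    QUEUE_DEPTH.set(random.randint(0, 10))    # stand-in for a real queue measurement
    time.sleep(5)
```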
… or similar languages (e.g. Java or Python) Software collaboration and revision control (e.g. Git or SVN) Desired skills and experiences: ElasticSearch/Kibana Cloud computing (e.g. AWS) Hadoop/Spark etc. Graph Databases Educational level: Master's degree …
… and verbal Preferred (but not required) to have: Experience with large, global Financial Services & Banking customers Hands-on experience with Python Experience working with modern data technology (e.g. dbt, Spark, containers, DevOps tooling, orchestration tools, Git, etc.) Experience with data science and machine learning technology People want to buy from people who understand them. Our Solution Engineers build connections …
… statistics, mathematics, economics, or a related field Solid experience in a data science, analytics, or consulting role, preferably in the cryptocurrency, financial services, or cybersecurity domains Proficiency in Python, Spark, SQL, and other data analysis and visualization tools and frameworks, such as Tableau, Power BI, or Splunk Experience in applying machine learning, artificial intelligence, and natural language processing techniques …
… Terraform. Experience with observability stacks (Grafana, Prometheus, OpenTelemetry). Familiarity with Postgres. Interest in data privacy, AdTech/MarTech or large-scale data processing. Familiarity with Kafka, gRPC or Apache Spark. As well as working as part of an amazing, engaging and collaborative team, we offer our staff a wide range of benefits to motivate them to be the …
… there's nothing we can't achieve in the cloud. BASIC QUALIFICATIONS 7+ years of technical specialist, design and architecture experience 5+ years of database (e.g. SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience 7+ years of consulting, design and implementation of serverless distributed solutions experience 5+ years of software development with object-oriented language experience 3+ years of cloud …
… Analytics, Azure Data Factory, Azure Purview). • Deep understanding and practical experience in designing and implementing medallion data lake architectures. • Strong knowledge of data warehousing, big data technologies (including Spark), and distributed systems. • Proficiency in data modelling techniques (conceptual, logical, physical) and data quality frameworks. • Proven experience in developing and implementing QA strategies, test plans, and automated testing for …
Cambridge, Cambridgeshire, United Kingdom Hybrid / WFH Options
Deloitte LLP
… for the Deloitte landscape and use cases. Build data pipelines, models, and AI applications, using cloud platforms and frameworks such as Azure AI/ML Studio, AWS Bedrock, GCP Vertex, Spark, TensorFlow, PyTorch, etc. Build and deploy production-grade fine-tuned LLMs and complex RAG architectures. Create and manage the complex and robust prompts across the GenAI solutions. Communicate effectively …
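A schematic sketch of the RAG pattern this role mentions. The toy keyword "retriever" stands in for a real embedding model and vector store, and `generate` is a stub for whatever LLM the stack provides — every name here is an illustrative assumption, not the posting's architecture.

```python
# Schematic retrieval-augmented generation (RAG) flow with toy components.
DOCS = [
    "Unity Catalog provides governance for Databricks assets.",
    "RAG grounds LLM answers in retrieved documents.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Toy relevance score: documents ranked by words shared with the question.
    # A real system would use embeddings and a vector store instead.
    q = set(question.lower().split())
    return sorted(DOCS, key=lambda d: -len(q & set(d.lower().split())))[:k]

def generate(prompt: str) -> str:
    # Stub standing in for an LLM call (Bedrock, Vertex, Azure, etc.).
    return f"[LLM response to {len(prompt)} chars of prompt]"

def answer(question: str) -> str:
    # Managed prompt template: ground the model in the retrieved context.
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
    return generate(prompt)

print(answer("What does RAG do?"))
```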