Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom
IO Associates
flows to Databricks for improved traceability Implement Unity Catalog for automated data lineage Deliver backlog items through Agile sprint planning Skills & Experience Strong hands-on experience with Databricks, Fabric, Apache Spark, Delta Lake Proficient in Python, SQL, and PySpark Familiar with Azure Data Factory, Event Hub, Unity Catalog Solid understanding of data governance and enterprise architecture Effective communicator with …
Central London, London, United Kingdom Hybrid / WFH Options
Singular Recruitment
applications and high proficiency SQL for complex querying and performance tuning. ETL/ELT Pipelines: Proven experience designing, building, and maintaining production-grade data pipelines using Google Cloud Dataflow (Apache Beam) or similar technologies. GCP Stack: Hands-on expertise with BigQuery, Cloud Storage, Pub/Sub, and orchestrating workflows with Composer or Vertex Pipelines. Data Architecture & Modelling: Ability to …
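Whatever the runner (Dataflow, Beam, or similar), the transform step of a production ETL/ELT pipeline is at heart a pure function over records: filter out malformed rows, normalise the rest. A minimal sketch in plain Python, with no Beam dependency and all field names hypothetical:

```python
# ETL-style transform sketch in plain Python; field names are illustrative,
# not taken from any real pipeline.
from typing import Optional

def transform(record: dict) -> Optional[dict]:
    """Clean one raw event: drop malformed rows, normalise the rest."""
    if "user_id" not in record or record.get("amount") is None:
        return None  # filtered out, like a Beam Filter step
    return {
        "user_id": str(record["user_id"]).strip(),
        "amount_pence": int(round(float(record["amount"]) * 100)),
    }

def run_pipeline(raw_records):
    """Map + filter over the input, as a ParDo-style stage would."""
    return [out for r in raw_records if (out := transform(r)) is not None]

events = [
    {"user_id": " 42 ", "amount": "9.99"},
    {"amount": "1.00"},              # missing user_id -> dropped
    {"user_id": 7, "amount": None},  # null amount -> dropped
]
print(run_pipeline(events))  # only the cleaned first record survives
```

In a real Beam pipeline the same function body would sit inside a `ParDo`/`Map` stage; keeping it pure makes it unit-testable without the runner.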
the office. What You'll Need to Succeed You'll bring 5+ years of data engineering experience, with expert-level skills in Python and/or Scala, SQL, and Apache Spark. You're highly proficient with Databricks and Databricks Asset Bundles, and have a strong understanding of data transformation best practices. Experience with GitHub, DBT, and handling large structured …
time and batch inference Monitor and troubleshoot deployed models to ensure reliability and performance Stay updated with advancements in machine learning frameworks and distributed computing technologies Experience: Proficiency in Apache Spark and Spark MLlib for machine learning tasks Strong understanding of predictive modeling techniques (e.g., regression, classification, clustering) Experience with distributed systems like Hadoop for data storage and processing …
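Spark MLlib aside, the predictive-modelling workflow the listing names follows a fit/predict contract. A toy nearest-centroid classifier shows that contract in plain Python; this is an illustration only, not MLlib's API:

```python
# Toy nearest-centroid classifier: illustrates the fit/predict workflow
# that MLlib-style estimators follow, with no Spark dependency.
import math

class NearestCentroid:
    def fit(self, X, y):
        # Average the feature vectors per class label.
        sums, counts = {}, {}
        for features, label in zip(X, y):
            acc = sums.setdefault(label, [0.0] * len(features))
            for i, v in enumerate(features):
                acc[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids_ = {
            label: [v / counts[label] for v in acc]
            for label, acc in sums.items()
        }
        return self

    def predict(self, X):
        # Assign each point to the label of the nearest class centroid.
        return [
            min(self.centroids_,
                key=lambda lb: math.dist(x, self.centroids_[lb]))
            for x in X
        ]

model = NearestCentroid().fit(
    [[0, 0], [1, 1], [9, 9], [10, 10]],
    ["low", "low", "high", "high"],
)
print(model.predict([[0.2, 0.1], [9.5, 9.8]]))  # -> ['low', 'high']
```

The same two-phase shape (estimate parameters from training data, then apply them to new points) underlies the regression, classification, and clustering techniques the role mentions.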
of large-scale distributed data processing. Experience with developing extract-transform-load (ETL). Experience with distributed messaging systems like Kafka and RabbitMQ. Experience with distributed computing frameworks like Apache Spark and Flink. Bonus Points Experience working with AWS or Google Cloud Platform (GCP). Experience in building a data warehouse and data lake. Knowledge of advertising platforms. About …
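The value of messaging systems like Kafka and RabbitMQ is that producers and consumers never call each other directly; a broker-held queue sits between them. The decoupling idea in miniature, using Python's stdlib queue in place of a real broker (names illustrative):

```python
# Producer/consumer decoupling sketch: a stdlib queue stands in for a
# broker-held topic (Kafka/RabbitMQ); message shape is hypothetical.
import queue
import threading

topic = queue.Queue()   # stands in for the broker-held topic/queue
consumed = []

def producer():
    for i in range(5):
        topic.put({"offset": i, "payload": f"event-{i}"})
    topic.put(None)      # sentinel: end of stream

def consumer():
    # Consumer paces itself independently of the producer.
    while (msg := topic.get()) is not None:
        consumed.append(msg["payload"])

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)  # all five events, in offset order
```

A real broker adds durability, partitioning, and delivery guarantees on top, but the producer/consumer contract is the same.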
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Curo Resourcing Ltd
etc. Infrastructure as Code and CI/CD paradigms and systems such as: Ansible, Terraform, Jenkins, Bamboo, Concourse etc. Observability - SRE Big Data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop Ecosystem Excellent knowledge of YAML or similar languages The following Technical Skills & Experience would be desirable: Jupyter Hub Awareness RabbitMQ or other common queue technology …
Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet …
knowledge and Unix skills. Highly proficient working with cloud environments (ideally Azure), distributed computing and optimising workflows and pipelines. Experience working with common data transformation and storage formats, e.g. Apache Parquet, Delta tables. Strong experience working with containerisation (e.g. Docker) and deployment (e.g. Kubernetes). Experience with Spark, Databricks, data lakes. Highly proficient in working with version control and …
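Formats like Parquet (and the Delta tables built on it) store data column-wise rather than row-wise, which is what makes column pruning and per-column compression effective. The row-to-columnar pivot in miniature, stdlib only, with hypothetical field names:

```python
# Row-oriented records pivoted into a columnar layout: the core idea
# behind Parquet/Arrow-style storage (stdlib sketch, fields illustrative).

def to_columnar(rows):
    """Pivot a list of dicts into one value-list per column."""
    columns = {}
    for row in rows:
        for name, value in row.items():
            columns.setdefault(name, []).append(value)
    return columns

rows = [
    {"ts": 1, "price": 10.0},
    {"ts": 2, "price": 10.5},
    {"ts": 3, "price": 10.1},
]
cols = to_columnar(rows)
print(cols["price"])  # one column readable without touching "ts"
```

A query that only needs `price` can now skip the `ts` column entirely; on disk, Parquet adds encodings and statistics per column chunk on top of this layout.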
Bedford, Bedfordshire, England, United Kingdom Hybrid / WFH Options
Reed Talent Solutions
source systems into our reporting solutions. Pipeline Development: Develop and configure meta-data driven data pipelines using data orchestration tools such as Azure Data Factory and engineering tools like Apache Spark to ensure seamless data flow. Monitoring and Failure Recovery: Implement monitoring procedures to detect failures or unusual data profiles and establish recovery processes to maintain data integrity. Azure …
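A "meta-data driven" pipeline keeps per-source details (source, target, watermark column) as data and loops one generic loader over them, so adding a feed means adding a metadata row, not new code. The pattern in miniature, with ADF/Spark specifics stubbed out and every name hypothetical:

```python
# Metadata-driven pipeline driver: the load logic is generic; per-source
# details live in a metadata table (here, a list of dicts). All source,
# target, and column names are hypothetical.

PIPELINE_METADATA = [
    {"source": "crm.accounts", "target": "staging.accounts", "watermark": "updated_at"},
    {"source": "erp.invoices", "target": "staging.invoices", "watermark": "modified_on"},
]

def run_load(entry, extract, load):
    """Generic incremental load driven entirely by one metadata entry."""
    rows = extract(entry["source"], since_column=entry["watermark"])
    load(entry["target"], rows)
    return len(rows)

# Stub extract/load to show the control flow; a real system would invoke
# ADF activities or Spark jobs here instead.
fake_store = {"crm.accounts": [{"id": 1}], "erp.invoices": [{"id": 7}, {"id": 8}]}
loaded = {}

counts = [
    run_load(e,
             extract=lambda src, since_column: fake_store[src],
             load=lambda tgt, rows: loaded.__setitem__(tgt, rows))
    for e in PIPELINE_METADATA
]
print(counts)  # rows loaded per metadata entry
```

The monitoring hook the listing asks for slots naturally into `run_load`: compare `len(rows)` or profile statistics against expectations before committing the load.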
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Apacheix
a year Individual healthcare cover Genuine flexible working Work from home, our Bristol offices, or client sites The latest secure tech Investment in personal development Vibrant social scene Why Apache iX? Our growing team brings a wealth of experience from across the defence and security sector, and we pride ourselves on delivering the highest quality services to our clients. …
data-based insights, collaborating closely with stakeholders. Passionately discover hidden solutions in large datasets to enhance business outcomes. Design, develop, and maintain data processing pipelines using Cloudera technologies, including Apache Hadoop, Apache Spark, Apache Hive, and Python. Collaborate with data engineers and scientists to translate data requirements into technical specifications. Develop and maintain frameworks for efficient data …
City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
to create, test and validate data models and outputs Set up monitoring and ensure data health for outputs What we are looking for Proficiency in Python, with experience in Apache Spark and PySpark Previous experience with data analytics software Ability to scope new integrations and translate user requirements into technical specifications What’s in it for you? Base salary …
South West London, London, United Kingdom Hybrid / WFH Options
Anson Mccade
on experience with cloud platforms like AWS, Azure, GCP, or Snowflake. Strong knowledge of data governance, compliance, and security standards (GDPR, CCPA). Proficiency in big data technologies like Apache Spark and understanding of data product strategies. Strong leadership and stakeholder management skills in Agile delivery environments. Package: £90,000 - £115,000 base salary, bonus, pension and company benefits …
London, South East, England, United Kingdom Hybrid / WFH Options
XPERT-CAREER LTD
AI agents Understanding of Large Language Models (LLMs) and intelligent automation workflows Experience building high-availability, scalable systems using microservices or event-driven architecture Knowledge of orchestration tools like Apache Airflow, Kubernetes, or serverless frameworks Qualifications: Bachelor’s degree in Computer Science, Engineering, or related field Experience working in Agile/Scrum environments Strong problem-solving skills and attention …
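Orchestration tools like Airflow ultimately resolve a DAG of task dependencies into a valid execution order, and the scheduling core of that is a topological sort. A stdlib sketch of the idea, with hypothetical task names:

```python
# Dependency resolution at the heart of DAG orchestration (Airflow-style),
# using the stdlib graphlib module; task names are illustrative only.
from graphlib import TopologicalSorter

# task -> set of tasks it depends on
dag = {
    "extract": set(),
    "transform": {"extract"},
    "train_model": {"transform"},
    "publish_report": {"transform"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # "extract" first, then "transform", then the two leaves
```

A real orchestrator layers scheduling, retries, and parallel execution of independent tasks (here, `train_model` and `publish_report`) on top of exactly this ordering.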
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
KO2 Embedded Recruitment Solutions LTD
team, you'll deliver Linux infrastructure solutions and support for a diverse range of clients. Expect to work with: Linux distributions: Debian, Ubuntu, Red Hat Enterprise Linux Web stacks: Apache, Nginx, MySQL, PostgreSQL, PHP, Python Networking: Static/dynamic routing, DNS, VPNs, and firewalls Containers & automation: Docker, Kubernetes, and CI/CD pipelines Cloud platforms: AWS, Azure, and Google …
Sheffield, Yorkshire, United Kingdom Hybrid / WFH Options
Reach Studios Limited
Azure etc.) What You'll Need Must-haves: Comprehensive experience in a DevOps or SRE role, ideally in a multi-project environment Deep experience with web stacks: Nginx/Apache, PHP-FPM, MySQL, Redis, Varnish, Elasticsearch Proven expertise in managing and optimising Cloudflare across DNS, security, performance, and access Experience with Magento 2 infrastructure and deployment CI/CD …
is crucial in reimagining and rebuilding our data platform to ensure we stay ahead in a rapidly evolving landscape. You'll be hands-on, using the latest tools like Apache Arrow, Polars, and Dagster to craft a data architecture that aligns with our innovative investment strategies. In this position, you'll be deeply involved in building and optimizing data …
features. Rapid Prototyping: Create interactive AI demos and proofs-of-concept with Streamlit, Gradio, or Next.js for stakeholder feedback; MLOps & Deployment: Implement CI/CD pipelines (e.g., GitLab CI, Apache Airflow), experiment tracking (MLflow), and model monitoring for reliable production workflows; Cross-Functional Collaboration: Participate in code reviews, architectural discussions, and sprint planning to deliver features end-to-end. …