Location: Remote-first (UK-based) 💰 Rate: Up to £550 p/d 📆 Contract: 6 - 12 months (Outside IR35) 🛠 Tech Stack: Python, FastAPI, GCP, BigQuery, Apache Spark, Apache Beam, Google Cloud Dataflow We're working with a forward-thinking consultancy that helps top companies build and scale high … You’ll Be Doing: 🔹 Building data pipelines and ETL workflows that process huge datasets 🔹 Designing, optimizing, and maintaining high-throughput reporting solutions 🔹 Working with Apache Spark for large-scale data processing 🔹 Using Apache Beam and Google Cloud Dataflow to manage complex data workflows 🔹 Developing and improving backend … writing clean, efficient, and scalable code ✔ Experience with BigQuery, PostgreSQL, and Elasticsearch ✔ Hands-on experience with Google Cloud, Kubernetes, and Terraform ✔ Deep understanding of Apache Spark for large-scale data processing ✔ Knowledge of Apache Beam & Google Cloud Dataflow for data pipeline orchestration ✔ A team-first mindset with More ❯
performance and responsiveness. Stay Up to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and … or Azure. DevOps Tools: Familiarity with containerization (Docker) and infrastructure automation tools like Terraform or Ansible. Real-time Data Streaming: Experience with Apache Pulsar or similar systems for real-time messaging and stream processing is a plus. Data Engineering: Experience with Apache Spark, Databricks, or … similar big data platforms for processing large datasets, building data pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems. Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus. More ❯
experienced Head of Data Engineering to lead data strategy, architecture, and team management in a fast-paced fintech environment. This role involves designing scalable Apache Spark, Databricks, and Snowflake solutions on Azure, optimizing ETL/ELT pipelines, ensuring data security and compliance, and driving innovation in big data … team of data engineers, fostering a culture of innovation, collaboration, and best practices. Big Data Processing: Architect, optimize, and manage big data solutions leveraging Apache Spark, Databricks, and Snowflake to enable real-time and batch data processing. Cloud Data Infrastructure: Oversee the deployment and maintenance of Azure-based … 8+ years in data engineering, with at least 3+ years in a leadership role within fintech or financial services. Strong hands-on experience with Apache Spark, Databricks, Snowflake, and Azure Data Services (Azure Data Lake, Azure Synapse, etc.). Deep understanding of distributed computing, data warehousing, and data More ❯
City of London, London, United Kingdom Hybrid / WFH Options
I3 Resourcing Limited
Data Platform Engineer - SSIS & T-SQL, Data Factory - Hybrid Data Platform Engineer SSIS & T-SQL, Data Factory, Databricks/Apache Spark London Insurance Market City, London/Hybrid (3 days per week in the office) Permanent £85,000 per annum + benefits + bonus PLEASE ONLY APPLY IF … data function in a London Market Insurance setting Sound understanding of data warehousing concepts ETL/ELTs - SSIS & T-SQL, Data Factory, Databricks/Apache Spark Data modelling Strong communication skills and able to build relationships and trust with stakeholders Data Platform Engineer SSIS & T-SQL, Data Factory … Databricks/Apache Spark London Insurance Market City, London/Hybrid (3 days per week in the office) Permanent £85,000 per annum + benefits + bonus More ❯
Our team values continuous learning, knowledge sharing, and creating inclusive solutions that make a difference. Key Responsibilities Support customers with big data services including Apache Spark, Hive, Presto, and other Hadoop ecosystem components Develop and share technical solutions through various communication channels Contribute to improving support processes and … work week schedule, which may include weekends on rotation. BASIC QUALIFICATIONS - Good depth of understanding in Hadoop Administration, support and troubleshooting (any two applications: Apache Spark, Apache Hive, Presto, MapReduce, ZooKeeper, HBase, HDFS and Pig) - Good understanding of Linux and Networking concepts - Intermediate programming/scripting More ❯
Manchester, North West, United Kingdom Hybrid / WFH Options
INFUSED SOLUTIONS LIMITED
culture. Key Responsibilities Design, build, and maintain scalable data solutions to support business objectives. Work with Microsoft Fabric to develop robust data pipelines. Utilise Apache Spark and the Spark API to handle large-scale data processing. Contribute to data strategy, governance, and architecture best practices. Identify and … approaches. Collaborate with cross-functional teams to deliver projects on time. Key Requirements ✔ Hands-on experience with Microsoft Fabric. ✔ Strong expertise in Apache Spark and the Spark API. ✔ Knowledge of data architecture, engineering best practices, and governance. ✔ DP-600 & DP-700 certifications are highly More ❯
learning libraries in one or more programming languages. Keen interest in some of the following areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks More ❯
Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one. Deep experience with distributed computing using Apache Spark and knowledge of Spark runtime internals. Familiarity with CI/CD for production deployments. Working knowledge of MLOps. Design and deployment … data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of More ❯
companies where years-long behemoth projects are the norm, our projects are fast-paced, typically 2 to 4 months long. Most are delivered using Apache Spark/Databricks on AWS/Azure and require you to directly manage the customer relationship alone or in collaboration with a Project … at DATAPAO, meaning that you'll get access to Databricks' public and internal courses to learn all the tricks of Distributed Data Processing, MLOps, Apache Spark, Databricks, and Cloud Migration from the best. Additionally, we'll pay for various data & cloud certifications, you'll get dedicated time for … seniority level during the selection process. About DATAPAO At DATAPAO, we are delivery partners and the preferred training provider for Databricks, the creators of Apache Spark. Additionally, we are Microsoft Gold Partners in delivering cloud migration and data architecture on Azure. Our delivery partnerships enable us to work in More ❯
AI solutions using the Databricks Lakehouse (Delta Lake, Unity Catalog, MLflow). Design and lead the development of modular, high-performance data pipelines using Apache Spark and PySpark. Champion the adoption of Lakehouse architecture (bronze/silver/gold layers) to ensure scalable, governed data platforms. Collaborate with … monitoring across data workloads. Mentor engineering teams and support architectural decisions as a recognised Databricks expert. Essential Skills & Experience: Demonstrable expertise with Databricks and Apache Spark in production environments. Proficiency in PySpark, SQL, and working within one or more cloud platforms (Azure, AWS, or GCP). In-depth More ❯
data governance and best practice. ·Become an SME on the design, development, and deployment of data ETL pipelines (using Azure Data Factory, Azure Synapse, Apache Spark and other technologies) to access, combine and transform data from on-prem and cloud-based sources. ·Ensure that all data pipelines are … and balance the need for delivery over scalability Experience & Skills Required ·Proven track record of developing data pipelines and products using Azure, Azure Synapse, Apache Spark, DevOps, Snowflake, Databricks and Fabric. ·High level of coding proficiency in SQL and Python. ·A good level of experience of Data Modelling More ❯
is made up of a series of components providing the next generation valuation and risk management services. Responsibilities Development of big data technologies like Apache Spark and Azure Databricks Programming complex production systems in Scala, Java, or Python Experience in a platform engineering role on a major cloud … development, build, and runtime environments including experience of Kubernetes Salary - 140-150K/year SKILLS Must have Development of big data technologies like Apache Spark and Azure Databricks Programming complex production systems in Scala, Java, or Python Experience in a platform engineering role on a major cloud More ❯
to non-technical and technical audiences alike. Passion for collaboration, life-long learning, and driving business value through ML [Preferred] Experience working with Databricks & Apache Spark to process large-scale distributed datasets About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide - including … data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive More ❯
together. Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience [Preferred] Experience working with Databricks and Apache Spark [Preferred] Experience working in a customer-facing role About Databricks Databricks is the data and AI company. More than 10,000 organizations … data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive More ❯
driving business value through ML Company first focus and collaborative individuals - we work better when we work together. [Preferred] Experience working with Databricks and Apache Spark [Preferred] Experience working in a customer-facing role About Databricks Databricks is the data and AI company. More than 10,000 organizations … data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of More ❯
Greetings, We currently have an urgent opening for a Senior Java Developer (Spark) with more than 8 years of experience at Synechron in Belfast, UK. Job Role: Senior Java Developer (Spark) Job Location: Belfast, UK About Company: At Synechron, we believe in the power of digital to transform … with 8-12 years of hands-on commercial development experience. Proficient in understanding and working with distributed systems, specifically technologies such as Hadoop and Apache Hive. Mandatory experience with Apache Spark for big data processing. Familiarity with streaming technologies, particularly Kafka, is a plus. Experience in the … are interested in this opportunity, kindly send your updated profile along with the required details listed below. Total Experience Experience in Java Experience in Spark, Hadoop/Apache Hive, Kafka: Current CTC: Expected CTC: Notice period: Current Location: Ready to relocate to Belfast, UK: Visa Status: Passport Validity More ❯