performance and responsiveness. Stay Up to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and … or Azure. DevOps Tools: Familiarity with containerization (Docker) and infrastructure automation tools like Terraform or Ansible. Real-time Data Streaming: Experience with Apache Pulsar or similar systems for real-time messaging and stream processing is a plus. Data Engineering: Experience with Apache Spark, Databricks, or … similar big data platforms for processing large datasets, building data pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems. Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus. More ❯
experienced Head of Data Engineering to lead data strategy, architecture, and team management in a fast-paced fintech environment. This role involves designing scalable Apache Spark, Databricks, and Snowflake solutions on Azure, optimizing ETL/ELT pipelines, ensuring data security and compliance, and driving innovation in big data … team of data engineers, fostering a culture of innovation, collaboration, and best practices. Big Data Processing: Architect, optimize, and manage big data solutions leveraging Apache Spark, Databricks, and Snowflake to enable real-time and batch data processing. Cloud Data Infrastructure: Oversee the deployment and maintenance of Azure-based … 8+ years in data engineering, with at least 3+ years in a leadership role within fintech or financial services. Strong hands-on experience with Apache Spark, Databricks, Snowflake, and Azure Data Services (Azure Data Lake, Azure Synapse, etc.). Deep understanding of distributed computing, data warehousing, and data More ❯
Location: London Hybrid working Full Time Contract Long term engagement Job Summary We are seeking a highly skilled Data Engineer with expertise in Databricks, Apache Spark, and Scala/Python to join our dynamic team in London on a contingent worker basis. The ideal candidate will have hands … implementing robust data pipelines and ensuring compliance with data governance best practices. Key Responsibilities: Develop, optimize, and maintain big data pipelines using Databricks and Apache Spark. Write efficient, scalable, and maintainable Scala/Python code for data processing and transformation. Collaborate with data architects, analysts, and business … issues. Document technical processes, best practices, and governance policies. Key Requirements: 5+ years of experience in data engineering with a focus on Databricks and Apache Spark. Strong programming skills in Scala and/or Python. Experience or a strong understanding of data governance, metadata management, and More ❯
City of London, London, United Kingdom Hybrid / WFH Options
I3 Resourcing Limited
Data Platform Engineer - SSIS & T-SQL, Data Factory - Hybrid Data Platform Engineer SSIS & T-SQL, Data Factory, Databricks/Apache Spark London Insurance Market City, London/Hybrid (3 days per week in the office) Permanent £85,000 per annum + benefits + bonus PLEASE ONLY APPLY IF … data function in a London Market Insurance setting Sound understanding of data warehousing concepts ETL/ELTs - SSIS & T-SQL, Data Factory, Databricks/Apache Spark Data modelling Strong communication skills and able to build relationships and trust with stakeholders Data Platform Engineer SSIS & T-SQL, Data Factory … Databricks/Apache Spark London Insurance Market City, London/Hybrid (3 days per week in the office) Permanent £85,000 per annum + benefits + bonus More ❯
Our team values continuous learning, knowledge sharing, and creating inclusive solutions that make a difference. Key Responsibilities Support customers with big data services including Apache Spark, Hive, Presto, and other Hadoop ecosystem components Develop and share technical solutions through various communication channels Contribute to improving support processes and … work week schedule, which may include weekends on rotation. BASIC QUALIFICATIONS - Good depth of understanding in Hadoop Administration, support and troubleshooting (Any two applications: Apache Spark, Apache Hive, Presto, MapReduce, ZooKeeper, HBase, HDFS and Pig.) - Good understanding of Linux and Networking concepts - Intermediate programming/scripting More ❯
learning libraries in one or more programming languages. Keen interest in some of the following areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks More ❯
Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one. Deep experience with distributed computing with Apache Spark and knowledge of Spark runtime internals. Familiarity with CI/CD for production deployments. Working knowledge of MLOps. Design and deployment … data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of More ❯
companies where years-long behemoth projects are the norm, our projects are fast-paced, typically 2 to 4 months long. Most are delivered using Apache Spark/Databricks on AWS/Azure and require you to directly manage the customer relationship alone or in collaboration with a Project … at DATAPAO, meaning that you'll get access to Databricks' public and internal courses to learn all the tricks of Distributed Data Processing, MLOps, Apache Spark, Databricks, and Cloud Migration from the best. Additionally, we'll pay for various data & cloud certifications, you'll get dedicated time for … seniority level during the selection process. About DATAPAO At DATAPAO, we are delivery partners and the preferred training provider for Databricks, the creators of Apache Spark. Additionally, we are Microsoft Gold Partners in delivering cloud migration and data architecture on Azure. Our delivery partnerships enable us to work in More ❯
is made up of a series of components providing the next generation valuation and risk management services. Responsibilities Development of big data technologies like Apache Spark and Azure Databricks Programming complex production systems in Scala, Java, or Python Experience in a platform engineering role on a major cloud … development, build, and runtime environments including experience of Kubernetes Salary - 140-150K/year SKILLS Must have Development of big data technologies like Apache Spark and Azure Databricks Programming complex production systems in Scala, Java, or Python Experience in a platform engineering role on a major cloud More ❯
to non-technical and technical audiences alike. Passion for collaboration, life-long learning, and driving business value through ML. Preferred Experience working with Databricks & Apache Spark to process large-scale distributed datasets. About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide - including … data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn, and Facebook. Benefits At Databricks, we strive to provide comprehensive More ❯
driving business value through ML Company first focus and collaborative individuals - we work better when we work together. Preferred Experience working with Databricks and Apache Spark Preferred Experience working in a customer-facing role About Databricks Databricks is the data and AI company. More than 10,000 organizations … data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of More ❯
driving business value through ML Company first focus and collaborative individuals - we work better when we work together. Preferred Experience working with Databricks and Apache Spark Preferred Experience working in a customer-facing role About Databricks Databricks is the data and AI company. More than 10,000 organizations … data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn, and Facebook. Benefits At Databricks, we strive to provide comprehensive More ❯
or product feature use cases. Experience in building and deploying live software services in production. Exposure to some of the following technologies (or equivalent): Apache Spark, AWS Redshift, AWS S3, Cassandra (and other NoSQL systems), AWS Athena, Apache Kafka, Apache Flink, AWS, and service-oriented architecture. More ❯
AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect - Associate. Experience with Airflow for workflow orchestration. Exposure to big data frameworks such as Apache Spark, Hadoop, or Presto. Hands-on experience with machine learning pipelines and AI/ML data engineering on AWS. Benefits: Competitive salary and More ❯
experience working with relational and non-relational databases (e.g. Snowflake, BigQuery, PostgreSQL, MySQL, MongoDB). Hands-on experience with big data technologies such as Apache Spark, Kafka, Hive, or Hadoop. Proficient in at least one programming language (e.g. Python, Scala, Java, R). Experience deploying and maintaining cloud More ❯
with TensorFlow, PyTorch, Scikit-learn, etc. is a strong plus. You have some experience with large scale, distributed data processing frameworks/tools like Apache Beam, Apache Spark, or even our open source API for it - Scio, and cloud platforms like GCP or AWS. You care about More ❯
SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews More ❯
Scala. AI Frameworks: Extensive experience with AI frameworks and libraries, including TensorFlow, PyTorch, or similar. Data Processing: Expertise in big data technologies such as Apache Spark, Hadoop, and experience with data pipeline tools like Apache Airflow. Cloud Platforms: Strong experience with cloud services, particularly AWS, Azure, or More ❯
and contribute to code reviews and best practices Skills & Experience Strong expertise in Python and SQL for data engineering Hands-on experience with Databricks, Spark, Delta Lake, Delta Live Tables Experience in batch and real-time data processing Proficiency with cloud platforms (AWS, Azure, Databricks) Solid understanding of data More ❯
delivery across a range of projects, including data analysis, extraction, transformation, and loading, data intelligence, data security and proven experience in their technologies (e.g. Spark, cloud-based ETL services, Python, Kafka, SQL, Airflow) You have experience in assessing the relevant data quality issues based on data sources & use cases More ❯
Qualifications: Master's or Ph.D. degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or related fields. Proven experience in Databricks and its ecosystem (Spark, Delta Lake, MLflow, etc.). Strong proficiency in Python and R for data analysis, machine learning, and data visualization. In-depth knowledge of cloud … BigQuery, Redshift, Data Lakes). Expertise in SQL for querying large datasets and optimizing performance. Experience working with big data technologies such as Hadoop, Apache Spark, and other distributed computing frameworks. Solid understanding of machine learning algorithms, data preprocessing, model tuning, and evaluation. Experience in working with LLM More ❯