London, England, United Kingdom Hybrid / WFH Options
Biprocsi Ltd
automation to ensure successful project delivery, adhering to client timelines and quality standards. Implement and manage real-time and batch data processing frameworks (e.g., Apache Kafka, Apache Spark, Google Cloud Dataproc) in line with project needs. Build and maintain robust monitoring, logging, and alerting systems for client … in languages like Python, Bash, or Go to automate tasks and build necessary tools. Expertise in designing and optimising data pipelines using frameworks like Apache Airflow or equivalent. Demonstrated experience with real-time and batch data processing frameworks, including Apache Kafka, Apache Spark, or Google Cloud …
learning libraries in one or more programming languages. Keen interest in some of the following areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks …
London, England, United Kingdom Hybrid / WFH Options
Endava
Key Responsibilities Data Pipeline Development Architect, implement and maintain real-time and batch data pipelines to handle large datasets efficiently. Employ frameworks such as Apache Spark, Databricks, Snowflake or Airflow to automate ingestion, transformation, and delivery. Data Integration & Transformation Work with Data Analysts to understand source-to-target … ensure regulatory compliance (GDPR). Document data lineage and recommend improvements for data ownership and stewardship. Qualifications Programming: Python, SQL, Scala, Java. Big Data: Apache Spark, Hadoop, Databricks, Snowflake, etc. Cloud: AWS (Glue, Redshift), Azure (Synapse, Data Factory, Fabric), GCP (BigQuery, Dataflow). Data Modelling & Storage: Relational (PostgreSQL …
Cincinnati, Ohio, United States Hybrid / WFH Options
LeadStack Inc
least 3+ years in a leadership role. • Proven experience leading product-centric data engineering initiatives in an agile delivery environment. • Expertise in Azure Databricks, Apache Spark, Azure SQL, and other Microsoft Azure services. • Strong programming skills in Python, Scala, and SQL for data processing and API development. • Experience … and implement scalable data pipelines and architectures on Azure Databricks. • Optimize ETL/ELT workflows, ensuring efficiency in data processing, storage, and retrieval. • Leverage Apache Spark, Delta Lake, and Azure-native services to build high-performance data solutions. • Ensure best practices in data governance, security, and compliance within … Azure environments. • Troubleshoot and fine-tune Spark jobs for optimal performance and cost efficiency. Azure SQL & Cloud Migration: • Lead the migration of Azure SQL to Azure Databricks, ensuring a seamless transition of data workloads. • Design and implement scalable data pipelines to extract, transform, and load (ETL/ELT) data …
London, England, United Kingdom Hybrid / WFH Options
Datapao
companies where years-long behemoth projects are the norm, our projects are fast-paced, typically 2 to 4 months long. Most are delivered using Apache Spark/Databricks on AWS/Azure and require you to directly manage the customer relationship alone or in collaboration with a Project … at DATAPAO, meaning that you'll get access to Databricks' public and internal courses to learn all the tricks of Distributed Data Processing, MLOps, Apache Spark, Databricks, and Cloud Migration from the best. Additionally, we'll pay for various data & cloud certifications, you'll get dedicated time for … seniority level during the selection process. About DATAPAO At DATAPAO, we are delivery partners and the preferred training provider for Databricks, the creators of Apache Spark. Additionally, we are Microsoft Gold Partners in delivering cloud migration and data architecture on Azure. Our delivery partnerships enable us to work in …
London, England, United Kingdom Hybrid / WFH Options
Luupli
analytical, problem-solving, and critical thinking skills. 8. Experience with social media analytics and understanding of user behaviour. 9. Familiarity with big data technologies, such as Apache Hadoop, Apache Spark, or Apache Kafka. 10. Knowledge of AWS machine learning services, such as Amazon SageMaker and Amazon Comprehend. 11. Experience with …
London, England, United Kingdom Hybrid / WFH Options
bigspark
Engineer - UK Remote About us bigspark, a UK-based consultancy, delivers next-level data platforms and solutions with a focus on exciting technologies including Apache Spark, Apache Kafka, and projects within Machine Learning, Data Engineering, Streaming, and Data Science. We are looking for a Python Software Engineer …
London, England, United Kingdom Hybrid / WFH Options
Endava Limited
with business objectives. Key Responsibilities Architect, implement, and maintain real-time and batch data pipelines to handle large datasets efficiently. Employ frameworks such as Apache Spark, Databricks, Snowflake, or Airflow to automate ingestion, transformation, and delivery. Data Integration & Transformation Work with Data Analysts to understand source-to-target … ensure regulatory compliance (GDPR). Document data lineage and recommend improvements for data ownership and stewardship. Qualifications Programming: Python, SQL, Scala, Java. Big Data: Apache Spark, Hadoop, Databricks, Snowflake, etc. Data Modelling: Designing dimensional, relational, and hierarchical data models. Scalability & Performance: Building fault-tolerant, highly available data architectures. …
London, England, United Kingdom Hybrid / WFH Options
S.i. Systems
Senior C# (.NET) Developer with search ranking (Apache Solr) experience to build microservices and queries for search indexing in an AWS environment, London. Client: S.i. Systems. Location: London, United Kingdom. Job Category: Other - EU work permit required … Posted: 02.06.2025. Expiry Date: 17.07.2025. Job Description: Our Client is seeking a Senior C# (.NET) Developer with search ranking (Apache Solr) experience to build microservices and queries for search indexing in an AWS environment. Fully remote role that can be worked anywhere within Canada. Must Haves: 8+ years as a Software Developer using Object-Oriented Design (OOD) Most recent 3 years primarily focused on C# (.NET) Experience with Apache Solr including changing ranking order, key areas, adding plug-ins and building custom analyzers Experience working in cloud environment, AWS preferred but open to …
London, England, United Kingdom Hybrid / WFH Options
DEPOP
platform teams at scale, ideally in a consumer or marketplace environment. Deep understanding of distributed systems and modern data ecosystems - including experience with Databricks, Apache Spark, Apache Kafka and DBT. Demonstrated success in managing data platforms at scale, including both batch processing and real-time streaming architectures. …
driving business value through ML Company first focus and collaborative individuals - we work better when we work together. Preferred Experience working with Databricks and Apache Spark Preferred Experience working in a customer-facing role About Databricks Databricks is the data and AI company. More than 10,000 organizations … data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of …
London, England, United Kingdom Hybrid / WFH Options
Apollo Solutions
manipulation and analysis, with the ability to build, maintain, and deploy sequences of automated processes Bonus Experience (Nice to Have) Familiarity with dbt, Fivetran, Apache Airflow, Data Mesh, Data Vault 2.0, Fabric, and Apache Spark Experience working with streaming technologies such as Apache Kafka, Apache …
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Gaming Innovation Group
at: Object oriented programming (Java) Data modelling using any database technologies ETL processes (ETLs are old school, we transfer in memory now) and experience with Apache Spark or Apache NiFi Applied understanding of CI/CD in change management Dockerised applications Used distributed version control systems Excellent team player …
Manchester, England, United Kingdom Hybrid / WFH Options
Gaming Innovation Group
Object-oriented programming (Java) Data modeling using various database technologies ETL processes (transferring data in-memory, moving away from traditional ETLs) and experience with Apache Spark or Apache NiFi Applied understanding of CI/CD in change management Dockerized applications Using distributed version control systems Being an …
London, England, United Kingdom Hybrid / WFH Options
SBS
modelling, design, and integration expertise. Data Mesh Architectures: In-depth understanding of data mesh architectures. Technical Proficiency: Proficient in dbt, SQL, Python/Java, Apache Spark, Trino, Apache Airflow, and Astro. Cloud Technologies: Awareness and experience with cloud technologies, particularly AWS. Analytical Skills: Excellent problem-solving and …
Databricks platform, supporting clients in solving complex data challenges. Your Job's Key Responsibilities Are: Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based … to continuously improve our tools and approaches Profile Essential Skills: 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) Experience with core components of the …
City of Westminster, England, United Kingdom Hybrid / WFH Options
nudge Global Ltd
with cloud data platforms such as GCP (BigQuery, Dataflow) or Azure (Data Factory, Synapse) Expert in SQL, MongoDB and distributed data systems such as Spark, Databricks or Kafka Familiarity with data warehousing concepts and tools (e.g. Snowflake) Experience with CI/CD pipelines, containerization (Docker), and infrastructure-as-code …
London, England, United Kingdom Hybrid / WFH Options
Locus Robotics
and scaling data systems. Highly desired experience with Azure, particularly Lakehouse and Eventhouse architectures. Experience with relevant infrastructure and tools including NATS, Power BI, Apache Spark/Databricks, and PySpark. Hands-on experience with data warehousing methodologies and optimization libraries (e.g., OR-Tools). Experience with log analysis …
London, England, United Kingdom Hybrid / WFH Options
DATAPAO
most complex projects - individually or by leading small delivery teams. Our projects are fast-paced, typically 2 to 4 months long, and primarily use Apache Spark/Databricks on AWS/Azure. You will manage customer relationships either alone or with a Project Manager, and support our pre …