Backline Manager (Apache Spark) Amsterdam, Netherlands P-1455 Job Description At Databricks, we are passionate about enabling data teams to solve the world's toughest problems - from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights … Team The Backline Engineering Team serves as the critical bridge between Engineering and Frontline Support. We handle complex technical issues and escalations across the Apache Spark ecosystem and the Databricks Platform stack. With a strong focus on customer success, we are committed to delivering exceptional customer satisfaction by providing deep technical expertise, proactive issue resolution, and continuous improvements to the platform. … long-term roadmap for Backline, focusing on automation, tool development, bug fixing and proactive issue resolution. Take ownership of high-impact customer escalations by leading critical incident response during Databricks runtime outages and major incidents. Participate in weekday and weekend on-call rotations, ensuring fast and effective resolution of urgent issues. Balance real-time escalations with day-to-day planning More ❯
robust data platforms. Innovation Stay current with emerging Azure technologies and best practices in data architecture. Required Skills & Experience Technical Expertise Extensive experience with Azure Data Services: Data Factory, Databricks, Synapse, Data Lake, Azure SQL. Strong understanding of data modeling, data warehousing, and distributed computing. Proficiency in Python, SQL, and Spark for data engineering tasks. Financial Services Domain Proven track More ❯
Stockport, England, United Kingdom Hybrid/Remote Options
Gravitas Recruitment Group (Global) Ltd
smooth releases and integration. Key Skills Data Modelling Python & SQL AWS/Redshift 3–5+ years of experience in data engineering Nice to Have Airflow, Tableau, Power BI, Snowflake, Databricks Data governance/data quality tooling Degree preferred Atlassian/Jira, CI/CD, Terraform Why Join? Career Growth: Clear progression to Tech Lead. Variety: Exposure to multiple squads and More ❯
to architectural decisions, and support the migration of reporting services to Azure. Key Responsibilities: Design, build, and maintain ETL pipelines using Azure Data Factory , Azure Data Lake , Synapse , and Databricks . Design and build a greenfield Azure data platform to support business-critical data needs. Collaborate with stakeholders across the organization to gather and define data requirements. Assist in the More ❯
databases including PostgreSQL. Integrate and automate workflows using DevOps tools (e.g., CI/CD, Jenkins, Git). Collaborate on cloud-based data initiatives using AWS (S3) and Azure Databricks. Contribute to Agile/Scrum development processes and ensure timely delivery of project milestones. Work closely with cross-functional teams across multiple locations and time zones. Mentor team members More ❯
Chicago, Illinois, United States Hybrid/Remote Options
Newcastle Associates, Inc
kinds of data, and collaborating with both technical and non-technical teammates, this role is for you. What You'll Do Build and manage data pipelines in Azure (Data Factory, Databricks, Synapse, etc.). Pull in data from different sources: APIs, databases, cloud apps, even streaming data. Organize, clean, and transform data so it's ready for reporting, dashboards, or advanced analytics. Keep More ❯
Experience with orchestration tools (Airflow/Prefect) and cloud platforms (AWS preferred). Proven experience handling large-scale biological or multi-omics datasets. Bonus: exposure to distributed computing (Spark, Databricks, Kubernetes) or data cataloguing systems. You Are Curious and scientifically minded, with a strong understanding of biological data workflows. Collaborative and able to communicate effectively across computational and experimental teams. More ❯
City of London, London, United Kingdom Hybrid/Remote Options
Robert Half
. Familiarity with Snowflake cost monitoring, governance, replication, and environment management. Strong understanding of data modelling (star/snowflake schemas, SCDs, lineage). Proven Azure experience (Data Factory, Synapse, Databricks) for orchestration and integration. Proficient in SQL for complex analytical transformations and optimisations. Comfortable working in agile teams and using Azure DevOps for CI/CD workflows. Prior experience in More ❯
data quality, governance, and compliance processes Skills & experience required • Proven background leading data engineering teams or projects in a technology-driven business • Expert knowledge of modern cloud data platforms (Databricks, Snowflake, ideally AWS) • Advanced Python programming skills and fluency with the wider Python data toolkit • Strong capability with SQL, Spark, Airflow, Terraform, and workflow orchestration tools • Solid understanding of CI/CD More ❯
experience of: delivering high quality, complex technology solutions in commercial and government organizations; delivering data analytics platforms and enterprise applications; working with SQL, SSRS, SSIS, SSAS, Azure Data Factory, Databricks, Python, DAX; senior level understanding of health care management systems; and working with senior executives. Deep expertise in one or more major DBMS platforms (e.g., Microsoft SQL Server, Oracle, PostgreSQL More ❯
City of London, London, United Kingdom Hybrid/Remote Options
Higher - AI recruitment
and SQL Hands-on experiences with Data Architecture, including: Cloud platforms and orchestration tools (e.g. Dagster, Airflow) AI/MLOps: Model deployment, monitoring, lifecycle management. Big Data Processing: Spark, Databricks, or similar. Bonus: Knowledge Graph engineering, graph databases, ontologies. Located in London And ideally you... Are a zero-to-one builder who thrives on autonomy and ambiguity. Are a strong More ❯
basis your varied role will include, but will not be limited to: Design, build, and optimize high-performance data pipelines and ETL workflows using tools like Azure Synapse, Azure Databricks or Microsoft Fabric. Implement scalable solutions to ingest, store, and transform vast datasets, ensuring data availability and quality across the organization. Write clean, efficient, and reusable Python code tailored to More ❯
Letchworth Garden City, Hertfordshire, United Kingdom Hybrid/Remote Options
Willmott Dixon Group
with hands-on experience in relational and dimensional data modelling. Modern Data Engineering: Proven ability to design and deliver scalable solutions using tools like Microsoft Fabric (strongly preferred), Synapse, Databricks, or similar. Supporting Know-How: Solid grasp of data architecture, governance and security. DevOps & Cloud Fluency: Practical experience with CI/CD pipelines, APIs, and cloud tooling (e.g. Azure DevOps More ❯
12+ years of experience in a customer-facing technical role and working experience in: Distributed systems and massively parallel processing technologies and concepts such as Snowflake, Teradata, Spark, Databricks, Hadoop, Oracle, SQL Server, and performance optimisation Data strategies and methodologies such as Data Mesh, Data Vault, Data Fabric, Data Governance, Data Management, Enterprise Architecture Data organisation and modelling concepts More ❯
Automation Scripting Languages (Python, PowerShell) Reporting tools (Power BI, SSRS or Tableau) Microsoft Fabric/Data Engineer - Modern Data Architectures e.g. lakehouse, data mesh ETL/ELT Tools (dbt, Databricks) Source Control (GitHub, Tortoise SVN) Batch Scheduling (Control-M, Autosys) CI/CD Education/Qualifications: Degree educated and/or equivalent experience. Personal Requirements: Excellent communication skills Results driven, with More ❯
Data Engineer (Databricks) – AI/Data Consulting Firm About Us We are an ambitious consulting firm focused on delivering cutting-edge solutions in data and AI. Our mission is to empower organisations to unlock the full potential of their data by leveraging platforms like Databricks alongside other emerging technologies. As a Data Engineer, you will play a crucial role in … building and optimising data solutions, ensuring scalability, performance, and reliability for our clients' complex data challenges. The Role As a Data Engineer (Databricks), you will be responsible for designing, implementing, and optimising large-scale data processing systems. You will work closely with clients, data scientists, and solution architects to ensure efficient data pipelines, reliable infrastructure, and scalable analytics capabilities. This … practices, coding standards, and documentation to improve data engineering processes. Mentor junior engineers and support knowledge-sharing across teams. Key Responsibilities: Design, build, and maintain scalable data pipelines using Databricks, Spark, and Delta Lake. Develop efficient ETL/ELT workflows to process large volumes of structured and unstructured data. Implement data governance, security, and compliance standards. Work with cloud platforms More ❯
Overview Databricks is the data and AI company. We enable data teams to solve bold problems by building and running a data and AI infrastructure platform used by thousands of customers. The Lakeflow Jobs team focuses on data-aware orchestration within the Databricks Data Intelligence Platform, powering ETL, AI/ML, BI, and streaming workloads with high reliability. As a … of major orchestration tools (Airflow, Dagster, Prefect, etc.) Technical understanding of data pipelines (Spark, dbt, Lakeflow Pipelines) Strong data analysis and operationalization skills (SQL, Python, building operational dashboards) About Databricks Databricks is a data and AI company trusted by more than 10,000 organizations worldwide. We unify and democratize data, analytics, and AI through the Databricks Data Intelligence Platform. We … are headquartered in San Francisco with offices globally. We were founded by the creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. Benefits Databricks strives to provide comprehensive benefits and perks that meet the needs of our employees. For region-specific details, please visit our benefits page. Our Commitment to Diversity and Inclusion Databricks is committed to fostering a diverse More ❯
for designing, developing, and implementing data solutions on the Microsoft Azure platform, including data pipelines and architectures Azure services: Expertise in Azure services such as Azure Data Factory, Azure Databricks, and Azure SQL Database. Data pipeline development: Experience with designing and building ETL/ELT pipelines. Projects include: AI, Dashboards, Cloud Infrastructure, Data pipelines, Data Lake, Data models Experience More ❯
with Palantir Foundry is a must • Strong background in Python, Java, or Scala. • Proficiency in SQL, data processing, and ETL/ELT design. • Experience with cloud-native data platforms (Databricks, Spark, etc.). • Familiarity with CI/CD pipelines and automated testing. The Package: • £70,000 – £120,000 base • Bonus scheme • Pension & benefits package To hear more about this Data More ❯