Remote Databricks Jobs in Luton

2 of 2 Remote Databricks Jobs in Luton

Data Platform Engineer

Luton, England, United Kingdom
Hybrid / WFH Options
easyJet
more than 90 million passengers this year, we employ over 10,000 people. It's big-scale stuff and we're still growing.

Job Purpose
With a big investment into Databricks and a large amount of interesting data, this is the chance for you to come and be part of an exciting transformation in the way we store, analyse and use … engineering practices (e.g. TDD, CI/CD).
• Experience with Apache Spark or any other distributed data programming frameworks.
• Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake.
• Experience with cloud infrastructure like AWS or Azure.
• Experience with Linux and containerisation (e.g. Docker, shell scripting).
• Understanding of data modelling and data cataloguing principles.
• Understanding of … end monitoring, quality checks, lineage tracking and automated alerts to ensure reliable and trustworthy data across the platform.
• Experience of building a data transformation framework with dbt.
• Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture.

What you'll get in return
• Competitive base salary
• Up to 20% bonus
• 25 days holiday
• BAYE, SAYE & Performance share
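The "quality checks … and automated alerts" requirement above can be illustrated with a minimal sketch. This is a plain-Python, stdlib-only example of the kind of row-level validation a pipeline might run before loading a batch; in practice a framework like Great Expectations or dbt tests would do this. The column names and rules here are hypothetical, not easyJet's actual schema.

```python
# Hypothetical batch quality check: validate each record and collect
# human-readable failures that could feed an automated alert.

def run_quality_checks(rows):
    """Return a list of failure messages for a batch of records."""
    failures = []
    for i, row in enumerate(rows):
        # passenger_count must be present and non-negative
        if row.get("passenger_count") is None or row["passenger_count"] < 0:
            failures.append(f"row {i}: invalid passenger_count")
        # flight_id must be a non-empty string
        if not row.get("flight_id"):
            failures.append(f"row {i}: missing flight_id")
    return failures

batch = [
    {"flight_id": "EZY123", "passenger_count": 180},
    {"flight_id": "", "passenger_count": -1},
]
print(run_quality_checks(batch))
```

A real pipeline would typically quarantine the failing rows and page an on-call engineer only when the failure rate crosses a threshold, rather than alerting on every bad record.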

Senior Data Platform Engineer

Luton, England, United Kingdom
Hybrid / WFH Options
easyJet
more than 90 million passengers this year, we employ over 10,000 people. It's big-scale stuff and we're still growing.

Job Purpose
With a big investment into Databricks, and with a large amount of interesting data, this is the chance for you to come and be part of an exciting transformation in the way we store, analyse and … solutions.

Job Accountabilities
• Develop robust, scalable data pipelines to serve the easyJet analyst and data science community.
• Highly competent hands-on experience with relevant Data Engineering technologies, such as Databricks, Spark, Spark API, Python, SQL Server, Scala.
• Work with data scientists, machine learning engineers and DevOps engineers to develop and deploy machine learning models and algorithms aimed at addressing … workflow and knowledge of when and how to use dedicated hardware.
• Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Hadoop, Beam).
• Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture.
• Experience with data quality and/or data lineage frameworks like Great Expectations, dbt data quality, OpenLineage or Marquez
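To make the "distributed data programming frameworks" requirement concrete, here is a plain-Python sketch of the map/reduce pattern that Spark's `reduceByKey` distributes across a cluster. The route keys below are made-up illustrative data; real Spark would operate on an RDD or DataFrame partitioned over many workers.

```python
# Local, single-process analogue of Spark's reduceByKey: fold all values
# sharing a key with a combining function.

def reduce_by_key(pairs, fn):
    """Combine (key, value) pairs, merging values per key with fn."""
    acc = {}
    for key, value in pairs:
        acc[key] = fn(acc[key], value) if key in acc else value
    return acc

# e.g. count flights per (hypothetical) route
pairs = [("LTN-AMS", 1), ("LTN-CDG", 1), ("LTN-AMS", 1)]
print(reduce_by_key(pairs, lambda a, b: a + b))
```

The same combine-per-key logic is what Spark parallelises: each partition pre-aggregates locally, then results are shuffled by key and merged, which is why the combining function must be associative.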