learning libraries in one or more programming languages. Keen interest in some of the following areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks.
driving business value through ML Company-first focus and collaborative individuals - we work better when we work together. Preferred Experience working with Databricks and Apache Spark Preferred Experience working in a customer-facing role About Databricks Databricks is the data and AI company. More than 10,000 organizations … data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn, and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of …
experience working with relational and non-relational databases (e.g. Snowflake, BigQuery, PostgreSQL, MySQL, MongoDB). Hands-on experience with big data technologies such as Apache Spark, Kafka, Hive, or Hadoop. Proficient in at least one programming language (e.g. Python, Scala, Java, R). Experience deploying and maintaining cloud …
and scaling data systems. Highly desired experience with Azure, particularly Lakehouse and Eventhouse architectures. Experience with relevant infrastructure and tools including NATS, Power BI, Apache Spark/Databricks, and PySpark. Hands-on experience with data warehousing methodologies and optimization libraries (e.g., OR-Tools). Experience with log analysis …
delivery across a range of projects, including data analysis, extraction, transformation, and loading, data intelligence, data security, and proven experience in their technologies (e.g. Spark, cloud-based ETL services, Python, Kafka, SQL, Airflow). You have experience in assessing the relevant data quality issues based on data sources & use cases …
team-oriented environment. Preferred Skills: Experience with programming languages such as Python or R for data analysis. Knowledge of big data technologies (e.g., Hadoop, Spark) and data warehousing concepts. Familiarity with cloud data platforms (e.g., Azure, AWS, Google Cloud) is a plus. Certification in BI tools, SQL, or related …
East London, London, United Kingdom Hybrid / WFH Options
Asset Resourcing
programming languages such as Python or Java. Understanding of data warehousing concepts and data modeling techniques. Experience working with big data technologies (e.g., Hadoop, Spark) is an advantage. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Responsibilities: Design, build and maintain efficient and scalable data pipelines … Benefits: Enhanced leave - 38 days inclusive of 8 UK Public Holidays …
Snowflake. Understanding of cloud platform infrastructure and its impact on data architecture. Data Technology Skills: A solid understanding of big data technologies such as Apache Spark, and knowledge of Hadoop ecosystems. Knowledge of programming languages such as Python, R, or Java is beneficial. Exposure to ETL/ELT …
unstructured datasets. Engineering best practices and standards. Experience with data warehouse software (e.g. Snowflake, Google BigQuery, Amazon Redshift). Experience with data tools: Hadoop, Spark, Kafka, etc. Code versioning (GitHub integration and automation). Experience with scripting languages such as Python or R. Working knowledge of message queuing and stream processing. Experience with Apache Spark or similar technologies. Experience with Agile and Scrum methodologies. Familiarity with dbt and Airflow is an advantage. Experience working in a start-up or scale-up environment. Experience working in the fields of financial technology, traditional financial services, or blockchain/cryptocurrency.
London, England, United Kingdom Hybrid / WFH Options
Focus on SAP
Hybrid Languages: English Key skills: 5+ years as a Data Engineer. Proven expertise in Databricks (including Delta Lake, Workflows, Unity Catalog). Strong command of Apache Spark, SQL, and Python. Hands-on experience with cloud platforms (AWS, Azure, or GCP). Understanding of modern data architectures (e.g., Lakehouse, ELT … Right to work in the UK is a must (no sponsorship available). Responsibilities: Design, build, and maintain scalable and efficient data pipelines using Databricks and Apache Spark. Collaborate with Data Scientists, Analysts, and Product teams to understand data needs and deliver clean, reliable datasets. Optimize data workflows and storage (Delta …
London, South East England, United Kingdom Hybrid / WFH Options
Careerwise
or similar role. Proficiency with Databricks and its ecosystem. Strong programming skills in Python, R, or Scala. Experience with big data technologies such as Apache Spark and Databricks. Knowledge of SQL and experience with relational databases. Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud). Strong analytical and …
technologies like Docker and Kubernetes. Ideally, some familiarity with data workflow management tools such as Airflow, as well as big data technologies such as Apache Spark/Ignite or other caching and analytics technologies. A working knowledge of FX markets and financial instruments would be beneficial. What we …
/Experience: 1. GIS experience and proficiency with geospatial libraries (e.g., GeoPandas, QGIS, PostGIS). 2. Familiarity with Databricks and distributed computing frameworks (e.g. Spark). 3. Exposure to CI/CD pipelines and workflow automation. 4. Experience with data visualisation tools such as Tableau, Power BI, or equivalent.
automation. Proficiency in building and maintaining batch and streaming ETL/ELT pipelines at scale, employing tools such as Airflow, Fivetran, Kafka, Iceberg, Parquet, Spark, and Glue to develop end-to-end data orchestration, leveraging AWS services to ingest, transform, and process large volumes of structured and unstructured data …
environments with AI/ML components, or interest in learning data workflows for ML applications. Bonus if you have exposure to Kafka, Spark, or Flink. Experience with data compliance regulations (GDPR). What you can expect from us: Opportunity for annual bonuses, Medical Insurance, Cycle to Work scheme …
London, Tower, United Kingdom Hybrid / WFH Options
Intec Select Ltd
data services. DataOps Knowledge: Experienced with CI/CD for data workflows, version control (e.g., Git), and automation in data engineering. Desirable: Experience with Apache Spark; Familiarity with machine learning frameworks and libraries; Understanding of data governance and compliance; Strong problem-solving and analytical skills; Excellent communication and …
Azure or AWS. Strong experience designing and delivering data solutions in Databricks. Proficient with SQL and Python. Experience using Big Data technologies such as Apache Spark or PySpark. Great communication skills, effectively engaging with senior stakeholders. Nice to haves: Azure/AWS Data Engineering certifications; Databricks certifications. What …