frameworks and practices. Understanding of machine learning workflows and how to support them with robust data pipelines. DESIRABLE LANGUAGES/TOOLS Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks …
write clean, scalable, robust code using Python or similar programming languages. Background in software engineering is a plus. DESIRABLE LANGUAGES/TOOLS Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks …
Strong understanding of DevOps methodologies and principles. Solid understanding of data warehousing, data modeling, and data integration principles. Proficiency in at least one scripting/programming language (e.g., Python, Scala, Java). Experience with SQL and NoSQL databases. Familiarity with data quality and data governance best practices. Strong analytical and problem-solving skills. Excellent communication, interpersonal, and presentation skills. Desired …
GeoServer). Technical Skills: Expertise in big data frameworks and technologies (e.g., Hadoop, Spark, Kafka, Flink) for processing large datasets. Proficiency in programming languages such as Python, Java, or Scala, with a focus on big data frameworks and APIs. Experience with cloud services and technologies (AWS, Azure, GCP) for big data processing and platform deployment. Strong knowledge of data warehousing …
improving the engineering culture at Depop and encourage others around you to follow these values What You'll Bring Advanced knowledge of a high-level programming language (e.g. Python, Scala) with a proven foundation in software engineering best practices - testing, clean coding standards, code reviews, pair programming, automation-first mindset Experience delivering data compliance & privacy solutions ensuring that we uphold data …
Relational Databases and Data Warehousing concepts. Experience of enterprise ETL tools such as Informatica, Talend, DataStage, or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross- and multi-platform experience. Team building and leading. You must be: Willing to work on client sites, potentially for extended periods. Willing to travel for work …
services (AWS, GCP, or Azure) Understanding of data modeling, distributed systems, ETL/ELT pipelines, and streaming architectures Proficiency in SQL and at least one programming language (e.g., Python, Scala, or Java) Demonstrated experience owning complex technical systems end-to-end, from design through production Excellent communication skills with the ability to explain technical concepts to both technical and non-technical …
security, compliance, and data governance standards (e.g. GDPR, RBAC) Mentor junior engineers and guide technical decisions on client engagements ✅ Ideal Experience Strong in Python and SQL, plus familiarity with Scala or Java Experience supporting AI/ML workflows and working with Data Scientists Exposure to cloud platforms: AWS, Azure, or GCP Hands-on with modern data tooling: Spark, Databricks, Snowflake …
City of London, London, United Kingdom Hybrid / WFH Options
Omnis Partners
the payments industry. Experience using Azure Databricks. Experience using containerised services such as Docker/Kubernetes. Experience using IaC tools such as Terraform/Bicep. Experience using programming languages: Scala, PowerShell, YAML. Comprehensive, payments-industry training by in-house and industry experts. Excellent performance-based earning opportunity, including OKR-driven bonuses. Future opportunity for equity, rewarded to high performers. Personal …
Databricks on Azure or AWS. Databricks Components: Proficient in Delta Lake, Unity Catalog, MLflow, and other core Databricks tools. Programming & Query Languages: Strong skills in SQL and Apache Spark (Scala or Python). Relational Databases: Experience with on-premises and cloud-based SQL databases. Data Engineering Techniques: Skilled in Data Governance, Architecture, Data Modelling, ETL/ELT, Data Lakes, Data …
degree or higher in an applicable field such as Computer Science, Statistics, Maths, or a similar Science or Engineering discipline Strong Python and other programming skills (Java and/or Scala desirable) Strong SQL background Some exposure to big data technologies (Hadoop, Spark, Presto, etc.) NICE TO HAVES OR EXCITED TO LEARN: Some experience designing, building and maintaining SQL databases (and …
Azure). Deep understanding of data modeling, distributed systems, streaming architectures, and ETL/ELT pipelines. Proficiency in SQL and at least one programming language such as Python, Scala, or Java. Demonstrated experience owning and delivering complex systems from architecture through implementation. Excellent communication skills with the ability to explain technical concepts to both technical and non-technical …
complex real-world problems Drive innovation in fraud analytics, data engineering, and AI applications What You Bring Proven experience in Java or a similar OOP stack (e.g., C#, Kotlin, Scala) Strong grasp of REST APIs, microservices, and containerisation (Docker/Kubernetes) Agile mindset with strong software engineering fundamentals Passion for innovation, learning, and leading by example Familiarity with DevOps tools …
are looking for data engineers who have a variety of skills, including some of the below. Strong proficiency in at least one programming language (Python, Java, or Scala) Extensive experience with cloud platforms (AWS, GCP, or Azure) Experience with: Data warehousing and lake architectures ETL/ELT pipeline development SQL and NoSQL databases Distributed computing frameworks (Spark, Kinesis …
Mandatory Proficient in either GCP (Google) or AWS cloud Hands-on experience in designing and building data pipelines using Hadoop and Spark technologies. Proficient in programming languages such as Scala, Java, or Python. Experienced in designing, building, and maintaining scalable data pipelines and applications. Hands-on experience with Continuous Integration and Deployment strategies. Solid understanding of Infrastructure as Code tools.
and Capital Markets is a plus. Experience with Big Data technologies (e.g., Kafka, Apache Spark, NoSQL) Knowledge of BI tools like Power BI, MicroStrategy, etc. Exposure to Python and Scala Exposure to the Salesforce ecosystem About S&P Global Ratings At S&P Global Ratings, our analyst-driven credit ratings, research, and sustainable finance opinions provide critical insights that are essential …
to their growth and development Apply agile methodologies (Scrum, pair programming, etc.) to deliver value iteratively Essential Skills & Experience Extensive hands-on experience with programming languages such as Python, Scala, Spark, and SQL Strong background in building and maintaining data pipelines and infrastructure In-depth knowledge of cloud platforms and native cloud services (e.g., AWS, Azure, or GCP) Familiarity with …
London, South East, England, United Kingdom Hybrid / WFH Options
Robert Half
Azure Data Lake Storage Azure SQL Database Solid understanding of data modeling, ETL/ELT, and warehousing concepts Proficiency in SQL and one or more programming languages (e.g., Python, Scala) Exposure to Microsoft Fabric, or a strong willingness to learn Experience using version control tools like Git and knowledge of CI/CD pipelines Familiarity with software testing methodologies and …
GitHub proficiency Strong organizational, analytical, problem-solving, and communication skills Comfort working with remote teams and distributed delivery models Additional skills that are a plus: Programming languages such as Scala, Rust, Go, Angular, React, Kotlin Database management with PostgreSQL Experience with ElasticSearch, observability tools like Grafana and Prometheus What this role can offer Opportunity to deepen understanding of AI and …
Farnborough, Hampshire, South East, United Kingdom
Peregrine
flow issues, optimize performance, and implement error-handling strategies. Optional scripting skills for creating custom NiFi processors. Programming & Data Technologies: Proficiency in Java and SQL. Experience with C# and Scala is a plus. Experience with ETL tools and big data platforms. Knowledge of data modeling, replication, and query optimization. Hands-on experience with SQL and NoSQL databases is desirable. Familiarity …
in building and deploying data solutions using platforms like Databricks (Azure/AWS), Microsoft Fabric, Azure Data Factory, and Azure Synapse. Strong skills in SQL and Apache Spark (Scala or Python). A strong foundation in data disciplines such as governance, architecture, modelling, data lakes, warehousing, MDM, and BI. Advanced skills in SQL and Python, with hands-on …
review standards and agile delivery. Contributing to technical strategy and mentoring other engineers. What You’ll Need: Strong experience in data engineering with expertise in languages such as Python, Scala, or Spark. Proficiency in designing and building data pipelines, working with both structured and unstructured data. Experience with cloud platforms (AWS, Azure, or GCP), using native services for data workloads.