What You Bring: 2+ years in data engineering or related roles. Bachelor’s in CS, Engineering, Mathematics, Finance, etc. Proficiency in Python, SQL, and one or more of: R, Java, Scala. Experience with relational/NoSQL databases (e.g., PostgreSQL, MongoDB). Familiarity with big data tools (Hadoop, Spark, Kafka), cloud platforms (Azure, AWS, GCP), and workflow tools (Airflow, Luigi). Bonus: experience with …
Design and maintain ETL pipelines for diverse data sources (APIs, databases, file systems). Ensure reliability, scalability, and performance. Data Transformation & Processing: Implement data transformation using Spark (PySpark/Scala) and related tools. Conduct data cleaning, validation, and enrichment. Azure Databricks Implementation: Work with Unity Catalog, Delta Lake, Spark SQL. Optimize and follow best practices in Databricks development. Program …
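The clean/validate/enrich cycle this listing describes can be sketched in plain Python, independent of Spark. The record schema, field names, and validation rules below are hypothetical, for illustration only; in PySpark the same logic would be expressed with `filter` and `withColumn` on a DataFrame.

```python
def transform(records):
    """Clean, validate, and enrich raw records (hypothetical schema)."""
    out = []
    for rec in records:
        # Clean: normalise whitespace and casing on the name field
        name = (rec.get("name") or "").strip().title()
        # Validate: drop rows missing a primary key or with a non-numeric amount
        if not rec.get("id"):
            continue
        try:
            amount = float(rec["amount"])
        except (KeyError, TypeError, ValueError):
            continue
        # Enrich: add a derived flag field
        out.append({"id": rec["id"], "name": name,
                    "amount": amount, "is_large": amount >= 1000})
    return out
```

Keeping each step a pure function like this makes the pipeline easy to unit-test before it is wired into a scheduler.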
etc.). Built and maintained an ML model or pipeline in production environments in public/hybrid cloud infrastructure. Coding proficiency in at least one modern programming language (Java, Scala, Python, etc.). Strong background in data structures and algorithms. Experience working with at least one machine learning framework (TensorFlow, PyTorch, XGBoost, etc.). Experience working with big data technologies (Spark …
and resolve performance bottlenecks and platform-related issues. Experience with containerization (Docker, Kubernetes) and automation tools such as Ansible and Terraform is a plus. Strong scripting and programming skills (Python, Scala, Bash, SQL). Knowledge of data governance, GDPR, and compliance frameworks is advantageous. Work closely with DevOps, Data Engineering, and Infrastructure teams to streamline operations. Provide technical mentorship and knowledge …
and deliver high-quality data solutions. Automation: Implement automation processes and best practices to streamline data workflows and reduce manual interventions. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera – Spark, Hive, Impala, HDFS, Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering …
in a leadership or managerial role. Strong background in financial services, particularly in market risk, counterparty credit risk, or risk analytics. Proficiency in modern programming languages (e.g., Java, Python, Scala) and frameworks. Experience with cloud platforms (AWS, Azure, or GCP). Deep understanding of the software development lifecycle (SDLC), agile methodologies, and DevOps practices. Preferred Skills: Strong communication and stakeholder management …
Leeds, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
technical standards and contribute to solution development. Requirements Expertise in data engineering, analytics, architecture, or security using at least one cloud platform (AWS, Azure, GCP). Proficiency in Python, Scala, Spark, SQL. Experience with database technologies like Oracle, MySQL, MongoDB from an application development perspective. Leadership in designing, building, and maintaining data pipelines and infrastructure. Strong problem-solving skills with …
of data engineering experience - Experience with data modeling, warehousing and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) - Knowledge of AWS Infrastructure - Knowledge of writing and optimizing SQL queries in a business environment with large-scale, complex datasets …
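The kind of business-environment SQL these listings ask for can be exercised end-to-end with Python's built-in `sqlite3`. The `orders` table and its data are hypothetical; the query itself is plain ANSI-style aggregation that carries over to PL/SQL, HiveQL, or SparkSQL.

```python
import sqlite3

# Hypothetical orders table for demonstration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "EMEA", 120.0), (2, "EMEA", 80.0), (3, "APAC", 50.0)])

# Aggregate revenue per region, largest first
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM orders GROUP BY region ORDER BY revenue DESC"
).fetchall()
print(rows)  # [('EMEA', 200.0), ('APAC', 50.0)]
```

On large-scale datasets the same query would be tuned with indexes and by checking the engine's query plan (`EXPLAIN`), which is what "optimizing SQL queries" usually means in practice.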
experience working in enterprise data warehouse and analytics technologies Hands-on experience building and training machine learning models. Experience writing software in one or more languages such as Python, Scala, R, or similar with strong competencies in data structures, algorithms, and software design. Experience working with recommendation engines, data pipelines, or distributed machine learning. Experience working with deep learning frameworks …
web/mobile applications or platform with either Java/J2EE or .NET tech stack and database technologies such as Oracle, MySQL, etc. Exposure to polyglot programming languages like Scala, Python and Golang will be a plus. Ability to read/write code and expertise with various design patterns. Have used NoSQL databases such as MongoDB, Cassandra, etc. Work on …
London, England, United Kingdom Hybrid / WFH Options
VIOOH
Experience managing AWS or GCP cloud environments. Experience with monitoring tools such as Datadog, Kibana, Grafana, or Prometheus. Proficiency with Terraform, Docker, and Kubernetes. Software development experience in Java, Scala, or Python. Desirable (but not required): Experience with Apache Spark jobs and pipelines. Knowledge of functional programming languages. Understanding of database design concepts. Ability to write and analyze SQL queries. …
the estimated effort and technical implications of user stories and user journeys. Coaching and mentoring team members. MINIMUM (ESSENTIAL) REQUIREMENTS: Strong software development experience in one of Java, Scala, or Python. Software development experience with data-processing platforms from vendors such as AWS, Azure, GCP, Databricks. Experience of developing substantial components for large-scale data processing solutions and deploying into …
on industry trends and emerging technologies. Minimum skills and experience: Leading small teams of data engineers. Designing and building Databricks data products. Strong programming skills in Python (PySpark preferred), Scala, or SQL. Experience with enterprise-level data pipelines, data integration, and ETL processes. Proficiency in production-grade coding, automated testing, and CI/CD deployment (e.g., GitHub Actions, Jenkins). …
AWS, GCP, or Azure for real-time data ingestion and storage. Ability to optimise and refactor existing data pipelines for greater performance and reliability. Programming skills in Python, Java, Scala, or a similar language. Proficiency in database technologies (SQL, NoSQL, time-series databases) and data modelling. Strong understanding of data pipeline orchestration tools (e.g., Apache Airflow, Kubernetes). You thrive …
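At its core, the pipeline orchestration these listings mention (Airflow, Luigi, and similar) means running tasks in dependency order over a DAG. A minimal stdlib sketch using `graphlib`, with hypothetical task names standing in for real pipeline stages:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline DAG: each task maps to the set of tasks it depends on.
# Orchestrators like Airflow add scheduling, retries, and monitoring on top of
# exactly this kind of dependency-ordered execution.
deps = {
    "extract": set(),
    "clean":   {"extract"},
    "enrich":  {"clean"},
    "load":    {"enrich"},
    "report":  {"load"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # ['extract', 'clean', 'enrich', 'load', 'report']
```

`TopologicalSorter` also detects cycles (raising `CycleError`), which is the same guarantee an orchestrator relies on when validating a DAG before scheduling it.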
London, England, United Kingdom Hybrid / WFH Options
Simple Machines
driven APIs, and designing database schemas and queries to meet business requirements. A passion for, and proven background in, picking up and adopting new technologies on the fly. Exposure to Scala, or functional programming generally. Experience with highly concurrent, asynchronous backend technologies such as Ktor, http4k, http4s, Play, RxJava, etc. Experience with DynamoDB or similar NoSQL databases, such as Cassandra, HBase …
and working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Spark/Scala, Python, etc. Major Responsibilities: - Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business - Design, develop, and …
Leeds, England, United Kingdom Hybrid / WFH Options
KPMG UK
it? Expertise in data engineering/analytics/architecture/security using native technologies of at least one cloud platform (AWS, Azure, GCP). Expertise in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective – Oracle, MySQL, MongoDB, etc. Expertise in leading the design, build and maintenance of data pipelines and …
of data engineering experience - Experience with data modeling, warehousing and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience with an ETL tool such as Informatica …
London, England, United Kingdom Hybrid / WFH Options
Simple Machines
days remote (WFH). A passion for, and proven background in, picking up and adopting new technologies on the fly. Exposure to backend server development using Kotlin. Exposure to Scala, or functional programming generally. Experience with highly concurrent, asynchronous backend technologies such as Ktor, http4k, http4s, Play, RxJava, etc. Experience with DynamoDB or similar NoSQL databases, such as Cassandra, HBase, BigTable …