frameworks and practices. Understanding of machine learning workflows and how to support them with robust data pipelines. DESIRABLE LANGUAGES/TOOLS Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks …
write clean, scalable, robust code using Python or similar programming languages. Background in software engineering a plus. …
technical skills, knowledge and professional qualifications provided: Databases – has a deep understanding of relational databases. Uses DevOps methods and works in an Agile environment. Programming – expertise in Python, Java, Scala, or other programming languages used to build data pipelines, implement data transformations, and automate data workflows; has expertise in SQL. AI and Machine Learning knowledge as it applies to end …
Strong understanding of DevOps methodologies and principles. Solid understanding of data warehousing, data modeling, and data integration principles. Proficiency in at least one scripting/programming language (e.g., Python, Scala, Java). Experience with SQL and NoSQL databases. Familiarity with data quality and data governance best practices. Strong analytical and problem-solving skills. Excellent communication, interpersonal, and presentation skills. Desired …
GeoServer). Technical Skills: Expertise in big data frameworks and technologies (e.g., Hadoop, Spark, Kafka, Flink) for processing large datasets. Proficiency in programming languages such as Python, Java, or Scala, with a focus on big data frameworks and APIs. Experience with cloud services and technologies (AWS, Azure, GCP) for big data processing and platform deployment. Strong knowledge of data warehousing …
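To ground the Spark and big-data-framework requirements that recur across these listings, here is a minimal PySpark batch sketch; the bucket paths and column names are hypothetical placeholders, not taken from any listing.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal PySpark batch job: read raw events, aggregate, write partitioned Parquet.
# Paths and column names are hypothetical placeholders.
spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

events = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical source

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"  # hypothetical sink
)
```

The same read-transform-aggregate-write shape applies whether the storage layer is S3, ADLS, or GCS, which is why listings tend to name the framework rather than the cloud.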
Relational Databases and Data Warehousing concepts. Experience of enterprise ETL tools such as Informatica, Talend, DataStage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross- and multi-platform experience. Team building and leading. You must be: Willing to work on client sites, potentially for extended periods. Willing to travel for work …
West London, London, United Kingdom (Hybrid / WFH Options)
Young's Employment Services Ltd
knowledge of tools such as Apache Spark, Kafka, Databricks, dbt or similar. Experience building, defining, and owning data models, data lakes, and data warehouses. Programming proficiency in Python, PySpark, Scala or Java. Experience operating in a cloud-native environment (e.g. Fabric, AWS, GCP, or Azure). Excellent stakeholder management and communication skills. A strategic mindset, with a practical approach to …
degree or higher in an applicable field such as Computer Science, Statistics, Maths or a similar Science or Engineering discipline. Strong Python and other programming skills (Java and/or Scala desirable). Strong SQL background. Some exposure to big data technologies (Hadoop, Spark, Presto, etc.). NICE TO HAVES OR EXCITED TO LEARN: Some experience designing, building and maintaining SQL databases (and …
are looking for data engineers who have a variety of skills, including some of the below. Strong proficiency in at least one programming language (Python, Java, or Scala). Extensive experience with cloud platforms (AWS, GCP, or Azure). Experience with: data warehousing and lake architectures; ETL/ELT pipeline development; SQL and NoSQL databases; distributed computing frameworks (Spark, Kinesis …
technical skills and knowledge: Proficiency in Programming Languages: Strong proficiency in Python is essential, along with experience in Bash/Shell scripting. Familiarity with additional languages such as Java, Scala, R, or Go is a plus. Understanding of Machine Learning Fundamentals: A solid understanding of machine learning concepts, including algorithms, data pre-processing, model evaluation, and training. Familiarity with ML …
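To make the machine learning fundamentals bullet concrete (pre-processing, training, model evaluation), here is a short scikit-learn sketch; the synthetic dataset and model choice are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic data stands in for a real dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Pre-processing and model chained in one pipeline, then trained and evaluated.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000))
model.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```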
Mandatory: Proficient in either GCP (Google Cloud) or AWS. Hands-on experience in designing and building data pipelines using Hadoop and Spark technologies. Proficient in programming languages such as Scala, Java, or Python. Experienced in designing, building, and maintaining scalable data pipelines and applications. Hands-on experience with Continuous Integration and Deployment strategies. Solid understanding of Infrastructure as Code tools. …
London, South East, England, United Kingdom (Hybrid / WFH Options)
Robert Half
Azure Data Lake Storage, Azure SQL Database. Solid understanding of data modeling, ETL/ELT, and warehousing concepts. Proficiency in SQL and one or more programming languages (e.g., Python, Scala). Exposure to Microsoft Fabric, or a strong willingness to learn. Experience using version control tools like Git and knowledge of CI/CD pipelines. Familiarity with software testing methodologies and …
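As a small illustration of the SQL-plus-Python combination this listing asks for, here is a sketch of a CSV-to-table load; the file, columns, and table name are hypothetical, and SQLite stands in for Azure SQL Database to keep the example self-contained.

```python
import sqlite3

import pandas as pd

# Minimal ETL sketch: extract a CSV, apply a light transform, load into SQL.
# File name, column names, and table are hypothetical; SQLite stands in for
# Azure SQL Database so the example runs anywhere.
df = pd.read_csv("sales.csv")                        # extract (hypothetical file)
df["order_date"] = pd.to_datetime(df["order_date"])  # transform: parse dates
df = df.dropna(subset=["order_id"])                  # transform: drop bad rows

with sqlite3.connect("warehouse.db") as conn:        # load
    df.to_sql("sales_clean", conn, if_exists="replace", index=False)
```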
data platform that powers smarter decisions, better insights, and streamlined operations. Key skills and responsibilities: * Proven experience in data engineering and data platform development * Strong programming skills in Python, Java, Scala, or similar * Advanced SQL and deep knowledge of relational databases * Hands-on experience with ETL tools and building robust data pipelines * Familiarity with data science, AI/ML integration, and …
serving layers. Experience with data lakehouse architecture, schema design, and GDPR-compliant solutions. Working knowledge of DevOps tools and CI/CD processes. Bonus Points For: Development experience in Scala or Java. Familiarity with the Cloudera, Hadoop, Hive, and Spark ecosystem. Understanding of data privacy regulations, including GDPR, and experience working with sensitive data. Ability to learn and adapt to new technologies …
driven APIs, and designing database schemas and queries to meet business requirements. A passion for, and proven background in, picking up and adopting new technologies on the fly. Exposure to Scala, or functional programming generally. Exposure to highly concurrent, asynchronous backend technologies, such as Ktor, http4k, http4s, Play, RxJava, etc. Exposure to DynamoDB or similar NoSQL databases, such as Cassandra, HBase …
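The stacks named above are JVM ones; as a language-neutral sketch of the highly concurrent, non-blocking pattern they share, here is a short asyncio example with a hypothetical fetch stub.

```python
import asyncio

# Sketch of the concurrent, asynchronous backend pattern that Ktor, http4s,
# RxJava, etc. embody, shown here with asyncio. fetch() is a hypothetical
# stand-in for a non-blocking I/O call (HTTP request, DB query, ...).
async def fetch(resource: str) -> str:
    await asyncio.sleep(0.1)  # simulate non-blocking I/O latency
    return f"payload from {resource}"

async def main() -> None:
    # Launch all three calls concurrently instead of one after another.
    results = await asyncio.gather(*(fetch(r) for r in ("users", "orders", "prices")))
    print(results)

asyncio.run(main())
```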
knowledge sharing across the team. What We're Looking For: Strong hands-on experience across AWS Glue, Lambda, Step Functions, RDS, Redshift, and Boto3. Proficient in one of Python, Scala, or Java, with strong experience in big data technologies such as Spark, Hadoop, etc. Practical knowledge of building real-time event streaming pipelines (e.g., Kafka, Spark Streaming, Kinesis). Proven …
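For the real-time event streaming skill this listing highlights, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic; the broker, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is available to the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal Spark Structured Streaming job: consume a Kafka topic, land as Parquet.
# Broker address, topic name, and paths are hypothetical placeholders.
spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
)

# Kafka delivers bytes; cast the payload to a string before writing downstream.
parsed = stream.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://example-bucket/stream/orders/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
    .start()
)
query.awaitTermination()
```

The checkpoint location is what gives the pipeline exactly-once file output on restart, which is the property interviewers usually probe when this skill appears on a spec.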
large-scale data processing systems with data tooling such as Spark, Kafka, Airflow, dbt, Snowflake, Databricks, or similar. Strong programming skills in languages such as SQL, Python, Go, or Scala. Demonstrable, effective use of AI tooling in your development process. A growth mindset and eagerness to work in a fast-paced, mission-driven environment. Good …
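Airflow appears throughout these listings as the orchestration tool of choice; a minimal DAG sketch follows, with hypothetical task names and stubbed task logic (the API shown matches Airflow 2.x).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Minimal Airflow DAG: a daily extract -> transform dependency chain.
# Task bodies are stubs; dag_id, names, and schedule are hypothetical.
def extract():
    print("pull raw data from the source system")

def transform():
    print("clean and model the extracted data")

with DAG(
    dag_id="daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform runs only after extract succeeds
```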
London, South East, England, United Kingdom (Hybrid / WFH Options)
Noir
Data Engineer - Investment Banking - London/Hybrid (Data Engineer, SQL Data Engineer, Java, Python, Spark, Scala, SQL, Snowflake, OO programming, Databricks, Data Fabric, design patterns, SOLID principles, ETL, unit testing, NUnit, MSTest, JUnit, Microservices Architecture, Continuous Integration, Azure DevOps, AWS, Jenkins, Agile) We have several fantastic new roles including a Data Engineer position to …
in motivated teams, collaborating effectively and taking pride in your work. Strong problem-solving skills, viewing technology as a means to solve challenges. Proficiency in a programming language (e.g., Scala, Python, Java, C#) with an understanding of domain modelling and application development. Knowledge of data management platforms (SQL, NoSQL, Spark/Databricks). Experience with modern engineering tools (Git, CI/…
understanding of various data engineering technologies, including Apache Spark, Databricks and Hadoop. Strong understanding of agile ways of working. Up-to-date understanding of various programming languages, including Python, Scala, R and SQL. Up-to-date understanding of various databases and cloud-based datastores, including SQL and NoSQL. Up-to-date understanding of cloud platforms, including AWS and/or …
Position Level: Junior/Mid. Position Type: Full-time (Onsite). About Intelmatix: Intelmatix is a deep-tech artificial intelligence (AI) company founded in July 2021 by a group of MIT scientists with the vision of transforming enterprises to become cognitive. …
culture of innovation and challenge. We have over 300 tech experts across our teams, all using the latest tools and technologies including Astro, Cloudflare, Docker, Kubernetes, AWS, Kafka, Java, Scala, Python, .NET Core, Node.js and MongoDB. There's something for everyone. We're a place of opportunity. You'll have the tools and autonomy to drive your own career, supported by a …
broad range of problems using your technical skills. Demonstrable experience of using strong communication and stakeholder management skills when engaging with customers. Significant experience of coding in Python, and in Scala or Java. Experience with big data processing tools such as Hadoop or Spark. Cloud experience, GCP specifically in this case, including services such as Cloud Run, Cloud Functions, BigQuery, GCS …
of performance. Your Experience – Must have: 4+ years of industry experience in applied ML, with recent experience in AI/LLM systems. Strong proficiency with Python, and familiarity with Scala, Go, or Rust. Cloud platform expertise with AWS, GCP, or Azure, including AI-specific services (SageMaker, Vertex AI, Azure AI). Databricks platform experience with Unity Catalog, MLflow, and Databricks Machine …
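For the MLflow experience named here, a minimal experiment-tracking sketch; the experiment name, parameters, and metric value are hypothetical placeholders.

```python
import mlflow

# Minimal MLflow tracking sketch: log parameters and a metric for one run.
# Experiment name, parameter values, and the metric are hypothetical.
mlflow.set_experiment("demo-experiment")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 10)
    # In a real job this value would come from model evaluation.
    mlflow.log_metric("val_accuracy", 0.87)
```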
practice. We are looking for experience in the following skills: Relevant work experience in data science, machine learning, and business analytics. Practical experience in coding languages, e.g., Python, R, Scala, etc. (Python preferred). Proficiency in database technologies, e.g., SQL, ETL, NoSQL, DW, and big data technologies, e.g., PySpark, Hive, etc. Experience working with structured and unstructured data, e.g. …