Ability to write clean, scalable, maintainable code in Python with a good understanding of software engineering concepts and patterns. Proficiency in other languages such as Scala, Java, C#, or C++ is an advantage. Proven record of building and maintaining data pipelines deployed in at least one of the big 3 cloud ML …
Hands-on experience with big data technologies such as Apache Spark, Kafka, Hive, or Hadoop. Proficient in at least one programming language (e.g., Python, Scala, Java, R). Experience deploying and maintaining cloud infrastructure (e.g., AWS, GCP, or Azure). Familiarity with data modeling and warehousing concepts, and dimensional modeling …
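To make the dimensional-modeling requirement above concrete, here is a minimal star-schema sketch using Python's built-in sqlite3 so it runs anywhere; the table and column names are illustrative inventions, not tied to any particular listing.

```python
import sqlite3

# In-memory database so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables hold descriptive attributes used to slice the facts.
cur.execute("""
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,
        customer_name TEXT,
        region TEXT
    )
""")
cur.execute("""
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,
        full_date TEXT,
        month TEXT,
        year INTEGER
    )
""")

# The fact table stores measures plus foreign keys to each dimension,
# forming the "star" around which analytical queries pivot.
cur.execute("""
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        date_key INTEGER REFERENCES dim_date(date_key),
        quantity INTEGER,
        amount REAL
    )
""")
conn.commit()
```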
… modern data pipelines and ETL (extract, transform, load) processes using tools such as Apache Kafka and Apache NiFi. Proficient in programming languages like Java, Scala, or Python. Experience or expertise using, managing, and/or testing API gateway tools and REST APIs. Experience in traditional database and data warehouse products …
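As an illustration of the REST API testing mentioned above, a minimal Python sketch using the requests library; the gateway URL, endpoint path, and response fields are hypothetical.

```python
import requests

BASE_URL = "https://api.example.com/v1"  # hypothetical gateway endpoint

def test_get_order(order_id: str) -> None:
    """Smoke-test a GET endpoint exposed behind an API gateway."""
    resp = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
    # Assert on status code and response shape rather than exact values.
    assert resp.status_code == 200, f"unexpected status {resp.status_code}"
    body = resp.json()
    assert body.get("order_id") == order_id

if __name__ == "__main__":
    test_get_order("12345")
```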
… experience with big data technology, systems, and tools such as AWS, Hadoop, Hive, and Snowflake. Expertise with common software engineering languages such as Python, Scala, Java, and SQL, and a proven ability to learn new programming languages. Experience with workflow orchestration tools such as Airflow. Detailed problem-solving approach, coupled with …
… and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. …
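A minimal PySpark sketch of the pattern just described: a validation filter (the data quality check) followed by cleaning, enrichment, and aggregation. The paths and column names are placeholders, not any specific schema.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("quality-and-transform").getOrCreate()

# Illustrative input; the path and columns are placeholders.
orders = spark.read.parquet("s3://bucket/raw/orders/")

# Data quality check: keep only rows that satisfy basic validation rules.
valid = orders.filter(
    F.col("order_id").isNotNull() & (F.col("amount") > 0)
)

# Transformation: clean/derive, enrich, then aggregate.
daily_revenue = (
    valid
    .withColumn("order_date", F.to_date("created_at"))        # clean/derive
    .withColumn("amount_gbp", F.col("amount") * F.col("fx"))  # enrich
    .groupBy("order_date")
    .agg(F.sum("amount_gbp").alias("revenue_gbp"))            # aggregate
)

daily_revenue.write.mode("overwrite").parquet("s3://bucket/curated/daily_revenue/")
```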
… and principles. Solid understanding of data warehousing, data modeling, and data integration principles. Proficiency in at least one scripting/programming language (e.g., Python, Scala, Java). Experience with SQL and NoSQL databases. Familiarity with data quality and data governance best practices. Strong analytical and problem-solving skills. Excellent communication …
… leadership role. Hands-on experience with data warehouse solutions such as Redshift, Snowflake, or BigQuery. Strong command of SQL and programming languages like Python, Scala, or Java. Familiarity with ETL/ELT tools (e.g., Airflow, Fivetran, dbt) and cloud data stacks (AWS/GCP/Azure). A deep understanding …
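To show what the ETL/ELT orchestration requirement looks like in practice, a minimal Airflow sketch (assuming a recent Airflow 2.x) chaining extract, transform, and load steps; the DAG id and task callables are illustrative stubs.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables; real tasks would pull from sources,
# reshape the data, and write to the warehouse.
def extract():
    ...

def transform():
    ...

def load():
    ...

with DAG(
    dag_id="example_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Declare the dependency chain: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```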
… Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Programming skills in Python, Java, Scala, or a similar language. Proficiency in database technologies (SQL, NoSQL, time-series databases) and data modelling. Strong understanding of data pipeline orchestration tools (e.g., Apache …
… cloud security, data governance, and compliance (e.g., GDPR, HIPAA). Strong SQL skills and proficiency in at least one programming language (e.g., Python, Java, Scala). Excellent problem-solving, communication, and project management skills. Experience with DevOps, CI/CD pipelines, and infrastructure as code (e.g., Terraform, CloudFormation). Ability …
… level proficiency in SQL (e.g., MySQL, PostgreSQL, Redshift) and NoSQL databases (e.g., MongoDB, Cassandra). Proficiency in one or more programming languages (e.g., Python, Scala, Java) for data integration, automation, and problem-solving. In-depth experience with cloud data platforms (AWS, Azure, Google Cloud) and Microsoft services (e.g., Azure Storage …
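A brief sketch of the kind of SQL-to-NoSQL integration work implied above, using psycopg2 and pymongo; the connection strings, table, and collection names are placeholders.

```python
import psycopg2              # PostgreSQL driver
from pymongo import MongoClient

# Connection details are placeholders for illustration only.
pg = psycopg2.connect("dbname=analytics user=etl host=localhost")
mongo = MongoClient("mongodb://localhost:27017")

# Read rows from the relational side inside a managed transaction.
with pg, pg.cursor() as cur:
    cur.execute("SELECT id, email FROM users WHERE active = true")
    rows = cur.fetchall()

# Mirror the relational rows into a document store.
docs = [{"_id": row[0], "email": row[1]} for row in rows]
collection = mongo["analytics"]["users"]
collection.delete_many({})       # simple full-refresh strategy
collection.insert_many(docs)
```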
… For: Bachelor's/Master's Degree in Computer Science, Information Technology, or equivalent. Proficiency in at least one mainstream programming language (Python, Java, Scala, Clojure, etc.). In-depth understanding of data structures and algorithmic design. Knowledge of computer networking concepts (TCP/IP, HTTPS, DNS, etc.). Understanding …
… platform, either on Azure or AWS. Good working knowledge of Databricks components: Delta Lake, Unity Catalog, MLflow, etc. Expertise in SQL, Python, and Spark (Scala or Python). Experience working with relational SQL databases either on premises or in the cloud. Experience delivering multiple solutions using key techniques such as …
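A short sketch combining two of the Databricks components named above: reading a Delta Lake table and logging a run with MLflow. It assumes the delta-spark connector is available (it is by default on Databricks); the path, parameter, and metric values are illustrative.

```python
import mlflow
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read a Delta Lake table; the path is a placeholder.
sales = spark.read.format("delta").load("/mnt/lake/sales")

# Track an experiment run with MLflow; names and values are illustrative.
mlflow.set_experiment("/Shared/demand-forecast")
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("rows_used", sales.count())
    # ... model training would happen here ...
    mlflow.log_metric("rmse", 12.4)
```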
Understanding of data modeling, distributed systems, ETL/ELT pipelines, and streaming architectures. Proficiency in SQL and at least one programming language (e.g., Python, Scala, or Java). Demonstrated experience owning complex technical systems end-to-end, from design through production. Excellent communication skills with the ability to explain technical concepts …
… JSON Schema) along with various schema description formats. Skilled in Python for scripting and data manipulation. Familiarity with other languages such as Java and Scala would be beneficial. Experience with Hadoop, Spark/PySpark, and Ray for large-scale data processing. Hands-on expertise with Elasticsearch and/or Solr …
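A small Python sketch of JSON Schema validation using the jsonschema library, to illustrate the schema skills above; the schema and event fields are invented for the example.

```python
from jsonschema import validate, ValidationError

# Illustrative schema: an event must carry a string id and a numeric value.
EVENT_SCHEMA = {
    "type": "object",
    "properties": {
        "event_id": {"type": "string"},
        "value": {"type": "number"},
    },
    "required": ["event_id", "value"],
}

def is_valid(event: dict) -> bool:
    """Return True if the event conforms to EVENT_SCHEMA."""
    try:
        validate(instance=event, schema=EVENT_SCHEMA)
        return True
    except ValidationError:
        return False

print(is_valid({"event_id": "e1", "value": 3.2}))  # True
print(is_valid({"event_id": "e1"}))                # False: missing "value"
```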
… profiling, data modeling, and data engineering using relational databases like Snowflake, Oracle, and SQL Server; ETL tools like Informatica IICS; scripting using Python, R, or Scala; workflow management tools like Autosys. Experience with stream processing systems like Kafka, Spark Streaming, etc. Experience in Java, JMS, SOAP, REST, JSON, and XML technologies, along …
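One way the stream-processing experience above might look in practice is a minimal Spark Structured Streaming job reading from Kafka; the broker address and topic are placeholders, and the spark-sql-kafka connector package is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Subscribe to a Kafka topic; broker and topic names are placeholders.
stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "trades")
    .load()
)

# Kafka delivers key/value as binary; decode before processing.
decoded = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
)

# Console sink chosen for demonstration only; a real job would
# write to a durable sink with checkpointing configured.
query = (
    decoded.writeStream
    .format("console")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```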
… a minimum of 2 years of Ops or DevOps experience in trading environments, with expertise in large-scale distributed systems; recent work in Java, Go, or Scala is preferred. Hands-on experience with container orchestration (Kubernetes, Docker, etc.) and cloud infrastructure, especially AWS; familiarity with Infrastructure-as-Code tools like Terraform or …
… data flows, architecture, and operational procedures. Required Skills and Experience: 3+ years of experience in a Data Engineering role. Strong programming skills in Python, Scala, or Java. Solid experience with ETL tools (e.g., Apache Airflow, dbt, Talend). Proficiency with SQL and relational/non-relational databases (e.g., PostgreSQL …
Requirements: Master's or PhD in Computer Science or a related field. 7+ years of data engineering experience. Proficiency in multiple programming languages (Python, Rust, Scala, or Go). Strong SQL and NoSQL database experience. Data modeling, ETL, and warehousing expertise. Cloud platform experience (AWS/GCP) and infrastructure-as-code. Excellent …
… data modeling, distributed systems, streaming architectures, and ETL/ELT pipelines. Proficiency in SQL and at least one programming language such as Python, Scala, or Java. Demonstrated experience owning and delivering complex systems from architecture through implementation. Excellent communication skills with the ability to explain technical concepts to …