in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management. Proficiency in Python, SQL, Scala, or Java. Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory. Strong understanding of data architecture principles, data modelling, and data governance. Experience with cloud-based data platforms, including Azure and …
tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines. As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You … with ambiguity, and working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Spark/Scala, Python, etc.
Major Responsibilities:
- Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support business …
in Computer Science, Data Science, Engineering, or a related field. Strong programming skills in languages such as Python, SQL, or Java. Familiarity with data processing frameworks and tools (e.g., Apache Spark, Hadoop, Kafka) is a plus. Basic understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of database systems (e.g., MySQL, PostgreSQL, MongoDB …
in Computer Science, Data Engineering, Information Systems, or related field. 5+ years in data engineering and 3+ years in architecture roles, with deep experience designing solutions on Databricks and Apache Spark. Strong grasp of Delta Lake, Lakehouse architecture, and Unity Catalog governance. Expertise in Python, SQL, and optionally Scala; strong familiarity with dbt and modern ELT practices. Proven experience … Strong communication and stakeholder management skills, able to bridge technical and business domains. Fluency in English; other European languages a plus.
Technologies You’ll Work With
Core: Databricks, Spark, Delta Lake, Unity Catalog, dbt, SQL, Python
Cloud: Microsoft Azure (Data Lake, Synapse, Storage, Event Hubs)
DevOps: Bitbucket/GitHub, Azure DevOps, Terraform
Orchestration & Monitoring: Dagster, Airflow, Datadog, Grafana …
as AWS, Azure, GCP, and Snowflake. Understanding of cloud platform infrastructure and its impact on data architecture. Data Technology Skills: A solid understanding of big data technologies such as Apache Spark, and knowledge of Hadoop ecosystems. Knowledge of programming languages such as Python, R, or Java is beneficial. Exposure to ETL/ELT processes, SQL, NoSQL databases is …
City of London, London, United Kingdom Hybrid / WFH Options
OTA Recruitment
modern data modelling practices, analytics tooling, and interactive dashboard development in Power BI and Plotly/Dash.
Key responsibilities: Designing and maintaining robust data transformation pipelines (ELT) using SQL, Apache Airflow, or similar tools. Building and optimizing data models that power dashboards and analytical tools. Developing clear, insightful, and interactive dashboards and reports using Power BI and Plotly/… workflows and transformation tools (e.g., dbt, custom SQL models, etc.). Strong ability to debug and optimize slow or failing data pipelines and queries. Familiarity with distributed systems (e.g., Spark, Kafka) and how they support scalable analytics solutions. Experience designing and integrating with APIs and handling system integrations, including data migrations and networked data sources. Practical experience with cloud …
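For illustration, a minimal sketch of the kind of SQL/Airflow ELT pipeline described above, using Airflow's TaskFlow API; the DAG name, schedule, table names, and stubbed extract/load/transform steps are assumptions for the example, not details from the role:

from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def elt_pipeline():
    @task
    def extract() -> list[dict]:
        # Stand-in for pulling rows from a source API or database.
        return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 75.5}]

    @task
    def load(rows: list[dict]) -> int:
        # Stand-in for landing raw rows in a staging table.
        print(f"loading {len(rows)} rows into staging.orders_raw")
        return len(rows)

    @task
    def transform(row_count: int) -> None:
        # Stand-in for running SQL models over staged data (the T in ELT).
        print(f"transforming {row_count} staged rows into reporting tables")

    transform(load(extract()))


elt_pipeline()

Passing each task's return value into the next lets Airflow infer the dependency graph, which is why the TaskFlow style suits simple ELT DAGs like this.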
London, England, United Kingdom Hybrid / WFH Options
Made Tech Limited
strategies. Strong experience in IaC, with the ability to guide how infrastructure is deployed into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes). Ability to create data pipelines on …
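To make the JSON/CSV transformation point concrete, a small PySpark sketch; the input path, the nested schema (a user struct and an items array), and the output location are illustrative assumptions:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("json-transform-demo").getOrCreate()

# Read newline-delimited JSON; Spark infers the (possibly nested) schema.
events = spark.read.json("/data/raw/events.json")

# Flatten a nested struct and an array column into a tabular shape.
flat = (
    events
    .withColumn("item", F.explode("items"))  # one row per array element
    .select(
        "event_id",
        F.col("user.id").alias("user_id"),   # nested struct field
        F.col("item.sku").alias("sku"),
        F.col("item.qty").cast("int").alias("qty"),
    )
)

flat.write.mode("overwrite").parquet("/data/curated/events")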
in Python with libraries like TensorFlow, PyTorch, or Scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL, and data modeling. Familiarity with cloud platforms (AWS, Azure, GCP) for deploying ML and data solutions. Knowledge of MLOps …
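As a concrete (and deliberately toy) example of the ML side, a minimal Scikit-learn pipeline; the synthetic data, features, and model choice are assumptions made for illustration:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))             # 500 samples, 4 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bundling scaling and the model keeps preprocessing identical at train and predict time.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")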
London, England, United Kingdom Hybrid / WFH Options
Aker Systems Limited
exploring new technologies and methodologies to solve complex data challenges. Proven experience leading data engineering projects or teams. Expertise in designing and building data pipelines using frameworks such as Apache Spark, Kafka, Glue, or similar. Solid understanding of data modelling concepts and experience working with both structured and semi-structured data. Strong knowledge of public cloud services, especially …
databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
• In-depth knowledge of data warehousing concepts and tools (e.g., Redshift, Snowflake, Google BigQuery).
• Experience with big data platforms (e.g., Hadoop, Spark, Kafka).
• Familiarity with cloud-based data platforms and services (e.g., AWS, Azure, Google Cloud).
• Expertise in ETL tools and processes (e.g., Apache NiFi, Talend, Informatica). …
at Zodiac Maritime while working with cutting-edge cloud technologies.
Key Responsibilities and Primary Deliverables
Design, develop, and optimize end-to-end data pipelines (batch & streaming) using Azure Databricks, Spark, and Delta Lake. Implement Medallion Architecture to structure raw, enriched, and curated data layers efficiently. Build scalable ETL/ELT processes with Azure Data Factory and PySpark. Work with … reliability across pipelines. Collaborate with analysts to validate and refine datasets for reporting. Apply DevOps & CI/CD best practices (Git, Azure DevOps) for automated testing and deployment. Optimize Spark jobs, Delta Lake tables, and SQL queries for performance and cost efficiency. Troubleshoot and resolve data pipeline issues proactively. Partner with Data Architects, Analysts, and Business Teams to deliver …
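A minimal sketch of what one bronze-to-silver step in such a Medallion layout might look like, assuming a Spark session with Delta Lake configured; the paths, column names, and cleaning rules are illustrative, not the actual Zodiac Maritime pipeline:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw records landed as-is (e.g. by Data Factory or Auto Loader).
bronze = spark.read.format("delta").load("/lake/bronze/orders")

# Silver: deduplicated, validated, and typed records ready for analytics.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("amount").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)

silver.write.format("delta").mode("overwrite").save("/lake/silver/orders")

Gold tables would then aggregate silver data for specific reporting needs, completing the raw/enriched/curated layering the listing describes.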
domains to enact step-change operational efficiency and maximize business value by confidently utilizing trustworthy data.
What are we looking for?
Great experience as a Data Engineer. Experience with Spark, Databricks, or similar data processing tools. Proficiency in working with cloud environments and various software, including SQL Server, Hadoop, and NoSQL databases. Proficiency in Python (or similar … technologies to create and maintain data assets and reports for business insights. Assist in engineering and managing data models and pipelines within a cloud environment, utilizing technologies like Databricks, Spark, Delta Lake, and SQL. Contribute to the maintenance and enhancement of our progressive tech stack, which includes Python, PySpark, Logic Apps, Azure Functions, ADLS, Django, and ReactJS. Support the …
relational and NoSQL databases. Experience with data modelling. General understanding of data architectures and event-driven architectures. Proficient in SQL. Familiarity with one scripting language, preferably Python. Experience with Apache Airflow & Apache Spark. Solid understanding of cloud data services: AWS services such as S3, Athena, EC2, Redshift, EMR (Elastic MapReduce), EKS, RDS (Relational Database Service) and Lambda. Nice …
quality practices. Collaborating with leadership and stakeholders to align data priorities.
Qualifications and Experience: Expertise in Commercial/Procurement Analytics and SAP (S/4HANA). Experience with Spark, Databricks, or similar tools. Strong proficiency in data modeling, SQL, NoSQL, and data warehousing. Hands-on experience with data pipelines, ETL, and big data technologies. Proficiency in cloud platforms …
data governance, security standards, and compliance practices. Strong understanding of metadata management, data lineage, and data quality frameworks.
Preferred Skills & Knowledge: Familiarity with big data technologies such as Hadoop, Spark, or Kafka. Excellent communication skills with the ability to explain complex data strategies to non-technical stakeholders. Outstanding problem-solving abilities and organizational skills. Certifications (Preferred/Desirable): Azure …
SQL, PySpark, and Python for data transformation and scripting. Hands-on experience with DevOps practices and managing CI/CD pipelines. Expertise in big data technologies such as Hadoop, Spark, and Kafka. Strong leadership skills, with experience in managing and developing high-performing teams. Familiarity with MuleSoft and systems thinking is a plus. Qualifications and Experience: Proven track record …
teams.
Preferred Skills
High-Performance Computing (HPC) and AI workloads for large-scale enterprise solutions. NVIDIA CUDA, cuDNN, TensorRT experience for deep learning acceleration. Big Data platforms (Hadoop, Spark) for AI-driven analytics in professional services. Please share your CV at payal.c@hcltech.com
a similar role.
- 3+ years of experience with data modeling, data warehousing, ETL/ELT pipelines and BI tools.
- Experience with cloud-based big data technology stacks (e.g., Hadoop, Spark, Redshift, S3, EMR, SageMaker, DynamoDB, etc.)
- Knowledge of data management and data storage principles.
- Expert-level proficiency in writing and optimizing SQL.
- Ability to write code in Python for …
a team environment
PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Familiarity with big data technologies (Hadoop, Spark, etc.)
- Knowledge of data security and privacy best practices
- Strong problem-solving and analytical skills
- Excellent written and verbal communication skills
Our inclusive culture empowers Amazonians to deliver the …
is required.
Preferred Skills: Experience in commodities markets or broader financial markets. Knowledge of quantitative modeling, risk management, or algorithmic trading. Familiarity with big data technologies like Kafka, Hadoop, Spark, or similar.
Why Work With Us?
Impactful Work: Directly influence the profitability of the business by building technology that drives trading decisions. Innovative Culture: Be part of a culture …
A solid understanding of key processes in the engineering delivery cycle including Agile and DevOps, Git, APIs, Containers, Microservices and Data Pipelines. Experience working with one or more of Spark, Kafka, or Snowflake.
NICE TO HAVE
DP-203 Azure Data Engineering
Microsoft Certified: Fabric Analytics Engineer Associate
SKILLS AND EXPERIENCE
A high level of drive with the ability to …