with DevOps practices for data engineering, including infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog). Familiarity with big data technologies like Apache Spark, Hadoop, or similar. ETL/ELT tools and creating common data sets across on-prem (IBM DataStage ETL) and cloud data stores. Leadership & Strategy: Lead Data Engineering team(s) …
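As a rough illustration of the monitoring work this kind of role describes, below is a minimal sketch of publishing a custom pipeline metric to Amazon CloudWatch with boto3; the namespace, metric name, and value are hypothetical, not taken from the posting.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a single custom metric after a pipeline run.
# "DataPipelines/Example" and "RowsProcessed" are placeholder names.
cloudwatch.put_metric_data(
    Namespace="DataPipelines/Example",
    MetricData=[
        {
            "MetricName": "RowsProcessed",
            "Value": 12500,
            "Unit": "Count",
        }
    ],
)
```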
with cloud platforms (GCP preferred). Experience with CI/CD pipelines and version control. Proficiency in data visualisation tools (e.g. Tableau, Power BI). Exposure to tools like dbt, Apache Airflow, Docker. Experience working with large-scale datasets (terabyte-level or higher). Excellent problem-solving capabilities. Strong communication and collaboration skills. Proficiency in Python and SQL (or similar) …
City of London, London, United Kingdom Hybrid / WFH Options
Owen Thomas | Pending B Corp™
ideally with experience using data processing frameworks such as Kafka, NoSQL, Airflow, TensorFlow, or Spark. Finally, experience with cloud platforms like AWS or Azure, including data services such as Apache Airflow, Athena, or SageMaker, is essential. This is a fantastic opportunity for a Data Engineer to join a rapidly expanding start-up at an important time where you will …
Business Administration or equivalent through experience (more than 5 years). You are used to working with different data tools such as SSAS, SSIS, SSRS, AWS, Azure Data Factory, Apache Spark, SQL, Python, and Tableau. You have good database knowledge (SQL and NoSQL). You are fluent in French or Dutch and have full professional capacity in English. You know …
Skills and Qualifications Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Proficiency in Core Tools: Cloud Run, Firestore, Apigee, Cloud Composer/Apache Airflow, Pub/Sub, Dataform, and Terraform (infrastructure as code). Familiarity with Google Cloud Services: App Engine, Endpoints, BigQuery, Cloud Storage, Cloud Dataflow, and Cloud Datastore. Programming languages: Java …
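To illustrate the Pub/Sub skill listed above, here is a minimal sketch of publishing a message with the official google-cloud-pubsub client; the project and topic IDs are placeholders.

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# "example-project" and "example-topic" are hypothetical identifiers.
topic_path = publisher.topic_path("example-project", "example-topic")

# publish() takes a bytes payload plus optional string attributes
# and returns a future that resolves to the message id.
future = publisher.publish(topic_path, b'{"event": "ping"}', origin="batch-job")
print(f"published message id: {future.result()}")
```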
Newcastle upon Tyne, Tyne and Wear, Tyne & Wear, United Kingdom
Randstad Technologies Recruitment
institutions, alongside a proven record of relevant professional experience. Proven experience in a data specialist role with a passion for solving data-related problems. Expertise in SQL, Python, and Apache Spark, with experience working in a production environment. Familiarity with Databricks and Microsoft Azure is a plus. Financial Services experience is a bonus, but not required. Strong verbal and …
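As an illustration of the production-style Spark work this role calls for, a minimal PySpark sketch; the input path, column names, and aggregation are assumptions for the example only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-aggregation").getOrCreate()

# Read source data; the bucket path and schema are hypothetical.
df = spark.read.parquet("s3://example-bucket/transactions/")

# Aggregate daily totals per account.
daily = (
    df.withColumn("day", F.to_date("event_ts"))
      .groupBy("account_id", "day")
      .agg(F.sum("amount").alias("total_amount"))
)

# Write partitioned output for downstream consumers.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/daily_totals/"
)
```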
Leeds, England, United Kingdom Hybrid / WFH Options
Fruition Group
best practices for data security and compliance. Collaborate with stakeholders and external partners. Skills & Experience: Strong experience with AWS data technologies (e.g., S3, Redshift, Lambda). Proficient in Python, Apache Spark, and SQL. Experience in data warehouse design and data migration projects. Cloud data platform development and deployment. Expertise across data warehouse and ETL/ELT development in AWS …
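To give a flavour of the AWS technologies listed above (S3, Lambda), a hedged sketch of a Lambda handler reacting to S3 object-created events; the bucket/key handling follows the documented S3 event shape, while the logging is illustrative only.

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # S3 notifications deliver one or more records per invocation.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Look up object metadata before any downstream processing.
        head = s3.head_object(Bucket=bucket, Key=key)
        print(json.dumps({
            "bucket": bucket,
            "key": key,
            "size": head["ContentLength"],
        }))
```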
are constantly looking for components to adopt in order to enhance our platform. What you'll do: Develop across our evolving technology stack - we're using Python, Java, Kubernetes, Apache Spark, Postgres, ArgoCD, Argo Workflows, Seldon, MLflow and more. We are migrating into the AWS cloud and adopting many services that are available in that environment. You will have the …
and engineering practices. Key competencies include: Microsoft Fabric expertise: Designing and delivering data solutions using Microsoft Fabric, including Pipelines, Notebooks, and Dataflows Gen2. Programming and query languages: Proficiency in Python, Apache Spark, and KQL (Kusto Query Language). End-to-end data solution delivery: Experience with Data Governance, Migration, Modelling, ETL/ELT, Data Lakes, Warehousing, MDM, and BI. Engineering delivery …
Key requirements: Substantial hands-on experience in software development, with proven experience in data integration/data pipeline development. Proven experience in data integration development (Ab Initio, Talend, Apache Spark, AWS Glue, SSIS, or equivalent), including optimization, tuning, and benchmarking. Proven experience in SQL (Oracle, MSSQL, and equivalents), including optimization, tuning, and benchmarking. Understanding of cloud-native development …
the UK for the last 10 years, and ability to obtain security clearance. Preferred Skills Experience with cloud platforms (IBM Cloud, AWS, Azure). Knowledge of big data frameworks (Apache Spark, Hadoop). Experience with data warehousing tools like IBM Cognos or Tableau. Certifications in relevant technologies are a plus.
obtain UK security clearance. We do not sponsor visas. Preferred Skills and Experience: Public sector experience. Knowledge of cloud platforms (IBM Cloud, AWS, Azure). Experience with big data frameworks (Apache Spark, Hadoop). Data warehousing and BI tools (IBM Cognos, Tableau).
London, England, United Kingdom Hybrid / WFH Options
Source
GCP) and their relevant data and ML services. Has experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) and data lake technologies (e.g., S3, ADLS). Has experience with Apache Spark (PySpark). Is familiar with workflow orchestration tools (e.g., Airflow, Prefect, Dagster). Is proficient with Git and GitHub/GitLab. Has a strong understanding of relational, NoSQL …
ETL/ELT pipelines on GCP for digital measurement data. Leverage advanced SQL and Python to transform, optimize, and analyze large datasets. Build and orchestrate data workflows using tools like Apache Airflow and dbt. Apply basic ML and statistics (e.g., clustering, regression, classification) where helpful. Ensure data quality, reliability, and system performance across the stack. Collaborate closely with analysts, data scientists …
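As an illustration of the orchestration described above, a minimal sketch of an Airflow DAG (2.4+) that runs an extract/load step and then a dbt transformation; the DAG id, schedule, and dbt project path are assumptions, not taken from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for the extract/load step (e.g., loading files into BigQuery).
    print("extract + load complete")

with DAG(
    dag_id="measurement_elt",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt",  # hypothetical path
    )
    load >> transform  # run the dbt models only after the load succeeds
```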
Computer Science, Engineering, or a related field, or equivalent industry experience. Preferred Qualifications Experience or interest in mentoring junior engineers. Familiarity with data-centric workflows and pipeline orchestration (e.g., Apache Airflow). Proficiency in data validation, anomaly detection, or debugging using tools like Pandas, Polars, or data.table/R. Experience working with AWS or other cloud platforms. Knowledge of …
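To illustrate the data-validation skill mentioned above, a small Pandas sketch; the column names and thresholds are hypothetical.

```python
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues found in the frame."""
    issues = []
    if df["price"].isna().any():
        issues.append("null prices found")
    # Flag values more than 3 standard deviations from the mean.
    z = (df["price"] - df["price"].mean()) / df["price"].std()
    if (z.abs() > 3).any():
        issues.append("price outliers beyond 3 sigma")
    if df.duplicated(subset=["order_id"]).any():
        issues.append("duplicate order_id rows")
    return issues

# Tiny example frame with one null and one duplicate id.
df = pd.DataFrame({"order_id": [1, 2, 2], "price": [9.99, None, 10.50]})
print(validate(df))
```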
Leverage full-stack technologies including Java, JavaScript, TypeScript, React, APIs, MongoDB, Elastic Search, DMN, BPMN, and Kubernetes. Utilize data-streaming technologies such as Kafka CDC, Kafka topics, EMS, and Apache Flink. Innovate and incubate new ideas. Work on a broad range of problems involving large data sets, real-time processing, messaging, workflows, and UI/UX. Drive the full …
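As a flavour of the Kafka CDC work listed above, a hedged sketch of consuming change events with the kafka-python client; the topic name, broker address, and record shape are assumptions.

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders.cdc",                        # hypothetical CDC topic name
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    change = message.value
    # CDC records commonly carry an operation type plus before/after
    # row images; the keys below assume a Debezium-style envelope.
    print(change.get("op"), change.get("after"))
```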
data-driven performance analysis and optimization. Strong communication skills and ability to work in a team. Strong analytical and problem-solving skills. PREFERRED QUALIFICATIONS: Experience with Kubernetes deployment architectures. Apache NiFi experience. Experience building trading controls within an investment bank. ABOUT GOLDMAN SACHS: At Goldman Sachs, we commit our people, capital, and ideas to help our clients, shareholders, and …
London, England, United Kingdom Hybrid / WFH Options
Arreoblue
or more of the following technologies: Databricks, Dedicated SQL Pools, Synapse Analytics, and Data Factory. To set yourself up for success you should have in-depth knowledge of Apache Spark, SQL, and Python, along with solid development practices. Additionally, you will be required to have in-depth knowledge of supporting Azure platforms such as Data Lake, Key Vault, and DevOps …
London, England, United Kingdom Hybrid / WFH Options
Veeva Systems, Inc
recall, or cost savings. Requirements: Excellent communication skills, used to working in a remote environment. More than 5 years of experience. Expert skills in Python or Java. Experience with Apache Spark. Experience writing software in AWS. Nice to Have: Experience with Data Lakes, Lakehouses, and Warehouses (e.g., Delta Lake, Redshift). Previously worked in agile environments. Experience with expert systems …
in Python with libraries like TensorFlow, PyTorch, or scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL, and data modeling. Familiarity with cloud platforms (AWS, Azure, GCP) for deploying ML and data solutions. Knowledge of MLOps practices …
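To illustrate the ML-pipeline skills described above, a minimal scikit-learn training sketch on synthetic data; the model choice and hyperparameters are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic classification data stands in for a real feature set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Chain preprocessing and model so both are fit together and
# applied consistently at inference time.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print(f"held-out accuracy: {pipeline.score(X_test, y_test):.3f}")
```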