Leeds, England, United Kingdom Hybrid / WFH Options
Scott Logic
storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You’ve worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You’ve got a background in software engineering, including front-end technologies like JavaScript. You’re a problem-solver, pragmatically exploring options …
Wakefield, Yorkshire, United Kingdom Hybrid / WFH Options
Flippa.com
/CD) automation, rigorous code reviews, documentation as communication. Preferred Qualifications Familiarity with data manipulation and experience with Python libraries like Flask, FastAPI, Pandas, PySpark, and PyTorch, to name a few. Proficiency in statistics and/or machine-learning libraries like NumPy, matplotlib, seaborn, scikit-learn, etc. Experience in building …
Redshift/BigQuery) (Required)
Experience with infrastructure as code (e.g. Terraform) (Required)
Proficiency in using Python both for scheduling (e.g. Airflow) and for manipulating data (PySpark) (Required)
Experience building deployment pipelines (e.g. Azure Pipelines) (Required)
Deployment of web apps using Kubernetes (preferably ArgoCD & Helm) (Preferred)
Experience working on Analytics and …
Skipton, England, United Kingdom Hybrid / WFH Options
Skipton
Storage, Key Vault
Experience with source control systems, such as Git
dbt (Data Build Tool) for transforming and modelling data
SQL (Spark SQL) & Python (PySpark)
Certifications: Microsoft Certified: Azure Fundamentals (AZ-900); Microsoft Certified: Azure Data Fundamentals (DP-900)
You will need to be you. Curious about technology and …
tools
Key Technology:
Experience with source control systems, such as Git
dbt (Data Build Tool) for transforming and modelling data
SQL (Spark SQL) & Python (PySpark)
Certifications:
You will need to be you. Curious about technology and adaptable to new technologies. Agile-minded, optimistic, passionate, and pragmatic about delivering valuable …
Delta Lake, Azure SQL, Purview and APIM. Proficiency in developing CI/CD data pipelines and strong programming skills in Python, SQL, Bash, and PySpark for automation. Strong aptitude for data pipeline monitoring and an understanding of data security practices such as RBAC and encryption. Implemented data and pipeline …
team that’s transforming how data powers retail, this is your opportunity. Your Role (Key Responsibilities) Design, build, and optimise robust data pipelines using PySpark, Spark SQL, and Databricks to ingest, transform, and enrich data from a variety of sources. Translate business requirements into scalable and performant data engineering solutions …
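The responsibility above describes the classic ingest → transform → enrich pipeline shape. As a dependency-free sketch of that pattern (plain Python standing in for PySpark, with hypothetical record and field names):

```python
# Minimal ingest -> transform -> enrich pipeline sketch.
# Plain Python stands in for PySpark here; all field names are invented.

raw_orders = [  # "ingest": records as they might arrive from a source system
    {"order_id": 1, "sku": "A1", "qty": "2", "store": "LDS"},
    {"order_id": 2, "sku": "B2", "qty": "",  "store": "SHF"},
    {"order_id": 3, "sku": "A1", "qty": "5", "store": "LDS"},
]

store_names = {"LDS": "Leeds", "SHF": "Sheffield"}  # reference data for enrichment

def transform(rows):
    # Drop rows with missing quantities and cast strings to ints.
    for row in rows:
        if row["qty"]:
            yield {**row, "qty": int(row["qty"])}

def enrich(rows, lookup):
    # Join each row against the reference table (a broadcast join, in Spark terms).
    for row in rows:
        yield {**row, "store_name": lookup.get(row["store"], "unknown")}

pipeline = list(enrich(transform(raw_orders), store_names))
```

In PySpark the same stages would be DataFrame operations (`filter`, `withColumn`, `join`), but the ingest/transform/enrich decomposition is identical.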
architectures with a focus on automation, performance tuning, cost optimisation, and system reliability. Proven proficiency in programming languages such as Python, T-SQL, and PySpark, with practical knowledge of test-driven development. Demonstrated capability in building secure, scalable data solutions on Azure with an in-depth understanding of data …
at scale. Hands-on expertise in core GCP data services such as BigQuery, Composer, Dataform, Dataproc, and Pub/Sub. Strong programming skills in PySpark, Python, and SQL. Proficiency in ETL processes, data mining, and data storage principles. Experience with BI and data visualisation tools, such as Looker or …
Leeds, England, United Kingdom Hybrid / WFH Options
Movera
experience working with Azure
Git/DevOps Repos experience
Demonstration of problem-solving ability
Synapse Analytics or similar experience – desirable
Visual Files experience – desirable
PySpark/Python experience – desirable
PowerShell experience – desirable
What We Offer
We aim to reward your hard work generously. You’ll be greeted in our …
focus are essential. Excellent communication and leadership skills are required.
Key Technologies (awareness): Azure Databricks, Data Factory, Storage, Key Vault, Git, dbt, SQL, Python (PySpark).
Certifications (ideal): SAFe POPM or Scrum PSPO; Microsoft Certified: Azure Fundamentals (AZ-900); Azure Data Fundamentals (DP-900).
What’s in it …
Python, SQL)
Data modelling and data warehousing
Extract-Transform-Load (ETL)
Creating data pipelines
Ethics in data and AI
Cloud and big data technologies (PySpark, AWS)
Working with unstructured data
Consulting Skills:
Effective teamwork
Business acumen
Time management
Stakeholder management
Presentations
Agile methodology
Being a brand ambassador
Being a …
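Several of the skills listed above (ETL, creating data pipelines) follow one pattern. A minimal extract-transform-load sketch using only the standard library, with SQLite standing in for a real warehouse and all table and column names invented for illustration:

```python
import csv
import io
import sqlite3

# Extract: parse CSV from a source (an in-memory string here; a file or API in practice).
source = "name,amount\nwidget,10\ngadget,oops\nwidget,5\n"
rows = list(csv.DictReader(io.StringIO(source)))

# Transform: keep only rows with a valid numeric amount, casting as we go.
clean = [(r["name"], int(r["amount"])) for r in rows if r["amount"].isdigit()]

# Load: write into a warehouse table (SQLite standing in for Redshift/Snowflake/etc.).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (name TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", clean)

total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

Production pipelines add scheduling, retries, and monitoring around these same three stages, but the E/T/L decomposition is unchanged.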
SQL)
Building workflows in SQL, Spark and dbt
Data and dimensional modelling
Skills & Qualifications
Azure Data Factory, Synapse and SSIS
Python/Spark/PySpark
Ideally Snowflake and dbt
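Dimensional modelling, mentioned above, means splitting data into fact tables (measurements) and dimension tables (descriptive attributes). A minimal star-schema sketch using SQLite, with hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Dimension table: descriptive attributes, one row per product.
conn.execute("CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT)")
# Fact table: measurements, linked to dimensions by surrogate keys.
conn.execute("CREATE TABLE fact_sales (product_key INTEGER, qty INTEGER)")

conn.execute("INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget')")
conn.executemany("INSERT INTO fact_sales VALUES (?, ?)", [(1, 3), (2, 1), (1, 4)])

# A typical analytical query: join fact to dimension and aggregate.
result = conn.execute("""
    SELECT d.name, SUM(f.qty)
    FROM fact_sales f JOIN dim_product d USING (product_key)
    GROUP BY d.name ORDER BY d.name
""").fetchall()
```

The same fact/dimension split is what dbt models typically materialise in a warehouse such as Snowflake or Synapse.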
Leeds, England, United Kingdom Hybrid / WFH Options
Skills for Care
external data engineers and data scientists, who may not be familiar with the datasets, to accelerate development. Our technology stack consists of:
Python and PySpark
AWS Glue jobs assembled into Step Functions
Pydeequ for data-quality testing
Amazon Athena for querying data
Hosted on AWS, using S3, Glue, Step …
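The stack above uses Pydeequ for data-quality testing. Pydeequ itself runs on Spark, but the kinds of checks it expresses (completeness, uniqueness) can be sketched in plain Python; the records and column names here are hypothetical:

```python
# Pydeequ-style data-quality checks, sketched without Spark.
records = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},
    {"id": 3, "email": "c@example.com"},
]

def completeness(rows, column):
    # Fraction of rows where the column is non-null (Pydeequ's Completeness check).
    return sum(r[column] is not None for r in rows) / len(rows)

def is_unique(rows, column):
    # True when no value repeats (Pydeequ's Uniqueness == 1.0).
    values = [r[column] for r in rows]
    return len(values) == len(set(values))

assert completeness(records, "id") == 1.0    # every id present
assert is_unique(records, "id")              # ids never repeat
assert completeness(records, "email") < 1.0  # one email is missing
```

In a real pipeline these checks would run as a verification step inside the Glue/Step Functions workflow, failing the run when a constraint is violated.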
Sheffield, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
Strong hands-on experience with ML frameworks (PyTorch, TensorFlow, Keras). Proficiency in Python and C/C++. Experience with scalable data tools (e.g., PySpark, Kubernetes, Databricks, Apache Arrow). Proven ability to manage GPU-intensive data processing jobs. 4+ years of applied research or industry experience. Creative problem …
The same role (JR United Kingdom, Hybrid / WFH Options) is also advertised in Wakefield, York, Bradford, Doncaster, and Hull with identical requirements.
Leeds, England, United Kingdom Hybrid / WFH Options
Jet2
have: Experience of training, evaluating, deploying and maintaining ML models. Sound understanding of data warehousing and ETL tools. Strong technical skills in Python and PySpark for data processing. Familiarity with Snowflake, RDBMS or other databases. Experience of working with Cloud infrastructure. Experience of building infrastructure as code using technologies …
Leeds, England, United Kingdom Hybrid / WFH Options
TieTalent
and analytics support
Hands-on experience with the Azure data ecosystem: Databricks, Data Factory, Data Lake, Synapse; certifications are a plus
Proficiency in Python, with PySpark experience preferred
Strong SQL skills
Experience in building and maintaining data pipelines
Managing DevOps pipelines
Skills in process optimization, performance tuning, data modeling, and …
… value diversity and inclusion; your anonymized diversity data helps us improve our outreach and inclusion efforts.
Nice-to-Have Skills: Python, Azure, Data Factory, PySpark, SQL, DevOps
Work Experience: Data Engineer, Data Infrastructure
Languages: English
Seniority and Job Details: Entry level, Contract, IT & Internet industry