Data Lake Storage, Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Azure SQL Database, Azure Stream Analytics, etc. Strong Python or Scala with Spark/PySpark experience. Experience with relational databases and NoSQL databases. Significant experience and in-depth knowledge of creating data pipelines and associated design principles, standards, Data …
Azure Search, Azure Stream Analytics, Delta Lake and Data Lakes, Apache Spark Pools, SQL Pools (dpools and spools). Experience in Python and C# coding, Spark, PySpark, and Unix shell/Perl scripting. Experience in API data sourcing using REST, SOAP, and other API methodologies. Experience working with structured and unstructured …
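As a flavour of the REST sourcing these listings ask for, here is a minimal Python sketch using the requests library; the endpoint URL, bearer token, and pagination parameters are hypothetical stand-ins, not any specific employer's API.

    import requests

    BASE_URL = "https://api.example.com/v1/records"  # hypothetical endpoint

    def fetch_page(session, page, page_size=500):
        # Pull one page from a paginated REST source; assumes a JSON array per page
        resp = session.get(BASE_URL, params={"page": page, "page_size": page_size}, timeout=30)
        resp.raise_for_status()
        return resp.json()

    def fetch_all():
        records = []
        with requests.Session() as session:
            session.headers["Authorization"] = "Bearer <token>"  # placeholder credential
            page = 1
            while True:
                batch = fetch_page(session, page)
                if not batch:
                    break
                records.extend(batch)
                page += 1
        return records

SOAP sourcing would follow the same shape but post an XML envelope (for example via a client library such as zeep) rather than calling GET on a JSON resource.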
Ideal Candidate Profile: Proven track record in big data engineering with a solid understanding of ETL pipelines and data system projections. Proficiency in Python, PySpark, SQL, and familiarity with data science tooling (R, ML/AI libraries). Strong foundation in database management (SQL and NoSQL databases such as Aurora …
of this is a strong preference; however, other cloud platforms like AWS/GCP are acceptable. · Coding Languages - Experience using Python with data (pandas, PySpark) would be an advantage. Other languages such as C# would be beneficial but not essential. · Machine Learning and Data Science Tools - Any experience in …
requirements and deliver solutions that drive business value. Requirements: 7+ years in a data engineering role. Excellent proficiency in SQL, Python, Microsoft Azure, Databricks, PySpark. Experience managing a team. Details: Start date: ASAP. Duration: 3 months, with an option for permanent extension. Day rate: up to £400 Ltd, depending on experience. Annual …
Lead Data Engineer: We need some strong data engineer profiles; they need good experience with PySpark, Python, SQL and ADF, and preferably Databricks experience. Job description: Building new data pipelines and optimizing data flows using the Azure cloud stack. Building data products from scratch. Supporting Business Analysts and Data Architects …
understand consumers. Hands-on data engineering/development experience, preferably in a cloud/big data environment. Skilled in at least one of Python, PySpark, SQL, or similar. Experience in guiding or managing roles in insight or data functions, delivering data projects and insight to inspire action and drive …
a generous benefits package. Technical Experience Required: Demonstrated expertise in data engineering with a focus on Azure services. Proficiency in SQL, Azure Databricks, and PySpark, handling unstructured/semi-structured data, with experience of schema evolution and/or serialisation. Extensive experience in building and optimising data pipelines. Experience …
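For the schema-evolution point, a minimal PySpark sketch of the common Delta Lake pattern on Databricks follows; the mount paths are hypothetical, and mergeSchema is only one of several evolution strategies.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # preconfigured on Databricks

    # Semi-structured JSON: Spark infers the schema, which may gain columns over time
    raw = spark.read.json("/mnt/landing/events/")  # hypothetical landing path

    # Appending with mergeSchema lets new columns evolve the Delta table's schema
    (raw.write
        .format("delta")
        .mode("append")
        .option("mergeSchema", "true")
        .save("/mnt/curated/events"))  # hypothetical curated path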
Engineering. Hands-on experience in designing and developing scripts for custom ETL processes and automation in Azure Data Factory, Azure Databricks, Azure Synapse, Python, PySpark, etc. Experience being customer-facing on numerous data-focused projects with a consultative approach. Ability to deliver high- to low-level designs for Data …
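To make the ETL scripting requirement concrete, here is a small, self-contained PySpark transform of the kind such a pipeline step might contain; the paths, column names, and business rule are illustrative assumptions.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: raw CSV dropped by an upstream ADF copy activity (hypothetical path)
    orders = spark.read.option("header", "true").csv("/mnt/raw/orders.csv")

    # Transform: type the columns, drop incomplete rows, derive a business flag
    clean = (orders
             .withColumn("amount", F.col("amount").cast("double"))
             .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
             .dropna(subset=["order_id", "amount"])
             .withColumn("is_large", F.col("amount") > 1000))

    # Load: write curated output for downstream Synapse/BI consumption
    clean.write.mode("overwrite").parquet("/mnt/curated/orders/")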
in a highly numerate subject is essential • Minimum 2 years' experience in Python development, including scientific computing and data science libraries (NumPy, pandas, SciPy, PySpark) • Solid understanding of object-oriented software engineering design principles for usability, maintainability and extensibility • Experience working with Git in a version-controlled environment • Experience …
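As a sketch of the scientific-computing stack named above (NumPy, pandas, SciPy), consider this toy workflow; the sensor data and the nominal mean of 1.0 are invented for illustration.

    import numpy as np
    import pandas as pd
    from scipy import stats

    # Hypothetical sensor readings: summarise, standardise, and test a hypothesis
    readings = pd.DataFrame({
        "sensor": ["a", "a", "b", "b"],
        "value": [1.02, 0.98, 1.10, 1.05],
    })

    summary = readings.groupby("sensor")["value"].agg(["mean", "std"])
    zscores = (readings["value"] - readings["value"].mean()) / readings["value"].std()
    t_stat, p_value = stats.ttest_1samp(readings["value"].to_numpy(), popmean=1.0)

    print(summary)
    print(np.round(zscores.to_numpy(), 2), f"t={t_stat:.2f} p={p_value:.3f}")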
and ensuring best practices are understood and followed. Technical Skills and Qualifications: Expert knowledge of Python, including libraries/frameworks such as pandas, NumPy, PySpark. Good understanding of OOP, software design patterns, and SOLID principles. Good experience in Docker. Good experience in Linux. Good experience in Airflow. Good knowledge …
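For the Airflow item, a minimal DAG sketch follows; the task names and callables are placeholders, and the schedule= argument assumes Airflow 2.4+ (earlier releases use schedule_interval=).

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pulling source data")   # placeholder step

    def transform():
        print("cleaning and shaping")  # placeholder step

    with DAG(dag_id="daily_etl", start_date=datetime(2024, 1, 1),
             schedule="@daily", catchup=False) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task  # extract runs before transform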
Greater Manchester, England, United Kingdom Hybrid / WFH Options
Blue Wolf Digital
to join their team. The primary focus of the role is Databricks data engineering. You will be building data pipelines using Databricks, coding in PySpark, and supporting internal applications. You will also be using Python for data transformations and working across the Azure Data Platform. Must Have: Strong Databricks …
Coventry, West Midlands, West Midlands (County), United Kingdom
Investigo
platform, driving cost optimisation opportunities. Provide expertise in AWS monitoring and optimisation, optimising databases and ETL pipelines. Utilise programming languages such as Python and PySpark to transform big data into manageable datasets. Contribute to the development of interactive dashboards and provide expert analysis across programme lifecycles. Transform technical data …
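"Transforming big data into manageable datasets" typically means heavy aggregation; a hedged PySpark sketch follows, with an invented S3 bucket and column names.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("usage_rollup").getOrCreate()

    # Hypothetical billing events: roll raw rows up to one summary row per day/service
    events = spark.read.parquet("s3://example-bucket/billing-events/")

    daily = (events
             .groupBy(F.to_date("event_time").alias("day"), "service")
             .agg(F.sum("cost_usd").alias("total_cost"),
                  F.countDistinct("account_id").alias("accounts")))

    daily.write.mode("overwrite").parquet("s3://example-bucket/summaries/daily-cost/")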
month contract. Essential Skills: Insurance/Financial Services experience is highly desirable. Design efficient and scalable data models. Azure Databricks (preferably), SQL, Python, PySpark, dimensional/star schema data modelling. Understanding of Conceptual, Logical and Physical Data Models. Experience in ETL/ELT as well as Entity Relationship (ER …
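Dimensional/star-schema modelling in this stack usually means fact tables joined to dimensions on surrogate keys; a toy PySpark illustration, with invented insurance-flavoured tables:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical star schema: a claims fact keyed to a customer dimension
    fact_claims = spark.createDataFrame(
        [(1, 101, 2500.0), (2, 102, 400.0)],
        ["claim_id", "customer_key", "claim_amount"])
    dim_customer = spark.createDataFrame(
        [(101, "retail"), (102, "commercial")],
        ["customer_key", "segment"])

    # Analytical queries join the fact to its dimensions on the surrogate key
    by_segment = (fact_claims
                  .join(dim_customer, "customer_key")
                  .groupBy("segment")
                  .agg(F.sum("claim_amount").alias("total_claims")))
    by_segment.show()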
Power BI would also be useful. Engineer with past experience with Java, Data, and Infrastructure (DevOps); Java is a key skill. Programming: Java, Python, PySpark. Storage Mechanisms: MongoDB, Redshift, AWS S3. Cloud Environments/Infra: AWS (required); AWS Lambda, Terraform (nice to have). Data Platforms: Creating data pipelines within …
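For the AWS S3 storage line, a minimal boto3 sketch; the bucket and key names are hypothetical, and credentials are assumed to come from the environment or an instance role.

    import boto3

    s3 = boto3.client("s3")

    # List pipeline input files under a hypothetical prefix, then fetch one object
    resp = s3.list_objects_v2(Bucket="example-data-lake", Prefix="raw/trades/")
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])

    body = s3.get_object(Bucket="example-data-lake",
                         Key="raw/trades/2024-01-01.csv")["Body"].read()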
Senior Data Engineer. Remote working. Salary £65,000 - £70,000 plus benefits. Databricks, PySpark, SQL. We are looking for a talented Senior Data Engineer to join one of the UK's leading research and law ranking companies at an exciting time of growth. Build new products, engineer new solutions …
modern NLP methods required. Specifically: Transformer models (e.g. BERT), LLMs, RAG and fine-tuning, the OpenAI stack, LangChain, etc. Experience with big data technologies a plus: PySpark, H2O.ai, cloud AI platforms, Kubernetes. Must be able to translate business requirements into analytical problems. Must have proven ability to merge and transform disparate …
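As a taste of the transformer tooling named here, a minimal Hugging Face pipeline sketch; the model shown is a public BERT-family checkpoint chosen for illustration, not a claim about this employer's stack.

    from transformers import pipeline

    # Sentiment scoring with a distilled BERT fine-tuned on SST-2
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english")

    print(classifier("The claims process was fast and painless."))
    # -> [{'label': 'POSITIVE', 'score': ...}]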
a strong knowledge of data warehouse technologies. You’ll need to have a strong understanding of SQL Server, Azure Databricks and/or PySpark, data pipeline creation and optimization, Azure DevOps, CI/CD pipelines, and release management. What’s in it for you? With a strong family …
Cheltenham, England, United Kingdom Hybrid / WFH Options
Ripjar
Understand the nuances of dealing with structured and unstructured data, and be experienced in using databases (ideally MongoDB). Experience with Linux. Experience with Spark (PySpark), Hadoop or other big data technologies would be beneficial, but not required. Benefits. Why we think you'll enjoy it here: Base salary of …
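For the MongoDB point, a short pymongo sketch; the connection string, database, collection, and field names are all hypothetical.

    from pymongo import MongoClient

    # Connect to a hypothetical local MongoDB instance
    client = MongoClient("mongodb://localhost:27017")
    db = client["screening"]

    # Semi-structured documents: filter and project without a fixed schema
    for doc in db.documents.find({"language": "en"},
                                 {"title": 1, "entities": 1}).limit(10):
        print(doc.get("title"), len(doc.get("entities", [])))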
the Financial Services with experience in… Statistical Models, Computer Vision, Predictive Analytics, Data Visualization, Large Language Models (LLMs), NLP, AI, Machine Learning, MLOps, Python, PySpark, Azure, Agile, Metabase, then please apply. You can email your CV to matt@hawksworthuk.com or message me on LinkedIn. Ideally you'll have plenty of …
Data Engineer. Remote working. Salary circa £50,000 - £60,000. Databricks, PySpark, SQL, Azure. We are looking for a talented Data Engineer to join one of the UK's leading research and law ranking companies at an exciting time of growth. Build new products, engineer new solutions, create systems …
Employment Type: Permanent
Salary: £50,000 - £60,000/annum plus remote working and benefits
snowflake schemas. Knowledge of DevOps practices within a Power BI environment. Familiarity with Microsoft Fabric & Databricks. SQL databases expertise, data engineering with Python and PySpark, and knowledge of geospatial concepts and tools. As part of this engagement, you will work on initiatives that redefine business efficiency through AI. You …
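The geospatial requirement often reduces to point-in-polygon style checks; a tiny sketch with the shapely library, using invented coordinates:

    from shapely.geometry import Point, Polygon

    # Hypothetical service region and a candidate site
    region = Polygon([(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)])
    site = Point(2.5, 1.5)

    print(region.contains(site))                 # True: the site lies in the region
    print(region.area, site.distance(region.exterior))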
understand the pros and cons of various technical options. Required skills: Strong experience within a Data Engineering role. Excellent understanding of Databricks and PySpark. Strong knowledge of Azure cloud services. Excellent understanding of SQL. Good exposure to Azure Data Lake technologies such as ADF, HDFS and Synapse. Good …