design reviews and other types of technical meetings. Demonstrable knowledge of DevOps tool chains and processes. Experience with Big Data (e.g., Hadoop, NiFi, Spark, PySpark, Dask, etc.). Experience in application migration. Agile development experience. Bachelor's Degree in Computer Science, Computer Engineering, Software Engineering, or related STEM or more »
West Midlands, Dudley, West Midlands (County), United Kingdom Hybrid / WFH Options
Concept Resourcing
standardise the Data Warehouse environment and solutions. Required Skills, Knowledge, and Experience: Experience designing, developing, and testing Azure Data Factory/Fabric pipelines, including PySpark Notebooks and Dataflow Gen2 workflows. Previous experience with Informatica PowerCenter is desired. Significant experience in designing, writing, editing, debugging, and testing SQL code more »
Surrey, England, United Kingdom Hybrid / WFH Options
Hawksworth
that are comfortable with terms like: Statistical Models, Computer Vision, Predictive Analytics, Data Visualization, Large Language Models (LLM), NLP, AI, Machine Learning, MLOps, Python, PySpark, and Azure. Flexible Working: The role is 60% work from home. Sponsorship: Sadly, sponsorship isn't available for these roles. About You: Bachelor's Degree more »
the insurance domain is advantageous. - Education : A degree in Computer Science, Data Science, Engineering, or a related discipline. Technical Skills : Proficient in Python, SQL, PySpark, and Databricks. Demonstrated proficiency in modern NLP techniques and tools. Proven track record in developing and managing data quality metrics and dashboards. Experience collaborating more »
a complete greenfield re-architecture from the ground up in Microsoft Azure. The Tech you'll be playing with: Azure Data Factory Azure Databricks PySpark SQL DBT What you need to bring: 1-3 years' experience in building Data Pipelines SQL experience in data warehousing Python experience would more »
Engineering Hands-on experience in designing and developing scripts for custom ETL processes and automation in Azure Data Factory, Azure Databricks, Azure Synapse, Python, PySpark etc. Experience being customer-facing on numerous data-focused projects with a consultative approach Ability to deliver high to low-level designs for Data more »
exposure to the latest technologies. We are looking for both senior and junior Data Engineers who have solid Python skills as well as PySpark, Databricks and SQL. Ideally some exposure to using Data Modelling, Azure Data Factory and Azure DevOps would be a distinct advantage. Strong communication and more »
Proficiency in modern programming languages and database querying languages. Comprehensive understanding of the Software Development Life Cycle and Agile methodologies. Familiarity with Python or PySpark and cloud technologies like AWS, Kubernetes, and Spark is highly desirable. Ideal Candidate: Someone with a knack for innovative solutions and a commitment to more »
and ensuring best practices are understood and followed. Technical Skills and Qualifications Expert knowledge in Python including libraries/frameworks such as pandas, NumPy, PySpark Good understanding of OOP, software design patterns, and SOLID principles Good experience in Docker Good experience in Linux Good experience in Airflow Good knowledge more »
Greater Manchester, England, United Kingdom Hybrid / WFH Options
Blue Wolf Digital
to join their team. The primary focus of the role is Databricks data engineering. You will be building data pipelines using Databricks, coding using PySpark, and supporting internal applications. You will also be using Python for Data Transformations and work across the Azure Data Platform. Must Have Strong Databricks more »
Coventry, West Midlands, West Midlands (County), United Kingdom
Investigo
platform, driving cost optimisation opportunities. Provide expertise in AWS monitoring and optimisation, optimising databases and ETL pipelines. Utilise programming languages such as Python and PySpark to transform big data into manageable datasets. Contribute to the development of interactive dashboards and provide expert analysis across program lifecycles. Transform technical data more »
month contract. Essential Skills: Insurance/Financial Services experience is highly desirable Design efficient and scalable data models. Azure Databricks (preferably), SQL, Python, PySpark, dimensional/star schema data modelling. Understanding of Conceptual, Logical and Physical Data Models Experience in ETL/ELT as well as Entity Relationship (ER more »
Power BI would also be useful. Engineer with past experience with Java, Data, and Infrastructure (DevOps). Java is a key skill Programming: Java, Python, PySpark Storage Mechanisms: MongoDB, Redshift, AWS S3 Cloud Environments/Infra: AWS (required), [AWS Lambda, Terraform] (nice to have) Data Platforms: Creating data pipelines within more »
Senior Data Engineer Remote working Salary £65,000 - £70,000 plus benefits DataBricks, PySpark, SQL We are looking for a talented Senior Data Engineer to join one of the UK's leading research and law ranking companies at an exciting time of growth. Build new products, engineer new solutions more »
Employment Type: Permanent
Salary: £60000 - £70000/annum plus remote working and benefits
Birmingham, England, United Kingdom Hybrid / WFH Options
Lorien
of this is a strong preference. However other Cloud platforms like AWS/GCP are acceptable. • Coding Languages - Experience using Python with data (Pandas, PySpark) would be an advantage. Other languages such as C# would be beneficial but not essential. Their lovely offices are based in the West Midlands more »
modern NLP methods required. Specifically: Transformer models (e.g. BERT), LLMs, RAG & Fine-Tuning, OpenAI Stack, LangChain etc. Experience with Big Data technologies a plus — PySpark, H2O.ai, Cloud AI platforms, Kubernetes Must be able to translate business requirements into analytical problems Must have proven ability to merge and transform disparate more »
requirements and design scalable and efficient solutions to meet business needs. Implement data pipelines, ETL processes, and data transformations Strong experience with Azure Databricks, PySpark, and Python Integrate diverse data sources and formats, including structured and unstructured data, streaming data, and APIs. Technical skills: Strong proficiency in Python Extensive more »
designing and constructing robust data pipelines using the best of the open-source data engineering and scientific Python toolset. Tech Stack: Airbyte AWS Glue Pandas PySpark Delta Lake PostgreSQL The team follows agile ways of working and you engage with various stakeholders across the business. The role is Hybrid more »
the Financial Services with experience in..... Statistical Models, Computer Vision, Predictive Analytics, Data Visualization, Large Language Models (LLM), NLP, AI, Machine Learning, MLOps, Python, PySpark, Azure, Agile, MetaBase, then please apply. You can email your CV to matt@hawksworthuk.com or message me on LinkedIn. Ideally you'll have plenty of more »
Data Engineer Remote working Salary circa £50,000 - £60,000 DataBricks, PySpark, SQL, Azure We are looking for a talented Data Engineer to join one of the UK's leading research and law ranking companies at an exciting time of growth. Build new products, engineer new solutions, create systems more »
Employment Type: Permanent
Salary: £50000 - £60000/annum plus remote working and benefits
snowflake schemas. Knowledge of DevOps practices within a Power BI environment. Familiarity with Microsoft Fabric & Databricks. SQL databases expertise, data engineering with Python and PySpark, and knowledge of geospatial concepts and tools. As part of this engagement, you will work on initiatives that redefine business efficiency through AI. You more »
experience-related problems such as workforce management, demand forecasting, or root cause analysis Strong visualisation skills including experience with Tableau Familiarity with Databricks and PySpark for data manipulation and analysis Familiarity with Git-based source control methodologies, including branching and pull requests A self-starter, passionate about converting data more »
analysis, software design, implementation, testing, integration, deployment/installation, and maintenance) and programming. Develop in Python Jupyter Notebooks using data science best practices Develop PySpark analytics using big data best practices Document in Jira and Confluence using Agile methodologies Commit code in GitLab using version control best practices You more »