modern NLP methods required. Specifically: Transformer models (e.g. BERT), LLMs, RAG and fine-tuning, the OpenAI stack, LangChain, etc. Experience with Big Data technologies a plus: PySpark, H2O.ai, cloud AI platforms, Kubernetes. Must be able to translate business requirements into analytical problems. Must have proven ability to merge and transform disparate …
designing and constructing robust data pipelines using the best of the open-source data engineering and scientific Python toolset. Tech Stack: Airbyte, AWS Glue, Pandas, PySpark, Delta Lake, PostgreSQL. The team follows agile ways of working and you will engage with various stakeholders across the business. The role is hybrid …
the Financial Services sector with experience in Statistical Models, Computer Vision, Predictive Analytics, Data Visualization, Large Language Models (LLMs), NLP, AI, Machine Learning, MLOps, Python, PySpark, Azure, Agile and MetaBase, then please apply. You can email your CV to matt@hawksworthuk.com or message me on LinkedIn. Ideally you'll have plenty of …
Data Engineer. Remote working. Salary circa £50,000 - £60,000. Databricks, PySpark, SQL, Azure. We are looking for a talented Data Engineer to join one of the UK's leading research and law ranking companies at an exciting time of growth. Build new products, engineer new solutions, create systems …
Employment Type: Permanent
Salary: £50000 - £60000/annum plus remote working and benefits
snowflake schemas. Knowledge of DevOps practices within a Power BI environment. Familiarity with Microsoft Fabric and Databricks. Expertise in SQL databases, data engineering with Python and PySpark, and knowledge of geospatial concepts and tools. As part of this engagement, you will work on initiatives that redefine business efficiency through AI. You …
Mart. Utilize Vector Databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement machine learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI/… environments, Azure Data Lake, Azure Data Factory, microservices architecture. Experience with Vector Databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with MLOps tools such …
candidate needs the experience and confidence to speak up when several teams are involved in delivering the project. There is a massive emphasis on PySpark and Databricks for this particular role. Technical Skills Required: Azure (ADF, Functions, Blob Storage, Data Lake Storage, Azure Databricks), Databricks, Spark, Delta Lake … SQL, Python, PySpark, ADLS. Day-to-Day Responsibilities: Extensive experience in designing, developing, and managing end-to-end data pipelines, ETL (Extract, Transform, Load), and ELT (Extract, Load, Transform) solutions. Maintain a proactive approach to staying updated with emerging technologies and a strong desire to continuously learn and adapt …
scientific Python toolset. Our tech stack includes Airbyte for data ingestion, Prefect for pipeline orchestration, and AWS Glue for managed ETL, along with Pandas and PySpark for pipeline logic implementation. We utilize Delta Lake and PostgreSQL for data storage, emphasizing the importance of data integrity and version control in our … testing frameworks. Proficiency in Python and familiarity with modern software engineering practices, including 12-factor, CI/CD, and Agile methodologies. Deep understanding of Spark (PySpark), Python (Pandas), orchestration software (e.g. Airflow, Prefect), and databases, data lakes, and data warehouses. Experience with cloud technologies, particularly AWS cloud services, with a …
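To make the "pipeline logic" part of a stack like this concrete, here is a minimal sketch of a cleaning-and-aggregation step in Pandas, with the equivalent PySpark calls noted in comments. The function name, table, and column names (`orders`, `customer_id`, `amount`) are hypothetical, not taken from the listing:

```python
import pandas as pd

def summarise_orders(orders: pd.DataFrame) -> pd.DataFrame:
    """Clean and aggregate a raw orders extract before loading it downstream.

    Hypothetical schema: order_id, customer_id, amount.
    """
    cleaned = orders.dropna(subset=["customer_id"])   # PySpark: df.na.drop(subset=["customer_id"])
    cleaned = cleaned[cleaned["amount"] > 0]          # PySpark: df.filter(F.col("amount") > 0)
    return (
        cleaned.groupby("customer_id", as_index=False)  # PySpark: df.groupBy("customer_id")
               .agg(total=("amount", "sum"),            #   .agg(F.sum("amount").alias("total"),
                    orders=("order_id", "count"))       #        F.count("order_id").alias("orders"))
    )

raw = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "customer_id": ["a", "a", None, "b"],
    "amount": [10.0, 5.0, 7.0, -1.0],
})
summary = summarise_orders(raw)
```

In a Delta Lake deployment the Spark-side result would typically then be persisted with `df.write.format("delta")`, which is where the version-control and data-integrity guarantees mentioned above come from.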
experience leading a data engineering team. Key Tech: - AWS (S3, Glue, EMR, Athena, Lambda) - Snowflake, Redshift - DBT (Data Build Tool) - Programming: Python, Scala, Spark, PySpark or Ab Initio - Data pipeline orchestration (Apache Airflow) - Knowledge of SQL. This is a 6-month initial contract with a trusted client of ours. …
for data engineering, i.e. Azure Functions * Core skills in coding with SQL, Python and Spark * Proven experience using Databricks, i.e. lakehouse, Delta Live Tables, PySpark, etc. …
start interviewing ASAP. Responsibilities: Azure cloud data engineering using Azure Databricks, data warehousing, data engineering. Very strong with the Microsoft stack. ESSENTIAL: knowledge of PySpark clusters; Python and C# scripting experience; experience of message queues (Kafka); experience of containerization (Docker). FINANCIAL SERVICES EXPERIENCE (energy/commodities trading). If you have …
building a massively distributed cloud-hosted data platform that would be used by the entire firm. Tech Stack: Python/Java, Dremio, Snowflake, Iceberg, PySpark, Glue, EMR, dbt and Airflow/Dagster. Cloud: AWS, Lambdas, ECS services. This role would focus on various areas of Data Engineering including: End …
STEM subject, e.g. Mathematics, Statistics or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. NEXT STEPS: If this role looks of interest, please reach out to Joseph Gregory. …
STEM subject, e.g. Mathematics, Statistics or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. HOW TO APPLY: Please register your interest by sending your CV to Kiran Ramasamy …
Nottingham, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject, e.g. Mathematics, Statistics or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. BENEFITS: Pension scheme, gym membership, share options, bonus, hybrid working. HOW TO APPLY: Register …
Leicester, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject, e.g. Mathematics, Statistics or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. BENEFITS: Pension scheme, gym membership, share options, bonus, hybrid working. HOW TO APPLY: Register …
of an enterprise data platform built on Azure. Development of new data products and services for commercial and operational teams. Tech Stack: Python and PySpark, SQL, pipelines, Big Data, data architectures, App Services, Functions, Cosmos DB, Databricks, Synapse and associated tools, automation, PyTest. If you would be interested please …
and industry standards for the organization. Strong experience with Azure cloud services such as ADF, ADLS and Synapse. Proficiency in querying languages such as SQL, PySpark and Python, and familiarity with data visualization tools (e.g. Power BI). Strong communication skills to gather business requirements from stakeholders and propose the best …
Cambridge, Cambridgeshire, East Anglia, United Kingdom Hybrid / WFH Options
Set2Recruit
Strong theoretical understanding of machine learning and neural networks. Experience with containerized processes (Docker, Kubernetes). Familiarity with cloud services and distributed computing frameworks (AWS, PySpark, Ray Distributed). Problem-solving aptitude and creative thinking skills. Benefits: Work From Home flexibility, Employee Assistance Programme, Stock Option Plan, competitive salary package. …
London, UK. Do you strive for engineering excellence, and are you ready to lead a modern tech transformation? Do you have experience with Azure, PySpark and cloud platforms? We have the perfect role for you: Data Engineer - Tech Lead. Careers at TCS: it means more. TCS is a purpose-led transformation … design, development, and maintenance of Azure-based data pipelines and analytical solutions using Databricks, Synapse, and other relevant services. Perform Data Transformations: Leverage SQL, PySpark, and Python to perform data transformations, aggregations, and analysis on large datasets. Architect Data Storage Solutions: Architect data storage solutions using Azure SQL Database …
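The "SQL … aggregations" duty in a role like this usually comes down to statements such as the one below. This is a minimal sketch using Python's built-in SQLite purely for illustration; in the role described it would run against Azure SQL Database, Synapse, or Databricks SQL, and the `trades` table and its columns are invented for the example:

```python
import sqlite3

# In-memory database standing in for an Azure SQL / Synapse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (book TEXT, notional REAL)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [("rates", 100.0), ("rates", 250.0), ("fx", 50.0)])

# A typical aggregation: total notional and row count per book, largest first.
rows = conn.execute("""
    SELECT book, SUM(notional) AS total, COUNT(*) AS n
    FROM trades
    GROUP BY book
    ORDER BY total DESC
""").fetchall()
```

The same `GROUP BY` shape carries over almost verbatim to PySpark (`df.groupBy("book").agg(F.sum("notional"), F.count("*"))`), which is why the listing pairs the two skills.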
Manchester Area, United Kingdom Hybrid / WFH Options
Vermelo RPO
between different packages for some of these model types would be an advantage. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL). A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science). Experience of WTW’s Radar software is preferred. Proficient at …
London, England, United Kingdom Hybrid / WFH Options
Harnham
continual improvement of their performance. REQUIRED SKILLS AND EXPERIENCE: Proficiency in Python, including its associated data and machine learning packages such as numpy, pandas, pyspark, matplotlib, scikit-learn, Keras, TensorFlow, and more. Experience working with object-oriented programming languages. Experience in forecasting and propensity modelling. Expertise in utilizing SQL for …
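For the "propensity" requirement, a minimal sketch of the standard setup with the scikit-learn package named above: scale features, fit a logistic regression, and read the positive-class probability as the propensity score. The data here is synthetic and the two features (standing in for e.g. recency and spend) are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic toy data: 200 customers, two made-up features, a purchase label.
# Real work would pull these from a feature store or warehouse table.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Scale, fit, and score: predict_proba's second column is the propensity.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
scores = model.predict_proba(X)[:, 1]
```

At scale the same pattern is typically refit per segment or pushed down into PySpark, which offers an analogous `LogisticRegression` in `pyspark.ml.classification`.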
Leicestershire, England, United Kingdom Hybrid / WFH Options
Harnham
continual improvement of their performance. REQUIRED SKILLS AND EXPERIENCE: Proficiency in Python, including its associated data and machine learning packages such as numpy, pandas, pyspark, matplotlib, scikit-learn, Keras, TensorFlow, and more. Experience working with object-oriented programming languages. Experience in forecasting and A/B testing. Leadership. Expertise in utilizing SQL …
An industry-leading organisation is looking for an experienced Senior Data Engineer who is well-versed in Databricks, PySpark and SQL to join their growing Data Engineering team. This role can be largely remote, with some travel to their Central London Head Office, likely around 2 times per month. … a wide variety of technical work. Your primary focus will be working with Databricks: you will be building data pipelines in Databricks, coding in PySpark, and supporting internal applications as they move away from a legacy application into new bespoke applications which rely on Databricks. This is a … to provide thought leadership, and to innovate and experiment with new technologies to deliver benefits to the business. Requirements: Excellent skills in Databricks and PySpark. Excellent experience with SQL and T-SQL programming. Experience designing, developing and maintaining data warehouses and data lakes. Knowledge of Azure technologies including Data …