modern NLP methods required. Specifically: Transformer models (e.g. BERT), LLMs, RAG and fine-tuning, the OpenAI stack, LangChain, etc. Experience with Big Data technologies a plus: PySpark, H2O.ai, cloud AI platforms, Kubernetes. Must be able to translate business requirements into analytical problems. Must have proven ability to merge and transform disparate …
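As a hedged illustration of the Transformer/BERT skill set this ad names, here is a minimal sketch using Hugging Face's transformers library; the model name and example text are assumptions for the sketch, not taken from the ad.

```python
# Minimal sketch: running a BERT-family model via Hugging Face transformers.
# The model name and sample sentence are illustrative assumptions.
from transformers import pipeline

# A fine-tuned DistilBERT sentiment model; weights download on first run.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Transformers make short work of NLP tasks."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```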
designing and constructing robust data pipelines using the best of the open-source data engineering and scientific Python toolset. Tech stack: Airbyte, AWS Glue, Pandas, PySpark, Delta Lake, PostgreSQL. The team follows agile ways of working and you engage with various stakeholders across the business. The role is hybrid …
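To make the PySpark-to-Delta Lake part of that stack concrete, here is a minimal sketch of the extract-transform-load pattern it implies; the bucket paths, column names, and CSV source are invented for illustration, and the Delta write assumes the delta-spark package is configured on the cluster.

```python
# Illustrative PySpark pipeline step: raw CSV in, cleaned Delta table out.
# Paths and column names are assumptions, not from the ad.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# Extract: read raw files landed by an ingestion tool such as Airbyte.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: light cleaning with the DataFrame API.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
)

# Load: write to Delta Lake (requires delta-spark configured on the session).
clean.write.format("delta").mode("overwrite").save("s3://example-bucket/curated/orders/")
```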
the Financial Services with experience in… Statistical Models, Computer Vision, Predictive Analytics, Data Visualization, Large Language Models (LLMs), NLP, AI, Machine Learning, MLOps, Python, PySpark, Azure, Agile, Metabase, then please apply. You can email your CV to matt@hawksworthuk.com or message me on LinkedIn. Ideally you'll have plenty of …
Data Engineer. Remote working. Salary circa £50,000 - £60,000. Databricks, PySpark, SQL, Azure. We are looking for a talented Data Engineer to join one of the UK's leading research and law-ranking companies at an exciting time of growth. Build new products, engineer new solutions, create systems …
Employment Type: Permanent
Salary: £50,000 - £60,000/annum plus remote working and benefits
snowflake schemas. Knowledge of DevOps practices within a Power BI environment. Familiarity with Microsoft Fabric & Databricks. Expertise in SQL databases, data engineering with Python and PySpark, and knowledge of geospatial concepts and tools. As part of this engagement, you will work on initiatives that redefine business efficiency through AI. You …
experience-related problems such as workforce management, demand forecasting, or root cause analysis. Strong visualisation skills, including experience with Tableau. Familiarity with Databricks and PySpark for data manipulation and analysis. Familiarity with Git-based source control methodologies, including branching and pull requests. A self-starter, passionate about converting data …
Mart. Utilize vector databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement machine learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI/… environments, Azure Data Lake, Azure Data Factory, microservices architecture. Experience with vector databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with MLOps tools such …
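As a sketch of the "ML models with PySpark" requirement above, here is a toy model trained with Spark's built-in pyspark.ml API; the data, feature names, and choice of logistic regression are all assumptions made for the example.

```python
# Hypothetical pyspark.ml training sketch; data and columns are invented.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

df = spark.createDataFrame(
    [(0.0, 1.2, 0), (1.5, 0.3, 1), (0.7, 0.9, 0), (2.1, 0.1, 1)],
    ["feat_a", "feat_b", "label"],
)

# Assemble feature columns into the single vector column MLlib expects.
assembler = VectorAssembler(inputCols=["feat_a", "feat_b"], outputCol="features")
model = Pipeline(stages=[assembler, LogisticRegression()]).fit(df)
model.transform(df).select("label", "prediction").show()
```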
candidate needs the experience and confidence to speak up when several teams can be involved in delivering the project. There is a massive emphasis on PySpark and Databricks for this particular role. Technical Skills Required: Azure (ADF, Functions, Blob Storage, Data Lake Storage, Azure Databricks), Databricks, Spark, Delta Lake … SQL, Python, PySpark, ADLS. Day-to-Day Responsibilities: Extensive experience in designing, developing, and managing end-to-end data pipelines, ETL (Extract, Transform, Load), and ELT (Extract, Load, Transform) solutions. Maintain a proactive approach to staying updated with emerging technologies and a strong desire to continuously learn and adapt …
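For the Databricks/Delta Lake pipeline work this role emphasises, a common incremental-load step is a Delta MERGE upsert; below is a hedged sketch using the delta-spark API, where the abfss:// paths and the customer_id join key are placeholders, not details from the ad.

```python
# Hedged sketch: incremental upsert into a curated Delta table on ADLS.
# Storage paths and the join key are illustrative assumptions.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("upsert-sketch").getOrCreate()

updates = spark.read.format("parquet").load(
    "abfss://landing@examplestorage.dfs.core.windows.net/customers/"
)

target = DeltaTable.forPath(
    spark, "abfss://curated@examplestorage.dfs.core.windows.net/customers/"
)

# MERGE keeps the curated table in step with new and changed records.
(target.alias("t")
       .merge(updates.alias("u"), "t.customer_id = u.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```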
scientific Python toolset. Our tech stack includes Airbyte for data ingestion, Prefect for pipeline orchestration, AWS Glue for managed ETL, along with Pandas and PySpark for pipeline logic implementation. We utilize Delta Lake and PostgreSQL for data storage, emphasizing the importance of data integrity and version control in our … testing frameworks. Proficiency in Python and familiarity with modern software engineering practices, including 12-factor, CI/CD, and Agile methodologies. Deep understanding of Spark (PySpark), Python (Pandas), orchestration software (e.g. Airflow, Prefect), and databases, data lakes and data warehouses. Experience with cloud technologies, particularly AWS cloud services, with a …
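To illustrate the Prefect orchestration this stack describes, here is a minimal flow sketch using Prefect 2.x's flow/task decorators; the extract/transform/load bodies are toy stand-ins, not the company's actual pipeline logic.

```python
# Illustrative Prefect 2.x flow; task bodies are invented for the sketch.
import pandas as pd
from prefect import flow, task

@task
def extract() -> pd.DataFrame:
    # Stand-in for an Airbyte- or Glue-fed source.
    return pd.DataFrame({"id": [1, 2, 2], "amount": [10.0, 5.5, 5.5]})

@task
def transform(df: pd.DataFrame) -> pd.DataFrame:
    return df.drop_duplicates(subset="id")

@task
def load(df: pd.DataFrame) -> None:
    # A real pipeline would write to Delta Lake or PostgreSQL here.
    print(df.to_string(index=False))

@flow
def pipeline():
    load(transform(extract()))

if __name__ == "__main__":
    pipeline()
```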
experience leading a data engineering team. Key Tech: - AWS (S3, Glue, EMR, Athena, Lambda) - Snowflake, Redshift - dbt (Data Build Tool) - Programming: Python, Scala, Spark, PySpark or Ab Initio - Data pipeline orchestration (Apache Airflow) - Knowledge of SQL. This is a 6-month initial contract with a trusted client of ours. …
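For the Apache Airflow orchestration this listing names, a minimal DAG might look like the sketch below, written with Airflow 2.x's TaskFlow decorators; the DAG name, schedule, and task bodies are placeholders.

```python
# Sketch of an Airflow 2.x DAG using the TaskFlow API; all specifics
# (name, schedule, task logic) are assumptions for illustration.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_pipeline():
    @task
    def extract() -> list[int]:
        return [1, 2, 3]

    @task
    def load(rows: list[int]) -> None:
        print(f"loaded {len(rows)} rows")

    load(extract())

example_pipeline()
```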
for data engineering, e.g. Azure Functions * Core skills in coding with SQL, Python and Spark * Proven experience using Databricks, e.g. lakehouse, Delta Live Tables, PySpark etc. …
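As a hedged sketch of the Delta Live Tables experience mentioned above: DLT pipelines are defined declaratively in Python, as below. Note this only runs inside a Databricks DLT pipeline (where the spark session is provided), and the paths and table names here are invented.

```python
# Hedged Delta Live Tables sketch; runs only inside a Databricks DLT
# pipeline, not as a standalone script. Paths are placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events landed in the lakehouse bronze layer.")
def bronze_events():
    return spark.read.format("json").load("/mnt/landing/events/")

@dlt.table(comment="Cleaned events for the silver layer.")
def silver_events():
    return dlt.read("bronze_events").where(F.col("event_id").isNotNull())
```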
Greater London, England, United Kingdom Hybrid / WFH Options
Agora Talent
come with scaling a company • The ability to translate complex and sometimes ambiguous business requirements into clean and maintainable data pipelines • Excellent knowledge of PySpark, Python and SQL fundamentals • Experience in contributing to complex shared repositories. What’s nice to have: • Prior early-stage B2B SaaS experience involving client …
start interviewing ASAP. Responsibilities: Azure cloud data engineering using Azure Databricks; data warehousing; data engineering. Very strong with the Microsoft stack. ESSENTIAL: knowledge of PySpark clusters; Python & C# scripting experience; experience of message queues (Kafka); experience of containerization (Docker); FINANCIAL SERVICES EXPERIENCE (energy/commodities trading). If you have …
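Combining two of the essentials above (PySpark and Kafka), a typical pattern is Spark Structured Streaming reading from a Kafka topic; the sketch below assumes the spark-sql-kafka connector is on the classpath, and the broker address and topic name are placeholders.

```python
# Illustrative Structured Streaming read from Kafka; broker and topic
# are assumptions, and the spark-sql-kafka package must be available.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-sketch").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "trades")
         .load()
)

# Kafka delivers bytes; cast the value column before processing.
query = (
    stream.select(F.col("value").cast("string").alias("payload"))
          .writeStream.format("console")
          .start()
)
query.awaitTermination()
```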
building a massively distributed cloud-hosted data platform that would be used by the entire firm. Tech Stack: Python/Java, Dremio, Snowflake, Iceberg, PySpark, Glue, EMR, dbt and Airflow/Dagster. Cloud: AWS, Lambdas, ECS services. This role would focus on various areas of Data Engineering including: End …
STEM subject, e.g. Mathematics, Statistics or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. NEXT STEPS: If this role looks of interest, please reach out to Joseph Gregory. …
Leicester, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject, e.g. Mathematics, Statistics or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. BENEFITS: Pension scheme, gym membership, share options, bonus, hybrid working. HOW TO APPLY: Register …
Nottingham, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject, e.g. Mathematics, Statistics or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. BENEFITS: Pension scheme, gym membership, share options, bonus, hybrid working. HOW TO APPLY: Register …
directly with clients. Supporting clients in platform discovery, integration, training, and collaboration on data science projects. Proficiency in technical skills, particularly Python, R, SQL, PySpark, and JavaScript. Assisting users in mastering the platform. Analysing diverse data and ML applications. Providing strategic insights to ensure customer success. Collaborating with customers …
Requirements: 3+ years as a Business Analyst. Proficiency in ERP/CRM solutions and data, including Workday HCM. Strong Azure data skills. Proficiency in PySpark, Java, or Python. Familiarity with Kimball data modeling and SQL. Experience with Power BI and CI/CD practices. Nice to Have: B2B supply …
of 4 years' commercial experience. Strong proficiency in AWS and relevant technologies. Good communication skills. Preferably experience with some or all of the following: PySpark, Kubernetes, Terraform. Experience in a start-up/scale-up or fast-paced environment is desirable. HOW TO APPLY: Please register your interest by …
London, England, United Kingdom Hybrid / WFH Options
Harnham
continual improvement of their performance. REQUIRED SKILLS AND EXPERIENCE: Proficiency in Python, including its associated data and machine learning packages such as numpy, pandas, pyspark, matplotlib, scikit-learn, Keras, TensorFlow, and more. Experience working with object-oriented programming languages. Experience in forecasting and propensity modelling. Expertise in utilizing SQL for …
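As an illustration of the propensity-modelling skill listed above, here is a toy sketch with pandas and scikit-learn; the features, synthetic data, and choice of logistic regression are fabrications for the example, not the employer's approach.

```python
# Toy propensity-style model using pandas and scikit-learn; the features
# and data are fabricated purely for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "recency_days": rng.integers(1, 365, 500),
    "orders_last_year": rng.integers(0, 20, 500),
})
# Synthetic target: frequent, recent customers are likelier to buy again.
df["bought"] = ((df.orders_last_year > 5) & (df.recency_days < 90)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["recency_days", "orders_last_year"]], df["bought"], random_state=0
)
model = LogisticRegression().fit(X_train, y_train)
# predict_proba yields the propensity score for each customer.
print(model.predict_proba(X_test)[:5, 1])
```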
Leicestershire, England, United Kingdom Hybrid / WFH Options
Harnham
continual improvement of their performance. REQUIRED SKILLS AND EXPERIENCE: Proficiency in Python, including its associated data and machine learning packages such as numpy, pandas, pyspark, matplotlib, scikit-learn, Keras, TensorFlow, and more. Experience working with object-oriented programming languages. Experience in forecasting & A/B testing. Leadership. Expertise in utilizing SQL …
An industry-leading organisation are looking for an experienced Senior Data Engineer who is well-versed in Databricks, PySpark and SQL to join their growing Data Engineering team. This role can be largely remote, with some travel to their Central London Head Office, likely around 2 times per month. … a wide variety of technical work. Your primary focus will be working with Databricks - you will be building data pipelines in Databricks, coding in PySpark, and supporting internal applications as they move away from a legacy application into new bespoke applications which rely on Databricks. This is a … to provide thought leadership, and to innovate and experiment with new technologies to deliver benefits to the business. Requirements: Excellent skills in Databricks and PySpark. Excellent experience with SQL and T-SQL programming. Experience designing, developing and maintaining data warehouses and data lakes. Knowledge of Azure technologies including Data …
to the ideas and delivery of the strategy; Support data queries in SQL (T-SQL/ANSI-SQL) and support data pipelines using PySpark/Python, Databricks and AWS (Athena, Glue, S3); Analyse data needs and coordinate new data requests and data change requests. Work with clients to … Pivot Charts). Experience of supporting Data Warehousing. Basic SQL experience and understanding of XML/JSON files. Basic knowledge/experience of either Python, PySpark, R, Scala etc. Experience using SQL, Power BI, Tableau or similar tools. Preferred: Knowledge and experience of Financial Systems Support (Access Dimensions) or ServiceNow Support … and Administration. Strong knowledge of using SQL, Power BI, Tableau etc. Strong knowledge of using Python, PySpark, R, Scala etc. Experience of supporting IT Applications and/or Platforms. Experience of cloud data solutions (AWS, Google, Microsoft Azure), AWS preferred. Degree in Business Analytics or Technology, Computer Science, Math, Statistics …
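For the Athena side of the AWS stack this role supports, SQL queries are commonly submitted programmatically via boto3; below is a hedged sketch where the region, database, table, and S3 output location are all placeholders.

```python
# Hedged sketch: submitting a supporting SQL query to AWS Athena via boto3.
# Region, database, table, and output location are placeholder assumptions.
import boto3

athena = boto3.client("athena", region_name="eu-west-2")

response = athena.start_query_execution(
    QueryString="SELECT invoice_id, amount FROM finance.invoices LIMIT 10",
    QueryExecutionContext={"Database": "finance"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
print(response["QueryExecutionId"])
```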