for statistical analysis, automation, and working with APIs. Comfortable accessing, transforming, analysing, and modelling data using big data frameworks such as Spark, e.g. via PySpark or sparklyr. Capable of researching and developing machine learning models, and of deploying models into production for real-world applications. Understanding of a…
of MLOps and associated tools such as Azure DevOps/GitHub, MLflow, and Azure ML. Experience working with large datasets/big data architectures, particularly PySpark/Databricks. Experience deploying container technologies (e.g. Docker, Kubernetes). Experience playing a lead role on technical AI projects. Excellent communication skills with both technical…
to ensure data accuracy. Preferred Experience: 3+ years in data engineering with strong Python and SQL skills. Knowledge of lakehouse architectures (Dremio, Snowflake, Iceberg, PySpark). Familiarity with AWS services (S3, ECS, EC2/Fargate) is a plus. Experience collaborating with stakeholders to gather business requirements. Knowledge of data…
in cloud-based data platform modeling. Advanced SQL and relational database expertise. Proficiency in cloud data engineering tools and services. Programming skills in Python, PySpark, and T-SQL. Strong DevOps background with CI/CD expertise. Azure Cloud experience. Please apply now for immediate consideration…
City Of London, England, United Kingdom Hybrid / WFH Options
Premier Group Recruitment
data modelling. Technical Skills: Proficient in SQL Server and relational database management. Experience with cloud platforms like Azure (Synapse, Data Lake). Programming in Python, PySpark, and T-SQL. Additional Skills: Familiarity with data analysis tools and workflows. Strong communication and collaboration skills. Experience in fast-paced, team-oriented environments…
Requirements: Excellent SQL and Python scripting skills. Experience designing and developing data warehouses and data lakes/lakehouses. Experience designing solutions involving Databricks and PySpark. Experience with Azure technologies including Data Lake, Data Factory, and Synapse. Experience with data visualisation tools such as Power BI. Knowledge of Agile methodology…
and Synapse SQL pools. Build and maintain dimensional and relational models based on the business requirements. Ensure data model accuracy, scalability, and performance. Use PySpark within Azure Synapse notebooks to extract, transform, and load (ETL/ELT) data from raw formats (e.g., Delta, Parquet, CSV) stored in ADLS Gen2. …
Strong experience in data pipelines and deploying ML models. Preference for experience in retail/marketing, but not required. Tech across: Python, AWS, Databricks, PySpark, A/B testing, MLflow, APIs. If this role looks of interest, please reach out to Joseph Gregory…
machine learning research, methodologies, and technologies, integrating them seamlessly into our workflow. What you bring: High-level expertise in the Python programming language, including PySpark. Proficiency in utilising machine learning libraries and frameworks like PyTorch, ONNX, and XGBoost. Strong understanding of software testing and CI/CD principles and…
lifecycle management using Azure Databricks. Good-to-have skills in containerisation such as Docker and ACR. High-level expertise in the Python programming language, including PySpark. Proficiency in utilising machine learning libraries and frameworks like PyTorch, ONNX, and XGBoost. Strong understanding of software testing and CI/CD principles and…
in a Senior or Lead Data Engineer position. Strong experience working with the Azure Data Platform. Strong Databricks experience. Coding experience with Python/PySpark. This role is based out of the company's office in London, where you will work collaboratively with other members of the engineering team…
Spark, Azure Data Factory, Synapse Analytics). Proven experience in leading and managing a team of data engineers. Proficiency in programming languages such as PySpark, Python (with Pandas if no PySpark), T-SQL, and Spark SQL. Strong understanding of data modeling, ETL processes, and data warehousing concepts. Knowledge of…
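The "Python (with Pandas if no PySpark)" requirement maps to the same transformation patterns in pandas; as a small sketch with invented data, an aggregation plus an enrichment join of the kind used in warehouse fact/dimension work might look like:

```python
# Illustrative pandas equivalent of a typical ETL transformation:
# aggregate a fact table, then enrich it from a dimension table.
# All table, column, and value names here are invented.
import pandas as pd

orders = pd.DataFrame({
    "order_id": ["o1", "o2", "o3"],
    "customer": ["alice", "bob", "alice"],
    "amount":   [120.0, 80.0, 40.0],
})
customers = pd.DataFrame({
    "customer": ["alice", "bob"],
    "region":   ["UK", "DE"],
})

# Aggregate spend per customer, then left-join the dimension attributes
totals = orders.groupby("customer", as_index=False)["amount"].sum()
report = totals.merge(customers, on="customer", how="left")
print(report.to_dict("records"))
```

The same two steps (`groupBy().agg()` and `join()`) translate line for line into the PySpark DataFrame API, which is why the listing treats the two skills as interchangeable.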
techniques, Data Connection, and Security setup. Proficiency in developing data integration pipelines, transformations, pipeline scheduling, Ontology, and applications in Palantir Foundry. Excellent skills in PySpark and Spark SQL for data transformations. Experience in designing and building interactive data applications and developing parameterised, interactive dashboards in Quiver. Desirable skills include…
systems and offer improvements that will help reduce technical/code/engineering debt. Key Skills: Extensive experience with Machine Learning and Spark/PySpark/Ray/Python. Recommendation systems, pattern recognition, data mining, artificial intelligence. Modern parallel computing: distributed clusters, multicore servers, GPUs. Experience with developing…
when needed. What we are looking for: Background in computer science, engineering, information systems, or other data-related technical fields. Experience in Python and PySpark is essential, with additional skills in SQL and Java useful. Knowledge of APIs, RESTful services, and development best practices. What we offer: Base salary…
expertise in data validation for accuracy, completeness, and integrity. JMeter: Experience in API testing using JMeter (load and performance testing experience not required). PySpark: Skilled in scripting for data processing using PySpark. Data Lakes: Hands-on experience with data lake environments. Kafka and StreamSets: Familiarity with Kafka StreamSets…
Job Description POSITION OVERVIEW This is an exciting opportunity to join a leading games publisher as a Data Developer. Working within the Data Services team, you will be collaborating with team members at our company and across our studios. You will…
City of London, London, United Kingdom Hybrid / WFH Options
Nigel Frank International
Lead Data Engineer - London - Azure - Hybrid - £95k. Great opportunity for an experienced data engineering lead to join a leading company within the legal sector, who are putting data at the forefront of every step they take in their industry! If…
a Data Engineer, with a focus on AWS. Proficiency in AWS services like Redshift, S3, Glue, and Lambda. Strong programming skills in Python or PySpark. Nice to have: AWS certifications. Interviews are already underway with limited slots remaining; don't miss out on your opportunity to secure this amazing … Get in touch ASAP by contacting me at (url removed) or on (phone number removed)! Data Engineer, Senior Data Engineer, Developer, AWS, Apache, Python, PySpark…
Machine Learning Engineer (Data Engineering Background). Paying up to £80,000 + 10% bonus. Remote-first policy, with an office in Central London if preferred. Two-stage interview process. One of La Fosse's best clients, who are an industry leader within…