Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Capgemini
AI Platforms: Google Cloud Platform, Amazon Web Services, Microsoft Azure, Databricks. Experience in one or more of the listed languages or packages: Python, R, PySpark, Scala, Power BI, Tableau. Proven experience in successfully delivering multiple complex, data-rich workstreams in parallel to support wider strategic ambitions, and supporting others in …
Newbury, Berkshire, United Kingdom Hybrid / WFH Options
Intuita - Vacancies
effectiveness, including Azure DevOps. Considerable experience designing and building operationally efficient pipelines utilising core Azure components such as Azure Data Factory, Azure Databricks and PySpark. Proven experience in modelling data through a medallion-based architecture, with curated dimensional models in the gold layer built for analytical use. Strong …
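For context, the medallion architecture this listing asks for moves data through bronze (raw), silver (cleaned) and gold (curated, analytics-ready) layers. A minimal, runtime-agnostic sketch of that flow in plain Python — standing in for the Azure Databricks/PySpark pipeline the role describes; every table and field name here is illustrative, not taken from the listing:

```python
# Medallion-style flow: bronze (raw) -> silver (cleaned) -> gold (dimensional).
# Plain-Python stand-in for a Databricks/PySpark pipeline; names are invented.

bronze = [  # raw ingested records, warts and all
    {"order_id": "1", "amount": "10.50", "country": "uk"},
    {"order_id": "2", "amount": None, "country": "UK"},   # incomplete
    {"order_id": "1", "amount": "10.50", "country": "uk"}, # duplicate
]

def to_silver(rows):
    """Deduplicate, drop incomplete rows, and normalise types and casing."""
    seen, out = set(), []
    for r in rows:
        if r["amount"] is None or r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({"order_id": r["order_id"],
                    "amount": float(r["amount"]),
                    "country": r["country"].upper()})
    return out

def to_gold(rows):
    """Curated aggregate built for analytical use: revenue by country."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'UK': 10.5}
```

In a real Databricks setup each layer would be a Delta table and the functions would be PySpark DataFrame transformations (`dropDuplicates`, `groupBy().agg()`), but the layering logic is the same.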
and access controls. Monitor and optimize performance of data workflows using CloudWatch, AWS Step Functions, and performance-tuning techniques. Automate data processes using Python, PySpark, SQL, or AWS SDKs. Collaborate with cross-functional teams to support AI/ML, analytics, and business intelligence initiatives. Maintain and enhance CI/… a cloud environment. Required Skills & Qualifications: 5+ years of experience in data engineering with a strong focus on AWS cloud technologies. Proficiency in Python, PySpark, SQL, and AWS Glue for ETL development. Hands-on experience with AWS data services, including Redshift, Athena, Glue, EMR, and Kinesis. Strong knowledge of …
Future Talent Pool - GCP Data Engineer, London, hybrid role - digital Google Cloud transformation programme. Proficiency in programming languages such as Python, PySpark and Java; develop ETL processes for data ingestion & preparation; SparkSQL; Cloud Run, Dataflow, Cloud Storage; GCP BigQuery; Google Cloud Platform; Data Studio; Unix/Linux platform; version control tools …
City of London, London, United Kingdom Hybrid / WFH Options
Syntax Consultancy Limited
with proficiency in designing and implementing CI/CD pipelines in Cloud environments. Excellent practical expertise in performance tuning and system optimisation. Experience with PySpark and Azure Databricks for distributed data processing and large-scale data analysis. Proven experience with web frameworks, including knowledge of Django and experience with …
Hadoop - Must Have. Communication - Must Have. Banking/Capital Markets Domain - Good to have. Note: the candidate should know Scala/Python (core) as the coding language; a PySpark-only profile will not help here. Scala/Spark: a good Big Data resource with the below skill set: Spark, Scala, Hive/HDFS/HQL, Linux …
Knowledge of Data Warehouse/Data Lake architectures and technologies. Strong working knowledge of a language for data analysis and scripting, such as Python, PySpark, R, Java, or Scala. Experience with any of the following would be desirable but not essential: Microsoft's Fabric data platform, experience with ADF …
apply! A degree in Mathematics, Engineering, Statistics, Computer Science, Physics, or a related field; an advanced degree is highly preferred. Proficient in Python and PySpark; experience with SQL or similar querying languages. Solid foundation in machine learning principles, including model evaluation, optimization, and deployment best practices. Self-motivated, collaborative …
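The "model evaluation" fundamentals this listing names boil down to scoring a model on data it was not fitted on. A minimal sketch in plain Python — the "model" is a trivial threshold rule and all data is invented for illustration:

```python
# Evaluate on a held-out split, never on the training data.
# The threshold "model" and the data are invented for this sketch.

def train_test_split(rows, test_ratio=0.25):
    """Deterministic split: the last portion is held out for evaluation."""
    cut = int(len(rows) * (1 - test_ratio))
    return rows[:cut], rows[cut:]

def accuracy(y_true, y_pred):
    """Fraction of matching labels across a held-out set."""
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]  # (feature, label) pairs
train_rows, test_rows = train_test_split(data)

# "Fit" a trivial threshold classifier using the training portion only.
threshold = sum(x for x, _ in train_rows) / len(train_rows)
preds = [1 if x >= threshold else 0 for x, _ in test_rows]
print(accuracy([y for _, y in test_rows], preds))  # 1.0
```

The same discipline scales up: with scikit-learn or PySpark MLlib the split and metric come from the library, but evaluation still happens strictly on the held-out rows.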
Analyst: Must hold experience with data analytic techniques, ideally applied to real-life objects. Must hold a year's professional experience using Python/PySpark/Pandas. Experience with processing data and working with databases/data lakes (SQL). Strong understanding of data manipulation, analysis and processing. Ability to …
using data engineering, statistical, and ML/AI approaches to uncover data patterns and build models. We use the Microsoft tech stack, including Azure Databricks (PySpark, Python), and we are expanding our data science capabilities. To be successful in the role, you will need to have extensive experience in data …
suit an experienced Director, Senior Manager or Lead of Data Engineering. Technical Expertise: a background of strong working knowledge of technologies including Python, SQL, PySpark and SAS. Education: Bachelor's degree in Business Administration, IT, Data Science, or a related field (Master's preferred). Further to this, any professional …
Glasgow, Renfrewshire, United Kingdom Hybrid / WFH Options
Cisco Systems, Inc
field. Experienced with cloud-based data processing platforms such as AWS and/or Databricks. You have firm software development skills with Python/PySpark, Terraform, Git, CI/CD, Docker. Comfortable with relational and NoSQL databases/datastores such as Elasticsearch. Familiar with the threat landscape and threat …
aspect of working with data in ADLS is the transformation and modeling process. Companies can leverage Azure Data Factory or Databricks, using languages like PySpark and Scala, to create efficient data processing and transformation workflows. These workflows are designed to handle both batch and streaming data seamlessly. Furthermore, organizations …
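The "batch and streaming handled seamlessly" point above usually means one shared transformation applied to both a bounded dataset and an incremental feed. A plain-Python sketch of that idea, with generators standing in for the ADF/Databricks (PySpark Structured Streaming) runtime; all record fields are illustrative:

```python
# One transformation, two execution modes: batch (list in, list out) and
# streaming (records yielded one at a time). Names are invented for the sketch.

def transform(record):
    """Shared transformation: normalise one raw event."""
    return {"user": record["user"].strip().lower(),
            "clicks": int(record["clicks"])}

def run_batch(records):
    """Bounded input: transform everything at once."""
    return [transform(r) for r in records]

def run_stream(records):
    """Unbounded input: transform records as they arrive."""
    for r in records:  # in PySpark this would be a micro-batch sink
        yield transform(r)

batch = run_batch([{"user": " Ada ", "clicks": "3"}])
stream = list(run_stream(iter([{"user": "Grace", "clicks": "5"}])))
print(batch, stream)
```

Keeping the transformation logic in one function (or one PySpark DataFrame expression) is what makes the batch and streaming paths behave identically.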
huge datasets. Has a solid understanding of blockchain ecosystem elements like DeFi, exchanges, wallets, smart contracts, mixers and privacy services. Bonus experience: Databricks and PySpark; analysing blockchain data; building and maintaining data pipelines; deploying machine learning models; use of graph analytics and graph neural networks; following funds on chain …
Leicester, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
IO Associates
machine learning. Experience with deep learning or generative AI is a plus but not essential. Proficiency in (Spark)SQL and Python. Experience with PySpark is beneficial but not required. Experience designing and implementing robust testing frameworks. Strong analytical skills with keen attention to detail. Excellent communication skills …
personal development. What's in it for you? Up to £90k, bonus scheme. Skills and Experience: experience in using modern technologies such as Python, PySpark and Databricks; experience in using advanced SQL; experience with cloud computing, preferably Azure; experience in working on loyalty scheme projects. If you would like to …
London, South East England, United Kingdom Hybrid / WFH Options
Hexegic
documentation and clarifying complex concepts. What we are looking for: a background in computer science, engineering, IT or similar technical fields; experience in Python and PySpark, as well as SQL and similar tools; knowledge of APIs, RESTful services and development best practices. What's in it for you? Base salary …
and deliver data solutions to solve important problems. Prior experience in Palantir AIP, Foundry and Gotham would be advantageous. Previous development experience in PySpark, TypeScript and front-end frameworks like React would also be advantageous. As a Consultant Engineer, your responsibilities will include: assist in the design, development …
data technologies. Experience in designing, managing and overseeing task assignment for technical teams; mentoring data engineers. Strong exposure to SQL, Azure Data Factory, Databricks & PySpark is a must-have. Experience in medallion silver-layer modelling. Experience in an Agile project environment. Insurance experience (Policy and Claims). Understanding of DevOps, continuous …
Spark, Azure Data Factory, Synapse Analytics). Proven experience in leading and managing a team of data engineers. Proficiency in programming languages such as PySpark, Python (with Pandas if no PySpark), T-SQL, and SparkSQL. Strong understanding of data modeling, ETL processes, and data warehousing concepts. Knowledge of …