Guildford, England, United Kingdom Hybrid / WFH Options
Hawksworth
warehousing and ETL frameworks Proficiency in working with relational databases (e.g., Oracle, PostgreSQL), Parquet/Delta files and big data technologies (e.g., Synapse, Hadoop, Spark, Kafka) Experience working with Microsoft Azure and associated data services Strong analytical and data interpretation skills, with the ability to communicate findings to technical …
machine learning techniques, deep learning, graph data analytics, statistical analysis, time series, geospatial, NLP, sentiment analysis, pattern detection, etc.) Experience using Python, R or Spark to extract insights from data Knowledge of SQL for accessing and processing data (PostgreSQL preferred but general SQL knowledge more important) Experience using the …
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
tools such as Informatica MDM, Informatica AXON, Informatica EDC, and Collibra • MySQL, SQL Server, Oracle, Snowflake, PostgreSQL and NoSQL databases • Programming languages such as Spark or Python • Amazon Web Services, Microsoft Azure or Google Cloud and distributed processing technologies such as Hadoop Benefits: • Base Salary …
to: Backend technology, Python. Databases like MSSQL. Front-end technology, Java. Cloud platform, AWS. Programming language, JavaScript (React.js) Big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency in …
Azure Synapse Analytics. Strong SQL and Python skills. Experience with data modeling, ETL processes, and data warehousing. Knowledge of big data technologies such as Spark and Hadoop is a plus. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Experience in the healthcare sector is …
and coding environments. Bonus Skills: Python/PHP/Typescript/ReactJS AI/ML models and usage ETL pipelines in AWS (Glue/Apache Spark) API Load testing If you would like more information on the role or would like to apply, please send your CV …
Modelling. Experience with one or more of these programming languages: Python, Scala/Java Experience with distributed data and computing tools, mainly Apache Spark & Kafka Understanding of critical path approaches, how to iterate to build value, engaging with stakeholders actively at all stages. Able to deal …
with Git for version control and project management, alongside some knowledge of Linux/Shell. data platform familiarity - previous experience of working with both Apache Spark and MapReduce data processing and analytics frameworks. and reporting expertise - experience with Tableau, Power BI, Excel alongside notebooks for experiment documentation. What …
manage several tasks/projects concurrently and prioritize work effectively. • Experience in Risk and Finance or Regulatory reporting. • Understanding of Big Data Technologies, Cloudera, Spark • Experience in CI/CD pipeline implementation • Good exposure to Python scripting …
Platforms Must have 8+ years' experience with Relational Databases like Oracle, NoSQL Databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other Open Source). Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management) Must have 3+ …
Engineer, with expertise developing scalable data pipelines. Strong object-oriented programming skills, particularly in Python. Experience with data lakes and data warehousing solutions (Spark, Dataflow, BigQuery). Knowledge of SQL and experience with relational databases, as well as NoSQL databases Familiarity with cloud services (preferably GCP) and understanding …
Engineering experience Demonstrate in-depth knowledge of large-scale data platforms (Databricks, Snowflake) and cloud-native tools (Azure Synapse, Redshift) Experience of analytics technologies (Spark, Hadoop, Kafka) Have familiarity with Data Lakehouse architecture, SQL Server, DataOps, and data lineage concepts …
South East London, London, United Kingdom Hybrid / WFH Options
Stepstone UK
Bachelor's degree in Computer Science or a related field (Master's degree preferred) Nice to have: experience with LLMs, Vector Databases, AWS EMR, Spark, and Python Our commitment: Equal opportunities are important to us. We believe that diversity and inclusion at The Stepstone Group are critical to our …
CI/CD/YAML/ARM/Terraform MSBI Traditional Stack (SQL, SSAS, SSIS, SSRS) Azure Automation/PowerShell Azure Streaming Analytics/Spark Streaming Azure Functions/C# .NET PowerApps Data Science Master Data Management/MDS WHY ADATIS? There’s a long list of reasons, from …
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
of OO programming, software design, i.e., SOLID principles, and testing practices. Knowledge and working experience of Agile methodologies. Proficient with SQL. Familiarity with Databricks, Spark, geospatial data/modelling and insurance are a plus. Exposure to MLOps, model monitoring principles, CI/CD and associated tech, e.g., Docker, MLflow …
model training, evaluation, and productionization. - Strong programming skills in Python, with proficiency in ML frameworks (e.g., TensorFlow, PyTorch) and data engineering tools (e.g., Kafka, Spark). - Expertise in cloud computing platforms (AWS, Azure) and containerization technologies (Docker) for scalable and reliable ML model deployment. - Solid understanding of data privacy …
DevOps/Agile Experience of managing environments using IaC (Terraform APIs) Experience of designing robust, secure and compliant platform capabilities. Strong understanding of Apache Spark including its architecture, components & how to create, monitor, optimize & scale Spark jobs. The Package We offer a competitive salary and a …
Especially MS Azure is recommended as Microsoft Fabric is integrated within Azure services. Experience of designing robust, secure and compliant capabilities. Strong understanding of Apache Spark, including its architecture, components, and how to create, monitor, optimize, and scale Spark jobs. Experienced working in a DevOps/Agile …
Swansea, Wales, United Kingdom Hybrid / WFH Options
CPS Group (UK) Limited
my client will train you): Knowledge of Microsoft SQL Server and packaged BI tools (SSAS and SSIS). Docker, Kubernetes and cloud computing technologies. Apache Kafka and data streaming. Familiarity with Apache Spark or similar data processing tools. Experience developing and maintaining CI/CD pipelines, particularly Azure DevOps …
classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau) …
language, ideally Python but can also be Java or C/C++ SQL experience Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau) Get in touch with Ella Alcott - Ella@engagewithus.com
you tick these 4 boxes? If so, please read on… Must haves: Python Data modelling, data warehousing and ETL frameworks Oracle, PostgreSQL Synapse, Hadoop, Spark, Kafka Exciting times to be part of an enterprise environment like this one, there won't be many on the same scale! The experience …
quality of data. Key Requirements: Strong experience designing data pipelines/warehouses using AWS and Snowflake. Exposure to big data technologies such as Kafka, Spark, or Hadoop. Solid experience with Snowflake, including performance optimisation and cost management. Strong experience with SQL and data modelling. Excellent understanding of AWS architecture …