mindset with a desire to solve technical problems and model/forecast intricate real-life systems • Good knowledge of parallel computing techniques (Python multiprocessing, Apache Spark), and performance profiling and optimisation • Good understanding of data structures and algorithms • The ability to communicate complex technical concepts to those with …
Data Analytics in Azure Synapse Analytics and Azure Analysis Services; Data Ingestion and Storage including Azure Data Factory, Azure Databricks, Azure Data Lake, Kafka and Spark Streaming, Azure Event Hubs/IoT Hub, and Azure Stream Analytics; Experience with object-oriented/object function scripting languages: Python preferred …
a GCP (Senior) Data Engineer and interested in this role, please apply by clicking the link below. Desired Skills and Experience: Python, SQL, Kafka, Spark, Pub/Sub, Google Cloud Platform (BigQuery …
working in an Azure environment with a focus on Azure Data Lake, Databricks, Azure Data Factory and Power BI architectures. Working with Python, SQL, Spark and Databricks, you will build, rebuild and maintain data pipelines as well as have the opportunity to work with CI/CD pipelines, Azure …
Manchester, North West, United Kingdom Hybrid / WFH Options
Dupen Ltd
Linux, APIs, infrastructure design – load balancing, VMs, PostgreSQL, vector DBs. Machine Learning Engineer – desirable skills: Version control (Git), computer vision libraries, Big Data (Hadoop, Spark), Cloud – AWS, Google Cloud, Azure, and a knowledge of secure coding techniques – PCI-DSS, PA-DSS, ISO27001. Note: as there are actually two roles …
Employment Type: Permanent
Salary: £50000 - £60000/annum To £60,000 + range of benefits
Milton Keynes, Bedfordshire, South East, Woolstone, Buckinghamshire, United Kingdom Hybrid / WFH Options
Dupen Ltd
Linux, APIs, infrastructure design – load balancing, VMs, PostgreSQL, vector DBs. Machine Learning Engineer – desirable skills: Version control (Git), computer vision libraries, Big Data (Hadoop, Spark), Cloud – AWS, Google Cloud, Azure, and a knowledge of secure coding techniques – PCI-DSS, PA-DSS, ISO27001. Note: as there are actually two roles …
Employment Type: Permanent
Salary: £50000 - £60000/annum To £60,000 + range of benefits
Google Cloud Composer, AWS Step Functions, Azure Data Factory, UC4, Control-M). * Experience with data quality and validation. * Experience querying massive datasets using Spark, Presto, Hive, Impala, etc. * Experience in optimization of computer-vision applications. * Experience in building highly scalable, performant data pipelines. * Experience with Data Modeling. Morgan …
mining, data analysis, and strong software engineering skills. Strong understanding of Data Engineering. Proficiency in AWS, data warehousing (Snowflake, Databricks, Redshift), big data frameworks (Spark, Kafka), container orchestration platforms (Kubernetes), and data integration/ETL tools. Strong written and verbal communication skills, with the ability to explain technical concepts …
Engineering experience. Demonstrate in-depth knowledge of large-scale data platforms (Databricks, Snowflake) and cloud-native tools (Azure Synapse, Redshift). Experience of analytics technologies (Spark, Hadoop, Kafka). Have familiarity with Data Lakehouse architecture, SQL Server, DataOps, and data lineage concepts …
CI/CD/YAML/ARM/Terraform; MSBI Traditional Stack (SQL, SSAS, SSIS, SSRS); Azure Automation/PowerShell; Azure Stream Analytics/Spark Streaming; Azure Functions/C# .NET; PowerApps; Data Science; Master Data Management/MDS. WHY ADATIS? There’s a long list of reasons, from …
artwork. Proven familiarity with NLP approaches like Word2Vec or BERT, including identifying the right KPIs and objective functions. Experience working with big data systems (Spark, EMR, Kafka, S3, Airflow) and programming languages (Java, Scala, or Python). Experience building in-production machine learning systems. Good understanding of system architecture …
Cheltenham, England, United Kingdom Hybrid / WFH Options
Ripjar
execute. Understand the nuances of dealing with structured and unstructured data, and be experienced in using databases (Mongo ideally). Experience with Linux. Experience with Spark (PySpark), Hadoop or other big data technologies would be beneficial, but not required. Benefits: Why we think you'll enjoy it here: Base Salary …
Staines-Upon-Thames, England, United Kingdom Hybrid / WFH Options
IFS
solutions. Proficiency in data pipeline orchestration across hybrid environments, leveraging the latest in Azure and allied technologies. Expertise in data processing with tools like Spark or Dask, and fluency in Python, Scala, C#, or Java. Expertise in DevOps and CI/CD automation, ensuring seamless deployment with tools like …
of AWS services with the ability to demonstrate working on large engagements * Experience of AWS tools (e.g. Athena, Redshift, Glue, EMR) * Java, Scala, Python, Spark, SQL * Experience of developing enterprise-grade ETL/ELT data pipelines. * Deep understanding of data manipulation/wrangling techniques * Demonstrable knowledge of applying Data … DB/Neo4j/Elastic, Google Cloud Datastore. * Snowflake Data Warehouse/Platform * Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub and Spark Streaming. * Experience of working with CI/CD technologies: Git, Jenkins, Spinnaker, GCP Cloud Build, Ansible, etc. * Experience building and deploying solutions to cloud …
Basingstoke, England, United Kingdom Hybrid / WFH Options
Intec Select
cross-functionally across the business to understand the requirements of the products. Designing and implementing performance-related data ingestion pipelines from multiple sources using Apache Spark. Integrating end-to-end data pipelines, ensuring a high level of quality is maintained. Working with an Agile delivery/DevOps methodology …
management and data governance open-source platform that we will teach you. Other technologies in use in our space: RESTful services, Maven/Gradle, Apache Spark, Big Data, HTML5, AngularJS/ReactJS, IntelliJ, GitLab, Jira. Cloud Technologies: You’ll be involved in building the next generation of finance …
Jenkins, Bamboo, Concourse, etc. Monitoring utilising products such as Prometheus, Grafana, ELK, Filebeat, etc. Observability – SRE. Big Data solutions (ecosystems) and technologies such as Apache Spark and the Hadoop ecosystem. Edge technologies, e.g. NGINX, HAProxy, etc. Excellent knowledge of YAML or similar languages. Desirable Requirements: JupyterHub awareness …
classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau …
level within a typical retail trading environment is key. Experience required: A background in leveraging hands-on skills using tools such as Python, R, Spark, Hadoop, SQL and cloud-based platforms such as GCP, Azure and AWS to manipulate and analyse various data sets in large volumes. Background in …
dynamic. Knowledge and understanding of OTC product bookings (Interest Rate Swaps, Variance Swaps, CDS, etc.). Familiarity with C++ and Big Data tools such as Spark, Kafka, Elastic. Join us and be part of a team that values innovation, collaboration, and excellence. Take your career to new heights with a …
programming language, with experience in leveraging available libraries, like TensorFlow, Keras, PyTorch, Scikit-learn, or others, in dedicated projects. Previous experience in working on Spark, Hive, and SQL. Preferred qualifications, capabilities, and skills: Financial services background; PhD in one of the above disciplines. ABOUT US: J.P. Morgan is a …
platforms; Azure is desirable. Software development experience is desirable. Data architecture knowledge is desirable. API design and deployment experience is desirable. Big data (e.g. Spark) experience is desirable. NoSQL DB experience is desirable. Qualifications: 2+ years of data science experience. Right to work in the UK and/or …