Cloud SW ecosystem to help accelerate this growth! Responsibilities: Develop performance tests on a variety of workloads including MySQL, PostgreSQL, NGINX, Redis, MongoDB, Cassandra, Spark, ML, etc. Deliver workload performance analysis reports in written and presentation form. Collaborate with external partners to showcase Arm technology. Collaborate with internal partners more »
Data, Model and Infrastructure. Minimum 3 years of hands-on experience implementing AI/ML solutions and platform tooling for Data Science. Expert in Spark SQL and PySpark (Python and/or R programming language), which includes experience in libraries such as Pandas, scikit-learn, R (tidyverse, glm, caret etc more »
years of Data Architecture. Experience with cloud-based data platforms (e.g., AWS, Azure, Google Cloud Platform). Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka). Experience with marketing analytics tools (e.g., Google Analytics, Adobe Analytics, Salesforce Marketing Cloud) and how to drive key performance indicators. Knowledge of more »
experience developing APIs and microservices. Familiarity with data engineering concepts and tools such as data pipelines, ETL processes, SQL, and big data technologies (e.g., Apache Spark, Hadoop). Strong problem-solving skills, analytical thinking, and attention to detail. Excellent communication and collaboration skills, with the ability to work more »
models, using time-series, machine learning, or deep learning algorithms and techniques. Experience in data science, machine learning, or optimization models Experience in Python, Spark, Scala, or R, using open source frameworks (for example: scikit-learn, TensorFlow, PyTorch) You are knowledgeable of databases, data warehouse design, cloud storage, and more »
and cloud providers like AWS, Azure, GCP or others. Experience in key technologies to be leveraged by the team including Java, Python/PySpark, Spark, CI/CD technologies Experience in Machine Learning Platforms and Data Science Technologies Experience working with a variety of data platforms such as S3 more »
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Experian Ltd
Redshift, DynamoDB, Glue and SageMaker Infrastructure-as-Code tools and approaches (we use the AWS CDK with CloudFormation) Data processing frameworks such as pandas, Spark and PySpark Machine learning concepts like model training, model registry, model deployment and monitoring Development and CI/CD tools (we use GitHub, CodePipeline more »
network programming. Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, Nate and the like. Hands-on with infrastructure-as-code tools and automation, such as Terraform, Ansible, or Helm. The role Tech Lead responsible more »
or Rust. Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, Nate etc. Hands-on experience with IaC tools and automation, such as Terraform, Ansible, or Helm. Active engagement or contributions to the open-source more »
City of London, London, United Kingdom Hybrid / WFH Options
TECHNOLOGY RECWORKS LIMITED
sponsors) Knowledge and experience of the following would be advantageous: Knowledge of Enterprise Architecture Frameworks Good knowledge of Azure DevOps Pipelines Strong experience in the Apache Spark framework Previous experience in designing and delivering data warehouse and business intelligence solutions using on-premises Microsoft stack (SSIS, SSRS, SSAS) Knowledge more »
Manchester, North West, United Kingdom Hybrid / WFH Options
TECHNOLOGY RECWORKS LIMITED
sponsors) Knowledge and experience of the following would be advantageous: Knowledge of Enterprise Architecture Frameworks Good knowledge of Azure DevOps Pipelines Strong experience in the Apache Spark framework Previous experience in designing and delivering data warehouse and business intelligence solutions using on-premises Microsoft stack (SSIS, SSRS, SSAS) Knowledge more »
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: Proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming more »
Lead Data Engineer (Director) - Individual contributor - Azure, Data Factory, Databricks, Apache Spark - London Based I am hiring for a Lead Data Engineer for a crucial role within one of my Investment Bank clients in London. This role is at Director level as they require a very senior candidate … Leading data engineering practices Support current applications Introduce AI practices to the team/project Communicate key successes with stakeholders Key Skills: Azure Databricks Apache Spark Data Science, AI, ML Certifications or continued upskilling/contribution to blog posts within Data & AI beneficial but not essential. This is a … without sponsorship, if you are interested please apply or email me directly - aaron.dhammi@nicollcurtin.com Lead Data Engineer (Director) - Individual contributor - Azure, Data Factory, Databricks, Apache Spark - London Based more »
Data Scientists and Service Engineering teams Experience with design, development and operations that leverages deep knowledge in the use of services like Amazon Kinesis, Apache Kafka, Apache Spark, Amazon SageMaker, Amazon EMR, NoSQL technologies and other 3rd parties Develop and define key business questions and to build … a related field Experience of Data platform implementation, including 3+ years of hands-on experience in implementation and performance tuning Kinesis/Kafka/Spark/Storm implementations Experience with analytic solutions applied to the Marketing or Risk needs of enterprises Basic understanding of machine learning fundamentals Ability to … take Machine Learning models and implement them as part of data pipeline IT platform implementation experience Experience with one or more relevant tools (Flink, Spark, Sqoop, Flume, Kafka, Amazon Kinesis) Experience developing software code in one or more programming languages (Java, JavaScript, Python, etc) Current hands-on implementation experience more »
Spark Architect/SME Contract Role - 6 months to begin with and extendable Location: Leeds, UK (min 3 days onsite) Context: Legacy ETL code (for example, DataStage) is being refactored into PySpark using Prophecy low-code/no-code and available converters. Converted code is causing failures/performance issues. … Skills: Spark Architecture – component understanding around Spark Data Integration (PySpark, scripting, variable setting etc.), Spark SQL, Spark Explain plans. Spark SME – Be able to analyse Spark code failures through Spark Plans and make correcting recommendations. Spark SME – Be able to review PySpark … and Spark SQL jobs and make performance improvement recommendations. Spark SME – Be able to understand DataFrames/Resilient Distributed Datasets (RDDs), understand any memory-related problems and make corrective recommendations. Monitoring – Be able to monitor Spark jobs using wider tools such as Grafana to see more »
data engineering or a similar role. > Proficiency in programming languages such as Python, Java, or Scala. > Strong experience with data processing frameworks such as Apache Spark, Apache Flink, or Hadoop. > Hands-on experience with cloud platforms such as AWS, Google Cloud, or Azure. > Experience with data warehousing more »
more details of the position - Ideal Qualifications Must Have - Platform engineer, Azure DevOps and CI/CD tools, Azure Cloud, Microsoft Fabric, Azure Services, Apache Spark, Experience of using IaC (Terraform, APIs), Data Engineer, Big Data, PySpark Solid understanding of data engineering concepts & experience of building and maintaining … DevOps/Agile Experience of managing environments using IaC (Terraform, APIs) Experience of designing robust, secured and compliant platform capabilities. Strong understanding of Apache Spark including its architecture, components & how to create, monitor, optimize & scale Spark jobs. Please send your resumes to raghava.d@s3staff.com for immediate more »
Strong understanding of RESTful APIs and experience with API development and integration. ● Familiarity with database systems (e.g., SQL, NoSQL) and data processing frameworks (e.g., Apache Spark, Apache Beam). ● Excellent problem-solving skills and ability to work in a fast-paced startup environment. ● Strong communication skills and more »
development (ideally AWS) Knowledge and ideally hands-on experience with data streaming, event-based architectures and Kafka Strong communication and interpersonal skills Experience with Apache Spark or Apache Flink would be ideal, but not essential Please note, this role is unable to provide sponsorship. If this role more »
Edinburgh, Central Scotland, United Kingdom Hybrid / WFH Options
Change Digital – Digital & Tech Recruitment
AWS Redshift, and Python Experience with ETL processes, data integration, and data warehousing. Strong SQL skills Experience with Big Data technologies such as Hadoop, Spark, and Kafka Familiarity with cloud platforms (AWS, Azure, Google Cloud) Working knowledge of data visualisation tools (PowerBI, Tableau, Qlik Sense) Additional skills: Client-facing more »
Greater Bristol Area, United Kingdom Hybrid / WFH Options
Anson McCade
and product development, encompassing experience in both stream and batch processing. Designing and deploying production data pipelines, utilizing languages such as Java, Python, Scala, Spark, and SQL. In addition, you should have proficiency or familiarity with: Scripting and data extraction via APIs, along with composing SQL queries. Integrating data more »
tools (e.g., Docker, Kubernetes). CI/CD pipelines and tools (e.g. DBT, Jenkins, GitLab CI) Desirable: Experience with analytics tools and frameworks (e.g., Apache Spark, Hadoop). SQL SageMaker, DataRobot Google Cloud and Azure Data platform metadata driven frameworks to ingest, transform and manage data more »