Greater London, England, United Kingdom Hybrid / WFH Options
Anson McCade
encompassing experience in both stream and batch processing. Proficiency in the design and deployment of production data pipelines, involving languages like Java, Python, Scala, Spark, and SQL. You should also have some, if not all, of the following: capability in scripting, data extraction via APIs, and the composition of …
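The scripting and API-extraction skills this listing asks for can be sketched with a minimal, hedged Python example using only the standard library; the endpoint URL, payload shape, and field names below are hypothetical illustrations, not any specific employer's API:

```python
import json
from urllib.request import Request, urlopen

API_URL = "https://api.example.com/v1/events"  # hypothetical endpoint

def fetch_json(url: str) -> dict:
    """Fetch and decode a JSON payload from an API endpoint."""
    with urlopen(Request(url, headers={"Accept": "application/json"})) as resp:
        return json.load(resp)

def extract_records(payload: dict) -> list:
    """Pull only the fields a downstream pipeline needs from a raw payload."""
    return [
        {"id": item["id"], "ts": item["timestamp"], "value": item["metrics"]["value"]}
        for item in payload.get("results", [])
    ]

# Offline demonstration on a sample payload (no network call needed):
sample = {"results": [{"id": 1, "timestamp": "2024-01-01T00:00:00Z",
                       "metrics": {"value": 42.0}}]}
print(extract_records(sample))
```

In a real pipeline the `extract_records` step is the part worth unit-testing, since it isolates the API's schema from the rest of the code.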
Flask, Tornado or Django; Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition, development of data sets, and improving data quality. Preparing data for predictive and prescriptive modelling. Hands …
City Of London, England, United Kingdom Hybrid / WFH Options
RJC Group
Azure or AWS experience. Data access methods (SQL, GraphQL, APIs). Beneficial requirements: experience with data science tools and algorithms; manipulation technologies (e.g., WebSockets, Kafka, Spark); TensorFlow, Pandas, PySpark and scikit-learn would be great. Salary up to £75K + 20% bonus and benefits package. We have interview slots lined …
Google Cloud Professional Cloud Architect or Professional Cloud Developer certification. Highly desirable: hands-on experience with ETL tools, Hadoop-based technologies (e.g., Spark), and batch/streaming data pipelines (e.g., Beam, Flink, etc.). Proven expertise in designing and constructing data lakes and data warehouse solutions utilising technologies …
Modelling. Experience with one or more of these programming languages: Python, Scala/Java. Experience with distributed data and computing tools, mainly Apache Spark & Kafka. Understanding of critical path approaches and how to iterate to build value, engaging actively with stakeholders at all stages. Able to deal …
London, England, United Kingdom Hybrid / WFH Options
Harnham
DevOps experience in CI/CD. Experience using TensorFlow, Kubernetes, MLflow, Kafka, and Airflow. Experience using Python is a must (tools like AWS and Spark are beneficial). Excellent communication skills and team and colleague engagement. A keen interest in problem-solving and in using scalable machine learning to solve the …
Learn, TensorFlow, PyTorch). Solid understanding of ML and data pipeline architectures and best practices. Experience with big data technologies and distributed computing (e.g., Spark, Hadoop) is a plus. Proficient in SQL, with experience with relational databases. Strong analytical and problem-solving skills, with keen attention to detail. …
as TensorFlow, PyTorch, or Scikit-learn. Strong knowledge of statistical modelling, data mining, and data visualization techniques. Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, GCP, Azure). Strong problem-solving skills and the ability to think critically and creatively. Excellent analytical skills with …
London, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
ETL processes, and data warehousing solutions. Programming: Utilize Python, Java, Scala, or Go to build and optimize data pipelines. Distributed Processing: Work with Hadoop, Spark, and other platforms for large-scale data processing. Real-Time Data Streaming: Develop and manage pipelines using CDC, Kafka, and Apache Spark. Database …
they are on the lookout for 2 AWS Data Engineers to come in on a contract basis. Key Skills/Requirements: Must have Python & Spark experience. Must have strong AWS experience. Must have Terraform experience. SQL & NoSQL experience. Have built out data warehouses and built data pipelines. Strong Databricks & Snowflake …
Leeds, England, United Kingdom Hybrid / WFH Options
Damia Group
Spark Scala Developer - Scala/Apache Spark - Hybrid/Leeds - £450-£550. Spark Scala Developer to join our client, one of the biggest financial services organizations in the world, with operations in more than 38 countries. It has an IT infrastructure of 200,000+ servers, … 000+ database instances, and over 150 PB of data. As a Spark Scala Engineer, you will be responsible for refactoring legacy ETL code, for example DataStage into PySpark, using Prophecy low-code/no-code and available converters; the converted code is causing failures and performance issues. Your responsibilities: as a Spark Scala Engineer working for the GDT (Global Data Technology) team, you will be responsible for designing, building, and maintaining data pipelines using Apache Spark and Scala, and working on an enterprise-scale Cloud infrastructure and Cloud Services in one of the Clouds (GCP …
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
tools such as Informatica MDM, Informatica AXON, Informatica EDC, and Collibra • MySQL, SQL Server, Oracle, Snowflake, PostgreSQL and NoSQL databases • Programming languages such as Spark or Python • Amazon Web Services, Microsoft Azure or Google Cloud, and distributed processing technologies such as Hadoop. Benefits: • Base salary …
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
of OO programming, software design (i.e., SOLID principles), and testing practices. Knowledge and working experience of Agile methodologies. Proficient with SQL. Familiarity with Databricks, Spark, geospatial data/modelling and insurance is a plus. Exposure to MLOps, model monitoring principles, CI/CD and associated tech, e.g., Docker, MLflow …
London, England, United Kingdom Hybrid / WFH Options
83zero Limited
Products tools (e.g. BigQuery, Dataflow, DataProc, AI Building Blocks, Looker, Cloud Data Fusion, Dataprep, etc.) to build solutions for our customers. Experience in Spark (Scala/Python/Java) and Kafka. Experience in MDM, Metadata Management, Data Quality and Data Lineage tools. E2E Data Engineering and Lifecycle (including … Cloud Data Fusion, Dataprep, etc.) - Construction Industry Sector preferred. Google Cloud Platform. Time-series data. Building control and monitoring systems. Java, Scala, Python, Spark, SQL. Experience of developing enterprise-grade ETL/ELT data pipelines. Deep understanding of data manipulation/wrangling techniques. Demonstrable knowledge of applying Data … DB/Neo4j/Elastic, Google Cloud Datastore. Snowflake Data Warehouse/Platform. Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub and Spark Streaming. Experience of working with CI/CD technologies: Git, Jenkins, Spinnaker, GCP Cloud Build, Ansible, etc. Experience and knowledge of application containerisation: Docker, Kubernetes …
Swansea, Neath Port Talbot, Wales, United Kingdom Hybrid / WFH Options
Inspire People
processing, and analytics. Programming Skills: Proficiency in Python, SQL, and other relevant programming languages. Big Data Technologies: Experience with big data technologies such as Apache Spark. Data Warehousing: Strong knowledge of data warehousing concepts and solutions. Problem-Solving: Excellent problem-solving skills with a detail-oriented approach. Leadership: Proven …
APIs, messaging systems, Python, Java, databases and SQL, data warehousing, data modeling, data governance, cloud platforms and services (e.g., AWS), big data technologies (e.g., Spark, Kafka), continuous data integrity/testing platforms. If you are looking for a new path where you get to work on some amazing projects …
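The data-warehousing and data-modeling items in listings like this one can be illustrated with a toy star schema (one fact table joined to a dimension table), using only Python's built-in sqlite3; table names and figures are invented for the sketch:

```python
import sqlite3

# Toy star-schema sketch: a fact table keyed to a dimension table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        sale_id    INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount     REAL
    );
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO fact_sales  VALUES (1, 1, 10.0), (2, 1, 4.5), (3, 2, 20.0);
""")

# A typical warehouse-style aggregation: revenue per dimension member.
rows = con.execute("""
    SELECT p.name, SUM(f.amount) AS total
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name ORDER BY total DESC
""").fetchall()
print(rows)
```

The same fact/dimension split scales up to platforms such as BigQuery or Snowflake; only the engine changes, not the modeling idea.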
Surrey, England, United Kingdom Hybrid / WFH Options
Hawksworth
working in the world of Data Science. You're more than capable with SQL & Python. You have exposure to big data technologies such as Spark. Ideally you will have experience with statistical analysis, machine learning algorithms, and data mining techniques. You have excellent communication skills and can communicate well …
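The statistical-analysis skills mentioned above can be sketched with Python's standard `statistics` module; the claim amounts and ages below are made-up sample values, and the correlation is computed from first principles to keep the example self-contained:

```python
import statistics as stats

claims = [1200.0, 950.0, 1800.0, 700.0, 1500.0]  # made-up claim amounts
ages   = [34, 29, 51, 22, 47]                    # made-up policyholder ages

mean_claim = stats.mean(claims)
stdev_claim = stats.stdev(claims)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = stats.mean(xs), stats.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"mean={mean_claim:.1f} stdev={stdev_claim:.1f} r={pearson(ages, claims):.3f}")
```

This kind of quick descriptive pass (mean, spread, correlation) is usually the first step before reaching for heavier ML tooling.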
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
refining them to strong results. Exposure to the Python data science stack. Knowledge and working experience of Agile methodologies. Proficient with SQL. Familiarity with Databricks, Spark and geospatial data/modelling is a plus. We’ll help you gain… experience working in a high-performance environment where collaboration and business …
Skills & Experience: At least 10 years' experience working with JavaScript or Python/Java. Previous experience deploying software into the Cloud. EKS, Docker, Kubernetes. Apache Spark or NiFi. Microservice architecture experience. Experience with AI/ML systems …
a Senior Software Engineer for this role, you will collaborate with the founding team to contribute to the broader integration of our product with Apache Spark and keep the solution up to date and compatible with a variety of supported runtimes. Your contributions to our core solution will …
Luton, England, United Kingdom Hybrid / WFH Options
Ventula Consulting
science and analytics team in deploying pipelines. Coach and mentor the team to improve development standards. Key requirements: Strong hands-on experience with Databricks, Spark, SQL or Scala. Proven experience designing and building data solutions on a cloud-based, big data distributed system (AWS/Azure etc.). Hands-on … models and following best practices. The ability to develop pipelines using SageMaker, MLflow or similar frameworks. Strong experience with data programming frameworks such as Apache Spark. Understanding of common Data Science and Machine Learning models, libraries and frameworks. This role provides a competitive salary plus an excellent benefits package. In …
improvements. Key Skills: 3+ years of Python experience. Highly statistical and analytical. Exposure to Google Cloud Platform (BigQuery, GCS, Datalab, Dataproc, Cloud ML) (desirable). Spark & Hadoop experience. Strong communication skills. Good problem-solving skills. Qualifications: Bachelor's degree or equivalent experience in a quantitative field (Statistics, Mathematics, Computer Science … classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau). This is a permanent position, and offers flexibility with hybrid working, 2-3 days per week in the office, depending on workload …
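As a small illustration of the classification techniques this last listing's qualifications mention, here is a nearest-centroid classifier in pure Python; the feature vectors and labels are invented toy data, and real work would use a library such as scikit-learn instead:

```python
from collections import defaultdict
from math import dist

def fit_centroids(X, y):
    """Nearest-centroid 'fit': average the feature vectors of each class."""
    groups = defaultdict(list)
    for features, label in zip(X, y):
        groups[label].append(features)
    return {label: tuple(sum(col) / len(rows) for col in zip(*rows))
            for label, rows in groups.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is closest (Euclidean distance)."""
    return min(centroids, key=lambda label: dist(centroids[label], x))

# Toy two-class data: two clusters in a 2-D feature space.
X = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0), (4.8, 5.2)]
y = ["low", "low", "high", "high"]
centroids = fit_centroids(X, y)
print(predict(centroids, (1.1, 0.9)))  # a point near the "low" cluster
```

Nearest-centroid is about the simplest classifier that still has the fit/predict shape interviewers probe for, which makes it a useful whiteboard baseline.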