Birmingham, England, United Kingdom Hybrid / WFH Options
Xpertise Recruitment
CD, and model monitoring. Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy). Experience with distributed computing frameworks such as Apache Spark is a plus; Airflow would also be a bonus. Role overview: If you're looking to work with more »
Newcastle Upon Tyne, England, United Kingdom Hybrid / WFH Options
Xpertise Recruitment
CD, and model monitoring. Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy). Experience with distributed computing frameworks such as Apache Spark is a plus; Airflow would also be a bonus. Role overview: If you're looking to work with more »
data pipelines using tools such as Airflow, Jenkins and GitHub Actions. · Highly competent hands-on experience with relevant Data Engineering technologies, such as Databricks, Spark, the Spark API, Python, SQL Server, Scala · Help the business harness the power of data within easyJet, supporting them with insight, analytics and data … system. · Significant experience with Python, and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). · Significant experience with Apache Spark or any other distributed data programming framework (e.g. Flink, Arrow, MapR). · Significant experience with SQL – comfortable writing efficient SQL. · Experience using … enterprise scheduling tools (e.g. Apache Airflow, Spring DataFlow, Control-M) · Experience with Linux and containerisation What you’ll get in return · Competitive base salary · Up to 20% bonus · 25 days holiday · BAYE, SAYE & Performance share schemes · 7% pension · Life Insurance · Work Away Scheme · Flexible benefits package · Excellent staff travel more »
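Several of these listings ask for comfort writing efficient SQL. As a minimal, hedged sketch of what that means in practice, the example below uses Python's built-in sqlite3 module; the table and column names are purely illustrative, not taken from any listing.

```python
import sqlite3

# Hypothetical bookings table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bookings (id INTEGER PRIMARY KEY, route TEXT, fare REAL);
    -- An index on the grouping column lets the engine satisfy GROUP BY
    -- with an ordered scan instead of a full sort.
    CREATE INDEX idx_bookings_route ON bookings (route);
    INSERT INTO bookings (route, fare) VALUES
        ('LGW-AMS', 80.0), ('LGW-AMS', 96.0), ('BRS-FAO', 120.0);
""")

# Aggregate per route: count of bookings and average fare.
rows = conn.execute(
    "SELECT route, COUNT(*) AS n, ROUND(AVG(fare), 2) AS avg_fare "
    "FROM bookings GROUP BY route ORDER BY route"
).fetchall()
print(rows)  # [('BRS-FAO', 1, 120.0), ('LGW-AMS', 2, 88.0)]
```

The same grouping-plus-index reasoning carries over to SQL Server or Spark SQL, which these roles mention.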
comfortable designing and constructing bespoke solutions and components from scratch to solve the hardest problems. Adept in Java, Scala, and big data technologies like Apache Kafka and Apache Spark, they bring a deep understanding of engineering best practices. This role involves scoping and sizing, and indeed estimating … be considered. Key responsibilities of the role are summarised below: Design and implement large-scale data processing systems using distributed computing frameworks such as Apache Kafka and Apache Spark. Architect cloud-based solutions capable of handling petabytes of data. Lead the automation of CI/CD pipelines for more »
Greater Bristol Area, United Kingdom Hybrid / WFH Options
Anson McCade
and product development, encompassing experience in both stream and batch processing. Designing and deploying production data pipelines, utilizing languages such as Java, Python, Scala, Spark, and SQL. In addition, you should have proficiency or familiarity with: Scripting and data extraction via APIs, along with composing SQL queries. Integrating data more »
data warehouse, data lake design/building, and data movement. Design and deploy production data pipelines in Big data architecture using Java, Python, Scala, Spark, and SQL. Tasks involve scripting, API data extraction, and writing SQL queries. Comfortable designing and building for AWS cloud, encompassing Platform-as-a-Service more »
Cheltenham, Gloucestershire, United Kingdom Hybrid / WFH Options
Third Nexus Group Limited
and product development, encompassing experience in both stream and batch processing. · Designing and deploying production data pipelines, utilizing languages such as Java, Python, Scala, Spark, and SQL. In addition, you should have proficiency or familiarity with: · Scripting and data extraction via APIs, along with composing SQL queries. · Integrating data more »
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes). Ability to more »
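The Made Tech listing asks for experience transforming data types such as JSON and CSV. The reshaping itself is framework-agnostic; as a sketch (with invented record fields), here is the flattening step in plain Python, which in a Databricks or Spark job would be expressed over a DataFrame instead:

```python
import csv
import io
import json

# Illustrative nested JSON records; field names are hypothetical.
raw = '[{"id": 1, "user": {"name": "Ada"}}, {"id": 2, "user": {"name": "Grace"}}]'

# Flatten the nested structure into tabular rows.
records = json.loads(raw)
flat = [{"id": r["id"], "name": r["user"]["name"]} for r in records]

# Serialise the flattened rows as CSV.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name"])
writer.writeheader()
writer.writerows(flat)
print(buf.getvalue())
```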
Bristol, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes). Ability to more »
and Public Services, Healthcare, Life Sciences, and Transport. Essential Skills & Experience: Design and deploy data pipelines in big data architecture using Java, Python, Scala, Spark, and SQL. Execute tasks involving scripting, API data extraction, and SQL queries. Proficient in data cleaning, wrangling, visualization, and reporting. Specialised in AWS cloud more »
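"API data extraction" appears in several of these listings. A common shape of that task is walking a paginated API; the sketch below uses canned payloads in place of real HTTP responses, and the endpoint structure and field names ("items", "next") are assumptions for illustration:

```python
import json

# Canned responses standing in for a paginated REST API.
# "next" is a hypothetical cursor pointing at the next page, or null at the end.
pages = [
    '{"items": [{"id": 1}, {"id": 2}], "next": 1}',
    '{"items": [{"id": 3}], "next": null}',
]

def extract_ids(pages):
    """Follow the pagination cursor until the API reports no next page."""
    ids, cursor = [], 0
    while cursor is not None:
        payload = json.loads(pages[cursor])
        ids.extend(item["id"] for item in payload["items"])
        cursor = payload["next"]
    return ids

print(extract_ids(pages))  # [1, 2, 3]
```

In a real job the canned strings would be replaced by HTTP calls, with retries and rate-limit handling around the loop.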
Southampton, Hampshire, South East, United Kingdom Hybrid / WFH Options
Leo Recruitment Limited
in programming languages and tools for data analysis, such as Python, R, and SQL. You must be proficient in big data technologies, such as Spark, Kafka and/or Hadoop. A strong understanding of statistical analysis, predictive modelling, machine learning algorithms, and data development and optimisation is essential. You more »
Staines-Upon-Thames, England, United Kingdom Hybrid / WFH Options
IFS
with data ingestion tools such as Airbyte and Fivetran, accommodating a wide array of data sources. Mastery of large-scale data processing techniques using Spark or Dask. Strong programming skills in Python, Scala, C#, or Java, and adeptness with cloud SDKs and APIs. Deep understanding of AI/ML more »
Manchester, Greater Manchester, United Kingdom Hybrid / WFH Options
AutoTrader UK
of applying data technologies to solve problems and you can expect to work with a range of technologies including dbt, Kotlin/Java, Python, Apache Spark and Kafka. Join us as a Principal Software Engineer and, as well as shaping and creating the foundations for insight-driven, market-leading … delivery chain, from data to products. You will have an understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop. Comfortable presenting technical ideas to non-technical colleagues. Experience mentoring and coaching and sharing technical expertise. Strong teamwork ethic, communication and collaboration skills. Support with … our recruitment process by evaluating candidates at all stages. Although not essential, helpful experience includes: messaging systems such as Apache Kafka or Google Pub/Sub; Docker/container orchestration; working experience with Google Cloud. Every candidate brings a unique mix of skills and qualities to the table. We're all about inclusivity more »
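Roles like this one span both batch and stream processing (Kafka, Spark). The defining semantics of stream processing is the windowed aggregation; below is a dependency-free sketch of a sliding time window in pure Python, with invented events, standing in for what Kafka Streams or Spark Structured Streaming would do over a real stream:

```python
from collections import deque

# Events are (timestamp_seconds, value) pairs; both are illustrative.
events = [(0, 10), (1, 20), (2, 30), (61, 5), (62, 15)]
WINDOW = 60  # window length in seconds

window = deque()
sums = []
for ts, value in events:
    window.append((ts, value))
    # Evict events that fall outside the window relative to the newest event.
    while window and window[0][0] <= ts - WINDOW:
        window.popleft()
    # Emit the running windowed aggregate after each event.
    sums.append(sum(v for _, v in window))

print(sums)  # [10, 30, 60, 35, 20]
```

Real engines add watermarks for late data and fault-tolerant state; the eviction-then-aggregate loop is the core they share.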
Bedford, Bedfordshire, United Kingdom Hybrid / WFH Options
Understanding Recruitment
frameworks (TensorFlow, PyTorch, etc.). MLOps experience. Nice to have: Familiarity with Git or other Version Control Systems; Computer Vision library exposure; understanding of Big Data technologies (Hadoop, Spark, etc.); experience with Cloud platforms (AWS, GCP or Azure). This is a fully remote role, but may require very occasional travel (once a month more »
Bedford, England, United Kingdom Hybrid / WFH Options
Understanding Recruitment
etc.). MLOps experience. Nice to have: Familiarity with Git or other Version Control Systems; Computer Vision library exposure; understanding of Big Data technologies (Hadoop, Spark, etc.); experience with Cloud platforms (AWS, GCP or Azure). This is a fully remote role, but may require very occasional travel (once a month more »
Northampton, Northamptonshire, East Midlands, United Kingdom Hybrid / WFH Options
Dupen Ltd
APIs, infrastructure design: load balancing, VMs, PostgreSQL, vector DBs. Senior Machine Learning Engineer desirable skills: version control (Git), computer vision libraries, Big Data (Hadoop, Spark), Cloud (AWS, Google Cloud, Azure), and knowledge of secure coding techniques (PCI-DSS, PA-DSS, ISO27001). This is a fantastic opportunity to join more »
to: Backend technology: Python. Databases: MSSQL. Front-end technology: Java. Cloud platform: AWS. Programming language: JavaScript (React.js). Big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency in more »
success. 💼 What You Bring to the Table: Expertise in designing and deploying production data pipelines within a big data architecture using Java, Python, Scala, Spark, and SQL. Proven experience in tasks like scripting, API data extraction, and SQL queries. Collaboration with engineering teams and integration of data engineering components more »
pipeline and workflow management and related tools such as Airflow. Strong understanding of relational SQL and NoSQL databases, including MongoDB, and stream-processing systems: Spark Streaming, Kinesis, etc. Ability to pick up any scripting language and tooling. Rewards & Benefits: TCS is consistently voted a Top Employer in the UK and more »
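What a workflow manager like Airflow fundamentally computes from a pipeline definition is a dependency-respecting execution order over a DAG of tasks. As a sketch with invented task names (in Airflow these would be operators in a DAG), the stdlib `graphlib` module does the same ordering:

```python
from graphlib import TopologicalSorter

# Each key maps to the set of tasks it depends on; names are illustrative.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"transform"},
    "report": {"load", "quality_check"},
}

# A valid execution order: every task appears after all its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Schedulers build on this by running independent tasks (here, `quality_check` and `load`) in parallel and retrying failures without re-running upstream work.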
and coding environments. Bonus Skills: Python/PHP/TypeScript/ReactJS; AI/ML models and usage; ETL pipelines in AWS (Glue/Apache Spark); API load testing. If you would like more information on the role, or would like to apply, then please send your CV more »
Data Analytics in Azure Synapse Analytics and Azure Analysis Services. Data Ingestion and Storage, including Azure Data Factory, Azure Databricks, Azure Data Lake, Kafka and Spark Streaming, Azure Event Hubs/IoT Hub, and Azure Stream Analytics. Experience with object-oriented/object-function scripting languages: Python preferred more »