prem solutions to the cloud, including re-architecting Prior experience working on data-focused projects, e.g. data warehousing, big data, data streaming Proficiency with Apache Kafka, Apache Spark, Apache Flink etc. We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless …
Certified Solutions Architect, AWS Certified Data Analytics Specialty, or AWS Certified Big Data Specialty. Experience with other big data and streaming technologies such as Apache Spark, Apache Flink, or Apache Beam. Knowledge of containerization and orchestration technologies such as Docker and Kubernetes. Experience with data lakes …
workplace where each employee's privacy and personal dignity is respected and protected from offensive or threatening behaviour, including violence and sexual harassment Role: Apache Spark Application Developer Skills Required: Hands-on experience as a software engineer in a globally distributed team working with Scala, Java programming languages … preferably both) Experience with big data technologies Spark/Databricks and Hadoop/ADLS is a must Experience in any one of the cloud platforms: Azure (preferred), AWS or Google Experience building data lakes and data pipelines in the cloud using Azure and Databricks or similar tools. Spark Developer …
data engineering or a similar role. > Proficiency in programming languages such as Python, Java, or Scala. > Strong experience with data processing frameworks such as Apache Spark, Apache Flink, or Hadoop. > Hands-on experience with cloud platforms such as AWS, Google Cloud, or Azure. > Experience with data warehousing …
working closely with our product teams on existing projects and new innovations to support company growth and profitability. Our Tech Stack Python Scala Kotlin Spark Google Pub/Sub Elasticsearch BigQuery, PostgreSQL Kubernetes, Docker, Airflow Key Responsibilities Designing and implementing scalable data pipelines using tools such as Apache Spark … Data Infrastructure projects, as well as designing and building data-intensive applications and services. Experience with data processing and distributed computing frameworks such as Apache Spark Expert knowledge in one or more of the following languages - Python, Scala, Java, Kotlin Deep knowledge of data modelling, data access, and …
comfortable designing and constructing bespoke solutions and components from scratch to solve the hardest problems. Adept in Java, Scala, and big data technologies like Apache Kafka and Apache Spark, they bring a deep understanding of engineering best practices. This role involves scoping and sizing, and indeed estimating … be considered. Key responsibilities of the role are summarised below Design and implement large-scale data processing systems using distributed computing frameworks such as Apache Kafka and Apache Spark. Architect cloud-based solutions capable of handling petabytes of data. Lead the automation of CI/CD pipelines for …
London, England, United Kingdom Hybrid / WFH Options
Maclean Moore Ltd
to develop unit test cases. Help in backlog grooming. Key skills: Extensive experience in developing big data pipelines in the cloud using big data technologies such as Apache Spark Expertise in performing complex data transformations using Spark SQL queries Experience in orchestrating data pipelines using Apache Airflow Proficiency in …
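To give a flavour of the Spark SQL transformation work the listing above describes, here is a minimal PySpark sketch; the table, column names and paths are hypothetical, invented purely for illustration:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-transform-example").getOrCreate()

# Read a hypothetical source dataset and expose it to Spark SQL as a view.
orders = spark.read.parquet("s3://example-bucket/orders/")
orders.createOrReplaceTempView("orders")

# A typical aggregation expressed in Spark SQL: daily revenue per region.
daily_revenue = spark.sql("""
    SELECT region,
           date_trunc('day', order_ts) AS order_day,
           SUM(quantity * unit_price)  AS revenue
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY region, date_trunc('day', order_ts)
""")

daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/daily_revenue/")

A job like this would typically then be scheduled as one task in an Apache Airflow DAG, as sketched later in this section.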
Greater Bristol Area, United Kingdom Hybrid / WFH Options
Anson McCade
and product development, encompassing experience in both stream and batch processing. Designing and deploying production data pipelines, utilizing languages such as Java, Python, Scala, Spark, and SQL. In addition, you should have proficiency or familiarity with: Scripting and data extraction via APIs, along with composing SQL queries. Integrating data …
Cheltenham, Gloucestershire, United Kingdom Hybrid / WFH Options
Third Nexus Group Limited
and product development, encompassing experience in both stream and batch processing. · Designing and deploying production data pipelines, utilizing languages such as Java, Python, Scala, Spark, and SQL. In addition, you should have proficiency or familiarity with: · Scripting and data extraction via APIs, along with composing SQL queries. · Integrating data …
run on AWS and soon Azure, with plans to also add GCP and on-prem. They are adding extensive usage of distributed compute on Spark, starting with their more complex ETL and advanced analytics functions, e.g. Time Series Processing. They soon plan to integrate other approaches, including native distributed … PyTorch/TensorFlow, Spark-based distributed training libraries, or Horovod. TECH STACK: Python, Flask, Redis, Postgres, React, Plotly, Docker. Temporal; AWS Athena SQL, Athena & EMR Spark, ECS Fargate; Azure Synapse/Data Lake Analytics, HDInsight. KEY RESPONSIBILITIES Lead the productionisation of Monolith’s ML models and data processing pipelines … both mid- and low-level system design and exemplary hands-on implementations using Spark and other tech stacks Shape the ML engineering culture and practices around model & data versioning, scalability, model benchmarking, ML-specific branching & release strategy Concisely break down complex high-level ML requirements into smaller deliverables (epic …
Terraform/Docker/Kubernetes. Write software using either Java/Scala/Python. The following are nice to have, but not required - Apache Spark jobs and pipelines. Experience with any functional programming language. Database design concepts. Writing and analysing SQL queries. Application over … VIOOH Our recruitment team …
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes) Ability to …
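As a minimal sketch of the multi-format data handling described in the listing above, assuming invented file paths and column names:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("multi-format-example").getOrCreate()

# Ingest two hypothetical feeds in different formats.
events = spark.read.json("/data/raw/events/")                     # JSON lines
users = spark.read.option("header", True).csv("/data/raw/users.csv")

# Normalise the event timestamp and join the feeds into one curated dataset.
joined = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .join(users, on="user_id", how="left")
)

joined.write.mode("overwrite").parquet("/data/curated/events_by_user/")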
Bristol, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes) Ability to …
emphasis on PySpark and Databricks for this particular role. Technical Skills Required: Azure (ADF, Functions, Blob Storage, Data Lake Storage, Azure Databricks) Databricks Spark Delta Lake SQL Python PySpark ADLS Day To Day Responsibilities: Extensive experience in designing, developing, and managing end-to-end data pipelines, ETL (Extract …
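For the PySpark/Databricks/Delta Lake stack this listing names, a minimal ETL step might look like the sketch below; the paths and schema are assumptions, and on Databricks the Delta settings shown in the config are already preconfigured:

from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("delta-etl-example")
    # Outside Databricks, Delta Lake support needs the delta-spark package
    # plus these two settings; on Databricks they can be omitted.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

raw = spark.read.json("/mnt/landing/events/")  # hypothetical landing zone

clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_type").isNotNull())
)

# Append to a date-partitioned Delta table for downstream consumers.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("event_date")
      .save("/mnt/lake/events_clean"))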
or more of the following tools: Informatica PowerCenter, SAS Data Integration Studio, Microsoft SSIS, Ab Initio, etc. • Ideally, you have experience in the Hadoop ecosystem (Spark, Kafka, HDFS, Hive, HBase, …), Docker and orchestration platforms (Kubernetes, OpenShift, AKS, GKE...), and NoSQL databases (MongoDB, Cassandra, Neo4j) • Any experience with cloud platforms such …
NumPy, scikit-learn). Understanding of database technologies (ETL) and SQL proficiency for data manipulation, data mining and querying. Knowledge of Big Data Tools (Spark or Hadoop a plus). Power BI, Dashboard design/development. Regulatory Awareness/Compliance Uphold Regulatory/Compliance requirements relevant to your role …
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
Leo Recruitment Limited
in programming languages and tools for data analysis, such as Python, R, and SQL You must be proficient in big data technologies, such as Spark, Kafka and/or Hadoop. A strong understanding of statistical analysis, predictive modelling, machine learning algorithms, and data development and optimisation is essential You …
Staines-Upon-Thames, England, United Kingdom Hybrid / WFH Options
IFS
with data ingestion tools such as Airbyte and Fivetran, accommodating a wide array of data sources. Mastery of large-scale data processing techniques using Spark or Dask. Strong programming skills in Python, Scala, C#, or Java, and adeptness with cloud SDKs and APIs. Deep understanding of AI/ML …
and AI models. Data Engineer Required Experience Data engineering experience (2+ years) Cloud platform proficiency (e.g., AWS, Azure, GCP) Data pipeline development (e.g., Airflow, Apache Spark) SQL proficiency, database design Visualization tools knowledge (e.g., Tableau, Power BI, Looker) Data Engineer Application Process This is a 1-year contract requirement …
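To illustrate the Airflow-based pipeline development this listing asks for, a minimal DAG sketch follows; the task bodies are placeholders, the names are invented, and the schedule argument assumes Airflow 2.4+:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from a source system")  # placeholder logic

def transform():
    print("apply transformations and load")  # placeholder logic

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # run extract before transform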
for seamless data integration. * Understanding of DevOps best practices for SQL and Power BI projects, including DACPAC, CI/CD, and versioning. * Familiarity with Apache Spark for big data processing. * Additional development experience in Python or related technologies. * Experience gained within the Media, Travel or Broadcast Media sectors …
Employment Type: Permanent
Salary: £65000 - £70000/annum Hybrid, Health, Dental, Extra Hols
and expertise in tools like Informatica & Talend MDM Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive Data processing frameworks – Spark & Spark Streaming Hands-on experience with multiple databases like PostgreSQL, Snowflake, Oracle, MS SQL Server; NoSQL (HBase/Cassandra, MongoDB) is required Knowledge …
value through improved data handling and analysis. Responsibilities: Build predictive models using machine-learning techniques that generate data-driven insights on modern data platforms (Spark, Hadoop and other MapReduce tools); Develop and productionize containerized algorithms for deployment in hybrid cloud environments (GCP, Azure) Connect and blend data from …
quality testing frameworks. Proficiency in Python and familiarity with modern software engineering practices, including 12-factor, CI/CD, and Agile methodologies. Deep understanding of Spark (PySpark), Python (Pandas), orchestration software (e.g. Airflow, Prefect) and databases, data lakes and data warehouses. Experience with cloud technologies, particularly AWS Cloud services, with …