Scala, or Kotlin. Experience with at least one of the following cloud providers: Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. Spark, Hive, or Presto. Desirable Skills Familiarity with the Scala programming language and popular frameworks such as: Cats, Cats Effect, ZIO, and http4s. Familiarity with … best practices. Familiarity with Amazon Web Services (AWS), Terraform, and infrastructure as code (IaC) best practices. Familiarity with the Python programming language as applied to Spark and machine learning. Familiarity with Databricks and Apache Airflow products. Required Education & Experience Bachelor’s degree in Computer Science, Information Systems, Software, Electrical more »
London, England, United Kingdom Hybrid / WFH Options
Maclean Moore Ltd
to develop unit test cases. Help in backlog grooming. Key skills: Extensive experience in developing Big Data pipelines in the cloud using Big Data technologies such as Apache Spark Expertise in performing complex data transformations using Spark SQL queries Experience in orchestrating data pipelines using Apache Airflow Proficiency in more »
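The Spark SQL data transformations described above follow ordinary SQL semantics. As a rough illustrative sketch (using Python's stdlib sqlite3 rather than Spark, purely to show the shape of such an aggregation; the table and column names are hypothetical, and in Spark the same query would run via `spark.sql(...)` over a registered DataFrame view):

```python
# Illustrative sketch of a SQL-based aggregation of the kind a Spark SQL
# pipeline performs. Uses stdlib sqlite3 in place of Spark; table name,
# columns, and values are hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id TEXT, amount REAL, country TEXT);
    INSERT INTO events VALUES
        ('u1', 10.0, 'UK'), ('u1', 5.0, 'UK'), ('u2', 7.5, 'FR');
""")

# Aggregate transformation: total spend per user per country.
rows = conn.execute("""
    SELECT user_id, country, SUM(amount) AS total
    FROM events
    GROUP BY user_id, country
    ORDER BY user_id
""").fetchall()

print(rows)  # [('u1', 'UK', 15.0), ('u2', 'FR', 7.5)]
```

In a Spark job the `GROUP BY` would be distributed across executors, but the query text itself would be unchanged.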
Greater London, England, United Kingdom Hybrid / WFH Options
Oliver Bernard
of Big Data -Great understanding of Cloud e.g. Azure and/or AWS -Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc -Excellent experience of ETL, data warehousing and handling a variety of data types -Very strong knowledge of database technologies such as NoSQL, Relational more »
quality testing frameworks. Proficiency in Python and familiarity with modern software engineering practices, including 12-factor, CI/CD, and Agile methodologies. Deep understanding of Spark (PySpark), Python (Pandas), orchestration software (e.g. Airflow, Prefect) and databases, data lakes and data warehouses. Experience with cloud technologies, particularly AWS Cloud services, with more »
Basingstoke, England, United Kingdom Hybrid / WFH Options
Intec Select
cross-functionally across the business to understand the requirements of the products Designing and implementing performance related data ingestion pipelines from multiple sources using Apache Spark Integrating end-to-end data pipelines ensuring a high level of quality is maintained Working with an Agile delivery/DevOps methodology more »
Greater London, England, United Kingdom Hybrid / WFH Options
Oliver Bernard
an Architect and excellent knowledge of Big Data -Excellent experience across Azure -Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc -Excellent experience of ETL, data warehousing and handling a variety of data types -Very strong knowledge of database technologies such as NoSQL, Relational more »
workplace where each employee's privacy and personal dignity is respected and protected from offensive or threatening behaviour including violence and sexual harassment Role: Apache Spark Application Developer Skills Required: Hands-on experience as a software engineer in a globally distributed team working with Scala, Java programming language … preferably both) Experience with big-data technologies Spark/Databricks and Hadoop/ADLS is a must Experience with any one of the cloud platforms: Azure (preferred), AWS or Google Experience building data lakes and data pipelines in the cloud using Azure and Databricks or similar tools. Spark Developer more »
mining, data analysis, and strong software engineering skills. Strong understanding of Data Engineering Proficiency in AWS, data warehousing (Snowflake, Databricks, Redshift), big data frameworks (Spark, Kafka), container orchestration platforms (Kubernetes), and data integration/ETL tools. Strong written and verbal communication skills, with the ability to explain technical concepts more »
management and data governance open source platform that we will teach you. Other technologies in use in our space: RESTful services, Maven/Gradle, Apache Spark, Big Data, HTML 5, AngularJS/ReactJS, IntelliJ, GitLab, Jira. Cloud Technologies: You’ll be involved in building the next generation of finance more »
as a Lead Big Data Engineer with excellent knowledge of Big Data -Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc -Excellent experience of ETL, data warehousing and handling a variety of data types -Very strong knowledge of database technologies such as NoSQL, Relational more »
platform you build to train and release machine learning models. What You’ll Do: Writing code – lots of it. We use Python, Java, Scala, Spark, and SQL. We welcome programmers of all backgrounds as long as you focus on data engineering solutions and attention to quality. Foster and strengthen … similar role. Demonstrable experience with any programming or scripting language (Python/Java/Scala/Ruby etc.). Experience using big data technologies (Spark, Presto, etc.) Experience using cloud technologies such as EMR, Lambda, EC2, and data pipelines. Experience leading data warehousing and analytics projects, including using technologies more »
explain and present the findings of technical work to non-expert audiences Fluency with Python machine learning and data science packages (pandas, scikit-learn, Apache Spark, Dask, TensorFlow, etc.) or experience with programming languages and willingness to learn Python For engineering, experience in a DevOps role, ideally in more »
working closely with our product teams on existing projects and new innovations to support company growth and profitability. Our Tech Stack Python Scala Kotlin Spark Google PubSub Elasticsearch BigQuery, PostgreSQL Kubernetes, Docker, Airflow Key Responsibilities Designing and implementing scalable data pipelines using tools such as Apache Spark … Data Infrastructure projects, as well as designing and building data intensive applications and services. Experience with data processing and distributed computing frameworks such as Apache Spark Expert knowledge in one or more of the following languages - Python, Scala, Java, Kotlin Deep knowledge of data modelling, data access, and more »
programming language (Java, C++, Kotlin would be beneficial) Cloud experience (we use Azure, AWS or GCP welcome) Kafka or exposure to ActiveMQ, RabbitMQ or Spark Orchestration and Containerisation experience (Kubernetes, Docker and Microservices) Creating greenfield microservices, this team plan to add a wealth of functionality to existing systems as more »
Birmingham, England, United Kingdom Hybrid / WFH Options
Xpertise Recruitment
using the tech you think is required! Skills desired/what you will learn: Microsoft Azure Azure SQL Microsoft Fabric Delta Lake, Databricks and Spark Statistical Modelling Azure ML Studio Python and familiarity with libraries and frameworks for data analysis and machine learning (e.g., TensorFlow, scikit-learn, Pandas more »
positive societal impact. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives that question the status quo and spark change. BCG delivers solutions through leading-edge management consulting, technology and design, and corporate and digital ventures. We work in a uniquely collaborative model more »
Responsibilities: As a pivotal member of our rapidly expanding business, your role involves: Designing and Maintaining Data Architectures and Pipelines: Create and optimize robust data architectures and pipelines. Assemble intricate data sets that align with both functional and non-functional more »
City of London, London, United Kingdom Hybrid / WFH Options
Oliver Bernard Ltd
Experience with a JVM language, Kotlin, Java, Scala, Clojure Knowledge of Typescript and React is beneficial Exposure to data pipelines using technologies such as Spark and Kafka Experience with cloud services (ideally AWS) Hybrid working 1-2 days per week in Central London. £110,000 depending on experience. Please more »
major advantage Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, NATS and the like. Hands-on with infrastructure-as-code tools and automation, such as Terraform, Ansible, or Helm. The role Tech Lead responsible more »
GCP. Essential Skills and Experience: * AWS (e.g., Athena, Redshift, Glue, EMR) * Strong AWS Data Solution Architect Experience on Data Related Projects * Java, Scala, Python, Spark, SQL * Experience of developing enterprise-grade ETL/ELT data pipelines. * Deep understanding of data manipulation/wrangling techniques * Demonstrable knowledge of applying Data … Cloud Datastore. * BigQuery and Data Studio/Looker. * Snowflake Data Warehouse/Platform * Streaming technologies and processing engines, Kinesis, Kafka, Pub/Sub and Spark Streaming. * Experience working with CI/CD technologies, Git, Jenkins, Spinnaker, GCP Cloud Build, Ansible etc. * Experience and knowledge of application containerisation, Docker, Kubernetes more »
Jenkins, Bamboo, Concourse etc. Monitoring utilising products such as: Prometheus, Grafana, ELK, filebeat etc. Observability - SRE Big Data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop Ecosystem Edge technologies e.g. NGINX, HAProxy etc. Excellent knowledge of YAML or similar languages Desirable Requirements: Jupyter Hub Awareness more »
comfortable designing and constructing bespoke solutions and components from scratch to solve the hardest problems. Adept in Java, Scala, and big data technologies like Apache Kafka and Apache Spark, they bring a deep understanding of engineering best practices. This role involves scoping and sizing, and indeed estimating … be considered. Key responsibilities of the role are summarised below Design and implement large-scale data processing systems using distributed computing frameworks such as Apache Kafka and Apache Spark. Architect cloud-based solutions capable of handling petabytes of data. Lead the automation of CI/CD pipelines for more »
Terraform/Docker/Kubernetes. Write software using either Java/Scala/Python. The following are nice to have, but not required - Apache Spark jobs and pipelines. Experience with any functional programming language. Database design concepts. Writing and analysing SQL queries. Application over VIOOH Our recruitment team more »
skills include: Experience deploying, securing and supporting cloud infrastructure platforms Understanding of security frameworks/standards Understanding of data streaming and messaging frameworks (Kafka, Spark, etc.) and modern database technologies (CockroachDB etc.) Understanding of distributed tracing and monitoring (Zipkin, OpenTracing, Prometheus, ELK stack, Micrometer metrics, etc.) Experience with containers more »
to: Backend technology, Python. Databases like MSSQL. Front-end technology, Java. Cloud platform, AWS. Programming language, JavaScript (React.js) Big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency in more »