understanding of networking and IP packet structure. Experience working in a DevOps team designing, developing and supporting solutions. Experience in website development using Apache and PHP. The successful applicant will work within the network monitoring and intrusion detection & prevention team. Your role will involve working closely with the …
and SaltStack. CI/CD: Jenkins, GitLab CI/CD. Data/Messaging: Amazon Aurora (PostgreSQL), ElastiCache (Redis), Amazon MQ (RabbitMQ). API: Tyk API Gateway, Apache. Monitoring/Logging: Datadog and Sumo Logic. Security: IAM (Identity and Access Management), Security Groups, mTLS. Other: VPC and general networking. In return, they would be …
Hackney, Greater London, Shoreditch, United Kingdom
Talent Smart
role. Proven experience with the Snowflake data warehouse, including data loading, transformations, and performance tuning. Strong expertise in ETL tools and processes (e.g., Talend, Informatica, Apache NiFi). Experience with data visualisation tools, particularly Power BI. Excellent problem-solving and analytical skills. Strong communication skills, with the ability to …
Leatherhead, Surrey, South East, United Kingdom Hybrid / WFH Options
RINA
experiencing a real breadth and variety of project work on some of the most technically advanced platforms in UK Defence, including unmanned air systems, Apache, Wildcat, Chinook and Typhoon, to name a few. The successful candidate will take the lead on Cyber Assurance projects, with emphasis on identifying Cyber …
required) Experience with distributed message brokers using Kafka (required). Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required). Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm and CI …
delivering moderate-to-complex data flows as part of a development team in collaboration with others. You’ll be confident using technologies such as: Apache Kafka, Apache NiFi, SAS DI Studio, or other data integration platforms. You can implement, deliver, and translate several data models, including unstructured data … and recognised standards to build solutions using various traditional or big data languages such as SQL, PL/SQL, SAS Macro Language, Python, Scala, Apache Spark, Java, JavaScript, etc., using various tools including SAS, Hue (Hive/Impala), and Kibana (Elasticsearch). Knowledge of data management on Cloud platforms …
Westminster, Colorado, United States Hybrid / WFH Options
Maxar Technologies
Prior experience with CI/CD technologies such as Jenkins. Prior experience with any of the following: Trino/Starburst, dbt (Core or Cloud), Apache Superset, OpenMetadata, Apache Airflow, Tableau. Prior experience with RDS databases or Postgres. Agile software development lifecycle experience. These skills would be amazing: Holds …
designing and building robust, scalable, distributed data systems and pipelines, using open-source and public cloud technologies. Strong experience with data orchestration tools, e.g. Apache Airflow, Dagster. Experience with big data storage and processing technologies, e.g. dbt, Spark, SQL, Athena/Trino, Redshift, Snowflake, RDBMSs (PostgreSQL/MySQL). … Knowledge of event-driven architectures and streaming technologies, e.g. Apache Kafka, Kafka Streams, Apache Flink. Experience with public cloud environments, e.g. AWS, GCP, Azure, Terraform. Strong knowledge of software engineering practices, e.g. testing, CI/CD (Jenkins, GitHub Actions), agile development, git/version control, containers, etc. Strong …
Data Scientists and Service Engineering teams. Experience with design, development and operations that leverages deep knowledge in the use of services like Amazon Kinesis, Apache Kafka, Apache Spark, Amazon SageMaker, Amazon EMR, NoSQL technologies and other third parties. Develop and define key business questions and build data …
service incidents. Responsible for the technical design, development, installation, monitoring and ongoing support and maintenance of a diverse set of middleware technologies including WebSphere, Apache, Tomcat, and JBoss. The role is a technical, hands-on opportunity with a heavy focus on automation, resilient design and deployment of middleware ready … impacting issues within the day-to-day role to management. Responsibilities: Strong engineering experience in installation and maintenance of WebSphere Application Server, Tomcat, and Apache platforms. Implement DevOps practices through a GitOps framework. Implement Configuration Management and Infrastructure as Code (e.g. Terraform, Python, Chef, Ansible, and Bash). Achieves product commitments …
objectives. So each team leverages the technology that fits its needs best. You’ll see us working with data processing/streaming frameworks like Apache Flink and Spark; database technologies like MySQL, PostgreSQL, DynamoDB and Redis; and breaking things using in-house chaos principles and tools such as Gatling … latency, near real-time products: Java- and Scala-based web services, Databricks Data Lakes (Delta Lakes), AWS Kinesis and MSK, AWS Elasticsearch, AWS RDS, Apache Flink & Spark, scripting using Python, and infrastructure as code with Terraform. The interview process: our interviews aim to take a relaxed & practical approach that …
Docker, Linux, Git, Azure DevOps. Databases: Azure SQL, MongoDB. Backends: Java/Spring, C#/.NET Core. Frontends: TypeScript/ReactJS. Search and AI: Apache Lucene, Hugging Face. Data: Azure Data Factory, Apache Hop. Test: Playwright. Legacy: Windows, .NET Framework. What is in it for you? We offer an …
Philadelphia, Pennsylvania, United States Hybrid / WFH Options
Comcast Corporation
use Jira, Confluence, and Git in an Agile development environment; perform DevOps processes using Concourse, Docker, and Kubernetes; perform large-scale data processing using Apache Spark; manage big data on Cloudera; perform Machine Learning, including developing and deploying predictive models leveraging ML algorithms; use the AWS cloud platform; deploy tools … related technical or quantitative field; and one (1) year of experience programming using Python and Scala; using Jira; performing large-scale data processing using Apache Spark; managing big data on Cloudera; performing Machine Learning; using the AWS cloud platform; deploying tools and applications on Unix; and writing SQL in Hive …
Scala, Kotlin, Spark, Google Pub/Sub, Elasticsearch, BigQuery, PostgreSQL, Kubernetes, Docker, Airflow. Key Responsibilities: Designing and implementing scalable data pipelines using tools such as Apache Spark, Google Pub/Sub, etc. Optimising data storage and retrieval systems for maximum performance using both relational and NoSQL databases. Continuously monitoring and improving … Data Infrastructure projects, as well as designing and building data-intensive applications and services. Experience with data processing and distributed computing frameworks such as Apache Spark. Expert knowledge in one or more of the following languages: Python, Scala, Java, Kotlin. Deep knowledge of data modelling, data access, and data …
Golang. - Significant experience with Hadoop, Spark and other distributed processing platforms and frameworks. - Experience working with open table/storage formats like Delta Lake, Apache Iceberg or Apache Hudi. - Experience of developing and managing real-time data streaming pipelines using Change Data Capture (CDC), Kafka and Apache …
Engineering, providing DevOps support, and/or RHEL administration for mission-critical platforms, ideally Kafka. 4+ years of combined experience with Kafka (Confluent Kafka, Apache Kafka, Amazon MSK). 4+ years of experience with Ansible automation. Must be able to obtain and maintain a Public Trust (contract requirement). Selected candidate … Solid experience using version control software such as Git/Bitbucket, including peer-reviewing Ansible playbooks. Hands-on experience administering the Kafka platform (Confluent Kafka, Apache Kafka, Amazon MSK) via Ansible playbooks or other automation. Understanding of Kafka architecture, including partition strategy, replication, transactions, tiered storage, and disaster recovery strategies. … STAND OUT FROM THE CROWD (Desired Skills): Showcase your knowledge of modern development through the following experience or skills: Preferred Confluent Certified Administrator for Apache Kafka (CCAAK) or Confluent Certified Developer for Apache Kafka (CCDAK). Practical experience with event-driven applications and at least one event processing framework …
pipelines. Know your way around a Unix-based operating system. Experience working with any major cloud provider (AWS, GCP, Azure). Fluency in English. Experience using Apache Airflow. Experience using Docker. Experience using Apache Spark. Benefits: Salary £40-50K per annum dependent on skills and experience; 25 days annual …
development (ideally AWS). Knowledge and ideally hands-on experience with data streaming, event-based architectures and Kafka. Strong communication and interpersonal skills. Experience with Apache Spark or Apache Flink would be ideal, but not essential. Please note, this role is unable to provide sponsorship. If this role sounds …
Experienced in creating data pipelines in a cloud (preferably AWS) environment. CI/CD experience. Containerization experience (Docker, Kubernetes, etc.). Experience with SQS/SNS, Apache Kafka, RabbitMQ. Other interesting/bonus skills: Airflow, Trino, Apache Iceberg, Postgres, MongoDB. You *must* be eligible to work in your chosen country …
Manchester, North West, United Kingdom Hybrid / WFH Options
Adria Solutions
data tasks. Knowledge of CI/CD approaches for Data Platforms using Bitbucket and Bitbucket Pipelines. Knowledge of AWS data lake approaches using Athena & Apache Iceberg tables. Exposure to visualisation development using Power BI. Knowledge of MS SQL Server, SSIS, Visual Studio, and SSDT projects. Experience in a relevant … maintaining data pipelines on Bitbucket using Bitbucket Pipelines. You will also use your knowledge of AWS data lake approaches to optimise performance by implementing Apache Iceberg tables. This is an exciting chance to gain experience in visualisation development using Power BI and working with MS SQL Server, SSIS, and Visual …
My client is a leading global technology consulting and digital solutions company that specializes in sectors such as banking, insurance, manufacturing, and healthcare. They leverage advanced technologies like cloud computing, AI, and data analytics to deliver scalable, cutting-edge solutions.