system.
· Significant experience with Python, and experience with modern software development and release engineering practices (e.g. TDD, CI/CD).
· Significant experience with Apache Spark or another distributed data programming framework (e.g. Flink, Arrow, MapR).
· Significant experience with SQL – comfortable writing efficient SQL.
· Experience using enterprise … scheduling tools (e.g. Apache Airflow, Spring DataFlow, Control-M).
· Experience with Linux and containerisation.
What you'll get in return:
· Competitive base salary
· Up to 20% bonus
· 25 days holiday
· BAYE, SAYE & Performance share schemes
· 7% pension
· Life Insurance
· Work Away Scheme
· Flexible benefits package
· Excellent staff travel benefits
Scala, Kotlin, Spark, Google PubSub, Elasticsearch, BigQuery, PostgreSQL, Kubernetes, Docker, Airflow
Key Responsibilities
Designing and implementing scalable data pipelines using tools such as Apache Spark, Google PubSub etc. Optimising data storage and retrieval systems for maximum performance using both relational and NoSQL databases. Continuously monitoring and improving … Data Infrastructure projects, as well as designing and building data-intensive applications and services.
Experience with data processing and distributed computing frameworks such as Apache Spark
Expert knowledge in one or more of the following languages – Python, Scala, Java, Kotlin
Deep knowledge of data modelling, data access, and data
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Microlise
data practices
Possess strong knowledge of data tools, data management tools, and various data and information technologies, e.g. DAMA DMBOK, Microsoft SQL Server, Couchbase, Apache Druid, Spark, Kafka, Airflow, etc.
In-depth understanding of modern data principles, methodologies, and tools
Excellent communication and collaboration skills, with the ability to … native computing concepts and experience working with hybrid or private cloud platforms is a plus.
Demonstrable technical experience working with Microsoft, Red Hat, and Apache data and software engineering environments.
A team-oriented individual with a passion for engineered excellence and the ability to lead and motivate a team
managers, to understand data requirements and deliver high-quality solutions, as well as architecting data ingestion, transformation, and storage processes using tools such as Apache Spark, Azure Data Factory, and similar technologies. Other core duties include optimizing data pipeline performance and ensuring data accuracy, reliability, and timely delivery.
Requirements … Services
Certifications in relevant technologies, such as Azure Data Engineer or Databricks Certified Developer
Experience with real-time data processing and streaming technologies such as Apache Kafka or Azure Event Hubs
Knowledge of data visualization tools, such as Power BI or Tableau
Contributions to open-source projects or active participation
syntax
• Network basics (nslookup, ping, netcat, traceroute, tcpdump, etc.)
• Coordinate and collaborate well with other team members and external partners
Desired Experience
• Familiarity with Apache Tomcat and Apache HTTP Server
• Familiarity with Cisco Splunk querying
• Familiarity with Genesys configuration manager
• Some understanding of Session Initiation Protocol (SIP)
opportunity for an experienced DevOps engineer to join our expanding team, managing our servers. You must have performance-tuning experience with Linux (preferably CentOS 6.X), Apache 2.X, MySQL 5.X and PHP 5.X.
Overview: We currently develop and host a bespoke high-performance e-commerce platform on behalf of our established … running of Aurora Commerce production and development servers.
What You Need for this Position:
Required Technologies, with performance-tuning experience:
Linux – preferably CentOS 6.X
Apache 2.X
MySQL 5.X
PHP 5.X
Required Skills:
* Solid Linux systems engineering/administration experience
* At least 5 years' experience working in a high-traffic, production