Liverpool, Lancashire, United Kingdom Hybrid / WFH Options
Intuita - Vacancies
…Azure Storage, Medallion Architecture, and working with data formats such as JSON, CSV, and Parquet.
• Strong understanding of IT concepts, including security, IAM, Key Vault, and networking.
• Exposure to Apache Airflow and DBT is a bonus.
• Familiarity with agile principles and practices.
• Experience with Azure DevOps pipelines.
The "Nice to Haves":
• Certification in Azure or related technologies.
• Experience with …
…OCI Images), GitHub Actions, Gradle, Jenkins (legacy, moving towards GitHub Actions), Maven, SonarCloud
Data: Elasticsearch, MongoDB, MySQL, Neo4j
IaC: Ansible, Terraform
Languages: Java, Python, TypeScript
Monitoring: Grafana, Prometheus
Misc: Apache (legacy, moving towards AWS CloudFront/API Gateway), Git (GitHub), Linux (Ubuntu), RabbitMQ
We are looking to start or make more use of the following AWS services: CloudTrail, Secrets …
…MySQL
Exposure to Docker, Kubernetes, AWS, Helm, Terraform, Vault, Grafana, ELK Stack, New Relic
Relevant experience in the maintenance of data APIs and data lake architectures, including experience with Apache Iceberg, Trino/Presto, ClickHouse, Snowflake, BigQuery
Master's degree in Computer Science or an Engineering-related field
Get to know us better: YouGov is a global online research company …
…as required; comfortable building multi-page web applications from scratch.
Expertise with application server integration; JBoss 7, Spring Boot or later preferred.
Proficient in developing microservices with Spring Boot.
Knowledge of Apache Web Server preferred.
Database skills with working knowledge of Structured Query Language (e.g. SQL/NoSQL commands and queries).
2+ years working with Oracle, MySQL, MS SQL and …
…technical direction to a growing team of developers globally. The platform is a greenfield build using standard modern technologies such as Java, Spring Boot, Kubernetes, Kafka, MongoDB, RabbitMQ, Solace and Apache Ignite. The platform runs in hybrid mode, both on-premises and in AWS, utilising technologies such as EKS, S3 and FSx. The main purpose of this role is to …
Responsibilities:
• Develop, optimize, and maintain data ingest flows using Apache Kafka, Apache NiFi and MySQL/PostgreSQL
• Develop within the components of the AWS cloud platform using services such as Redshift, SageMaker, API Gateway, QuickSight, and Athena
• Communicate with data owners to set up and ensure configuration parameters
• Document SOPs related to streaming configuration, batch configuration or API …
…machine learning techniques
• Strong understanding of programming languages like Python, R, and Java
• Expertise in building modern data pipelines and ETL (extract, transform, load) processes using tools such as Apache Kafka and Apache NiFi
• Proficient in programming languages like Java, Scala, or Python
• Experience or expertise using, managing, and/or testing API Gateway tools and REST APIs …
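As context for the kind of ingest flow this listing describes, here is a minimal sketch that reads JSON events from Kafka and upserts them into PostgreSQL. The topic, table, and connection settings are hypothetical, and the libraries (kafka-python, psycopg2) are one possible choice, not necessarily this employer's stack.

```python
# Minimal Kafka -> PostgreSQL ingest sketch; topic and table names are made up.
# Assumes: pip install kafka-python psycopg2-binary
import json

import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "sensor-events",                                   # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

conn = psycopg2.connect(dbname="ingest", user="etl", password="changeme", host="localhost")
cur = conn.cursor()

for msg in consumer:
    event = msg.value
    # Upsert one row per event; the (id, payload) schema is illustrative only.
    cur.execute(
        "INSERT INTO raw_events (id, payload) VALUES (%s, %s) "
        "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload",
        (event["id"], json.dumps(event)),
    )
    conn.commit()
```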
In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a …
…data processing workloads. Implement CI/CD pipelines for data workflows to ensure smooth and reliable deployments.
Big Data & Analytics: Build and optimize large-scale data processing pipelines using Apache Spark and PySpark. Implement data partitioning, caching, and performance tuning for Spark-based workloads. Work with diverse data formats (structured and unstructured) to support advanced analytics and machine learning …
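The "partitioning, caching, and performance tuning" phrasing maps onto a few concrete Spark calls; the sketch below shows one plausible shape, with the storage paths, column names, and partition count all assumed for illustration.

```python
# Sketch of partitioning/caching tuning for a Spark workload on Azure;
# the abfss:// paths and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

events = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/events/")

# Repartition on the aggregation key to limit shuffle skew, then cache
# because the frame feeds two separate actions below.
events = events.repartition(200, "customer_id").cache()

daily = events.groupBy("customer_id", "event_date").count()
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/daily_counts/"
)
print(events.count())  # second action reuses the cached partitions
```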
…end-to-end, scalable data and AI solutions using the Databricks Lakehouse (Delta Lake, Unity Catalog, MLflow).
Design and lead the development of modular, high-performance data pipelines using Apache Spark and PySpark.
Champion the adoption of Lakehouse architecture (bronze/silver/gold layers) to ensure scalable, governed data platforms.
Collaborate with stakeholders, analysts, and data scientists to …
…performance tuning, cost optimisation, and monitoring across data workloads.
Mentor engineering teams and support architectural decisions as a recognised Databricks expert.
Essential Skills & Experience:
• Demonstrable expertise with Databricks and Apache Spark in production environments.
• Proficiency in PySpark, SQL, and working within one or more cloud platforms (Azure, AWS, or GCP).
• In-depth understanding of Lakehouse concepts, medallion architecture …
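To make the bronze/silver/gold wording concrete, here is a rough bronze-to-silver hop in PySpark on Delta Lake; the table paths, key column, and quality rules are assumptions rather than the platform described in the advert.

```python
# Hedged sketch of a bronze -> silver step in a medallion layout.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")

silver = (
    bronze
    .dropDuplicates(["order_id"])                       # one row per business key
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_ts").isNotNull())              # basic quality gate
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders")
```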
…utilizing the Django web framework for the backend and React for the client-facing portion of the application
Create extract, transform, and load (ETL) pipelines using Hadoop and Apache Airflow for various production big data sources to fulfill intelligence data availability requirements
Automate retrieval of data from various sources via APIs and direct database queries for intelligence analysts …
…for military personnel
Required Qualifications:
• Active TS/SCI
• 7-10 years experience
Preferred Qualifications:
• Bachelor's degree in related field
• Windows 7/10, MS Project, Apache Airflow
• Python, Java, JavaScript, React, Flask, HTML, CSS, SQL, R, Docker, Kubernetes, HDFS, Postgres, Linux, AutoCAD
• JIRA, GitLab, Confluence
About Us: IntelliBridge delivers IT strategy, cloud, cybersecurity, application, data …
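The ETL-orchestration duties above are the sort of thing Airflow expresses as a DAG; this is a minimal, hypothetical two-task example, with the task bodies left as placeholders since the real sources are not described.

```python
# Minimal Airflow DAG sketch; dag_id, schedule, and task logic are made up.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def pull_from_api():
    """Placeholder for the API-retrieval step the listing mentions."""


def load_to_hdfs():
    """Placeholder for the Hadoop/HDFS load step."""


with DAG(
    dag_id="intel_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="pull_from_api", python_callable=pull_from_api)
    load = PythonOperator(task_id="load_to_hdfs", python_callable=load_to_hdfs)
    extract >> load
```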
Data Storage & Databases:
• SQL & NoSQL Databases: Experience with databases like PostgreSQL, MySQL, MongoDB, and Cassandra.
• Big Data Ecosystems: Hadoop, Spark, Hive, and HBase.
Data Integration & ETL:
• Data Pipelining Tools: Apache NiFi, Apache Kafka, and Apache Flink.
• ETL Tools: AWS Glue, Azure Data Factory, Talend, and Apache Airflow.
AI & Machine Learning:
• Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras …
…and well-tested solutions to automate data ingestion, transformation, and orchestration across systems.
Own data operations infrastructure: manage and optimise key data infrastructure components within AWS, including Amazon Redshift, Apache Airflow for workflow orchestration, and other analytical tools. You will be responsible for ensuring the performance, reliability, and scalability of these systems to meet the growing demands of data …
…pipelines, data warehouses, and leveraging AWS data services.
Strong proficiency in DataOps methodologies and tools, including experience with CI/CD pipelines, containerized applications, and workflow orchestration using Apache Airflow.
Familiar with ETL frameworks; bonus experience with Big Data processing (Spark, Hive, Trino) and data streaming.
Proven track record: you've made a demonstrable impact in …
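On the Redshift side of such a stack, bulk loads typically arrive via a COPY from S3; below is a hedged sketch, with the cluster endpoint, table, bucket, and IAM role all placeholders.

```python
# Hypothetical Redshift bulk load via COPY; every identifier is a placeholder.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl", password="changeme",
)
with conn, conn.cursor() as cur:
    cur.execute(
        "COPY staging.orders "
        "FROM 's3://example-bucket/orders/2024-06-01/' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load' "
        "FORMAT AS PARQUET"
    )
```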
Columbia, South Carolina, United States Hybrid / WFH Options
Systemtec Inc
…technologies and cloud-based technologies: AWS Services, State Machines, CDK, Glue, TypeScript, CloudWatch, Lambda, CloudFormation, S3, Glacier archival storage, DataSync, Lake Formation, AppFlow, RDS PostgreSQL, Aurora, Athena, Amazon MSK, Apache Iceberg, Spark, Python
ONSITE: Partially onsite, 3 days per week (Tue, Wed, Thu) and as needed. Standard work hours: 8:30 AM - 5:00 PM.
Required Qualifications of the …
…to manage competing technical requirements across complex systems. Strong communication and stakeholder engagement skills, enabling you to translate technical solutions into business value. Desirable experience with tools like Snowflake and Apache Airflow, and AWS certification (or demonstrable equivalent knowledge), will help you thrive from day one.
You'll benefit from: our compensation package includes a competitive salary, company bonus, holiday …
Salary: €50,000 - €60,000 per year
Requirements:
• 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark
• Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL)
• Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables
• Solid understanding of data warehousing principles …
…solutions using Databricks on Azure or AWS.
Databricks Components: Proficient in Delta Lake, Unity Catalog, MLflow, and other core Databricks tools.
Programming & Query Languages: Strong skills in SQL and Apache Spark (Scala or Python).
Relational Databases: Experience with on-premises and cloud-based SQL databases.
Data Engineering Techniques: Skilled in Data Governance, Architecture, Data Modelling, ETL/ELT …
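Since MLflow appears in this stack, a bare-bones tracking call may help orient readers; the experiment name, parameter, and metric below are invented.

```python
# Tiny MLflow tracking sketch; experiment name and values are invented.
import mlflow

mlflow.set_experiment("churn-model-sketch")

with mlflow.start_run():
    mlflow.log_param("max_depth", 5)      # a hyperparameter you tuned
    mlflow.log_metric("auc", 0.87)        # a result you want to compare later
```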
…as AWS, Azure, GCP, and Snowflake. Understanding of cloud platform infrastructure and its impact on data architecture.
Data Technology Skills: a solid understanding of big data technologies such as Apache Spark, and knowledge of the Hadoop ecosystem. Knowledge of programming languages such as Python, R, or Java is beneficial. Exposure to ETL/ELT processes and SQL/NoSQL databases is a …
…models in close cooperation with our data science team
Experiment in your domain to improve precision, recall, or cost savings
Requirements:
• Expert skills in Java or Python
• Experience with Apache Spark or PySpark
• Experience writing software for the cloud (AWS or GCP)
• Speaking and writing in English enables you to take part in day-to-day conversations in the …
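"Improve precision, recall" refers to the standard classification metrics; a toy computation with scikit-learn (an assumed library choice) is shown below.

```python
# Toy precision/recall computation; labels and predictions are made up.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]

print(precision_score(y_true, y_pred))  # 0.75: 3 of 4 positive predictions correct
print(recall_score(y_true, y_pred))    # 0.75: 3 of 4 actual positives recovered
```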
…methodologies. Collaborating with stakeholders to define data strategies, implement data governance policies, and ensure data security and compliance.
About you:
• Strong technical proficiency in data engineering technologies, such as Apache Airflow, ClickHouse, ETL tools, and SQL databases.
• Deep understanding of data modeling, ETL processes, data integration, and data warehousing concepts.
• Proficiency in programming languages commonly used in data engineering …
…data engineering, and cloud technologies to continuously improve our tools and approaches
Profile - Essential Skills:
• 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark
• Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL)
• Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and …
…Bash, Ansible
DevOps & CI/CD: Jenkins, GitLab CI/CD, Terraform
Cloud & Infrastructure: AWS
Testing & Quality: Cucumber, SonarQube
Monitoring & Logging: ELK Stack (Elasticsearch, Logstash, Kibana), Grafana
Dataflow & Integration: Apache NiFi
Experience across multiple areas is desirable; we don't expect you to know everything, but a willingness to learn and contribute across the stack is key. …
…microservice architecture, API development.
Machine Learning (ML):
• Deep understanding of machine learning principles, algorithms, and techniques.
• Experience with popular ML frameworks and libraries like TensorFlow, PyTorch, scikit-learn, or Apache Spark.
• Proficiency in data preprocessing, feature engineering, and model evaluation.
• Knowledge of ML model deployment and serving strategies, including containerization and microservices.
• Familiarity with ML lifecycle management, including versioning …
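The preprocessing/feature-engineering/evaluation triplet above is what scikit-learn pipelines bundle together; here is a self-contained sketch on a public toy dataset (the model choice and scaling step are illustrative).

```python
# Preprocessing -> training -> evaluation sketch on a bundled toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The pipeline couples the scaler to the model, so serving applies
# exactly the same preprocessing as training did.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```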
• Proven desire to expand your cloud/platform engineering capabilities
• Experience working with Big Data
• Experience with data storage technologies: Delta Lake, Iceberg, Hudi
• Proven knowledge and understanding of Apache Spark, Databricks or Hadoop
• Ability to take business requirements and translate them into tech specifications
• Competence in evaluating and selecting development tools and technologies
Sound like the role you …