Preferred Qualifications: Experience with machine learning and preparing data for AI/ML model training. Familiarity with stream processing frameworks (e.g., Apache Kafka, Apache Flink). Certification in cloud platforms (e.g., AWS Certified Big Data - Specialty, Google Cloud Professional Data Engineer). Experience with DevOps practices and CI/…
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Smart DCC
…real-time data processing pipelines using platforms like Apache Kafka or cloud-native tools. Optimize batch processing workflows with tools like Apache Spark and Flink for scalable performance. Infrastructure Automation: Implement Infrastructure as Code (IaC) using tools like Terraform and Ansible. Leverage cloud-native services (AWS, Azure) to streamline …
…certifications. Experience with cloud platforms (AWS, Azure, GCP). Familiarity with cybersecurity concepts and tools. Experience with real-time data processing frameworks (e.g., Apache Flink, Apache Spark). Where it's done: Remote (Herndon, VA). …
Arlington, Virginia, United States Hybrid / WFH Options
Full Visibility LLC
…orchestration (e.g., Apache Airflow, Luigi, Prefect). Strong programming skills in Python, SQL, or Scala. Experience with open-source data processing tools (e.g., Kafka, Spark, Flink, Hadoop). Familiarity with database technologies (PostgreSQL, MySQL, or NoSQL solutions). Ability to work in a fast-paced environment with large-scale datasets. Preferred: • Experience …
…managing real-time data pipelines across multiple initiatives, with a proven track record. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Programming skills in Python …
…managing real-time data pipelines across multiple initiatives, with a proven track record. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Ability to optimise and …
…cloud platforms (AWS, GCP, or Azure) and data lake/data warehouse technologies. Exposure to real-time data processing and stream frameworks (e.g., Kafka, Flink). Contributions to data engineering tooling or open-source projects. …
…AI/ML components or interest in learning data workflows for ML applications. Bonus if you have exposure to Kafka, Spark, or Flink. Experience with data compliance regulations (GDPR). What you can expect from us: Opportunity for annual bonuses, Medical Insurance, Cycle to Work scheme …
…advanced analytics infrastructure. Familiarity with infrastructure-as-code (IaC) tools such as Terraform or CloudFormation. Experience with modern data engineering technologies (e.g., Kafka, Spark, Flink, etc.). Why join YouLend? Award-Winning Workplace: YouLend has been recognised as one of the "Best Places to Work 2024" by the Sunday …
…models for real-time analytics. Proven experience in managing real-time data pipelines across multiple initiatives. Expertise in distributed streaming platforms (Kafka, Spark Streaming, Flink). Experience with GCP (preferred), AWS, or Azure for real-time data ingestion and storage. Strong programming skills in Python, Java, or Scala. …
Chantilly, Virginia, United States Hybrid / WFH Options
Aerospace Corporation
…teams toward software development best practices. Experience in SQL, NoSQL, Cypher, and other big data querying languages. Experience with big data frameworks (Hadoop, Spark, Flink, etc.). Experience with ML lifecycle management tools (MLflow, Kubeflow, etc.). Familiarity with data pipelining and streaming technologies (Apache Kafka, Apache NiFi, etc.). Demonstrated contributions …
…/Nice to have: Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ. Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark …
…engineering experience. Experience with data modeling, warehousing, and building ETL pipelines. Bachelor's degree. Knowledge of batch and streaming data architectures like Kafka, Kinesis, Flink, Storm, Beam. Knowledge of distributed systems as it pertains to data storage and computing. Experience programming with at least one modern language such as …
…development experience using SQL. Hands-on experience with MPP databases such as Redshift, BigQuery, or Snowflake, and modern transformation/query engines like Spark, Flink, Trino. Familiarity with workflow management tools (e.g., Airflow) and/or dbt for transformations. Comprehensive understanding of modern data platforms, including data governance and …
…and Redis. In-depth knowledge of ETL/ELT pipelines, data transformation, and storage optimization. Skilled in working with big data frameworks like Spark, Flink, and Druid. Hands-on experience with both bare metal and AWS environments. Strong programming skills in Python, Java, and other relevant languages. Proficiency in …
…and experience with DevOps practices (CI/CD). Familiarity with containerization (Docker, Kubernetes), RESTful APIs, microservices architecture, and big data technologies (Hadoop, Spark, Flink). Knowledge of NoSQL databases (MongoDB, Cassandra, DynamoDB), message queueing systems (Kafka, RabbitMQ), and version control systems (Git). Preferred Skills: Experience with natural …
…efficient integration into the Feast feature store. Requirements: Good knowledge of programming languages such as Python or Java. Strong experience with streaming technologies (Spark, PySpark, Flink, KSQL, or similar) for developing data transformation pipelines. Solid understanding and practical experience with SQL and relational databases (PostgreSQL preferred). Proficiency with AWS …
…systems, with proficiency in networking, multi-threading, and implementation of REST APIs. Experience with the Spring framework, messaging frameworks (Kafka, RabbitMQ), streaming analytics (Apache Flink, Spark), and management of containerized applications (Kubernetes). Experience with enabling tools (Git, Maven, Jira), DevOps (Bamboo, Jenkins, GitLab CI/Pipelines), Continuous Monitoring (ELK …
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Yelp USA
…solutions. Exposure to some of the following technologies: Python, AWS Redshift, AWS Athena/Apache Presto, Big Data technologies (e.g. S3, Hadoop, Hive, Spark, Flink, Kafka, etc.), NoSQL systems like Cassandra; dbt is nice to have. What you'll get: Full responsibility for projects from day one, a collaborative …
…experience with data modeling and dbt. You have programming experience with Java and Python. Experience with event streaming technologies such as Apache Kafka and Flink is a plus. Ability to drive business value through technical champion building and high-level POVs & POCs …
London, South East England, United Kingdom Hybrid / WFH Options
eTeam
…OAuth, JWT, and data encryption. Fluent in English with strong communication and collaboration skills. Preferred Qualifications: Experience with big data processing frameworks like Apache Flink or Spark. Familiarity with machine learning models and AI-driven analytics. Understanding of front-end and mobile app interactions with backend services. Expertise in …
…with testing frameworks like Jest, Cypress, or React Testing Library. Experience with authentication strategies using OAuth, JWT, or Cognito. Familiarity with Apache Spark/Flink for real-time data processing is an advantage. Hands-on experience with CI/CD tools. Commercial awareness and knowledge of the public sector. Excellent …
…experience in cross-functional teams and able to communicate effectively about technical and operational challenges. Preferred Qualifications: Proficiency with scalable data frameworks (Spark, Kafka, Flink). Proven expertise with Infrastructure as Code and cloud best practices. Proficiency with monitoring and logging tools (e.g., Prometheus, Grafana). Working at Lila Sciences, you …