Experience with Terraform and Kubernetes is a plus! A genuine excitement for significantly scaling large data systems. Technologies we use (experience not required): AWS serverless architectures, Kubernetes, Spark, Flink, Databricks, Parquet, Iceberg, Delta Lake, Paimon, Terraform, GitHub (including GitHub Actions), Java, PostgreSQL. About Chainalysis: Blockchain technology is powering a growing wave of innovation. Businesses and governments around the …
MongoDB and Elasticsearch. Experience with modern web UI frameworks such as Angular, Vue, React or Ember. API development experience. Familiarity with streaming/event-based architecture (Apache Spark, Apache Flink). Familiarity with NiFi. The Benefits Package: Wyetech believes in generously supporting employees as they prepare for retirement. The company automatically contributes 20% of each employee's gross compensation to …
Apache Hadoop/Cloudera) (all genders). Tasks: Administrate, monitor and optimize our Big Data environment based on Apache Hadoop from Cloudera (AWS Cloud). Manage and maintain services like Kafka, Flink, NiFi, DynamoDB and Iceberg Tables. IaC deployment via Terraform. Plan and execute updates/upgrades. Advise our Data Engineers and Data Scientists on the selection of Hadoop services for …
ADMIRAL Technologies - A clear victory for your future! Tasks: Administrate, monitor and optimize our Big Data environments based on Apache Hadoop (AWS Cloud). Manage and maintain services like Kafka, Flink, NiFi, DynamoDB and Iceberg Tables. IaC deployment via Terraform. Plan and execute updates and upgrades. Advise our Data Engineers and Data Scientists on the selection of Hadoop services for …
to cross-functional teams, ensuring best practices in data architecture, security and cloud computing. Proficiency in data modelling, ETL processes, data warehousing, distributed systems and metadata systems. Utilise Apache Flink and other streaming technologies to build real-time data processing systems that handle large-scale, high-throughput data. Ensure all data solutions comply with industry standards and government regulations … not limited to EC2, S3, RDS, Lambda and Redshift. Experience with other cloud providers (e.g., Azure, GCP) is a plus. In-depth knowledge and hands-on experience with Apache Flink for real-time data processing. Proven experience in mentoring and managing teams, with a focus on developing talent and fostering a collaborative work environment. Strong ability to engage with …
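As an illustration of the kind of Apache Flink work these listings describe, here is a minimal sketch of a DataStream job in Java. The class name, socket source, and word-count logic are illustrative assumptions, not taken from any listing:

```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class StreamingWordCount {
    public static void main(String[] args) throws Exception {
        // Obtain the streaming execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical source: lines of text arriving over a local socket
        DataStream<String> lines = env.socketTextStream("localhost", 9999);

        DataStream<Tuple2<String, Integer>> counts = lines
            .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                @Override
                public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                    // Split each line into words and emit (word, 1) pairs
                    for (String word : line.toLowerCase().split("\\s+")) {
                        if (!word.isEmpty()) {
                            out.collect(Tuple2.of(word, 1));
                        }
                    }
                }
            })
            .keyBy(value -> value.f0)   // partition the stream by word
            .sum(1);                    // maintain a running count per word

        counts.print();
        env.execute("Streaming word count (illustrative)");
    }
}
```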
Your work will establish the foundation for the future of eBay's data platform infrastructure. We are seeking Data Platform Software Engineers, not Data Engineers. While familiarity with Spark, Flink, and other tools in the Hadoop environment is a definite advantage, your focus will be on building the data platform rather than just creating data pipelines. If you are … in Java/Python (or equivalent) and in infra-as-code, CI/CD, and containerized environments. Hands-on deep internal expertise in several of the following: Kafka/Flink, Spark, Delta/Iceberg, GraphQL/REST APIs, RDBMS/NoSQL, Kubernetes, Airflow. Experience building both streaming and batch data platforms, improving reliability, quality, and developer velocity. Demonstrated ability …
At eBay, we're more than a global ecommerce leader — we're changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We're committed to …
At the core of VAULT is big data at scale. Our systems handle massive ingestion pipelines, long-term storage, and high-performance querying. We leverage distributed technologies (Kafka, Spark, Flink, Cassandra, Airflow, etc.) to deliver resilient, low-latency access to trillions of records, while continuously optimizing for scalability, efficiency, and reliability. We'll trust you to: Build high-performance … oriented programming language. Deep background in distributed, high-volume, high-availability systems. Fluency in AI development tools. We would love to see: Experience with big data ecosystems (Kafka, Spark, Flink, Cassandra, Redis, Airflow). Familiarity with cloud platforms (AWS, Azure, GCP) and S3-compatible storage. SaaS/PaaS development experience. Container technologies (Docker, Kubernetes). Salary Range: 160,000 - 240,000 USD annually …
play a key part in strengthening the foundation of eBay's data platform infrastructure. This role is focused on Data Platform Engineering — not data engineering. While familiarity with Spark, Flink, and other tools in the Hadoop ecosystem is valuable, your primary responsibility will be building and evolving the platform itself, not just authoring data pipelines. If you are an … Proven ability to design and deliver critical systems with impact. Proficiency in Java/Python, CI/CD, and containerized environments. Hands-on expertise in tools like Kafka/Flink, Spark, Delta/Iceberg, Kubernetes, NoSQL/columnar stores. Experience in streaming and batch data platforms. Strong foundation in algorithms and distributed design. BS/MS in CS or …
Computer Science or related field (or equivalent experience). Strong proficiency in Java and common design patterns. Hands-on experience with streaming and messaging technologies such as Apache Kafka, Flink, and Pulsar. Proven problem-solving skills and expertise in troubleshooting production issues. Familiarity with monitoring and observability tools like Grafana, Prometheus, and ELK. Experience with Kafka and Flink …
the vision and roadmap for AI infrastructure and data engineering. Lead, mentor, and scale global engineering teams. Oversee large-scale distributed compute, storage, and streaming systems (e.g., Spark, Kafka, Flink, Iceberg/Delta Lakes). Collaborate with cross-functional teams (Data Science, ML Engineering, Product). Build and maintain pipelines, governance frameworks, dashboards, and analytics systems. Ensure reliability, scalability …/DevOps, and security. Excellent leadership, communication, and stakeholder management skills. Preferred: Advanced technical degree (MS/PhD). Strong knowledge of data systems (SQL/NoSQL, Spark, Kafka, Flink). Familiarity with AI-assisted data workflows and intelligent data interfaces. Holiday purchase. Private medical. Income protection. Life insurance. Lenovo and Motorola products discounts. Mortgage advice and support. We …
the platform. Your Impact: Build and maintain core platform capabilities that support high-throughput batch, streaming, and AI-powered workloads. Develop resilient, observable, and scalable systems using Apache Kafka, Flink, Pulsar, and cloud-native tools. Collaborate with AI/ML engineers to operationalize models and enable generative AI use cases such as prompt-based insights or automation. Deliver reliable … experience (or equivalent) with deep experience in platform/backend systems. Expert-level skills in Java, with strong proficiency in Python. Experience building distributed data pipelines using Apache Kafka, Flink, and Pulsar. Familiarity with data lakes and scalable data storage patterns. Demonstrated experience integrating with AI/ML models, including LLMs and prompt-based applications. Proven capability in fullstack …
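For context on the Kafka pipeline experience these roles mention, here is a minimal consumer sketch in Java. The broker address, consumer group, and topic name are hypothetical placeholders, not taken from any listing:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class EventConsumer {
    public static void main(String[] args) {
        // Hypothetical connection settings; a real deployment would load these from config
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-pipeline");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));  // hypothetical topic name
            while (true) {
                // Poll for new records and hand them to downstream processing
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```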
distributed team with the remit being the delivery of a new state-of-the-art platform for a specific business area. The technical stack includes Java, Oracle, Spring, Apache Flink, Apache Kafka, Apache Ignite, Angular. You will proactively influence design whilst also being Development Lead, promoting the highest software development standards. Experience: Extensive Java experience in a complex software development environment. Spring Framework experience. Specialisation in any of the following: messaging middleware, databases such as Oracle, Flink, Ignite, Kafka, or Kubernetes. SDLC automation tools such as Jira, Bitbucket, Artifactory, or Jenkins. Experience working in a global team, aiding others through pair programming and knowledge sharing to help the team improve their development practices. Coaching and mentoring experience. Please …
Technology Product Manager, Enterprise Services - Financial Solutions. Location: New York. Business Area: Sales and Client Service. Ref #. Description & Requirements: Bloomberg's Enterprise Technology team is responsible for ensuring clients can robustly connect, integrate and develop with Bloomberg's capabilities