Spark Streaming Job Vacancies

18 of 18 Spark Streaming Jobs

Data SME

London Area, United Kingdom
HCLTech
RDBMS, NoSQL and Big Data technologies. Data visualization: tools like Tableau. Big data: Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive. Data processing frameworks: Spark and Spark Streaming.

Data SME

City of London, London, United Kingdom
HCLTech
RDBMS, NoSQL and Big Data technologies. Data visualization: tools like Tableau. Big data: Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive. Data processing frameworks: Spark and Spark Streaming.

Senior Software Developer with Security Clearance

Chantilly, Virginia, United States
NS2 Mission
… and data pipelines. Solid understanding of SQL and relational databases (e.g., MySQL, PostgreSQL, Hive). Familiarity with the Apache Hadoop ecosystem (HDFS, MapReduce, YARN). Working knowledge of Apache Spark and its modules (e.g., Spark SQL, Spark Streaming, MLlib). Experience with cloud-based data platforms like AWS Glue, Google Cloud Dataflow …
Employment Type: Permanent
Salary: USD Annual

Scala Spark Developer

City of London, London, United Kingdom
Ubique Systems
Spark: must have. Scala: must have, with hands-on coding. Hive & SQL: must have. Note: candidates must know the Scala coding language; a PySpark profile will not fit here, and the interview includes a coding test. Job description: Scala/Spark • Strong Big Data background with the following skill set: Spark, Scala, Hive/HDFS/HQL • Linux-based … Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.) • Experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage • Consistently demonstrates clear and concise written and verbal communication • A history of delivering against agreed objectives • Ability to multi-task and work under pressure • Demonstrated problem-solving and decision-making skills • Excellent …
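For candidates preparing for the coding test this listing mentions, the core Spark Streaming idea (apply a transformation to each micro-batch of records and fold the results into running state) can be pictured without a cluster. A minimal plain-Python sketch — the function name and data are hypothetical, and real Spark Streaming code would use the Scala DStream or Structured Streaming APIs rather than loops:

```python
# Micro-batch word count: each "batch" stands in for one Spark
# Streaming interval; counts are merged into running state, in the
# spirit of updateStateByKey / mapGroupsWithState.
from collections import Counter

def process_stream(batches):
    state = Counter()
    for batch in batches:  # one micro-batch of lines per interval
        state.update(word for line in batch for word in line.split())
    return dict(state)

batches = [["spark streaming", "hive sql"], ["spark scala"]]
print(process_stream(batches))
```

The same stateful-aggregation shape underlies most streaming word-count interview exercises, whatever the framework.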

Scala Spark Developer

London Area, United Kingdom
Ubique Systems
Spark: must have. Scala: must have, with hands-on coding. Hive & SQL: must have. Note: candidates must know the Scala coding language; a PySpark profile will not fit here, and the interview includes a coding test. Job description: Scala/Spark • Strong Big Data background with the following skill set: Spark, Scala, Hive/HDFS/HQL • Linux-based … Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.) • Experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage • Consistently demonstrates clear and concise written and verbal communication • A history of delivering against agreed objectives • Ability to multi-task and work under pressure • Demonstrated problem-solving and decision-making skills • Excellent …

Senior AWS Data Engineer

London, United Kingdom
Hybrid / WFH Options
Capco
experience across AWS Glue, Lambda, Step Functions, RDS, Redshift, and Boto3. Proficient in one of Python, Scala or Java, with strong experience in Big Data technologies such as Spark, Hadoop, etc. Practical knowledge of building real-time event streaming pipelines (e.g., Kafka, Spark Streaming, Kinesis). Proven experience developing modern data architectures … data governance including GDPR. Bonus points for: expertise in data modelling, schema design, and handling both structured and semi-structured data. Familiarity with distributed systems such as Hadoop, Spark, HDFS, Hive, Databricks. Exposure to AWS Lake Formation and automation of ingestion and transformation layers. Background in delivering solutions for highly regulated industries. Passion for mentoring and enabling data …
Employment Type: Permanent
Salary: GBP Annual
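The real-time event streaming pipelines this role names (Kafka, Spark Streaming, Kinesis) typically reduce to windowed aggregates over timestamped events. A toy tumbling-window count in plain Python — the event data and five-second window are made up, and a real pipeline would express this as a Spark `groupBy(window(...))` or an equivalent Kinesis Analytics query:

```python
# Tumbling-window event count: bucket timestamped events into fixed,
# non-overlapping windows, the aggregation pattern a streaming
# groupBy-window query computes incrementally.
def tumbling_counts(events, window_s):
    counts = {}
    for ts, _payload in events:
        start = (ts // window_s) * window_s  # window start timestamp
        counts[start] = counts.get(start, 0) + 1
    return counts

events = [(0, "a"), (3, "b"), (5, "c"), (11, "d")]
print(tumbling_counts(events, 5))  # windows [0,5), [5,10), [10,15)
```

Sliding windows and watermark handling add machinery on top, but the bucketing step stays the same.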

Senior Data Engineer with Security Clearance

Washington, Washington DC, United States
steampunk
… relevant to the product being deployed and/or maintained. 5-7 years' direct experience in Data Engineering with experience in tools such as: Big data tools: Hadoop, Spark, Kafka, etc. Relational SQL and NoSQL databases, including Postgres and Cassandra. Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. AWS cloud services: EC2, EMR, RDS, Redshift (or Azure equivalents). Data streaming systems: Storm, Spark Streaming, etc. Search tools: Solr, Lucene, Elasticsearch. Object-oriented/object function scripting languages: Python, Java, C++, Scala, etc. Advanced working SQL knowledge and experience working with relational databases, query authoring and optimization (SQL), as well as working familiarity with a variety of databases. Experience with message …
Employment Type: Permanent
Salary: USD 180,000 Annual

Data Architect

United Kingdom
Hybrid / WFH Options
WebLife Labs
… multi-tenant SaaS data platforms with strategies for data partitioning, tenant isolation, and cost management. Exposure to real-time data processing technologies such as Kafka, Kinesis, Flink, or Spark Streaming, alongside batch processing capabilities. Strong knowledge of SaaS compliance practices and security frameworks. Core competencies: excellent problem-solving abilities with the capacity to translate requirements into …
Employment Type: Permanent
Salary: GBP Annual

Senior Data Engineer with Security Clearance

Reston, Virginia, United States
ICF
We are looking for a seasoned Senior Data Engineer who will be a key driver to make this happen. Responsibilities: design, develop, and maintain scalable data pipelines using Spark, Hive, and Airflow. Develop and deploy data processing workflows on the Databricks platform. Develop API services to facilitate data access and integration. Create interactive data visualizations and reports using … quarter US domestically is required. Preferred qualifications: U.S. citizenship or Green Card is highly prioritized due to federal contract requirements. Experience working with Copado (strongly preferred). Experienced in Spark and Hive for big data processing. Experience building job workflows with the Databricks platform. Strong understanding of AWS products including S3, Redshift, RDS, EMR, AWS Glue, AWS Glue DataBrew … software and tools including relational NoSQL and SQL databases, including Cassandra and Postgres; workflow management and pipeline tools such as Airflow, Luigi and Azkaban; stream-processing systems like Spark Streaming and Storm; and object function/object-oriented scripting languages including Scala, C++, Java and Python. Familiar with DevOps methodologies, including CI/CD pipelines (GitHub …
Employment Type: Permanent
Salary: USD 151,646 Annual
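The workflow management tools this listing groups together (Airflow, Luigi, Azkaban) all do the same core job: run pipeline tasks in dependency order. A tiny illustrative sketch using Python's standard-library `graphlib` — the extract/transform/load task names are hypothetical stand-ins for real DAG nodes:

```python
# Order pipeline tasks so every dependency runs first: the topological
# sort a DAG scheduler like Airflow performs before dispatching tasks.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks that must finish before it.
deps = {"load": {"transform"}, "transform": {"extract"}, "extract": set()}
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['extract', 'transform', 'load']
```

Real schedulers add retries, sensors, and parallel dispatch, but the dependency ordering is the foundation.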

Data Engineer with Security Clearance

Arlington, Virginia, United States
Innovative Defense Technologies
… meet functional/non-functional project requirements. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, streaming and 'big data' technologies. Implement data pipelines to ingest data to the platform, standardize and transform the data. Support the development of analytics tools that utilize the data pipeline … industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. Design and architect solutions with Big Data technologies (e.g., Hadoop, Hive, Spark, Kafka). Design and implement systems that run at scale leveraging containerized deployments. Design, build, and scale data pipelines across a variety of source systems and streams (internal, third-party … Science, Computer Engineering, Informatics, Information Systems, or another quantitative field. Minimum 5 years of experience in a Data Engineer role. Required skills: experience with big data tools: Hadoop, Spark, etc. Experience with relational SQL and NoSQL databases, including Postgres. Experience with AWS cloud or remote services: EC2, EMR, RDS, Redshift. Experience with stream-processing systems: Kafka, Storm, Spark …
Employment Type: Permanent
Salary: USD 184,000 Annual

Principal Software Architect with Security Clearance

Arlington, Virginia, United States
Hybrid / WFH Options
STR
… metadata, dependency and workload management. Expert SQL knowledge and experience working with a variety of databases. Experience using the following software/tools: Big Data tools: e.g. Hadoop, Spark, Kafka, ElasticSearch. AWS: Athena, RDB, AWS credentials from Cloud Practitioner to Solutions Architect. Data lakes: e.g. Delta Lake, Apache Hudi, Apache Iceberg. Distributed SQL interfaces: e.g. Apache Hive, Presto …/Trino, Spark. Data pipeline and workflow management tools: e.g. Luigi, Airflow. Dashboard frontends: e.g. Grafana, Kibana. Stream-processing systems: e.g. Storm, Spark Streaming, etc. STR is a growing technology company with locations near Boston, MA; Arlington, VA; near Dayton, OH; Melbourne, FL; and Carlsbad, CA. We specialize in advanced research and development for …
Employment Type: Permanent
Salary: USD Annual

Senior Software Engineer with Security Clearance

Arlington, Virginia, United States
STR
… workload management. Experience with development of REST APIs, access control, and auditing. Experience with DevOps pipelines. Experience using the following software/tools: Big Data tools: e.g. Hadoop, Spark, Kafka, ElasticSearch. Data lakes: e.g. Delta Lake, Apache Hudi, Apache Iceberg. Distributed data warehouse frontends: e.g. Apache Hive, Presto. Data pipeline and workflow management tools: e.g. Luigi, Airflow. Dashboard … frontends: e.g. Grafana, Kibana. Stream-processing systems: e.g. Storm, Spark Streaming, etc. STR is a growing technology company with locations near Boston, MA; Arlington, VA; near Dayton, OH; Melbourne, FL; and Carlsbad, CA. We specialize in advanced research and development for defense, intelligence, and national security in: cyber; next generation sensors, radar, sonar, communications, and electronic …
Employment Type: Permanent
Salary: USD Annual

4105 Big Data Architect with Security Clearance

Chantilly, Virginia, United States
Procession Systems
… and test new database programs, data lakes, and associated microservices using Java, NiFi flows, and Python. Search engine technology such as Solr, ElasticSearch. Hands-on experience handling Spark and Kafka cluster management. Experience as a software engineer lead or architect directly supporting Government technical stakeholders. DESIRED QUALIFICATIONS: Experience interacting with the AWS SDK, AWS API, AWS CLI, and AWS … Kafka. Developed scripts and automated data management from end to end, with sync-up between all the clusters. Developed and configured Kafka brokers to pipeline data into Spark Streaming. Developed Spark scripts using Scala shell commands as per the requirement. Developed Spark code and Spark SQL/streaming …
Employment Type: Permanent
Salary: USD Annual
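The Kafka-to-Spark-Streaming hand-off described above can be pictured as a consumer draining a broker queue into fixed-size micro-batches for processing. A hypothetical plain-Python stand-in (no real Kafka client; a production pipeline would use the Spark-Kafka integration, which also tracks offsets and partitions):

```python
# Drain a broker-like queue into fixed-size micro-batches: the basic
# hand-off a streaming receiver performs once per batch interval.
from collections import deque

def drain_batches(queue, batch_size):
    batches = []
    while queue:
        n = min(batch_size, len(queue))          # last batch may be short
        batches.append([queue.popleft() for _ in range(n)])
    return batches

broker = deque(["m1", "m2", "m3", "m4", "m5"])  # pending messages
print(drain_batches(broker, 2))  # [['m1', 'm2'], ['m3', 'm4'], ['m5']]
```

In the real system the queue is never empty for long; the receiver repeats this drain on every interval rather than running to exhaustion.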

Full Stack Developers - $$$ - FS Poly with Security Clearance

Reston, Virginia, United States
SRC
… that specializes in custom Software Development, Big Data Analytics, Data Science, and Cloud Computing for both government and commercial customers. They provide cloud solutions utilizing technologies such as Spark, AWS, Azure, Cloudera, Kubernetes, and Google Cloud, and offer real-time analytics with Spark Streaming and TensorFlow. The culture here is focused on creativity and collaboration …
Employment Type: Permanent
Salary: USD Annual

Data Engineer (Part-Time, Remote)

Kenney, Texas, United States
Hybrid / WFH Options
Futuremindz llc
… for experienced professionals who prefer flexible hours. Key Responsibilities: • Design, develop, and maintain scalable ETL/ELT pipelines and data workflows. • Build and optimize data lakes, warehouses, and streaming pipelines. • Work closely with data scientists, analysts, and product teams to ensure reliable data access. • Maintain data quality, lineage, and governance standards. • Monitor and improve pipeline performance and scalability. … • Hands-on experience with cloud data platforms (AWS Glue, Redshift, GCP BigQuery, Azure Synapse, etc.). • Solid understanding of data warehousing, partitioning, schema design, and optimization. • Familiarity with streaming technologies like Kafka or Spark Streaming. • Strong problem-solving, debugging, and communication skills. Please reach out to Sudheer Kumar V. Email: … Phone: +1 …
Employment Type: Permanent
Salary: USD Annual