Apache Spark Jobs

1 to 25 of 371 Apache Spark Jobs

Data Engineer

London, England, United Kingdom
Hybrid / WFH Options
Aventum Group
Profisee), Snowflake Data Integration, Azure Service Bus, Delta Lake, BigQuery, Azure DevOps, Azure Monitor, Azure Data Factory, SQL Server, Azure Data Lake Storage, Azure App Service, Apache Airflow, Apache Iceberg, Apache Spark, Apache Hudi, Apache Kafka, Power BI, Azure ML is a plus Experience with … Azure SQL Database, Cosmos DB, NoSQL, MongoDB Experience with Agile, DevOps methodologies Awareness and knowledge of ELT/ETL, DWH, APIs (RESTful), Spark APIs, FTP protocols, SSL, SFTP, PKI (Public Key Infrastructure) and Integration testing Skills and Abilities Knowledge of Python, SQL, SSIS, and Spark languages. Demonstrable ability more »
Posted:

Data Architect

London Area, United Kingdom
Mirai Talent
What you’ll be using: Platforms & Tools: Cloud Computing platforms (ADLS Gen2), Microsoft Stack (Synapse, Databricks, Fabric, Profisee), Snowflake Data Integration, Azure Service Bus, Apache Airflow, Apache Iceberg, Apache Spark, Apache Hudi, Apache Kafka, Power BI, BigQuery, Delta Lake, Azure DevOps, Azure Monitor, Azure Data … Server, Azure Data Lake Storage, Azure App Service, Azure ML is a plus. Languages: Python, SQL, T-SQL, SSIS and high-level programming knowledge of Spark is a plus. DB: Azure SQL Database, Cosmos DB, NoSQL, MongoDB, and HBase are a plus. Methodologies: Agile and DevOps are must-haves. Concepts: ELT …/ETL, DWH, APIs (RESTful), Spark APIs, FTP protocols, SSL, SFTP, PKI (Public Key Infrastructure) and Integration testing. If this sounds like you, be sure to get in touch – we are shortlisting right away. If you like the sound of the opportunity, but don’t quite tick every box more »
Posted:

Senior Data Engineer

London Area, United Kingdom
Lorien
Azure Data Lake Storage, Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Azure SQL Database, Azure Stream Analytics, etc. Strong Python or Scala with Spark, PySpark experience Experience with relational databases and NoSQL databases Significant experience and in-depth knowledge of creating data pipelines and associated design principles, standards … Ability to design and implement data warehousing solutions using Azure Synapse Analytics. Azure Databricks: Proficiency in using Azure Databricks for data processing and analytics. Apache Spark: Deep understanding of Apache Spark for large-scale data processing. Azure Blob Storage and Azure Data Lake Storage: Expertise in more »
Posted:

Data Architect

London Area, United Kingdom
HCLTech
Certified Solutions Architect, AWS Certified Data Analytics Specialty, or AWS Certified Big Data Specialty. Experience with other big data and streaming technologies such as Apache Spark, Apache Flink, or Apache Beam. Knowledge of containerization and orchestration technologies such as Docker and Kubernetes. Experience with data lakes more »
Posted:

Appian Software Engineer

Chicago, Illinois, United States
Hybrid / WFH Options
Request Technology - Robyn Honquest
required) Experience with distributed message brokers using Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required) Experience working with various types of databases like Relational, NoSQL, Object-based more »
Employment Type: Permanent
Salary: USD 145,000 Annual
Posted:

Machine Learning Engineer - AI - MLOps

Birmingham, England, United Kingdom
Hybrid / WFH Options
Xpertise Recruitment
CD, and model monitoring. Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy). Experience with distributed computing frameworks such as Apache Spark is a plus; Airflow would also be a bonus. Role overview: If you're looking to work with more »
Posted:

Machine Learning Engineer - AI - MLOps

Newcastle Upon Tyne, England, United Kingdom
Hybrid / WFH Options
Xpertise Recruitment
CD, and model monitoring. Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy). Experience with distributed computing frameworks such as Apache Spark is a plus; Airflow would also be a bonus. Role overview: If you're looking to work with more »
Posted:

Associate Principal, Appian Development

Chicago, Illinois, United States
Request Technology
required) Experience with distributed message brokers using Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required) Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm and more »
Employment Type: Permanent
Salary: USD 150,000 Annual
Posted:

Associate Principal, Appian Development

Dallas, Texas, United States
Request Technology
required) Experience with distributed message brokers using Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required) Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm and more »
Employment Type: Permanent
Salary: USD 150,000 Annual
Posted:

Apache Spark application Developer

Greater London, England, United Kingdom
Wipro
workplace where each employee's privacy and personal dignity is respected and protected from offensive or threatening behaviour including violence and sexual harassment Role: Apache Spark Application Developer Skills Required: Hands-on experience as a software engineer in a globally distributed team working with the Scala, Java programming language … preferably both) Experience with big-data technologies Spark/Databricks and Hadoop/ADLS is a must Experience with at least one cloud platform: Azure (preferred), AWS or Google Experience building data lakes and data pipelines in the cloud using Azure and Databricks or similar tools. Spark Developer more »
Posted:

Data Engineer - Dallas

Dallas, Texas, United States
NTT DATA
modeling techniques and experience with data modeling tools Proficiency in designing and optimizing data pipelines using ETL/ELT frameworks and tools (e.g., Informatica, Apache Spark, Airflow, AWS Glue) Working knowledge of data warehousing Familiarity with cloud-based data platforms and services (e.g., Snowflake, AWS, Google Cloud, Azure … ETL/ELT tools like Informatica, Apache Spark; Alteryx (good to have) Experience with version control systems (e.g., Git) and agile software development methodologies Strong communication skills to effectively convey technical concepts to both technical and non-technical stakeholders Excellent problem-solving skills and the ability to work more »
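The ETL/ELT pattern this listing names can be sketched in plain Python. This is a teaching sketch only, not any employer's pipeline: the CSV payload and field names are invented, and a real pipeline would run inside a tool such as Airflow, Glue, or Informatica rather than a script.

```python
import csv
import io

# Extract: read raw rows from a CSV source (an in-memory string stands in for a file or API)
raw = "id,amount\n1,10.5\n2,not_a_number\n3,4.0\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and drop malformed records
clean = []
for r in rows:
    try:
        clean.append({"id": int(r["id"]), "amount": float(r["amount"])})
    except ValueError:
        continue  # row 2 has an unparseable amount and is skipped

# Load: compute a total as a stand-in for writing to a warehouse table
total = sum(r["amount"] for r in clean)
```

The extract and load stages are what change between tools; the transform logic in the middle is what interviews for roles like this tend to probe.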
Employment Type: Permanent
Salary: USD Annual
Posted:

Senior Data Engineer - FullCircl

London Area, United Kingdom
FullCircl
working closely with our product teams on existing projects and new innovations to support company growth and profitability. Our Tech Stack Python Scala Kotlin Spark Google PubSub Elasticsearch, BigQuery, PostgreSQL Kubernetes, Docker, Airflow Key Responsibilities Designing and implementing scalable data pipelines using tools such as Apache Spark … Data Infrastructure projects, as well as designing and building data intensive applications and services. Experience with data processing and distributed computing frameworks such as Apache Spark Expert knowledge in one or more of the following languages - Python, Scala, Java, Kotlin Deep knowledge of data modelling, data access, and more »
Posted:

Senior Data Engineer

Luton, England, United Kingdom
easyJet
data pipelines using tools such as Airflow, Jenkins and GitHub actions. · Highly competent hands-on experience with relevant Data Engineering technologies, such as Databricks, Spark, Spark API, Python, SQL Server, Scala · Help the business harness the power of data within easyJet, supporting them with insight, analytics and data … system. · Significant experience with Python, and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). · Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Arrow, MapR). · Significant experience with SQL – comfortable writing efficient SQL. · Experience using … enterprise scheduling tools (e.g. Apache Airflow, Spring DataFlow, Control-M) · Experience with Linux and containerisation What you’ll get in return ·Competitive base salary ·Up to 20% bonus ·25 days holiday ·BAYE, SAYE & Performance share schemes ·7% pension ·Life Insurance ·Work Away Scheme ·Flexible benefits package ·Excellent staff travel more »
Posted:

Senior Software Engineer, Query Performance, Java/Scala

United Kingdom
Xonai
Software Engineer for this role, you will collaborate with the founding team to expand the integration of our Big Data processing acceleration technology with Apache Spark to drive new optimizations and broader SQL operation coverage. Your contributions to our core solution will directly impact data infrastructure processing 10s … as batch processing code, data parsing, shuffling and data partitioning algorithms. Maintain the solution up to date and compatible with a variety of supported Apache Spark runtimes. Independently and diligently write, test and deploy production code driven by modern software engineering practices. Work with the internals of leading more »
Posted:

Senior Data Engineer

London Area, United Kingdom
Hybrid / WFH Options
Lawrence Harvey
of the company's data infrastructure. You will work with some of the most innovative tools in the market including Snowflake, AWS (Glue, S3), Apache Spark, Apache Airflow and DBT!! The role is hybrid, with 2 days in the office in central London and the company is more »
Posted:

Data Solution Architect

Manchester Area, United Kingdom
hackajob
comfortable designing and constructing bespoke solutions and components from scratch to solve the hardest problems. Adept in Java, Scala, and big data technologies like Apache Kafka and Apache Spark, they bring a deep understanding of engineering best practices. This role involves scoping and sizing, and indeed estimating … be considered. Key responsibilities of the role are summarised below Design and implement large-scale data processing systems using distributed computing frameworks such as Apache Kafka and Apache Spark. Architect cloud-based solutions capable of handling petabytes of data. Lead the automation of CI/CD pipelines for more »
Posted:

Data Engineer

London, United Kingdom
Uniting People
Data Engineer 6 Month Contract Inside IR35 £450/day Hiring Immediately Job Description (Apache Iceberg, Spark, Big Data) Job Details Overview: 5+ years of overall IT experience with strong programming skills Excellent skills in Apache Iceberg, Spark and Big Data 3+ years of … Big Data project development experience Hands-on experience in areas such as Apache Iceberg & Spark, Hadoop, Hive Must have knowledge of at least one database, e.g. Postgres, Oracle, MongoDB Strong grasp of SDLC processes and DevOps knowledge (Jira, Jenkins pipelines) Working in an Agile pod and with team collaboration Ability to participate more »
Employment Type: Contract
Rate: £500/day Inside IR35
Posted:

Senior Java Software Engineer

London Area, United Kingdom
E-Resourcing Ltd - Specialist I.T. Recruitment
development (ideally AWS) and container technologies Strong communication and interpersonal skills Experience managing projects and working with external third party teams Ideally experience with Apache Spark or Apache Flink (but not essential) Please note, this role is unable to provide sponsorship. If this role sounds of interest more »
Posted:

Software Engineer (DV Security Clearance)

Gloucester, England - South West, United Kingdom
CGI
IaC), automation & configuration management; Ansible (plus Puppet, Saltstack), Terraform, CloudFormation; NodeJS, REACT/MaterialUI (plus Angular), Python, JavaScript; Big data processing and analysis, e.g. Apache Hadoop (CDH), Apache Spark; RedHat Enterprise Linux, CentOS, Debian or Ubuntu. Java 8, Spring framework (preferably Spring boot), AMQP RabbitMQ, Open source more »
Employment Type: Full Time
Posted:

Head of Data - Principal Engineer

London, United Kingdom
JP Morgan Chase
and develop innovative solutions. Data engineering skills: Proficiency in designing, building, and optimizing data pipelines, as well as experience with big data processing tools like Apache Spark, Hadoop, and Dataflow. Experience in designing & operating Operational Datastore/Data Lake/Data Warehouse platforms at scale with high availability. Data integration: Familiarity … with data integration tools and techniques, including ETL (Extract, Transform, Load) processes and real-time data streaming (e.g., using Apache Kafka, Kinesis, or Pub/Sub), exposing data sets via GraphQL. Cloud platforms expertise: Deep understanding of GCP/AWS services, architectures, and best practices, with experience in designing and more »
Salary: £ 80 K
Posted:

Data Engineer - Databricks - Remote

London Area, United Kingdom
Hybrid / WFH Options
Primus Connect
the UK). Role Overview: In this vital role, you will develop and maintain enterprise-grade software systems leveraging your expertise in Databricks, Python, Spark, R, and SQL. You will collaborate closely with our architecture team to design scalable, clean solutions that support continuous delivery and improvement. Your contributions more »
Posted:

Director of Applied AI and Data Science

Houston, Texas, United States
Request Technology - Craig Johnson
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: Proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming more »
Employment Type: Permanent
Salary: USD Annual
Posted:

Director of Applied AI and Data Science

Chicago, Illinois, United States
Request Technology - Craig Johnson
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: Proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming more »
Employment Type: Permanent
Salary: USD Annual
Posted:

Director of Applied AI and Data Science

Salt Lake City, Utah, United States
Request Technology - Craig Johnson
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: Proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming more »
Employment Type: Permanent
Salary: USD Annual
Posted:

Director of Applied AI and Data Science COE + RD

Salt Lake City, Utah, United States
Request Technology - Robyn Honquest
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: Proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming more »
Employment Type: Permanent
Salary: USD Annual
Posted:
Apache Spark Salary Percentiles
10th Percentile: £50,000
25th Percentile: £61,180
Median: £80,000
75th Percentile: £105,000
90th Percentile: £118,750