Permanent Apache Spark Jobs

1 to 25 of 313 Permanent Apache Spark Jobs

Senior Scala Developer - Apache Spark

London, England, United Kingdom
Hybrid / WFH Options
Pioneer Search
Senior Scala Developer - Apache Spark - Urgent Requirement. Contract Length: 6 Months. IR35 status: Inside. Location: London - hybrid working. A Senior Scala Developer with experience in Apache Spark is needed for a British consultancy organisation. You will be an integral member of the team providing technical expertise … you will be able to implement ETL pipelines to process, transform, and standardise data from various sources, as well as optimise the performance of Spark applications. You will work closely with data scientists, software engineers, and machine learning experts to enhance the data platform and contribute to the development of cloud …

Lead Data Engineer (Director Level)

London Area, United Kingdom
Nicoll Curtin
Lead Data Engineer (Director) - Individual contributor - Azure, Data Factory, Databricks, Apache Spark - London based. I am hiring a Lead Data Engineer for a crucial role within one of my Investment Bank clients in London. This role is at Director level, as they require a very senior candidate … Leading data engineering practices; supporting current applications; introducing AI practices to the team/project; communicating key successes with stakeholders. Key Skills: Azure Databricks, Apache Spark, Data Science, AI, ML. Certifications or continued upskilling/contribution to blog posts within Data & AI are beneficial but not essential. This is a … without sponsorship; if you are interested, please apply or email me directly - aaron.dhammi@nicollcurtin.com …

Machine Learning Engineer

Cheshire East, England, United Kingdom
Wipro
Data Scientists and Service Engineering teams. Experience with design, development, and operations that leverages deep knowledge in the use of services like Amazon Kinesis, Apache Kafka, Apache Spark, Amazon SageMaker, Amazon EMR, NoSQL technologies, and other third parties. Develop and define key business questions and build … a related field. Experience of data platform implementation, including 3+ years of hands-on experience in implementing and performance-tuning Kinesis/Kafka/Spark/Storm. Experience with analytic solutions applied to the Marketing or Risk needs of enterprises. Basic understanding of machine learning fundamentals. Ability to … take machine learning models and implement them as part of a data pipeline. IT platform implementation experience. Experience with one or more relevant tools (Flink, Spark, Sqoop, Flume, Kafka, Amazon Kinesis). Experience developing software code in one or more programming languages (Java, JavaScript, Python, etc.). Current hands-on implementation experience …

Spark Architect

Leeds, England, United Kingdom
PRACYVA
Spark Architect/SME. Contract role - 6 months to begin with, and extendable. Location: Leeds, UK (min. 3 days onsite). Context: legacy ETL code (for example, DataStage) is being refactored into PySpark using Prophecy (low-code/no-code) and available converters. Converted code is causing failures/performance issues. … Skills: Spark architecture - component understanding around Spark data integration (PySpark, scripting, variable setting etc.), Spark SQL, and Spark explain plans. Spark SME - able to analyse Spark code failures through Spark plans and make corrective recommendations. Spark SME - able to review PySpark … and Spark SQL jobs and make performance-improvement recommendations. Spark SME - able to understand DataFrames/Resilient Distributed Datasets (RDDs), understand any memory-related problems, and make corrective recommendations. Monitoring - able to monitor Spark jobs using wider tools such as Grafana to see …

Appian Software Engineer

Chicago, Illinois, United States
Hybrid / WFH Options
Request Technology - Robyn Honquest
required) Experience with distributed message brokers using Kafka (required). Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink etc. (required). Experience working with various types of databases, such as relational, NoSQL, and object-based …
Employment Type: Permanent
Salary: USD 145,000 Annual

Associate Principal, Appian Development

Chicago, Illinois, United States
Request Technology
required) Experience with distributed message brokers using Kafka (required). Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink etc. (required). Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and …
Employment Type: Permanent
Salary: USD 150,000 Annual

Associate Principal, Appian Development

Dallas, Texas, United States
Request Technology
required) Experience with distributed message brokers using Kafka (required). Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink etc. (required). Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and …
Employment Type: Permanent
Salary: USD 150,000 Annual

Lead Data Engineer

London, United Kingdom
Hybrid / WFH Options
DueDil
working closely with our product teams on existing projects and new innovations to support company growth and profitability. OUR TECH STACK · Python · Scala · Kotlin · Spark · Google PubSub · Elasticsearch, BigQuery, PostgreSQL · Kubernetes, Docker, Airflow KEY RESPONSIBILITIES · Designing and implementing scalable data pipelines using tools … such as Apache Spark, Google PubSub etc. · Optimizing data storage and retrieval systems for maximum performance using both relational and NoSQL databases. · Continuously monitoring and improving the performance of our data solutions to meet our clients' needs. · Collaborating with cross-functional teams to understand business requirements and provide … data infrastructure projects, as well as designing and building data-intensive applications and services. · Experience with data processing and distributed computing frameworks such as Apache Spark. · Expert knowledge in one or more of the following languages - Python, Scala, Java, Kotlin. · Deep knowledge of data modelling, data access, and …
Salary: £70K

Lead Data Engineer

London Area, United Kingdom
FullCircl
working closely with our product teams on existing projects and new innovations to support company growth and profitability. Our Tech Stack: Python, Scala, Kotlin, Spark, Google PubSub, Elasticsearch, BigQuery, PostgreSQL, Kubernetes, Docker, Airflow. Key Responsibilities: Designing and implementing scalable data pipelines using tools such as Apache Spark … data infrastructure projects, as well as designing and building data-intensive applications and services. Experience with data processing and distributed computing frameworks such as Apache Spark. Expert knowledge in one or more of the following languages - Python, Scala, Java, Kotlin. Deep knowledge of data modelling, data access, and …

Scientist 3, Data Science - 4606

Philadelphia, Pennsylvania, United States
Hybrid / WFH Options
Comcast Corporation
use Jira, Confluence, and Git in an Agile development environment; perform DevOps processes using Concourse, Docker, and Kubernetes; perform large-scale data processing using Apache Spark; manage big data on Cloudera; perform Machine Learning, including developing and deploying predictive models leveraging ML algorithms; use AWS cloud platform; deploy … related technical or quantitative field; and one (1) year of experience programming using Python and Scala; using Jira; performing large-scale data processing using Apache Spark; managing big data on Cloudera; performing Machine Learning; using AWS cloud platform; deploying tools and applications on Unix; and writing SQL in …
Employment Type: Permanent
Salary: USD Annual

Senior Big Data Platform Engineer

Edinburgh, Scotland, United Kingdom
Swift Strategic Solutions Inc
more details of the position - Ideal Qualifications. Must have: platform engineer; Azure DevOps and CI/CD tools; Azure Cloud; Microsoft Fabric; Azure services; Apache Spark; experience of using IaC (Terraform, APIs); data engineer; Big Data; PySpark. Solid understanding of data engineering concepts and experience of building and maintaining … DevOps/Agile. Experience of managing environments using IaC (Terraform, APIs). Experience of designing robust, secure, and compliant platform capabilities. Strong understanding of Apache Spark, including its architecture and components, and how to create, monitor, optimize, and scale Spark jobs. Please send your resumes to adithya.thakur@s3staff.com for immediate …

Data Engineer

Derby, England, United Kingdom
Mirai Talent
data components such as Azure Data Factory, Azure SQL DB, Azure Data Lake, etc. Strong Python and SQL skills for data manipulation. Experience with Apache Spark and/or Databricks. Familiarity with BI visualization tools like Power BI. Experience in managing end-to-end analytics pipelines (batch and … such as Azure Data Engineer Associate are desirable. Knowledge of data ingestion methods for real-time and batch processing. Proficiency in PySpark and debugging Apache Spark workloads. What's in it for you? Annual bonus scheme - up to 10%; excellent pension scheme; flexible working; enhanced family-friendly policies …

DevSecOps Engineer

London Area, United Kingdom
Capgemini
are looking for a skilled DevSecOps Engineer to join our team. The ideal candidate will have strong knowledge of operational procedures, data transformation using Apache Spark, AWS RDS (MySQL), and working with Hadoop. Familiarity with Tableau and Red Hat Decision Center is also required. Security Clearance (SC) is … mandatory. Requirements: Experience in a DevSecOps role. Strong knowledge of operational procedures. Proficiency in Apache Spark, AWS RDS (MySQL), and Hadoop. Knowledge of Tableau and Red Hat Decision Central. Key Responsibilities: Manage operational procedures. Transform and process data using Apache Spark. Administer AWS RDS with MySQL. Work with …

DevSecOps Engineer

London Area, United Kingdom
Hybrid / WFH Options
Damia Group
analysis. Your expertise will be instrumental in ensuring the security and efficiency of the data handling and reporting processes. Key Responsibilities: Data processing: utilize Apache Spark, AWS RDS, and Hadoop to process large datasets efficiently and securely. Reporting: generate comprehensive and insightful reports using Tableau. Business rules management … adherence to best practices and maintaining high security standards. Requirements: Security clearance: must hold a current and valid Security Clearance. Technical skills: proficient with Apache Spark, AWS RDS, and Hadoop; experienced in using Tableau for data visualization and reporting; familiarity with Red Hat Decision Manager for business rules …

Data Engineer

London Area, United Kingdom
Hybrid / WFH Options
Careers at MI5, SIS and GCHQ
delivering moderate-to-complex data flows as part of a development team in collaboration with others. You'll be confident using technologies such as Apache Kafka, Apache NiFi, SAS DI Studio, or other data integration platforms. You can implement, deliver, and translate several data models, including unstructured data … and recognised standards to build solutions using various traditional or big data languages such as SQL, PL/SQL, SAS Macro Language, Python, Scala, Apache Spark, Java, JavaScript etc., using various tools including SAS, Hue (Hive/Impala), and Kibana (Elasticsearch). Knowledge of data management on Cloud …

Head of Data - Principal Software Engineer

London, United Kingdom
JP Morgan Chase
develop innovative solutions. Data engineering skills: proficiency in designing, building, developing, and optimizing data pipelines, as well as experience with big data processing tools like Apache Spark, Hadoop, and Dataflow. Experience in designing and operating Operational Datastore/Data Lake/Data Warehouse platforms at scale with high availability. Data integration: familiarity … with data integration tools and techniques, including ETL (Extract, Transform, Load) processes and real-time data streaming (e.g., using Apache Kafka, Kinesis, or Pub/Sub), and exposing data sets via GraphQL. Cloud platforms expertise: deep understanding of GCP/AWS services, architectures, and best practices, with hands-on experience in …
Salary: £80K

Senior Java Software Engineer

London Area, United Kingdom
E-Resourcing Ltd - Specialist I.T. Recruitment
development (ideally AWS). Knowledge of, and ideally hands-on experience with, data streaming, event-based architectures, and Kafka. Strong communication and interpersonal skills. Experience with Apache Spark or Apache Flink would be ideal, but not essential. Please note, this role is unable to provide sponsorship. If this role …

Senior Data Engineer

London Area, United Kingdom
Hybrid / WFH Options
Solirius Consulting
Flask, Tornado or Django, Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow, or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets, and improving data quality. Preparing data for predictive and prescriptive modelling. Hands …

Director of Applied AI and Data Science

New York, United States
Request Technology - Craig Johnson
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming …
Employment Type: Permanent
Salary: USD Annual

Director of Applied AI and Data Science

Houston, Texas, United States
Request Technology - Craig Johnson
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming …
Employment Type: Permanent
Salary: USD Annual

Director of Applied AI and Data Science

Chicago, Illinois, United States
Request Technology - Craig Johnson
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming …
Employment Type: Permanent
Salary: USD Annual

Director of Applied AI and Data Science

Los Angeles (Downtown), California, United States
Request Technology - Craig Johnson
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming …
Employment Type: Permanent
Salary: USD Annual

Director of Applied AI and Data Science

Salt Lake City, Utah, United States
Request Technology - Craig Johnson
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming …
Employment Type: Permanent
Salary: USD Annual

Director of Applied AI and Data Science COE + RD

Salt Lake City, Utah, United States
Request Technology - Robyn Honquest
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming …
Employment Type: Permanent
Salary: USD Annual

Director of Applied AI and Data Science COE + RD

Chicago, Illinois, United States
Request Technology - Robyn Honquest
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming …
Employment Type: Permanent
Salary: USD Annual
Apache Spark
10th Percentile: £48,250
25th Percentile: £63,750
Median: £80,000
75th Percentile: £102,500
90th Percentile: £118,750