Permanent PySpark Jobs

76 to 100 of 120 Permanent PySpark Jobs

Analytic Developer

Odenton, Maryland, United States
Leidos
analysis, software design, implementation, testing, integration, deployment/installation, and maintenance) and programming. Develop in Python Jupyter Notebook using data science best practices. Develop PySpark analytics using big data best practices. Document in Jira and Confluence using Agile methodologies. Commit code in GitLab using version control best practices. You more »
Employment Type: Permanent
Salary: USD Annual
Posted:

Analytic Developer

Catonsville, Maryland, United States
Leidos
analysis, software design, implementation, testing, integration, deployment/installation, and maintenance) and programming. Develop in Python Jupyter Notebook using data science best practices. Develop PySpark analytics using big data best practices. Document in Jira and Confluence using Agile methodologies. Commit code in GitLab using version control best practices. You more »
Employment Type: Permanent
Salary: USD Annual
Posted:

Analytic Developer

Riverdale, Maryland, United States
Leidos
analysis, software design, implementation, testing, integration, deployment/installation, and maintenance) and programming. Develop in Python Jupyter Notebook using data science best practices. Develop PySpark analytics using big data best practices. Document in Jira and Confluence using Agile methodologies. Commit code in GitLab using version control best practices. You more »
Employment Type: Permanent
Salary: USD Annual
Posted:

Analytic Developer

Severn, Maryland, United States
Leidos
analysis, software design, implementation, testing, integration, deployment/installation, and maintenance) and programming. Develop in Python Jupyter Notebook using data science best practices. Develop PySpark analytics using big data best practices. Document in Jira and Confluence using Agile methodologies. Commit code in GitLab using version control best practices. You more »
Employment Type: Permanent
Salary: USD Annual
Posted:

Analytic Developer

Fulton, Maryland, United States
Leidos
analysis, software design, implementation, testing, integration, deployment/installation, and maintenance) and programming. Develop in Python Jupyter Notebook using data science best practices. Develop PySpark analytics using big data best practices. Document in Jira and Confluence using Agile methodologies. Commit code in GitLab using version control best practices. You more »
Employment Type: Permanent
Salary: USD Annual
Posted:

Analytic Developer

Laurel, Maryland, United States
Leidos
analysis, software design, implementation, testing, integration, deployment/installation, and maintenance) and programming. Develop in Python Jupyter Notebook using data science best practices. Develop PySpark analytics using big data best practices. Document in Jira and Confluence using Agile methodologies. Commit code in GitLab using version control best practices. You more »
Employment Type: Permanent
Salary: USD Annual
Posted:

Analytic Developer

Hanover, Maryland, United States
Leidos
analysis, software design, implementation, testing, integration, deployment/installation, and maintenance) and programming. Develop in Python Jupyter Notebook using data science best practices. Develop PySpark analytics using big data best practices. Document in Jira and Confluence using Agile methodologies. Commit code in GitLab using version control best practices. You more »
Employment Type: Permanent
Salary: USD Annual
Posted:

Analytic Developer

Columbia, Maryland, United States
Leidos
analysis, software design, implementation, testing, integration, deployment/installation, and maintenance) and programming. Develop in Python Jupyter Notebook using data science best practices. Develop PySpark analytics using big data best practices. Document in Jira and Confluence using Agile methodologies. Commit code in GitLab using version control best practices. You more »
Employment Type: Permanent
Salary: USD Annual
Posted:

Analytic Developer

Ellicott City, Maryland, United States
Leidos
analysis, software design, implementation, testing, integration, deployment/installation, and maintenance) and programming. Develop in Python Jupyter Notebook using data science best practices. Develop PySpark analytics using big data best practices. Document in Jira and Confluence using Agile methodologies. Commit code in GitLab using version control best practices. You more »
Employment Type: Permanent
Salary: USD Annual
Posted:

Data Engineer

London Area, United Kingdom
Axtria - Ingenious Insights
Mart. Utilize Vector Databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in programming languages including Python, Spark, Databricks, PySpark, SQL, and ML Algorithms. Implement Machine Learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI/… environments, Azure Data Lake, Azure Data Factory, Microservices architecture. Experience with Vector Databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML Algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with ML/OPS tools such more »
Posted:

Senior Data Engineer

London Area, United Kingdom
Hybrid / WFH Options
Venturi
candidate needs experience and confidence to speak up when several teams can be involved in delivering the project. There is a massive emphasis on PySpark and Databricks for this particular role. Technical Skills Required: Azure (ADF, Functions, Blob Storage, Data Lake Storage, Azure Databricks), Databricks, Spark, Delta Lake … SQL, Python, PySpark, ADLS. Day To Day Responsibilities: Extensive experience in designing, developing, and managing end-to-end data pipelines, ETL (Extract, Transform, Load), and ELT (Extract, Load, Transform) solutions. Maintains a proactive approach to staying updated with emerging technologies and a strong desire to continuously learn and adapt more »
Posted:

Senior Data Engineer

London Area, United Kingdom
Hybrid / WFH Options
HENI
scientific Python toolset. Our tech stack includes Airbyte for data ingestion, Prefect for pipeline orchestration, AWS Glue for managed ETL, along with Pandas and PySpark for pipeline logic implementation. We utilize Delta Lake and PostgreSQL for data storage, emphasizing the importance of data integrity and version control in our … testing frameworks. Proficiency in Python and familiarity with modern software engineering practices, including 12-factor, CI/CD, and Agile methodologies. Deep understanding of Spark (PySpark), Python (Pandas), orchestration software (e.g. Airflow, Prefect) and databases, data lakes and data warehouses. Experience with cloud technologies, particularly AWS Cloud services, with a more »
Posted:

Engineering Delivery Lead (AWS/Snowflake/Python/Scala) - UK based - 6 Month contract + extensions - £450-550/day + negotiable

England, United Kingdom
Orbis Group
experience leading a data engineering team. Key Tech: AWS (S3, Glue, EMR, Athena, Lambda); Snowflake, Redshift; DBT (Data Build Tool); Programming: Python, Scala, Spark, PySpark or Ab Initio; Data pipeline orchestration (Apache Airflow); Knowledge of SQL. This is a 6-month initial contract with a trusted client of ours. more »
Posted:

Data Engineer

Greater London, England, United Kingdom
Hybrid / WFH Options
Agora Talent
come with scaling a company • The ability to translate complex and sometimes ambiguous business requirements into clean and maintainable data pipelines • Excellent knowledge of PySpark, Python and SQL fundamentals • Experience in contributing to complex shared repositories. What’s nice to have: • Prior early-stage B2B SaaS experience involving client more »
Posted:

Data Engineer

London Area, United Kingdom
Harrington Starr
start interviewing ASAP. Responsibilities: Azure Cloud Data Engineering using Azure Databricks; Data Warehousing; Data Engineering. Very strong with the Microsoft Stack. ESSENTIAL knowledge of PySpark clusters. Python & C# scripting experience. Experience of message queues (Kafka). Experience of containerization (Docker). FINANCIAL SERVICES EXPERIENCE (Energy/commodities trading). If you have more »
Posted:

Data Engineer

London Area, United Kingdom
Hybrid / WFH Options
Durlston Partners
building a massively distributed cloud-hosted data platform that would be used by the entire firm. Tech Stack: Python/Java, Dremio, Snowflake, Iceberg, PySpark, Glue, EMR, dbt and Airflow/Dagster. Cloud: AWS, Lambdas, ECS services This role would focus on various areas of Data Engineering including: End more »
Posted:

Lead Data Scientist

Leicester, England, United Kingdom
Harnham
STEM subject, e.g. Mathematics, Statistics or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. NEXT STEPS: If this role looks of interest, please reach out to Joseph Gregory. more »
Posted:

Lead Data Scientist

Leicester, England, United Kingdom
Hybrid / WFH Options
Harnham
STEM subject, e.g. Mathematics, Statistics or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. BENEFITS: Pension scheme, Gym Membership, Share options, Bonus, Hybrid working. HOW TO APPLY: Register more »
Posted:

Lead Data Scientist

Nottingham, England, United Kingdom
Hybrid / WFH Options
Harnham
STEM subject, e.g. Mathematics, Statistics or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. BENEFITS: Pension scheme, Gym Membership, Share options, Bonus, Hybrid working. HOW TO APPLY: Register more »
Posted:

Java Data Engineer

United Kingdom
Amber Labs
experience in PowerBI would also be useful. You will be an Engineer with past experience with Java, data, and infrastructure (DevOps). Java, Python, PySpark. Mechanisms: MongoDB, Redshift, AWS S3. Environments/Infra: AWS (required), [AWS Lambda, Terraform] (nice to have). Platforms: Creating data pipelines within Databricks or equivalent more »
Posted:

Data Modeler

United Kingdom
Hybrid / WFH Options
Tata Consultancy Services
vision to convert data requirements into logical models and physical solutions. with data warehousing solutions (e.g. Snowflake) and data lake architecture, Azure Databricks/PySpark & ADF. retail data model standards - ADRM. communication skills, organisational and time management skills. to innovate, support change and problem solve. attitude, with the ability more »
Posted:

Data Scientist

London Area, United Kingdom
Vallum Associates
directly with clients. Supporting clients in platform discovery, integration, training, and collaboration on data science projects. Proficiency in technical skills, particularly Python, R, SQL, PySpark, and JavaScript. Assisting users in mastering the platform. Analysing diverse data and ML applications. Providing strategic insights to ensure customer success. Collaborating with customers more »
Posted:

Python App Developer

Glasgow, Scotland, United Kingdom
Hybrid / WFH Options
Synchro
contract position. If you possess a solid background in software application development, with experience in cloud or microservice architecture, and proficiency in Python or PySpark, we encourage you to apply. Python App Developer Requirements: Proficiency with Python or PySpark. Exposure to cloud technologies such as Airflow, Astronomer, Kubernetes, AWS more »
Posted:

Senior Data Engineer (AWS)

London Area, United Kingdom
Xcede
of 4 years commercial experience. Strong proficiency in AWS and relevant technologies. Good communication skills. Preferably experience with some or all of the following: PySpark, Kubernetes, Terraform. Experience in a start-up/scale-up or fast-paced environment is desirable. HOW TO APPLY Please register your interest by more »
Posted:

Data Pipeline Engineer

United Kingdom
Hexegic
not received on time. Communicating outages with the end users of a data pipeline. What We Value: Comfortable reading and writing code in Python, PySpark and Java. Basic understanding of Spark and interested in learning the basics of tuning Spark jobs. Data pipeline monitoring team members should be able more »
Posted:
PySpark
10th Percentile: £52,500
25th Percentile: £57,500
Median: £70,000
75th Percentile: £87,500
90th Percentile: £99,000