Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Client Server
Data Engineer (Python Spark SQL) *Newcastle Onsite* to £70k. Do you have a first-class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that has secure backing and an established Hedge Fund client as a partner … by minimum A, A, B grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience … earn a competitive salary (to £70k) plus significant bonus and benefits package. Apply now to find out more about this Data Engineer (Python Spark SQL) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn.
Naperville, Illinois, United States Hybrid / WFH Options
esrhealthcare
Duration: Long Term. Experience level: 10+ years. Mandatory skills: ADF, Azure Databricks, PySpark, Spark SQL, PL/SQL, Python. Job Description: Core skills required: ADF, Azure Databricks, PySpark, Spark SQL, PL/SQL, Python. At least … Lead the Data Governance Program. Extensive hands-on experience implementing data migration and data processing using Azure services: Serverless Architecture, Azure Storage, Azure SQL DB/DW, Data Factory, Azure Stream Analytics, Azure Analysis Services, HDInsight, Databricks, Azure Data Catalog, ML Studio, AI/ML, Azure Functions, ARM … available in the industry for data management, data ingestion, capture, processing and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc. Familiarity with Networking, Windows/Linux virtual machines, Containers, Storage, ELB, Auto Scaling is a plus. Experience developing and deploying ETL …
experiences, powered by best-in-class understanding of customer behavior and automation. Our work spans multiple technical disciplines: from deep-dive analytics using SQL and Spark SQL for large-scale data processing, to building automated marketing solutions with Python, Lambda, React.js, and leveraging internal … results to senior leadership. Experience with data visualization using Tableau, QuickSight, or similar tools. Experience in scripting for automation (e.g. Python) and advanced SQL skills. Experience programming to extract, transform and clean large (multi-TB) data sets. Experience with statistical analytics and programming languages such as R, Python … science, machine learning and data mining. Experience with theory and practice of design of experiments and statistical analysis of results. Experience with Python, Spark SQL, QuickSight, AWS Lambda & React.js - the team's core tools. Amazon is an equal opportunities employer. We believe passionately that employing a diverse …
we drive improvements in how millions of customers discover and evaluate products. Our work spans multiple technical disciplines: from deep-dive analytics using SQL and Spark SQL for large-scale data processing, to building automated marketing solutions with Python, Lambda, React.js, and leveraging internal … results to senior leadership. - Experience with data visualization using Tableau, QuickSight, or similar tools. - Experience in scripting for automation (e.g. Python) and advanced SQL skills. - Experience programming to extract, transform and clean large (multi-TB) data sets. - Experience with statistical analytics and programming languages such as R, Python … science, machine learning and data mining. - Experience with theory and practice of design of experiments and statistical analysis of results. - Experience with Python, Spark SQL, QuickSight, AWS Lambda & React.js - the team's core tools.
architectures that business engineering teams buy into and build their applications around. Required Qualifications, Capabilities, and Skills: Experience across the data lifecycle with Spark-based frameworks for end-to-end ETL, ELT & reporting solutions using key components like Spark SQL & Spark Streaming. … end-to-end engineering experience supported by excellent tooling and automation. Preferred Qualifications, Capabilities, and Skills: Good understanding of the Big Data stack (Spark/Iceberg). Ability to learn new technologies and patterns on the job and apply them effectively. Good understanding of established patterns, such as …
in Information Systems or Computer Science and have a keen interest in manipulating data and drawing analytics out of it. Experience with Python, Spark SQL and cloud technologies is ideal for this role. Join ATCC and be part of a team that supports the global scientific … or equivalent experience (degree in information systems or computer science preferred). Strong understanding of data infrastructure and systems development. Experience with Python, Spark SQL and cloud technologies. Ability to collaborate cross-functionally and translate business requirements into technical specifications. Excellent problem-solving skills and the …
role: This role sits within the Group Enterprise Systems (GES) Technology team. The right candidate would be an experienced Microsoft data warehouse developer (SQL Server, SSIS, SSAS) who can work both independently and as a member of a team to deliver enterprise-class data warehouse solutions and analytics … data models. Build and maintain automated pipelines to support data solutions across BI and analytics use cases. Work with Enterprise-grade technology, primarily SQL Server 2019 and potentially Azure technologies. Build patterns, common ways of working, and standardized data pipelines for DLG to ensure consistency across the organization. … Essential Core Technical Experience: 5 to 10+ years' extensive experience in SQL Server data warehouse or data provisioning architectures. Advanced SQL query writing & SQL procedure experience. Experience developing ETL solutions in SQL Server including SSIS & T-SQL. Experience in Microsoft BI technologies …
scalable data pipelines and infrastructure using AWS (Glue, Athena, Redshift, Kinesis, Step Functions, Lake Formation). Utilise PySpark for distributed data processing, ETL, SQL querying, and real-time data streaming. Architect and implement robust data solutions for analytics, reporting, machine learning, and data science initiatives. Establish and enforce … including Glue, Athena, Redshift, Kinesis, Step Functions, and Lake Formation. Strong programming skills in Python and PySpark for data processing and automation. Extensive SQL experience (Spark SQL, MySQL, Presto) and familiarity with NoSQL databases (DynamoDB, MongoDB, etc.). Proficiency in Infrastructure …
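The Spark SQL querying this role describes can be made concrete with a small helper that renders the kind of aggregate query a Glue/PySpark job might submit. This is a sketch only; the table and column names are illustrative assumptions, not details from the posting.

```python
# Illustrative only: renders the kind of Spark SQL a scheduled PySpark/Glue
# job might run to build a daily per-user aggregate. The table and column
# names (analytics.events, user_id, event_ts) are hypothetical.
def daily_aggregate_sql(source_table: str, target_date: str) -> str:
    return (
        "SELECT user_id, "
        "COUNT(*) AS event_count, "
        "MAX(event_ts) AS last_seen "
        f"FROM {source_table} "
        f"WHERE event_date = DATE'{target_date}' "
        "GROUP BY user_id"
    )

print(daily_aggregate_sql("analytics.events", "2024-01-01"))
```

In a real job the rendered string would be passed to `spark.sql(...)` and the result written back to the lake; keeping the query in one tested function makes the pipeline step easy to review.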
data lakes, and associated microservices using Java, NiFi flows, and Python. Search engine technology such as Solr or Elasticsearch. Hands-on experience handling Spark and Kafka cluster management. Experience as a software engineer lead or architect directly supporting Government technical stakeholders. DESIRED QUALIFICATIONS: Experience interacting with the AWS SDK, AWS … automated data management from end to end and sync-up between all the clusters. Developed and configured Kafka brokers to pipeline data into Spark Streaming. Developed Spark scripts using Scala shell commands as per the requirement. Developed Spark code and Spark SQL/Streaming for faster testing and processing of data. Experience with version control and release tools such as Ant, Maven, Subversion and Git. Understanding of incorporating application frameworks/design patterns at an enterprise level. Ability to produce quality code that adheres to coding standards.
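The Kafka-into-Spark-Streaming flow mentioned above boils down to draining a queue in fixed-size micro-batches. As a stdlib-only stand-in (no Kafka or Spark required, and explicitly not the posting's actual code), the pattern looks like this:

```python
# Stdlib simulation of the micro-batch pattern: the deque stands in for a
# Kafka topic, process_batch for the Spark Streaming job. Illustrative only.
from collections import deque

def process_batch(batch):
    # A real Spark job would parse, enrich, and persist these records;
    # upper-casing is a placeholder transformation.
    return [event.upper() for event in batch]

def consume(events, batch_size=2):
    """Drain the queue in fixed-size micro-batches, like a streaming trigger."""
    queue = deque(events)
    out = []
    while queue:
        batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
        out.extend(process_batch(batch))
    return out

print(consume(["evt1", "evt2", "evt3"]))
```

Structured Streaming hides this loop behind triggers and checkpoints, but the mental model of "small batches off a durable log" is the same.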
align with business needs and industry standards. The ideal candidate will have expertise in Java, SQL, Python, and Spark (PySpark & Spark SQL) while also being comfortable working with Microsoft Power Platform. Experience with Microsoft Purview is a plus. The role requires strong communication skills to collaborate effectively … 1. Data Architecture & Engineering: Design and implement scalable data architectures that align with business objectives. Work with Java, SQL, Python, PySpark, and Spark SQL to build robust data pipelines. Develop and maintain data models tailored to organizational needs. Reverse-engineer data models from existing live systems. Utilize Microsoft Power … solutions with business goals. Analyze and mitigate the impact of data standard breaches. Required Skills & Qualifications: Strong proficiency in Java, SQL, Python, Spark SQL, and PySpark. Experience with Microsoft Power Platform (PowerApps, Power Automate, etc.). Good understanding of data governance, metadata management, and compliance frameworks. Ability to communicate …
tools, with a proven ability to learn new tools. Experience with other tools like Qlik or MicroStrategy is a plus. Strong experience with querying languages (SQL, PL/SQL, Scala/Spark SQL, etc.). Expertise with developing complex dashboards containing aggregates and rolled-up …
within 30 days) Mastery in developing software code in one or more programming languages (Python, JavaScript, Java, MATLAB, etc.) Expert knowledge in databases (SQL, NoSQL, Graph, etc.) and data architecture (Data Lake, Lakehouse) Knowledgeable in machine learning/AI methodologies Strong writing and oral communication skills to deliver … design documents, technical reports, and presentations to a variety of audiences. Preferred Qualifications: Proficiency in Azure solutions (Databricks, Spark Streaming, OpenAI Service). Experience with PostgreSQL, Elasticsearch, MongoDB, and graph databases. Experience with GenAI-enabled daily workflows (coding, testing, analytics). Experience with one or more SQL-on-Hadoop technologies (Spark SQL, Hive, Impala, Presto, etc.). Experience in short release cycles and the full software lifecycle. Experience with Agile development methodology (e.g., Scrum). Benefits: Expression Networks offers competitive salaries and benefits, such as: 401k matching, PPO and HDHP medical/dental …
or more programming languages (Python, JavaScript, Java, MATLAB, etc.) 5+ years of experience creating ETL processes and working with a variety of databases (SQL, NoSQL, Graph, etc.) and data architectures (Data Lake, Lakehouse) 3+ years of experience leading a team in an Agile environment. 3+ years of experience … deliver design documents, technical reports, and presentations to a variety of audiences. Strong interpersonal and organizational skills. Preferred Qualifications: Proficiency in Azure solutions (Databricks, Spark Streaming, OpenAI Service). Experience with PostgreSQL, Elasticsearch, MongoDB, and graph databases. Experience with GenAI-enabled daily workflows (coding, testing, analytics). Experience with … one or more SQL-on-Hadoop technologies (Spark SQL, Hive, Impala, Presto, etc.). Benefits: 401k matching, PPO and HDHP medical/dental/vision insurance, education reimbursement, complimentary life insurance, generous PTO and holiday leave, onsite office gym access, Commuter Benefits Plan, in-office …
within 30 days) Fluency in developing software code in one or more programming languages (Python, JavaScript, Java, MATLAB, etc.) Advanced knowledge in databases (SQL, NoSQL, Graph, etc.) and data architecture (Data Lake, Lakehouse) Knowledgeable in machine learning/AI methodologies Strong writing and oral communication skills. Preferred Qualifications: Experience … with Azure solutions (Databricks, Spark Streaming, OpenAI Service). Experience with PostgreSQL, Elasticsearch, MongoDB, and graph databases. Experience with GenAI-enabled daily workflows (coding, testing, analytics). Experience with one or more SQL-on-Hadoop technologies (Spark SQL, Hive, Impala, Presto, etc.) …
minimum experience using Python 4+ years of Azure or AWS cloud platform experience At least 2 years of hands-on experience working with Apache Spark and Spark SQL development Preferred Qualifications: Good understanding of data integration, data quality and data architecture Good general knowledge …
data quality projects. You have extensive experience with Business Glossary, Data Catalog, Data Lineage or Reporting Governance. You know SQL inside and out. You have experience in Power BI, including DAX and data modeling techniques, at minimum star schemas (Kimball); others are nice to have. You get even … You have good experience with master data management. You are familiar with data quality tools like Azure Purview (or Collibra, Informatica, Soda), and with Python, Spark, PySpark, Spark SQL. Other security controls such as CLS (Column-Level Security) and OLS (Object-Level Security). You are fluent in Dutch and …
Birmingham, England, United Kingdom Hybrid / WFH Options
Nine Twenty Recruitment
stakeholders to understand and translate complex data needs Developing and optimising data pipelines with Azure Data Factory and Azure Synapse Analytics Working with Spark notebooks in Microsoft Fabric, using PySpark, Spark SQL, and potentially some Scala Creating effective data models, reports, and dashboards in … understanding of BI platform modernisation) Solid grasp of data warehousing and ETL/ELT principles Strong communication and stakeholder engagement skills Experience with SQL is essential, as the current structure is 90% SQL-based. Basic familiarity with Python (we're all at beginner level, but it …
and implement data storage solutions, including relational databases, NoSQL databases, data lakes, and cloud storage services. Define company data assets, including Spark, Spark SQL, and Hive SQL jobs to populate data models. Data Integration and API Development: Build and maintain integrations with internal and external data sources and APIs. Implement … Science, Data Science, or Information Science-related field required; Master's degree preferred. Experience with real-time data processing frameworks (e.g., Apache Kafka, Spark Streaming). Experience with data visualization tools (e.g., Tableau, Power BI, Looker). At least three years of related experience required.
Job Title: Sr. Hadoop with SQL, Hive. Work Location: Tampa, FL. Duration: Full time. Job Description: Mandatory certificate: Databricks Certified Developer, Apache Spark 3.0. Skills: Python, PySpark, Spark SQL, Hadoop, Hive. Responsibilities: Ensure effective design, development, validation and support activities in line … to work with multiple stakeholders. Education: Bachelor's or Master's degree in Computer Science, Engineering or related field. Mandatory Skills: Big Data Hadoop Ecosystem, Python, Spark SQL. LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, colour, creed …
S3, Glue, Redshift, etc. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Collaborate with Data Scientists and … BI infrastructure including Data Warehousing, reporting, and analytics platforms. Contribute to the development of the BI tools, skills, culture, and impact. Write advanced SQL queries and Python code to develop solutions. A day in the life of this role requires you to live at the intersection of data … warehousing, and building ETL pipelines. Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, Spark SQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell). Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience …
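The extract-transform-load flow this role centers on can be sketched in miniature with the standard library, using sqlite3 as a stand-in for Redshift or Hive (a deliberate simplification; the table name and schema are invented for illustration):

```python
# Miniature ETL: load raw rows, then transform with a SQL aggregate, the same
# shape a Redshift/Spark SQL job would have. sqlite3 stands in for the
# warehouse; raw_events and its columns are hypothetical.
import sqlite3

def run_etl(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_events (user_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO raw_events VALUES (?, ?)", rows)  # extract + load
    cur = conn.execute(                                            # transform
        "SELECT user_id, SUM(amount) FROM raw_events "
        "GROUP BY user_id ORDER BY user_id"
    )
    return cur.fetchall()

print(run_etl([("a", 1.0), ("b", 2.0), ("a", 3.0)]))  # [('a', 4.0), ('b', 2.0)]
```

At multi-TB scale the same SELECT would run on a distributed engine, but expressing the transform as set-based SQL rather than row-at-a-time code is exactly the skill the posting asks for.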
tools to manage the platform, ensuring resilience and optimal performance are maintained. Data Integration and Transformation: Integrate and transform data from multiple organisational SQL databases and SaaS applications using end-to-end dependency-based data pipelines, to establish an enterprise source of truth. Create ETL and ELT processes … using Azure Databricks, ensuring audit-ready financial data pipelines and secure data exchange with Databricks Delta Sharing and SQL Warehouse endpoints. Governance and Compliance: Ensure compliance with information security standards in our highly regulated financial landscape by implementing Databricks Unity Catalog for governance, data quality monitoring, and ADLS … architecture. Proven experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog …
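The ELT upserts a Databricks pipeline like this performs are typically written as Spark SQL MERGE statements against Delta tables. A small renderer makes the statement's shape auditable; the table and column names below are hypothetical, not taken from the posting:

```python
# Hypothetical helper that renders the Spark SQL MERGE a Databricks ELT job
# might use to upsert a staging table into a Delta target. All identifiers
# (finance.transactions, txn_id, ...) are illustrative.
def delta_upsert_sql(target, staging, key, cols):
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in cols)
    insert_cols = ", ".join([key] + cols)
    insert_vals = ", ".join(f"s.{c}" for c in [key] + cols)
    return (
        f"MERGE INTO {target} t USING {staging} s ON t.{key} = s.{key} "
        f"WHEN MATCHED THEN UPDATE SET {set_clause} "
        f"WHEN NOT MATCHED THEN INSERT ({insert_cols}) VALUES ({insert_vals})"
    )

print(delta_upsert_sql("finance.transactions", "staging.transactions_delta",
                       "txn_id", ["amount", "posted_at"]))
```

Generating the MERGE from a column list keeps the upsert idempotent and reviewable, which matters for the audit-ready financial pipelines the role describes.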
Azure Databricks, Azure Data Factory, Delta Lake, Azure Data Lake Storage (ADLS), Power BI. Solid hands-on experience with Azure Databricks, including PySpark and Spark SQL coding (must have). Very good knowledge of data warehousing, including dimensional modeling, slowly changing dimension patterns, and time travel. Experience …
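The "slowly changing dimension patterns" mentioned above usually mean SCD type 2: close the current dimension row and append a new version rather than overwriting. A stdlib sketch of that bookkeeping (field names are illustrative assumptions, and a real implementation would be a Delta MERGE, not Python loops):

```python
# Type-2 slowly changing dimension update in miniature: when attributes change,
# the open row is end-dated and a new current version is appended. Row shape
# ({"key", "attrs", "start_date", "end_date"}) is hypothetical.
def scd2_apply(dim_rows, key, new_attrs, today):
    out = []
    changed = False
    for row in dim_rows:
        if row["key"] == key and row["end_date"] is None and row["attrs"] != new_attrs:
            row = {**row, "end_date": today}  # close the old version
            changed = True
        out.append(row)
    if changed or not any(r["key"] == key for r in dim_rows):
        out.append({"key": key, "attrs": new_attrs,
                    "start_date": today, "end_date": None})
    return out

dim = [{"key": "C1", "attrs": {"city": "Leeds"},
        "start_date": "2023-01-01", "end_date": None}]
dim = scd2_apply(dim, "C1", {"city": "York"}, "2024-01-01")
print(len(dim))  # two versions: the closed row and the new current row
```

Delta Lake's time travel complements this: even without explicit end dates you can query the table as of an earlier version, but SCD 2 keeps history queryable with plain joins.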