Mart. Utilize Vector Databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement machine learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI/CD environments, Azure Data Lake, Azure Data Factory, and microservices architecture. Experience with Vector Databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with MLOps tools such as …
in STEM subjects. Strong experience in data pipelines and deploying ML models. Preference for experience in retail/marketing. Tech across: Python, AWS, Databricks, PySpark, A/B testing, MLflow, APIs. Experience in feature engineering and third-party data. Apply below.
development. Build ontologies to aid in the creation and management of structured data assets. Build data pipelines, utilising PySpark for large-scale work. Collaborate on software development practices within team settings. Retrieve data and integrate it into data processing pipelines. What we are …
not received on time. Communicating outages to the end users of a data pipeline. What We Value: Comfortable reading and writing code in Python, PySpark and Java. Basic understanding of Spark and interest in learning the basics of tuning Spark jobs. Data pipeline monitoring team members should be able …
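The monitoring duty described above (spotting data that has not arrived on time and telling downstream users) can be sketched in plain Python. The feed names, timestamps, and the four-hour SLA below are hypothetical, chosen only to illustrate the check:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical SLA: each feed must land within this many hours.
SLA_HOURS = 4

def find_late_feeds(last_arrivals, now=None):
    """Return names of feeds whose most recent file landed more than
    SLA_HOURS ago, so on-call can communicate the outage to end users."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=SLA_HOURS)
    return sorted(name for name, ts in last_arrivals.items() if ts < cutoff)

now = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
arrivals = {
    "orders":    datetime(2024, 6, 1, 11, 30, tzinfo=timezone.utc),  # on time
    "inventory": datetime(2024, 6, 1, 6, 0, tzinfo=timezone.utc),    # late
}
print(find_late_feeds(arrivals, now))  # → ['inventory']
```

In practice the arrival timestamps would come from the pipeline's own metadata (e.g. object-store listings or job logs) rather than a hard-coded dict.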
performance, scalability, and reliability. Technical skills required: Redshift; Glue (incl. Glue Studio, Glue Data Quality, Glue DataBrew); Step Functions; Athena; Lambda; Kinesis; Python, Spark, PySpark, SQL. Your contributions as a Data Engineer will directly impact the organization's operations and revenue. In addition to a competitive annual salary, we …
West Midlands, England, United Kingdom Hybrid / WFH Options
Harnham
continual improvement of their performance. REQUIRED SKILLS AND EXPERIENCE: Proficiency in Python, including its associated data and machine learning packages such as numpy, pandas, pyspark, matplotlib, scikit-learn, Keras, TensorFlow, and more. Experience working with object-oriented programming languages. Experience in forecasting/pricing & A/B testing. Leadership expertise in …
of 4 years' commercial experience. Strong proficiency in AWS and relevant technologies. Good communication skills. Preferably experience with some or all of the following: PySpark, Kubernetes, Terraform. Experience in a start-up/scale-up or fast-paced environment is desirable. HOW TO APPLY: Please register your interest by …
not been used in the last 12 months, we're unable to consider your application. The tech landscape is: Azure DevOps, Power BI; Python, PySpark, the pandas library; strong SQL skills needed; strong backend data experience; knowledge of synthetic data generation tools & techniques. This is a: 🏚 Remote ⏰ 6-month initial …
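The "synthetic data generation" skill mentioned above usually means producing realistic but non-real rows for testing or sharing. A minimal stdlib sketch, with invented field names and distributions (real work would use dedicated tooling and statistical fitting):

```python
import random

def synthesize_customers(n, seed=0):
    """Generate n synthetic customer rows with plausible, non-real values."""
    rng = random.Random(seed)  # seeded so the dataset is reproducible
    regions = ["North", "South", "East", "West"]
    rows = []
    for i in range(n):
        rows.append({
            "customer_id": 10_000 + i,  # surrogate key, carries no PII
            "region": rng.choice(regions),
            "age": rng.randint(18, 90),
            # lognormal gives a realistic right-skewed spend distribution
            "monthly_spend": round(rng.lognormvariate(3.5, 0.6), 2),
        })
    return rows

sample = synthesize_customers(3)
print(len(sample), sample[0]["customer_id"])  # → 3 10000
```

Seeding a private `random.Random` instance (rather than the module-level functions) keeps runs reproducible without disturbing other code's randomness.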
engineering leaders/stakeholders in decision making and implementing the models into production. You will need hands-on skills in Python and PySpark, experience working in a cloud environment, and knowledge of development tools like Git or Docker. You can also expect to work with the latest …
experience as a Data Engineer. Strong proficiency in AWS and relevant technologies. Good communication skills. Preferably experience with some or all of the following: PySpark, Kubernetes, Terraform. Experience in a start-up/scale-up or fast-paced environment is desirable. HOW TO APPLY: Please register your interest by …
to the table. Key Responsibilities Engineer and orchestrate data flows & pipelines in a cloud environment using a progressive tech stack e.g. Databricks, Spark, Python, PySpark, Delta Lake, SQL, Logic Apps, Azure Functions, ADLS, Parquet, Neo4J, Flask Ingest and integrate data from a large number of disparate data sources Design … Spark/Databricks or similar Experience working in a cloud environment (Azure, AWS, GCP) Experience in at least one of: Python (or similar), SQL, PySpark Experience in building data pipeline/ETL/ELT solutions Ability and strong desire to research and learn new technologies and languages Interest in more »
engineering concepts and technologies. Experience working in a cloud environment. Experience with modern and traditional data warehousing and data processing technologies and concepts (Hadoop, PySpark, streaming data) …
Job description: Hands-on experience in business intelligence development or data engineering. Solid experience with data modelling, data warehouse design, and data lake concepts and practices. Exposure working in a Microsoft Azure Data Platform environment. Exposure to Azure Data Factory …
Engineer you will be pivotal in designing, developing, and maintaining data architecture and infrastructure. The ideal candidate should have a strong foundation in Python, PySpark, SQL, and ETL processes, along with proven experience in implementing solutions in a cloud environment. Roles & Responsibilities: Experienced Data Engineer with a background in … and mastering, through to management and distribution of large datasets. Mandatory Skills: 6+ years of experience in designing, building, and maintaining data pipelines using Python, PySpark and SQL. Develop and maintain ETL processes to move data from various data sources to our data warehouse on AWS. Collaborate with data scientists …
Azure services such as Data Factory, Event Hubs, Data Lake, Synapse, and Azure SQL Server. Create and optimize data processing workflows in Databricks using PySpark and Spark SQL. Ensure ETL coding standards are met, including self-documenting code and reliable testing. Apply best practice data encryption techniques and standards … design. Extensive experience with Azure data products including Data Factory, Event Hubs, Data Lake, Synapse, and Azure SQL Server. Proficient in developing with Databricks, PySpark, and Spark SQL. Strong understanding of ETL coding standards, including standardized, self-documenting code and reliable testing. Knowledge of data encryption techniques and standards.
Lead Azure Data Engineer | PySpark (Python) & Synapse | Tech for Good/Charity. Rate: £500-650 per day. Duration: 6-12 months. IR35: Outside. Location: Remote with occasional travel to London (once every two months max). Essential skills required: Azure – solid experience required of the Azure Data ecosystem; Python – essential, as PySpark is used heavily (you will be tested on PySpark); Azure Synapse – essential, as it is used heavily; Spark; Azure Data Lake/Databricks/Data Factory. Be happy to act as a lead and mentor to the other permanent Azure Data Engineers. This is the chance … tool experience. Familiar with building catalogs and lineage. This is an urgent contract, so if you are interested apply ASAP. Lead Azure Data Engineer | PySpark (Python), Synapse, Data Lake | Tech for Good/Charity
PySpark/Python Engineer - 6 Month Contract - Bournemouth - Inside IR35. Do you want to contribute and work for a multinational finance company? You will be working with the world's newest and leading technology. This is a 6-month contract with the exciting possibility of extension. You'll play a … edge technology products. Your expertise will ensure the development of secure, stable, and scalable software applications and systems. Are you a seasoned expert in PySpark/Python engineering, ensuring seamless delivery? If so, this is your golden opportunity to make a significant impact and advance your career in a …
hire: FTE. Extensive experience working with large data sets, with hands-on technology skills to design and build robust Big Data solutions using the Spark/PySpark framework and industry-standard frameworks like Databricks, plus Azure DevOps and other tools/technologies on the Azure Cloud Platform. Key Roles and Responsibilities: Understand the requirement … up with a high-performance Data Architecture. Good to have retail functional knowledge. Must have good knowledge and hands-on experience in Python/PySpark, ADF and ADB. Good to have knowledge of ADF CI/CD. Experience in designing, architecting and implementing large-scale data processing, data storage, data … DevOps experience. Supporting business development and ensuring high levels of client satisfaction during delivery. Skills: Must have strong hands-on technical skills in Python/PySpark, Azure Databricks, Spark, ETL; Cloud (Azure preferred). Good to have knowledge of ADF CI/CD.
Lead Azure Data Engineer | PySpark (Python) & Synapse | Tech for Good/Charity. Essential skills required: Azure – solid experience required of the Azure Data ecosystem; Python – essential, as PySpark is used heavily (you will be tested on PySpark); Azure Synapse – essential, as it is used heavily; Spark; Azure Data Lake/Databricks/… people comprising developers, data engineers, QA and DevOps. … experience required. Terraform used for IaC. Azure DevOps used for CI/CD. DP-203 certification preferred but not essential. Lead Azure Data Engineer | PySpark (Python), Synapse, Data Lake | Tech for Good/Charity
get the most from their data. They are looking for someone with core skills in SQL complemented with Azure experience (Azure Data Factory, Databricks, PySpark etc.). This is a very exciting time to join as they shake things up across the industry, so please get in touch asap to … Working with and modelling data warehouses. Skills & Qualifications: Strong technical expertise using SQL Server and Azure Data Factory for ETL. Solid experience with Databricks, PySpark etc. Understanding of Agile methodologies, including use of Git. Experience with Python and Spark. Benefits: £65,000 - £75,000 + bonus. To apply for this …
Key Knowledge/Skills: Detailed working knowledge of ETL/ELT and data warehousing/business intelligence methodologies and best practices, including dealing with big data, cloud technology, and unstructured data, and the approaches each requires. Knowledge of star schema structure …
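The star schema mentioned above pairs a central fact table (foreign keys plus measures) with denormalised dimension tables. A minimal stdlib Python illustration; the table and column names are invented for the example:

```python
# Dimension tables: one row per entity, keyed by a surrogate id.
dim_product = {1: {"name": "Widget", "category": "Hardware"},
               2: {"name": "Gadget", "category": "Electronics"}}
dim_date = {20240601: {"year": 2024, "month": 6}}

# Fact table: one row per sale, holding foreign keys plus measures.
fact_sales = [
    {"product_id": 1, "date_id": 20240601, "qty": 3, "amount": 29.97},
    {"product_id": 2, "date_id": 20240601, "qty": 1, "amount": 49.99},
]

def sales_by_category(facts, products):
    """Join the fact table to a dimension and aggregate a measure,
    as a typical BI query over a star schema would."""
    totals = {}
    for row in facts:
        category = products[row["product_id"]]["category"]
        totals[category] = totals.get(category, 0.0) + row["amount"]
    return totals

print(sales_by_category(fact_sales, dim_product))
# → {'Hardware': 29.97, 'Electronics': 49.99}
```

In a real warehouse the same shape appears as a SQL join between the fact table and its dimensions, grouped by a dimension attribute.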