Vision to convert data requirements into logical models and physical solutions. With data warehousing solutions (e.g. Snowflake) and data lake architecture, Azure Databricks/PySpark & ADF. Retail data model standards - ADRM. Communication skills, organisational and time management skills. To innovate, support change and problem solve. Attitude, with the ability more »
Mart. Utilize Vector Databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in programming languages and tools including Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement machine learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI/… environments, Azure Data Lake, Azure Data Factory, Microservices architecture. Experience with Vector Databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with MLOps tools such more »
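The listing above asks for ML models built with PySpark and scikit-learn. As a rough illustration only, the sketch below assumes an existing Spark table called events and invented column names; it prepares features with PySpark and fits a scikit-learn classifier on the collected result.

```python
# Hypothetical sketch: training a scikit-learn model on features prepared with PySpark.
# The events table and its columns (customer_id, spend, churned) are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

spark = SparkSession.builder.appName("churn-example").getOrCreate()

# Aggregate raw events into per-customer features with PySpark
features = (
    spark.table("events")
    .groupBy("customer_id")
    .agg(F.sum("spend").alias("total_spend"),
         F.count("*").alias("n_events"),
         F.max("churned").alias("churned"))
)

# Collect the (assumed small) feature table to pandas and fit a scikit-learn model
pdf = features.toPandas()
X = pdf[["total_spend", "n_events"]]
y = pdf["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```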
directly with clients. Supporting clients in platform discovery, integration, training, and collaboration on data science projects. Proficiency in technical skills, particularly Python, R, SQL, PySpark, and JavaScript. Assisting users in mastering the platform. Analysing diverse data and ML applications. Providing strategic insights to ensure customer success. Collaborating with customers more »
Data inventory and data familiarisation. Efficient data ingestion and ingestion pipelines. Data cleaning and transformation. Databricks (ideally with Unity Catalog) & Databricks Notebooks. Python and PySpark. CI/CD (ideally with Azure DevOps). Unit testing (PyTest). If interested, please get in touch. Thanks, Will, Xpertise Recruitment more »
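Since this role pairs PySpark with unit testing in PyTest, here is a minimal sketch of what such a test might look like; the add_ingest_date transformation and its columns are hypothetical, not taken from the listing.

```python
# Hypothetical example: unit-testing a small PySpark transformation with PyTest.
# The transformation name and columns are illustrative only.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_ingest_date(df, date="2024-01-01"):
    """Append a constant ingest_date column (assumed transformation under test)."""
    return df.withColumn("ingest_date", F.lit(date))


@pytest.fixture(scope="session")
def spark():
    # Small local session shared across tests
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_ingest_date(spark):
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
    out = add_ingest_date(df, date="2024-01-01")
    assert "ingest_date" in out.columns
    assert out.filter(F.col("ingest_date") == "2024-01-01").count() == 2
```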
in STEM subjects. Strong experience in data pipelines and deploying ML models. Preference for experience in retail/marketing. Tech across: Python, AWS, Databricks, PySpark, A/B testing, MLflow, APIs. Experience in feature engineering and third-party data. Apply below more »
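For the MLflow part of this stack, a minimal experiment-tracking sketch might look like the following; the experiment name, model and metrics are illustrative assumptions rather than anything specified by the role.

```python
# Hypothetical sketch of experiment tracking with MLflow, as implied by the stack above.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("ab-test-uplift-model")  # assumed experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```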
Mid- or Senior-level Data Scientist. Solid knowledge of data engineering principles, including productionisation. Technical experience with some or all of the following: Python, PySpark, scikit-learn, pandas, Azure Data Services, Databricks. If this sounds of interest, please apply. more »
Glasgow, Scotland, United Kingdom Hybrid / WFH Options
Synchro
contract position. If you possess a solid background in software application development, with experience in cloud or microservice architecture, and proficiency in Python or PySpark, we encourage you to apply. Python App Developer Requirements: Proficiency with Python or PySpark. Exposure to cloud technologies such as Airflow, Astronomer, Kubernetes, AWS more »
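As a rough illustration of the Airflow exposure mentioned above, a minimal DAG might look like the sketch below; the DAG id, schedule and task body are placeholders, and Airflow 2.4+ is assumed.

```python
# Hypothetical sketch of a minimal Airflow DAG; names and schedule are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder task body: a real pipeline would call ingestion or PySpark jobs here.
    print("running ingestion step")


with DAG(
    dag_id="example_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ style scheduling argument (assumption)
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```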
incorrect or not received on time. Outages with the end users of a data pipeline. What We Value: reading and writing code in Python, PySpark and Java; understanding of Spark and interest in learning the basics of tuning Spark jobs; pipeline monitoring: team members should be able to use more »
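The "basics of tuning Spark jobs" mentioned above usually start with session configuration and partitioning. The sketch below is a generic illustration with assumed paths and settings, not guidance from the listing itself.

```python
# Hypothetical illustration of basic Spark job tuning; the settings shown are common
# starting points and the S3 path / partition key are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuning-example")
    .config("spark.sql.shuffle.partitions", "200")   # match partition count to data volume
    .config("spark.sql.adaptive.enabled", "true")    # let AQE coalesce small shuffle partitions
    .getOrCreate()
)

df = spark.read.parquet("s3://example-bucket/events/")  # assumed input path
df = df.repartition("event_date")                        # partition by a join/aggregation key
df.cache()                                               # reuse across multiple actions
print(df.count())
```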
advice to analytical users on how they can access and utilise the new datasets. Qualities: Comfortable with Python - ideally experience with Apache Spark and PySpark. Previous data analytics software experience. Able to scope new integrations and translate analytical user needs into technical requirements. UK based – data analytics system can more »
performance, scalability, and reliability. Technical Skills required: Redshift; Glue (inc. Glue Studio, Glue Data Quality, Glue DataBrew); Step Functions; Athena; Lambda; Kinesis; Python, Spark, PySpark, SQL. Your contributions as a Data Engineer will directly impact the organization's operations and revenue. In addition to a competitive annual salary, we more »
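A Glue job tying several of these services together might look roughly like the sketch below; the catalog database, table names and S3 paths are placeholders, and the awsglue imports assume the job runs inside the Glue runtime.

```python
# Hypothetical sketch of an AWS Glue PySpark job writing query-ready data for Athena.
# Database, table and S3 paths are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, clean with Spark, write back as Parquet
dyf = glue_context.create_dynamic_frame.from_catalog(database="raw", table_name="orders")
df = dyf.toDF().dropDuplicates(["order_id"]).filter("amount > 0")

df.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")
job.commit()
```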
of 4 years' commercial experience. Strong proficiency in AWS and relevant technologies. Good communication skills. Preferably experience with some or all of the following: PySpark, Kubernetes, Terraform. Experience in a start-up/scale-up or fast-paced environment is desirable. HOW TO APPLY: Please register your interest by more »
Requirements: 3+ years as a Business Analyst. Proficiency in ERP/CRM solutions and data, including Workday HCM. Strong Azure data skills. Proficiency in PySpark, Java, or Python. Familiarity with Kimball data modelling and SQL. Experience with Power BI and CI/CD practices. Nice to Have: B2B supply more »
on very complex systems. Strong experience with computer vision. Longevity in their previous roles. Experience with remote sensing highly desirable. Stack: Python, PyTorch, Airflow, PySpark (equivalent tools are fine more »
engineering leaders/stakeholders in decision-making and implementing the models into production. You will need to have hands-on skills in Python and PySpark, experience working in a cloud environment and knowledge of development tools like Git or Docker. You can also expect to work with the latest more »
experience as a Data Engineer. Strong proficiency in AWS and relevant technologies. Good communication skills. Preferably experience with some or all of the following: PySpark, Kubernetes, Terraform. Experience in a start-up/scale-up or fast-paced environment is desirable. HOW TO APPLY: Please register your interest by more »
to the table. Key Responsibilities: Engineer and orchestrate data flows & pipelines in a cloud environment using a progressive tech stack, e.g. Databricks, Spark, Python, PySpark, Delta Lake, SQL, Logic Apps, Azure Functions, ADLS, Parquet, Neo4j, Flask. Ingest and integrate data from a large number of disparate data sources. Design … Spark/Databricks or similar. Experience working in a cloud environment (Azure, AWS, GCP). Experience in at least one of: Python (or similar), SQL, PySpark. Experience in building data pipeline/ETL/ELT solutions. Ability and strong desire to research and learn new technologies and languages. Interest in more »
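To make the Databricks/Delta Lake part of that stack concrete, a small ingestion step might look like the following sketch; the ADLS paths, column names and partitioning are invented, and a Delta-enabled Spark session is assumed.

```python
# Hypothetical sketch of an ingestion step landing Parquet source data as a Delta table.
# Paths and column names are placeholders; assumes a Databricks/Delta-enabled Spark session.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-ingest").getOrCreate()

raw = spark.read.parquet("abfss://landing@examplelake.dfs.core.windows.net/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Append into a partitioned Delta table in ADLS
(cleaned.write
        .format("delta")
        .mode("append")
        .partitionBy("order_date")
        .save("abfss://curated@examplelake.dfs.core.windows.net/orders_delta/"))
```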
Python Data Engineer (Software Engineer Programmer Developer Data Engineer Python PySpark Spark Glue Athena Iceberg Airflow Dagster DBT Java Agile AWS GCP Buy Side Asset Manager Investment Management Finance Front Office Trading Financial Services Pandas Numpy Scipy Banking) required by our asset management client in London. You MUST have … the following: Strong experience as a Python Data Engineer/Developer/Software Engineer/Programmer. Excellent Python/PySpark. Excellent data engineering. AWS, GCP or Azure. Agile. The following is DESIRABLE, not essential: Iceberg; Airflow or Dagster; Dremio or DBT; Java; Finance. Role: Python Data Engineer required by our asset management client in London. You will join a team that has built a more »
Engineer you will be pivotal in designing, developing, and maintaining data architecture and infrastructure. The ideal candidate should have a strong foundation in Python, PySpark, SQL, and ETL processes, along with proven experience in implementing solutions in a cloud environment. Roles & Responsibilities: Experienced Data Engineer with a background in … and mastering to management and distribution of large datasets. Mandatory Skills: 6+ years of experience in designing, building, and maintaining data pipelines using Python, PySpark and SQL. Develop and maintain ETL processes to move data from various data sources to our data warehouse on AWS. Collaborate with data scientists more »
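An ETL step of the kind described (moving source data into an AWS data warehouse with PySpark) might look roughly like this; the S3 paths, Redshift connection details and table names are placeholders, and the JDBC driver is assumed to be available on the classpath.

```python
# Hypothetical sketch of an ETL step moving source data into an AWS warehouse (Redshift here).
# Connection details and table names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-to-warehouse").getOrCreate()

# Extract: raw CSV drops in S3
orders = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: basic typing and cleanup
orders = (
    orders.withColumn("amount", F.col("amount").cast("double"))
          .filter(F.col("amount") > 0)
          .dropDuplicates(["order_id"])
)

# Load: write to Redshift over JDBC (assumes the JDBC driver is available)
(orders.write
       .format("jdbc")
       .option("url", "jdbc:redshift://example-cluster:5439/analytics")
       .option("dbtable", "public.orders")
       .option("user", "etl_user")
       .option("password", "***")
       .mode("append")
       .save())
```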
Lead Azure Data Engineer | PySpark (Python) & Synapse | Tech for Good/Charity. Rate: £500-650 per day. Duration: 6-12 months. IR35: Outside. Location: Remote with occasional travel to London (once every two months max). Essential skills required: Azure – solid experience required of the Azure Data ecosystem. Python - ESSENTIAL as PySpark is used heavily; you will be tested on PySpark. Azure Synapse - ESSENTIAL as it is used heavily. Spark. Azure Data Lake/Databricks/Data Factory. Be happy to act as a lead and mentor to the other permanent Azure Data Engineers. This is the chance … tool experience. Familiar with building Catalogs and lineage. This is an urgent contract, so if you are interested apply ASAP. Lead Azure Data Engineer | PySpark (Python), Synapse, Data Lake | Tech for Good/Charity more »
Bristol, England, United Kingdom Hybrid / WFH Options
Adecco
experience in developing data ingestion pipelines with advanced ML elements. Degree in Computer Science, Data Science, Engineering, or related field. Proficient in Python, SQL, PySpark, and Databricks. Experience with modern NLP techniques and tools. Familiarity with Git for version control. Knowledge of the insurance sector is advantageous but not … Expertise in applied machine learning, probability, statistics, and quantitative risk modelling. High proficiency in Python and SQL, with experience in big data technologies (Databricks, PySpark). Experience in the insurance, cyber, or related domain preferred. Strong problem-solving and communication skills. Preferred Skills (Nice to Have): Knowledge of Monte Carlo more »
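For the Monte Carlo side of quantitative risk modelling, a toy frequency/severity simulation might look like the sketch below; the distributions and parameters are invented for illustration, not drawn from the role.

```python
# Hypothetical sketch of a Monte Carlo annual-loss simulation; all parameters are invented.
import numpy as np

rng = np.random.default_rng(seed=42)
n_sims = 100_000

# Frequency/severity model: Poisson number of claims, lognormal claim sizes
n_claims = rng.poisson(lam=3.0, size=n_sims)
annual_loss = np.array([
    rng.lognormal(mean=10.0, sigma=1.2, size=k).sum() for k in n_claims
])

# Summarise the simulated annual loss distribution
print("expected annual loss:", annual_loss.mean())
print("95% VaR:", np.quantile(annual_loss, 0.95))
print("99.5% VaR:", np.quantile(annual_loss, 0.995))
```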
Lead Azure Data Engineer | PySpark (Python) & Synapse | Tech for Good/Charity. Essential skills required: Azure – solid experience required of the Azure Data ecosystem. Python - ESSENTIAL as PySpark is used heavily; you will be tested on PySpark. Azure Synapse - ESSENTIAL as it is used heavily. Spark. Azure Data … people comprising developers, data engineers, QA and DevOps. Essential skills required: Azure – solid experience required of the Azure Data ecosystem. Python - ESSENTIAL as PySpark is used heavily; you will be tested on PySpark. Azure Synapse - ESSENTIAL as it is used heavily. Spark. Azure Data Lake/Databricks/… architecture. Familiar with Synapse CI/CD. Azure Purview or another governance tool experience. Familiar with building Catalogs and lineage. Lead Azure Data Engineer | PySpark (Python), Synapse, Data Lake | Tech for Good/Charity more »
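As an illustration of the kind of PySpark work a Synapse/Data Lake role like this implies, the sketch below reads Parquet from ADLS Gen2 and writes an aggregated output; the paths, column names and donations example are assumptions, and access from a Synapse or Databricks Spark pool is presumed.

```python
# Hypothetical sketch: aggregate raw Parquet data in ADLS Gen2 and write a curated output.
# Storage paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("synapse-example").getOrCreate()

donations = spark.read.parquet(
    "abfss://raw@examplecharitylake.dfs.core.windows.net/donations/"
)

monthly = (
    donations.withColumn("month", F.date_trunc("month", F.col("donated_at")))
             .groupBy("month", "campaign_id")
             .agg(F.sum("amount").alias("total_amount"),
                  F.countDistinct("donor_id").alias("unique_donors"))
)

monthly.write.mode("overwrite").parquet(
    "abfss://curated@examplecharitylake.dfs.core.windows.net/monthly_donations/"
)
```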
up with a high-performance data architecture. Good to have retail functional knowledge. Must have good knowledge and hands-on experience in Python, PySpark, ADF and ADB. Good to have knowledge in ADF CI/CD. Experience in designing, architecting and implementing large-scale data processing, data storage, data … delivery teams. Supporting business development and ensuring high levels of client satisfaction during delivery. Skills: Must have strong hands-on technical skills in Python, PySpark, Azure Databricks, Spark, ETL. Cloud: Azure preferred. Good to have knowledge of ADF CI/CD more »