a plus. DB: Azure SQL Database, Cosmos DB, NoSQL, MongoDB, and HBase are a plus. Methodologies: Agile and DevOps are must-haves. Concepts: ELT/ETL, DWH, APIs (RESTful), Spark APIs, FTP protocols, SSL, SFTP, PKI (Public Key Infrastructure) and Integration testing. If this sounds like you, be sure to get …
Databricks, Spark, Delta Lake, SQL, Python, PySpark, ADLS. Day-to-Day Responsibilities: Extensive experience in designing, developing, and managing end-to-end data pipelines, ETL (Extract, Transform, Load), and ELT (Extract, Load, Transform) solutions. Maintains a proactive approach to staying updated with emerging technologies and a strong desire to continuously …
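As a rough illustration of the end-to-end pipeline work this advert describes, here is a minimal PySpark-to-Delta-Lake ETL sketch. It assumes a Spark environment with Delta Lake configured (as on Databricks); the paths, column names, and table layout are invented for the example.

```python
# Minimal ETL sketch in PySpark: extract raw CSV, transform, load to Delta Lake.
# Paths, column names, and table layout are illustrative, not from any real system.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw landing-zone files (path is hypothetical).
raw = spark.read.option("header", "true").csv("/mnt/landing/orders/")

# Transform: type the columns, drop obvious bad rows, derive a date partition.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: append into a Delta table, partitioned for downstream query performance.
# Requires the delta-spark package to be installed and configured.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("order_date")
      .save("/mnt/curated/orders_delta"))
```

Partitioning by the derived date column is a common choice here because most downstream queries over order data filter on date ranges.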
of new technologies. Ability to maintain delivery momentum. We'd Love If You Also Have These: highly proficient in SQL; experience using Python-based ETL tools such as Databricks; experience using data ingestion tools such as Fivetran or Stitch; experience using business intelligence tools such as Power BI, Looker or Tableau …
Should have strong implementation experience in all the below technology areas (breadth) and deep technical expertise in some of them: Data integration – ETL tools like Talend and Informatica; ingestion mechanisms like Flume & Kafka. Data modelling – dimensional & transactional modelling using RDBMS, NoSQL and Big Data technologies. Experience in …
encryptions, etc.). In-depth knowledge and experience with Azure data storage (SQL Server, Data Lake, Synapse, etc.) & access tools, APIs, cloud connectivity, and ETL processes. Knowledge and some experience of MS Office/MS Office 365 suite, SharePoint Online, Power Apps, GitHub, MS Teams, etc. In-depth knowledge & experience …
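For context on the "access tools, APIs" requirement, below is a hedged sketch of programmatic access to Azure Blob/ADLS-style storage using the azure-storage-blob SDK; the connection string, container, and blob names are placeholders, not real resources.

```python
# Sketch: reading one object from Azure Blob storage with azure-storage-blob.
# The connection string, container, and blob path are hypothetical placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("raw-data")

# Download one blob's bytes; a real ETL job would stream larger objects
# rather than pulling everything into memory.
data = container.download_blob("exports/customers.csv").readall()
print(len(data), "bytes downloaded")
```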
legacy data to our new cloud-based data platform. Design and implement master and transactional data models to ensure data integrity and consistency. Develop ETL pipelines and cloud data storage solutions that meet business requirements and non-functional needs. Collaborate closely with data analysts, visualization engineers, and project managers to …
with relational databases and NoSQL databases. Significant experience and in-depth knowledge of creating data pipelines and associated design principles, standards, data modelling concepts, ETL procedures, and all steps of the data production process. Experience with unit and integration testing, and data quality frameworks such as Deequ, Great Expectations or Delta …
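To make the data quality framework requirement concrete, here is a minimal sketch using the classic pandas-backed Great Expectations API (pre-1.0 versions; the API changed substantially later). The column names and thresholds are invented for the example.

```python
# Minimal data quality sketch with the classic (pre-1.0) pandas-backed
# Great Expectations API; columns and thresholds are illustrative only.
import pandas as pd
import great_expectations as ge

df = ge.from_pandas(pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": [10.0, 25.5, 99.9],
}))

# Declare expectations: no null keys, amounts inside a sane range.
not_null = df.expect_column_values_to_not_be_null("order_id")
in_range = df.expect_column_values_to_be_between("amount", min_value=0, max_value=10_000)

# Each result reports success plus counts of unexpected values; a pipeline
# would typically fail fast when any expectation is broken.
if not (not_null.success and in_range.success):
    raise ValueError("data quality checks failed")
```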
Advanced skills in SQL programming, with the ability to design, optimize, and maintain complex queries and databases. Strong experience in designing, implementing, and managing ETL processes using Azure Data Factory and other relevant tools. Proficiency in building and managing data warehouses using Azure Synapse Analytics and other data warehousing solutions. …
West London, London, United Kingdom Hybrid / WFH Options
Morgan Hunt UK Limited
/SQL. Proven expertise with Microsoft SQL Server Integration Services and BI tools like Power BI, Power Pivot, and SSRS. Experience in data warehousing, ETL, data modelling, and data orchestration. Excellent stakeholder management and communication skills. A degree in Computer Science, IT, or a related STEM subject. Morgan Hunt is …
and its tooling (Snowpark, Snowpipe, etc.). Familiarity with Fivetran, DBT, TensorFlow, PyTorch, and other modern data stack components. Knowledge of data integration and ETL frameworks and tools. Understanding of DataOps, data mining, and data visualization/BI tools. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. …
demonstrate working on large engagements * Experience of AWS tools (e.g. Athena, Redshift, Glue, EMR) * Java, Scala, Python, Spark, SQL * Experience of developing enterprise-grade ETL/ELT data pipelines * Deep understanding of data manipulation/wrangling techniques * Demonstrable knowledge of applying Data Engineering best practices (coding practices to DS, unit …
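As a small illustration of the AWS tooling named above, here is a sketch of running an Athena query from Python with boto3 and polling for completion. The database name, region, query, and S3 output location are hypothetical.

```python
# Sketch: kick off an Athena query with boto3 and poll until it finishes.
# Database, region, query, and S3 output location are hypothetical.
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-2")

resp = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query reaches a terminal state; real pipelines would add
# timeouts and exponential backoff here.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```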
open-source data engineering and scientific Python toolset. Our tech stack includes Airbyte for data ingestion, Prefect for pipeline orchestration, AWS Glue for managed ETL, along with Pandas and PySpark for pipeline logic implementation. We utilize Delta Lake and PostgreSQL for data storage, emphasizing the importance of data integrity and …
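To show how the pieces of that stack fit together, below is a minimal Prefect 2.x flow with Pandas doing the pipeline logic. The extract is a stand-in for an Airbyte-landed dataset, and the load step is a placeholder for a Delta Lake or PostgreSQL write; all names are invented.

```python
# Sketch of the orchestration pattern described above, using Prefect 2.x
# decorators with Pandas; the data and transformation are made up.
import pandas as pd
from prefect import flow, task

@task(retries=2)
def extract() -> pd.DataFrame:
    # Stand-in for an Airbyte-landed extract; here just an inline frame.
    return pd.DataFrame({"user_id": [1, 2, 2], "spend": [5.0, 7.5, 2.5]})

@task
def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Aggregate spend per user; Prefect tracks this step's state and retries.
    return df.groupby("user_id", as_index=False)["spend"].sum()

@task
def load(df: pd.DataFrame) -> None:
    # A real pipeline would write to Delta Lake or PostgreSQL here.
    print(df.to_string(index=False))

@flow
def daily_spend_pipeline():
    load(transform(extract()))

if __name__ == "__main__":
    daily_spend_pipeline()
```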
with programming languages such as Python or R is a plus. Knowledge of statistical analysis techniques and methodologies. Familiarity with data warehousing concepts and ETL processes. …
the different data architecture patterns: Data Fabric, Data Mesh, Data Warehouse, Data Marts, data modeling, ontologies & knowledge graphs, microservices • You have experience in implementing ETL data flows and data pipelines, and know one or more of the following tools: Informatica PowerCenter, SAS Data Integration Studio, Microsoft SSIS, Ab Initio, etc. …
London, England, United Kingdom Hybrid / WFH Options
Aventum Group
responsible for accessing, validating, and querying data from various repositories using available tools. Build and maintain data integration processes using SQL Services and other ETL/ELT processes and scripting tools, as well as ongoing requests and projects related to the data warehouse, MI, or fast-moving financial data. Designing … Architecting, building, testing, and maintaining the data platform. Develop and support a wide range of data transformations and migrations for the whole business. Construct custom ETL processes: design and implement data pipelines, data marts and schemas, access versatile data sources and apply data quality measures. Monitoring the complete process and applying … ML is a plus. Experience with Azure SQL Database, Cosmos DB, NoSQL, MongoDB. Experience with Agile, DevOps methodologies. Awareness and knowledge of ELT/ETL, DWH, APIs (RESTful), Spark APIs, FTP protocols, SSL, SFTP, PKI (Public Key Infrastructure) and Integration testing. Skills and Abilities: Knowledge of Python, SQL, SSIS, and …
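A "custom ETL process" of the kind this advert describes often reduces to staging rows into a mart table with an upsert. Here is a minimal sketch using stdlib sqlite3 so it runs anywhere; in the role above it would target SQL Server, and the table and column names are invented.

```python
# Minimal custom-ETL sketch: upserting staged rows into a data-mart table.
# Uses stdlib sqlite3 for portability; table and columns are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policy_mart (policy_id INTEGER PRIMARY KEY, premium REAL)")

staged = [(101, 1200.0), (102, 850.0), (101, 1250.0)]  # last write wins

# The upsert keeps the mart consistent when the same key arrives twice,
# e.g. when a source system re-sends a corrected record.
conn.executemany(
    """
    INSERT INTO policy_mart (policy_id, premium) VALUES (?, ?)
    ON CONFLICT(policy_id) DO UPDATE SET premium = excluded.premium
    """,
    staged,
)
conn.commit()
print(conn.execute("SELECT * FROM policy_mart ORDER BY policy_id").fetchall())
# [(101, 1250.0), (102, 850.0)]
```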
high-performance data systems that are foundational to driving business growth and success. Key responsibilities: Implementing scalable data architectures and systems. Developing and maintaining ETL (Extract, Transform, Load) pipelines. Managing data storage, backup, and recovery mechanisms. Writing complex SQL queries to extract data for analysis. Developing and implementing data security …
requirements into technical requirements & development experience using data discovery tools such as MS Power BI, QlikView, Tableau. Ability to model and transform data, build ETL solutions and present data in a useful business context. Experience developing data warehouses & data marts using the Kimball methodology. Experience or awareness of Big Data …
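For readers unfamiliar with the Kimball methodology mentioned above, here is a hedged sketch of a tiny star schema (one fact table, one dimension) in stdlib sqlite3; in practice this would live in a warehouse platform, and every name here is invented.

```python
# Tiny Kimball-style star schema: one dimension, one fact, one star-join.
# All table and column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20240115
    full_date TEXT, year INTEGER, month INTEGER
);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product TEXT,
    amount REAL
);
""")
conn.execute("INSERT INTO dim_date VALUES (20240115, '2024-01-15', 2024, 1)")
conn.execute("INSERT INTO fact_sales VALUES (20240115, 'widget', 99.0)")

# Typical star-join: slice the additive fact by dimension attributes.
print(conn.execute("""
    SELECT d.year, d.month, SUM(f.amount)
    FROM fact_sales f JOIN dim_date d USING (date_key)
    GROUP BY d.year, d.month
""").fetchall())
```

The design point is that facts stay narrow and additive while descriptive attributes live on the dimension, so new reporting cuts need no fact-table changes.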
Technologies such as Docker and orchestration tools like Kubernetes for containerized deployments. Workflow management tools such as Airflow for orchestrating complex data pipelines and ETL processes. Certifications in Azure cloud services and data engineering technologies, demonstrating expertise and proficiency in the Azure ecosystem. Rewards & Benefits: TCS is consistently voted a …
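To illustrate the Airflow orchestration mentioned above, here is a minimal daily ETL DAG using the TaskFlow API (assuming Airflow 2.4+, where the parameter is `schedule` rather than `schedule_interval`); the DAG id, schedule, and task bodies are illustrative only.

```python
# Sketch of an Airflow DAG for a simple daily ETL, using the TaskFlow API.
# Assumes Airflow 2.4+; DAG id, schedule, and task bodies are made up.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl():
    @task
    def extract() -> list[dict]:
        return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

    @task
    def transform(rows: list[dict]) -> int:
        return sum(r["value"] for r in rows)

    @task
    def load(total: int) -> None:
        print(f"loaded total={total}")

    # Chaining task calls lets Airflow infer the dependency graph.
    load(transform(extract()))

example_etl()
```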
Databricks, Power BI. Strong in delivering solutions at scale, on time and within budget. Strong background in data warehousing, data architecture, data modelling, integration, ETL/ELT processes. Please apply as directed.
pipelines, security and networking. Expertise with data warehousing, data lakes and data lake houses. Experience with master data management software and technologies. Experience with ETL technologies including SQL Server, SSIS, Azure Data Factory. Working knowledge of agile development, CI/CD, test and data automation. Working knowledge or experience of …
and Business teams to enhance our customer-centric culture. Ideal Candidate Profile: Proven track record in big data engineering with a solid understanding of ETL pipelines and data system projections. Proficiency in Python, PySpark, SQL, and familiarity with data science tools such as R and with ML/AI techniques. Strong foundation in database management …
data from various sources into our data warehouse and data lakes. Uphold data integrity, quality, and consistency throughout the entire process. Create and enhance ETL/ELT workflows capable of handling substantial data volumes. Collaborate with data analysts, data scientists, and other stakeholders to comprehend their data needs. Develop and …
support organizational growth. Essential Requirements: Minimum 5 years of experience in a similar role. Proven track record in designing and building data infrastructure and ETL pipelines. Proficiency in Azure Platform, including Data Lake, Data Factory, Synapse, Logic Apps, and Function Apps. SQL Server, including stored procedures, T-SQL, or similar …
technology, Computer Science or related, and/or proven continual relevant learning. Knowledge of AI and machine learning processes. Understanding of geospatial data and ETL (Desirable). Project management experience, Agile (Desirable). Interested? As well as a starting salary circa £60,000, the role offers a fantastic range of benefits, flexible …
guide and motivate junior team members. Deep understanding of the Microsoft technology stack, specifically Power BI, Databricks & Azure Cloud. Proficiency in SQL, strong understanding of ETL and ELT processes. Strong communication skills, ability to translate complex messages into concise, easy-to-understand messages. Strong storytelling skills, ability to influence decision …