Experience with Microsoft Fabric, including Lakehouse (Delta format), OneLake, Pipelines & Dataflows Gen2, Notebooks (PySpark), and Power BI & semantic models. A solid understanding of data integration patterns, ETL/ELT, and modern data architectures. Familiarity with CI/CD practices in a data engineering context. Excellent SQL and Spark (PySpark) skills.
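To make the Fabric/PySpark side of this stack concrete, here is a minimal sketch of a notebook cell that reads a Lakehouse Delta table and writes an aggregated result back. The table names and columns are hypothetical; in a Fabric notebook the `spark` session is provided for you, and `getOrCreate()` simply picks it up.

```python
from pyspark.sql import SparkSession, functions as F

# In a Fabric notebook `spark` already exists; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Hypothetical table registered in the notebook's default Lakehouse.
orders = spark.read.table("sales_lakehouse.orders")

# Typical ELT step: roll raw orders up into a daily revenue summary.
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Persist as a Delta table so a Power BI semantic model can consume it.
(daily_revenue.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("sales_lakehouse.daily_revenue"))
```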
Experience with Big Data technologies: Data Lakes, Data Warehouses, and Lakehouses. Proficiency in Databricks and Python, including concurrency and error handling. Experience with ETL tools and data visualization tools. Preferred qualifications, capabilities, and skills: experience with AWS services such as Lambda, infrastructure-as-code with Terraform, and knowledge of Java and front-end development.
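"Concurrency and error handling" in an ETL context usually means fanning extract work out across workers while isolating failures per task. A minimal standard-library sketch, where `fetch_table` and the table names are hypothetical stand-ins for an I/O-bound extract step:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_table(name: str) -> int:
    # Hypothetical I/O-bound task, e.g. an extract hitting a source system.
    if name == "bad_table":
        raise RuntimeError(f"source rejected {name}")
    return len(name)  # stand-in for "rows copied"

tables = ["orders", "customers", "bad_table"]

results, failures = {}, {}
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(fetch_table, t): t for t in tables}
    for fut in as_completed(futures):
        table = futures[fut]
        try:
            results[table] = fut.result()  # re-raises any worker exception
        except Exception as exc:           # one bad table doesn't sink the run
            failures[table] = exc

print("ok:", results)
print("failed:", {t: str(e) for t, e in failures.items()})
```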
Troubleshoot data warehouse and integration issues and implement enhancements for improved performance and reliability. Contribute to the design, development, and maintenance of ETL pipelines and reporting systems. Requirements: Microsoft SQL Server databases deployed on-premises and in the cloud, with Always On replication to a secondary DR host.
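For the Always On piece, a common first troubleshooting step is to query the availability-group DMVs for synchronization health. A sketch using pyodbc; the driver name, server, and authentication details are assumptions to adapt to your environment:

```python
import pyodbc

# Hypothetical connection details; adjust driver/server/auth for your estate.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql-primary.example.com;"
    "DATABASE=master;"
    "Trusted_Connection=yes;"
    "TrustServerCertificate=yes;"
)

# DMV reporting per-database replica state for Always On availability groups.
sql = """
SELECT DB_NAME(drs.database_id)        AS database_name,
       drs.synchronization_state_desc,
       drs.synchronization_health_desc
FROM sys.dm_hadr_database_replica_states AS drs
WHERE drs.is_local = 1;
"""

for row in conn.execute(sql):
    print(row.database_name, row.synchronization_state_desc,
          row.synchronization_health_desc)
```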
Ensure data is properly understood, organized, and governed. Knowledge of data management technologies such as relational and columnar databases, and/or data integration (ETL) or API development. Knowledge of data formats such as JSON and XML, and binary formats such as Avro or Google Protocol Buffers.
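To make the text-versus-binary format distinction concrete, the sketch below serializes the same record as JSON and as an Avro container file and round-trips it. It assumes the third-party fastavro package; the schema and record are hypothetical:

```python
import io
import json
from fastavro import parse_schema, writer, reader  # pip install fastavro

schema = parse_schema({
    "type": "record",
    "name": "Order",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "customer", "type": "string"},
        {"name": "amount", "type": "double"},
    ],
})

record = {"id": 42, "customer": "acme", "amount": 19.99}

# Text format: human-readable and self-describing, but verbose.
json_bytes = json.dumps(record).encode("utf-8")

# Binary format: compact per record; the container file embeds the schema.
buf = io.BytesIO()
writer(buf, schema, [record])
avro_bytes = buf.getvalue()

print(f"JSON: {len(json_bytes)} bytes; Avro container: {len(avro_bytes)} bytes")

# Round-trip: the schema travels with the Avro file, so no side channel needed.
buf.seek(0)
print(next(reader(buf)))
```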