Glasgow, Renfrewshire, United Kingdom Hybrid / WFH Options
Hymans Robertson LLP
beneficial skills (not essential): Experience with Power BI, Python, C#, Angular, Power Automate, or data science practices. Knowledge of optimizing data pipelines, architectures, and ETL/ELT processes. Experience provisioning APIs for data consumption. A detailed list of requirements is available upon request. What we offer: A competitive salary and
Glasgow, Scotland, United Kingdom Hybrid / WFH Options
In Technology Group
in Computer Science, Engineering, or a related field. Proven experience as a Data Engineer or in a similar role. Proficiency in SQL, Python, and ETL tools. Experience with cloud platforms (e.g., AWS, Azure, Google Cloud). Strong understanding of data warehousing concepts. Excellent problem-solving skills and attention to detail.
design and functionality problems independently with little to no oversight. Experience performing data analytics on AWS platforms. Experience writing efficient SQL and implementing complex ETL transformations on big data platforms. Experience with Big Data technologies (Spark, Impala, Hive, Redshift, Kafka, etc.). Experience in data quality testing; adept at writing
in a data environment. Understand data structures and data model (dimensional & relational) concepts such as Star schema and Fact & Dimension tables, to design and develop ETL patterns/mechanisms to ingest, analyse, validate, normalise and cleanse data. Liaise with data/business SMEs to understand/confirm data requirements and obtain
Understand and produce 'Source to Target mapping' (STTM) documents, containing data structures, business & data
company's CRM, so experience here would be advantageous.
Desired Skills
• Designing and building robust data pipelines for scalable data processing
• Developing and maintaining ETL workflows to support analytics and reporting
• Hands-on with the Azure ecosystem: Data Factory, Databricks, and Synapse Analytics
• Experience working with data warehousing solutions and architecture
experience writing technical documentation. Comfortable working with Agile, TOGAF, or similar frameworks.
Desirable:
• Experience with Python and data libraries (Pandas, Scikit-learn)
• Knowledge of ETL tools (Airflow, Talend, NiFi)
• Familiarity with analytics platforms (SAS, Posit)
• Prior work in high-performance or large-scale data environments
Why Join? This is more
Experience with PySpark, including analysis, pipeline building, tuning, and feature engineering. Knowledge of SQL and NoSQL databases, including joins, aggregations, and tuning. Experience with ETL processes and real-time data processing. Experience developing, debugging, and maintaining code in large corporate environments using modern programming languages and database querying languages like
proficient applied experience. Hands-on practical experience in system design, application development, testing, and operational stability. Experience in Java, Spring Boot, Spring Data, JDBC, ETL, OpenAPI Doc, WSDL, JUnit, Kubernetes, Splunk, Dynatrace. Advanced in building REST APIs with Java and Oracle. Experience in developing UI applications using React. Experience in developing
Experience with cloud technologies (AWS or GCP), via hands-on work or certification. Practical experience with data lake or data warehouse technologies (e.g., Spark, ETL, Databricks). Preferred Qualifications, Capabilities, and Skills: Experience with metadata processes and technology, along with a background in data management and data quality. Hands-on
performance and scalability. Data Science & Engineering: Handle large datasets and implement data pipelines, utilizing Python, SQL, and Azure data services for preprocessing and transformation. ETL & Real-Time Processing: Implement ETL pipelines and real-time data processing to support AI solutions. AI Solution Integration: Enhance enterprise applications with AI services like
objects and write code in and around the data stack.
• Interact with upstream and downstream systems and APIs for data ingestion and egestion.
• Build ETL pipelines and a data framework for monitoring the pipeline.
• Ensure performance, quality, and consistency in the data and processes.
• Champion good data design practices and promote …
stack, building data pipelines, data warehouses/lakes, and performance optimization. You will have strong experience in:
• Microsoft Azure data stack and SQL Server, ETL: SSIS and Data Factory.
• Stack design and performance optimization.
• Data programming languages: SQL, Spark, C#, Python.
The right candidate will have good attention to detail
Glasgow, Central Scotland, United Kingdom Hybrid / WFH Options
Net Talent
managing databases, delivering high-impact reporting solutions, and powering data-driven decision-making across the business.
What You’ll Be Doing 🚀
• Develop and maintain ETL processes using SSIS and Power Platform flows, transforming data from internal systems into a centralized SQL Server data warehouse.
• Support the company’s BI …
Experience:
• Strong SQL development skills
• Solid understanding of database and data warehousing principles
• Experience with Power BI or similar data visualization tools
• Familiarity with ETL design and data integration best practices
• Excellent communicator who can engage technical and non-technical stakeholders
Bonus (Desirable) Skills:
• DAX (Data Analysis Expressions)
• PowerShell or