Financial Services domain knowledge. Do you tick these 4 boxes? Must-haves: Python coding experience (pure Python), JSON format, data modelling, data warehousing and ETL frameworks, SQL. Our client has 2,500 lines of Python code for you to maintain and optimize. You'll need hands-on coding for this …
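For a sense of what maintaining a pure-Python, JSON-driven ETL codebase like this involves, here is a minimal sketch using only the standard library; the file name, table name, and record fields are illustrative assumptions, not details from the posting:

```python
import json
import sqlite3

def extract(path):
    # Stream newline-delimited JSON records from a (hypothetical) source file.
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

def transform(record):
    # Normalise one record; field names here are invented for illustration.
    return (record["trade_id"], record["symbol"].upper(), float(record["amount"]))

def load(rows, db_path="trades.db"):
    # Load into SQLite, staying within the standard library.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS trades (trade_id TEXT, symbol TEXT, amount REAL)")
    con.executemany("INSERT INTO trades VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(r) for r in extract("trades.jsonl"))
```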
financial services. Strong leadership skills with a track record of successfully managing and developing high-performing teams. In-depth knowledge of data engineering concepts, ETL processes, and data warehouse architectures. Expertise in working with big data technologies and cloud platforms (preferably AWS or Azure). Familiarity with the asset management industry …
Science, Engineering, or a related field. Strong Python development skills; experience with Pandas is desirable. 3+ years of experience in data engineering. Experienced in building ETL data pipelines. Relational database experience with PostgreSQL. Understanding of the tech within our stack: AWS/Apache Beam/Kafka. Experience with Object-Oriented Programming …
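A rough illustration of the Pandas-to-PostgreSQL pipeline work this role describes; the connection string, file path, and column names are hypothetical:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string; real credentials would come from config/secrets.
engine = create_engine("postgresql://user:pass@localhost:5432/analytics")

def run_pipeline(csv_path: str) -> None:
    # Extract: read a raw CSV export (path and columns are illustrative).
    df = pd.read_csv(csv_path, parse_dates=["created_at"])
    # Transform: deduplicate and derive a daily revenue aggregate.
    df = df.drop_duplicates(subset="order_id")
    daily = df.groupby(df["created_at"].dt.date)["amount"].sum().reset_index()
    # Load: append into a reporting table in PostgreSQL.
    daily.to_sql("daily_revenue", engine, if_exists="append", index=False)
```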
processes. Desired: experience with Infrastructure as Code development and tools, e.g. Terraform. Desired: experience with MLOps deployment and maintenance. Desired: data engineering technologies, e.g. ETL, Spark, Dataflow, BigQuery. Please note: even if you don't have exactly the background indicated, do contact us now if this type of job is …
with plans to also add GCP and on-prem. They are adding extensive usage of distributed compute on Spark, starting with their more complex ETL and advanced analytics functions, e.g. time series processing. They soon plan to integrate other approaches, including native distributed PyTorch/TensorFlow and Spark-based distributor libraries …
Science or related fields. Proven experience with Insurance Broking Systems data migration (ideally Acturis). Proficiency in SQL and data manipulation languages. Experience with ETL tools. Strong analytical and critical thinking skills, with a focus on practical solutions. Excellent communication and people skills for conveying data concepts to diverse audiences.
+ benefits. Purpose: Design, build, and maintain scalable data architectures, including pipelines and cloud-based data warehouses. Tech: Python (NumPy, Pandas), SQL, ETL, Cloud (AWS, Azure or GCP), Snowflake, Airflow, BigQuery, Power BI/Tableau. Industry: Fintech, maritime trading. Immersum are supporting the growth of a specialist consultancy who solely specialise … Pandas and NumPy. SQL: Advanced skills in complex querying and data manipulation. Data Modelling: Proven ability in designing efficient models for scalability and performance. ETL Processes: Deep expertise in developing and optimising ETL pipelines. Version Control: Experience with Git for code collaboration and change tracking. Data Pipeline Tools: Proficiency with …
and optimize automation processes, effective monitoring, and infrastructure-as-code using Terraform. Collaborate closely with our engineering teams to support the orchestration of our ETL pipelines using Airflow, and manage our tech stack including Python, Next.js, Airflow, PostgreSQL, MongoDB, Kafka and Apache Iceberg. Optimize infrastructure costs and develop strategies for …
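A minimal sketch of the kind of Airflow orchestration mentioned here, assuming a recent Airflow 2.x install; the DAG id and task bodies are placeholders rather than the client's actual pipeline:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")  # placeholder for the real extract logic

def load():
    print("write to PostgreSQL/Iceberg")  # placeholder for the real load logic

with DAG(
    dag_id="example_etl",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load            # extract runs before load
```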
requirements and be able to analyse complex datasets to identify trends and insights. The Data Scientist will also assist in the development of their ETL processes to ensure data quality. Essential Requirements: Strong Data Scientist experience (eCommerce sector experience preferred); AWS; Databricks; Python; educated to degree level or equivalent experience …
of data governance strategies and maintain high standards for data security and quality across projects. Qualifications and Skills: Proven experience in data engineering, particularly with ETL processes, data pipeline construction, and workflow orchestration. Familiarity with distributed computing techniques, such as parallel processing and batch data management. Strong foundation in data system optimization, information …
storage systems on Google Cloud Platform (GCP). You will be working with technologies such as Apache Airflow, BigQuery, Python, and SQL to transform and load large data sets, ensuring high data quality and accessibility for business intelligence and analytics purposes. Our business is growing quickly and with that so … Key Responsibilities: Design, construct, install, test, and maintain data pipelines. Ensure systems meet business requirements and industry practices for data integrity and quality. Manage ETL and ELT pipelines across many data sources (CSV/parquet files, API endpoints, etc.). Design and build data models for the business end users. Write …
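To make the "many data sources" point concrete, here is a hedged sketch of unifying CSV, parquet, and API-endpoint sources and loading the result into BigQuery; the paths, URL, and table id are invented, and google-cloud-bigquery plus pyarrow are assumed installed:

```python
import pandas as pd
import requests
from google.cloud import bigquery

def extract_sources() -> pd.DataFrame:
    # Three illustrative sources, unified into one DataFrame.
    csv_df = pd.read_csv("exports/orders.csv")                # flat-file source
    parquet_df = pd.read_parquet("exports/orders.parquet")    # columnar source
    api_rows = requests.get("https://api.example.com/orders", timeout=30).json()
    api_df = pd.DataFrame(api_rows)                           # API endpoint source
    return pd.concat([csv_df, parquet_df, api_df], ignore_index=True)

def load_to_bigquery(df: pd.DataFrame, table: str = "project.dataset.orders") -> None:
    client = bigquery.Client()
    # Schema is inferred from the DataFrame for simple cases; wait for the job.
    client.load_table_from_dataframe(df, table).result()
```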
Profile: The ideal candidate will have a background in Data Engineering or Data Warehousing (Data Engineer/DW Developer) with robust DW development and ETL/ELT data pipeline experience, along with expert knowledge of SQL. Key Skills - Mandatory: Strong data modeling and SQL/database design skills. Proficiency in … ETL/ELT processes. Extensive experience with Python. Excellent numerical, analytical, and statistical skills. Ability to visualize and present data and insights to stakeholders; experience with BI tools such as Looker is highly advantageous. Experience working with cloud data warehouses, ideally with AWS/Redshift, Azure, GCP, or Snowflake. Experience … results. Additional Skills: Experience with JIRA/Asana. Proficiency in requirements gathering and documentation. Experience with multiple BI tools. Knowledge of HTML. Familiarity with ETL/ELT tools such as Fivetran and Matillion …
focus is on data analytics and migrating from Tableau to Power BI. Experience/skills required: Proficient in data analysis, data modelling and designing ETL processes. Familiar with Data Mesh architecture and Data Products. Highly skilled in T-SQL with experience in performance tuning and dealing with large data volumes … differences in the approach for building self-service BI (business- or IT-managed) vs corporate BI solutions, including the choice of data sources and ETL approach. Familiar with engineering processes for developing APIs. Understanding the principles of building solutions using Snowflake, open-source frameworks, multi-cloud infrastructure. This is a …
and performant schemas. Experience with various database technologies like relational databases, NoSQL, and cloud-based data storage solutions. Understanding of data warehousing concepts and ETL (Extract, Transform, Load) processes. Familiarity with tools like Tableau for data visualization. Good to have: knowledge of Erwin and MagicDraw. Rewards …
leading hedge fund. The ideal candidate should have a proven ability to design and implement data solutions, with expertise in extracting, transforming, and loading (ETL) data from various sources, including websites, PDFs, and real-time data streams. Role: Python Data Engineer. Client: Leading hedge fund. Type of role: contract … Science, Data Science, Engineering, or a related field. Minimum of 7+ years of experience in a data engineering role, with a strong focus on ETL processes, data warehousing, and real-time data processing. Proficiency in Python programming; knowledge of C++ is an advantage. Expertise in SQL and experience with SQL …
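As a hedged sketch of extracting data from the website and PDF sources this role mentions, assuming requests, BeautifulSoup (bs4), and pdfplumber are available; the URL and CSS selector are illustrative:

```python
import requests
import pdfplumber
from bs4 import BeautifulSoup

def extract_from_web(url: str) -> list[str]:
    # Scrape table cells from a (hypothetical) reference-data page.
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [td.get_text(strip=True) for td in soup.select("table td")]

def extract_from_pdf(path: str) -> str:
    # Pull raw text out of a PDF report, page by page.
    with pdfplumber.open(path) as pdf:
        return "\n".join(page.extract_text() or "" for page in pdf.pages)
```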
and reporting on IWS scheduling objects. Analysis and solution design experience; an example from previous work would be an advantage. Knowledge and experience of ETL concepts (specific tools not an issue). Good programming skills in JavaScript & Python; other languages would be an advantage. SQL/XQuery experience; specific DB not an issue …
knowledge of Azure cloud technologies and Terraform IaC. Experience with API design, RESTful APIs, and other integration protocols. Knowledge of data integration patterns and ETL processes is important for effectively moving and transforming data between different systems. Senior Backend Engineer - 1/2 days a month - Up to …
building a zero-downtime, low-latency infrastructure. Shaping the technical landscape with modern web technologies. Scaling and optimizing some of the most performant ETL pipelines on the planet. Conducting R&D for functional programming within the firm. Building out a DevOps environment from scratch in a software engineering capacity …
to experience level, and will find good fits for the best people. Strong experience with Python or Rust. Experience with Airflow. Exposure to building ETL pipelines is a huge plus. A desire to learn Rust. Solid SQL knowledge. Fantastic education. Experience working in mission-critical environments where speed, reliability and …
like Databricks/Spark, Airflow, Snowflake, etc. Expertise designing and developing with distributed data processing platforms like Databricks/Spark. Experience using ELT/ETL tools such as dbt, Fivetran, etc. Understanding of Agile delivery best practice. Good knowledge of the relevant technologies, e.g. SQL, Oracle, PostgreSQL, Python, ETL pipelines …
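One common shape for the Databricks/Spark transformation work listed above, sketched in PySpark; the storage paths and column names are assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw events from a (hypothetical) parquet landing zone.
events = spark.read.parquet("s3://landing/events/")

# Transform: deduplicate, then aggregate daily event counts per user.
daily = (
    events.dropDuplicates(["event_id"])
          .groupBy(F.to_date("event_ts").alias("event_date"), "user_id")
          .agg(F.count("*").alias("events"))
)

# Load: write a curated table partitioned by date.
daily.write.mode("overwrite").partitionBy("event_date").parquet("s3://curated/daily_events/")
```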
to understand their data needs, offer technical solutions, and effectively integrate data engineering with quantitative strategies. Design, develop, and optimize robust data pipelines, facilitating ETL processes that support quantitative research, analysis, alpha forecasting, and execution. Implement stringent data quality assurance measures to ensure the accuracy, reliability, and consistency of data … with experience in big data technologies and strong knowledge of SQL and relational databases. Demonstrated expertise in designing and optimizing data pipelines, data modeling, ETL processes, data warehousing, and data governance. Proven ability to work effectively with quantitative researchers and other stakeholders, translating their requirements into technical solutions. Buy/… and dynamic environment, adaptable to changing priorities and emerging technologies. Excellent verbal and written communication skills. Additional Responsibilities: Work with data ingestion pipelines and ETL processes, particularly involving high-frequency and mid-frequency market tick data. Utilize the PREFICT DAG-based orchestration tool to manage a large number of concurrent …
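For the high-frequency tick data mentioned in the additional responsibilities, a common first step is resampling ticks into time bars; a minimal Pandas sketch follows (column names are illustrative, and this is unrelated to the PREFICT tool itself):

```python
import pandas as pd

def ticks_to_bars(ticks: pd.DataFrame, freq: str = "1min") -> pd.DataFrame:
    # Expects a DataFrame indexed by timestamp with 'price' and 'size' columns
    # (names invented for illustration).
    bars = ticks["price"].resample(freq).ohlc()          # open/high/low/close
    bars["volume"] = ticks["size"].resample(freq).sum()  # traded volume per bar
    return bars.dropna(subset=["open"])                  # drop empty intervals
```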
PyTorch/High Performance Computing/HPC/GPU/TPU/DeepSpeed/AI/OpenAI/Distributed Systems/Big Data/ETL Pipeline/CUDA. By applying to this role, you understand that we may collect your personal data and store and process it on our systems.
suite within Power BI, focusing on the end-to-end process of data to provide strategic insight. Please apply if you have experience in the following: Extract, transform, and load data from various sources into Power BI. Develop and maintain comprehensive Power BI dashboards to support business needs. Create interactive visualisations and reports that …
power of their data, ensuring robust and scalable solutions that meet business needs. You will work with a talented team, utilizing your skills in ETL processes, AWS, and Python to deliver high-quality data engineering projects. Key Responsibilities: Design and develop ETL pipelines to support data integration and transformation. Implement … or Master's degree in Computer Science, Engineering, or a related field. 5+ years of experience in data engineering, with a strong background in ETL processes. Proficiency in AWS services, including but not limited to S3, Redshift, Lambda, and Glue. Strong programming skills in Python. Experience with data modeling, database …
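A small sketch of the S3/Lambda side of such a stack: an S3-triggered handler that cleans a raw CSV and writes it to a curated prefix for a later Redshift COPY or Glue job. The bucket layout, column names, and the "settled" filter are invented for illustration:

```python
import csv
import io
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Triggered by an S3 put event; read the newly arrived raw object.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    # Transform: keep only settled rows from the raw CSV.
    rows = [r for r in csv.DictReader(io.StringIO(body)) if r.get("status") == "settled"]
    if not rows:
        return

    # Load: write the cleaned file under a curated prefix.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    s3.put_object(Bucket=bucket, Key=f"curated/{key}", Body=out.getvalue())
```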