Data Engineer
💻 Junior Big Data Developer (Python & SQL Focus) 📊
!!! URGENT HIRING !!!
We're looking for an enthusiastic and detail-oriented Junior Big Data Developer to join our data engineering team. This role is ideal for an early-career professional with foundational knowledge of data processing, strong proficiency in Python, and expert skills in SQL. You'll focus on building, testing, and maintaining data pipelines and ensuring data quality across our scalable Big Data platforms.
Key Responsibilities
- Data Pipeline Development: Assist in the design, construction, and maintenance of robust ETL/ELT pipelines to integrate data from various sources into our data warehouse or data lake.
- Data Transformation with Python: Write, optimize, and maintain production-grade Python scripts to clean, transform, aggregate, and process large volumes of data (see the sketch after this list).
- Database Interaction (SQL): Develop complex, high-performance SQL queries (DDL/DML) for data extraction, manipulation, and validation within relational and data warehousing environments.
- Quality Assurance: Implement data quality checks and monitoring across pipelines, identifying discrepancies and ensuring the accuracy and reliability of data.
- Collaboration: Work closely with Data Scientists, Data Analysts, and other Engineers to understand data requirements and translate business needs into technical data solutions.
- Tooling & Automation: Utilize version control tools like Git and contribute to the automation of data workflows and recurring processes.
- Documentation: Create and maintain technical documentation for data mappings, processes, and pipelines.
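By way of illustration, the sketch below shows the flavor of the transformation and quality-assurance work described above. It is a minimal example, not our actual codebase: the file path, the column names (order_id, amount, region, order_date), and the negative-amount rule are all hypothetical.

```python
import pandas as pd

def transform_orders(raw_path: str) -> pd.DataFrame:
    """Clean, quality-check, and aggregate a hypothetical orders extract."""
    df = pd.read_csv(raw_path, parse_dates=["order_date"])

    # Cleaning: drop exact duplicates and rows missing required fields.
    df = df.drop_duplicates().dropna(subset=["order_id", "amount", "region"])

    # Data quality check: fail fast on impossible values rather than
    # letting bad records flow downstream.
    if (df["amount"] < 0).any():
        raise ValueError("Quality check failed: negative order amounts")

    # Aggregation: daily revenue per region.
    return (
        df.assign(order_day=df["order_date"].dt.date)
          .groupby(["order_day", "region"], as_index=False)["amount"]
          .sum()
          .rename(columns={"amount": "daily_revenue"})
    )
```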
Required Skills and Qualifications
Core Technical Skills
| Skill Area | Requirements |
| --- | --- |
| Programming | Strong proficiency in Python for data manipulation and scripting. Familiarity with standard Python data libraries (e.g., Pandas, NumPy). |
| Database | Expert-level proficiency in SQL (Structured Query Language). Experience writing complex joins, stored procedures, and performing performance tuning. |
| Big Data Concepts | Foundational understanding of Big Data architecture (Data Lakes, Data Warehouses) and distributed processing concepts (e.g., MapReduce). |
| ETL/ELT | Basic knowledge of ETL principles and data modeling (star schema, snowflake schema). |
| Version Control | Practical experience with Git (branching, merging, pull requests). |
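To make the SQL expectation concrete, here is a small, self-contained sketch of a star-schema-style join and aggregation. The schema and data are invented for illustration, and SQLite stands in for whatever warehouse engine a real pipeline would target.

```python
import sqlite3

# In-memory demo database with a hypothetical fact table and dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, segment TEXT);
CREATE TABLE fact_sales   (sale_id INTEGER PRIMARY KEY, customer_id INTEGER,
                           sale_date TEXT, amount REAL);
INSERT INTO dim_customer VALUES (1, 'retail'), (2, 'wholesale');
INSERT INTO fact_sales   VALUES (1, 1, '2024-03-01', 120.0),
                                (2, 2, '2024-03-02', 300.0),
                                (3, 1, '2024-03-03',  80.0);
""")

# Star-schema join: revenue and order count per customer segment.
QUERY = """
SELECT c.segment, COUNT(*) AS orders, SUM(f.amount) AS revenue
FROM fact_sales AS f
JOIN dim_customer AS c ON c.customer_id = f.customer_id
WHERE f.sale_date >= :since
GROUP BY c.segment
ORDER BY revenue DESC;
"""
for segment, orders, revenue in conn.execute(QUERY, {"since": "2024-01-01"}):
    print(segment, orders, revenue)
```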
Preferred Qualifications (A Plus)
- Experience with a distributed computing framework like Apache Spark (using PySpark); a brief sketch follows this list.
- Familiarity with cloud data services (AWS S3/Redshift, Azure Data Lake/Synapse, or Google BigQuery/Cloud Storage).
- Exposure to workflow orchestration tools (Apache Airflow, Prefect, or Dagster).
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
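For context on the PySpark preference, this hedged sketch expresses the same kind of aggregation with Spark's DataFrame API; the storage paths and column names are placeholders, not references to a real system.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-revenue-demo").getOrCreate()

# Hypothetical input: an orders dataset stored as Parquet in a data lake.
orders = spark.read.parquet("s3://example-bucket/orders/")

daily = (
    orders
    .dropDuplicates(["order_id"])
    .where(F.col("amount") >= 0)  # quality filter: drop impossible values
    .groupBy(F.to_date("order_date").alias("order_day"), "region")
    .agg(F.sum("amount").alias("daily_revenue"))
)

# Write the aggregate back to the lake for downstream consumers.
daily.write.mode("overwrite").parquet("s3://example-bucket/daily_revenue/")
```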
- Company: Information Tech Consultants
- Location: City of London, Greater London, UK
- Posted