large-scale data processing, ETL, lakehouse architectures, and experience in microservices, API design, Kafka, Redis, Memcached, observability (Datadog, Splunk, Grafana, or similar), and orchestration (Airflow, Temporal). Proficient in SQL and in one or more DBMSs: Oracle, PostgreSQL, Sybase, MongoDB, Cassandra, CockroachDB, MySQL, Couchbase, DynamoDB. Overall knowledge of the software …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Net Talent
related field, with a focus on building scalable data systems and platforms. Strong expertise with modern data tools and frameworks such as Spark, dbt, Airflow, Kafka, Databricks, and cloud-native services (AWS, GCP, or Azure). Deep understanding of data modeling, distributed systems, streaming architectures, and ETL/ELT …
roadmaps, plans, and delivery. Knowledge of some of the specific technologies we leverage would be an advantage; these are Python, SQL, Snowflake, Tableau, Airflow, Amazon SageMaker, Kafka, and React.
technical documentation. Comfortable working with Agile, TOGAF, or similar frameworks. Desirable: experience with Python and data libraries (Pandas, scikit-learn); knowledge of ETL tools (Airflow, Talend, NiFi); familiarity with analytics platforms (SAS, Posit); prior work in high-performance or large-scale data environments. Why join? This is more than …
our product experience. You'll typically be working in Java or Python, with a technology stack that includes AWS, Kinesis, S3, Kubernetes, Spark, Airflow, gRPC, New Relic, Databricks, and more. This role requires expertise in distributed systems, microservices, and data pipelines, combined with a strong focus on observability.
including data pipelines, orchestration, and modelling. Lead the team in building and maintaining robust data pipelines, data models, and infrastructure using tools such as Airflow, AWS Redshift, dbt, and Looker, ensuring the team follows agile methodologies to improve delivery cadence and responsiveness. Contribute to hands-on coding, particularly in areas … foster team growth and development. Strong understanding of the data engineering lifecycle, from ingestion to consumption. Hands-on experience with our data stack (Redshift, Airflow, Python, DVT, MongoDB, AWS, Looker, Docker). Understanding of data modelling, transformation, and orchestration best practices. Experience delivering both internal analytics platforms and external data …
If you need support in completing the application, or if you require a different format of this document, please get in touch at UKI.recruitment@tcs.com or call the TCS London office on 02031552100 with the subject line: "Application Support Request".