environment. Experience in line management or team leadership, with a track record of developing and supporting engineers. Strong proficiency in: AWS (Lambda, Glue, ECS, S3, etc.). Python and PySpark (data pipelines, APIs, automation). TypeScript and React (frontend development). Excellent communication and stakeholder management skills. Demonstrated expertise in technical design and architecture of distributed systems. Familiarity with …
Nottingham, Nottinghamshire, East Midlands, United Kingdom Hybrid / WFH Options
Littlefish
roles. Ability to manage multiple projects and priorities in a fast-paced environment. Experience with SSRS/SSAS (Tabular with DAX & OLAP with MDX)/SSIS. Experience with Databricks, PySpark, and other data science tools. Experience of using Azure DevOps/Git. Microsoft Certified on Data (e.g. Fabric). This is a client-facing role, so the following are essential …
Sunbury-On-Thames, London, United Kingdom Hybrid / WFH Options
BP Energy
systems, and wants to have a direct impact on data-driven decision-making. Key Responsibilities: Design, build, and maintain scalable and reliable ETL/ELT data pipelines using Python, PySpark, and SQL. Develop and manage data workflows and orchestration using tools such as Airflow or similar. Optimize data processes for performance, scalability, and cost-efficiency, particularly in cloud environments. … security and compliance best practices are followed across data systems and processes. Required Qualifications: 5+ years of experience in data engineering or a related field. Proficiency in Python and PySpark for data processing and automation. Strong command of SQL for data querying, transformation, and performance tuning. Deep experience with cloud platforms, preferably AWS (e.g. S3, Glue, Redshift, Athena, EMR) …
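The ETL/ELT responsibilities above follow a common extract-transform-load shape. A minimal pure-Python sketch of that pattern (stdlib only, with SQLite standing in for a warehouse target; the field names and cleaning rule are illustrative assumptions, not from any listing):

```python
import csv
import io
import sqlite3

# Extract: read raw CSV (an in-memory string stands in for an S3/Glue input).
RAW = "user_id,amount\n1,10.50\n2,not_a_number\n3,7.25\n"

def extract(raw: str) -> list[dict]:
    return list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and drop rows that fail validation.
def transform(rows: list[dict]) -> list[dict]:
    clean = []
    for row in rows:
        try:
            clean.append({"user_id": int(row["user_id"]),
                          "amount": float(row["amount"])})
        except ValueError:
            continue  # in production, bad rows would go to a dead-letter store
    return clean

# Load: bulk-insert into a SQL table, then verify with a query.
def load(rows: list[dict], conn: sqlite3.Connection) -> tuple:
    conn.execute("CREATE TABLE IF NOT EXISTS payments (user_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO payments VALUES (:user_id, :amount)", rows)
    return conn.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone()

conn = sqlite3.connect(":memory:")
count, total = load(transform(extract(RAW)), conn)
print(count, total)  # 2 17.75
```

In PySpark the same three stages map onto `spark.read`, DataFrame transformations, and `DataFrame.write`, with an orchestrator such as Airflow scheduling each run.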
fast-paced environments. Experience working with Software Engineers to provide feedback and validate the technical implementation for custom applications. Experience working with a scripting language like Python, SQL, Spark, PySpark or similar. Must be able to work on-site in Hendon, VA, with the ability to work in Springfield, VA as needed. Preferred Qualifications: Comfortable with briefing government leadership. …
understanding of AI/ML/DL and Statistics, as well as coding proficiency using related open source libraries and frameworks. Significant proficiency in SQL and languages like Python, PySpark and/or Scala. Can lead, work independently as well as play a key role in a team. Good communication and interpersonal skills for working in a multicultural work …
and complexity projects
• Bachelor's degree in Computer Science or related field (or 4 additional years of relevant SWE experience)
• Strong background in analytics development
• Proficiency in Pig and PySpark
Desired:
• Experience with patch management and IAVA tracking
• Programming skills in Python, Java, or Scala
• Familiarity with NiFi and Ansible
• Experience working in Agile environments
Security Clearance Required: TS …
a minimum of 6-8 years of core Python development experience. Strong competency with Python libraries such as Pandas and NumPy for data wrangling, analysis, and manipulation. Expertise in PySpark for large-scale data processing and loading into databases. Proficiency in data querying and manipulation with Oracle and PostgreSQL. Skilled in API programming, handling JSON, CSV, and other unstructured …
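The JSON-handling skill named above typically means normalising nested API payloads into flat rows before loading them into a relational database. A short stdlib-only sketch; the payload shape and field names are made up for the example:

```python
import json

# A hypothetical API payload mixing nested and flat fields.
PAYLOAD = json.dumps({
    "results": [
        {"id": 1, "user": {"name": "Ana", "country": "UK"}, "score": 0.9},
        {"id": 2, "user": {"name": "Ben", "country": "DE"}, "score": 0.4},
    ]
})

def flatten(record: dict, prefix: str = "") -> dict:
    """Recursively flatten nested dicts into 'a.b'-style keys, as one might
    before bulk-loading API responses into a relational table."""
    flat = {}
    for key, value in record.items():
        full = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, full + "."))
        else:
            flat[full] = value
    return flat

rows = [flatten(r) for r in json.loads(PAYLOAD)["results"]]
print(rows[0])  # {'id': 1, 'user.name': 'Ana', 'user.country': 'UK', 'score': 0.9}
```

Each flattened dict then maps directly onto table columns, whether the target is Oracle, PostgreSQL, or a DataFrame.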
individuals across 100 countries and has a reach of 600 million users, is recruiting an MLOps Engineer who has Chatbot (Voice) integration project experience using Python, PyTorch, PySpark and AWS LLM/Generative AI. Our client is paying £400 per day Outside IR35 to start ASAP for an initial 6-month contract on a hybrid basis based near Stratford …
maximize data processing efficiency and real-time insights. Collaborate closely with cross-functional teams to ensure seamless integration and delivery of analytics solutions. Required Skills and Experience: Experience in PySpark, Pig, or Piranhas. Understanding of Map/Reduce and/or streaming analytics methodologies. Desired Experience: Patch management, Python, Java, Scala, NiFi, Ansible, Lambda Functions, AWS. Need 6 years …
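MapReduce, named in the listing above, splits work into a map phase (emit key-value pairs), a shuffle (group pairs by key), and a reduce phase (combine the values for each key). A word-count sketch of the model in plain Python, without a cluster:

```python
from collections import defaultdict
from functools import reduce

DOCS = ["big data big insights", "streaming data pipelines"]

# Map: each document independently emits (word, 1) pairs.
def map_phase(doc: str) -> list[tuple[str, int]]:
    return [(word, 1) for word in doc.split()]

# Shuffle: group emitted pairs by key, as the framework does between phases.
def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

# Reduce: sum the counts for each word.
def reduce_phase(grouped) -> dict[str, int]:
    return {key: reduce(lambda a, b: a + b, values)
            for key, values in grouped.items()}

pairs = [pair for doc in DOCS for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"], counts["data"])  # 2 2
```

Because the map and reduce steps touch only their own inputs, a framework like Hadoop or Spark can run them on many machines at once; streaming analytics applies the same reduction continuously over a moving window.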
Data Management Required:
• Strong background in analytics development
• Pig
• PySpark
• Piranhas
Desired:
• Patch management and IAVA tracking
• Python
• Java
• Scala
• NiFi
• Ansible
• Experience working in an Agile environment
Security Clearance Required: TS/SCI with Poly
About Avid Technology Professionals: Avid Technology Professionals, LLC (ATP) is a premier provider of software and systems engineering, and acquisition program management services for …
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
current solution - identify, where and if possible, whether the solution works for Argos & Nectar products. Experience Required: MTA (Multi-Touch Attribution) and MMM (Marketing Mix Modelling) experience, Python, SQL, PySpark, cloud technology (ideally AWS or Azure), machine learning experience, Accumetric Metrics, experience of working with multi-functional teams. Sanderson is committed to barrier-free and inclusive recruitment. We are …
Title: GCP Data Engineer. Location: Philadelphia, PA (candidates willing to relocate may be submitted). GCP Data Engineer - GCP Dataflow and Apache Beam (key skills). Primary Skills: PySpark, Spark, Python, Big Data, GCP, Apache Beam, Dataflow, Airflow, Kafka, BigQuery, GFO, Google Analytics; JavaScript is a must. Strong experience with Dataflow and BigQuery. Should have experience leading a team or … an enterprise-scale Customer Data Platform (CDP). Experience in at least one programming language (Python strongly preferred), cloud computing platforms (e.g. GCP), big data tools such as Spark/PySpark, columnar datastores (BigQuery preferred), DevOps processes/tooling (CI/CD, GitHub Actions), infrastructure as code frameworks (Terraform), BI tools (e.g. DOMO, Tableau, Looker), pipeline orchestration (e.g. Airflow). Fluency …
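Dataflow pipelines in the listing above are written against Apache Beam, whose core idea is a collection flowing lazily through chained transforms. This is not the real Beam API - just a stdlib sketch of that chaining style, with made-up event data:

```python
# A toy pipeline: transforms are composed first, then pulled lazily by run(),
# echoing how Beam builds a graph before a runner (e.g. Dataflow) executes it.
class Pipeline:
    def __init__(self, items):
        self.items = iter(items)

    def map(self, fn):
        self.items = (fn(x) for x in self.items)
        return self

    def filter(self, pred):
        self.items = (x for x in self.items if pred(x))
        return self

    def run(self):
        return list(self.items)

clicks = [{"page": "/home", "ms": 120},
          {"page": "/buy", "ms": 340},
          {"page": "/home", "ms": 90}]

# Keep slow page loads (> 100 ms) and project out the page path.
slow = (Pipeline(clicks)
        .filter(lambda e: e["ms"] > 100)
        .map(lambda e: e["page"])
        .run())
print(slow)  # ['/home', '/buy']
```

Real Beam adds windowing, grouping, and runner portability on top of this shape; the pipe-and-transform structure is the part that carries over.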
related field.
• Demonstrated experience with intermediate to advanced full stack development.
• Demonstrated experience with the ability to consume complex REST APIs.
• Demonstrated experience with tools, languages, and frameworks: Python, PySpark, JavaScript, Vue, Nuxt.js, and Viz.js.
• Demonstrated experience with AWS services: S3 and EC2.
• Demonstrated experience with databases: relational, graph (Neo4J/Graph-Tool), and NoSQL/document (MongoDB). …
Manchester, North West, United Kingdom Hybrid / WFH Options
Gerrard White
Knowledge of the technical differences between different packages for some of these model types would be an advantage. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL) A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science) Experience of WTW's Radar software is preferred Proficient at communicating results in a concise …
Bracknell, Berkshire, United Kingdom Hybrid / WFH Options
Teksystems
in building scalable data solutions that empower market readiness. 3 months initial contract Remote working (UK based) Inside IR35 Responsibilities Design, develop, and maintain data pipelines using Palantir Foundry, PySpark, and TypeScript. Collaborate with cross-functional teams to integrate data sources and ensure data quality and consistency. Implement robust integration and unit testing strategies to validate data workflows. Engage …
Spark - must have. Scala - must have, hands-on coding. Hive & SQL - must have. Note: please screen the profile before interview; at minimum, the candidate should know the Scala coding language - a PySpark profile will not help here. The interview includes a coding test. Job Description: Scala/Spark. Good Big Data resource with the below skillset:
• Spark
• Scala
• Hive/HDFS/HQL
• Linux …
team members on a regular basis
• Adhere to all established policies regarding security, completion of mandatory training courses and timesheet submission
Required Experience:
• Experience developing in Python
• Experience with the PySpark application programming interface (API) to process large datasets in a distributed cluster
• Ability to efficiently balance simultaneous, high-profile projects and deliver those projects in an effective manner is …
etc. Knows when and why to use specific algorithms depending on the business use case. Can fine-tune/train models and evaluate performance. Technical Skills: Proficient in Python, PySpark, and working with MongoDB JSON data. Must be comfortable working directly with raw/complex datasets and drawing insights. Experience in model selection based on data exploration. Bonus (not …
Bracknell, Berkshire, United Kingdom Hybrid / WFH Options
Teksystems
Provide hands-on support for production environments, ensuring the stability and performance of data workflows. Troubleshoot and resolve issues related to data pipelines and integrations built using Palantir Foundry, PySpark, and TypeScript. Collaborate with engineering and business teams to understand requirements and deliver timely solutions. Support and improve continuous integration (CI) processes to streamline deployment and reduce downtime. Communicate …
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming Languages such as Spark, Java, Python, PySpark, Scala (minimum of 2). Extensive Data Engineering hands-on experience (coding, configuration, automation, delivery, monitoring, security). ETL Tools such as Azure Data Factory (ADF) and Databricks or … UK, and you MUST have the Right to Work in the UK long-term without the need for Company Sponsorship. KEYWORDS Senior Data Engineer, Coding Skills, Spark, Java, Python, PySpark, Scala, ETL Tools, Azure Data Factory (ADF), Databricks, HDFS, Hadoop, Big Data, Cloudera, Data Lakes, Azure Data, Delta Lake, Data Lake, Databricks Lakehouse, Data Analytics, SQL, Geospatial Data, FME …
mathematical models, methods, and/or techniques (e.g. algorithm development) to study issues and solve problems, engineering (electrical or computer), and/or high performance computing. Preferred: Python & PySpark experience …
role for you. Key Responsibilities: Adapt and deploy a cutting-edge platform to meet customer needs Design scalable generative AI workflows (e.g., using Palantir) Execute complex data integrations using PySpark and similar tools Collaborate directly with clients to understand their priorities and deliver impact Why Join? Be part of a mission-driven startup redefining how industrial companies operate Work …