Greater London, England, United Kingdom Hybrid / WFH Options
Hunter Bond
My client is looking for a talented and motivated Big Data Architect (Azure, Databricks, Spark) to be based in their London office. You'll be responsible for providing technical leadership in architecting and designing end-to-end solutions for the organisation's data lake initiatives, as they provide increasing numbers … improvements in design, processes, and implementation to improve operational management, scalability, and extensibility. The following skills/experience are essential:
- Strong implementation experience using Spark and Databricks
- Strong Cloud experience (ideally Azure)
- Previously heavily involved in a Data Warehouse implementation programme
- Strong stakeholder management experience
- Excellent IT background, ideally …
Implement DevOps practices using GitHub workflows for model version control, continuous integration, and model deployment automation. Leverage data science frameworks like TensorFlow, PyTorch, and Spark ML to build robust and scalable machine learning solutions. Collaborate closely with other engineering guilds to integrate machine learning components into larger software systems … Experience working with Data Visualization tools like Tableau, Power BI or Looker. Hands-on experience working with data science frameworks such as TensorFlow, PyTorch, and Spark ML. Ability to collaborate effectively with other engineering teams and stakeholders. Work closely with Data Scientists to productionize their ML models. Familiarity with Airflow …
SageMaker, or Azure Machine Learning for model development and deployment. Data Analytics and Big Data Technologies: Proficient in big data technologies such as Hadoop, Spark, and Kafka for handling large datasets. Experience with data visualization tools like Tableau, Power BI, or Qlik for deriving actionable insights from data. Programming …
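The "handling large datasets" requirement above boils down to distributed aggregation over large record sets. As a minimal, single-machine sketch of the groupBy-and-sum shape of work a Spark or Kafka-fed job distributes across a cluster (the event schema and function name here are illustrative, not from the listing):

```python
from collections import defaultdict

def aggregate_clicks(events):
    """Group click events by user and sum counts -- the same shape of
    work a Spark groupBy/agg job would spread across many executors.
    The schema {"user": ..., "clicks": ...} is hypothetical."""
    totals = defaultdict(int)
    for event in events:
        totals[event["user"]] += event["clicks"]
    return dict(totals)

events = [
    {"user": "a", "clicks": 3},
    {"user": "b", "clicks": 1},
    {"user": "a", "clicks": 2},
]
print(aggregate_clicks(events))  # {'a': 5, 'b': 1}
```

At scale the same logic would be expressed as a Spark DataFrame `groupBy("user").sum("clicks")` rather than an in-memory dictionary.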
Synapse, Azure Analysis Services and Power BI Analytics Experience in Data Pipeline and integration workflow management tools: Talend, Stored Procedures, Change Data Capture (CDC), Spark & Azure APIs …
Data Scientists and Service Engineering teams Experience with design, development and operations that leverage deep knowledge in the use of services like Amazon Kinesis, Apache Kafka, Apache Spark, Amazon SageMaker, Amazon EMR, NoSQL technologies and other 3rd parties Develop and define key business questions and build … a related field Experience of Data platform implementation, including 3 years of hands-on experience in implementing and performance-tuning Kinesis/Kafka/Spark/Storm Experience with analytic solutions applied to the Marketing or Risk needs of enterprises Basic understanding of machine learning fundamentals Ability to … take Machine Learning models and implement them as part of a data pipeline IT platform implementation experience Experience with one or more relevant tools (Flink, Spark, Sqoop, Flume, Kafka, Amazon Kinesis) Experience developing software code in one or more programming languages (Java, JavaScript, Python, etc.) Current hands-on implementation experience …
at scale utilising best-of-breed Cloud services and technologies. So, what tools and technologies will you be using? AWS, Python, Databricks/Spark, Trino, Airflow, Docker, CloudFormation/Terraform, SQL/NoSQL. We provide you with the opportunity to think freely and work creatively, and right now … Other skills we are looking for you to demonstrate include: Experience of data storage technologies: Delta Lake, Iceberg, Hudi Sound knowledge and understanding of Apache Spark, Databricks or Hadoop Ability to take business requirements and translate these into tech specifications Knowledge of Architecture best practices and patterns Competence …
Bristol, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc) with Apache Spark, Databricks or Hadoop Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes) Ability to …
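The JSON/CSV transformation skill mentioned above can be illustrated at toy scale with the standard library; at data-lake scale the same reshaping would be a Spark or Databricks job over whole partitions (the function name and record fields below are hypothetical):

```python
import csv
import io
import json

def json_records_to_csv(json_lines):
    """Convert newline-delimited JSON records to CSV text.
    A single-machine analogue of spark.read.json(...).write.csv(...);
    missing fields are emitted as empty cells."""
    records = [json.loads(line) for line in json_lines]
    # Union of all keys, sorted for a stable column order.
    fieldnames = sorted({key for record in records for key in record})
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return out.getvalue()

print(json_records_to_csv(['{"name": "a", "value": 1}', '{"name": "b"}']))
```

The column-union step matters because semi-structured JSON rarely has a uniform schema; Spark handles the same problem with schema inference or an explicit `StructType`.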
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc) with Apache Spark, Databricks or Hadoop Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes) Ability to …
Google Cloud Professional Cloud Architect or Professional Cloud Developer certification Highly desirable to have hands-on experience with ETL tools, Hadoop-based technologies (e.g., Spark), and batch/streaming data pipelines (e.g., Beam, Flink) Proven expertise in designing and constructing data lakes and data warehouse solutions utilising technologies …
complex data warehouses and/or data lakes. Familiarity with cloud-based analytics platforms such as AWS, Azure, Snowflake, Google Cloud Platform (BigQuery), Spark, and Splunk. Proficiency in SQL and experience using one or more of the following languages: R, Python, Scala, and Julia, including relevant frameworks/…
with Git for version control and project management, alongside some knowledge of Linux/Shell. data platform familiarity - previous experience of working with both Apache Spark and MapReduce data processing and analytics frameworks. and reporting expertise - experience with Tableau, Power BI, Excel alongside notebooks for experiment documentation. What …
Engineering experience Demonstrate in-depth knowledge of large-scale data platforms (Databricks, Snowflake) and cloud-native tools (Azure Synapse, Redshift) Experience of analytics technologies (Spark, Hadoop, Kafka) Have familiarity with Data Lakehouse architecture, SQL Server, DataOps, and data lineage concepts Demonstrate in-depth knowledge of large-scale data platforms …
language, ideally Python but can also be Java or C/C++ SQL experience Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau) Get in touch with Ella Alcott - Ella@engagewithus.com …
Skills & Experience At least 10 years' experience working with JavaScript or Python/Java Previous experience deploying Software into the Cloud EKS, Docker, Kubernetes Apache Spark or NiFi Microservice architecture experience Experience with AI/ML systems …
for business improvements Lead a small team of data scientists on Neural Networks LLMs (CNN & RNN), ML, & NLP NLP/AI/ML/Spark/Python/Data Scientist/Machine Learning Engineer/OCR/Deep Learning Requirements Bachelor's degree or equivalent experience in a quantitative field …
end ownership Python or similar (Ruby or Node) or another Functional Language JavaScript and associated frameworks, preferably Vue, or similar Cloud technologies SQL (advantageous) Spark (advantageous) Docker/Kubernetes (advantageous) MongoDB, SQL, Postgres & Snowflake (advantageous) Developing online, cloud-based SaaS products. Leading and building scalable architectures and distributed systems …
and manage data lake/data warehouse platforms (some of the following types of providers: AWS, Microsoft Azure, Google Cloud Platform, Databricks, Snowflake, Cloudera, Spark, MongoDB). Done this at companies using high volumes of data, ideally in retailing. Other sectors where high-volume data is used would also be relevant …
you! Minimum Qualifications Bachelor's or Master's Degree in Engineering or Computer Applications Hands-on experience with MS SQL Server and GCP Familiarity with BQ, Spark, Hive, Pig, and other analytical tools. Understanding of the finance domain. Preferred Qualification Experience in SAP data modelling Genpact is an Equal Opportunity Employer and …
in programming languages commonly used in machine learning, preferably Python. Experience with machine learning frameworks and libraries, such as TensorFlow, PyTorch, scikit-learn, or Apache Spark. Proven track record of developing and implementing machine learning solutions in a professional setting. Passion for exploring new technologies and driving innovation in …
build their AI practice and a team around you. Required Skills: Building cloud-native machine learning architecture with: LlamaIndex, HuggingFace, SentenceTransformers, PyTorch, Python, Apache Spark. Experience with practical application of AI and scaling AI with these tools Experience in Health Care is essential We would love to share …