Job Description:
This role sits within Dyson's Global Data Services team. The team is tasked with ensuring that every part of Dyson's business can leverage rich, accurate, and timely data to generate insights and make better decisions. As part of IT, the team has the resources and the remit to keep pace with Dyson's impressive global growth. Building our analytical capabilities is a core pillar of Dyson's new global data strategy. As more consumers engage with Dyson, via more types of products and across more markets, the volume and diversity of data will greatly increase. Understanding how to use this data to improve everything from customer experiences to product development will be key to Dyson's success.

Job Responsibilities:
You will work on Dyson's core data platform on GCP. You will ingest new data sources, write data pipelines as code, and transform and enrich data using the most efficient methods (a minimal pipeline sketch appears at the end of this listing). Working with data from across Dyson's global data estate, you will determine the best way to serve data at scale to a global audience of analysts. You will have the opportunity to work with the team's data architects, data scientists, and data product managers to ensure that we are building integrated, performant solutions. Ideally you will have a software engineering mindset, be able to leverage CI/CD, and apply critical thinking to the work you undertake. The role would suit candidates looking to move from traditional big data stacks such as Spark and Hadoop to cloud-native technologies (Dataflow, BigQuery, Docker/Kubernetes, Pub/Sub, Cloud Functions). Candidates who have strong software development skills and wish to make the leap to working with data at scale will also be considered.

Job Requirements:
- Experience integrating/interfacing with REST APIs / web services
- Experience building APIs and apps using Python, JavaScript, or an alternative language
- Practical experience writing data analytics pipelines
- Experience handling data securely
- Hands-on experience with cloud environments (GCP and AWS preferred)
- Experience with AWS Data Pipeline, Azure Data Factory, or Google Cloud Dataflow
- Experience migrating from on-premises data stores to cloud solutions
- Experience with agile software delivery and CI/CD processes
- Experience working with containerization technologies (Docker, Kubernetes, etc.)
- A willingness to learn and find solutions to complex problems
- Experience designing and building real-time and near-real-time solutions using streaming technologies (e.g. Dataflow/Apache Beam, Flink, Spark Streaming)
- Knowledge of data modelling techniques and integration patterns

Job Details:
Company: Dyson
Vacancy Type: Full Time
Job Location: London, England, UK
Application Deadline: N/A

Apply Here: careers-trivia.com
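
To illustrate the kind of pipeline-as-code work described in the responsibilities above, here is a minimal sketch of an Apache Beam pipeline in Python that reads JSON events from a Pub/Sub subscription, parses them, and appends them to a BigQuery table. The project, subscription, table, and field names are illustrative placeholders, not actual Dyson resources.

```python
# Minimal Apache Beam sketch: Pub/Sub -> parse -> BigQuery.
# All resource names below are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(message: bytes) -> dict:
    """Decode a JSON Pub/Sub message into a BigQuery-ready row."""
    event = json.loads(message.decode("utf-8"))
    return {
        "device_id": event.get("device_id"),
        "event_type": event.get("event_type"),
        "event_time": event.get("event_time"),
    }


def run() -> None:
    # streaming=True because Pub/Sub is an unbounded source.
    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/device-events"
            )
            | "ParseJson" >> beam.Map(parse_event)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="my-project:analytics.device_events",
                schema="device_id:STRING,event_type:STRING,event_time:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()
```

The same code can be tested locally with Beam's DirectRunner and deployed to Google Cloud Dataflow by supplying the usual runner, project, region, and staging options on the command line.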