Overview

Are you a Data or Software Engineer with significant experience using Amazon Redshift, Apache Kafka, or Apache Spark to process, analyse, store, and serve data? Do you have a strong appetite for designing and delivering scalable, high-performance systems? Come join us on our unique journey!

About the Role
Zopa is radically transforming the way consumers borrow and save money. We are the leading peer-to-peer lending company in Europe, delivering better value than traditional banks. More than 63,000 people have lent over £1.8 billion through Zopa since our launch in 2005, and we are regularly featured in the press as an innovator in our sector.
Zopa’s data-driven culture has played a major role in the fantastic growth we have been experiencing (100% in 2015!). We want to continue strengthening this culture by empowering people with tools, knowledge, and easy-to-use data.

As part of this endeavour, we have started building a combined data warehouse and data lake for analytics on Amazon Web Services, using technologies such as Redshift, S3, Lambda, Kafka, and Spark.
We need an experienced data engineer to help design, build, optimize, and operate this new data warehouse/lake combo.
You will be working very closely with our software development team, the data-science function, and the consumers of data, operating in a small, agile, and adaptable cross-functional team.
Your talent and drive will help create a world-class, elegant, scalable, robust, and efficient data-warehousing solution powering the analytical needs of the whole company.
Are you ready?

For more clarity on our values and mode of operation, see our tech blog, and especially our posts on Predictor, Data Democratization, and the cross-functional tribal model we use.

Requirements

Essential Skills/Experience

  • Hands-on experience in data engineering (>1 yr), especially in some or all aspects of designing, building, and monitoring:
      ◦ ETL pipelines (e.g., using Lambda, Kafka, S3, EMR/EC2, Kinesis)
      ◦ Data warehouses (e.g., on Redshift or any other MPP database)
      ◦ Data lakes (e.g., using S3/HDFS and Spark)
  • Strong background in software development, with expertise in Python or Java.
  • UNIX/Linux

Bonus Points For

  • Understanding of software security and threat models, and experience building secure applications
  • Good understanding of data-governance concepts
  • Experience administrating AWS
  • Experience with stream-processing platforms such as Apache Kafka Streams or Apache Storm
  • Experience working in an Agile environment

Behaviour

  • Strong passion for project management, execution, and getting things done
  • Ability to communicate with both technical and non-technical stakeholders
  • Passion for elegant and intuitive data structures and analytical interfaces
  • Excellent interpersonal, relationship building and influencing skills
  • Team player with a strong sense of accountability, urgency, and execution
  • Passionate about learning new, cutting-edge technologies

Tagged as: emr/ec2, kafka, kinesis, lambda, Python, redshift, S3, spark