ABOUT THE COMPANY:
At Envoy, we do things differently. We believe in replacing the mundane with moments that delight. This idea started at the front desk, where we swapped outdated log books for iPads and set a whole new standard for visitor management. Since then, over 20 million guests have signed in with our product at more than 5,000 offices worldwide. 
And now we’re looking past the lobby to the mailroom, meeting rooms, and beyond, asking how we can make those better, too. Challenging the status quo of workplace technology is the vision we’re working toward every day.
ABOUT THE ROLE:
We are looking for data engineers. You will design and build the infrastructure that fuels Envoy’s future growth and allows us to realize our vision of becoming the office of the future. You thrive in “you build it, you run it” environments with free-flowing collaboration between specialists and generalists up and down the stack. You constantly push the boundaries of the quality and quantity of data we have available, and you are passionate about building scalable, fault-tolerant pipelines from research to deployment. Challenging the status quo of workplace technology is a big problem with significant scale and almost every type of technical challenge, so you should be ready to build and support our platform.
Our stack is built on PostgreSQL, Redis, Ruby, Rails, Elixir, Phoenix, and JavaScript (Ember.js, React Native), plus a few more things. We’re currently hosted on Heroku with some AWS (mostly Lambda), though we are constantly reevaluating as we scale.
Our data infrastructure is built on Redshift, and we currently use third-party tools plus some in-house scripts for our ETL jobs. We use dbt as our data modeling layer, and Looker and Python notebooks to visualize our data.
We value being a top-notch organization with a strong engineering-driven culture, and we hold our code, systems, and people to the same high standards. We value learning and growth (and not being bored) and hire fully formed, diverse, well-rounded, communicative people we can envision being friends with and trusting. Our projects tend to be driven by small, agile teams, so trust and accountability are required for us to work. It also helps us keep processes and overhead low. We’re proud of the reasonably sized, high-powered team we’ve built so far (61 employees, including 29 engineers) and are always striving to be the best place to work we can be.
If this sings to you, come join us!

You will:

  • Lead an initiative to design, build, and maintain an event-based data platform to ingest, process, and store all the data produced by our apps and the services that we use
  • Partner with data scientists and engineers to optimize our data storage, retention, and security practices, ensuring we are always using our resources in the most efficient way possible
  • Build for scale, availability, performance, and security across the stack. We have global enterprise scale and need to sustain thousands of requests per second
  • Develop and maintain a rich dataset that enables us to build increasingly accurate models that help us better understand our business
  • Drive quality into our data pipeline and ensure data integrity at every level, whether through good standards, in-depth and helpful reviews, data validation methods, anomaly detection and alerting, multivariate experiments with proper control groups, partnership with engineering on data storage and logging, or well-thought-out ETL
  • Clearly and succinctly document where and how funky third-party data becomes our clean, gold-standard dataset
  • Facilitate decisions on standards, frameworks, tools, build versus buy, and best practices. We try to keep our data and tools simple, clean, readable, and consistent to maximize our time on the organization’s problems.
  • Constantly push the boundaries of the data we have available, and help us find insights that drive our product and business roadmaps forward
  • Evangelize industry best practices and leverage new technologies to help make our infrastructure more reliable and efficient
  • Learn, teach, and share with your fellow teammates and the world (blogs, open source, etc.). It’s good for you, good for Envoy, and good for the world.
You are:

  • Emotionally mature & humble. You care about being effective over being right.
  • A champion of quality. Ensuring accuracy and reliability of data is something that you are passionate about.
  • Communicative & empathetic. You are happy when helping others succeed.
  • An open-minded learner. You live to learn new things, like staying up to date on new technologies, tools, and techniques. You are inspired by what people inside and outside Envoy know and are eager to incorporate the world's knowledge into your work.
  • Someone with extremely high standards. You’re practical and know perfect is the enemy of good, but you aspire for us to be great.
  • An owner. You feel personally accountable and responsible and know seeing the problem is less than half of it. You look for problems and inefficiencies and find elegant solutions to them before they become major issues.
  • Fast-paced. You love the speed of startups and the impact you can have in them.
  • An advocate for the end user. You think about how our customers will interact with the service and champion positive changes.
  • A systems thinker. You think about how your designs will affect other aspects of the service and how it will evolve in the future.
  • On top of risks. As Envoy becomes commonplace around the world, the services we provide and data we store will be more and more valuable. It is your job to make things highly available, performant, and as secure as appropriate.
You have:

  • A background in data engineering; you have built or been responsible for production-level data infrastructure.
  • Worked with orchestration systems like Airflow, Luigi, or Azkaban to coordinate complex data pipelines
  • Some experience with stream processing using tools like Kafka or Kinesis
  • An understanding of how columnar data warehouses (Redshift, BigQuery, Snowflake) work, and experience optimizing query performance
  • Experience setting up and working with a distributed analytics framework like Spark or Hadoop, and the Java/Scala or Python skills needed to operate it
  • Understanding of both relational and non-relational database systems.
  • Knowledge of how the internet and networking work (e.g., DNS, HTTP, TLS, certificates) and the tools and services that enable people and devices to connect to our services (e.g., browsers, CDNs, proxies).
  • Experience with how systems work at scale (e.g., threads, virtualization, configuration management, load balancers, caching).
  • A passion for scale and performance, and experience with distributed systems. You think people deserve access to their data in milliseconds and feel slow APIs are an insult. You worry about what happens when we have 100x more customers, but you are practical and experienced enough to solve what’s needed right now.
Bonus points:

  • You contribute to an open-source project or library of any sort.
  • You’re excited about working in a startup environment.
  • Leadership experience. You’ve driven a project or led a team to a good outcome a few times, and the team would reflect positively on it.
  • Experience with security exploits, techniques, and tools. As Envoy becomes commonplace around the world, we become more and more of a target.
If this kind of work sounds interesting, we'd love to hear from you! We're open to all backgrounds and levels of experience, and we believe that great people can always find a place. People do their best work when they can be themselves, so we value uniqueness. We never discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Our HQ is in San Francisco, and we’re open to remote candidates in the Pacific through Eastern time zones. Collaboration and the ease of chatting face to face are important to us.