Ridecell is on a mission to help our customers run the world better by powering the fastest-growing and most efficient ridesharing, carsharing, and autonomous mobility services. As the world shifts to a mobility-as-a-service model, market leaders in traditional transportation need to rapidly transform their business, and new entrants in autonomous and shared mobility have an opportunity to lead new markets. Ridecell is best poised to support the initiatives of these industry-leading organizations: several customers, including BMW, AAA (Gig carshare), and VW Group, already use our proven platform to launch, operate, and rapidly scale their mobility services across multiple geographies.

Challenges abound at Ridecell. Does building the next-generation Data & Analytics platform excite you? Are you passionate about playing a critical role in building scalable, high-performing, state-of-the-art big-data systems and products? If so, we would love to chat with you. Ridecell is seeking a Senior Data Engineer extraordinaire.


  • Build and enhance the building blocks of Ridecell's real-time analytics platform
  • Infuse data-oriented thinking across the company's rank and file
  • Evaluate new technologies and build prototypes for continuous improvements in Data Engineering
  • Own data quality, reliability, availability, and security
  • Handle data modeling, ETL setup, Hadoop cluster scaling, and integration of reporting tools such as Tableau
  • Build a company data warehouse search platform for business analytics
  • Enable self-service access to data for our organization and our customers
  • Create ad-hoc queries and reports, and teach others to create queries as needed
  • Automate and document processes; identify and eliminate performance bottlenecks
  • Design and publish custom dashboards for Product Teams and stakeholders across the company
  • Collaborate with Data Science, Product and Support Engineering teams to build new solutions


  • MS degree in Computer Science, Mathematics, or Data Science, with 4+ years of work experience
  • Problem solver with excellent written communication skills (we prefer well-written documents to PowerPoint presentations)
  • Intimate knowledge of SQL, particularly PostgreSQL
  • Strong experience working with Hive, Pig, Spark
  • Experience designing data storage with formats such as JSON (BSON), XML, Avro, and Parquet
  • Experience with AWS tools and technologies (S3, EMR, Kinesis, etc.) and their GCP counterparts
  • Experience with streaming data pipelines such as Kafka, AWS Kinesis, and Spark Streaming
  • Strong experience working with ultra-large data sets
  • Practical programming experience in at least one programming language
  • Strong working knowledge of UNIX/Linux
  • Capable of planning and executing on both short-term and long-term goals individually and with the team
  • Ability to establish processes and deliver structured, flexible, and scalable frameworks and solutions


  • Experience working in R or MATLAB
  • Experience with geospatial queries and pivot tables
  • Experience with demand planning for future data warehouse needs
  • Intimate knowledge of statistics and/or machine learning
  • Familiarity with columnar data stores
  • Familiarity with Python and Django is a plus
  • Experience building ETL pipelines with open source tools such as Talend and Pentaho