
Senior Data Platform Engineer (Kafka)

Austin, USA / Remote

Sourced by Jing-Ta Chow, VP of Engineering


EasyPost

Founded in 2012 as the first RESTful API for shipping, EasyPost, a YC unicorn, handles the scale that comes with success. We help e-commerce companies with accurate tracking and logistics, pushing boundaries and changing the status quo through a RESTful API that gives companies greater control over their shipping. We continue to disrupt the shipping industry, and this is the best time to get on board. We are out to do things differently: to consistently change, grow, and progress. We are delivering hope and spreading smiles to homes all across the country. Join us in building simple shipping solutions that enable sellers to define and rate postage, buy it, and track it in transit.

What We Offer:

  • Comprehensive medical, dental, vision, and life insurance
  • Competitive compensation package and equity
  • Monthly work-from-home stipend of $100 net
  • Flexible work schedule and paid time off
  • Collaborative culture with a supportive team
  • A great place to work with unlimited growth opportunities
  • The opportunity to make massive contributions at a hyper-growth company
  • Make an impact on a product helping ship millions of packages per day

About the Role 

As part of the Data team, you will be responsible for building a scalable data ingestion and processing platform and low-latency, customer-facing data APIs. People ship millions of packages with us every day, and those shipments pass through multiple stages that generate tens of millions of events a day. The platform we build is the foundation of the intelligence we offer our customers: it enables us to build the complex models that power the data API products that set us apart.
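
To give a flavor of working with this kind of event stream, here is a minimal, purely illustrative sketch of consuming shipment events from Kafka in Python. The broker address, consumer group, topic name, and event fields are hypothetical and are not part of this posting.

    import json

    from confluent_kafka import Consumer

    # Hypothetical broker, consumer group, and topic, for illustration only.
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "tracking-aggregator",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["shipment-events"])

    try:
        while True:
            msg = consumer.poll(timeout=1.0)  # wait up to 1s for a record
            if msg is None:
                continue
            if msg.error():
                # A real platform would route this to metrics and alerting.
                print(f"consumer error: {msg.error()}")
                continue
            event = json.loads(msg.value())  # assumes JSON-encoded tracking events
            # e.g. update per-shipment state keyed by a hypothetical "tracking_code" field
            print(event.get("tracking_code"), event.get("status"))
    finally:
        consumer.close()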

What you will do:

  • Architect fault-tolerant and self-healing distributed systems
  • Work with data scientists to create highly scalable API services based on ML and statistical models
  • Rebuild our data warehouse to support growth and design next-generation schemas for analytics
  • Find new ways to improve our in-house batch processing framework and workflow orchestration
  • Establish standard methodologies for creating systems and datasets used across the company
  • Work closely with other teams from across the organization
  • Build and maintain the automation that manages our data storage technologies
  • Mentor fellow teammates on algorithms, data structures, design patterns, and best practices
  • Find new ways to improve the Data team's initiatives and processes

Requirements

  • Experience with online pub-sub systems at scale
  • Comfortable in a polyglot environment
  • Familiarity with building scalable microservices
  • Experience diagnosing and resolving complex multi-system performance problems
  • Experience working on cross-team engineering projects, with strong skills in communicating complex technical problems and making judgment calls
  • Multiple years of experience using an open-source stream-processing platform such as Kafka
  • Strong desire to work in a fast-paced, start-up environment with multiple releases a day
  • A passion for working as part of a team
