Boston, MA · Job # 8564RK
Be an integral part of a global data team in a highly collaborative culture. Collaborate with analysts, data scientists, engineers, and business counterparts to create user experiences that delight customers, solve large-scale data optimization problems, and build solutions from start to finish.
Easy commute via commuter rail, the T, bike, or car!
The Experience you bring to the team:
- 3+ years of hands-on industry experience with Python and AWS
- 2+ years of proven experience developing ETL pipelines in Spark (PySpark preferred)
- Advanced SQL (ANSI SQL or Transact-SQL)
- Working knowledge of Data Lake patterns: partitioning, multi-step transformations, data cataloging
- Working knowledge of self-describing, compressed data file formats: Parquet, Avro
- Working knowledge of event streaming platforms and stream processors: Kinesis, Kafka, Flink
- Working knowledge of Domain-Driven Design (DDD) and event storming
- Experience with AWS data processing services: EMR, Athena, Redshift
- Experience with AWS serverless infrastructure: API Gateway, Lambda, DynamoDB, S3
- Experience with NoSQL/non-relational databases, especially document stores
- Experience building data models intended for data visualization solutions
- Demonstrable experience implementing business logic in well-structured data models that have been successfully applied to BI
- Experience coding in Java or Scala a plus
- Experience with Docker or other containerization tooling a plus
- Exposure to CI/CD using Git-based deployment automation a plus
Apply For this Position