Data Engineer

Alloy

Software Engineering, Data Science
Washington, DC, USA
Posted on Sep 14, 2024
About Alloy.ai
At Alloy.ai, we work with consumer goods companies that make the products we eat, wear, and use every day, as well as the ones we occasionally splurge on. We're tackling a real and complex problem for them: managing supply and demand in the face of constantly changing customer behavior, highly complex supply chain networks, 40-year-old data standards, and labor-intensive manual processes.
Alloy.ai is a fast-growing, well-funded startup with an expanding presence across the world. Our team hails from successful startups, leading tech companies and Fortune 100 enterprises. We believe deeply in fostering individual ownership, iterating to excellence, focusing on what matters, communicating openly & respectfully, and supporting one another.
We encourage people of all backgrounds to apply. Alloy.ai is committed to creating an inclusive culture, and we celebrate diversity of all kinds.
About The Role
Our data platform is an integral part of Alloy.ai’s value proposition. We offer more than 400 active integrations to connect and harmonize our customers’ data from a variety of data sources.
Our customers rely on Alloy.ai to provide accurate and complete data in a timely manner to run their businesses. Data Engineers are responsible for building, maintaining, and expanding these integrations. The team uses an array of monitoring tools to detect and address any data issues. We keep expanding and improving our data pipeline as our business grows to stay ahead of competing solution providers and so that our customers get the highest value out of their data.
About You
To succeed in the role of Data Engineer, you should possess curiosity, creativity, and strong problem-solving skills, and be an effective communicator.
You can expect a blend of building data integrations, maintenance, and feature development. You will write production-grade Python code for our data pipeline, which processes our customers' data around the clock. You will get to know the data back office of many retailers familiar from your daily shopping and learn more about some of your favorite brands' businesses.
You will work to connect external systems that pose many interesting challenges: data availability, data cleanliness, and reliability. You will use web scraping, APIs, ODBC, AS2, and other technologies to fetch and transform data.

What You Will Do

  • Work together with a highly talented and motivated team of engineers to ensure that our customers can count on Alloy's data platform to deliver their data promptly and reliably every day.
  • Build, maintain, and improve data integrations that extract, transform, and load data from various, disconnected retailer data sources into a standardized schema for our data platform.
  • Collaborate closely with other engineers, Product, and Client Solutions to build new integrations and improve existing ones, expanding the types and sources of data available to our customers.
  • Work in cross-functional teams to develop and build new features that enhance the capabilities of our data platform and product as we continue to scale our business.
  • Become intimately familiar with data pipelines, cloud infrastructure, supply chain and logistics fundamentals, and a whole host of other technologies and concepts. If you know these technologies in advance, great, and if not, you'll be learning as you go.
  • Contribute to Alloy.ai’s engineering culture by bringing your ideas into our product development cycle.

What We Are Looking For

  • A Bachelor’s degree in a quantitative discipline (e.g., computer science, statistics, mathematics) or a related field.
  • 1-2 years of work experience, including internships, in a similar position working with data pipelines and Extract, Transform, Load (ETL) processes.
  • A good understanding of core Data Engineering concepts such as ETL, batch vs. stream processing, data modeling, and data warehousing.
  • Fluency in at least one object-oriented programming language, preferably with 1-2 years of work experience in Python. Bonus points for work experience in Java.
  • Familiarity with relational databases (e.g., Postgres) and writing database queries. Bonus points for working knowledge of one or more cloud service suites (AWS, Google Cloud, Microsoft Azure).
  • You have experience with, or an interest in learning, how the global supply chain works, from retail sales and inventory data to the tracking of orders and shipments.
  • You have a genuine desire to help and work with other engineers via mentorship, pairing, and code reviews.
  • You can change course and reprioritize tasks effectively in the event of a data outage.
  • You communicate clearly, both verbally and in writing, with a knack for translating technical details for non-technical audiences.
This is a hybrid role based in Washington, DC. Our company defines hybrid as 3+ days/week in the office when not on vacation.
Unfortunately, remote candidates will not be considered for this role.