

LiquiDonate CTO Aisya Aziz is turning the mess of returns and excess inventory into a disciplined, donation‑first supply chain, proving that unsellable doesn’t have to mean unusable. With the platform recognized by TIME as a 2025 Best Invention, her API‑first architecture, item‑level data model, and human‑centered leadership are helping major retailers route millions of products to thousands of nonprofits – and publish defensible impact metrics instead of squishy ESG math.
- API‑first, donation‑first platform that plugs into existing retail and logistics stacks.
- “Unsellable” is a deterministic, data‑driven state, not a manual judgment call.
- Matching optimizes for eligibility, capacity, cost, and impact—not just distance.
- Item‑level data, immutable events, and reconciled receipts keep ESG metrics defensible.
- AI is layered into matching, valuation, and forecasting without sacrificing auditability.
The Aisya Aziz Interview
At LiquiDonate, what architectural decision most enabled “donation-first” at scale without turning the platform into a custom integration project for every retailer and nonprofit?
The key architectural decision was building LiquiDonate as an API-first platform that integrates into the systems partners already use – whether that’s return management software, 3PLs, WMS/RMS, or e-commerce platforms like Shopify.
Instead of building custom workflows for every retailer or nonprofit, we created a standardized donation disposition API. Partners integrate once, and donation becomes a native outcome within their existing tech stack – without heavy engineering lift on either side.
What did recognition from TIME actually validate about the product—originality, efficacy, ambition, impact—and where did the platform still feel unfinished at the time?
Recognition from TIME validated that what we were building wasn’t just operationally useful – it was category-defining. It validated the originality of treating donation as a true supply-chain outcome, not a side workflow. It validated the ambition of rethinking reverse logistics around impact, and it validated that the model could work at real scale.
While I believe that our core services were strong, we were still maturing around partner self-service configuration, resiliency and developer experience to truly make it enterprise-grade at scale.
LiquiDonate positions itself as matching and moving “unsellable” inventory. What does “unsellable” mean in data terms (condition, category, compliance constraints, liability), and how is that classification enforced in workflows?
When we say “unsellable,” we don’t mean worthless – we mean inventory that can’t re-enter primary retail channels due to defined constraints. In data terms, that’s usually a combination of condition codes, return reason codes, resale eligibility flags, category restrictions, compliance rules, and sometimes liability or brand-protection constraints.
We also give retailers standard controls over how they define which items are eligible for donation. There are reasons a retailer might want certain items returned to the warehouse instead – a new product, for example: they may want to inspect the returned units for production issues before mass-producing it.
We also have AI that can review the image a customer provides to determine the outcome – for example, an item might fit every other parameter for donation but not be in good enough condition. (We don’t want to donate items that are torn or broken.) In cases like that, we simply ask the customer to throw the item away rather than send it for donation.
Architecturally, we treat “unsellable” as a deterministic state derived from structured inputs – not a manual judgment call. That’s what allows us to scale the workflow consistently across partners without introducing risk or ambiguity.
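To make the idea of a deterministic, rule-derived disposition concrete, here is a minimal sketch in Python. The field names and state names are hypothetical illustrations, not LiquiDonate's actual schema; the point is that the outcome is a pure function of structured inputs, never a manual judgment call.

```python
from dataclasses import dataclass

# Hypothetical field names for illustration; the real schema is not public.
@dataclass
class ItemDisposition:
    condition_code: str        # e.g. "NEW", "OPEN_BOX", "DAMAGED"
    return_reason: str         # retailer-supplied reason code
    resale_eligible: bool      # flag from the retailer's RMS
    restricted_category: bool  # compliance or category restriction
    liability_hold: bool       # brand-protection or liability constraint

def classify(item: ItemDisposition) -> str:
    """Derive a deterministic disposition from structured inputs."""
    if item.condition_code == "DAMAGED":
        return "DISCARD"               # torn/broken items are never donated
    if item.liability_hold or item.restricted_category:
        return "RETURN_TO_WAREHOUSE"   # retailer must inspect or retain
    if not item.resale_eligible:
        return "DONATE"                # unsellable but still usable
    return "RESTOCK"
```

Because the same inputs always produce the same state, the classification can be replayed and audited across every partner.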
Matching sounds simple until it isn’t: what are the top constraints the matching engine optimizes for (distance, capacity, timing windows, item suitability, nonprofit eligibility), and what trade-offs show up when optimizing for cost vs. impact?
At a high level, our matching algorithm considers multiple constraints, including cost, distance, capacity, and timing.
As a sustainability-forward company, we want to ensure that nonprofits receive items aligned with their specific needs. Sending too many items to a single nonprofit can sometimes lead to resale or fraud. Not all nonprofits have the space to store a pallet of goods, so we ensure capacity is built into the process.
This also means that some items may travel farther, but we always strive to secure the best pricing for our retail partners while ensuring nonprofit partners receive the items they truly need.
For larger items, timing becomes more complex. We must align customer timelines, carrier schedules, and nonprofit receiving availability to ensure smooth coordination. This was more difficult to build, but we’re extremely proud of the results and what we can uniquely do for Big & Bulky retail items such as appliances and mattresses.
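A rule-based, weighted-scoring matcher of the kind described above can be sketched roughly like this. The weights, field names, and scoring formula here are assumptions for illustration only: eligibility and capacity act as hard constraints, while need, distance, and cost trade off inside a weighted score.

```python
# Hypothetical sketch of a weighted-scoring matcher, not the actual engine.
def score(nonprofit: dict, item: dict, weights: dict):
    """Return a score, or None if a hard constraint fails."""
    if item["category"] not in nonprofit["accepted_categories"]:
        return None                      # eligibility is a hard constraint
    if nonprofit["remaining_capacity"] < item["units"]:
        return None                      # capacity is a hard constraint too
    return (weights["need"] * nonprofit["need_score"]
            - weights["distance"] * nonprofit["distance_miles"]
            - weights["cost"] * nonprofit["est_shipping_cost"])

def best_match(nonprofits: list, item: dict, weights: dict):
    """Pick the highest-scoring eligible nonprofit, or None."""
    scored = [(score(n, item, weights), n) for n in nonprofits]
    scored = [(s, n) for s, n in scored if s is not None]
    return max(scored, key=lambda sn: sn[0])[1] if scored else None
```

Raising the need weight relative to the distance weight is exactly the "items may travel farther" trade-off: impact wins over pure proximity.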
Returns and excess inventory are messy and fraud-prone. What metrics does the platform monitor to reduce waste, prevent abuse, and keep a clean chain of custody from retailer to recipient?
To reduce waste, prevent abuse, and maintain a clean chain of custody, our platform monitors several key aspects of the donation process. First, we keep nonprofit needs and capacity top-of-mind, ensuring that only organizations with the ability and space to receive items are matched. This prevents overburdening recipients and reduces waste.
We also implement equitable distribution across our 4,000+ nonprofit partners, tracking allocations to avoid funneling donations to a single organization and ensuring fair access. This helps reduce the risk of fraud, as some retailers have reported donated items ending up in the resale market. By considering each nonprofit’s intake capacity and distributing donations fairly, we create a win-win: supporting nonprofits while minimizing the chance of misuse.
To maintain transparency and accountability, we track discrepancies through a nonprofit-exclusive mobile app. Partners can provide immediate feedback if they receive incorrect items or quantities, creating a responsive feedback loop to retailers and maintaining a clean, auditable chain of custody.
By monitoring these metrics – capacity, eligibility, equitable distribution, fraud risk indicators, and receipt discrepancies – the platform ensures donations are distributed efficiently, fairly, and safely from retailer to recipient.
How is donation documentation handled end-to-end, from receipts to valuation inputs, along with audit trails, and how did those requirements shape the data model and reporting layer?
Donation documentation isn’t an afterthought in our system. From the moment an item is marked eligible for donation, we begin tracking the metadata required to generate compliant documentation later.
End-to-end, the flow runs from the moment an item is marked donation-eligible, through matching and shipment, to nonprofit confirmation of receipt and final receipt generation.
Architecturally, these requirements heavily shaped our data model. We couldn’t treat a donation as a simple transaction record. Instead:
- Donations are modeled at the item level, not just the shipment level.
- State transitions are event-driven to preserve auditability.
- Valuation fields are versioned and traceable to their source (retailer input vs. calculated logic).
- Receipts are generated from reconciled data, not assumed data.
The biggest design constraint was ensuring documentation is reproducible and defensible. That meant building around traceability, immutability of key events, and clear separation between operational data and compliance-facing outputs.
In short, documentation requirements didn’t sit on top of the system – they shaped the core data architecture from day one.
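The pattern described above – item-level records, immutable events, state derived by replay – can be sketched as an append-only event ledger. Event and state names here are hypothetical; the sketch only illustrates why receipts generated this way are reproducible from the audit trail.

```python
from dataclasses import dataclass

# Hypothetical event model; illustrates immutable events + derived state.
@dataclass(frozen=True)
class DonationEvent:
    item_id: str
    event_type: str      # e.g. "ELIGIBLE", "MATCHED", "SHIPPED", "RECEIVED"
    payload: dict        # source-traceable metadata (valuation input, etc.)

class DonationLedger:
    def __init__(self):
        self._events: list[DonationEvent] = []   # append-only log

    def record(self, event: DonationEvent) -> None:
        self._events.append(event)               # never updated in place

    def state(self, item_id: str) -> str:
        """Current state is derived by replaying events, so any receipt
        generated from it is reproducible and defensible."""
        events = [e for e in self._events if e.item_id == item_id]
        return events[-1].event_type if events else "UNKNOWN"
```

Because the log is never mutated, compliance-facing outputs can always be regenerated from operational data and checked against the original events.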
What did partnering with Loop Returns teach about embedding donation routing inside mainstream returns flows? And I’m guessing there is a story about something that broke when volume spiked, if you are willing to share.
Loop Returns:
With any integration – especially embedding donation into a mainstream returns flow – you’re not just wiring APIs together. You’re aligning assumptions about how routing logic is supposed to behave.
We had a case where a retailer expected an item to go to donation, but instead, the system generated a return label back to their warehouse. At first, it looked like donation routing had failed. It was an emotional day for the team, because that was not the experience we wanted to give the customer.
But the real issue lived in the gray area between configuration and expectation – eligibility rules, prioritization logic, and how certain fields were interpreted across systems. It’s so hard to predict where these systems might misalign, and it wasn’t clear at first what had even happened.
We had to go back and forth with their team to trace the payload, review the configuration, and walk through the routing decision step by step.
The biggest lesson was that integrations fail less from broken code and more from misaligned mental models.
After that, we improved observability — clearer logging on routing decisions, better validation on eligibility rules, and internal tooling to quickly answer: “Why did this order go where it did?”
Embedding donation into returns isn’t just a technical challenge — it’s a coordination challenge across systems and teams.
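The observability improvement described – being able to answer "why did this order go where it did?" – amounts to recording a decision trace alongside the routing outcome. This is a generic sketch with made-up rule names, not LiquiDonate's tooling.

```python
# Sketch of decision tracing: each routing step records which rule fired,
# so support can reconstruct why an order went where it did.
def route(order: dict, rules: list):
    """Apply rules in priority order; return (outcome, trace)."""
    trace = []
    for rule in rules:
        outcome = rule["check"](order)           # None means "rule did not fire"
        trace.append({"rule": rule["name"], "outcome": outcome})
        if outcome is not None:
            return outcome, trace
    return "DEFAULT_RETURN", trace               # fallback disposition

# Example rule set (hypothetical): donation eligibility checked first.
RULES = [
    {"name": "donation_eligible",
     "check": lambda o: "DONATE" if o.get("donation_eligible") else None},
]
```

With the trace persisted per order, a mismatch between retailer expectation and system behavior becomes a log lookup instead of a multi-team payload archaeology session.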
Scaling:
We hit a breaking point when volume spiked for returns that needed multiple shipping labels per unit, where there is typically only one.
Our original workflow assumed a relatively straightforward relationship between a return item and its labels. But in certain scenarios — like split shipments or multi-package handling — one unit needed multiple labels generated and tracked independently.
At low volume, it worked. At higher volume, concurrency exposed the flaw.
We started seeing partial label generation and shipping state inconsistencies because the system wasn’t designed to handle multiple labels tied to the same unit in a robust, idempotent way.
The fix required rethinking the label generation model. We decoupled label creation from the unit record, treated labels as first-class entities, and added stronger idempotency and queue controls to prevent duplication or partial completion under load.
It was a strong reminder that scale doesn’t create new problems — it reveals incorrect assumptions in your data model.
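The fix described – labels as first-class entities with idempotency controls – can be sketched like this. The key scheme and storage are assumptions for illustration; the point is that concurrent retries for the same (unit, package) pair can never produce duplicate or partial labels.

```python
import threading

# Sketch: labels keyed by an idempotency key derived from (unit, package),
# so retries and concurrent requests converge on one label per package.
class LabelStore:
    def __init__(self):
        self._labels: dict[str, str] = {}   # label records, first-class
        self._lock = threading.Lock()

    def create_label(self, unit_id: str, package_no: int) -> str:
        key = f"{unit_id}:{package_no}"     # idempotency key
        with self._lock:                    # serialize check-then-create
            if key not in self._labels:
                self._labels[key] = f"LABEL-{key}"   # placeholder label id
            return self._labels[key]
```

Decoupling labels from the unit record this way means a split shipment is just N label rows pointing at one unit, rather than a special case bolted onto a one-label assumption.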
LiquiDonate also runs in the Shopify ecosystem. What architectural patterns made multi-tenant, merchant-by-merchant deployment safe, predictable, and supportable?
1. Tenant-Isolated Data Layer
Each merchant’s data is logically separated using tenant identifiers at the database level. That ensures that operations, donations, and inventory metadata for one store never bleed into another, while still allowing the platform to scale on shared infrastructure.
2. API-First Integration with Scoped Tokens
Shopify and partner APIs use scoped OAuth tokens. Every request is tied to a specific merchant context, ensuring all operations are safe and auditable per tenant. There’s no risk of cross-merchant API leakage.
3. Configurable Per-Merchant Rules
Each merchant can define routing preferences, valuation logic, and nonprofit preferences without impacting other tenants. These per-merchant configurations are stored separately but leverage the same core workflow engine.
4. Observability and Scoped Logging
Logs, metrics, and error reporting are tenant-scoped. Support teams can debug merchant-specific issues without combing through other stores’ data. It also supports automated alerting on errors or SLA violations per merchant.
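The tenant-isolation pattern in point 1 can be sketched as forcing every read through a tenant-scoped handle, so unscoped queries are simply not expressible. This is a generic illustration, not LiquiDonate's data layer.

```python
# Sketch of tenant isolation: callers can only obtain a scoped view,
# so one merchant's rows can never leak into another's results.
class TenantScopedRepo:
    def __init__(self, rows: list):
        self._rows = rows                  # e.g. [{"tenant": "m1", ...}, ...]

    def for_tenant(self, tenant_id: str) -> "_ScopedView":
        return _ScopedView(self._rows, tenant_id)

class _ScopedView:
    def __init__(self, rows: list, tenant_id: str):
        self._rows = rows
        self._tenant = tenant_id           # fixed at construction time

    def donations(self) -> list:
        """Every query is automatically filtered by tenant."""
        return [r for r in self._rows if r["tenant"] == self._tenant]
```

In a real system the same idea shows up as a mandatory tenant-id predicate on every query (or row-level security), paired with the scoped OAuth tokens described above.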
The company reports keeping more than 15 million items out of landfills. What are the most defensible impact metrics tracked today, and how do engineering teams prevent “ESG math” from getting squishy?
1. Item-Level Disposition
Every donated item is logged with SKU, quantity, and condition. That lets us produce precise counts, rather than estimates, for both reporting and receipts.
2. Financial & Valuation Metrics
Fair market value (FMV) is captured or calculated using configurable rules tied to item metadata. Each FMV entry is traceable back to its source, ensuring auditability.
3. Environmental & Social Impact
Landfill diversion and community reach are computed from item-level data combined with partner location and logistics data. These calculations are deterministic, repeatable, and based on external factors (weight, distance, emission factors).
Essentially, every metric is traceable from the raw item data through to the dashboard or report, giving both partners and auditors confidence that the numbers are real and defensible.
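The deterministic, repeatable calculation described in point 3 can be sketched as a pure function over item-level data and fixed external factors. The emission factors below are placeholders, not real figures; the point is that identical inputs always yield identical metrics, which is what keeps the ESG math from getting squishy.

```python
# Sketch of a deterministic diversion metric. Factors are assumed values
# for illustration only, not real emission data.
EMISSION_FACTOR_KG_PER_KG = {"textiles": 3.0, "appliances": 1.5}

def diversion_metrics(items: list) -> dict:
    """Compute landfill diversion and avoided emissions from item-level data."""
    total_weight = sum(i["weight_kg"] * i["qty"] for i in items)
    co2_avoided = sum(
        i["weight_kg"] * i["qty"] * EMISSION_FACTOR_KG_PER_KG[i["category"]]
        for i in items
    )
    return {"landfill_diverted_kg": total_weight,
            "co2_avoided_kg": co2_avoided}
```

Because there is no sampling or estimation step, an auditor can recompute any dashboard number from the raw item rows and the published factors.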
As you think about future enhancements, what role is AI playing in planning and execution and how do you plan to integrate it into what you have already built?
AI is increasingly part of how we plan and execute donation workflows. Right now, we’re exploring several key areas:
1. Smarter Matching & Prioritization
Machine learning models can predict which nonprofits are most likely to accept certain items quickly, optimize routing for impact vs. cost, and dynamically adjust allocations as demand and capacity shift. This complements our current rule-based, weighted scoring engine without replacing it.
2. Automated Valuation & Categorization
AI can help classify items, detect condition or damage from photos, and suggest fair market values when retailer input is missing or incomplete. This reduces manual effort and increases accuracy in reporting and compliance.
3. Forecasting & Analytics
Predictive models can help retailers anticipate return flows, seasonal spikes, and nonprofit capacity, allowing better planning for logistics and inventory disposition.
Integration Approach
We plan to layer AI into existing services rather than rebuilding the stack. That means using AI outputs as advisory inputs for workflows and reports, while preserving the auditability, deterministic rules, and traceability our partners rely on. By combining AI insights with our proven API-first architecture and event-driven data model, we can improve efficiency and impact without compromising reliability or transparency.
On the leadership side: how did the combination of University of Maryland computer science and Carnegie Mellon University technology ventures change how technical decisions get made, staffed, and shipped in an early-stage CTO role, especially while parenting?
My background in Computer Science at the University of Maryland gave me the technical rigor to evaluate architectures, design scalable systems, and anticipate trade-offs. Meanwhile, Carnegie Mellon’s Technology Ventures program gave me a lens for product-market fit, resource prioritization, and the operational side of shipping a business – essentially teaching me to balance technical excellence with business impact.
As an early-stage CTO, that combination shaped how we made technical decisions: I could weigh engineering trade-offs quickly but always through the lens of impact and feasibility for the company. Staffing decisions were influenced by understanding which roles were truly leverageable – bringing in the right engineers for the immediate priorities rather than trying to build a full org upfront. And shipping was iterative: we leaned on lean processes, automated pipelines, and clear product metrics to ensure every release delivered measurable value.
On top of that, parenting added another layer of discipline. It reinforced the importance of focus, clear priorities, and realistic timelines – both personally and professionally. I became more deliberate in delegating, setting expectations, and creating processes that would allow the team to execute autonomously, which is critical in an early-stage environment.
About Aisya Aziz, CTO, LiquiDonate

Aisya Aziz is the Chief Technology Officer at LiquiDonate, where she architected and led the engineering team behind the company’s core platform—technology that was recognized by TIME magazine as a 2025 Best Invention for its role in transforming excess retail inventory into a scalable, donation-first solution. With a Master’s degree in Technology Ventures from Carnegie Mellon University and a Computer Science background from the University of Maryland, Aisya brings both deep technical rigor and product-market intuition to building systems that operate at the intersection of commerce, sustainability, and social impact. As an early-stage startup CTO and a working mom, she leads with a pragmatic, human-centered approach to scaling teams and infrastructure while delivering mission-critical software used by major retailers and thousands of nonprofits.
For more serious insights on AI, click here.
For more serious insights on management, click here.
Did you find the interview with Aisya Aziz useful? If so, please like, share or comment. Thank you!
The cover image is AI-generated from the author’s prompt and source photos by Aisya Aziz.

