Unlisted Shares Screener
Precize sells unlisted shares. Their users are serious investors who come with intent but no existing tool to help them screen what's available. I was brought in to build one from scratch. Six weeks, solo, no reference point in this market. 147 companies were available on the platform, and all of them needed to be explorable, filterable, and comparable in a way that had never been designed for this category before.
Year
2026
Scope
Product Design
Client
Precize
Duration
6 Weeks
WHERE IT STARTED
No existing screener meant no shortcuts. The first 2 weeks went into research: benchmarking screeners across listed equity, crypto, and mutual funds, and running 3 feedback sessions with a group of 8 active Precize users to understand how they actually think about unlisted shares.
Benchmarking came first. That was the wrong order. A few things users expected didn't show up in any reference I'd looked at, and discovering that mid-wireframe meant revisiting decisions I thought were already made. Next time, users come before benchmarks.
THE DECISIONS THAT ACTUALLY MATTERED
One mode or three?
Stakeholders wanted curated buckets, quick sets of shares grouped by theme. The client-serving team wanted open filters so users could build their own screens. Both were right for different users.
I pushed for three modes instead of choosing one: Pre-built Screens, Build Your Own, and Saved Strategies. It made the product harder to align on. But collapsing everything into one mode would've meant designing for a user who doesn't really exist.
3 modes shipped. 3 stakeholder review rounds to get there.
What actually belongs on a screener?
The internal team came in with a long metric wishlist. Price performance, availability, liquidity signals, fundamentals, all legitimate. But a screener loaded with everything stops being a screener.
The question I kept returning to: what does a user need in order to decide to look closer? Not to invest, just to keep going. That reframe helped cut the list. Metrics that didn't answer that question moved to the company detail page. Some of those calls needed defending in reviews. The argument was simple: too much upfront and users disengage before they even start.
Roughly half the requested metrics made it to the screener. The rest went deeper into the product where they belong.
Using AI tools without letting them do the thinking
Three weeks of design on a 0 to 1 product is tight. After research I used Figma Make and Lovable to generate rough layouts quickly. That compressed ideation but came with a real risk: AI output is generic by default, and Precize has a design system with a specific character.
The rule I set: nothing goes in front of a stakeholder until it's rebuilt through the design system. The speed was real. So was the discipline required to not get lazy about what came out.
5 screens designed across 3 scenarios. 10+ components built into the existing design system.
HOW IT SHIPPED
Three modes live. Metrics validated by 8 real users across 3 sessions. Design consistent with the existing system. The client team got their requirements and users got something that didn't exist anywhere in this market before.
Working in a startup meant reviews moved fast and decisions stuck: no lengthy approval chains, no design-by-committee. That kept the work sharp and the timeline on track despite the scope.
What I'd change: user sessions before benchmarking. The gap between what competitors do and what users expect in the unlisted space was bigger than anticipated. Finding that late created rework that earlier research would've avoided.