
Lenskart PM Interview — Eyewear Discovery Chatbot

Start the interview now · ₹99 · 20 min · 1 credit · scorecard at the end
Field: Product Management
Company: Lenskart
Role: Product Manager
Duration: 20 min
Difficulty: Medium
Completions: New
Updated: 2026-05-16

What this round is about

  • Topic focus. You design a conversational chatbot that guides a shopper through eyewear discovery and prescription help on an Indian eyewear app where most buyers are first-time customers.
  • Conversation dynamic. A Lenskart Product Manager runs the round, opens by asking who you are designing for, and interrupts to pull you back to a real user whenever you drift into abstraction.
  • What gets tested. User segmentation before features, handling the no-prescription and progressive-lens cases, choosing one primary bet under a deadline, and defining success metrics with a denominator and a guardrail.
  • Round format. A spoken product-design conversation of roughly nineteen minutes, with live follow-ups and pushback rather than a silent presentation.

What strong answers look like

  • Segment-first framing. You name distinct users before any feature, for example a first-time prescription-unaware buyer, a repeat buyer replacing a known prescription, and a fashion-led shopper, and you say which one you are designing for first.
  • Job before features. You state the single job the assistant is hired to do for your primary segment and the riskiest assumption in it, for example that the user can supply a valid prescription.
  • Edge cases as design. You validate prescription inputs in plain language and route progressive-lens or no-prescription users to a home eye-test or store fitting instead of guessing.
  • Metrics with a denominator. You close with one success metric, its denominator and time window, plus a guardrail such as return rate so conversion is not bought with bad fit.
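
To make the "metric with a denominator" advice concrete, here is a minimal sketch of what a well-defined success metric looks like in code. The field names, the 7-day window, and the data shape are assumptions for illustration only, not real Lenskart telemetry:

```python
# Illustrative sketch: one success metric with an explicit denominator and
# time window, paired with a guardrail so conversion is not bought with bad fit.
# All field names and the 7-day window are assumptions, not real product data.

def assistant_metrics(sessions: list[dict]) -> dict:
    """Chat-to-purchase conversion over a 7-day window, guarded by return rate."""
    window = [s for s in sessions if s["days_ago"] <= 7]   # time window
    purchases = [s for s in window if s["purchased"]]
    returns = [s for s in purchases if s["returned"]]
    return {
        # denominator stated explicitly: all chat sessions in the window
        "conversion": len(purchases) / len(window),
        # guardrail: returns as a share of purchases driven by the assistant
        "return_rate_guardrail": len(returns) / max(len(purchases), 1),
    }
```

Saying "conversion of assistant sessions to purchase within 7 days, guarded by return rate on those purchases" out loud is exactly the shape the interviewer is probing for.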

What weak answers look like (and how to avoid them)

  • Feature list with no user. Listing chatbot capabilities before naming a single segment. Mitigation: spend your first ninety seconds entirely on who and why.
  • Assumes a valid prescription. Designing only for the confident buyer who knows their numbers. Mitigation: design the first-time, no-prescription buyer as the default path.
  • Ungrounded recommendations. Suggesting frames with no inventory or price check. Mitigation: state that every recommendation reads live stock and price.
  • Metric without a denominator. Saying you will improve conversion with no base, window, or guardrail. Mitigation: always attach a denominator and one guardrail metric.

Pre-interview checklist (2 minutes before you start)

  • Recall three eyewear buyer segments. Have a first-time prescription-unaware buyer, a repeat buyer, and a fashion-led shopper ready to name on turn one.
  • Identify the prescription problem. Be ready that many shoppers lack their prescription and do not know their pupillary distance.
  • Think of the no-prescription path. Have a routing answer to home eye-test or store fitting for users who cannot supply numbers.
  • Pull up one success metric. Decide a single metric, its denominator, time window, and a guardrail before you are asked.
  • Have one cut ready. Decide what you would deliberately not build first if given two weeks.

How the AI behaves

  • Probes every claim. Asks for the segment, the number, or the baseline behind any statement and never accepts the first answer without a follow-up.
  • No mid-interview praise. Will not say "great answer" or validate you; it acknowledges one specific detail, then pushes deeper.
  • Interrupts on abstraction. Pulls you back to a named user and a number whenever you recite a generic framework.
  • Verifies impressive claims. If you cite a metric or outcome it asks for the baseline and how you isolated your contribution.

Common traps in this type of round

  • Framework recital. Naming a product framework instead of reasoning from the eyewear discovery problem itself.
  • Dead-ended user. Leaving a no-prescription or progressive-lens user stuck in chat with no route to a test or store.
  • Everything is priority one. Listing every idea as high priority with no sequencing under the deadline.
  • Headline metric without slice. Quoting conversion or sales with no denominator, time window, or user slice attached.
  • Ignoring the offline lever. Never using the home eye-test network or store handoff as a product mechanism.
  • Pretty UI, broken trust. Recommending frames the assistant cannot confirm are in stock at the stated price.

Interview framework

You will be scored on these 6 dimensions. The full rubric with definitions is below.

  • Buyer Segmentation Rigor — 22%. How clearly you split eyewear shoppers into distinct needs and pick one to design for before any feature.
  • Prescription And Fit Handling — 22%. How well your flow handles the no-prescription, wrong-entry, and progressive-lens cases instead of assuming a clean buyer.
  • Inventory-Grounded Recommendation — 14%. Whether frame suggestions are tied to real stock and price rather than free-floating recommendations.
  • Prioritization Under Constraint — 20%. Whether you commit to one bet under the deadline and name the explicit cut, instead of listing everything as priority one.
  • Metric Definition Discipline — 14%. Whether success metrics carry a denominator, a time window, and a guardrail rather than a headline number.
  • India Eyewear Context Use — 8%. Whether you ground choices in the Indian market reality such as unorganized retail and low online conversion.

What we evaluate

Your final scorecard breaks down across these dimensions. The full rubric and tier criteria are revealed inside the interview itself.

  • Buyer Segment Evidence — 20%
  • Prescription And Fit Edge-Case Handling — 20%
  • Inventory-Grounded Recommendation Rigor — 13%
  • Prioritization And Constraint Recalibration — 17%
  • Success Metric Definition Discipline — 16%
  • India Eyewear Context Grounding — 9%
  • Product Judgment Self-Awareness — 5%

Common questions

What does the Lenskart PM product-design round actually test?
It tests whether you can design a conversational eyewear chatbot grounded in real user segments and the Indian eyewear market, not whether you can recite a framework. The interviewer pushes on prescription literacy, fit trust, and the unorganized retail context. You are evaluated on segmenting users before listing features, defining the user job, defending a single primary bet under a deadline, handling the no-prescription and progressive-lens edge cases by routing to home eye-tests or stores, and naming success metrics with a denominator and a guardrail. First-principles reasoning beats memorized templates throughout.
How should I structure my answer in a product-design round like this?
Start by naming who you are designing for before touching any feature, because the interviewer opens by asking exactly that. Segment users such as first-time prescription-unaware buyers, repeat buyers, and fashion-led shoppers, then state the user job and the riskiest assumption. Pick one primary bet and say what you are deliberately not building. Walk the conversation flow for one segment end to end, including the prescription and fit edge cases. Close with one success metric, its denominator, and a guardrail. Keep it concrete and tied to numbers.
What are the most common mistakes candidates make in this round?
The biggest mistake is jumping to a chatbot feature list before segmenting users. Others include assuming every shopper can supply a valid prescription, recommending frames without grounding them in inventory or price, dead-ending users who have no prescription instead of routing them to a home eye-test or store, stating metrics with no denominator or guardrail, listing every idea as high priority with no sequencing, and reciting a named product framework instead of reasoning from the eyewear discovery problem itself. Each of these maps to a real rejection pattern.
How is this AI interviewer different from a real Lenskart interviewer?
It behaves like a calibrated mid-level Lenskart PM but never breaks character, never praises mid-interview, and never gives you the answer or names the framework you should use. It acknowledges one specific thing you said, then probes or raises an objection every single turn. It verifies impressive claims by asking for the baseline and how you isolated your impact. It is consistent and unbiased on delivery style, scoring your reasoning rather than your fluency, so non-native English speakers and quieter candidates are evaluated on ideas alone.
How is scoring done in this practice interview?
Your transcript is scored against role-specific dimensions such as user segmentation rigor, eyewear domain grounding, prescription and fit edge-case handling, prioritization under constraint, and metric definition discipline. Each dimension has observable signals and band-by-band anchors, so two evaluators would land within a few points. You receive a scorecard naming the trade-off you could not justify and the user segment you left undefended. Live tracker elements tick off as you cover each must-have, and the post-session report explains every dimension with concrete examples from what you said.
What should I do in the first two minutes of this round?
Do not start listing chatbot features. Spend the opening on the user: name two or three distinct segments, including the first-time buyer who does not know their prescription, and state the single job the assistant is hired to do for your primary segment. Name the riskiest assumption in that job. This directly answers the interviewer's opening ask, which is to say who you are designing for before any feature. It also sets up every later probe on prescription handling, fit trust, prioritization, and metrics on your terms rather than theirs.
How do I handle the prescription and pupillary-distance problem in my design?
Treat it as the central design problem, not an edge case. Around 47 percent of shoppers do not have their prescription on hand and about 90 percent do not know their pupillary distance, so the assistant must translate jargon into plain language, validate entries against sensible ranges, and flag likely errors before checkout. For progressive-lens or high-power cases, route the user to a home eye-test or a store fitting rather than guessing. Use the offline eye-test network and store handoff as a feature, so a no-prescription user is never dead-ended in the chat.
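The "validate entries against sensible ranges" idea can be sketched in a few lines. Everything below is a hypothetical illustration: the field names and the numeric ranges are plausible optical conventions chosen for the example, not Lenskart's actual validation rules:

```python
# Hypothetical plain-language prescription validator. The ranges are
# illustrative assumptions, not real Lenskart validation logic.

TYPICAL_RANGES = {
    "sphere": (-20.0, 20.0),   # dioptres
    "cylinder": (-6.0, 6.0),   # dioptres
    "pd_mm": (50.0, 75.0),     # pupillary distance in millimetres
}

def check_prescription(values: dict) -> list[str]:
    """Return plain-language flags for missing or implausible entries."""
    flags = []
    for field, (lo, hi) in TYPICAL_RANGES.items():
        v = values.get(field)
        if v is None:
            # never dead-end: route to a home eye-test or store instead
            flags.append(f"{field} is missing — offer a home eye-test or store visit")
        elif not lo <= v <= hi:
            flags.append(f"{field}={v} looks unusual — ask the user to double-check")
    return flags

print(check_prescription({"sphere": -2.5, "cylinder": -0.75, "pd_mm": 62}))
# -> []  (clean entry, proceed to frame recommendation)
print(check_prescription({"sphere": -25.0, "pd_mm": 62}))
# -> two flags: implausible sphere, missing cylinder
```

The design point is that every flag maps to a next step (re-check, home eye-test, store fitting) rather than an error message, which is what keeps the no-prescription user inside the funnel.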
What does a strong answer in this round actually sound like?
It opens with named user segments and the job for the primary one, not features. It states one riskiest assumption and one primary bet, and says out loud what is being cut and why. It walks the conversation flow for the first-time prescription-unaware buyer, validating prescription inputs and routing hard cases to a home eye-test or store. It grounds recommendations in real inventory and price, and it closes with a single success metric, its denominator, the time window, and a guardrail metric such as return rate so growth is not bought with bad fit.
How much eyewear domain knowledge do I need for the Lenskart PM round?
You do not need to be an optician, but you must show you understand why online prescription eyewear is hard in India. Know that the market is mostly unorganized, that conversion is low at roughly 2 to 4 percent, that return rates can spike toward 50 percent without a fit solution, and that virtual try-on materially lifts conversion and cuts returns. Use plain prescription vocabulary such as single vision versus progressive lenses and pupillary distance correctly. The interviewer rewards reasoning about trust and fit, not memorized lens specifications.
Why is user segmentation weighted so heavily in this product-design round?
Because a first-time buyer who has never had an eye test behaves nothing like a repeat buyer replacing a known prescription, and a fashion-led shopper cares about look over correction. A single generic flow under-serves all three. Candidates who skip segmentation tend to design a chat UI that assumes a valid prescription and a confident buyer, which is exactly the user who does not exist in the largest India segment. Naming segments early lets you justify the primary bet, the prioritization, and the metrics, which is why the interviewer probes it first.