How we score every workshop.
The BSU Score is a single 0–100 number. Here's exactly what goes into it — openly, reproducibly, and without any proprietary secret sauce.
Rating
The Google rating, scaled linearly from 0–5 stars to 0–40 points. A 4.5-star workshop gets 36 pts. A 3.0-star workshop gets 24.
Review depth
Log-10 scaled review count. 10 reviews = 10 pts, 100 = 20 pts, 1,000+ = 30 pts. This keeps high-volume chains from drowning out honest small shops.
Completeness
3 points each (15 total) for: phone, website, hours, main photo, and verified owner. A claimed profile is a trust signal.
Quality bonus
Up to 15 bonus points for combining high rating (4.5+) with review volume (50+). Rewards consistency, not novelty.
From raw data to final score.
Collect verified data from Google Maps
We query the Google Maps API weekly to pull public data for every auto repair workshop in the UAE — ratings, review counts, categories, hours, photos, and verification signals. No self-reporting, no paid data feeds.
Calculate the rating score
The Google rating translates linearly into a 0–40 point contribution: 8 points per star. A perfect 5.0 earns 40 points, 4.0 earns 32, 3.0 earns 24, and so on. Unrated workshops contribute 0 here.
Measure review depth (logarithmically)
We use a log-10 scale for review volume, giving a maximum of 30 points. This prevents megacorps with 10,000 reviews from drowning out honest independent shops with 100 real ones. 10 reviews = 10 pts, 100 = 20 pts, 1,000 = 30 pts.
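The curve that produces those numbers is simply 10 × log₁₀(reviews), capped at 30. A minimal sketch (hypothetical function name; note that a single review scores 0 on this curve):

```python
import math

def review_depth_score(review_count: int) -> float:
    """Log-10 scaled review volume, capped at 30 points.

    10 reviews -> 10 pts, 100 -> 20 pts, 1,000+ -> 30 pts.
    """
    if review_count < 1:
        return 0.0
    return min(30.0, 10 * math.log10(review_count))
```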
Check profile completeness
Up to 15 points are awarded for having: a phone number, a website, published hours, at least one photo, and owner verification. A workshop that hasn't bothered to claim its own Google profile is a yellow flag.
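With 3 points per completed field, the check reduces to counting present fields. A sketch with illustrative field names (the real data model isn't published here):

```python
def completeness_score(profile: dict) -> int:
    """3 points per completed field, 15 max.

    Field names are illustrative placeholders, not the real schema.
    """
    fields = ("phone", "website", "hours", "photo", "verified_owner")
    return 3 * sum(1 for f in fields if profile.get(f))
```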
Apply quality bonus
A final bonus of up to 15 points goes to workshops that combine high ratings (4.5+) with substantial review volume (50+). This rewards consistency and prevents new workshops with 5 perfect reviews from jumping to the top of the rankings.
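The page says "up to 15 points" without publishing the exact ramp, so the sketch below grants the flat maximum whenever both thresholds are met; treat the ramp as an assumption:

```python
def quality_bonus(rating: float, review_count: int) -> float:
    """Consistency bonus for 4.5+ rating combined with 50+ reviews.

    The exact ramp behind "up to 15 points" isn't published, so this
    sketch awards the full bonus once both thresholds are met.
    """
    if rating >= 4.5 and review_count >= 50:
        return 15.0
    return 0.0
```

The final BSU Score is then the sum of the four components: rating (max 40) + review depth (max 30) + completeness (max 15) + quality bonus (max 15) = 100.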
Five honest tiers.
The elite. 4.5+ rating, 50+ reviews, complete profile, consistent operations. Roughly 3% of all workshops.
Strongly recommended. High rating, solid review volume, mostly complete profile. You won't go wrong here.
Reliable. Good rating, reasonable review count. Check recent reviews for specifics before deciding.
Minimum viable. A low rating, thin review count, or incomplete profile. Proceed with caution.
Insufficient data. Usually new workshops or ones with too few reviews to judge. Not "bad" — just unknown.
No games. No exceptions.
- We don't take money for rankings. Period. Not under any label.
- We don't include AggregateRating schema for thin data. If a workshop has fewer than 10 reviews, we don't publish a rating to Google's structured data.
- We don't scrape review text. Review counts and averages, yes. Individual review texts belong to Google.
- We don't delist workshops on request. You can claim your profile, update your info, and flag errors — but you can't buy your way to a higher score or pay to disappear.
- We don't hide the formula. Everything you just read is the entire algorithm. No AI black boxes, no secret weights.