What is Digital Grading Co?
Digital Grading Co (DGC) is an AI-powered card grading app currently generating significant buzz across Pokemon and TCG communities on TikTok and Instagram. The premise is simple: scan your card with your phone camera, and the app returns a predicted grade — giving you a data point before deciding whether to submit to PSA, CGC, or TAG.
At its best, this is genuinely useful. Grading fees in Singapore run from roughly SGD $25 to $80+ per card depending on service tier and turnaround. A reliable pre-submission screener could help collectors avoid sending cards that won't hit the grade they need to be profitable. The problem is the gap between what DGC promises and what the current version delivers.
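As a back-of-envelope illustration of that screening decision, here's a minimal expected-value sketch. All prices and probabilities below are hypothetical placeholders, not real market data — the point is only that a pre-submission grade estimate feeds directly into the `p_hit` term:

```python
# Rough expected-value sketch for the "is grading worth it?" decision.
# All numbers are hypothetical placeholders, not real market data.

def grading_ev(p_hit: float, hit_price: float, miss_price: float,
               raw_price: float, fee: float) -> float:
    """Expected profit of grading vs. selling raw, given an estimated
    probability p_hit of reaching the target grade."""
    expected_sale = p_hit * hit_price + (1 - p_hit) * miss_price
    return expected_sale - fee - raw_price

# Hypothetical card: SGD 150 raw, SGD 400 as a PSA 10, SGD 160 as a PSA 9,
# SGD 50 grading fee. A screener's job is to sharpen the p_hit estimate.
print(grading_ev(p_hit=0.6, hit_price=400, miss_price=160, raw_price=150, fee=50))
```

A reliable screener doesn't change the prices; it changes how much you trust your own `p_hit` guess before paying the fee.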
App & UX Problems
The App Store reviews tell a consistent story. Across dozens of reviews from March to May 2026, the same complaints appear repeatedly — and they're not minor polish issues.
- The app demands a paid subscription ($4.99–$15+/month) before you can scan a single card. No free demo, no grace period.
- Multiple reviewers report scan failures even with a tripod, white background, and perfect lighting. When it does scan, cards are sometimes flipped upside-down.
- One reviewer scanned the same card six times under identical conditions and received five different scores ranging from 6 to 10.
- Even after paying a monthly subscription, fast-tracked grading results cost extra. Free-tier "deep grades" can take up to 4 hours.
- Reports of being locked out of paid accounts, being charged again to log back in, and profile-creation loops on signup.
- One user reported the app recognised only 2 out of ~100 cards from a typical PSA submission batch. High-end cards requiring removal from sleeves add handling risk.
What stands out is the influencer-driven marketing context. Multiple reviewers note that they downloaded the app after seeing it promoted by Pokemon and TCG influencers. The disconnect between influencer endorsements and App Store reality is stark.
"I have been collecting about 100 cards to grade over the past year. This app only recognized 2 cards — does that suggest high accuracy? It also "recognized" nonexistent corner whitening against their own suggested white background."
"I tested the same card six times under identical conditions: one scan failed, and the others gave me five different scores between 6 and 10. Don't bother with this app if you're looking for accuracy."
PSA Accuracy — 21 Cards Tested
Despite the app experience problems, the underlying AI prediction model tells a different story. Community members submitted 21 cards to PSA and recorded both the DGC prediction and the final PSA result.
The most important finding: DGC never over-predicts the final whole grade. Its small overshoots (0.1–0.4 points) all fall within the same whole grade PSA awarded, and every case where rounding DGC's score gives the wrong grade is one where DGC was more conservative than PSA ended up being — borderline PSA 9/10 cards where DGC returned 9.3–9.4 but the card came back a PSA 10. From a collector's perspective, this is the right failure mode. The app won't talk you into submitting a PSA 8 card expecting a 10 — it will occasionally talk you out of submitting a card that was actually a 10.
| # | DGC Score | PSA Grade | Diff | Direction |
|---|---|---|---|---|
| 1 | 9.8 | 10 | 0.2 | DGC conservative |
| 2 | 9.3 | 10 | 0.7 | DGC conservative |
| 3 | 9.5 | 10 | 0.5 | DGC conservative |
| 4 | 6.8 | 7 | 0.2 | DGC conservative |
| 5 | 8.4 | 8 | 0.4 | DGC inflated |
| 6 | 9.3 | 9 | 0.3 | DGC inflated |
| 7 | 10 | 10 | — | Exact |
| 8 | 8 | 8 | — | Exact |
| 9 | 9.1 | 9 | 0.1 | DGC inflated |
| 10 | 9.2 | 9 | 0.2 | DGC inflated |
| 11 | 9.8 | 10 | 0.2 | DGC conservative |
| 12 | 9.1 | 9 | 0.1 | DGC inflated |
| 13 | 9.4 | 10 | 0.6 | DGC conservative |
| 14 | 9.2 | 9 | 0.2 | DGC inflated |
| 15 | 9.2 | 9 | 0.2 | DGC inflated |
| 16 | 9.5 | 10 | 0.5 | DGC conservative |
| 17 | 9.2 | 9 | 0.2 | DGC inflated |
| 18 | 8 | 8 | — | Exact |
| 19 | 8.4 | 8 | 0.4 | DGC inflated |
| 20 | 9.2 | 9 | 0.2 | DGC inflated |
| 21 | 10 | 10 | — | Exact |
Diff = |DGC − PSA|. All 21 cards within 1 grade. Community-submitted test results, May 2026.
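The headline numbers can be recomputed straight from the table. A minimal sketch (values transcribed by hand from the table above, so check them against it):

```python
# DGC predictions and final PSA grades, transcribed from the table above.
dgc = [9.8, 9.3, 9.5, 6.8, 8.4, 9.3, 10, 8, 9.1, 9.2, 9.8,
       9.1, 9.4, 9.2, 9.2, 9.5, 9.2, 8, 8.4, 9.2, 10]
psa = [10, 10, 10, 7, 8, 9, 10, 8, 9, 9, 10,
       9, 10, 9, 9, 10, 9, 8, 8, 9, 10]

diffs = [abs(d - p) for d, p in zip(dgc, psa)]
mae = sum(diffs) / len(diffs)                        # mean absolute error, ~0.25
within_one = sum(diff <= 1 for diff in diffs)        # 21 of 21 cards
conservative = sum(d < p for d, p in zip(dgc, psa))  # DGC below PSA: 7 cards

print(f"MAE {mae:.2f}, within 1 grade {within_one}/21, conservative misses {conservative}")
```

The mean absolute error of roughly a quarter of a grade, with every card inside one grade, is what the rest of this section unpacks.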
TAG Accuracy — 14 Cards Tested
TAG (Tri-Star Authentics Grading) uses a half-grade scale (8, 8.5, 9, 9.5, 10), which maps more naturally onto DGC's continuous numeric output than PSA's whole-grade scale does. The alignment across 14 tested cards is the strongest data point in DGC's favour.
All 14 tested cards landed within 0.5 of the final TAG grade. The average DGC prediction was 9.55 vs an average TAG result of 9.54, a difference of roughly 0.01. This is the strongest evidence that the AI model itself has genuine predictive signal. The grading model appears well-calibrated to TAG's standards.
| # | DGC Score | TAG Grade | Diff | Direction |
|---|---|---|---|---|
| 1 | 8.5 | 8.5 | — | Exact |
| 2 | 10 | 10 | — | Exact |
| 3 | 9.3 | 9 | 0.3 | DGC inflated |
| 4 | 10 | 10 | — | Exact |
| 5 | 8.8 | 8.5 | 0.3 | DGC inflated |
| 6 | 9.7 | 10 | 0.3 | DGC conservative |
| 7 | 9.8 | 10 | 0.2 | DGC conservative |
| 8 | 10 | 10 | — | Exact |
| 9 | 10 | 10 | — | Exact |
| 10 | 9.8 | 10 | 0.2 | DGC conservative |
| 11 | 10 | 10 | — | Exact |
| 12 | 9.4 | 9 | 0.4 | DGC inflated |
| 13 | 10 | 10 | — | Exact |
| 14 | 8.4 | 8.5 | 0.1 | DGC conservative |
Diff = |DGC − TAG|. All 14 cards within 0.5 grades. Community-submitted test results, May 2026.
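The averages and the within-0.5 count can be verified directly from the table (values transcribed by hand, so check them against it):

```python
# DGC predictions and final TAG grades, transcribed from the table above.
dgc = [8.5, 10, 9.3, 10, 8.8, 9.7, 9.8, 10, 10, 9.8, 10, 9.4, 10, 8.4]
tag = [8.5, 10, 9, 10, 8.5, 10, 10, 10, 10, 10, 10, 9, 10, 8.5]

avg_dgc = sum(dgc) / len(dgc)   # average DGC prediction, ~9.55
avg_tag = sum(tag) / len(tag)   # average TAG result, ~9.54
within_half = sum(abs(d - t) <= 0.5 for d, t in zip(dgc, tag))  # 14 of 14

print(f"avg DGC {avg_dgc:.2f} vs avg TAG {avg_tag:.2f}, within 0.5: {within_half}/14")
```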
Key Findings
The algorithm has real signal — when it actually scans your card
Across 35 cards tested against PSA and TAG, DGC's AI predictions are meaningfully accurate. The model never over-predicts by a full grade, leans conservative on borderline cases, and aligns near-perfectly with TAG's half-grade scale. If the scanning worked reliably, this would be a legitimately useful tool.
DGC is a conservative predictor vs PSA
Six of the seven under-predictions in the PSA dataset follow the same pattern: DGC returns a score between 9.3 and 9.8, and the card comes back PSA 10 (the seventh was a 6.8 that graded PSA 7). This matters because PSA's grading is binary at the top — a card is either a 10 or it isn't. DGC's continuous output doesn't map cleanly onto that threshold, and cards in the 9.3–9.8 DGC range should still be considered PSA 10 candidates.
TAG is the better benchmark for DGC predictions
The near-perfect alignment between DGC scores and TAG grades (14/14 within 0.5) suggests the AI's continuous numeric output was likely calibrated against a half-grade scale. If you're using DGC as a pre-submission screen, consider what a DGC score of 9.5+ means for TAG vs PSA — they have different implications.
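As an illustration, here is one plausible way to read a single DGC score against both scales. The mapping functions and the 9.3 threshold are our own interpretation of the 21-card community sample, not anything published by DGC:

```python
# One plausible reading of a continuous DGC score against each company's
# scale. This is our interpretation of the community data, not an official
# DGC mapping.

def nearest_tag_grade(score: float) -> float:
    """Snap to TAG's half-grade scale (..., 8.5, 9, 9.5, 10)."""
    return min(round(score * 2) / 2, 10.0)

def psa_outlook(score: float) -> str:
    """Heuristic PSA read: in the 21-card sample, every card DGC scored
    9.3 or above graded PSA 9 or 10, and all conservative misses
    clustered in that range."""
    if score >= 9.3:
        return "PSA 10 candidate"
    return f"likely PSA {int(score)}-{int(score) + 1}"

print(nearest_tag_grade(9.3), psa_outlook(9.3))
```

The same 9.3 reads as a near-lock 9.5 on TAG's scale but only a coin-flip 10 candidate on PSA's, which is why the two benchmarks need different interpretations.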
The scanning reliability problem invalidates the accuracy data for most users
The accuracy numbers above come from the subset of cards that DGC successfully scanned and returned a score for. If scanning succeeds only 15% of the time — as some reviewers report — the pool of actually-scannable cards may be self-selecting for easier-to-read cards, which could make the accuracy look better than it is for a full submission batch. The six-scores-on-one-card problem is even more concerning: if the same card returns grades from 6 to 10, the average might be accurate but any individual scan is not.
One Reddit data point contradicts the accuracy findings
One Reddit user reported that a "fully damaged card" received an 8.5 grade from DGC — which would represent a serious calibration failure. This conflicts with the PSA and TAG comparison data. Whether this reflects a card misidentification (DGC scanning the wrong card in its database) or a model failure is unclear. It's worth noting as a real failure case, not an outlier to dismiss.
"Please don't get scammed. I tested it with a fully damaged card — like all in pieces — and got an 8.5 grade."
Verdict
The grading model works. The product doesn't — yet.
Digital Grading Co's AI has genuine predictive accuracy against both PSA and TAG grades, tends to be conservative rather than optimistic, and shows near-perfect calibration against TAG's half-grade scale. That's a solid foundation. But the current app — aggressive paywall, reported scan failure rates as high as ~85%, wildly inconsistent repeated scores, billing bugs — makes it nearly unusable in practice. Until scanning reliability is fixed and the paywall offers a meaningful trial, we'd hold off on subscribing.
If you're planning a grading submission in Singapore right now, use tcgTalk's price comparison tool to check whether the PSA 10 premium on your card justifies the grading cost first — that's the most important filter before worrying about AI prediction accuracy.
Frequently Asked Questions
Is Digital Grading Co accurate?
Based on 35 community-tested cards, the AI prediction model is reasonably accurate. All 21 PSA-tested cards landed within 1 grade of the final PSA result, and all 14 TAG-tested cards were within 0.5 of the TAG grade. The AI tends to be conservative rather than inflating scores.
Does Digital Grading Co inflate grades?
No. Small overshoots of up to 0.4 points occur, but they stay within the whole grade PSA awarded. Every meaningful miss is an under-prediction, particularly on borderline PSA 9/10 cards where DGC returns 9.3–9.4 but the card achieves PSA 10. This is the safer failure mode for collectors.
Is Digital Grading Co worth the subscription cost?
The algorithm has merit, but the app experience has serious reliability issues — scanning fails most of the time, the same card can return wildly different scores across scans, and the paywall starts before you can try any feature. We'd wait for a more stable version before paying.
How does Digital Grading Co compare to PSA grading?
DGC predicts PSA grades with solid accuracy — 19 of 21 tested cards (90%) matched the final PSA grade once DGC's score is rounded to the nearest whole grade, and all 21 were within 1 grade. However, it under-predicts on borderline 9/10 cards, so a DGC score of 9.3–9.8 may still be a PSA 10.
How does Digital Grading Co compare to TAG grading?
TAG alignment is strong — all 14 tested cards were within 0.5 grades of the TAG result, with an average DGC prediction of 9.55 vs an average TAG grade of 9.54. TAG's half-grade scale appears to be a better match for the AI's continuous output than PSA's whole-grade rounding.
Data sourced from community-submitted test results (App Store reviews, Reddit). PSA sample: 21 cards. TAG sample: 14 cards. Analysis by tcgTalk. Updated May 10, 2026. This guide will be updated as more community data becomes available.
