Methodology

How the Competitor Portal works — what each surface shows, what backs each claim, and how to use it during a meeting or a sales call.

What's in the portal

Five competitors, the spoofing-and-edge-case test record against each, the operators we tested at, and the weekly-sync archive.

  • Detection matrix — Every competitor × every spoofing technique we test for. The headline view — click any cell to see the capability evidence plus any linked test results.
  • Updates feed — Every test result and intel update, newest first. Three-tab filter at the top (All · Testing · Social media) so test-grade findings and Reddit / Play-Store chatter never live in the same pile.
  • Weekly Sync — The 30-min weekly meeting in the team's own five-section format (Testing · Research · Win/Loss · End-user feedback · What's next). Designed to be opened in a browser during the meeting; the archive holds every past week.
  • Search (⌘K / Ctrl K) — Global search across competitors, updates, operators, weekly meetings, and threat categories. Tab-scoped, and weighted so direct name matches rank highest.

How the Detection matrix is graded

Cell outcomes are not opinion — they're driven by structured data. Two layers of proof back every coloured cell, plus one verification rule.

  1. Capability layer — each competitor profile under /competitors declares a support value per spoofing capability (yes / partial / no / unknown) and a confidence grade. The cell colour comes from this.
  2. Test-result layer — each finding in the Updates feed declares its outcome, categories, competitors, and operators. When a cell is clicked, the drill-down lists the matching real tests. Social-media posts and intel items are filtered out — only outcome-bearing internal tests count as proof in the matrix.
The verification rule sits on top: a capability marked confidence: verified must be backed by either a real linked test or an evidence URL (Jira ticket / Drive recording). Cells that claim "verified" without proof get demoted to untested.
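A minimal sketch of that grading logic, assuming illustrative type and field names (Capability, TestResult, gradeCell) rather than the portal's actual schema:

```ts
// Layer 1: the capability claim a competitor profile declares.
type Support = "yes" | "partial" | "no" | "unknown";
type Confidence = "verified" | "inferred" | "rumor";

interface Capability {
  support: Support;
  confidence: Confidence;
  evidenceUrl?: string; // Jira ticket / Drive recording
}

// Layer 2: an outcome-bearing internal test linked to the cell.
interface TestResult {
  outcome: "detected" | "partial" | "missed";
}

type CellState = Support | "untested";

function gradeCell(cap: Capability, linkedTests: TestResult[]): CellState {
  // Verification rule: "verified" with no linked test and no evidence URL
  // is demoted to untested instead of being shown as proven.
  if (cap.confidence === "verified" && linkedTests.length === 0 && !cap.evidenceUrl) {
    return "untested";
  }
  // Otherwise the cell colour comes straight from the capability layer.
  return cap.support;
}
```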

What an outcome means

Every finding carries one of four outcomes. Outcomes drive the matrix-cell colour and the order of the Strongest-findings strip.

  • Detected — The competitor's product correctly flagged the spoofing technique in every trial. Their detection works for that vector.
  • Partial — Mixed result: caught some variants but not others, or different outcomes across operators. The matrix shows "partial" whenever outcomes differ across tests so the nuance is visible (see the sketch after this list).
  • Missed — The product did not flag the technique in any trial. From a sales perspective, these are the headline findings — their misses are our wins.
  • Intel — Context the team should know but that doesn't fit a pass/fail: architectural observations, news, partnerships, social-media signals, operator migrations. Intel never counts as proof in the matrix; it lives in the Updates feed under the Social media / All tabs.
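The "partial whenever outcomes differ" rule reduces to a small fold over the linked tests. A sketch, assuming a hypothetical aggregateOutcome helper (intel items are excluded before this ever runs):

```ts
type Outcome = "detected" | "partial" | "missed";

function aggregateOutcome(outcomes: Outcome[]): Outcome | "untested" {
  if (outcomes.length === 0) return "untested";
  // Any disagreement across trials or operators surfaces as "partial"
  // so the nuance stays visible in the matrix.
  const unique = new Set(outcomes);
  return unique.size === 1 ? outcomes[0] : "partial";
}

// aggregateOutcome(["missed", "missed"])   → "missed"
// aggregateOutcome(["detected", "missed"]) → "partial"
```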

Confidence levels

Capability claims on each competitor profile carry a confidence grade. Use this to decide whether to quote the cell directly in a customer conversation.

  • verified — Backed by an internal test on record (Jira CIV-XX, a recorded testing video, or the vendor's signed RFP / SDK source analysis). Quote freely.
  • inferred — Reasoned from public marketing material, analyst write-ups, or comparable products in the same category. Treat as directional, not quote-grade.
  • rumor — Heard in a sales call, on Slack, or from a prospect. Recorded so the signal isn't lost — never repeated without verification.

Cells unverified for more than 120 days get a stale flag on the dashboard. If the claim hasn't been re-tested in that window, drop confidence one tier before the next sales conversation that depends on it.
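In code, the staleness rule is a date check plus a one-tier demotion map. The demotion order (verified → inferred → rumor) follows from the tiers above; the field names and helper are assumptions:

```ts
type Confidence = "verified" | "inferred" | "rumor";

const STALE_AFTER_DAYS = 120;
const DEMOTE: Record<Confidence, Confidence> = {
  verified: "inferred",
  inferred: "rumor",
  rumor: "rumor", // nowhere lower to go
};

function effectiveConfidence(conf: Confidence, lastVerified: Date, now = new Date()): Confidence {
  const ageDays = (now.getTime() - lastVerified.getTime()) / 86_400_000;
  return ageDays > STALE_AFTER_DAYS ? DEMOTE[conf] : conf;
}
```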

The Updates feed

Every test result and intel update lives at /updates. A type toggle and three filter rows keep it scannable.

  • Type toggle — All / Testing / Social. Testing = entries with a real outcome (detected / partial / missed). Social media = Reddit, X/Twitter, Play Store / App Store chatter about operator deployments — interesting signal but never proof.
  • Three filter rows — Competitor → Operator → Threat. Combine any of the three (see the sketch after this list). The Threat row mirrors the Detection-matrix columns (VPN / Proxy / RDP / FLA / GPS spoofer / Emulator / Device farm / Jailbreak / Resigned-app / Sideload / Browser-ext / MITM). Active filters show a count badge on the row label; long lists collapse to "+ N more".
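Combined, the toggle and the three rows behave as one AND-ed predicate over each post. A sketch with assumed field names (UpdatePost, FeedFilters, matches):

```ts
interface UpdatePost {
  type: "testing" | "social";
  competitors: string[];
  operators: string[];
  threats: string[]; // mirrors the Detection-matrix columns
}

interface FeedFilters {
  type: "all" | "testing" | "social";
  competitor?: string;
  operator?: string;
  threat?: string;
}

function matches(post: UpdatePost, f: FeedFilters): boolean {
  if (f.type !== "all" && post.type !== f.type) return false;
  if (f.competitor && !post.competitors.includes(f.competitor)) return false;
  if (f.operator && !post.operators.includes(f.operator)) return false;
  if (f.threat && !post.threats.includes(f.threat)) return false;
  return true; // every active filter must match
}
```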

Each post lives at a dated permalink (/updates/YYYY/MM/DD/slug) and is also referenced from every weekly meeting that discussed it. The matrix drill-down opens these same posts.
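The permalink shape is simple enough to show directly; updatePermalink is a hypothetical helper (and the slug below is illustrative), not the portal's routing code:

```ts
function updatePermalink(date: Date, slug: string): string {
  const y = date.getUTCFullYear();
  const m = String(date.getUTCMonth() + 1).padStart(2, "0");
  const d = String(date.getUTCDate()).padStart(2, "0");
  return `/updates/${y}/${m}/${d}/${slug}`;
}

// updatePermalink(new Date("2026-05-11"), "playcover-blocked") // hypothetical slug
//   → "/updates/2026/05/11/playcover-blocked"
```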

Weekly meetings

The Competitive Intelligence team runs a 30-min sync in the same five-section format every week.

  1. 1 · Competitive and Client Testing — test results from the week. Each bullet links to its proof entry in the Updates feed plus the Jira ticket (CIV-XX) and any video file (.mov).
  2. 2 · Competitive Research — SDK analysis, vendor-side movement, displacement signals, GitHub-monitoring updates.
  3. 3 · Win/Loss — wins and losses from the week's deals that involve a tracked competitor.
  4. 4 · End-user Feedback — Reddit + X + Play Store chatter; flagged as Intel, never as test proof.
  5. 5 · What's Next — tests + research queued for the coming week. Rendered as a plain checklist, not as findings.

The Weekly Sync landing shows the current week with per-competitor filter tiles and a KPI summary; individual meeting pages are the full sectioned record.

Proof links — what to look for in an update

Every test entry traces back to a tangible source. Five common reference types appear in update bodies; a small data-model sketch follows the list.

  • 🎫 Jira ticket (CIV-XX / POI-XXXXX / BO-XXXXX) — Linked to the team's Atlassian instance — the source of truth for testing plans, sign-offs, and the engineering side of the work.
  • 📹 Video recording (.mov) — Internal Drive — full testing-session recordings. Examples: BetSaracenFullTestingSession.mov, BetSaracen VMOS Test.mov, PlayCover bypass videos.
  • 📁 Drive folder — Source-of-truth archive — competitor brief PDFs, weekly-sync notes, full test-data spreadsheets.
  • 🔗 Weekly-sync cross-link — Each test entry references the weekly meeting that recorded it (e.g. /weekly/2026-05-11) so you can read the surrounding context.
  • Cross-update references — When the same vector behaves differently across operators, posts link to each other (e.g. the FD WV PlayCover bypass cross-refs the May-11 cross-operator PlayCover-blocked entry).
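One way to model those five reference types in code is a discriminated union; the field names here are assumptions, not the portal's schema:

```ts
type ProofLink =
  | { kind: "jira"; ticket: string }            // CIV-XX / POI-XXXXX / BO-XXXXX
  | { kind: "video"; file: string }             // .mov recordings on internal Drive
  | { kind: "driveFolder"; url: string }        // source-of-truth archive
  | { kind: "weeklySync"; path: string }        // e.g. /weekly/2026-05-11
  | { kind: "crossUpdate"; permalink: string }; // another /updates/... post
```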

Search

Press ⌘K (Mac) or Ctrl K (Windows / Linux), or click the search bar on the dashboard, to open the global palette.

  • What it indexes — Threat categories (VPN, RDP, …) · competitor names + capability notes · update titles + descriptions · operator names · weekly meetings.
  • Tabs scope the results — Use the kind tabs (All · Competitors · Updates · Threats · Operators · Meetings) to narrow to one type. Tabs with zero matches are disabled. A toy ranking sketch follows this list.
  • Try — BetRivers to find every Xpoint test against RSI · vmos for the VMOS device-farm history · Magisk for the rooted-Android thread.
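Ranking is weighted so direct name matches come first (per the Search bullet under "What's in the portal"). A toy tiered-score sketch — the weights, field names, and score function are all assumptions:

```ts
interface SearchDoc {
  kind: "competitor" | "update" | "threat" | "operator" | "meeting";
  name: string;
  body: string;
}

function score(doc: SearchDoc, query: string): number {
  const q = query.toLowerCase();
  const name = doc.name.toLowerCase();
  if (name === q) return 100;                        // exact name match ranks highest
  if (name.includes(q)) return 50;                   // partial name match
  if (doc.body.toLowerCase().includes(q)) return 10; // body/description match
  return 0;                                          // no match
}
```

Tab scoping then amounts to filtering docs by kind before scoring.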