This isn’t a presale teardown — it’s the methodology behind our presale teardowns. Read this once; you’ll know how to read every score on this site.
The 10-point scale
Every presale gets a single integer from 1 to 10. Higher = more risk.
- 1-3 (low risk). Established team with verifiable history, audited contract from a credible firm, fair vesting structure, sustainable FDV, clear regulatory positioning. Rare.
- 4-6 (medium risk). Mixed signals. Either the team is verified but the tokenomics are aggressive, or the contract is clean but the audit firm is unknown, or the project is interesting but FDV is rich. Most credible presales sit here.
- 7-9 (high risk). Multiple red flags. Anonymous team, decorative audit, unfair vesting, suspicious roadmap, or evidence of past rug behaviour from team members.
- 10 (avoid). At least one disqualifying flag — known scam team, fake audit confirmed, no smart contract, or active rug behaviour observed.
The five categories we score
We assess across five dimensions. Each contributes to the overall score with a different weight.
1. Team verification (25% weight)
Can we name the team, verify their employment history, and find a public trail going back at least 12 months?
- 1 = doxxed team with verifiable history and reputation in the space.
- 5 = team named but verification is partial.
- 10 = anonymous, with no on-chain or off-chain reputation history.
Anonymous teams are not automatically disqualifying (Bitcoin’s founder is anonymous), but the burden of proof shifts to the contract and the audit.
2. Contract & audit (25% weight)
Did a credible firm audit the deployed contract? Were the findings substantive? Does the deployed bytecode match the audited commit?
- 1 = multiple audits from top-tier firms, deployed contract verified to match.
- 5 = one credible audit, no major issues.
- 10 = no audit, decorative audit, or deployed contract diverges from audited version.
We check the audit firm’s portfolio page ourselves and email the firm when in doubt. We diff the deployed contract against the audited commit.
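The bytecode-matching step above can be sketched in code. One wrinkle: the Solidity compiler appends a CBOR-encoded metadata blob to runtime bytecode, whose final two bytes encode its length, so two compilations of identical source can differ only in that trailer. A minimal sketch of a metadata-tolerant comparison (function names are ours, not a named tool; this assumes the standard solc trailer convention):

```python
def strip_solidity_metadata(bytecode_hex: str) -> str:
    """Drop the CBOR metadata trailer solc appends to runtime bytecode.
    The last 2 bytes encode the trailer's length (big-endian), so we
    strip length + 2 bytes from the end before comparing."""
    code = bytes.fromhex(bytecode_hex.removeprefix("0x"))
    if len(code) < 2:
        return code.hex()
    meta_len = int.from_bytes(code[-2:], "big")
    if meta_len + 2 > len(code):
        return code.hex()  # no plausible metadata trailer; compare as-is
    return code[: -(meta_len + 2)].hex()


def bytecode_matches(deployed_hex: str, audited_hex: str) -> bool:
    """True if the two runtime bytecodes agree once metadata-only
    differences are ignored."""
    return strip_solidity_metadata(deployed_hex) == strip_solidity_metadata(audited_hex)
```

A match here only shows the deployed code corresponds to the audited source; it says nothing about whether the audit itself was substantive.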
3. Tokenomics fairness (20% weight)
Is the unlock schedule fair across cohorts? Is the public-to-private FDV multiple reasonable? Is liquidity adequate?
- 1 = insiders vest longer than retail, FDV multiple under 2x, healthy liquidity-to-FDV ratio.
- 5 = mixed — one or two parameters are aggressive but not disqualifying.
- 10 = retail is exit liquidity for insiders (5x+ FDV multiple, 0/0 retail vesting, long insider cliffs followed by dumpable linear unlock).
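The two headline ratios in this category are simple arithmetic. A sketch (the input figures in the example are illustrative, not from any real presale):

```python
def fdv_multiple(public_price: float, private_price: float) -> float:
    """Public-to-private FDV multiple: how many times more the public
    pays per token than the earliest private round paid."""
    return public_price / private_price


def liquidity_to_fdv(dex_liquidity_usd: float, fdv_usd: float) -> float:
    """Fraction of fully diluted valuation actually backed by liquidity
    at listing. Lower means a thinner exit for retail."""
    return dex_liquidity_usd / fdv_usd


# Retail pays $0.05 where seed paid $0.01: a 5x multiple, in the
# red-flag zone described above.
multiple = fdv_multiple(0.05, 0.01)

# $1M of liquidity against a $50M FDV: 2% backing.
ratio = liquidity_to_fdv(1_000_000, 50_000_000)
```

There is no universal threshold for either ratio; the bands above reflect how we read them together rather than a hard cutoff.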
4. Project plausibility (15% weight)
Does the project make economic and technical sense? Is there a real product, or is it a marketing site? Does the roadmap connect to the tokenomics?
- 1 = clear product-market fit, working product, tokens have a defined economic role.
- 5 = thesis is plausible but unproven.
- 10 = no economic role for the token, no product, roadmap is buzzwords.
5. Regulatory & operational (15% weight)
Where is the project incorporated? What jurisdictions are excluded? Is the legal structure visible? Are the privacy policy and terms of service present and coherent?
- 1 = professional legal structure, transparent corporate documents, sensible jurisdictional exclusions.
- 5 = adequate but minimal.
- 10 = offshore shell with no public corporate trail; ToS missing or copy-pasted.
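Putting the five categories together: the article states the weights but not the combining rule, so the sketch below assumes a plain weighted mean of per-category risk scores, rounded to the single published integer. The dictionary keys are our shorthand for the category names:

```python
# Weights from the five categories above; they sum to 1.0.
WEIGHTS = {
    "team": 0.25,
    "contract_audit": 0.25,
    "tokenomics": 0.20,
    "plausibility": 0.15,
    "regulatory": 0.15,
}


def overall_score(category_scores: dict) -> int:
    """Weighted mean of per-category risk scores (each 1-10),
    rounded to the single integer published in a teardown.
    Assumes a simple weighted average; the site's actual combining
    rule may differ."""
    if set(category_scores) != set(WEIGHTS):
        raise ValueError("all five categories must be scored")
    weighted = sum(WEIGHTS[k] * category_scores[k] for k in WEIGHTS)
    return round(weighted)
```

For example, a project with a verified team and clean audit (2 each) but aggressive tokenomics (8) and middling plausibility and legal scores (5 each) lands in the medium-risk band, which matches the "mixed signals" description above.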
What we tell you we couldn’t verify
Every teardown has an explicit “Could not verify” section. Common items:
- Audit firm did not respond to confirmation email.
- Project’s GitHub commits are private.
- Investor list is not public; cannot confirm round prices.
- Founder LinkedIn was created within the last 12 months; cannot verify older claims.
- Smart contract is not yet deployed; assessment based on prior version.
A high “could not verify” count is itself information. We weight it into the final score.
Why we publish low scores even on projects we like
Several presales we cover are run by people we know personally. We still score them with the same rigour.
The reason: a methodology that bends for friends is worthless to readers. The credibility of every score depends on every other score being honest.
If a project’s score lands at 6 because the audit firm didn’t respond, we publish 6 and explain the gap. If the team later resolves the gap, we update the score and note the update.
What the score doesn’t capture
- Whether the project will succeed commercially.
- Whether the price will go up or down at TGE.
- Whether the founders are personally trustworthy beyond what’s verifiable.
- Whether you, specifically, should buy.
The score is about structural risk in the presale itself. A project can score 3/10 (low risk) and still fail commercially. A project can score 8/10 (high risk) and 100x. The score answers “is the presale structure asymmetrically rigged against retail” — not “should I buy”.
Updates and corrections
Every teardown has a “last updated” date. Material updates are noted in a “Corrections” block at the bottom of the article.
If a project’s circumstances change materially (audit completed, team doxxed, contract redeployed), we re-score and re-publish.
If we publish a wrong fact, we correct it explicitly. We don’t silently edit.
Tip line and removals
If you’re a project covered here and believe a fact is wrong, email editorial@presalecryptobmic.com with corrections and evidence. We’ll update if the evidence holds.
If you’re a reader who has information about a presale we’ve covered, the same email works.
We don’t accept paid placements, paid corrections, or paid removals.
The honest summary
The score is a tool, not a verdict. It compresses our analysis into a single number you can scan in a list. The detail under it is where the actual information lives.
Read the red flags. Read the green flags. Read the “could not verify”. Make your own decision.