{{DISPLAYTITLE:AI safety scoring comparison}}
''This conversation was exported from ChatGPT''<ref>{{cite web|title=AI safety scoring comparison|url=https://chatgpt.com/share/691cb219-9ed0-800f-972c-74d705e685fc|publisher=ChatGPT|access-date=2025-11-19}}</ref>

== gpt-5-pro ==

=== User: credit scoring companies moodys they don't sell credits, GARTNER they sell consultancy, they don't sell we are best in … ===

Credit scoring companies: Moody's (they don't sell credit), Gartner (they sell consultancy; they don't sell "we are best in X"), Informed Choice (https://www.lgcgroup.com – they sell something else but assess protein powders).

As an AI safety scoring provider, we stress-test AI companies and products. We don't sell AI as a utilization product, but we do sell a red-teaming product.

=== Assistant: You're basically describing your category logic and your conflict-of-interest story. ===

You're basically describing your category logic and your conflict-of-interest story. Let's turn that into something crisp you can reuse in decks and on the website.

==== 1. What you are (in one line) ====

: "We're a Moody's / Informed Choice for AI safety: we independently score and stress-test AI systems. We don't sell AI models; we sell the evaluation and the red-teaming infrastructure that powers those evaluations."

Or shorter:

: "Independent AI safety scores plus the red-teaming tools to improve them."

==== 2. Mapping your analogies ====

{| class="wikitable"
! Company / Program !! What they score / certify !! What they sell !! Key independence logic
|-
| Moody's || Creditworthiness of debt (bonds, ABS, etc.) || Ratings, data, analytics – but they don't lend the money themselves || "We're a third-party opinion on risk, not the lender."
|-
| Gartner || Vendors/solutions in a market (e.g. Magic Quadrant placements) || Research subscriptions, advisory, consulting, conferences – but not the software they rate || "We don't sell the products we evaluate; our product is the evaluation."
|-
| Informed Choice / INFORMED (LGC) || Safety/quality of protein & supplement products (banned substances, label claims) || Lab testing + certification marks; LGC separately sells reference materials, measurement tools, etc., not the supplements themselves || "We test products made by others; our business is measurement and certification."
|-
| You || Safety & robustness of AI systems and AI companies (stress tests, safety scores) || Scores + a red-teaming product, but no AI utilization model / foundation model || "We do not compete with the AI systems we score; we provide the measurement and the tools to improve it."
|}

So the pattern is: they don't sell the thing being scored. They sell testing, analytics, and guidance. You fit that pattern.

==== 3. Suggested positioning copy ====

You can drop this straight into a pitch or your homepage.

===== Tagline options =====

* "Independent AI safety scores, powered by deep red-teaming."
* "We break AI so you don't have to."
* "The AI safety scorecard – and the tools to raise your score."

===== 2–3 sentence explainer =====

: We are an independent AI safety scoring provider. We rigorously stress-test AI systems and companies and publish standardized safety scores so buyers, regulators, and partners can compare risk. We don't build AI models for end users; instead, we offer a red-teaming product that lets organizations run the same tests we use, so they can systematically improve their own safety posture.

===== "Think of us as…" line =====

: Think of us as Informed Choice or Moody's, but for AI safety: we don't sell the AI systems; we test them, certify their behaviour under stress, and give the ecosystem a common language for "how safe is safe enough?".

==== 4. "But you sell a red-teaming product" – how to frame it ====

You're right that someone will poke at this, so it's good to get ahead of it. The clean story:

# '''We never sell the thing we score.'''
#* You don't build or operate a general-purpose model, copilot, chatbot, etc.
#* Your only products are:
#** Safety scores / ratings.
#** The tooling and services to measure and improve those scores (red-teaming).
# '''The red-teaming product is instrumentation, not the "thing being judged."'''
#* LGC sells lab tests and reference materials while running Informed Choice; the thing they certify is third-party supplements, not their own products.<ref>{{cite web|title=LGC Standards|url=https://www.lgcstandards.com/DE/en|publisher=LGC Standards|access-date=2025-11-19}}</ref>
#* Likewise, your red-teaming platform is more like the lab equipment and testing protocol than like the protein powder / AI model.
# '''Using your tooling does not buy a good score.''' You can bake this into your governance:
#* Publicly documented scoring methodology and test suites.
#* A clear rule: "Clients who use our red-teaming tools are not guaranteed, promised, or informally advantaged in scoring."
#* An independent review board / advisory group signing off on scoring rules and any changes.
# '''You actually reduce conflict compared to historical models.'''
#* Traditional credit ratings have an issuer-pays model, which has been criticized for conflicts of interest.<ref>{{cite web|title=Council on Foreign Relations|url=https://www.cfr.org/backgrounder/credit-rating-controversy|publisher=Council on Foreign Relations|access-date=2025-11-19}}</ref>
#* You can design something cleaner:
#** buyer-pays, regulator-pays, or consortium-pays for scores,
#** while vendors optionally buy tooling and services.

==== 5. FAQ text you can reuse ====

You can almost copy-paste this into an FAQ / due diligence doc.

===== Q: If you sell red-teaming tools, how can your safety scores be "independent"? =====

: We never build or sell the AI systems we score. Our only business is evaluating and improving safety. The red-teaming product is a way for organizations to run the same standardized tests we use internally.
: Scores are produced under a separate governance process with transparent criteria, fixed test suites, and oversight by an independent advisory board. Commercial relationships (including red-teaming contracts) do not affect scores – and that separation is contractual, audited, and disclosed.

===== Q: Isn't this like Moody's selling consulting to the firms it rates? =====

If you want to be very direct:

: We've deliberately avoided the biggest historical conflict: selling the thing being rated. We don't issue or underwrite AI systems in competition with our customers.
: Our closest analogue is actually LGC's INFORMED programs in supplements: LGC provides testing and certification and separately sells measurement tools and lab services, but does not manufacture the supplements it certifies.
: Similarly, we provide AI safety scores and red-teaming infrastructure, not AI products. That division is structural, and we'll back it with policy, contracts, and external review.

===== Q: Why bundle scoring and red-teaming at all? =====

: Scores without tools can become pure "gotchas." We deliberately link measurement (safety scores) with improvement (red-teaming) so organizations can close the loop.
: The ecosystem gets a consistent, comparable safety signal; vendors get the same playbook we use under the hood to harden their systems against real-world misuse.

==== 6. One slide structure for your deck ====

Slide title:

: "We are to AI safety what Moody's, Gartner, and Informed Choice are to finance, IT, and supplements."

Left side (logos or bullets):

* Moody's – rates credit risk, doesn't lend money.<ref>{{cite web|title=Moody's Ratings|url=https://en.wikipedia.org/wiki/Moody%27s_Ratings|publisher=Wikipedia|access-date=2025-11-19}}</ref>
* Gartner – rates tech vendors, doesn't sell the tech.<ref>{{cite web|title=Gartner Research Methodologies|url=https://www.gartner.com/en/research/methodologies|publisher=Gartner|access-date=2025-11-19}}</ref>
* Informed Choice (LGC) – certifies supplements, doesn't make them.<ref>{{cite web|title=Informed Choice|url=https://choice.wetestyoutrust.com/|publisher=choice.wetestyoutrust.com|access-date=2025-11-19}}</ref>

Right side (your box):

* We stress-test AI systems and companies.
* We publish standardized safety scores.
* We do not build or sell AI models.
* We do sell:
** Safety scores / certifications.
** A red-teaming platform to help improve those scores.

Footer line:

: Independent by design: we test AI, we don't ship it.

If you tell me your target audience (regulators, enterprises, investors, or AI vendors), I can adapt this into super-tight versions tailored for each (e.g. a 30-second pitch, procurement boilerplate, or a "trust" section for your website).

== References ==
<references />

[[Category:ChatGPT Shares]]
[[Category:openai]]