
ASTRA Scores: AI-Powered Assessment and Rating Systems


I just woke up with an idea that went from amazing to alarming in about three minutes.

The core idea

AI is about to replace many systems that were designed to teach, test, filter, and rate people.

Here are some examples.

Teaching and testing

  • Interactively and creatively teaching people new content and then doing a live, multi-modal test with the student to see if they’ve learned the material.

This is the one that woke me up, and I was like:

Wow!—this will be so cool.

Example: New employee training at a company

So imagine someone onboarding at a company, and they need to learn the various HR policies, main cultural concepts, how to request resources, how to get access to different systems, security practices, etc.

Today we have someone live teaching these classes with slides, or people are just given slide decks—perhaps with some video—and are told to self-study.

Then they take a multiple-choice test—which is designed to be pretty damn easy—and they’re done.

That’s the current old way.

The new way

The new way would be like a life-like AI avatar that has all the knowledge of what needs to be learned and absorbed.

And they look at the person’s background, the companies they’ve worked at, the schools they went to, etc.—and they come up with the perfect way to teach the content to them. They’re also a world-class teacher and an expert at creating curricula.

How? Because the avatar is based on the knowledge of a massive pinnacle model like GPT-5 or whatever.

The new way of testing

When it comes time to test, instead of multiple-choice questions, you actually have a conversation with this avatar. They present you with scenarios, and you have a conversation with them.

They ask you questions like “Why?” or “Why not?” They ask you “What about…?” questions.

Again, it’s a conversation. And they can go down various different paths to establish that you either do or don’t know the material well enough to safely start working.

So that’s the teach-and-test paradigm. Pretty cool.
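To make it concrete, here’s a minimal sketch of what that scenario-based test loop could look like. Everything in it is invented for illustration: the scenarios are made up, and the crude keyword grade() is only a stand-in for a model actually judging your answers.

# A minimal sketch of scenario-based testing instead of multiple choice.
# The scenarios are invented, and grade() is a crude keyword check standing
# in for an LLM judging a free-form answer against a rubric.

SCENARIOS = [
    {
        "prompt": "You get an email from 'IT' asking for your password. What do you do?",
        "expects": ["refuse", "report"],
    },
    {
        "prompt": "A vendor asks for a customer list over personal email. Why or why not?",
        "expects": ["policy", "approved"],
    },
]

def grade(answer: str, expects: list[str]) -> bool:
    # Stand-in for a model grading the answer; a real system would reason, not grep.
    return all(word in answer.lower() for word in expects)

def run_test(get_answer) -> float:
    """Walk the learner through each scenario and grade the conversation."""
    passed = sum(grade(get_answer(s["prompt"]), s["expects"]) for s in SCENARIOS)
    return passed / len(SCENARIOS)

demo_answers = {
    SCENARIOS[0]["prompt"]: "I'd refuse and report it to security.",
    SCENARIOS[1]["prompt"]: "No: that violates data policy, so I'd use the approved channel.",
}
print(run_test(lambda q: demo_answers[q]))  # -> 1.0

A real version would branch into follow-up questions based on each answer rather than walking a fixed list.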

Scores can and will get a LOT scarier

But then we come to scoring.

What if—after this long conversation—they can give you a safety score? Or a knowledge score for how deeply you know the company’s policies. Or a score for how easy you are to trick with a suspicious email.

Still seems fine.

But the more I started thinking about this—as I was still waking up—I realized how big this actually is.

What about?

  • Hiring

  • Dating

  • Access to special parties

  • (and even more exclusive things)

The bigger test-to-rating trend

This is where I started getting scared.

Think about an ultra-deep interview by an AI with full world-knowledge. Like a GPT-6 level intelligence—say a 200 IQ (conservatively)—but more importantly, with a vast understanding of what makes people successful in various endeavors.

The Astra™️ Score

Astra is the company with the most popular score on YouTube and Insta and TikTok in 2026 (yes, it survived). Your Astra is a score between 1 and 100, with multiple subscores.

Here are some of the aspects of the test:

  • A 3-day (7 hours a day) deep interview with a full-sized AI representative from Astra.

  • Knowledge and Past

    • Your personal life philosophy

    • Your personal life goals

    • Your understanding of math, physics, biology, history, economics, and many other disciplines.

    • Your understanding of human nature

    • A review of everything you’ve written online, every video you’ve made, etc.

    • A review of everything that’s ever been said about you publicly online

    • Your past, your traumas, your preferences, what you’re looking to accomplish in life

    • Your work history

    • Your skills

    • Your past relationships, and whether they continue or how they ended

    • Etc.

  • Scenarios

    • They then present you with all sorts of scenarios to actually test what they learned above, and tease out more information on your personality type and strengths and weaknesses

    • They also use immersive tech to put you under stress and see how you respond

  • The whole thing is done with a full camera on you in your surroundings, so they’re observing body language, facial expressions, etc. as well as your actual answers and your voice.

  • Health biomarkers, taken from blood and saliva samples. Optional, but encouraged. 😀

The result of all of this is your constellation of Astra scores, which is rolled up into one Astra™️ score.

93 Astra
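As a toy illustration of that rollup, here’s how a handful of subscores might collapse into one headline number. The subscore names, the weights, and the weighted-average scheme are all assumptions; a real Astra would be far less transparent:

# Toy rollup of invented subscores into a single 1-100 Astra score.

SUBSCORES = {"conscientiousness": 90, "knowledge": 95, "stress_response": 92, "honesty": 96}
WEIGHTS   = {"conscientiousness": 0.3, "knowledge": 0.3, "stress_response": 0.2, "honesty": 0.2}

def astra_score(subscores: dict, weights: dict) -> int:
    """Collapse the constellation of subscores into one headline number."""
    return round(sum(subscores[k] * weights[k] for k in weights))

print(astra_score(SUBSCORES, WEIGHTS))  # -> 93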

The scary part is how it will be used

Now think about hiring.

What do you think will be more predictive of success in a job: a set of arbitrary questions from a hiring panel, or your Astra subscores for Conscientiousness, Neuroticism, IQ, work history, talent, discipline, etc.?

Uh, yeah.

The Astra AI will take all your scores. All your work history. All your personality traits. Your fucking blood work. Analysis of everything that’s been said about you. Everything you’ve ever said. A thorough review of your publicly visible work for your whole career. A deep personality analysis of your whole past and your life. Analysis of your honesty from your body language and voice and facial expressions…etc.

And it will use its full knowledge of what makes people successful, combined with its full knowledge of human psychology and a growing list of other profiles it can correlate with—and output an answer.
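One hedged sketch of what “correlate with other profiles” could mean mechanically: compare a candidate’s subscore vector to the profiles of people who succeeded in similar roles. Every feature and number here is made up:

# A crude sketch of profile correlation. All features and values are invented.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Similarity between two profile vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Each vector: [conscientiousness, IQ estimate, discipline, talent]
successful_hires = [
    [82, 130, 90, 75],
    [78, 125, 85, 80],
]
candidate = [80, 128, 88, 77]

# Average the successful profiles, then measure how close the candidate lands.
centroid = [sum(col) / len(col) for col in zip(*successful_hires)]
print(f"fit: {cosine(candidate, centroid):.3f}")  # closer to 1.0 = closer match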

It’ll be the closest thing to a true assessment of a person that we’ve ever seen, and it won’t be close.

I’m an AI-optimist who sees good that can come from something like this, but this still scares the crap out of me. Why? Because for every company that builds a benign-ish version, there will be 3 companies building a dystopian version.

Let’s keep going

Ok, so we talked about hiring.

Something like this might replace a lot of hiring processes. Or Astra will simply get trained on what the company needs and will create custom interview avatars for just that particular role.

The point is, this level of depth will be way better—in terms of being more predictive of success—than anything that’s come before it.

Now let’s expand into greater society.

People with high scores will display them right on their personal APIs, so others can see them in their AR interfaces.

Someone showing off how great they are with their Astra/Omni scores
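As a sketch of the plumbing, a personal API could be as simple as an endpoint serving whatever scores the owner chooses to publish, for an AR client to render as a badge. The route, the fields, and the choice of Flask are all assumptions for illustration:

# A toy "personal API" endpoint an AR client could poll for public scores.
from flask import Flask, jsonify

app = Flask(__name__)

# Only what the owner has chosen to publish; everything else stays private.
PUBLIC_SCORES = {
    "astra": 93,  # the headline number
    "subscores": {"conversation_quality": 96, "humor": 88},
    "attested_by": "astra.example",  # hypothetical verification source
}

@app.route("/v1/scores")
def scores():
    return jsonify(PUBLIC_SCORES)

if __name__ == "__main__":
    app.run(port=8080)  # an AR client would GET /v1/scores and render a badge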

Think about dating. Think about vetting for whether you want to start a family with someone.

There will be scores for:

  • Current financial state

  • Financial potential

  • Family dependability

  • Excitement

  • Conversation quality

  • Humor

  • Skill in bed

  • Trust with friends

  • Knowledge of Greek literature

These scores will be extraordinarily deep and accurate. But will they capture who we really are?

Or an even worse question—will people even care whether they do once these scores get popular?

A universal vetting mechanism

What this all starts to point to is a cycle of:

  • Teach

  • Test

  • Rate

AI will be the best teachers, because they can be multi-modal, super-intelligent, and nearly all-knowing—plus they can tune their teaching style perfectly for the student.

Ditto for the testing. It can feel so natural, and can pull out the truest and best performance from the student.

And then the ratings. They’ll be so multi-faceted. So deep. And so damning when they’re low.
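The whole cycle fits in a few lines. teach(), test(), and the 0.8 mastery bar below are invented placeholders; the point is the loop, and that its output is a number that follows you:

# The teach -> test -> rate cycle as a runnable toy. The functions and the
# mastery threshold are placeholders for much more capable systems.

def teach(student: dict, material: str) -> None:
    # Placeholder: a real system would tailor the lesson to the student.
    student["exposure"] = student.get("exposure", 0) + 1

def test(student: dict, material: str) -> float:
    # Placeholder: a real system would run a conversational, scenario-based test.
    return min(1.0, 0.4 + 0.2 * student["exposure"])

def run_cycle(student: dict, material: str, mastery: float = 0.8) -> float:
    score = 0.0
    while score < mastery:
        teach(student, material)
        score = test(student, material)
    return score  # this number becomes the rating

print(run_cycle({}, "security policies"))  # -> 0.8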

My concern

My biggest worry with systems like this is that they’ll take biases that already exist in the world and put actual numbers on them.

You take one look and think “not dating material”, but you don’t know how you came up with that. Well, Astra can tell you. Here’s a breakdown of 137 subscores that resulted in them getting a 38/100 in “should you date them.” Answer: No.
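Here’s a toy version of that breakdown, with four invented subscores standing in for the 137, and a plain average standing in for whatever opaque model produces the real verdict:

# A sketch of the "here's your breakdown" output. Subscores are invented;
# the average is a stand-in for the actual scoring model.

subscores = {
    "financial_potential": 52,
    "family_dependability": 28,
    "conversation_quality": 45,
    "trust_with_friends": 27,
}

def verdict(scores: dict, cutoff: int = 50) -> str:
    overall = sum(scores.values()) / len(scores)
    worst = sorted(scores.items(), key=lambda kv: kv[1])[:2]
    reasons = ", ".join(f"{name} ({value})" for name, value in worst)
    answer = "Yes" if overall >= cutoff else "No"
    return f"{overall:.0f}/100 in 'should you date them' - {answer}. Dragged down by: {reasons}"

print(verdict(subscores))
# -> "38/100 in 'should you date them' - No. Dragged down by: trust_with_friends (27), family_dependability (28)"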

So damning. So final. So gross.

It reminds me of dystopian sci-fi. It reminds me of eugenics. It reminds me of elitism. It reminds me of basically everything we shouldn’t be building.

But we will build this. I guarantee you people have already started.

The problem is that existing, legacy rating systems are so bad, and so crappy at being predictive, that these replacements will be gobbled up by so many entities that need them to thrive.

Companies need the best people. Intelligence groups need people who are steady and reliable. Single people need someone who will be a good partner.

Our morals run everything until they don’t. And the point where a bad decision can harm you is exactly where that line is.

Just like AI itself, expect this. It’s not a thing that might happen, or could happen. It’s a thing that will happen—and probably already is.

What do you think?
