Welcome, neuralnetworking.

I see you've not signed in since 4/5/2021, when you made the OP for this thread. I've read with great interest a sampling of the posts on your Substack, including the article featured in your OP, "Ambition, the Elite Job Market, and the Information Gap" (https://neuralnetworking.substack.com/p/ambition-the-elite-job-market-and), and "The Case for Compassionate Meritocracy" (https://neuralnetworking.substack.com/p/the-case-for-compassionate-meritocracy), in which you appear to suggest that technology-driven GDP growth might fund a future universal income rather than letting the wealth be retained by the few roboticists and AI researchers. I also noted that
Originally Posted by article
... the results achieved by the text-generating GPT-3 neural network give me a sense that secretarial and administrative jobs might be on the chopping block in the next century. After they go, who knows how long bloggers have left. 1
and
Originally Posted by article
1 GPT-3 wrote parts of several paragraphs in this post.
More on GPT-3 here: https://www.digitaltrends.com/features/openai-gpt-3-text-generation-ai/
Originally Posted by digitaltrends GPT-3 article
A great many jobs are more or less ‘copying fields from one spreadsheet or PDF to another spreadsheet or PDF’, and that sort of office automation, which is too chaotic to easily write a normal program to replace, would be vulnerable to GPT-3 because it can learn all of the exceptions and different conventions and perform as well as the human would.
Might this description also include positions in what you have termed, in your OP, "the Elite Job Market of finance, consulting, and tech"? If so, it seems we've rather quickly come full circle. Along the lines of jobs being displaced by tech, here's an old post based on an article in Fortune Magazine: Technology may replace 40% of jobs in 15 years (2019).

Circling back to the mention of GPT-3 helping to author portions of the updated Substack piece on "Compassionate Meritocracy," Digital Trends reports:
Originally Posted by digitaltrends GPT-3 article
Fed with a few sentences, such as the beginning of a news story, the GPT pre-trained language model can generate convincingly accurate continuations, even including the formulation of fabricated quotes.
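The "fed with a few sentences" behavior the quote describes is prompt continuation: the model extends seed text one token at a time, each choice conditioned on what came before. GPT-3 itself can't be run in a forum post, but the interaction pattern can be sketched with a deliberately crude stand-in, a toy bigram Markov chain (this is an illustrative sketch, not OpenAI's API, and the corpus and function names are my own invention):

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def continue_prompt(model, prompt, length=8, seed=0):
    """Extend a prompt word by word, sampling from observed continuations."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        candidates = model.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        words.append(rng.choice(candidates))
    return " ".join(words)

# Tiny stand-in corpus; GPT-3 was trained on hundreds of billions of tokens.
corpus = ("the model reads the prompt and the model writes the rest "
          "of the story as if the prompt were the start of the story")
model = train_bigram_model(corpus)
print(continue_prompt(model, "the model"))
```

The gulf between this and GPT-3 is the gulf between counting word pairs and a 175-billion-parameter neural network, but the user-facing loop is the same: supply a beginning, receive a statistically plausible continuation.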
We also read,
Originally Posted by digitaltrends GPT-3 article
Can you build an A.I. that can convincingly pass itself off as a person? OpenAI’s latest work certainly advances this goal. Now what remains is to be seen what applications researchers will find for it.
Might the Substack, and this thread, be examples of such applications? Are members of this forum unwitting subjects in a Turing test?