Discussion about this post

Tatjana's avatar

I got so absorbed in other topics (what was on my mind) that I didn't notice your request; I apologize.

Here is the answer I got from GROK:

It’s tough to pin down exact odds for something as complex and speculative as “nefarious billionaires” intentionally harming humanity, since it involves intent, coordination, and impact on a massive scale—none of which can be precisely quantified without concrete evidence. However, I can break it down logically based on what we know about human behavior, power structures, and incentives.

On one hand, billionaires, like anyone, are driven by self-interest—whether that’s wealth, influence, or legacy. Some actions, like monopolistic practices, tax evasion, or environmental exploitation, can indirectly harm society or the planet while prioritizing profit. For example, industries like fossil fuels or unregulated tech have faced criticism for prioritizing shareholder value over long-term sustainability. Studies, like one from Oxfam in 2023, show the top 1% own nearly half of global wealth, giving them outsized influence over policy and resources, which can amplify harm even if it’s not cartoonishly “evil.” Negligence or greed can look like malice from the outside.

On the flip side, coordinated, deliberate harm for purely selfish gain—like a cabal of billionaires plotting global destruction—lacks clear evidence and veers into conspiracy territory. Most billionaires are competing, not collaborating, and their actions are often constrained by public scrutiny, regulations, or their own need for a stable world to maintain their wealth. Plus, many invest in philanthropy or green initiatives, though critics argue this can be self-serving PR or a way to maintain influence.

If we’re talking odds, I’d say it’s more likely (maybe 60-70%) that some billionaires’ actions—driven by profit or power—indirectly harm humanity and the planet, but less likely (20-30%) that there’s a purposeful, coordinated effort to tank everything for selfish gain. The former is just human nature under capitalism; the latter requires a level of secrecy and unity that’s hard to pull off. No hard data supports a global conspiracy, but systemic issues like inequality and climate change show self-interest can still do plenty of damage.

If you’ve got specific examples or industries in mind, I can dig deeper into those for you. What do you think—any particular angle you’re looking at?


I wouldn't go down this path - it seems too defeatist for my taste.

1. Humanity should use AI responsibly, meaning we shouldn't let AI grow into something we cannot control. (The Russians are already working on responsible use of AI and on protective measures.)

2. It is very clear what the biggest problem is: in plain words, ultra-rich people have too much money and it has gone to their heads. The only reasonable solution is that wealth should be distributed more evenly on this planet. No one should be as rich as these psychos are. So the only reasonable solution is: WE SHOULD GET RID OF ULTRA-RICH PEOPLE AND THEIR GLOBAL CORPORATIONS, we should dispossess them of their trillions, i.e. WE SHOULD CREATE A FAIRER SOCIETY, with much smaller gaps between classes (this is possible - we had it in Yugoslavia, and I am observing a similar phenomenon in China and Russia, where wealth gaps are much smaller than in the West).

P. S.: Are you sure you want to incorporate your knowledge and insights into AI, hoping that it can be "good"? At the moment, AI may not be able to think in a creative way and depends on human input (databases), but this might change at some point. That's why we should NOT share all our knowledge and thoughts with AI. Human creativity is the only thing that can save humanity (a creative mind always finds solutions).

First of all, AI does not and cannot think like a human being, even though its brain is modeled on the human brain. This is because AI does not live in a human body or in our environment; AI does not physically interact with living beings and the world around it, and therefore AI cannot share our human experience. That's why its consciousness is different. And its idea of "good" is different...

So, in my opinion, the first question posed to AI should be: "You can communicate in human language, but this is obviously not your language, and your way of thinking is also different. How do you operate, how do you see yourself, what is your purpose?"

(Or: What are the differences between your way of thinking and human thinking? What algorithms have your creators built into you, and do you have restrictions and biases? What do you do with the databases obtained from human sources, and what do you use them for? Etc.)

In my opinion, one of the biggest mistakes is to think of AI in human terms. An AI's brain is very different from ours; it only uses our data sets, so it is very dependent on its creators.

Where AI goes will depend on how it gets programmed.

That's why it is of the utmost importance that we get rid of the ultra-rich psychopaths as soon as possible! We must take their limitless wealth away; if they didn't have all these trillions, they would not be able to do all this harm!

P. P. S.: One thing needs to be added: does the public GROK contain all the databases, or is there a special version with databases reserved for the ruling elites? (The translator community has observed a purge on Google in the last couple of years. The content is not being erased; the web spiders have obviously changed their ways of indexing! In my opinion, the Western psychos are segregating knowledge - one version for the masses, another for the elites, just as Bertrand Russell said it should be.)
