Neural Networks: Progress or Threat?

Artificial intelligence (AI) is no longer a subject of science fiction: this technology is now being actively developed and introduced into everyday human life. However, it not only provides new opportunities but also reinforces the old contradictions of capitalist society.

  • Artificial intelligence is often presented as an intelligent entity with its own interests. However, this is a myth, and the name itself is nothing more than a marketing ploy. In fact, AI is a complex data processing algorithm that includes neural networks. Neural networks are mathematical models that function by processing large amounts of data using powerful servers. AI encompasses many different technologies and approaches, and neural networks are just one of them.
  • Corporations inflate myths about the dangers of AI to protect their interests and slow down the development of competitors. Elon Musk, a competitor of Microsoft, is particularly notable in this regard. Various politicians speak in a similar vein, pursuing their own mercenary goals.
  • AI is actively used by government services for intelligence, censorship, propaganda, spying on their own citizens, etc. AI technologies are used especially actively in the military sphere. The advantages this technology provides have heightened interest in it among the opposing imperialist camps; thus, key development companies have already been brought under state control.
  • Neural network technologies, while revolutionary, rely on cheap labor to train artificial intelligence. At the same time, the introduction of neural networks leads to job cuts and increased unemployment.
  • AI reinforces class contradictions, because in the hands of capitalists it serves exclusively their interests. Such use of AI technology does not allow the full potential of the technology to be achieved. This potential can only be fully realized under socialism.

1. What is Artificial Intelligence?

Artificial intelligence is often seen as something that may have self-awareness and interests contrary to human ones. There is a lot of speculation and myths surrounding this topic.

However, what is commonly referred to as AI is just a set of tools for processing data. Such tools include machine learning algorithms, one type of which is neural networks. Artificial intelligence as some semblance of human intelligence does not exist — at least not today.

Modern AI systems are narrow, or specialized (Weak AI). They are designed to perform specific tasks: speech recognition (systems such as Siri or Google Assistant), image recognition, and social networking algorithms for content recommendation.

On the other hand, what is meant by “AI” in science fiction or popular science literature is so-called General AI. This is a hypothetical AI that can perform any intellectual task the way a human can. At the moment, such AI does not exist and remains the goal of long-term research and technological development.

So, with the current level of technology development, to say that artificial intelligence has made a decision on any question is the equivalent of the statement “The calculator has decided that five times seven equals thirty-five”.  In both cases, the program just executed the algorithm and gave an answer based on the input data — the only difference is the complexity of these programs.

A neural network is a mathematical model. It is called a neural network because it is embodied in the form of individual “neurons” - programs or processors that process incoming signals.

Schematic diagram of a neural network: data travels from the input layer (I1, I2, I3) through two hidden layers (H1, H2, H3 and H4, H5, H6) to the output layer (O1, O2).

The principle of neural network operation is as follows. Data is fed into the neurons of the input layer. From there it is passed to the hidden-layer neurons, where it is processed and transformed. The processed data from the hidden layers then goes to the output layer, whose neurons produce the final result.
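
To make this concrete, below is a minimal sketch in Python (using NumPy) of such a forward pass, with the same layout as the diagram above: three inputs, two hidden layers of three neurons each, and two outputs. The weights here are random placeholders chosen purely for illustration; in a real network they are learned from training data.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each neuron computes a weighted sum of its inputs plus a bias,
    # then applies a simple nonlinearity (here, ReLU).
    return np.maximum(0, inputs @ weights + biases)

x = np.array([0.5, -1.2, 3.0])                 # input layer: I1, I2, I3

w1, b1 = rng.normal(size=(3, 3)), np.zeros(3)  # hidden layer 1: H1, H2, H3
w2, b2 = rng.normal(size=(3, 3)), np.zeros(3)  # hidden layer 2: H4, H5, H6
w3, b3 = rng.normal(size=(3, 2)), np.zeros(2)  # output layer: O1, O2

h1 = layer(x, w1, b1)
h2 = layer(h1, w2, b2)
output = h2 @ w3 + b3                          # final result from the output layer
print(output)
```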

The emergence of neural networks in their current form was made possible by three main factors:

  • Development of machine learning algorithms and mathematical modeling. Breakthroughs in the theory and practice of machine learning, including the development of effective algorithms for training neural networks, have made it possible to create and train complex models.
  • Progress of computer technology. Modern processors, graphics processing units (GPUs), and specialized chips (such as Google's TPUs) have provided the processing power needed to train large neural networks, something that was previously impossible.
  • Availability of large volumes of data. Large amounts of data available through the Internet, social media, sensors, and other sources allow neural networks to train on large data sets, improving their accuracy and efficiency.

All these three components are inextricably linked with each other: a complex mathematical apparatus processes a large amount of data (one of the sources of which is the Internet), and all these calculations are performed through powerful servers.
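
As a rough illustration of how these ingredients come together, the sketch below trains a toy model with gradient descent on synthetic data. Everything in it (the dataset, the model, the learning rate) is an arbitrary placeholder invented for this example; real systems run the same basic loop over vastly larger models and datasets on specialized hardware.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Large volumes of data": here, just 1,000 synthetic examples.
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

# "Mathematical model": a linear model y ≈ X @ w, the simplest possible case.
w = np.zeros(3)
lr = 0.1

# "Training algorithm": gradient descent on the mean squared error.
for step in range(200):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)  # gradient of the loss
    w -= lr * grad                        # parameter update

print(w)  # converges toward [2.0, -1.0, 0.5]; scaling this up is where powerful servers come in
```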

The development of all three factors can be traced using the example of OpenAI's ChatGPT family of neural networks:

  • 2018: The first version of GPT had 0.12 billion parameters and was trained on a 5 GB text database.
  • 2019: GPT-2 was released with 1.5 billion parameters and a 40 GB database. OpenAI made an agreement with Microsoft, gaining access to new computing power: Azure servers.
  • 2020: GPT-3 had 175 billion parameters and a database of 570 GB.
  • 2023: GPT-4 was released, with a number of parameters estimated at between 500 billion and 2 trillion [1].
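
For a sense of scale, a quick back-of-the-envelope calculation of the growth implied by the parameter counts listed above (the GPT-4 figure is only an outside estimate, hence the range):

```python
gpt1, gpt2, gpt3 = 0.12e9, 1.5e9, 175e9   # parameter counts from the list above
gpt4_low, gpt4_high = 500e9, 2e12         # estimated range for GPT-4

print(gpt2 / gpt1)                        # GPT-1 -> GPT-2: ~12.5x more parameters
print(gpt3 / gpt2)                        # GPT-2 -> GPT-3: ~117x
print(gpt4_low / gpt3, gpt4_high / gpt3)  # GPT-3 -> GPT-4: roughly 3x to 11x
```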

Neural network technology is developing rapidly, increasing its power parameters significantly every year. ChatGPT, along with other neural networks, has become a breakthrough tool for data processing. However, artificial intelligence at the current level of development can only generate information according to predetermined algorithms and is not capable of making independent decisions.

AI and neural networks are another tool in the hands of humanity in a long series of other technical means: the steam engine, electricity, radio communications, computers, the Internet, etc. Just like any other tool, its use depends on whose hands it finds itself in. And since society is divided into classes, this tool will work exclusively for the benefit of the ruling class that possesses it.

2. Neural Networks under Capitalism

The widespread introduction of neural networks under capitalism is contradictory — like the introduction of any other technical innovation in the history of capitalism. Let us consider some of these contradictions in more detail.

2.1 Forming public opinion about AI

AI and neural networks in the public consciousness are shrouded in a cloud of myths and fears. On the one hand, concerns about AI are related to the fact that the active introduction of neural networks into work processes leads to mass layoffs and increased exploitation. On the other hand, myths about the dangers of artificial intelligence are imposed by the ruling class, including by the owners of AI development companies themselves when market competition heats up.

A textbook example here is the confrontation between Elon Musk and Microsoft.

OpenAI, the company that developed ChatGPT, was established in 2015; its main founders were Elon Musk and former Y Combinator president Sam Altman. Together with other investors, they promised to allocate $1 billion to the project so that the company could begin its work. But in 2018, Musk resigned from the board of directors of OpenAI [1], and with him the company lost its source of funding.

Elon Musk officially explained his exit from the company as a conflict of interest with Tesla. However, according to other information, the oligarch stopped funding OpenAI due to the refusal of other company executives to bring it under Musk's direct control and integrate it into Tesla [2].

Looking for another source of funding, in 2019 the company signed a deal with Microsoft, which provided the same $1 billion [3]. In the following years, Microsoft continued to invest in OpenAI, and by the end of 2023 its investments in the company amounted to about $13 billion [4]. OpenAI has become a profitable investment: during the “neural network revolution” of 2023, the company's value almost tripled, from $30 billion to $80 billion [5].

Other players in the AI technology market decided to take a stand against a successful competitor. Thus, more than 1,100 people signed an open letter at the end of March 2023, warning of the serious risks that systems with “human-like intelligence” could pose. Among other things, the letter called for suspending all development in the field of AI for six months [6].

Among the signatories of this letter were Elon Musk and Apple co-founder Steve Wozniak. What made people at the top of cutting-edge technology giants call for halting the development of the technology? What is especially interesting is that both Apple's owners and Elon Musk have invested a great deal of money in the field of AI.

Apple is the leader in the number of AI startups acquired in 2017-23. From 2016 through 2021, the company invested $22.6 billion in this area.

Ranking of companies by number of AI startups bought

Elon Musk founded his own AI development company, xAI, in 2023 [7]. Among other things, Musk's new company is building a supercomputer to serve AI [8]. Investments in the company have already amounted to $6 billion.

Given all these multibillion-dollar investments, we can assume that the most prominent signatories of the letter have a financial interest in slowing down the development of the industry in order to catch up with Microsoft.

The former Electrolux appliance factory in Memphis, where xAI is now building a supercomputer

In an attempt to damage the image of his competitors, in March 2024 Musk sued OpenAI [9]. The essence of the claim was that OpenAI had been transformed from a company “benefiting all mankind” into a profit-making division of Microsoft.

Excerpt from the lawsuit:

“To this day, OpenAI, Inc.’s website continues to profess that its charter is to ensure that AGI ‘benefits all of humanity,’” the lawsuit states. “In reality, however, OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft.”[10]

It is ironic that one of the largest oligarchs on the planet reproaches competitors for making profits. However, the lawsuit was withdrawn shortly after by Musk himself, without explanation [11].

Other members of the ruling class, as well as some intellectuals, have also made their mark in the field of fear-mongering about AI. Former director of business development at Google X, Mo Gawdat, argues that AI will become god-like if AI control fails [12]. Stuart Russell, a professor at the University of California, Berkeley, is also of the opinion that at some point machines will refuse obedience and rebel.

Such “experts” allow for the possibility of AI having its own goals directed against humans. Such reasoning imagines AI as a separate organism developing on its own, gradually surpassing humans and eventually subduing or destroying us. This was also mentioned by Stephen Hawking when he mentioned the possibility of AI coming into conflict with human interests [13].

In conversation with a Fox Business Network host, Donald Trump said that AI is now more dangerous than any other invention in the world and may well cause a military conflict [14]. According to him, the worst thing is that no solutions have been found to the problems arising from the application of AI (what exactly those problems are was not explained either).

In a discussion on AI with Sberbank CEO Herman Gref, Moscow Mayor Sergei Sobyanin suggested that “people make decisions for people, the machine will make decisions for the machine.” Gref disagreed with this statement and noted that everything depends on how AI is used: “If you set the task of the most equitable distribution of the budget, no one will do it better than artificial intelligence” [15].

For bourgeois politicians, AI is a convenient scapegoat to which they can shift responsibility for their own decisions. Artificial intelligence does not and cannot have its own interests — it is just a tool. All the consequences of the technology's use fall entirely on the ruling class that owns it. As a rule, behind loud statements, there is either material or political interest, or just ordinary incompetence.

The dangers of AI are not about AI becoming “too” powerful and out of control, but about who controls the technology and for what purposes it is used. In the hands of big business, AI will inevitably be used for narrow, private interests, both against competing capital groups and against wage earners.

In this light, a report on AI development trends by Leopold Aschenbrenner, a former OpenAI employee from the Superalignment team, is revealing. Based on the current trend of technology development, he predicts the emergence of particularly powerful AI models with enormous computing power by 2027-28 — “superintelligences” that will provide a military advantage to those countries that possess them, similar to the advantage of possessing nuclear weapons.

Given the turbulent international environment, this poses a great danger: because, he says, “authoritarian powers” could get their hands on such technology before the “free world.”

As measures to reduce the danger, Aschenbrenner proposes to “strengthen control over AI laboratories by intelligence agencies and the military.” That is, Aschenbrenner proposes to use AI as a military and political weapon of the “free world” countries led by the United States. And this is in a report that warns of the risks of AI development.

Thus, the abstract fear of “lack of control over AI” becomes a common business concern to ensure that control of AI technology is in the “right” hands.

However, the control over AI that bourgeois politicians are so worried about is already being implemented in practice. For example, in June 2024, retired U.S. Army General Paul Nakasone was named to the OpenAI board of directors. Nakasone previously served as director of the National Security Agency (NSA), chief of the Central Security Service (CSS), and head of US Cyber Command. In addition to his role on the board, Nakasone joined OpenAI's Safety and Security Committee, which makes key security decisions for the company.

It is clear what purposes the general's appointment serves. The NSA, as well as Cyber Command and the CSS, are engaged in surveillance, intelligence, and espionage against other states and against their own citizens in order to suppress the working class. The appointment of a “former” top general and head of agencies devoted to spying on and suppressing workers to the leadership of OpenAI shows what this technology is and will be used for. OpenAI is now not only receiving funding from Microsoft but also taking direct orders from the Pentagon. This is the path that the once non-profit and “independent” startup has traveled.

General Nakasone

2.2 Neural Networks and Human Labor

Like any other technical innovation, the introduction of neural networks under capitalism, while being a progressive phenomenon, at the same time brings familiar problems for employees.

Labor in creating Neural Networks

Often, behind these developments there are people whose contribution is paid completely disproportionately to the work they put in.

For example, during the launch of ChatGPT, it turned out that the algorithm regularly uses obscene and incorrect expressions when creating sentences. This is the consequence of poorly cleaned texts from the Internet that were used as training data for the neural network. 

A system for filtering training data for GPT was needed. To train such a system, a large database was needed in which incorrect words and phrases were appropriately labeled. Cheap labor was used to create such a database by hiring workers through an outsourcing company in Kenya.

The monotonous and  exhausting work involved processing large volumes of text with “toxic content.”  

“Workers were hired through OpenAI's intermediary company Sama, and they labeled and filtered out 'toxic content' of violence, torture, murder and suicide from the ChatGPT training data set” [17]. 

According to the TIME investigation, the process was accompanied by a great deal of overwork, and the promised rate remained only on paper. Of the $12.50 per hour that OpenAI provided, only $1.30 to $2 reached the workers, with the rest ending up in the pockets of the intermediary firm [18].

This shows that, in general, despite the glittering shell and the beautiful speeches about technological breakthroughs, the development of advanced technologies under capitalism continues to rest on menial, monotonous, and cheap labor.

Launching any neural network requires preliminary training on extensive manually processed material. There are cases when, due to the cost and complexity of setting up neural networks, this training was “delayed”, and the work of AI was actually performed by humans:

  • Amazon's “Just Walk Out” cashierless stores, where instead of a neural network that made too many mistakes, workers from India did all the work. They tracked the purchases of store visitors using cameras and marked the items they bought. We previously wrote about this.
  • “Smart Home” camera system from Amazon Ring, where instead of a neural network, the work of marking objects in the frame was performed by workers from Ukraine, reviewing the recordings of video cameras.
  • QuickBooks accounting neural network - instead of automatically analyzing documentation, workers from the Philippines did the work.

A “Just Walk Out” store, where merchandise supposedly does not need to be rung up manually at the checkout counter.

These examples show in practice that, under capitalism, neural networks are designed only to increase profits for their owners, and only indirectly, through many obstacles and contradictions, to make people's lives easier. 

Let us consider one more contradictory effect of the introduction of neural networks.

Threat of Unemployment

The mass introduction of neural networks into the workplace has resulted in many thousands of layoffs.

Due to the introduction of neural networks, IBM has suspended hiring in some areas, and many other companies are carrying out layoffs numbering in the thousands. For example, in 2023, Google laid off 12,000 employees, Spotify laid off 1,500 marketers, Goldman Sachs laid off 3,200 employees, Dropbox laid off 16% of its staff, and Dataminr, a company working with “big data”, laid off 20% of its staff.

According to a report from ResumeBuilder.com, 48% of companies that use ChatGPT reported that they have already replaced employees with a neural network [19]. 25% of respondents saved more than $75,000 by using ChatGPT.

Staff reductions due to the adoption of neural networks have become a worldwide trend.

“Newedge Wealth LLC analyst Ben Emons, in a commentary for Bloomberg, said … AI could destroy 300 million jobs by 2035” [21].

Once hundreds of thousands of people are laid off, it is unlikely that their standard of living will remain the same, as no one will ensure that they are transferred to a new position or retrained. This contradicts claims that the expanding IT industry will be able to absorb many new workers.

In general, the attitude of politicians and businessmen to layoffs of masses of people is rather cynical. Any AI could envy such cold-bloodedness. For example, Joe Biden commented on the reduction of miners in 2019 as follows:

“Anybody who can go down 3,000 feet in a mine can sure as hell learn to program as well. Anybody who can throw coal into a furnace can learn how to program, for God's sake!” [22].

Equally naive are appeals to “learn new things,” like the call from the American Internet news company BuzzFeed to “learn to code” [23].

Nowadays, the IT sector is especially overcrowded with personnel, and it is becoming increasingly difficult to find a job that pays well enough to afford the basic necessities for life and reproduction. Such statements speak only of a desire to shift the responsibility for unemployment onto the workers themselves, who allegedly do not want to learn and develop.

The IT industry is experiencing major layoffs. According to Stability AI CEO Emad Mostaque, the majority of India's more than 5 million programmers are at risk of losing their jobs [24].

The Indian company Dukaan, which provides technical support services over the phone, fired 90% of its staff. The owner of the company stated that AI is 100 times smarter and 100 times cheaper than live employees, and that it works much faster and more efficiently [25].

The gradual replacement of people by AI is directly related to the general trend of growing unemployment, which increases the material impoverishment of the population, widening the income gap between the rich and the poor. In turn, the result of increased unemployment is an increase in crime, conflicts, and aggravation of class struggle.

Recently, this aggravation has become more pronounced and is even affecting the arts and entertainment industries.

In 2023, strikes began in the United States by the Writers Guild of America, which was later joined by Hollywood actors [26].

The Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) went on strike after negotiations with Hollywood studios failed [27]. At issue was a “groundbreaking AI proposal” from the studios that was supposedly meant to “protect artists' digital images.”

The studios wished to have perpetual rights to the digital image of the scanned actors. The proposal was to pay the actors for only one working day spent on this scanning. Duncan Crabtree-Ireland, executive director of SAG-AFTRA, described the situation this way:

“This ‘groundbreaking’ AI proposal that they gave us yesterday, they proposed that our background performers should be able to be scanned, get one day’s pay, and their companies should own that scan, their image, their likeness and should be able to use it for the rest of eternity on any project they want, with no consent and no compensation. So if you think that's a groundbreaking proposal, I suggest you think again” [28].

The Writers Guild of America (WGA) also went on strike, amid the recession, low pay, and other grievances. Among other things, however, the guild also objected to the use of AI to cut screenwriters' salaries and staff.

The WGA strike was the first in 15 years, and it was not just about AI. The main concern about AI among the guild's screenwriters was that they did not want their material to be used as training data for AI systems, or to be left only with the task of correcting “sloppy drafts” created by artificial intelligence [28].

In addition, the guild argued that pre-existing scripts should not be used to train AI systems to avoid potential intellectual property theft. Ellen Stutzman, chief negotiator for the WGA, stated that some guild members have even referred to AI as “plagiarism machines” [28].

And yet, in going on such strikes, actors, screenwriters, and writers never come to the important realization that the technology itself could make their jobs easier in the future if it were in their hands: the hands of workers who are interested in the results of their own labor, and not in serving a narrow group of owners intent on profiting from the exploitation of labor.

Another conflict involves ignoring artists' copyrights. Midjourney used huge arrays of images created by people to train its neural network. The infringement of artists' copyrights caused a protest and even a class action lawsuit against the developers [29].

The authors were worried for a reason: their work was used to generate images that unscrupulous users of the Midjourney neural network sold to the Shutterstock portal [30].

Over time, almost all social networks and stock services started providing indications of which images were created by the neural network. However, this is no longer of any help to the unemployed artists.

Capitalism is unable to enforce even its bourgeois intellectual property rights when introducing a technological innovation. In some cases, violations of the law related to AI have reached the widest scale. Examples of such violations are given in the next section.

The situation with mass adoption of neural networks and unemployment is not unusual for the capitalist economy. The contradiction between technological progress and related societal problems runs through the history of capitalism.

Increases in labor productivity lead to massive unemployment and crises of overproduction. The problem of fears about technological progress goes back hundreds of years. Just as in the 19th century in England, the mass introduction of machines in the textile industry led to tougher working conditions and mass revolts of workers (the Luddite movement), similarly, in the 21st century the introduction of neural networks led to mass layoffs and strikes in a whole range of fields: from IT to arts.

Similarly, capitalists' fears about the dangers of technological progress are also not new. The further technological progress goes, the more obvious it becomes that capitalism is a brake on the advancement of technology. Just as in the 21st century some people call for a halt to the development of AI, so in the 1930s, against the background of the crisis, it was fashionable to speak about the danger of advanced industrial production, that “the machine is too far ahead of man”.

2.3 AI and Cybercrime

Even though developers keep adding to the list of topics prohibited for neural networks, users often find ways to circumvent the restrictions. For example, the so-called DarkGPT has been around for quite a long time: an OSINT tool that allows users to legally work with leaked databases and collect personal information.

Following it, hacker-oriented language models began to appear. They are capable of creating malicious code and generating advanced phishing attacks.

We should not forget that most AIs require huge amounts of information for training and successful operation. Along with this comes the problem of data privacy.

Thus, an international group of researchers crafted a query that caused the chatbot to give out users' personal data: first and last names, phone numbers, birthdays, and social network identifiers [31]. It also returned copyrighted material, such as articles from paywalled journals and excerpts from books.

When launching ChatGPT, the developers were not at all concerned about user security. In March 2023, users on Twitter* shared cases in which other people's dialogs with the popular chatbot showed up in their own accounts [32].

2.4 State use of AI

The use of AI is obviously not limited to the economy. New technologies are already being used for political purposes.

Surveillance and Population Control

AI can potentially be used by bourgeois governments and corporations to exercise control over people. For this purpose, there are already programs based on “artificial intelligence” that have a great deal of functionality: they can recognize faces on cameras, identify vehicles, and track the sequence of their movements.

For example, DigitalGlobe, an operator of several satellites, is working with Amazon, Nvidia, and the CIA to develop technologies for recognizing objects and faces in satellite images using artificial intelligence algorithms [33].

Scientists from the University of Central Florida have developed technology to recognize faces, even if they are partially obscured by a hand, mask, or other object. Chinese scientists went further and taught an AI algorithm to recognize a person’s gait [34].

The CIA and other U.S. intelligence agencies are developing neural networks to facilitate the retrieval of intelligence information from open sources [35].

Censorship

Russia’s ruling class is already using neural networks to strengthen Internet censorship: AI-based technology analyzes and classifies information and texts for “illegal content.”

In addition, the RKN (Roskomnadzor - Federal Service for Supervision of Communications, Information Technology and Mass Media) plans to use neural networks to maintain a register of “prohibited information” and personal data operators. Thus, the Russian state plans to automate the process of control and restriction of information with the help of neural networks [36].

Propaganda

States not only monitor their residents, but also manipulate public opinion. According to a report by Freedom House, AI is often used for political purposes [37].

Neural networks are used to create fake content: interviews, speeches and events. Supporters of Donald Trump, who has raised concerns about the dangers of AI, have used neural networks to create fake content for political purposes.

Fake photo of Trump with black voters

2.5 Neural Networks and War

States are showing great interest in the application of neural networks and AI for military purposes. Active implementation of AI in the military sphere is taking place in all countries. The U.S. is particularly successful in this, as it has been increasing military development budgets in the field of AI from year to year: $0.6 billion in 2016, $2.5 billion in 2021, and $3.2 billion in 2024.

It is known that in 2021 there were about 600 AI projects for military purposes in the United States. AI is applied in the widest variety of forms, including assistance in aircraft control, work with mines and explosives, tracking the enemy, patrolling areas with AI-controlled robots, etc. [39].

In 2022, China created AI to calculate the trajectory of hypersonic missiles in order to predict their location at a given time and to be able to shoot down such a missile by air defense means [40].

There are already examples of AI combat application in practice. In 2021, the Israeli army used a squadron of AI-controlled drones in combat [41].

In 2020, during the Libyan Civil War, an AI-controlled drone, without direct human command, following a program in autonomous mode, struck an enemy convoy [42].

AI is also beginning to be used in combat operations in Ukraine. The United States is supplying Ukraine with Switchblade drones capable of autonomously selecting a target. The Russian Armed Forces also have similar Lancet drones. It is known that a Russian S-350 air defense system shot down an aerial target while in autonomous mode [43].

Unmanned Switchblade aircraft

However, the use of AI is not limited to direct combat. Neural networks make it easier and faster to obtain intelligence, analyze combat, and help commanders make faster decisions, create an operation plan based on the experience of previous battles, etc.

For example, since 2017, the U.S. has been using the Maven system to recognize enemy faces and search for information about them - the system has found its application in combat operations against ISIS**. The U.S. Primer system allows for analyzing intercepted enemy communications and quickly interpreting them to obtain intelligence.

Neural networks can even provide options for the development of the operational situation several days in advance. Not surprisingly, in anticipation of a flood of military orders, OpenAI removed from its website the mention of the prohibition of using ChatGPT for military purposes [45].

The latter is of interest due to the appointment of retired US Army General Nakasone to OpenAI's board of directors. In all likelihood, the company will be heavily involved in military developments. Even the current “civilian” version of ChatGPT already has a large potential benefit for the military. A study by the University of Tennessee cites 34 possible applications of ChatGPT for the military: from personnel training and medical care to intelligence processing and information warfare [46].

The military is attracted by such a powerful tool for automating data processing. We should expect the mass introduction of neural networks in all armies of the world in the near future.

All the moralizing and lamentations of bourgeois politicians about the dangers of AI are misguided and come far too late. AI has already been in military use for several years, aiding combat operations in various countries, and its application will only expand.

Thus, neural networks and artificial intelligence under capitalism have the widest application: automation of production, political censorship, propaganda, aiding military operations, etc. All objectively progressive aspects from the introduction of AI are shrouded in the shadow of many problems and contradictions. Most of these problems are related exclusively to the capitalist structure of society. But how will things be under socialism?

3. Possibilities of AI under Socialism

Any automation can both increase the exploitation of labor and significantly free a person from the production process, increasing their free time.

Whereas under capitalism new technology leads to the loss of jobs, under socialism it leads to shorter working hours, increased material well-being, and the opportunity to devote oneself to satisfying one's other needs. The reason for this is the organization of the production of material goods, which is expressed in private property under capitalism and public property under socialism.

AI technologies, like other ways of automating labor, should carry with them the liberation of humans from routine and exhausting work for more creative activities.

The history of socialism in the USSR has shown in practice the advantages of the socialist organization of production. Using the same natural and human resources as the former Russian Empire, the Soviet Union was able to become an advanced power in all high-tech industries: machine building, aviation, space industry, nuclear power, etc., remaining a leader in these areas even during the years of its stagnation and disintegration. This is something neither the Russian Empire nor the modern countries of the former USSR can boast of.

There is no doubt that artificial intelligence technologies will receive their fullest development and application under socialism.

The peculiarity of the socialist economy is that it is highly centralized and built according to a single plan. Technological progress has provided another remarkable tool in the form of neural networks for forecasting, planning, controlling, and accounting for production. The same technologies that under capitalism increase the anarchy of production, force some corporations to fight against others, classify their developments, waste resources on developing the same technologies in different corporations, cause an overproduction crisis, etc., will play an extremely positive role under socialism. Socialism has already been able to prove this in practice by showing that any technology can be turned to the benefit of all people if it is in the hands of the working people.

Under capitalism, neural networks and AI are used to consolidate the dictatorship of capital, spy on citizens, maintain the control of capital over society, and pursue imperialist wars. Under socialism, AI will serve to democratize and improve society by facilitating governance and the participation of every citizen in social processes.

While the development of neural networks under capitalism only intensifies the fundamental social contradictions, under socialism neural networks will reveal their full potential. The capitalist class is becoming more and more of a burden for society, increasingly hindering social progress and slowing down the introduction of new technologies. Therefore, each new round of technical progress makes the position of the bourgeoisie more and more precarious.

In the end, it becomes quite clear who exactly is threatened by neural networks.

*A social network banned on the territory of the Russian Federation

**A terrorist organization banned on the territory of the Russian Federation

Sources

  1. RBC – OpenAI: history of the neural network development company ChatGPT – from November 19, 2023
  2. Forbes – OpenAI responded to Musk's lawsuit and talked about his attempts to gain control of the company – from March 6, 2024
  3. Forbes – Microsoft announced new multi-billion dollar investment in ChatGPT creator OpenAI – from January 24, 2023
  4. RBC – OpenAI decided to become the most expensive American startup after SpaceX – from December 23, 2023
  5. The New York Times – OpenAI Completes Deal That Values the Company at $80 Billion – from February 16, 2024
  6. TASS - Elon Musk and more than 1,000 other AI experts demanded a ban on training neural networks – from March 29, 2023
  7. RBC – Musk announced the creation of xAI company for developments in the field of AI – from July 13, 2023
  8. FOX Business – Elon Musk's xAI selects southern city for 'world's largest' supercomputer site – from June 16, 2024
  9. Forbes – Elon Musk sued OpenAI and Altman over partnership with Microsoft – from March 1, 2024
  10. Financial Times – Elon Musk sues OpenAI and Sam Altman for breach of contract
  11. Forbes – Elon Musk has withdrawn his lawsuit against OpenAI and Sam Altman – from June 12, 2024
  12. New York Post – Ex-Google exec Mo Gawdat warns AI will view humans as ‘scum,’ could create ‘killing machines’ – from May 18, 2023
  13. Forbes – Stephen Hawking saw artificial intelligence as a threat to the destruction of humanity – from April 28, 2017
  14. Life – Trump said artificial intelligence could lead to a third world war – from October 7, 2023
  15. Fontanka.ru – “And who are these people running around?” Sobyanin and Gref argued about the dangers of artificial intelligence – from June 15, 2023
  16. Horn – Analysis of a document about AGI from Leopold Aschenbrenner, a former OpenAI employee – from June 5, 2024
  17. Habr – Time Investigation: OpenAI Used Kenyan Workers for $2 an Hour to Make ChatGPT less toxic – from January 19, 2023
  18. Time –  Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic – from January 18, 2023
  19. ResumeBuilder.com – 1 in 4 companies have already replaced workers with ChatGPT – from December 20, 2023
  20. Bloomberg – Job Cuts From AI Are Just Beginning, the Latest Challenger Report Suggests – from June 1, 2023
  21. Forbes – Designers, officials, marketers: who else may soon lose their jobs due to AI – from June 30, 2023
  22. YahooNews – Joe Biden tells coal miners they should 'learn to program' – from December 31, 2019 
  23. The Ringer – “Learn to Code”: The Meme Attacking Media – from January 29, 2019
  24. CNBC – Most outsourced coders in India will be gone in 2 years due to A.I., Stability AI boss predicts – from July 18, 2023
  25. RG.ru – Users outraged by Indian CEO's bragging about replacing 90 percent of employees with chatbots – from July 13, 2023
  26. Shazoo – The striking actors claim movie studios wanted to use their images without remuneration – from July 14, 2023
  27. Political assault – Hollywood screenwriters and actors strike: results and significance – from January 20, 2024
  28. Securitylab.ru – Writers Guild vs. ChatGPT: How generative models caused thousands of Americans to join a mass strike – from May 4, 2023
  29. The Verge – AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit – from January 16, 2023
  30. Gazeta.ru – Artists have started selling AI-created works to stock services, passing them off as their own – from September 19, 2022
  31. NEWS.ru – ChatGPT reveals sensitive data after one request – from December 10, 2023
  32. Izvestia – Secrecy in correspondence: experts have found the key to personal data from ChatGPT – from December 10, 2023
  33. Gazeta.ru – Amazon and the CIA are preparing artificial intelligence to analyze satellite photos – from August 29, 2016
  34. Rambler – Facepalms and computer vision. Scientists from Florida have created an algorithm for analyzing photographs with a hand near the face – from August 22, 2017
  35. RUNYnew – The CIA says it intends to use a neural network to analyze data – from September 27, 2023
  36. Kommersant –  The Internet will be traversed with a neural network – from April 10, 2024
  37. Forbes – Freedom On The Net Report Highlights Dangers Of AI – from October 4, 2023
  38. People's newspaper – Neurocomics
  39. Kommersant – AI called the third revolution in military affairs – from September 15, 2021
  40. South China Morning Post – Chinese researchers say they have developed AI to predict course of hypersonic missiles – from January 1, 2022
  41. Kommersant – A smart swarm flies into battle – from July 4, 2021
  42. Forbes –The inhuman solution: How the development of combat robots has reached a dangerous line – from June 8, 2021
  43. RIA Novosti – The S-350 shot down a Ukrainian Armed Forces aircraft for the first time in full automatic mode, a source said – from May 24, 2023
  44. Naked Science – The Pentagon tested the ability of AI to predict the operational environment days in advance – from August 2, 2021
  45. The Intercept – OpenAI Quietly Deletes Ban On Using ChatGPT For “Military and Warfare” – from January 12, 2024
  46. S. Biswas – Prospective Role of Chat GPT in the Military: According to ChatGPT – from February 2023
  47. Financial Times – Apple boosts plans to bring generative AI to iPhones – from January 24, 2024