Impact of Artificial Intelligence on Environmental Quality through Technical Change: A Free Dynamic Equilibrium Approach
Abstract
Humanity today faces unprecedented environmental challenges. The emergence of artificial intelligence (AI) has opened new doors in our collective efforts to address our planet's pressing problems; however, many doubt the actual extent of the impact AI can have on the environment. In particular, the fact that AI also assists dirty production is a drawback that is largely absent from the literature. To investigate the impact of AI on the environment, we build mathematical models of the economy and of the production of goods based on outdated and advanced technologies. Secondary results are stated as lemmas, and the main results are stated as theorems. From the theorems we conclude that AI may not on its own prevent an environmental disaster, one reason being its concurrent contribution to dirty production. With temporary government intervention, however, AI is able to avert an environmental disaster.
Keywords: AI, environment, disaster, mathematical model, optimal, government.
Funding: The authors did not receive support from any organization for the submitted work.
Conflict of Interest: The authors have no relevant financial or non-financial interests to disclose.
Ethical Approval: This article does not contain any studies with human participants or animals performed by any of the authors.
1 Introduction
The increase in the average global temperature on Earth is one of the main factors driving climate change (Stern & Robert, 2014). By 2020, average annual temperatures had increased by 1.25 degrees Celsius over the past century, and by about 0.54 degrees Celsius over the past 30 years alone (NASA, 2023).
Climate change causes significant damage to the economy and to human well-being. Regarding the economy, a wider temperature range reduces capital investment and thus hinders economic growth (Lu et al., 2019). Climate change disproportionately affects poorer countries (Tol, 2018), worsening global inequality and making it even more difficult for countries to escape poverty. It is also a main driver of some extreme weather events such as cyclones and storms (Clarke, 2022), which can destroy physical capital such as agricultural land, factories and buildings (Batten, 2018).
Regarding our physical well-being, major storms (whose increasing frequency is driven by climate change) can cause serious injuries as people are hit by flying debris (Alderman, Turner & Tong, 2012). Moreover, extreme heat increases ground-level ozone concentrations, which can result in breathing problems and lung diseases (Crimmins, 2016).
Regarding our mental health, chronic weather conditions such as droughts undermine food security, resulting in increased stress and anxiety (Manning & Clayton, 2018). High temperatures are even associated with higher suicide rates (Lawrance, 2021). Meanwhile, extreme events such as hurricanes can cause long-lasting trauma and depression due to injury to oneself or loved ones, property destruction and adverse living conditions (Galea, 2007).
With such extreme consequences, climate change must be addressed, and to achieve this goal, we need to evaluate its causes. This is not an easy task and the literature is still growing, yet the vast majority of research agrees that anthropogenic climate change has a significant impact on our ecosystems (see e.g. Rosenzweig et al., 2008; Kaufmann et al., 2011; Hansen & Stone, 2016).
The fact that climate change is highly likely anthropogenic presents a unique challenge: many attempts at slowing down climate change run against economic growth, the main engine of the vast majority of economies. In fact, some are so concerned with this challenge that they believe it cannot be solved, and propose degrowth instead. In a significant portion of the literature (e.g. Kallis, 2011; Sekulova et al., 2013; Morgan, 2020), authors argue that attempts to phase out dirty energy and production cannot succeed while the world is still chasing GDP growth.
Within this discussion, artificial intelligence (AI) is receiving more and more attention as a potential solution that could tip the scale towards sustainable growth, for many AI applications have shown significant benefits for the environment without impeding economic growth. AI is used for climate modeling and prediction (Chantry et al., 2021). AI-driven systems optimize renewable energy harvesting, improving efficiency and sustainability in the energy sector (Patel et al., 2022). In smart buildings, AI manages energy consumption, reducing costs and emissions (Farzaneh et al., 2021). In agriculture, smart farming leverages sensors and data analytics to maximize crop yields while conserving land and water resources (Hemming et al., 2020), and early detection of crop diseases and pests allows a reduction in toxic pesticide usage (Selvaraj et al., 2019).
AI also plays a large role in enhancing economic growth. We show in section 2 that, with an abundance of different AI and machine learning models, AI provides a wide-ranging and effective tool set for solving problems in research, thus increasing the likelihood of success and the real-world impact of research. This translates into improvements in productivity and efficiency in industry. Developments in AI are therefore essentially supply-side policies that enhance economic growth. Significantly, this also signals that environmental protection and economic growth are not mutually exclusive and can go hand in hand. Thus, a market equilibrium that heavily involves AI may actually improve environmental quality, and policies to correct externalities may not be necessary.
However, critics have raised concerns about the actual ability of AI to drive economic growth and environmental protection.
Regarding economic growth, although the literature generally agrees that AI has a positive impact on GDP, it disagrees on the extent, owing to the limitations of AI. We will argue in section 2 that these limitations are likely to be addressed in the future, and thus AI will likely have a large positive impact on the economy.
Regarding environmental protection, the impact of AI in addressing climate change is difficult to fully assess, also due to its limitations. A major limitation commonly cited in the literature is the large carbon footprint of training AI models (Tamburrini, 2022).
Our paper contributes to this literature by investigating another, largely unexplored limitation: the extent to which AI accelerates dirty production, and the consequences for the environment. Amid talk of how AI drives clean innovation, we are cautious about the surrounding optimism, because we need to recognise and address the fact that AI can be applied to the dirty sector too.
Research on this is very scarce; we could locate only Andres (2022), who finds that AI innovation benefits clean production three times more than dirty production. Our paper builds on that result and explores whether this advantage allows the clean sector to overtake the dirty sector.
In Acemoglu et al. (2012), the authors prove that an environmental disaster cannot be avoided in economic equilibrium without intervention. We modify the mathematical model of that paper to add the impact of AI on production. We assume a highly optimistic scenario for AI development: that AI progress grows at an exponential rate.
Our paper shows that despite AI's seemingly large potential for the environment, it is uncertain whether AI alone can prevent an environmental disaster in economic equilibrium. We further argue qualitatively that it is unlikely for AI to achieve such a goal. Only when AI is combined with temporary government intervention can it prevent an environmental disaster. We will show in our discussion of the results that the contribution of AI to the dirty sector is a major reason for AI's inability to achieve this goal.
Our paper is structured as follows: part 1 is the introduction, part 2 assesses the impact of AI on the economy, part 3 presents the proposal and builds the model, part 4 presents the results of the research, part 5 discusses the results, part 6 covers the computer simulation of the model, and finally part 7 concludes.
2 Impact of AI on Research and Production
This paper uses a mathematical model in which scientific research improves the quality of machinery, which increases productivity. In this section, we argue that AI can enhance productivity, in order to integrate its impact into our model. We also address criticisms of AI and attempt to quantify its impact based on predictions from other studies.
2.1 How AI Increases Productivity
We explain how AI either supplements scientific research or is itself the product of research, through both of which productivity is improved. Two review articles, by Wang et al. (2023) and Xu et al. (2021), discuss the role of AI through different lenses. Wang et al. focus on the ways and channels through which AI aids research, while Xu et al. explore the fields of research in which AI has major positive impacts.
In Wang et al., the authors explain that AI can assist research at many junctures, which demonstrates how AI supplements scientific research. Some of those junctures are:
• Data generation: AI itself can be used to generate additional data from existing datasets to train the main AI model. Klenn & Bergmann (2019) use this approach to develop an AI model that detects failures in a factory, where raw training data is scarce and inaccessible. This allows for timely maintenance, which reduces downtime and increases productivity.
• Self-supervised learning: When data labels are unavailable and labelling would be too costly, self-supervised learning can be used to learn from the dataset without labels and generate its own features. In Magalhães et al. (2023), self-supervised learning is used for a system that "classifies anomalies in an industrial space", making this task more efficient.
• Hypothesis generation: A hypothesis is an educated guess at the answer to a problem, to be tested by scientific research. However, for complicated, obscure problems it can be difficult to obtain a good guess in the first place. AI can help generate hypotheses by exploring the given data. Liu et al. (2023) use a deep learning model to hypothesise possible antibiotics that are effective at killing a certain bacterium.
• Simulation: In cases where physical experiments are too costly or impractical, computer simulations, many of which are assisted by AI, can be used. Bruzzone & Orsoni (2003) develop an AI-based model that simulates the logistics of a supply chain to evaluate proposed solutions for increasing efficiency.
On the other hand, Xu et al. survey the applications of AI in different fields, showing how research produces AI applications that enhance production. These include:
• Information technology: AutoML is a tool that automates the machine learning development workflow, allowing non-experts in machine learning to still apply it to their problems. For example, de Souza et al. (2021) develop Spectral AutoML specifically to increase the efficiency of soft sensor development.
• Materials science: AI models can be used to predict the properties of new materials, aiding material discovery and design, as scientists can more quickly test the properties of their material designs and tweak them as needed. Yazdani-Asrami et al. (2022) outline how this is done in applied superconductivity, increasing productivity in this task.
• Geoscience: Managing the water resources of an area is challenging, especially with climate change bringing additional uncertainty. AI models, such as the one developed by Xiang et al. (2021), can perform this task more accurately.
These two review articles, combined with the industry-related examples provided above, show the two channels through which AI contributes to increasing productivity. Furthermore, they demonstrate both the breadth and the depth of the impact of AI in research on productivity, as AI is used for multiple tasks in research, across many different fields. Thus, AI has immense potential to propel our scientific progress for production.
In addition, Andres (2022) shows that the clean production sector benefits three times more than the dirty sector from AI innovation. This observation still holds (albeit with a smaller multiplier) after controlling for firm effects, showing that the property results from the inherent difference between clean and dirty production rather than from differences in firms' investment in research and development. Therefore, this trend is highly likely to continue to hold far into the future.
2.2 Evaluation of Criticisms of AI
Despite its large potential, there are still many problems with AI and its application in research and production. Because of this, many scientists have rightfully raised concerns about AI, or cast doubt on the extent of the impact AI can have. To arrive at a more nuanced, better-supported view of the potential of AI, we evaluate some of the common criticisms.
Our central claim is that while these criticisms are largely valid, they are likely to be addressed and mitigated in the future. To argue for this, we will establish two points:

• there is existing research that tackles those criticisms, and

• there will likely be significant progress in these research areas in the future.
We will thus conclude that AI will likely still have a significant positive impact on research and production.
2.2.1 AI can be inaccurate
No AI system is totally accurate. Even the best AI models often reach accuracies of only around 90%, meaning they are wrong in roughly 1 out of 10 cases. This heavily limits the impact of AI on research and industry. Kwon, Raman, & Moreno (2023) show that AI inaccuracy caused by bad training input makes manpower allocation inefficient, demonstrating this impact in an industrial setting. Might this be a permanent problem for AI, one that cannot be overcome?
We believe the answer is no, because in practice AI can be used together with humans to improve efficiency. Wilson & Daugherty (2018) even predict that a future where AI collaborates with humans is more likely than one where AI replaces us. To that end, researchers have put forward many ways to improve human-AI collaboration, including increasing AI confidence (Zhang et al., 2020) and transparency (Vössing et al., 2022), providing descriptions of AI behaviour (Cabrera et al., 2023), and enhancing user decision control (Westphal et al., 2023).
Moreover, the large body of research that already exists shows that human-AI collaboration has the potential to be a highly effective solution to this issue. Recognising this, more researchers will focus on this research area, hoping that their work contributes to a literature that will be applied and have a significant positive impact on the real world. Hence, we believe there will likely be significant progress in human-AI research in the future.
2.2.2 AI based on supervised learning can only learn from, not improve on, the past
Supervised learning is a subset of machine learning, which is in turn a subset of AI. Supervised learning refers to deducing patterns from labelled data in order to perform a task. If the task is classification (e.g. is this image of a cat or a dog?), the label is one of the categories. If the task is regression (e.g. how much should this house cost?), the label is a numerical value.
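The distinction between the two task types can be made concrete with a toy example (all numbers are hypothetical): a minimal 1-nearest-neighbour model works for both, differing only in the kind of label it returns.

```python
# Minimal illustration of supervised learning on labelled data: the same
# 1-nearest-neighbour rule handles classification (categorical label) and
# regression (numerical label). Data points are made up for illustration.

def nearest_label(train, query):
    """Return the label of the training point whose feature is closest to `query`."""
    return min(train, key=lambda pair: abs(pair[0] - query))[1]

# Classification: feature = ear length in cm, label = "cat" or "dog".
animals = [(3.0, "cat"), (4.0, "cat"), (9.0, "dog"), (11.0, "dog")]
print(nearest_label(animals, 3.4))   # closest training point is a cat

# Regression: feature = floor area in m^2, label = price in $1000s.
houses = [(50.0, 150.0), (80.0, 240.0), (120.0, 360.0)]
print(nearest_label(houses, 85.0))   # price of the most similar house
```

In both cases the model can only return labels seen in the training data, which is exactly the "replicating the past" limitation discussed below.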
In Hagendorff & Wezel (2020), the authors focus on supervised learning as a representative of AI because it "is the method used in the vast majority of artificially intelligent applications." They criticise supervised learning models, arguing that because such models are trained on past, existing data, they are only able to replicate the past rather than come up with novel and creative ideas. They rebut counterexamples of AI-generated art by claiming that these works are only "deformed re-configurations of existing art works." This "status quo" preference is problematic if it reinforces the very issues we are trying to change. For example, in an attempt to use AI to eliminate human bias in some justice systems, the AI actually reflected those existing biases (Malek, 2022).
We largely agree with the authors' use of supervised learning as a representative of AI, as well as with their criticism of supervised learning. However, we also believe this approach makes the authors' argument less applicable to a future in which self-supervised learning is becoming a new frontier of AI research.
Self-supervised learning is another form of machine learning used for tasks similar to those of supervised learning, but one that uses unlabelled data for model training. This allows the model to deduce its own connections and reach conclusions that can be novel to humans, reflecting an attempt to move away from and improve on the status quo. For example, Sirotkin (2022) shows that social bias can be reduced with a careful choice of self-supervised model.
Moreover, we believe that self-supervised learning is highly likely to see significant progress in the future and might even replace supervised learning in many applications. Bergmann (2023) notes that acquiring labelled datasets for training can be difficult, since there may be no existing dataset, and data labelling is a long, expensive and tedious manual process. Hence, self-supervised learning is an excellent alternative when acquiring labelled datasets is challenging. With machine learning being employed in ever more diverse and challenging tasks with fewer labelled datasets available, self-supervised learning will be the new frontier in AI development, and research will surely focus more on this area in the future.
2.2.3 AI faces difficulties in harder tasks
In his discussion of AI's contribution to the macroeconomy, Acemoglu (2024) differentiates the tasks AI is set to solve into easy and hard tasks. Easy tasks, he writes, "are defined by two characteristics:

• there is a simple (low-dimensional) mapping between action and the (perfect) outcome measure, and

• there is a reliable, observable outcome metric."
Hard tasks are the opposite, meaning they either are highly complex, or their outcomes cannot be reliably or fully observed. Acemoglu reasonably argues that these difficulties heavily hinder the ability of AI to effectively solve the tasks, and hence questions the extent of contribution of AI to productivity and growth.
It must be noted, however, that Acemoglu chooses to focus only on the short-term future of the next 10 years. In the longer term, which is the scope of our article, we claim that there are already solutions addressing both the complexity and the observability issues, with significant progress likely in the future.
Regarding the complexity of tasks, a review paper by Joksimovic (2023) finds that human-AI collaboration has been a main solution explored in a number of papers, and that this work has seen an increase in quality over time. The argument that human-AI collaboration will likely see significant progress in the future was laid out in section 2.2.1.
Regarding reliable observability of results, reinforcement learning and its modifications have been used to solve problems in partially observable environments. In simple terms, a reinforcement learning model involves an actor and an environment: in each turn, the actor chooses an action to perform, in so doing interacting with and changing the environment, and is granted a reward by the environment. The actor's goal is usually to maximise the sum of rewards over time.
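The actor-environment loop just described can be sketched with a minimal tabular Q-learning example; the two-state environment, rewards and parameters below are illustrative assumptions, not a model taken from any of the cited papers.

```python
import random

# Minimal actor-environment loop: tabular Q-learning on a tiny two-state
# environment (hypothetical). In each turn the actor picks an action, receives
# a reward, and the environment moves to a new state; the actor learns to
# maximise the discounted sum of rewards. Action 1 is better in both states.

random.seed(0)
REWARD = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.0, (1, 1): 2.0}
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, gamma, eps = 0.5, 0.9, 0.1    # learning rate, discount, exploration

state = 0
for _ in range(2000):
    # epsilon-greedy choice: mostly exploit, occasionally explore
    if random.random() < eps:
        action = random.choice((0, 1))
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])
    reward = REWARD[(state, action)]
    next_state = action               # the chosen action moves the environment
    best_next = max(Q[(next_state, a)] for a in (0, 1))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# The learned greedy policy picks the high-reward action in both states.
policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in (0, 1)}
print(policy)
```

The approaches cited below extend this basic loop to settings where the state is only partially observable, e.g. by learning a model of the environment or by adding a trained critic.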
Much progress has been made to improve reinforcement learning in partially observable environments. Spaan (2006) proposes approximate planning in his thesis. Choi and Kim (2011) explore recovering the reward function of the environment. Muškardin et al. (2023) go further and suggest learning the entire environment model. Srinivasan, Lanctot et al. (2018) apply the actor-critic model to this task, where the actor employs a 'critic' model (itself trained) to evaluate the actor's actions.
As with human-AI collaboration, the large number of papers and approaches on reinforcement learning for partially observable environments shows that it is a highly promising solution to the issue at hand, one that researchers recognise and focus their research on, fuelling even more progress in the field.
2.3 Quantifying the Impact of AI
To quantify the impact of AI on production, we will assume that the percentage increase in production of the final good is the same as the percentage increase in GDP. We then review the literature on predictions of impact of AI on GDP growth, and find that they vary significantly. Acemoglu (2024) noted that quantifying the impact of AI on the economy is “extremely difficult and will have to be based on a number of speculative assumptions,” explaining this situation.
To name a few, Acemoglu (2024) takes a relatively cautious stance, predicting a contribution of only 0.9% over the next 10 years. Others are more optimistic: a Goldman Sachs (2023) headline reads "Generative AI could raise global GDP by 7%", over a 10-year period.
Some envision significantly larger growth. McKinsey & Company (2023) predicts a $2.6-$4.4 trillion annual contribution to the global economy. Noting that the 2023 global GDP is $105.44 trillion (World Bank, 2024), this contribution corresponds to an average of 3.32% annual growth, or an estimate of 33.2% growth over 10 years.
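The conversion from McKinsey's dollar figure to a growth rate can be checked in a few lines (figures taken from the text; the decade figure multiplies the annual rate by ten rather than compounding):

```python
# Midpoint of McKinsey's $2.6-$4.4 trillion annual contribution, relative to
# the $105.44 trillion 2023 global GDP (World Bank, 2024).
midpoint = (2.6 + 4.4) / 2              # $3.5 trillion per year
annual_pct = midpoint / 105.44 * 100    # ~3.32% annual growth
decade_pct = annual_pct * 10            # ~33.2% over 10 years, not compounded
print(round(annual_pct, 2), round(decade_pct, 1))
```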
Korinek and Suh (2024) even claim a 100% GDP growth over 10 years with the help of Artificial General Intelligence. For comparison, global GDP increased by 35.6% over the decade from 2013 to 2023 (World Bank, 2024); under this claim, AI would account for an additional 64.4 percentage points of growth in a decade.
In light of such varied predictions, we will use all four of the above values in checking whether our result on avoiding an environmental disaster holds under them, and in conducting computer simulations.
3 The Model
The framework uses an economy in which time is discrete and runs from zero to infinity. There is a unique final good, produced competitively using inputs made with either clean or dirty technology. Scientists conduct research to improve the quality of machines and thereby increase production. Dirty production degrades the environment, which past a certain point amounts to an environmental disaster.
3.1 The Economy
At time t, the economy produces Y_t units of the unique final good using clean and dirty inputs Y_{ct} and Y_{dt} respectively, according to the aggregate production function:

(1) Y_t = ( Y_{ct}^{(ε−1)/ε} + Y_{dt}^{(ε−1)/ε} )^{ε/(ε−1)}

where ε is the elasticity of substitution between the clean and dirty sectors. We ignore the distribution parameter for simplicity. Throughout the paper, we assume that ε > 1, meaning that clean and dirty inputs are gross substitutes.
The two inputs Y_{ct} and Y_{dt} are produced using labour and a continuum of machines that are sector-specific, meaning they can only be used in one sector, clean or dirty. They are given by:

(2) Y_{jt} = L_{jt}^{1−α} ∫_0^1 A_{jit}^{1−α} x_{jit}^{α} di

where α ∈ (0, 1), i ∈ [0, 1] indexes machine types, A_{jit} is the quality of the machine of type i used in sector j ∈ {c, d} at time t, and x_{jit} is the quantity of this machine.
Market clearing for labour means that labour demand does not exceed labour supply. Normalising the latter to 1, we get:

(3) L_{ct} + L_{dt} ≤ 1
Market clearing for the final good means:

(4) C_t = Y_t − ψ ∫_0^1 ( x_{cit} + x_{dit} ) di

where, at time t, C_t is the consumption of the final good, and ψ is the number of final good units needed to produce a machine.
Let p_{ct} and p_{dt} be the prices of clean and dirty inputs respectively. Since clean and dirty inputs are used competitively to produce the final good, we have:

(5) p_{ct}/p_{dt} = ( Y_{ct}/Y_{dt} )^{−1/ε}
In addition, by normalising the price of the final good to one, we get:

(6) ( p_{ct}^{1−ε} + p_{dt}^{1−ε} )^{1/(1−ε)} = 1
3.2 The Innovation Process
Scientists conduct research with the help of AI to improve the quality of machines. Scientists can choose whether to research in the clean or the dirty sector based on the higher expected profit. We assume that the quality of AI for each sector increases at a fixed rate per decade. This means that AI quality in sector j at time t, denoted B_{jt}, grows exponentially.
Since AI contributes to the clean sector three times more than to the dirty sector, we model B_{jt} as:

(7) B_{ct} = e^{b_c t},  B_{dt} = e^{b_d t},  with b_c = 3 b_d,

where b_c and b_d denote the rates of development of AI in the clean and dirty sectors respectively. Note that there is no additional constant term accompanying the exponential because we assume that at t = 0, AI has negligible impact on production, meaning B_{j0} = 1, yielding a constant term of 1.
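As a rough numerical sketch of this assumption, the snippet below computes exponential AI-quality trajectories normalised to 1 at t = 0, with the clean-sector rate set to three times the dirty-sector rate; the numeric rate itself is an illustrative assumption, and the symbols b_c, b_d, B_{jt} follow the notation used here.

```python
import math

# Illustrative exponential AI-quality paths, normalised so B_{j0} = 1.
b_d = 0.10          # dirty-sector AI growth rate per decade (assumed value)
b_c = 3 * b_d       # clean sector benefits three times more, per the text

def ai_quality(rate, t):
    """AI quality B_{jt} = exp(rate * t); equals 1 at t = 0 by construction."""
    return math.exp(rate * t)

for t in range(0, 4):   # t measured in decades
    print(t, round(ai_quality(b_c, t), 3), round(ai_quality(b_d, t), 3))
```

The gap between the two trajectories widens over time, which is the mechanism that later boosts the relative profitability of clean research.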
The probability that a scientist is successful in innovation in sector j is η_j. If successful, the research improves the quality of a machine in the dirty sector by a factor of 1 + γ (where γ > 0), from A_{dit−1} to (1 + γ) A_{dit−1}.

In the clean sector, however, the effectiveness of successful research is greater by a factor of B_{ct} due to the impact of AI. The quality of machines thus increases by a factor of 1 + γ B_{ct}, from A_{cit−1} to (1 + γ B_{ct}) A_{cit−1}.
Market clearing for scientists, with s_{jt} denoting the share of scientists researching in sector j, means:

(8) s_{ct} + s_{dt} ≤ 1
We also define the average machine quality in sector j:

(9) A_{jt} = ∫_0^1 A_{jit} di
The process of innovation is then described as:

(10) A_{ct} = (1 + γ B_{ct} η_c s_{ct}) A_{ct−1},  A_{dt} = (1 + γ η_d s_{dt}) A_{dt−1}
3.3 The Environment
Let S_t be the quality of the environment. We let S_t ∈ [0, S̄], where S̄ is the quality of the environment without any human pollution. We also assume that this is the initial level of environmental quality, meaning S_0 = S̄.
The quality of the environment evolves according to:

(11) S_{t+1} = −ξ Y_{dt} + (1 + δ) S_t

when the right-hand side (RHS) is within [0, S̄]. When the RHS is negative, S_{t+1} = 0, and when it is larger than S̄, S_{t+1} = S̄. Here, ξ denotes the extent to which production in the dirty sector has a negative impact on the environment, and δ is the rate at which the environment regenerates itself.
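The dynamics just described, regeneration less dirty-production damage with clamping to the admissible range, can be sketched as follows; all parameter values (and the assumed growth of dirty output) are illustrative assumptions.

```python
# Sketch of the environmental-quality recursion: next-period quality is
# (1 + delta) * S_t minus xi * Y_dt, clamped to [0, S_bar].
S_bar = 1.0                 # pristine environmental quality (assumed scale)
delta, xi = 0.05, 0.01      # regeneration rate and pollution intensity (assumed)

def next_S(S, Y_d):
    raw = (1 + delta) * S - xi * Y_d
    return min(max(raw, 0.0), S_bar)   # clamp into [0, S_bar]

# With dirty output growing exponentially, S eventually hits 0 (a disaster).
S, Y_d, disaster_at = S_bar, 1.0, None
for t in range(1, 500):
    S = next_S(S, Y_d)
    Y_d *= 1.1                         # assumed dirty-sector output growth
    if S == 0.0:
        disaster_at = t
        break
print(disaster_at is not None)
```

The run illustrates the mechanism behind the disaster results of section 5: once dirty output outgrows regeneration, the clamp at zero is reached in finite time.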
For notational convenience, we also define

(12) φ = (1 − α)(1 − ε)
Finally, we define the following term:
Definition 1.
An environmental disaster refers to the event that S_t = 0 for some finite t.
4 Free Dynamic Equilibrium
4.1 Preliminary Setups
We first define the free dynamic equilibrium:
Definition 2.
A free dynamic equilibrium is given by sequences of wages of workers (w_t), prices of inputs (p_{ct}, p_{dt}), prices of machines (p_{jit}), demand for machines (x_{jit}), demand for labour (L_{ct}, L_{dt}), scientist allocations (s_{ct}, s_{dt}) and quality of the environment (S_t) such that, at all times t:
1. (p_{jit}, x_{jit}) maximises the profits of the producer of machine i in sector j.

2. L_{jt} maximises the profits of producers of input j.

3. (Y_{ct}, Y_{dt}) maximises the profits of final good producers.

4. (s_{ct}, s_{dt}) maximises the expected profit of a scientist at date t.

5. The wage w_t and the prices p_{ct}, p_{dt} clear the labour and input markets, respectively.

6. S_t changes are given by (11).
4.2 Production in Equilibrium
This section aims to express the ratio of the expected profits of scientists in the clean and dirty sectors in terms of constant parameters and variables relevant to directed technical change, such as the productivities A_{jt}, the AI qualities B_{jt}, and the scientist allocations s_{jt}.
Firstly, we calculate x_{jit}, the demand for machines of type i in sector j at time t.

Lemma 1.

(13) x_{jit} = ( α² p_{jt} / ψ )^{1/(1−α)} L_{jt} A_{jit}
Proof.
Per Definition 2, condition 2, the profit-maximisation problem of the producer of input j at time t is:

(14) max_{L_{jt}, x_{jit}}  p_{jt} L_{jt}^{1−α} ∫_0^1 A_{jit}^{1−α} x_{jit}^{α} di − w_t L_{jt} − ∫_0^1 p_{jit} x_{jit} di

where p_{jit} is chosen by the producer of machine i in sector j at time t to maximise its profit (per Definition 2, condition 1). Note that the three terms correspond to the revenue from the input produced, the cost of labour and the cost of machines respectively.
To get the inverse demand curve faced by the machine producer, we partially differentiate with respect to (w.r.t.) x_{jit}:

(15) p_{jit} = α p_{jt} A_{jit}^{1−α} L_{jt}^{1−α} x_{jit}^{α−1}

This is an iso-elastic inverse demand curve, and therefore its price elasticity of demand (PED) is pinned down by the power of the price of machine i in the corresponding demand curve, which is −1/(1−α).

According to the mark-up rule, the profit-maximising price for the machine producer is a constant markup over marginal cost by a factor of 1/α.

With the marginal cost being ψ, we have the profit-maximising price p_{jit} = ψ/α. Plugging this into (15) gets us (13). ∎
Next, we calculate the expected profit of a scientist in sector j at time t.

Lemma 2.

Let Π_{jt} be the expected profit of a scientist in sector j at time t. Then:

(16) Π_{ct} = η_c (1 + γ B_{ct}) (1 − α) α^{(1+α)/(1−α)} ψ^{−α/(1−α)} p_{ct}^{1/(1−α)} L_{ct} A_{ct−1},  Π_{dt} = η_d (1 + γ) (1 − α) α^{(1+α)/(1−α)} ψ^{−α/(1−α)} p_{dt}^{1/(1−α)} L_{dt} A_{dt−1}
Proof.
Since the machine producer sells machines at the price ψ/α, and each machine costs ψ to produce, its equilibrium profit is:

(17) π_{jit} = ( ψ/α − ψ ) x_{jit} = (1 − α) α^{(1+α)/(1−α)} ψ^{−α/(1−α)} p_{jt}^{1/(1−α)} L_{jt} A_{jit}

To get the expected profit of a scientist doing research in sector j at time t, we take into account the probability of success η_j and the increase in machine quality if the research is successful. Using equation (9), we get (16). ∎
Lemma 2 shows that the expected profit of a scientist in the clean sector is boosted by the factor 1 + γ B_{ct} and increases with the development of AI. Thus, AI development is important for scientific progress, particularly in the clean sector.
The next lemma calculates the price of machines in terms of productivity in the sectors.
Lemma 3.
(18) p_{ct} = A_{ct}^{−(1−α)} ( A_{ct}^{−φ} + A_{dt}^{−φ} )^{−1/(1−ε)}

(19) p_{dt} = A_{dt}^{−(1−α)} ( A_{ct}^{−φ} + A_{dt}^{−φ} )^{−1/(1−ε)}
Proof.
Partially differentiating (14) w.r.t. L_{jt} and substituting in (13), we get:

(20) w_t = (1 − α) α^{2α/(1−α)} ψ^{−α/(1−α)} p_{jt}^{1/(1−α)} A_{jt}

Since the wage applies to both sectors:

(21) p_{ct}^{1/(1−α)} A_{ct} = p_{dt}^{1/(1−α)} A_{dt}

Combining (21) with the price normalisation (6) then yields (18) and (19). ∎
We then calculate the distribution of labour between the two sectors.
Lemma 4.
(22) L_{ct} = A_{ct}^{−φ} / ( A_{ct}^{−φ} + A_{dt}^{−φ} )

(23) L_{dt} = A_{dt}^{−φ} / ( A_{ct}^{−φ} + A_{dt}^{−φ} )
Noting that φ < 0, this lemma shows that labour is distributed more towards the sector with higher productivity. Research that increases productivity thus also has the effect of attracting labour for production.
The following lemma calculates the production of clean and dirty inputs, in terms of productivity.
Lemma 5.
(25) Y_{ct} = ( α²/ψ )^{α/(1−α)} A_{ct}^{(1−α)ε} ( A_{ct}^{−φ} + A_{dt}^{−φ} )^{−(α+φ)/φ}

(26) Y_{dt} = ( α²/ψ )^{α/(1−α)} A_{dt}^{(1−α)ε} ( A_{ct}^{−φ} + A_{dt}^{−φ} )^{−(α+φ)/φ}
Proof.
We then calculate the production of the final good.
Lemma 6.
(28) Y_t = ( α²/ψ )^{α/(1−α)} ( A_{ct}^{−φ} + A_{dt}^{−φ} )^{−1/φ}
Next, we calculate the total consumption by consumers.
Lemma 7.
(29) C_t = (1 − α²) ( α²/ψ )^{α/(1−α)} ( A_{ct}^{−φ} + A_{dt}^{−φ} )^{−1/φ}
4.3 Innovation in Equilibrium
Finally, we use the lemmas above to calculate the ratio between the expected profits of scientists in the clean and dirty sectors.
Lemma 8.
(30) Π_{ct}/Π_{dt} = [ η_c (1 + γ B_{ct}) / ( η_d (1 + γ) ) ] · [ (1 + γ B_{ct} η_c s_{ct})^{−φ−1} / (1 + γ η_d s_{dt})^{−φ−1} ] · ( A_{ct−1}/A_{dt−1} )^{−φ}
In equilibrium, scientists conduct research in the sector with higher expected profit. The above lemma thus gives rise to the following important theorem on the allocation of scientists.
Theorem 1.
At equilibrium, innovation happens in:

• only the clean sector if and only if

(32) [ η_c (1 + γ B_{ct}) / ( η_d (1 + γ) ) ] (1 + γ B_{ct} η_c)^{−φ−1} ( A_{ct−1}/A_{dt−1} )^{−φ} ≥ 1

• only the dirty sector if and only if

(33) [ η_c (1 + γ B_{ct}) / ( η_d (1 + γ) ) ] (1 + γ η_d)^{φ+1} ( A_{ct−1}/A_{dt−1} )^{−φ} ≤ 1

• both sectors if and only if

(34) [ η_c (1 + γ B_{ct}) / ( η_d (1 + γ) ) ] · [ (1 + γ B_{ct} η_c s)^{−φ−1} / (1 + γ η_d (1−s))^{−φ−1} ] · ( A_{ct−1}/A_{dt−1} )^{−φ} = 1 for some s ∈ (0, 1)
Proof.
We recall that in the free market, scientists research in the sector that brings the higher expected profit. Now, after fixing t, we define:

(35) f(s) = [ η_c (1 + γ B_{ct}) / ( η_d (1 + γ) ) ] · [ (1 + γ B_{ct} η_c s)^{−φ−1} / (1 + γ η_d (1−s))^{−φ−1} ] · ( A_{ct−1}/A_{dt−1} )^{−φ}

Then we can rewrite (30) as Π_{ct}/Π_{dt} = f(s_{ct}).

Firstly, f(1) ≥ 1 is equivalent to s_{ct} = 1 being the equilibrium, which means innovation occurs in only the clean sector if and only if f(1) ≥ 1. This yields (32).

Next, f(0) ≤ 1 is equivalent to s_{ct} = 0 being the equilibrium, meaning innovation only occurs in the dirty sector. This yields (33).

Now assume we have f(0) > 1 > f(1). Note that f is continuous, and depending on the value of φ, f is strictly increasing, strictly decreasing, or constant. The inequality f(0) > f(1) implies that f is strictly decreasing, which means there exists a unique s* ∈ (0, 1) so that f(s*) = 1. This means innovation happens in both sectors if and only if f(s*) = 1 for some s* ∈ (0, 1). Conversely, if innovation occurs in both sectors, then there must exist such an s*. Hence we have proved both directions, yielding (34). ∎
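Numerically, the interior allocation can be found by bisection: assuming, as in the proof, a continuous and strictly decreasing profit ratio f with f(0) > 1 > f(1), there is a unique root of f(s) = 1. The specific f below is a hypothetical stand-in, not the paper's expression.

```python
def solve_interior_allocation(f, tol=1e-10):
    """Bisection for the unique s* in (0, 1) with f(s*) = 1, given that f is
    continuous and strictly decreasing with f(0) > 1 > f(1)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 1.0:
            lo = mid      # clean research still more profitable: s* lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical decreasing profit ratio, standing in for the expression in (30).
f = lambda s: 2.0 / (1.0 + 3.0 * s)
s_star = solve_interior_allocation(f)
print(round(s_star, 6))   # f(s*) = 1  =>  s* = 1/3
```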
5 The Impact on The Environment
We will show, and give an intuition for, why AI alone, without government intervention, may not be able to avert an environmental disaster. Only when AI is used in tandem with a sufficiently large but temporary government intervention will an environmental disaster be avoided.
5.1 Environmental Disaster Might Not be Averted
Without AI, Acemoglu et al. (2012) show that when clean technology is sufficiently backwards compared to dirty technology and the inputs are gross substitutes (ε > 1), an environmental disaster necessarily occurs in economic equilibrium without government intervention. We will show that with AI, the situation is not so different:
Theorem 2.
If for any such that:
then for all .
Proof.
We will use induction: Consider a sufficiently large time where . According to (33), this means that:
Let the left hand side be . Consider , taking note that and :
We then have the ratio:
(36) |
At this point, we prove the following two lemmas:
Lemma 9.
Proof.
The expression to be proven is equivalent to:
which is trivially true. ∎
Lemma 10.
If:
then:
Proof.
Noting that , the given condition of implies:
and we are done. ∎
We return to our main Theorem. Using Lemmas 9 and 10, we conclude that for all as given in the Theorem. Thus equation (33) still applies for , and . Induction completes our proof.
∎
This result differs slightly from Acemoglu et al. (2012) in that it is still possible for AI development to change the direction of innovation from the dirty to the clean sector. However, any such change either happens sufficiently early, or does not happen at all.
This result unfortunately leads to a pessimistic prediction:
Theorem 3.
If and for any such that:
then for all sufficiently large .
Proof.
Theorem 2 shows that, for all sufficiently large , research occurs only in the dirty sector. This means that for all sufficiently large :
which means that grows at a rate of . In other words:
Now, taking ln of both sides of (26):
Partially differentiating w.r.t. , we get the following for sufficiently large :
In the long run, tends to infinity. Since , tends to 0. With being a constant, tends to 0.
Plugging that into the above equation, we get in the long run. This means that the growth rate of is , implying that grows to infinity.
As a result, for all sufficiently large . Equation (11) then gives:
for all sufficiently large , resulting in an environmental disaster. ∎
5.2 Why Environmental Disaster Avoidance is Uncertain
We now provide an intuition for why AI may not be able to avert an environmental disaster. There are two forces at play in our model.
The first is that AI has the potential to benefit clean research 3 times more than dirty research. This means that, for a successful innovation, the percentage increase in machine quality is much larger in the clean sector. As a result, the expected profit of a researcher in the clean sector rises significantly relative to the dirty sector. This might incentivise researchers to switch to the clean sector, which increases clean productivity and drives clean production.
Mathematically, this impact of AI is reflected by the term in equation (16), which is carried to the term in (30). In Theorem 2, when considering how changes over time, this impact is considered in the first term of (36).
However, there is also a second force at play: in reality, AI likely benefits only dirty production at first. Acemoglu et al. (2012) assumes that dirty production is sufficiently far ahead of clean production that all research initially happens in the dirty sector. Thus, until that changes, only dirty production actually benefits from AI.
Considering the allocation of scientists, this aspect is seen in the term in (30). Only would increase, and since this term has a positive exponent, this factor makes dirty research more profitable. In Theorem 2, this impact is reflected in the second term of (36).
We note that even though clean production may not benefit from AI at all from the start, this does not mean that research cannot switch to the clean sector. If the potential impact of AI on clean production (represented by ) grows faster than the actual impact of AI on dirty production (represented by ), the ratio can still increase and become larger than 1, tipping research to the clean sector.
However, this scenario may not happen. For sufficiently large , both the first and second terms of (36) are bounded by constants. This means that, for sufficiently large , there is an upper bound on how fast AI's potential to benefit clean research can increase, and a lower bound on the growth rate of productivity in the dirty sector. In that case, research cannot switch to the clean sector. We also note that in the scenario where the switch does happen, it must happen early, at a small , or it will not happen at all.
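This "early or never" logic can be illustrated with a toy calculation (all parameters and names below are hypothetical, not the paper's calibration): let AI's potential benefit to clean research grow toward a finite ceiling, while dirty productivity compounds geometrically. Their ratio, a stand-in for relative research profitability, then peaks at an early date and decays thereafter:

```python
import math

# Toy illustration of the "early or never" switch (hypothetical numbers).
# clean_boost: AI's potential benefit to clean research, approaching a
#   finite ceiling, mirroring the boundedness discussed for (36).
# dirty_prod: dirty-sector productivity, compounding at a fixed rate.
# The switch to clean research would require the ratio to exceed 1.
ceiling, speed = 5.0, 0.5   # bound and approach speed for the clean boost
dirty_growth = 1.10         # per-period dirty productivity factor

ratios = []
for t in range(30):
    clean_boost = ceiling - (ceiling - 1.0) * math.exp(-speed * t)
    dirty_prod = dirty_growth ** t
    ratios.append(clean_boost / dirty_prod)

peak = max(range(30), key=lambda t: ratios[t])
print(peak, round(ratios[peak], 3))  # the ratio peaks at an early t, then decays
```

With these numbers the ratio peaks within the first few periods and declines afterwards, so if the threshold is not crossed early, it never is.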
5.3 Environmental Disaster Is Averted With Government Intervention
We now prove that AI, with the help of government intervention, allows us to avoid an environmental disaster. We first prove the following theorem, which is essentially a mirror of Theorem 2.
Theorem 4.
If for
then for all .
Proof.
The proof is structurally identical to that of Theorem 2. We will once again use induction: Consider a sufficiently large time where . According to (32), this means that:
Let the left hand side be . Consider , taking note that and :
We then have the ratio:
(37)
Now we prove the following Lemmas:
Lemma 11.
If:
then:
Proof.
Manipulating the given condition of , we get:
Now, note that since , we have . Hence:
This expression is actually equivalent to what we want to prove:
which becomes clear after we add a term to factorise each side:
This completes our proof. ∎
Lemma 12.
If:
then:
Proof.
Noting that , the given condition of implies:
and we are done. ∎
We return to our main Theorem. As in Theorem 2, Lemmas 11 and 12 together with induction complete the proof. ∎
The above Theorem means that if, at a sufficiently far point in the future, all scientists are working in the clean sector, then this remains the case at all times from that point onwards.
This gives rise to the following theorem on the effect of government intervention:
Theorem 5.
If is sufficiently high, then a sufficiently large but only temporary subsidy for the clean sector and/or taxation on the dirty sector will prevent an environmental disaster.
Proof.
A subsidy for the clean sector can be seen as a percentage increase in the expected profit of scientists in the clean sector. Similarly, a taxation on the dirty sector can be interpreted as a percentage decrease in the expected profit of scientists in the dirty sector.
Let the combined effect of the subsidy and/or taxation at time be expressed by so that:
(38)
Assume that is the sufficiently large constant in Theorem 4. Then for all , we set so that . This demonstrates that the government intervention is only temporary.
By Theorem 4, we deduce that for all . This means that , and thus .
The rest of the proof closely mirrors that of Theorem 3. Taking ln of both sides of (26):
Partially differentiating w.r.t. :
Thus , meaning is unchanged for all . Equation (11) then immediately implies that for a sufficiently large , for all . ∎
In essence, the taxation on the dirty sector and/or the subsidy on the clean sector provide a financial incentive for scientists to move from the dirty to the clean sector. This allows innovation to happen in the clean sector instead.
Moreover, recall that there exists a tipping point: if innovation happens only in the clean sector at that point, then innovation will happen only in the clean sector thereafter. Thus, government intervention only needs to continue until this tipping point, and is therefore temporary.
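The tipping-point mechanism can be sketched in a toy allocation model (hypothetical parameters, not the paper's): scientists research whichever sector offers the higher expected payoff, a temporary subsidy multiplies the clean payoff for the first few periods, and successful research compounds the chosen sector's productivity. Once clean productivity overtakes dirty, the lead is self-sustaining and the subsidy can be withdrawn:

```python
# Toy sketch of Theorem 5's mechanism (hypothetical numbers).
# Scientists pick the sector with the higher expected payoff; payoffs
# track each sector's productivity.  A subsidy multiplies the clean
# payoff for the first `subsidy_periods` periods only.
clean, dirty = 1.0, 3.0          # dirty starts ahead, as in the model
step = 1.3                       # productivity gain from successful research
subsidy, subsidy_periods = 4.0, 5

history = []
for t in range(20):
    boost = subsidy if t < subsidy_periods else 1.0
    if boost * clean > dirty:
        clean *= step            # all research flows to the clean sector
        history.append("c")
    else:
        dirty *= step            # all research flows to the dirty sector
        history.append("d")

print("".join(history))  # all "c": clean research persists after the subsidy ends
```

Setting `subsidy = 1.0` (no intervention) in this sketch keeps all research in the dirty sector forever, mirroring the no-intervention case of Theorem 3.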
It should be noted that this is a more promising result than in Acemoglu et al. (2012), as Proposition 3 of that paper shows that temporary subsidies can only prevent an environmental disaster if clean and dirty inputs are strong substitutes.
6 Computer Simulation
6.1 Choices of Parameters
Our parameter choices are mostly similar to Acemoglu et al. (2012), with some minor changes. For simulation constants, we set the period length to 2 and the number of periods to 50 to investigate the next 100 years.
Regarding constants on the economy, and . We normalise without loss of generality.
Regarding constants on the innovation process, we set and so that productivity without AI increases by 2% per year.
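As a quick arithmetic check of this calibration (illustrative variable names, not the paper's notation): a 2% annual productivity gain compounds to roughly 4.04% over one 2-year model period.

```python
# Sketch: convert a 2% annual productivity growth rate (without AI)
# into the equivalent growth factor for one 2-year model period.
# `annual_growth` and `period_years` are illustrative names.
annual_growth = 0.02
period_years = 2

per_period_factor = (1 + annual_growth) ** period_years
print(round(per_period_factor, 4))  # ~1.0404, i.e. ~4.04% per period
```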
The values of and are taken as 'the production of nonfossil and fossil fuel in the world primary energy supply from 2002 to 2006' in quadrillions of British thermal units (Btu). We then calculate the corresponding values of and from equations (25) and (26):
(39)
and then we have .
For , we calculate 4 of its possible values, based on 4 different predictions of how AI will affect GDP. Here, in the interest of simplicity, we equate consumption with GDP. We summarise the predictions below:
| Authors | AI contribution to GDP in next 10 years |
|---|---|
| Acemoglu (2024) | 0.9% |
| Goldman Sachs (2023) | 7% |
| McKinsey & Company (2023) | 33.2% |
| Korinek & Suh (2024) | 64.4% |
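Each 10-year GDP contribution must be mapped into a per-period (2-year) contribution for the simulation. One plausible conversion is geometric compounding, sketched below; this is only an illustration, and the paper's actual mapping of these predictions into its AI parameter may differ.

```python
# Sketch: convert predicted 10-year AI contributions to GDP into
# equivalent per-period (2-year) growth contributions, assuming simple
# geometric compounding.  Illustrative only; the paper's calibration
# may use a different mapping.
predictions = {
    "Acemoglu (2024)": 0.009,
    "Goldman Sachs (2023)": 0.07,
    "McKinsey & Company (2023)": 0.332,
    "Korinek & Suh (2024)": 0.644,
}
period_years, horizon_years = 2, 10

per_period = {}
for source, ten_year_gain in predictions.items():
    per_period[source] = (1 + ten_year_gain) ** (period_years / horizon_years) - 1
    print(f"{source}: {per_period[source]:.4f} per 2-year period")
```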
For environment-related parameters, we first define:
(41)
where, at time , is the increase in temperature compared to the pre-industrial period, and is the CO2 concentration in the atmosphere.
We define the environmental disaster as an increase of in temperature, meaning , giving .
The environmental quality is then defined as:
(42)
Moreover, we define instead, to reflect that the current atmospheric CO2 concentration has increased by 99 ppm.
Finally, we calculate as the CO2 emission per unit of dirty input produced from 2002 to 2006, and so that only half of the CO2 emitted from production contributes to the increase in atmospheric concentration.
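These environmental choices can be made concrete in a short sketch, assuming the standard climate-economics mapping Δ(t) = 3 log₂(C(t)/280) between warming and atmospheric CO2 concentration (consistent with the +99 ppm figure above: 280 + 99 = 379 ppm gives roughly the observed current warming). The variable names are illustrative, not the paper's symbols.

```python
import math

# Sketch of the environmental calibration under the assumed mapping
#   warming = 3 * log2(co2 / 280),
# a standard climate-economics approximation (illustrative names).
PREINDUSTRIAL_PPM = 280.0

def warming(co2_ppm):
    """Temperature increase (deg C) over the pre-industrial period."""
    return 3.0 * math.log2(co2_ppm / PREINDUSTRIAL_PPM)

def co2_at(warming_deg):
    """Invert the mapping: CO2 concentration producing a given warming."""
    return PREINDUSTRIAL_PPM * 2 ** (warming_deg / 3.0)

current = PREINDUSTRIAL_PPM + 99          # 379 ppm, the +99 ppm in the text
print(round(warming(current), 2))         # ~1.31 deg C of current warming
print(co2_at(6.0))                        # 1120.0 ppm at the 6 deg C disaster level
```

Under this mapping, the 6-degree disaster threshold (the cap used in Section 6.2) corresponds to a quadrupling of pre-industrial CO2 concentration.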
6.2 Results
The values of are as follows. We also calculate the lower bound for in Theorem 4, which represents the minimum number of years the government needs to intervene to switch research entirely to the clean sector.
| No. | Authors | AI impact | | Intervention | Avoid disaster? |
|---|---|---|---|---|---|
| 1 | Acemoglu (2024) | 0.9% | 0.0160 | 100 years | No |
| 2 | Goldman Sachs (2023) | 7% | 0.102 | 15 years | Yes |
| 3 | McKinsey & Company (2023) | 33.2% | 0.180 | 8 years | Yes |
| 4 | Korinek & Suh (2024) | 64.4% | 0.201 | 7 years | Yes |
Only the first, most pessimistic prediction results in an environmental disaster; all others show that we will avoid an environmental disaster.
Figure 1 shows that in the first scenario all research is conducted in the dirty sector, as expected. In the other scenarios, research completely switches to the clean sector, but at different times. In the second scenario, this takes around 20 years, while in the third and fourth ones, the change takes place almost immediately.

Naturally, this has a decisive impact on production. Figures 2 and 3 show that when research happens entirely in the dirty sector (as in scenario 1), production of dirty input skyrockets, and the clean input contribution to final good production is virtually zero. In the opposite case, when research happens entirely in the clean sector, production of dirty input is virtually zero, and almost all of the final good is produced from clean inputs.


Figure 4 demonstrates that when research and production switch to the clean sector, consumption grows significantly faster (note that we cap consumption at to avoid overflow issues during simulation). This is because research and production are better able to utilize AI when AI has a larger impact on the clean sector than on the dirty sector.

Figures 5 and 6 show the effect on the environment in these scenarios. Under Acemoglu's prediction, the temperature gradually increases starting from the year (it only stops at 6 degrees Celsius because we cap the temperature increase at that point). As a result, an environmental disaster occurs in around 125 years. It should be noted that during the first 90 years, dirty production is still increasing "behind the scenes", so that by the , its detrimental environmental impact outweighs the regenerative capacity of the environment.
For the other scenarios, our simulation gives almost identical results, with temperature returning to the pre-industrial level and an environmental disaster being avoided.


7 Conclusion
With climate change being one of the most serious and urgent issues of our time, there have been many attempts at new, innovative ways to protect the environment. Many such projects leverage the huge potential of AI, leading some to believe that AI will lead our environmental efforts. However, others are more cautious and doubt the extent to which AI can contribute to the economy and the environment.
This paper partially validates those doubts: it shows quantitatively that AI might not be able to avert an environmental disaster, and argues qualitatively that it is unlikely to do so. Only when paired with temporary government intervention can AI definitively prevent an environmental disaster.
That said, this paper also shows quantitatively that AI allows for more optimistic predictions. Acemoglu et al. (2012) shows that an environmental disaster will definitely occur without government intervention, while with AI, there is a chance that it pushes clean production just enough to prevent such a disaster. Moreover, Acemoglu et al. proves that a temporary subsidy can only prevent an environmental disaster if clean and dirty inputs are strong substitutes. With AI, a temporary subsidy (or taxation on the dirty sector) is guaranteed to achieve that goal.
This paper is one of the earliest attempts at quantifying the impact of AI on the environment. Future studies can improve upon it by considering different growth rates of AI (e.g. logistic growth), exploring different ways to quantify the impact of AI in the model, and proving stronger mathematical results that highlight the potential of AI in protecting the environment.
Code
The code used for the computer simulation is available at the following GitHub repository:
https://github.com/ld-minh4354/Impact-of-AI-on-Environmental-Quality
Declarations
Funding: The authors did not receive support from any organization for the submitted work.
Conflict of Interest: The authors have no relevant financial or non-financial interests to disclose.
Ethical Approval: This article does not contain any studies with human participants or animals performed by any of the authors.
References
- [1] Acemoglu, D., Aghion, P., Bursztyn, L., & Hemous, D. (2012). The environment and directed technical change. American Economic Review, 102(1), 131-166. https://doi.org/10.1257/aer.102.1.131
- [2] Acemoglu, D. (2024). The Simple Macroeconomics of AI (No. w32487). National Bureau of Economic Research.
- [3] Alderman, K., Turner, L. R., & Tong, S. (2012). Floods and human health: a systematic review. Environment international, 47, 37-47.
- [4] Andres, P., Dugoua, E., & Dumas, M. (2022). Directed technological change and general purpose technologies: can AI accelerate clean energy innovation?.
- [5] Batten, S. (2018). Climate change and the macro-economy: a critical review.
- [6] Bergmann, D. (2023). What is self-supervised learning?. IBM. https://www.ibm.com/topics/self-supervised-learning
- [7] Bruzzone, A., & Orsoni, A. (2003, March). AI and simulation-based techniques for the assessment of supply chain logistic performance. In 36th Annual Simulation Symposium, 2003. (pp. 154-164). IEEE.
- [8] Cabrera, Á. A., Perer, A., & Hong, J. I. (2023). Improving human-AI collaboration with descriptions of AI behavior. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 1-21.
- [9] Chantry, M., Christensen, H., Dueben, P., & Palmer, T. (2021). Opportunities and challenges for machine learning in weather and climate modelling: hard, medium and soft AI. Philosophical Transactions of the Royal Society A, 379(2194), 20200083.
- [10] Choi, J. D., & Kim, K. E. (2011). Inverse reinforcement learning in partially observable environments. Journal of Machine Learning Research, 12, 691-730.
- [11] Chui, M., Hazan, E., Roberts, R., Singla, A., Smaje, K., Sukharevsky, A., Yee, L., & Zemmel, R. (2023). The economic potential of Generative AI: The Next Productivity Frontier. McKinsey & Company. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
- [12] Clarke, B., Otto, F., Stuart-Smith, R., & Harrington, L. (2022). Extreme weather impacts of climate change: an attribution perspective. Environmental Research: Climate, 1(1), 012001.
- [13] Crimmins, A., Balbus, J., Gamble, J. L., Beard, C. B., Bell, J. E., Dodgen, D., … & Ziska, L. (2016). The impacts of climate change on human health in the United States: a scientific assessment. The Impacts of Climate Change on Human Health in the United States: A Scientific Assessment.
- [14] Falkowski, P., Scholes, R. J., Boyle, E. E. A., Canadell, J., Canfield, D., Elser, J., … & Steffen, W. (2000). The global carbon cycle: a test of our knowledge of earth as a system. Science, 290(5490), 291-296. https://doi.org/10.1126/science.290.5490.291
- [15] Farzaneh, H., Malehmirchegini, L., Bejan, A., Afolabi, T., Mulumba, A., & Daka, P. P. (2021). Artificial intelligence evolution in smart buildings for energy efficiency. Applied Sciences, 11(2), 763.
- [16] Fok, S. C., & Ong, E. K. (1996). A high school project on artificial intelligence in robotics. Artificial Intelligence in Engineering, 10(1), 61-70. https://doi.org/10.1016/0954-1810(95)00016-X
- [17] Folini, D., Kübler, F., Malova, A., & Scheidegger, S. (2021). The climate in climate economics. arXiv preprint arXiv:2107.06162. https://doi.org/10.2139/ssrn.3885021
- [18] Galea, S., Brewin, C. R., Gruber, M., Jones, R. T., King, D. W., King, L. A., … & Kessler, R. C. (2007). Exposure to hurricane-related stressors and mental illness after Hurricane Katrina. Archives of general psychiatry, 64(12), 1427-1434.
- [19] Goldman Sachs (2023). Generative AI could raise global GDP by 7%. https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
- [20] Hagendorff, T., & Wezel, K. (2020). 15 challenges for AI: or what AI (currently) can’t do. Ai & Society, 35(2), 355-365.
- [21] Hansen, G., & Stone, D. (2016). Assessing the observed impact of anthropogenic climate change. Nature Climate Change, 6(5), 532-537.
- [22] Hemming, S., Zwart, F. D., Elings, A., Petropoulou, A., & Righini, I. (2020). Cherry tomato production in intelligent greenhouses—Sensors and AI for control of climate, irrigation, crop yield, and quality. Sensors, 20(22), 6430.
- [23] Hritonenko, N., & Yatsenko, Y. (2009). Mathematical models of global trends and technological change. MATHEMATICAL MODELS–Volume III, 2, 303
- [24] Jaffe, A. B., Newell, R. G., & Stavins, R. N. (1999). Energy-efficient technologies and climate change policies: issues and evidence. Available at SSRN 198829. https://doi.org/10.2139/ssrn.198829
- [25] Joksimovic, S., Ifenthaler, D., Marrone, R., De Laat, M., & Siemens, G. (2023). Opportunities of artificial intelligence for supporting complex problem-solving: Findings from a scoping review. Computers and Education: Artificial Intelligence, 4, 100138.
- [26] Kallis, G. (2011). In defence of degrowth. Ecological economics, 70(5), 873-880.
- [27] Kaufmann, R. K., Kauppi, H., Mann, M. L., & Stock, J. H. (2011). Reconciling anthropogenic climate change with observed temperature 1998–2008. Proceedings of the National Academy of Sciences, 108(29), 11790-11793.
- [28] Klein, P., & Bergmann, R. (2019, July). Generation of Complex Data for AI-based Predictive Maintenance Research with a Physical Factory Model. In ICINCO (1) (pp. 40-50).
- [29] Korinek, A., & Suh, D. (2024). Scenarios for the Transition to AGI (No. w32255). National Bureau of Economic Research.
- [30] Kotlikoff, L., Kubler, F., Polbin, A., & Scheidegger, S. (2021a). Pareto-improving carbon-risk taxation. Economic Policy, 36(107), 551-589. https://doi.org/10.1093/epolic/eiab008
- [31] Kotlikoff, L., Kubler, F., Polbin, A., & Scheidegger, S. (2021b). Making carbon taxation a global win-win. No Brainers and Low-Hanging Fruit in National Climate Policy. Retrieved March 15, 2024, from https://cepr.org/system/files/publication-files/110107-no_brainers_and_low_hanging_fruit_in_national_climate_policy.pdf#page=234
- [32] Kotlikoff, L. J., Kubler, F., Polbin, A., & Scheidegger, S. (2021c). Can today’s and tomorrow’s world uniformly gain from carbon taxation? (No. w29224). National Bureau of Economic Research. https://doi.org/10.3386/w29224
- [33] Kwon, C., Raman, A., & Moreno, A. (2023). The Impact of Input Inaccuracy on Leveraging AI Tools: Evidence from Algorithmic Labor Scheduling. Available at SSRN.
- [34] Lawrance, E., Thompson, R., Fontana, G., & Jennings, N. (2021). The impact of climate change on mental health and emotional wellbeing: current evidence and implications for policy and practice. Grantham Institute briefing paper, 36, 1-36.
- [35] Liu, G., Catacutan, D. B., Rathod, K., Swanson, K., Jin, W., Mohammed, J. C., … & Stokes, J. M. (2023). Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nature Chemical Biology, 19(11), 1342-1350.
- [36] Lu, S., Bai, X., Zhang, X., Li, W., & Tang, Y. (2019). The impact of climate change on the sustainable development of regional economy. Journal of Cleaner Production, 233, 1387-1395.
- [37] Malek, M. A. (2022). Criminal courts’ artificial intelligence: the way it reinforces bias and discrimination. AI and Ethics, 2(1), 233-245.
- [38] Manning, C., & Clayton, S. (2018). Threats to mental health and wellbeing associated with climate change. In Psychology and climate change (pp. 217-244). Academic Press.
- [39] Morgan, J. (2020). Degrowth: necessary, urgent and good for you. Real-World Economics Review, (93), 113-131.
- [40] Muškardin, E., Tappler, M., Aichernig, B. K., & Pill, I. (2023, November). Reinforcement learning under partial observability guided by learned environment models. In International Conference on Integrated Formal Methods (pp. 257-276). Cham: Springer Nature Switzerland.
- [41] NASA. (2023). Global Temperature — Vital Signs – Climate Change: Vital Signs of the Planet. Climate Change. Retrieved July 12, 2024, from https://climate.nasa.gov/vital-signs/global-temperature/?intent=121
- [42] Nordhaus, W. (2018a). Projections and uncertainties about climate change in an era of minimal climate policies. American Economic Journal: Economic Policy, 10(3), 333-360. https://doi.org/10.1257/pol.20170046
- [43] Nordhaus, W. (2018b). Evolution of modeling of the economics of global warming: changes in the DICE model, 1992–2017. Climatic Change, 148(4), 623-640. https://doi.org/10.1007/s10584-018-2218-y
- [44] Patel, R. K., Kumari, A., Tanwar, S., Hong, W. C., & Sharma, R. (2022). AI-empowered recommender system for renewable energy harvesting in smart grid system. IEEE Access, 10, 24316-24326.
- [45] Popp, D. (2002). Induced innovation and energy prices. American Economic Review, 92(1), 160-180. https://doi.org/10.1257/000282802760015658
- [46] Rosenzweig, C., Karoly, D., Vicarelli, M., Neofotis, P., Wu, Q., Casassa, G., … & Imeson, A. (2008). Attributing physical and biological impacts to anthropogenic climate change. Nature, 453(7193), 353-357.
- [47] Selvaraj, M. G., Vergara, A., Ruiz, H., Safari, N., Elayabalan, S., Ocimati, W., & Blomme, G. (2019). AI-powered banana diseases and pest detection. Plant methods, 15, 1-11.
- [48] Sekulova, F., Kallis, G., Rodríguez-Labajos, B., & Schneider, F. (2013). Degrowth: from theory to practice. Journal of cleaner Production, 38, 1-6.
- [49] Sirotkin, K., Carballeira, P., & Escudero-Viñolo, M. (2022). A study on the distribution of social biases in self-supervised learning visual models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10442-10451).
- [50] Sohrabi, S., Udrea, O., & Riabov, A. (2013, June). Hypothesis exploration for malware detection using planning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 27, No. 1, pp. 883-889).
- [51] de Souza, D. C., Cabrita, L., Galinha, C. F., Rato, T. J., & Reis, M. S. (2021). A Spectral AutoML approach for industrial soft sensor development: Validation in an oil refinery plant. Computers & Chemical Engineering, 150, 107324.
- [52] Spaan, M. T. J. (2006). Approximate planning under uncertainty in partially observable environments. Universiteit van Amsterdam [Host].
- [53] Srinivasan, S., Lanctot, M., Zambaldi, V., Pérolat, J., Tuyls, K., Munos, R., & Bowling, M. (2018). Actor-critic policy optimization in partially observable multiagent environments. Advances in neural information processing systems, 31.
- [54] Tamburrini, G. (2022). The AI carbon footprint and responsibilities of AI scientists. Philosophies, 7(1), 4.
- [55] Stern, D. I., & Kaufmann, R. K. (2014). Anthropogenic and natural causes of climate change. Climatic Change, 122, 257-269.
- [56] Tol, R. S. (2018). The economic impacts of climate change. Review of environmental economics and policy.
- [57] Vössing, M., Kühl, N., Lind, M., & Satzger, G. (2022). Designing transparency for effective human-AI collaboration. Information Systems Frontiers, 24(3), 877-895.
- [58] Wang, H., Fu, T., Du, Y., Gao, W., Huang, K., Liu, Z. et al. (2023). Scientific discovery in the age of artificial intelligence. Nature, 620(7972), 47-60.
- [59] Westphal, M., Vössing, M., Satzger, G., Yom-Tov, G. B., & Rafaeli, A. (2023). Decision control and explanations in human-AI collaboration: Improving user perceptions and compliance. Computers in Human Behavior, 144, 107714.
- [60] Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114-123.
- [61] World Bank. (2024). GDP Data. https://data.worldbank.org/indicator/NY.GDP.MKTP.CD
- [62] Xiang, X., Li, Q., Khan, S., & Khalaf, O. I. (2021). Urban water resource management for sustainable environment planning using artificial intelligence techniques. Environmental Impact Assessment Review, 86, 106515.
- [63] Xu, Y., Liu, X., Cao, X., Huang, C., Liu, E. et al (2021). Artificial intelligence: A powerful paradigm for scientific research. The Innovation, 2(4).
- [64] Yazdani-Asrami, M., Sadeghi, A., Song, W., Madureira, A., Murta-Pina, J., Morandi, A., & Parizh, M. (2022). Artificial intelligence methods for applied superconductivity: material, design, manufacturing, testing, operation, and condition monitoring. Superconductor Science and Technology, 35(12), 123001.
- [65] Zhang, Y., Liao, Q. V., & Bellamy, R. K. (2020, January). Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 295-305).
- [66] Zhang, D., Maslej, N., Brynjolfsson, E., Etchemendy, J., Lyons, T., Manyika, J., … & Perrault, R. (2022). The AI index 2022 annual report. AI index steering committee. Stanford Institute for Human-Centered AI, Stanford University, 123. Retrieved March 14, 2024, from https://aiindex.stanford.edu/ai-index-report-2022/