Scope 3 Emissions
[Figure: Overview of GHG Protocol scopes and emissions across the value chain, showing upstream and downstream scope 3 emissions]
Category 6. Business travel
Description: Transportation of employees for business-related activities during the reporting year (in vehicles not owned or operated by the reporting company).
Minimum boundary: The scope 1 and scope 2 emissions of transportation carriers that occur during use of vehicles (e.g., from energy use).

Category 11. Use of sold products
Description: End use of goods and services sold by the reporting company in the reporting year.
Minimum boundary: The direct use-phase emissions of sold products over their expected lifetime (i.e., the scope 1 and scope 2 emissions of end users that occur from the use of: products that directly consume energy (fuels or electricity) during use; fuels and feedstocks; and GHGs and products that contain or form GHGs that are emitted during use).

Category 12. End-of-life treatment of sold products
Description: Waste disposal and treatment of products sold by the reporting company (in the reporting year) at the end of their life.
Minimum boundary: The scope 1 and scope 2 emissions of waste management companies that occur during disposal or treatment of sold products.
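To make the category 11 minimum boundary concrete, the sketch below estimates lifetime use-phase emissions for a single sold product line as units sold × annual energy use × expected lifetime × grid emission factor. All figures (units, kWh, lifetime, and the 0.4 kgCO2e/kWh grid factor) are illustrative assumptions, not real data.

```python
# Hedged sketch of a category 11 (use of sold products) calculation
# under the minimum boundary above. All figures are illustrative.

def use_phase_emissions_tco2e(units_sold: int,
                              kwh_per_year: float,
                              lifetime_years: float,
                              grid_factor_kgco2e_per_kwh: float) -> float:
    """Lifetime use-phase emissions of all units sold, in tonnes CO2e."""
    lifetime_kwh = units_sold * kwh_per_year * lifetime_years
    return lifetime_kwh * grid_factor_kgco2e_per_kwh / 1000.0  # kg -> t

# Example: 10,000 units, 500 kWh/yr each, 10-year lifetime, 0.4 kgCO2e/kWh
emissions = use_phase_emissions_tco2e(10_000, 500.0, 10.0, 0.4)
print(f"{emissions:,.0f} tCO2e")  # prints "20,000 tCO2e"
```

A real inventory would repeat this per product line and use published grid factors for the regions where the products are used.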
Scope 3 Challenges
Scope 3 emissions along the value chain are crucial for understanding a company's full greenhouse gas (GHG) impact, often accounting for around 80% of total emissions. However, tracking and accounting for these emissions pose significant challenges:
- Data Collection: Gathering relevant and granular data for Scope 3 emissions is complex and requires managing large amounts of data.
- Lack of Cooperation: Collaboration with third parties such as suppliers, lessors/lessees, employees, or customers is necessary but often lacking, hindering effective management of Scope 3 emissions.
- Transparency: Companies often lack transparency regarding the relevance of Scope 3 emissions and struggle to access data from their upstream and downstream partners, reducing the quality of information.
- Stakeholder Complexity: Scope 3 emissions involve a wide range of stakeholders across the value chain, making data collection more complex and tracking emissions challenging.
- Resource Constraints: Calculating Scope 3 emissions requires resources, expertise, and data management processes, which can strain personnel and financial resources without strong management support and alignment.
Scope 3 calculation methodologies
The Greenhouse Gas (GHG) Protocol recommends the use of Life Cycle Assessment (LCA) or Environmentally Extended Input-Output Analysis (EIOA) for reporting and estimating Scope 3 emissions.
1. Life Cycle Assessment (LCA): This approach evaluates the environmental impact of a product or activity
by considering its entire life cycle, from raw material extraction to disposal. It requires detailed
knowledge about the activities and services in the supply chain and their associated emission factors.
However, for large companies with tens of thousands of products and services, collecting such detailed
physical activity data across the supply chain can be complex and impractical.
2. Environmentally Extended Input-Output Analysis (EIOA): This widely adopted method is based on the economic model Input-Output Analysis (IOA). EIOA captures the inter- and intra-trade relations between various sectors of the economy across different countries. The environmental impact is included in the environmentally extended input-output model, which estimates the carbon footprint of delivering one dollar of output by sector or product. EIOA only requires collecting value-based (monetary) activity data from the company, mainly purchasing data, broadly classified into product groups or economic sectors. Emission factors for every dollar spent on each group of products or economic sectors can be determined from the EIOA framework.
3. Machine Learning (ML) Techniques: Several studies have explored machine learning techniques for estimating Scope 3 emissions. For example, one study used ML regressor models to predict Scope 3 emissions from widely available financial statement variables, Scope 1 and 2 emissions, and industrial classifications. Another study focused on the quality of Scope 3 emission data and the performance of machine-learning models in predicting Scope 3 emissions using datasets from different providers.
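The spend-based EIOA approach described in item 2 reduces to multiplying monetary purchase totals, classified by economic sector, by per-dollar sector emission factors. The sketch below illustrates that arithmetic; the sector names and factor values are made-up assumptions, where a real calculation would draw factors from an EEIO model.

```python
# Hedged sketch of a spend-based (EIOA) Scope 3 estimate: purchase
# totals in dollars multiplied by per-dollar sector emission factors.
# Sector names and factor values are illustrative assumptions.

# kgCO2e emitted per dollar of output, by economic sector (hypothetical)
EMISSION_FACTORS_KG_PER_USD = {
    "freight_transport": 1.20,
    "plastics":          0.80,
    "office_services":   0.15,
}

def spend_based_emissions_tco2e(spend_by_sector: dict) -> float:
    """Scope 3 estimate in tonnes CO2e from monetary purchasing data."""
    kg = sum(usd * EMISSION_FACTORS_KG_PER_USD[sector]
             for sector, usd in spend_by_sector.items())
    return kg / 1000.0  # kg -> t

purchases = {"freight_transport": 250_000, "plastics": 100_000,
             "office_services": 400_000}
print(f"{spend_based_emissions_tco2e(purchases):.1f} tCO2e")
```

This is why EIOA scales to companies with thousands of products: only the ledger of spend by sector is needed, not physical activity data for every item.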
ESG Journey
The Home Depot aims to reduce its Scope 3 emissions specifically from the "Use of Sold Products" category by 25%
by the end of 2030 from a 2020 base year. As the largest retailer of home improvement products, The Home Depot
recognizes that a significant portion of its total indirect emissions comes from the use of products it sells. Therefore,
the company has chosen to focus its emissions reduction efforts on this category to have the most significant impact
on carbon emissions.
1. Inventory Analysis:
- ML algorithms such as neural-net clustering and K-Nearest Neighbours (KNN) can streamline data collection from diverse sources and enhance data accuracy.
- ML algorithms such as artificial neural networks (ANN), support vector machines (SVM), and deep neural networks (DNN) can categorize or cluster data based on criteria or learned patterns, improving data organization and reliability.
- ML algorithms like the Naive Bayes classifier and decision trees can classify inventory data into predefined categories, facilitating data categorization based on specific attributes.
- ML models such as DNN and generative adversarial networks (GAN) can help with data imputation and extrapolation, ensuring completeness and representativeness of inventory data.
2. Characterization and Normalization:
- ML algorithms can integrate and harmonize data from different sources and formats to ensure consistency and comparability in inventory data.
- ML methods can normalize data by accounting for different units and timeframes, ensuring consistency and accuracy in inventory analysis.
- Natural Language Processing (NLP) techniques can automatically extract relevant data from text-based sources, such as scientific literature, reports, or websites, aiding inventory analysis.
3. Data Pre-processing: Techniques like Principal Component Analysis (PCA), SVM, and ANN can transform
raw data into suitable formats for analysis. PCA identifies important patterns, SVM develops models for data
classification, and ANN learns complex patterns from large datasets.
4. Environmental Impact Evaluation and Interpretation: AI/ML can develop advanced models for impact
assessment, conduct sensitivity and uncertainty analysis, optimize environmental impact allocation, and
identify optimal process configurations. ML methods like regression, decision trees, random forests, SVM,
and ensemble methods can be used for impact assessment modelling, depending on data characteristics and
requirements.
5. Model Evaluation and Interpretation: Model evaluation ensures accuracy and reliability of impact
assessment models. Techniques like Monte Carlo simulation and Bayesian statistics are used for uncertainty
and sensitivity analysis. AI/ML-based methods aid in pattern recognition, anomaly detection, decision
support, data visualization, and communication of LCA results.
A holistic AI/ML-enabled LCA framework for improved prediction of environmental impact
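For the uncertainty analysis mentioned in step 5, a Monte Carlo simulation propagates emission-factor uncertainty into the final footprint by resampling the factor many times. The sketch below assumes a normally distributed electricity emission factor; the activity level, mean, and standard deviation are all illustrative assumptions.

```python
# Hedged sketch of Monte Carlo uncertainty analysis (step 5): sample an
# uncertain emission factor many times and inspect the spread of the
# resulting footprint. All parameter values are assumptions.
import random
import statistics

random.seed(42)

ACTIVITY_KWH = 1_000_000   # assumed activity data (kWh of electricity)
FACTOR_MEAN = 0.4          # kgCO2e/kWh, assumed central value
FACTOR_SD = 0.05           # assumed 1-sigma uncertainty on the factor

def simulate(n_runs: int = 10_000) -> list:
    """Footprint samples in tCO2e under a normal factor distribution."""
    return [ACTIVITY_KWH * random.gauss(FACTOR_MEAN, FACTOR_SD) / 1000.0
            for _ in range(n_runs)]

samples = sorted(simulate())
mean = statistics.mean(samples)
p5, p95 = samples[500], samples[9500]  # rough 5th / 95th percentiles
print(f"mean {mean:.0f} tCO2e, 90% interval [{p5:.0f}, {p95:.0f}]")
```

The resulting percentile interval is the kind of uncertainty range that Bayesian or sensitivity analyses attach to an LCA result.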
Data:
- Scope 3 emissions are disclosed by type, with 15 types known as targets.
- Some emission types are less reported, or reported as zero, due to their idiosyncratic nature.
- More companies report emissions for upstream activities and for types that are easier to measure and control.

Models:
- Primary models: Random Forest and Adaptive Boosting (AdaBoost) algorithms.
- Benchmark models: linear regression estimators and the k-nearest neighbours (k-NN) algorithm.
- Random Forest uses bagging, while AdaBoost uses boosting.

Evaluation metrics:
- Root Mean Squared Logarithmic Error (RMSLE): Lower values indicate lower percentage errors in predicted emissions, penalizing underestimated predictions more.
- R-squared (R2): Higher values mean more of the variation in reported target values is explained by the input features.
- Mean Absolute Percentage Error (MAPE): Lower values indicate lower percentage error, penalizing overestimates more relative to underestimates.

Model Performance:
- Average R2 increases across models: 46% for OLS, 68% for k-NN, 75% for random forest, and 78% for AdaBoost.
- When focusing on observations where companies use mostly primary data, the average R2 further increases to 83%.
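The three evaluation metrics described above can be computed directly from predicted and reported values. A minimal NumPy sketch, with illustrative example arrays:

```python
# Hedged sketch of the evaluation metrics discussed above, applied to
# illustrative predicted vs. reported emission values.
import numpy as np

def rmsle(y_true, y_pred):
    """Root Mean Squared Logarithmic Error; log scale penalizes
    underestimates relatively more than overestimates."""
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def r_squared(y_true, y_pred):
    """Share of variance in reported values explained by predictions."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([100.0, 2000.0, 50000.0])   # reported emissions (tCO2e)
y_pred = np.array([120.0, 1800.0, 45000.0])   # model predictions
print(rmsle(y_true, y_pred), mape(y_true, y_pred), r_squared(y_true, y_pred))
```

The log transform in RMSLE is what makes it suitable for emissions data, which span several orders of magnitude across companies.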
Limitations and aims of the machine learning approach for estimating Scope 3 emissions: