
Final Individual Assessment

Unit Details
Unit Name: Business Analytics Fundamentals
Unit Code: HI6037
Year, Trimester: 2024, Trimester 1

Assessment Details
Assessment Name: Final Assessment
Due Date & Time: 21 June 2024, 11:59 pm

Student Details
Student Number:
First Name:
Family Name:

Submission Declaration
Integrity Declaration: I have read and understand academic integrity policies and practices, and my assessment does not violate these.

Full Name:

Submission Date:

ALL SUBMISSIONS MUST INCLUDE YOUR STUDENT DETAILS AND SUBMISSION DECLARATION.
IF THESE DETAILS ARE NOT COMPLETED, YOU RISK BEING PENALISED.

Please take a screenshot of the result from Power BI and paste it under each
question in this file.

HI6037FIA T12024
Instructions

Academic Integrity Information: Holmes Institute is committed to ensuring and upholding academic integrity. All assessments must comply with academic integrity guidelines. Important academic integrity breaches include plagiarism, collusion, copying, impersonation, contract cheating, data fabrication and falsification. Please learn about academic integrity and consult your teachers with any questions. Violating academic integrity is serious and punishable by penalties ranging from deduction of marks or failure of the assessment task or unit involved to suspension or cancellation of course enrolment.

Format & submission instructions:
• All answers must be entered in the answer boxes provided after each question.
• Your assessment must be in MS Word format only.
• You must name your file with the Unit Code and Student ID, for example: HI6037 - ABC1234
• Check that you submit the correct document, as special consideration is not granted if you make a mistake.
• Your student ID & name must be entered on the first page.
• The submission declaration must be completed on the first page.
• All work must be submitted on Blackboard by the due date and time. Late submissions are not accepted.
• You have two attempts to submit. Only the final submission will be marked.

Penalties:
• Reference sources must be cited in the text of the report and listed appropriately at the end in a reference list using Holmes Institute Adapted Harvard Referencing. Penalties apply for incorrect citation and referencing.
• For all other penalties, please refer to the Final Assessment Instructions section on Blackboard.

Question 1 (10 marks)

1A: Describe the steps you would take to merge the cleaned_Location.xlsx and cleaned_Budget.csv
datasets in Power BI. What key factors must you consider to ensure data integrity? (5 Marks)

ANSWER: ** Answer box will enlarge as you enter your response


To merge the cleaned_Location.xlsx and cleaned_Budget.csv datasets in Power BI, I would follow
these steps:

Load Data:

First, I would open Power BI Desktop.

Then, I would click the Get Data button and select Excel to load the cleaned_Location.xlsx file.

Next, I would repeat the process by clicking Get Data again, but this time, I would select CSV to
load the cleaned_Budget.csv file.

Transform Data:

Once both files are loaded, I would select the queries and click Transform Data on the Home tab, which opens the Power Query Editor.

In Power Query Editor, I would ensure that the two data sets are in the right format and cleansed
correctly, for instance, checking for any NULL values and ensuring that the data types are correct.

Merge Queries:

On the Home tab of the Power Query Editor, I would choose Merge Queries.

The join keys must match in both datasets, so I would select LocationID as the join column in each dataset.

Adjust Merged Data:

After merging the queries, I would check that the resulting merged dataset has the correct structure.

I would remove any extra columns I did not need and rename any columns whose names were unclear.

Close & Apply:

Finally, I would click Close & Apply to load the transformed and merged data into Power BI.

Key Factors to Ensure Data Integrity:

Consistent Data Types: The crucial consideration is that the columns used to merge the two datasets must have matching data types.

Unique Identifiers: I would ensure that the join key, in this case LocationID, contains no duplicate values, which avoids row multiplication or loss of data.

Handling Missing Values: Before merging, I would check for and appropriately handle any missing values in the datasets.

Data Validation: After merging, I would perform checks to ensure the merged data accurately
reflects the combination of the original datasets.

Backup Original Data: It’s important always to keep a backup of the original datasets so I can
revert to them in case of any issues during the merge process.

By following these steps, I can effectively merge the cleaned_Location.xlsx and cleaned_Budget.csv datasets in Power BI while maintaining data integrity.
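Outside Power BI, the merge and the key-uniqueness check described above can be sketched in plain Python; the sample rows are hypothetical, and only the LocationID join key comes from the question.

```python
# Sketch of the merge logic: join two tables on LocationID,
# checking that the key is unique on the lookup side first.
location = [  # stand-in for cleaned_Location.xlsx
    {"LocationID": 1, "City": "Sydney"},
    {"LocationID": 2, "City": "Melbourne"},
]
budget = [  # stand-in for cleaned_Budget.csv
    {"LocationID": 1, "Budget": 50000.0},
    {"LocationID": 2, "Budget": 32000.0},
]

# Integrity check: duplicate join keys would multiply rows in the merge.
keys = [row["LocationID"] for row in location]
assert len(keys) == len(set(keys)), "duplicate LocationID detected"

lookup = {row["LocationID"]: row for row in location}
merged = [{**lookup[b["LocationID"]], **b} for b in budget]
print(merged[0])  # {'LocationID': 1, 'City': 'Sydney', 'Budget': 50000.0}
```

This mirrors what Merge Queries does for a one-to-many join: each budget row picks up the matching location attributes.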

1B: Using the merged dataset, create a visualisation in Power BI that displays the correlation
between budget allocations and property price changes over time. Explain how your visualisation can
be interpreted. (5 Marks)

Datasets Used:

• cleaned_Location.xlsx
• cleaned_Budget.csv

ANSWER:
Using the merged dataset from cleaned_Location.xlsx and cleaned_Budget.csv, I created a bar
chart in Power BI to display the correlation between budget allocations and property price
changes over time.

Visualisation:

In the Report view of Power BI, I selected a Bar chart from the Visualizations pane.

I added PropertyPriceChange to the Axis to display the changes in property prices.

I added Budget to the Values to show the budget allocations.

I included a yearly slicer to filter the data by specific years for more detailed analysis.

Interpretation of the Visualization:

Trend Analysis: The bar chart shows the size and direction of budget allocations alongside property price changes. Each bar represents a property price change and its associated budget.

Correlation Insights: From the chart, it can be seen that budget allocations vary with the range of property price changes; for instance, larger budget amounts tend to coincide with larger property price changes.

Decision-Making: This visualisation helps stakeholders discern how the distribution of funds could affect property prices. If a tendency is observed where larger budgets accompany significant price variation, future budgets can be planned accordingly.

The relationship between budget and property price variation over time can be read directly from the chart, helping analysts and planners make informed decisions on budget allocation.

Question 2 (10 marks)

2A: Explain how you would assess the accuracy of sales forecasts using the cleaned_Forecast.csv and
cleaned_dim_tables_final.xlsx datasets. What specific Power BI functionalities would you use? (5
Marks).

ANSWER:
To assess the accuracy of sales forecasts using the cleaned_Forecast.csv and cleaned_dim_tables_final.xlsx datasets in Power BI, I would follow these steps:

Load Data:

First, I would open Power BI Desktop.

Next, I would click Get Data and select CSV to bring in the cleaned_Forecast.csv file.

After that, I would click Get Data again and choose Excel to import the cleaned_dim_tables_final.xlsx file.

Transform and Clean Data:

In the Power Query Editor, I would clean both datasets and ensure they are formatted correctly.

I would check that the data types are proper and manage missing values to get the datasets ready
for analysis.

Merge Datasets:

I would combine cleaned_Forecast.csv with the relevant tables from cleaned_dim_tables_final.xlsx, merging on the shared keys to generate a single dataset for detailed analysis.

Create Calculated Columns and Measures:

To assess the accuracy, I would create a calculated column for the absolute error using a formula like Absolute Error = ABS([Actual Sales] - [Forecasted Sales]).

Additionally, I would create another calculated column for the percentage error: Percentage Error = [Absolute Error] / [Actual Sales].
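The same error metrics that these calculated columns produce, plus the MAE and MAPE summaries used later, can be sketched in plain Python; the sales figures below are hypothetical.

```python
# MAE and MAPE from actual vs forecasted sales (hypothetical figures).
actual = [100.0, 120.0, 90.0, 110.0]
forecast = [110.0, 115.0, 95.0, 100.0]

# Per-row absolute error, mirroring the calculated column above.
abs_errors = [abs(a - f) for a, f in zip(actual, forecast)]

mae = sum(abs_errors) / len(abs_errors)                      # mean absolute error
mape = sum(e / a for e, a in zip(abs_errors, actual)) / len(actual)

print(mae)
print(round(mape * 100, 2))  # MAPE expressed as a percentage
```

MAE reports the average miss in sales units, while MAPE normalises each miss by the actual value, which makes forecasts comparable across products of different sizes.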

Use Power BI Functionalities:

Line Chart: I would use a line chart to show actual and forecasted sales over time, with Date on the Axis and ActualSales and ForecastedSales in the Values.

Scatter Plot: I would use a scatter plot to compare actual and forecasted sales, which helps pick out any sizeable deviations.

KPI Visual: I would use a KPI visual to display metrics such as Mean Absolute Error (MAE) or Mean Absolute Percentage Error (MAPE).

Create Visualizations:

Error Distribution: I would use a histogram to portray the distribution of forecast errors, adding AbsoluteError to the Values and binning the data to show how frequently the various error ranges occur.

Accuracy Metrics: I would use card visuals to display key performance indicators such as MAE, MAPE, and total forecast error.

Analyse Results:

Trend Analysis: I would use the line chart to assess how forecast accuracy changes over the period.

Identify Outliers: On the scatter plot, I would be able to identify which observations were poorly forecasted.

Error Analysis: The histogram and KPI visuals will help me determine the general accuracy and
consistency of the forecasts.

Specific Power BI Functionalities I Would Use:

Power Query Editor: This accomplishes data transformation operations and joins multiple
datasets.

Calculated Columns and Measures: These are used to perform calculations such as the error metrics, which can then be reused across visuals.

Line Chart: To plot a comparison of actuals and the forecasted amounts on the sales timeline.

Scatter Plot: To compare the actual and the forecasted sales.

KPI Visual: To place standard performance measures, including MAE and MAPE.

Histogram: To illustrate the pattern of errors for the forecast.

Card Visuals: To display headline accuracy metrics at a glance.

Following these steps and using the Power BI functionalities above will help me determine the accuracy of the sales forecasts and reveal any shortcomings in the forecasting process.

2B: Develop and describe a geographic visualisation in Power BI that highlights sales distribution and
high-performing regions. Include details on any filters or slicers you would add to enhance
interactivity. (5 Marks)

Datasets Used:

• cleaned_Forecast.csv
• cleaned_dim_tables_final.xlsx

ANSWER:
I developed a geographic visualisation in Power BI to highlight sales distribution and high-
performing regions for this question.

Description of the Visualization:

Sales Distribution: The map clearly shows sales distribution across different regions. Each region
is marked on the map, and markers of varying sizes and colours represent the sales data.

High-Performing Regions: Using colour scales and marker sizes helps identify high-performing
regions with higher sales volumes quickly.

Filters and Slicers: The Region and Country slicers enhance interactivity, allowing users to drill
down into specific regions or countries and see how sales are distributed. This makes it easy to
analyse sales performance at different geographic levels.

Dynamic Data Card: The data card provides a real-time update of total actual sales based on the
selected filters, giving users an immediate overview of sales performance.

Using these Power BI functionalities, I created a comprehensive and interactive geographic


visualisation highlighting sales distribution and high-performing regions, providing valuable
insights for decision-making.

Question 3 (10 marks)

3A: What variables have been included in this synthetic dataset for simulating tourism data? Can you
briefly explain each one? (5 Marks)

ANSWER:
To identify the variables in the synthetic dataset used to simulate tourism data, I analysed the cleaned_Synthetic_Tourism_Data.xlsx file. Here are the key variables included in the dataset, along with a brief explanation of each:

Date:

This variable records the date on which the data was captured. It helps identify how tourism flows rise and fall over time and across seasons.

Attraction:

This variable identifies the attraction being observed, such as the Sydney Opera House or the Great Barrier Reef. It enables tourism statistics to be compared across attractions and the most popular spots to be distinguished.

Visitor_Count:

This variable records the number of visitors at a specific attraction on a particular date. It is useful for establishing visitor preferences for certain attractions and analysing visitor patterns.

Average_Spend:

This variable records the average amount paid per visitor at a specific attraction on a given date. It covers costs such as entry fees, souvenir shopping, meals, and other expenses at the attraction, and helps evaluate tourists' spending behaviour.

Local_Business_Revenue:

This variable records the total income earned by local businesses from tourism-related activities at a particular attraction on a given day. It plays a significant role in understanding the economic impact of tourism.

Together, these variables give a broad yet complete picture of tourism activity, the economic contribution of tourists, and each site's popularity. Understanding them enables a better understanding of the tourism system and supports practical decisions that improve tourists' experience while increasing the benefits to local economies.

3B: Create a dynamic map in Power BI using your synthetic dataset that shows tourism density and
its economic impact on local economies. Describe how you would create this visualisation to provide
insights into peak times and spending patterns. (5 Marks)

Datasets Used:

• cleaned_Synthetic_Tourism_Data.xlsx

ANSWER:
For this question, I generated a dynamic map in Power BI by employing synthetic data to display
tourism density and its impact on the economy of the affected regions. Here’s how I set up the
visualisation:

Load Data:

First, I imported the cleaned_Synthetic_Tourism_Data.xlsx workbook into Power BI.

Transform and Clean Data:

I used the Power Query Editor to clean and format the data.

I verified that all columns, including Date, Attraction, Visitor_Count, Average_Spend, and Local_Business_Revenue, were present and that the data types were set appropriately.

Create the Map Visualization:

In the Report view, I selected the Map visual from the Visualizations pane.

I dropped the Attraction field into the Location bucket, placing the various tourist attractions on
the map.

Since it was necessary to measure the density of tourism, I placed the Visitor_Count into the Size
bucket.

Adding Local_Business_Revenue to the Tooltips well enabled the economic effect of tourism at each site to be shown on hover.

Enhance the Visualization with Filters and Slicers:

I included a slicer for Month, enabling users to filter the data by particular months and analyse monthly seasonality.

I also added a slicer for Attraction to allow users to filter on the specific tourist attractions and
view the performance comparison.

Create Supporting Visualizations:

I also made a bar chart showing Average_Spend and Visitor_Count by month. This assists in identifying spending habits and the peak times for tourism.

I also included a pie chart to show the distribution of Average_Spend across the months of the year.

Customisation and Interactivity:

I adjusted the map's appearance so the colour scales and marker sizes are clear and understandable.

The map includes tooltips that users can hover over to learn more about visitor numbers and local business revenue.

The bar chart and pie chart update their contents according to the applied filters, giving a general understanding of the tourism trends.

Description of the Visualization:

• Tourism Density: The map depicts the number of visitors at different places, with larger markers indicating more visitors. This helps determine the most frequented areas within a destination.

• Economic Impact: The tooltips carry local business revenue, so tourism's contribution to local economies can be seen directly, showing how much revenue is attributed to tourism at each place.

• Peak Times and Spending Patterns: The supporting bar chart and pie chart add further understanding by presenting average spend and visitor counts by month. This assists in determining how tourism activity and expenditure vary throughout the year.

• Filters and Slicers: The month and attraction slicers enhance interactivity, allowing users to drill
down into specific periods and attractions to gain deeper insights into tourism trends.

Using these Power BI functionalities, I created a dynamic and interactive visualisation that
provides valuable insights into tourism density and its economic impact, helping stakeholders
make informed decisions to boost local economies and improve tourist experiences.

Question 4 (10 marks)

4A: Discuss how you would analyse customer demographics, purchasing patterns, and profitability
using synthetic transaction data. What analytical techniques would you apply? (5 Marks)

ANSWER:
To analyse customer demographics, purchasing patterns, and profitability using the synthetic
transaction data, I would follow these steps and apply the following analytical techniques:

Load Data:

First, I loaded the cleaned_Synthetic_transaction_data.csv file into Power BI to begin the analysis.

Data Preparation:

I ensured the data was formatted correctly and cleaned in the Power Query Editor.

I checked the data types for each column and handled any missing values to prepare the dataset
for analysis.

Understanding Customer Demographics:

Customer Segmentation: Using the CustomerID and any available demographic data (e.g., age,
gender, location) from related tables, I would segment customers into different groups.

Visualisations: I would create bar charts and pie charts to visualise the distribution of customers
across different demographic segments.

Analysing Purchasing Patterns:

Time Series Analysis: By examining the TransactionDate, I would analyse purchasing trends over
time. This involves creating line charts to show the number of transactions and total sales over
different periods (e.g., daily, monthly).

Product Analysis: Using ProductID, Quantity, and UnitPrice, I would analyse which products are
most popular and generate the most revenue. A bar chart showing sales by product can provide
insights into best-selling items.

Basket Analysis: I would perform a basket analysis to understand which products are frequently
purchased together. This involves using techniques such as association rule mining to identify
product bundles.
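The basket-analysis idea can be illustrated minimally by counting how often product pairs co-occur in the same transaction, a simplified stand-in for full association rule mining; the baskets and product IDs below are hypothetical.

```python
# Minimal basket-analysis sketch: count product pairs that appear
# together in the same transaction (data is hypothetical).
from collections import Counter
from itertools import combinations

baskets = [
    {"P1", "P2", "P3"},
    {"P1", "P2"},
    {"P2", "P3"},
    {"P1", "P2"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Support of a pair = co-occurrences / number of transactions.
top_pair, count = pair_counts.most_common(1)[0]
print(top_pair, count / len(baskets))
```

Pairs with high support are candidates for bundling or cross-promotion; a full association-rule approach would also compute confidence and lift for each pair.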

Assessing Profitability:

Profit Calculation: I would calculate the profit for each transaction using the TotalAmount and cost data (if available). If cost data is unavailable, I can estimate profitability based on sales and unit price.

Profit by Customer: I would create measures to calculate total profit per customer and visualise
this using bar charts or scatter plots. This helps in identifying high-value customers.

Profit by Product: Similar to the customer analysis, I would analyse profitability by product to determine which items drive the most profit.

Analytical Techniques:

Descriptive Statistics: I would use descriptive statistics to summarise critical metrics such as
average sales, total revenue, and profit margins.

Segmentation Analysis: Segment customers based on purchasing behaviour and profitability using clustering techniques (e.g., K-means clustering).
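The clustering step can be illustrated with a toy one-dimensional K-means over customer spend, sketched here in plain Python; the spend values are hypothetical, and real segmentation would typically use a library implementation over several features.

```python
# Toy 1-D K-means (k=2) over customer spend, as a sketch of the
# clustering idea (spend values are hypothetical).
def kmeans_1d(values, k=2, iters=20):
    # Simple deterministic init: smallest and largest values.
    centroids = [min(values), max(values)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to the nearest centroid.
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Recompute centroids as cluster means.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

spend = [20.0, 25.0, 30.0, 200.0, 220.0, 210.0]
centroids, clusters = kmeans_1d(spend)
print(sorted(round(c, 1) for c in centroids))  # [25.0, 210.0]
```

The two centroids separate a low-spend segment from a high-spend segment, which is the kind of grouping a dashboard would then visualise per segment.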

Predictive Analysis: Apply predictive modelling to forecast future sales and identify potential
high-value customers using techniques such as regression analysis.

Analytical Techniques Applied:

Descriptive Statistics: To summarise and understand key metrics.

Customer Segmentation: To group customers based on demographics and purchasing behaviour.

Time Series Analysis: To identify trends and patterns over time.

Product Analysis: To determine the best-selling and most profitable products.

Basket Analysis: To identify product bundles and frequently co-purchased items.

Predictive Modeling: To forecast future sales and identify high-value customers.

By applying these techniques, I can gain valuable insights into customer demographics,
purchasing patterns, and profitability, which can help me make informed business decisions to
improve customer satisfaction and increase profitability.

4B: Design a dashboard in Power BI to visualise distinct customer segments based on profitability
and purchasing behaviour. Explain how each element of the dashboard contributes to understanding
customer segments. (5 Marks)

Datasets Used:

• cleaned_Synthetic_transaction_data.csv

ANSWER:
For this question, I developed a dashboard in Power BI that differentiates distinct customer groups according to profitability and purchasing patterns. Here's a breakdown of how each element of the dashboard contributes to understanding customer segments:

TotalAmount Gauge:

Description: This gauge shows the total sales achieved.

Contribution: It acts as a summary measure of the income realised by the business, giving a quick, tangible view of its performance.

Quantity and UnitPrice Indicators:

Description: These indicators show the quantities of products sold and their unit prices.

Contribution: These metrics help show how much product is sold and the revenue per unit, which is vital in determining profitability.

TotalAmount and UnitPrice by CustomerID Chart:

Description: This bar chart with a line overlay shows the total amount and unit price by customer.

Contribution: It assists in identifying high-value customers by their buying power. The line overlay offers another view of customers' average willingness to pay, which may help point out price sensitivity.

Quantity and TotalAmount by Month Chart:

Description: This dual bar-line chart illustrates the Sales quantity and Total amount by Month.

Contribution: This chart helps in understanding seasonal movements and evaluating the firm's monthly sales performance. It helps determine the best weeks to stock inventory for sale and the best times to launch marketing campaigns.

TransactionID and CustomerID Slicers:

Description: These slicers enable the user to filter the data by transaction or by specific customer.

Contribution: They increase interactivity by allowing for the analysis of such aspects as individual
transactions or groups of customers. Users can explore the specific characteristics of a particular
customer or transaction.

Average_Spend and Visitor_Count by Month Charts:

Description: These charts, not shown in the current screenshot for simplicity, would illustrate the average monthly spend and visitor count.

Contribution: They give further insights into customers' spending patterns and visit frequency. Understanding these patterns can help in developing marketing and promotions to increase customer interest.

Description of the Visualization:

Customer Segmentation: Together, the dashboard elements support customer segmentation based on buying habits and profitability. The visualisations make it possible to identify high-value customers, frequent buyers, and price-sensitive customers.

Profitability Analysis: With total sales, quantity sold, and average unit prices, the dashboard gives a profitability picture across the different customer types.

Purchasing Patterns: The time-based charts display Quantity and TotalAmount by month, which helps identify buying trends and the times of year when the organisation should offer seasonal products.

Interactive Exploration: The TransactionID and CustomerID slicers support interactive exploration, letting users drill deeper into the details of the data as needed.

Using these Power BI functionalities, I built a detailed and dynamic dashboard that helps analyse customer segments by turnover and buying patterns. This goes a long way in enabling enterprises to make sound business decisions on marketing strategies, customer satisfaction, and profitability.

Question 5 (10 marks)

5A: Describe the process you would use in Power BI to cleanse and prepare a complex dataset for
analysis. What challenges might you encounter, and how would you address them? (5 Marks)

ANSWER:
To cleanse and prepare a complex dataset for analysis in Power BI, I would follow these steps:

Load the Dataset:

First, I would access Power BI Desktop and click on Get Data from the Home tab to import the
data set from its source (for instance, Excel, CSV, database).

Open Power Query Editor:

Once the dataset is loaded, I would click Transform Data to open the Power Query Editor. This is where I can carry out all the data-cleaning activities.

Remove Unnecessary Columns:

I would remove any columns that are not needed for the analysis. This excludes distracting data and keeps the focus on the most critical fields.

Handle Missing Values:

I would then check each column for missing values. Where data points are missing, depending on the data type, I might replace them with a default value, the mean/median, or a forward/backward fill. In some situations, deleting the rows with missing values is the appropriate choice.
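The mean-imputation and row-dropping options described above can be sketched in plain Python; the rows below are hypothetical.

```python
# Two ways to handle a missing Amount: fill with the column mean,
# or drop the affected row (sample rows are hypothetical).
rows = [
    {"CustomerID": 1, "Amount": 100.0},
    {"CustomerID": 2, "Amount": None},   # missing value
    {"CustomerID": 3, "Amount": 140.0},
]

present = [r["Amount"] for r in rows if r["Amount"] is not None]
mean_amount = sum(present) / len(present)

# Option 1: impute with the mean.
filled = [{**r, "Amount": r["Amount"] if r["Amount"] is not None
           else mean_amount} for r in rows]
# Option 2: drop rows with missing values.
dropped = [r for r in rows if r["Amount"] is not None]

print(filled[1]["Amount"], len(dropped))
```

Which option is right depends on how much data is missing and whether the missing rows carry other useful columns; imputation keeps the row count, dropping keeps only observed values.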

Correct Data Types:

Setting a suitable data type for each column is one of the most critical steps. I would format each column to suit its data type (for example, dates in date format, numbers in number format, and so on).

Remove Duplicates:

I would also remove any duplicate rows to ensure the accuracy of the dataset.

Create Calculated Columns:

If needed, I would define new calculated columns for further analysis. For instance, I could create a TotalAmount column as the product of Quantity and UnitPrice.
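The TotalAmount calculated column can be sketched outside Power BI in plain Python; the rows below are hypothetical.

```python
# Add a TotalAmount column as Quantity * UnitPrice per row
# (hypothetical transaction rows).
rows = [
    {"Quantity": 3, "UnitPrice": 9.5},
    {"Quantity": 1, "UnitPrice": 25.0},
]
for r in rows:
    r["TotalAmount"] = r["Quantity"] * r["UnitPrice"]

print([r["TotalAmount"] for r in rows])  # [28.5, 25.0]
```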

Rename Columns:

If column names are not easily understood, I would rename them to names that better describe the data.

Apply Transformations:

I would apply any transformations the data needs, such as pivoting/unpivoting columns, splitting columns by delimiters, or merging columns.

Check for Consistency:

Finally, I would perform a consistency check, reviewing the data to confirm that it looks sensible and that values are consistent across the dataset.

Challenges and How I Would Address Them:

Handling Large Datasets:

Challenge: Very large datasets can slow Power BI down and make analysis sluggish.

Solution: I would load data incrementally, filter out unneeded rows as early as possible, and work with aggregations when dealing with big data.

Dealing with Missing Data:

Challenge: Gaps in the data can often be interpreted in different ways.

Solution: For missing data, I would think carefully about whether to impute values, remove the affected rows, or use methods capable of handling missing values.

Ensuring Data Accuracy:

Challenge: Incorrect data types or inconsistent values are likely to cause errors.

Solution: I would verify the data types and perform validation checks to confirm the data's accuracy.

Managing Duplicates:

Challenge: Duplicate records can distort the results of any analysis.

Solution: I would remove duplicates and ensure every record has a unique identifier.

Complex Transformations:

Challenge: Complex transformations can be cumbersome and error-prone.

Solution: I would split a complex transformation into smaller steps and test each step thoroughly.

By going through this process and addressing these challenges, I can successfully cleanse and prepare a complicated dataset and obtain accurate results from its analysis in Power BI.

5B: Prepare a presentation in Power BI to showcase findings from one of the datasets provided.
Explain how you would use narrative elements and annotations to guide viewers through your
analysis. (5 Marks)

Datasets Used:

• Any provided dataset (e.g., cleaned_Budget.csv)

ANSWER:
For this question, I prepared a presentation in Power BI using the cleaned_Budget.csv dataset.
The dashboard I created provides an overview of the budget distribution across different
countries and IT departments over time. Here’s how I would use narrative elements and
annotations to guide viewers through the analysis:

Title: "Budget Overview"

Introduction: I would start with a brief introduction, explaining that the dashboard provides
insights into budget allocations across various countries, IT departments, and years from 2020 to
2024.

Narrative Element:

Narrative Box: I included a narrative box in the dashboard summarising key findings. For example:

"At $1,053,557.97, the USA had the highest budget, 5,267.22% higher than New Zealand's lowest budget of $19,629.48. The USA accounted for 41.39% of the total budget."

"Across all 16 countries, budgets ranged from $19,629.48 to $1,053,557.97."

"The budget trended downward, declining by 0.67% between 2020 and 2024."
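As a quick sanity check on the narrative box, the quoted percentage difference between the highest budget ($1,053,557.97, USA) and the lowest ($19,629.48, New Zealand) can be verified arithmetically:

```python
# Verify the percentage difference quoted in the narrative box.
usa = 1_053_557.97
nz = 19_629.48

pct_higher = (usa - nz) / nz * 100
print(round(pct_higher, 2))  # ~5267.22
```

This matches the figure Power BI's smart narrative generates, which gives some confidence the underlying data loaded correctly.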

Budget by Country (Pie Chart):

Description: This pie chart shows the budget distribution by country.

Annotation: I would annotate key segments, highlighting that the USA has the largest budget
share and providing specific percentages for significant countries.

Guidance: Explain to viewers that this chart helps identify which countries receive the largest and smallest budget allocations.

Budget by IT Department (Bar Chart):

Description: This bar chart displays the budget distribution across various IT departments.

Annotation: I would annotate the departments with the highest and lowest budgets, such as
"Productivity" and "Business Intelligence."

Guidance: Emphasise how this chart shows the relative priority given to different IT functions within the budget.

Budget by Year (Bar Chart):

Description: This bar chart shows the budget trend from 2020 to 2024.

Annotation: Annotate significant changes or trends, such as the initial decrease in 2020.

Guidance: This chart helps viewers understand the budget trend over time, showing whether the budget is increasing, decreasing, or stable.

Map Visualization (Budget by Country):

Description: The map visually shows the geographic distribution of the budget.

Annotation: Annotate significant regions, such as North America and Europe, to show where
most of the budget is allocated.

Guidance: Explain how this visualisation provides a global perspective of budget allocation,
making it easy to see which regions are prioritised.

Interactive Elements (Slicers):

Description: The slicers for the year and country allow users to filter the data.

Annotation: Add a brief guide on how to use the slicers to explore specific periods or countries.

Guidance: Explain that these interactive elements allow users to customise their view, making the
dashboard more engaging and informative.

Overall Insights:

Summarise the main insights from the dashboard:

The USA has the highest budget allocation.

Productivity receives the largest budget among the IT departments.

The overall budget has shown a slight decrease over the years.

Using these narrative elements and annotations, I can effectively guide viewers through the analysis, helping them understand the essential findings and insights from the dataset. This approach ensures the presentation is informative, engaging, and easy to follow.

End of Assessment

Marking Rubric

Criterion-Referenced Grading

The same criteria apply to each of Questions 1 to 5:

Excellent: Insightful questions, clear screenshots, and thorough explanations that exceed all requirements.

Very Good: Adequate questions, screenshots, and explanations that meet the basic requirements.

Good: Meets expectations with competent execution and minor areas for improvement.

Slightly deficient: Incomplete or unclear questions, screenshots, and explanations that do not fully meet requirements.

Unsatisfactory: Missing or irrelevant questions, unclear screenshots, and inadequate explanations that fail to meet basic requirements.

