
Final Individual Assessment

Unit Name: Business Analytics Fundamentals
Unit Code: HI6037
Year, Trimester: 2024, Trimester 1
Assessment Name: Final Assessment
Due Date & Time: 21 June 2024, 11:59 pm

Student Details:
Student Number:
First Name:
Family Name:

Submission Integrity Declaration: I have read and understand academic integrity policies and practices, and my assessment does not violate these.

Full Name:
Submission Date:

ALL SUBMISSIONS MUST INCLUDE YOUR STUDENT DETAILS AND SUBMISSION DECLARATION.
IF THESE DETAILS ARE NOT COMPLETED YOU RISK BEING PENALISED.

Please take a screenshot of the result from Power BI and paste it under each
question in this file.

HI6037FIA T12024
Instructions

Academic Integrity Information: Holmes Institute is committed to ensuring and upholding academic integrity. All assessments must comply with academic integrity guidelines. Important academic integrity breaches include plagiarism, collusion, copying, impersonation, contract cheating, data fabrication and falsification. Please learn about academic integrity and consult your teachers with any questions. Violating academic integrity is serious and punishable by penalties that range from deduction of marks and failure of the assessment task or unit involved to suspension or cancellation of course enrolment.

Format & submission instructions:
- All answers must be entered in the answer boxes provided after each question.
- Your assessment must be in MS Word format only.
- You must name your file with the Unit Code and Student ID (example: HI6037 - ABC1234).
- Check that you submit the correct document, as special consideration is not granted if you make a mistake.
- Your student ID & name must be entered on the first page.
- The submission declaration must be completed on the first page.
- All work must be submitted on Blackboard by the due date and time. Late submissions are not accepted.
- You have two attempts to submit. Only the final submission will be marked.

Penalties:
- Reference sources must be cited in the text of the report and listed appropriately at the end in a reference list using Holmes Institute Adapted Harvard Referencing. Penalties are associated with incorrect citation and referencing.
- For all other penalties, please refer to the Final Assessment Instructions section on Blackboard.

Question 1 (10 marks)

1A: Describe the steps you would take to merge the cleaned_Location.xlsx and cleaned_Budget.csv
datasets in Power BI. What key factors must you consider to ensure data integrity? (5 Marks)

ANSWER: ** Answer box will enlarge as you enter your response


To merge the cleaned_Location.xlsx and cleaned_Budget.csv datasets in Power BI, I followed several systematic steps to ensure data integrity:

Loading the Datasets:

First, I opened Power BI Desktop and, from the Home tab, clicked 'Get Data'.

For cleaned_Location.xlsx, I selected 'Excel' and browsed to the location of the file to load it.

For cleaned_Budget.csv, I selected 'Text/CSV', imported the file, and verified the import settings in Power BI.

Inspecting the Data:

I examined the structures of both datasets in Data view, reviewing the columns and the data types they contained.

I then identified the common column that serves as the key for merging: 'LocationID', which exists in both datasets.

Merging the Datasets:

In the 'Model' view, I established the relationship between the two tables by opening 'Manage Relationships' and clicking 'New'.

I selected the key fields from both tables and set the Cardinality to 'Many to One', because one location may correspond to several budget records. The Cross filter direction was set to 'Single'.

Ensuring Data Integrity:

Consistency: I verified that the formats of the standard columns were the same; there were no
differences concerning text in upper or lower case and spaces between the values.

Completeness: I searched for null values in key fields, since nulls there can cause rows to be dropped or mismatched during merging.

HI6037FIA T12024
Correctness: To avoid basing the analysis on faulty data, I confirmed that both tables contained correct, up-to-date data.

Duplicates: To ensure that my results are accurate, I filtered out any records that appeared more
than once in the datasets before merging.

Validating the Merge:

After the merging process, I checked a sample of the merged data with the original datasets to
ensure that the merging process was successful and that there was no data loss or misalignment.

This included using DAX to create calculated columns/measures and confirming that the relationships and calculations behaved as intended.
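The merge validation described above can be made explicit with DAX. A hedged sketch, assuming the tables load as 'Budget' and 'Location', related on LocationID, with a [LocationName] column (these names are assumptions, adjust to your model):

```dax
-- Calculated column on Budget: pull the matched location name via the relationship.
Matched Location = RELATED ( Location[LocationName] )

-- Measure: count budget rows whose LocationID finds no match in Location.
Unmatched Budget Rows =
COUNTROWS (
    FILTER ( Budget, ISBLANK ( RELATED ( Location[LocationName] ) ) )
)
```

If Unmatched Budget Rows returns anything other than blank or zero, the key columns need further cleaning before the merge can be trusted.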

By properly implementing these steps and considering vital data characteristics, I guaranteed
that the dataset merged in Power BI was clean and appropriate for analytical outputs.

1B: Using the merged dataset, create a visualisation in Power BI that displays the correlation
between budget allocations and property price changes over time. Explain how your visualisation can
be interpreted. (5 Marks)

Datasets Used:

- cleaned_Location.xlsx
- cleaned_Budget.csv

ANSWER:
The line chart shows the relationship between the budgets allocated and the changes observed in property prices over time.

From the visualisation:

Trend Observation: Comparing the two series, increased funding appears to coincide with marked rises in property prices. This suggests that greater investment in some areas may be pushing property prices up.

Data Analysis: Comparing the trends of the two variables, the chart suggests that as fluctuations in the budget slow down, fluctuations in property prices slow down as well. This might mean that large initial investments produce considerable property price movements, while smaller subsequent investments produce only minor changes.

Yearly Comparison: Using the slicer, the budget allocation can be viewed for different years, so the trend of price changes in each particular year can be seen. This helps in spotting specific trends or irregularities in certain periods. Overall, this visualisation helps stakeholders see how the budgets they set affect property price changes and gives insight into the impact of their investment over the years.
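The correlation view can be supported by explicit measures. A hedged DAX sketch, assuming the merged data forms a table 'BudgetLocation' with [Budget] and [PropertyPrice] columns and a marked date table 'Dates' (all names are assumptions, not taken from the datasets):

```dax
Total Budget = SUM ( BudgetLocation[Budget] )

Avg Property Price = AVERAGE ( BudgetLocation[PropertyPrice] )

-- Year-over-year change in average property price, for the trend comparison.
Price Change YoY % =
VAR PrevYear =
    CALCULATE ( [Avg Property Price], SAMEPERIODLASTYEAR ( Dates[Date] ) )
RETURN
    DIVIDE ( [Avg Property Price] - PrevYear, PrevYear )
```

Plotting [Total Budget] and [Price Change YoY %] on the same line chart makes the budget/price relationship described above directly visible.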

Question 2 (10 marks)

2A: Explain how you would assess the accuracy of sales forecasts using the cleaned_Forecast.csv and
cleaned_dim_tables_final.xlsx datasets. What specific Power BI functionalities would you use? (5
Marks)

ANSWER:

To assess the accuracy of the sales forecasts using the cleaned_Forecast.csv and cleaned_dim_tables_final.xlsx datasets in Power BI, I followed these steps and used specific functionalities within Power BI:

Loading the Datasets:

First, I imported the cleaned_Forecast.csv file into Power BI using 'Get Data' and selecting the 'Text/CSV' option.

Next, I imported the cleaned_dim_tables_final.xlsx file through the 'Get Data' section on the ribbon, selecting the 'Excel' option.

Inspecting and Preparing the Data:

Next, I looked at both datasets in the grid view, focusing on the structure of the loaded data.

Before conducting the analysis, data cleaning was performed, which involved identifying and
dealing with any gaps in the data.

Creating Relationships:

In the 'Model' view, I created the relationships between the forecast table and the required dimension tables. To do this, common fields such as 'Date' or 'ProductID' were mapped to ensure correct data joins.

Calculating Forecast Accuracy:

I used calculated columns and measures to check the accuracy of the sales forecast. Key calculations included:

Absolute Error: Calculated with a DAX formula as the absolute difference between actual and forecasted sales.

Percentage Error: Calculated with a DAX formula as the total absolute difference between forecasted and actual sales divided by the actual sales, expressed as a percentage.
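The two error calculations above can be sketched as DAX measures. A hedged sketch, assuming the forecast data loads as a table 'Forecast' with columns [ActualSales] and [ForecastSales] (names are assumptions, adjust to your model):

```dax
-- Sum of row-level absolute differences between actual and forecast.
Absolute Error =
SUMX ( Forecast, ABS ( Forecast[ActualSales] - Forecast[ForecastSales] ) )

-- Total absolute error relative to total actual sales, as a percentage.
Percentage Error % =
DIVIDE ( [Absolute Error], SUM ( Forecast[ActualSales] ) ) * 100
```

Both measures respond to the 'Year' and 'Cost Element' slicers described below, so accuracy can be read off per segment.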

Building Visualizations:

I used the following Power BI functionalities to create insightful visualisations:

Line Chart: To show the forecasted sales against the actual sales over time. This chart helps in judging how closely the forecasts track the actual sales trend.

Bar Chart: To show the percentage error and the absolute error by year. This helps in identifying periods where the forecasts were noticeably higher or lower than the actual values.

Interactive Filters and Slicers:

I created slicers for 'Year' and 'Cost Element' to drill down into the results. This made it possible to compare forecast accuracy across the different segments and periods under consideration.

Using these Power BI features, I analysed and evaluated the forecasts in detail and made a comprehensive recommendation regarding the sales forecasts' accuracy and potential further improvements to the forecasting models.

2B: Develop and describe a geographic visualisation in Power BI that highlights sales distribution and
high-performing regions. Include details on any filters or slicers you would add to enhance
interactivity. (5 Marks)

Datasets Used:

- cleaned_Forecast.csv
- cleaned_dim_tables_final.xlsx

ANSWER:

The geographic map view gives an overview of sales coverage and the distribution of sales across all regions.

Key aspects include:

Data Points: Every region is represented by a point, and the size and colour of each point signify the actual quantity sold; larger, more prominent points represent higher sales.

Legend: The legend beside the map shows the different sales ranges, helping identify the regions with sales on the higher side.

Interactivity: The slicers on the right pane enable users to select only the region and year of their preference. This makes it easy to compare sales performance across different periods and areas.

Overall Sales: A card showing the total actual sales value is present to give an at-a-glance summary of overall performance.

Interpretation

With the help of this map, I can quickly see which regions have the most significant sales and which are most affected by stock movements. This analysis helps identify the geographical areas that generate impressive sales and provides a yardstick for when particular sales tactics are most productive. The interactive filters go beyond simply choosing one country or another; they allow a temporal view of the data and specific choices of region.

Question 3 (10 marks)

3A: What variables have been included in this synthetic dataset for simulating tourism data? Can you
briefly explain each one? (5 Marks)

ANSWER:
The synthetic dataset for simulating tourism data includes the following variables:

Date:

Description: This variable relates to the date on which the data were collected.

Purpose: It is used to track tourism activity on any given day and to determine trends and patterns on specific dates.

Attraction:

Description: This variable captures the name of the tourist attraction visited.

Purpose: It helps ascertain which sites are popular and gives projections of other significant
tourist destinations.

Visitor_Count:

Description: This variable records the number of visitors to a specific attraction on a given date.

Purpose: It is used to measure attendance at the attractions and helps in evaluating visitor flow, trends, and peak periods.

Average_Spend:

Description: This variable indicates the average expenditure of all visitors at a particular attraction
centre.

Purpose: It helps estimate per-visitor spending at each attraction and the contribution of tourist expenditure to each area of interest.

Local_Business_Revenue:

Description: This variable records the revenue local businesses earn from the total tourist expenditure at each attraction.

Purpose: It is used to determine the socio-economic importance of tourism and its impact on the
economy of a given region and to measure the potential revenue that the tourism sector
generates within the specific region.

These variables offer a broad picture of tourism activity, visitor numbers, and the effect on local commerce, making it possible to analyse the specifics of tourism activity and draw conclusions.

3B: Create a dynamic map in Power BI using your synthetic dataset that shows tourism density and
its economic impact on local economies. Describe how you would create this visualisation to provide
insights into peak times and spending patterns. (5 Marks)

Datasets Used:

- cleaned_Synthetic_Tourism_Data.xlsx

ANSWER:

To create a dynamic map in Power BI using the synthetic dataset (cleaned_Synthetic_Tourism_Data.xlsx) that shows tourism density and its economic impact on local economies, I followed these steps:

Load the Dataset:

I imported the cleaned_Synthetic_Tourism_Data.xlsx file into Power BI using the 'Get Data' option and selecting 'Excel'.

Inspect and Prepare the Data:

First, I reviewed the dataset to confirm it was loaded correctly and to look for missing or incomplete data.

I ensured that the dataset contained variables like date, attraction, visitor count, average
spending per visitor, and local business earnings.

Create Relationships (if needed):

If there were other tables, I linked them on common fields such as Attraction or Date to keep the model consistent.

Build the Map Visualization:

I expanded the Visualizations pane in Power BI and chose the ‘Map’ option from all the available
choices.

For the Location field, I selected the Attraction column; Power BI geocodes the attraction names to plot the data points on the map, so coordinates do not need to be entered manually.

For the Size field, I used Visitor_Count to represent the number of tourists frequenting each location.

For the Color Saturation field, I used Local_Business_Revenue as the marker of economic impact: the higher the value of Local_Business_Revenue, the deeper the colour saturation.
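These map fields can be complemented with a simple per-visitor measure linking density to economic impact. A hedged DAX sketch, assuming the worksheet loads as a table named 'Tourism' with the columns described in 3A (the table name is an assumption):

```dax
Total Visitors = SUM ( Tourism[Visitor_Count] )

Total Local Revenue = SUM ( Tourism[Local_Business_Revenue] )

-- How much local revenue each visitor generates, on average.
Revenue per Visitor = DIVIDE ( [Total Local Revenue], [Total Visitors] )
```

Shown as a tooltip or card, [Revenue per Visitor] distinguishes attractions that are merely busy from those that are economically productive.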

Additional Visuals:

To analyse trends in average spending, I developed a line chart that displays the monthly trends
concerning Average_Spend.

In another chart, I used Local_Business_Revenue and Average_Spend as values and Visitor_Count as the category to understand the impact that large visitor numbers may have on local business revenue and on average spend per visitor.

Interactive Filters and Slicers:

I added slicers for the date and attraction, giving users a way to manipulate the displayed data at their convenience. This allowed analysis of the density of tourism-related activity and the spending stimulus for specific periods and points of interest. I also included a slicer to filter by Month, to see visitor traffic during different months and peak periods.

Question 4 (10 marks)

4A: Discuss how you would analyse customer demographics, purchasing patterns, and profitability
using the synthetic transaction data. What analytical techniques would you apply? (5 Marks)

ANSWER:

To analyse customer demographics, purchasing patterns, and profitability using the synthetic
transaction data, I would apply the following analytical techniques:

Data Exploration and Cleaning:

Inspect the Data: First, I would import the data into Power BI and look at the data structure to understand which variables are provided: customer characteristics, transaction data, and profitability indicators.

Data Cleaning: I would then preprocess the data by removing rows with unrecoverable missing values, eliminating duplicates, and standardising formats, which improves the correctness of the data.

Customer Demographics Analysis:

Segment Customers: I would employ clustering to categorise customers with similar characteristics concerning age, gender, income, place of residence, etc.

Visualise Demographics: I would use visuals such as bar charts and pie charts to show the proportions of the different demographic groups among the customers.

Purchasing Patterns Analysis:

Transaction Analysis: I would study customer transactions for purchase frequency, the average amount spent in a single transaction, and the product areas of greatest interest.

Time Series Analysis: To analyse purchasing behaviour over time, I would use line charts and carry out a time series analysis to identify trends and seasonality.

Basket Analysis: I would use market basket analysis to identify products that customers commonly purchase together, revealing cross-selling and upselling opportunities.

Profitability Analysis:

Customer Lifetime Value (CLV): Where possible, I would assess each segment by its Customer Lifetime Value to gauge the potential long-term profitability of customers.

Profit Margins: I would review the gross or net margin of the different products or services to determine which are the most and least profitable.

RFM Analysis: I would segment customers using Recency, Frequency, and Monetary (RFM) analysis to identify high-value customers.
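The three RFM components can be sketched as DAX measures evaluated per customer (for example, in a table visual grouped by CustomerID). A hedged sketch, assuming a table 'Transactions' with [TransactionDate] and [TotalAmount] columns (names are assumptions, adjust to your model):

```dax
-- Days since the customer's most recent purchase.
Recency (days) =
DATEDIFF ( MAX ( Transactions[TransactionDate] ), TODAY (), DAY )

-- Number of transactions for the customer in the current filter context.
Frequency = COUNTROWS ( Transactions )

-- Total amount the customer has spent.
Monetary = SUM ( Transactions[TotalAmount] )
```

Binning each measure into score bands (e.g. 1-5) then yields the combined RFM score used to flag high-value customers.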

Advanced Analytical Techniques:

Predictive Modeling: I would apply machine learning models to the data to predict future purchasing behaviour and the key factors influencing profitability.

Churn Analysis: I would also conduct churn analysis to identify customers who are likely to stop doing business with the company and then devise measures to prevent this.

Customer Segmentation: I would use k-means clustering or any related clustering method to
categorise customers according to their purchasing behaviours and profitability.

4B: Design a dashboard in Power BI to visualise distinct customer segments based on profitability and purchasing behaviour. Explain how each element of the dashboard contributes to understanding customer segments. (5 Marks)

Datasets Used:

- cleaned_Synthetic_transaction_data.csv

ANSWER:

Interpretation

Identifying High-Value Customers: The TotalAmount and UnitPrice by CustomerID chart makes it easy to see which customers contribute most to revenue and at what price level. This assists in targeting marketing at high-value customers.

Understanding Purchasing Behavior: The Quantity by CustomerID area chart enables me to identify the customers who are buying frequently or in high quantities. This information is essential for the business's inventory control and customer retention analysis.

Overall Engagement: The gauge and KPI cards are helpful when making general decisions since
they give the overall idea of total engagement and profitability.

Question 5 (10 marks)

5A: Describe the process you would use in Power BI to cleanse and prepare a complex dataset for
analysis. What challenges might you encounter, and how would you address them? (5 Marks)

ANSWER:
To cleanse and prepare a complex dataset for analysis in Power BI, I would follow these steps:

Import the Data:

First, open the 'Get Data' window and choose the data source type (for example, Excel, CSV, SQL Server, etc.).

Inspect the Data:

Examine the data structure and the column data types in Data view; there might be apparent problems such as missing values or data type mismatches.

Data Profiling:

In the Query Editor, enable column profiling (Column Quality, Column Distribution, and Column Profile under the View tab) to observe each column's data type along with the number of unique values, blanks, and errors.
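Alongside the Query Editor profile, a couple of DAX measures can quantify the same issues after the data is loaded. A hedged sketch, assuming the table loads as 'Data' with a key column [ID] (names are assumptions, adjust to your model):

```dax
-- Rows whose key is blank (should be zero after cleaning).
Blank IDs =
COUNTROWS ( FILTER ( Data, ISBLANK ( Data[ID] ) ) )

-- Rows beyond one per distinct key, i.e. duplicates on [ID].
Duplicate IDs =
COUNTROWS ( Data ) - DISTINCTCOUNT ( Data[ID] )
```

Placing both measures on a hidden validation page makes it easy to re-check data quality every time the dataset is refreshed.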

Data Cleaning Steps:

Handle Missing Values: Where necessary, discard rows with missing values, impute numerical gaps with the mean/median of the corresponding column, or use forward or backward filling for time series.

Remove Duplicates: Use 'Remove Duplicates' to eliminate repeated rows.

Correct Data Types: Make sure each column has the appropriate data type (for example, dates
should be in date format, numerical values – numeric, etc).

Standardise Formats: Normalise the case of text data (use either lower or upper case consistently) and trim whitespace.

Transform the Data:

Column Splitting: If two or more pieces of information are included in a single column, then it is
wise to split the column to enhance the analysis.

Merging Tables: Join the tables to arrive at a single data set at the intersection of given keys.

Create Calculated Columns and Measures: Use DAX (Data Analysis Expressions) to derive calculated columns and measures that extract new insights.

Addressing Challenges:

Handling Large Datasets: To avoid poor performance, apply filters to load only the required data, or query the data with DirectQuery.

Data Quality Issues: Comprehensively profile and clean the data to sustain high data quality.

Complex Data Structures: Manage table relationships using the Model (relationship) view.

Performance Optimization: Practical tips include minimising the number of columns by excluding irrelevant ones, filtering out records with missing or incomplete data, and aggregating measurements to the necessary level of detail.

Validation and Testing:

Perform sample analyses and other tests on the dataset to ensure that it is appropriate for the study.

Develop an initial chart to verify the correctness of the data.

Challenges and Solutions

Handling Missing Values: Consider the domain context when deciding which imputation approach to employ.

Ensuring Data Consistency: Standardise formats and make sure Power Query assigns the correct data type to each column.

Dealing with Large Datasets: Consider DirectQuery mode and apply data reduction techniques to enhance query speed.

Complex Data Relationships: Work in the relationship view, which helps to visualise the interdependencies between tables.

By following the steps above, I can clean and transform complex data so it is fit for analysis in Power BI with better efficiency and quality, and in less time.

5B: Prepare a presentation in Power BI to showcase findings from one of the datasets provided.
Explain how you would use narrative elements and annotations to guide viewers through your
analysis. (5 Marks)

Datasets Used:

- Any provided dataset (e.g., cleaned_Budget.csv)

ANSWER:

Using the cleaned_Budget.csv dataset, I created the following dashboard in Power BI and used narrative elements and annotations to guide viewers through the analysis.
Load the Data:
- I imported the cleaned_Budget.csv file into Power BI using the 'Get Data' option and selecting 'Text/CSV'.
- Upon loading the data, I scanned through it to get a feel for what was there and cleaned up any issues, such as missing values.

Analysing the Current Visualization Elements in the Dashboard
Budget by Country (Line Chart):
Purpose: Illustrates how a company splits its budgets and expenditures among various countries.
Narrative: In the chart, I emphasised the countries that received the largest and smallest budgets, pointing out possible reasons why some countries could get more money.
Budget Gauge:
Purpose: The visual interface displays a round gauge indicating the overall available budget as a
percentage.
Narrative: This gauge presents a straightforward, immediate picture of total spending, easing the viewer's understanding of the overall budget.
Budget by Year (Bar Chart):
Purpose: Compares current budgets with those of past years, showing how budget allocations have evolved.
Narrative: This involved highlighting trends and changes that may have occurred in the organisation’s
budget allocation over the years, including irregular ones.
Map Visualization:
Purpose: Demonstrates how the budget has been distributed geographically.
Narrative: The map even shows the focus areas by region and any inequalities in budget distribution
between nations.
Interactive Filters and Slicers:
Country Slicer: Filters the results to show the budget distributions of particular countries.
Year Slicer: There is an option to filter by year, making it possible to evaluate the changes in the
budget through different years.
Annotations and Narrative Elements

Title and Introduction:
I kept it simple by placing the title 'BUDGET-DASHBOARD' at the top of the page to give the viewer an idea of what the presentation is about.
A sentence was included above the graph to explain the dataset's details and the analysis's objective.
Annotations and Data Labels:
To make the dashboard easier to read, I used annotations to underscore the findings we deemed valuable, such as the country with the highest budget (the USA) and the country with the lowest budget (New Zealand).
For exact figures, data labels were enabled on the charts to further improve clarity.

Narrative Text Box:
Beneath the visualisation, a brief narrative text box describes the essential conclusions. For example, it notes that the USA has the largest budget and explains why it is so much larger than New Zealand's.
Sequential Flow:
The visualisations were arranged sequentially: an orienting visual (the gauge) first, followed by the detailed visuals (line/bar charts, maps).
Guiding Viewers Through the Analysis
Dashboard Overview:
The first elements of the dashboard offer the overall view, focused on the total budget; here the gauge serves that purpose.
Detailed Analysis:
The line chart and bar chart on budget expenditure and actuals show the distribution of budgets by country and year, respectively.
The map gives a more geographical outlook, providing viewers with a better way to look at the regional results.
Interactive Exploration:
The country and year slicers, with their clear labels, enable users to explore more detailed information about particular countries and years.
Summary and Recommendations:
The narrative text box at the bottom of the dashboard gives viewers an executive summary of what has been presented to them.
Lastly, I concluded the work by analysing potential courses of action, followed by specific suggestions for the decision-making process.
By following this structure and using narration and annotations in Power BI, I produced a presentation that gives the audience a clear path through the budget data analysis and its conclusions.

End of Assessment

Marking Rubric

Criterion-Referenced Grading (the same criteria apply to Questions 1-5):

Excellent: Insightful questions, clear screenshots, and thorough explanations that exceed all requirements.
Very Good: Adequate questions, screenshots, and explanations that meet the basic requirements.
Good: Meets expectations with competent execution and minor areas for improvement.
Slightly deficient: Incomplete or unclear questions, screenshots, and explanations that do not fully meet requirements.
Unsatisfactory: Missing or irrelevant questions, unclear screenshots, and inadequate explanations that fail to meet basic requirements.
