SC-200 Qad - 1
DigitalNomad is exactly right. Usual examtopics story (this is the 5th one I've done in the past year or so) - do your own research as
Tested it here too, ActionType is the type of action obviously, and FailureReason is the reason why the Action failed. Then it makes no
That should clear the doubts that "LogonFailed" is the correct option, not "FailureReason". Strongly suggest going through the official
The summarize clause counts the number of events that match the previous criteria, grouping the results by DeviceName and LogonType.
The count() function counts the number of events in each group, and the LogonFailures alias is used to label this count in the resulting
Under DeviceLogonEvents schema, below are the ActionType values available and FailureReason is the column in the schema that can be
FailureReason itself is a parameter, so it cannot be equated (compared) to ActionType, so it should be equated to "LogonFailed". You may
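Putting those pieces together, a minimal sketch of the query being discussed (column names follow the standard DeviceLogonEvents schema; the time window is illustrative):

```kusto
// Count failed logons per device and logon type over the last day.
// ActionType is compared to "LogonFailed"; FailureReason is a separate
// column describing why a logon failed, so it is not used in this filter.
DeviceLogonEvents
| where Timestamp > ago(1d)
| where ActionType == "LogonFailed"
| summarize LogonFailures = count() by DeviceName, LogonType
```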
First, both "Impossible travel" and "Activity from infrequent country" are detection rules that help prevent breaches by foreign attackers.
The difference between the rules is the type of historical data used. "Impossible travel" compares the new sign-in location
with the last known one. So it basically means if someone already logged in from one location (a corporate network with a USA-based IP range) and
now logs in from a Chinese network, then it is likely the user is compromised (assuming the organization doesn't have any traffic/record
/association with Chinese networks). Moreover, it is based on geographically distant locations within a time period shorter than the travel time between them. So in my example
"Activity from infrequent country" is a bit different. Instead of comparing with the last known location, it detects if an account is logged in
from a country that has never been accessed by any user in the organization. This rule is based on user behavior using entity behavioral
information about previous locations used by users in the organization. An alert is triggered when an activity occurs from a location that
You have a custom threat detection policy based on the IP address ranges of your company's United States - based offices. You receive
Activity from a country/region that could indicate malicious activity. This policy profiles your environment and triggers alerts when activity
locations. This can indicate a credential breach, however, it's also possible that the user's actual location is masked, for example, by using
Nah, A is not the correct answer. The link clearly supports answer C, Infrequent country: An alert is triggered when an activity occurs
You must use Azure AIP as a tool for DLP. And then RegEx is a way to build your pattern in case there is no built-in sensitive pattern
"Microsoft Purview Compliance comes with built-in Sensitive information types like Credit Card Numbers, Bank Accounts, and more. You
protection service that can be used to classify and label documents, and to set up data loss prevention (DLP) policies to protect sensitive
documents. It can detect documents that contain sensitive information, such as customer account numbers, using pattern matching,
I would say it is D as the question already states you need to make a DLP policy meaning you have chosen the technology which you are
Whilst you are "protecting" the data it doesn't necessarily need to be AIP; it could be DLP with the RegEx policy to prevent it being sent
The answer is clear if the doubt is between C and D. To use the AIP you would need an Azure subscription, but the question doesn't say
it sounds like you want to use RegEx pattern matching to detect which documents are sensitive. RegEx, or regular expression, is a type of
pattern matching that can be used to identify specific patterns of characters in a string of text. In this case, you can use a RegEx pattern to
Azure Information Protection (AIP) is part of Microsoft Purview Information Protection (formerly Microsoft Information Protection or MIP).
Go to Security and Compliance Center in the administrative portal where you can create a data loss prevention (DLP) policy to protect the
"Set-MpPreference will always overwrite the existing set of rules. If you want to add to the existing set, use Add-MpPreference instead."
I would say A&D as the question states "Each correct answer presents a complete solution.", so choosing one of the audit options would
Yes, normally we add a new audit policy with Add-MpPreference and change the policy to enabled with Set-MpPreference, but in this
Use audit mode to evaluate how attack surface reduction rules would affect your organization if enabled. Run all rules in audit mode first
so you can understand how they affect your line-of-business applications. Many line-of-business applications are written with limited
security concerns, and they might perform tasks in ways that seem similar to malware. By monitoring audit data and adding exclusions
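As a hedged sketch, the audit data mentioned above can be reviewed with an advanced hunting query like the following (the ActionType filter assumes the documented "Asr...Audited" naming convention for audit-mode events):

```kusto
// List ASR rule audit events to see which line-of-business apps
// would have been blocked if the rules were enforced.
DeviceEvents
| where Timestamp > ago(30d)
| where ActionType startswith "Asr" and ActionType endswith "Audited"
| summarize Occurrences = count() by ActionType, FileName, FolderPath
| sort by Occurrences desc
```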
Reference: https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/enable-attack-surface-reduction?view=o365-
Link : https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/enable-attack-surface-reduction?view=o365-worldwide
Reference: https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/enable-attack-surface-reduction?view=o365-
I was thinking C,D because normally we set a new policy with Add-MpPreference in audit mode to see the effects of the policy, and after
that Set-MpPreference to change the policy to Enabled mode. But the correct answer is A,D because the question said "Each correct answer presents a
complete solution", so both Add-MpPreference and Set-MpPreference in Enabled mode are correct, because Set and Add can both create policies,
I've got another practice test in which the question is "You need to identify which Office VBA macros might be affected" -> The correct
You can Hide or Resolve alert and all of those actions you can perform on any device or device groups or single device. But in question
It's about the alert queue in Defender, so you need a general scope for any alerts about the Word documents containing macros.
That the documents are often used on devices used by the Account team does not matter. It is about the documents that your company
The documents are used on devices by the account team; the question said to hide false positive alerts, and not all the alerts, so I
https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/introducing-the-new-alert-suppression-experience/ba-
Hi, may be the documentation is not updated, the scope is to select organization or user/device/device groups, as they mentioned
Given answer BCE is correct. The question states "alerts must be hidden from queue". Automatically resolving is not a correct solution as
Create a suppression rule scoped to a device group. This will ensure that the rule only applies to the devices of the accounting team, while
Select the scope by selecting All Organization or User/Device/Device Groups (as mentioned accounting team in the question) Answer is
Action on the suppression rule (Options are Hide or Resolve) > As mentioned in the question to hide false positive, Answer is 'Hide the
BCE is the correct answer "Documents are used "FREQUENTLY" on the devices of accounting" this doesn't mean that it is only restricted to
"5. In the Scope section, set the Scope by selecting specific device, multiple devices, device groups, the entire organization or by user"
First you need to generate the alert, or you have nothing to suppress. Then a suppression rule on those devices (not globally), then in the
The "source appliance" mentioned here is the Firewall or Proxy in use. The Block script is specific to the product in use. The block is done
If your tenant uses Microsoft Defender for Endpoint, Zscaler NSS, or iboss, any app you mark as unsanctioned is automatically blocked by
I differ with your opinion. it should be Unsanction. The above URL clearly says "You can unsanction a specific risky app by clicking the
three dots at the end of the row. Unsanctioning an app doesn't block use, but enables you to more easily monitor its use with the Cloud
Somehow I still have doubts whether it should be executed on Source appliance OR Azure Cloud Shell? Any thoughts will be helpful.
The query posted on MS docs doesn't actually work (I have tested in a live tenant) - it needs to be amended to match the below before it
But if you choose the following from the answers presented in the question you will get the results you need to answer the question:
The question is missing the DeviceName and DeviceId columns, so the answer should be Join, Extend, Project as you mentioned, but in case the real
Well! The Extend operator is for calculated columns and would be followed by a custom variable name and an equals sign (something akin to
The where clause filters the EmailAttachmentInfo table to only include attachments sent by the specified sender and that
have a non-empty SHA256 hash value. This effectively filters the results to only include attachments from the specified sender and that
The join operator combines the results of the previous step with the results of a second query that selects the DeviceFileEvent table and
projects the FileName and SHA256 fields. This effectively creates a join between the two tables based on the SHA256 field, linking the
Join, project, project. Why? Because "Only the columns specified in the arguments are included in the result. Any other columns in the
input are dropped." https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/projectoperator. For this current query to
https://docs.microsoft.com/en-us/microsoft-365/security/defender/advanced-hunting-query-emails-devices?view=o365-worldwide#check-
I tested this in live environment with join, project, project, it gives an error. Join, extend, project just says no results found and if I change
Reading this discussion, I'm trying to figure out if you guys are trying to pass the test or get the answer right. I want to pass the test, even if
the answers seem insufficient. Help me make sense of the discussion. Thanks
Join, Extend, Project. If I try join, project, project, it gives an error. Join, extend, project just says no results found.
https://docs.microsoft.com/en-us/microsoft-365/security/defender/advanced-hunting-query-emails-devices?view=o365-worldwide#check-if-files-from-a-known-malicious-sender-are-on-your-devices
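For reference, the amended query being discussed looks roughly like this (the sender address is a placeholder; join key and projected columns follow the DeviceFileEvents schema):

```kusto
// Find devices holding files that arrived as attachments from a known
// malicious sender, joined on the SHA256 file hash.
EmailAttachmentInfo
| where SenderFromAddress =~ "malicious.sender@example.com"
| where isnotempty(SHA256)
| join (
    DeviceFileEvents
    | project FileName, SHA256, DeviceName, DeviceId
) on SHA256
| project Timestamp, FileName, SHA256, DeviceName, DeviceId
```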
Correct answer
https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/manage-alerts?view=o365-worldwide#suppress-an-alert-
Create a detection rule: A detection rule is a configuration in Microsoft 365 Defender that specifies the conditions under which an alert
should be generated. You can create a detection rule based on the advanced hunting query provided, which will trigger an alert whenever
Add DeviceId and ReportId to the output of the query: In order to include relevant information about the device and alert in the generated
alert, you should include the DeviceId and ReportId fields in the output of the query. You can do this by adding DeviceId and ReportId to
Not C = Avoid filtering custom detections using the Timestamp column. The data used for custom detections is pre-filtered based on the
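A minimal sketch of the output shape a custom detection rule expects (the where clause is a placeholder condition; Timestamp, ReportId, and an entity column such as DeviceId are the fields custom detections require):

```kusto
// Custom detection rules need Timestamp, ReportId, and an entity
// column (e.g. DeviceId) in the query results to generate alerts.
DeviceFileEvents
| where FileName endswith ".docm"   // placeholder condition
| project Timestamp, DeviceId, ReportId, FileName, FolderPath
```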
--> A: So you create this custom group(AllDeviceTempGroup) and add a Tag filter(RansomIRTag) to group devices into this device group.
In the details of the question, you are informed that these devices already have a group. Which means if your group is not promoted to
Thanks for the explanation. Probably I skipped some aspect while studying. I have this question: assuming I have 100s of devices, how can I
ACD. No admin role is required in the scenario given (automated), and obviously the rank needs to be 1, not 4 for the group that contains
As far as I can tell, you can't assign tag to a device group, only devices. And why would you need to assign tags to both devices and device
On second thought, I think this answer is correct, ACD. You would add the tag to the devices, then assign the tag to the device group to
Honeytoken entities are used as traps for malicious actors. Any authentication associated with these honeytoken entities triggers an alert.
This is what honeytoken accounts are meant for (i.e. dormant accounts that generate alerts if accessed). Sensitivity tags are meant for
You can also manually tag entities as sensitive or honeytoken accounts. If you manually tag additional users or groups, such as board
Dynamic Delivery is a feature in Microsoft Defender for Office 365 that allows you to optimize the delivery of email messages containing
attachments by scanning the attachments for malware before they are delivered to the recipient's mailbox. This allows you to quickly
To configure the Safe Attachments policies to use Dynamic Delivery, you can go to the Mail flow > Safe attachments policies page in the
Options B, C, and D are not relevant to reducing the delivery time for email messages with attachments and do not provide the same level
https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/safe-attachments?view=o365-worldwide#dynamic-delivery-
The third answer: take 20. According to MS, "there is no guarantee which records are returned, unless the source data is sorted.", "take and
Given Answer is correct, check the following page that has this query under the section "Review logon attempts after receipt of malicious
Answer is correct, but the solution is incomplete, as the results need to be sorted before the "take" command (most recent logons). "Top" is
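The point about sorting before "take" can be sketched like this (table and filter are illustrative):

```kusto
// "take" returns arbitrary rows unless the input is sorted, so sort
// first (or use "top") to get the most recent logons deterministically.
DeviceLogonEvents
| where ActionType == "LogonSuccess"
| sort by Timestamp desc
| take 20
// equivalently: | top 20 by Timestamp desc
```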
In February 2021 in the EmailAttachmentInfo and EmailEvents tables, the MalwareFilterVerdict and PhishFilterVerdict columns have been
The Active Remediation Actions role in Defender for Endpoint meets the need to 'approve and reject' pending actions with respect to Defender for
D = Quite redundant, but gives reader roles read access to the portal up until RBAC is turned on the defender permissions. Least Privilege.
The Active remediation actions role in the MDE portal is enough to approve and reject pending actions generated by Microsoft Defender for
As soon as you enable the Active remediation actions role, the Security Reader role does not work in the MDE portal. If you don't enable RBAC in MDE
Security Administrator role assigned in either Azure Active Directory (Azure AD) (https://portal.azure.com) or the Microsoft 365 admin
Option C will not follow the least privilege principle. Further, this role is more for administration tasks like editing or deleting roles, etc.
Security Admin: A user that belongs to this role has the same rights as the Security Reader and can also update the security policy and
Please don't approve this and the previous comment here on this question, as it is a mistake. Meant to comment on the above question.
Settings>Information Protection>Microsoft Information Protection>Automatically scan new files for Microsoft Information Protection
D. From Settings, select Information Protection, select Azure Information Protection, and then select Automatically scan new files for
Azure Information Protection classification labels and content inspection warnings. This will enable Cloud App Security to automatically
scan new files for Azure Information Protection classification labels and content inspection warnings, which can be used to detect and
monitor files for external sharing and other activities, and to generate alerts and trigger remediation actions in response to potential
I think C might be one of the responses because I see this sentence "Create a file policy that detects these stale public files by selecting
https://learn.microsoft.com/en-us/defender-cloud-apps/policies-information-protection#detect-and-prevent-external-sharing-of-sensitive-
Not D= This setting would be ideal in the long run, but for the question this is just too much configuration not mentioned in the answer.
How does the given answer "generate alerts and trigger remediation actions in response to external sharing of confidential files" without
account. This detection uses a machine-learning algorithm that reduces "false positives", such as mis-tagged IP addresses that are
It actually does, it is called cloud discovery anomaly detection policy; but not suitable for this question as you cannot filter by any
Control -> Templates -> Logon from a risky IP address -> Create (activity) policy -> Activities matching any of the following -> IP address |
While I want to say Anomaly detection policy for the 1st answer, when I tested it in the lab, it came back with no templates under that
Activity policies in Cloud App Security are designed to detect specific types of activity in the cloud apps that are being monitored. The
"Botnet Network Activity" policy template is a pre-defined policy that is designed to detect and alert on suspicious activity from botnet
networks. This policy uses a combination of machine learning and threat intelligence to identify botnet network activity, such as attempts
"Access Policy," is a type of policy that is used to control access to specific cloud apps or resources based on specified conditions, such as
"Anomaly detection policy," is a type of policy that is used to detect anomalies in the activity of specific cloud apps or resources based on
specified conditions, such as the number of times a resource is accessed or the type of activity that is being performed. Neither of these
These IP addresses are involved in malicious activities, such as performing password spray, Botnet C&C, and may indicate compromised
For anomaly detections says "such as mis-tagged IP addresses" so it doesn't use IPs tagged as malicious, and the filter used according to
This detection identifies that users were active from an IP address identified as risky by Microsoft Threat Intelligence. These IP addresses
detection uses a machine-learning algorithm that reduces "false positives", such as mis-tagged IP addresses that are widely used by users
It is clear that the first response is "anomaly detection policy", but there is no "IP address tag" filter in it; there is "risky IP address" and such
And there is no "Anomaly policy" after pushing the "Create policy" button so all the Anomaly Policies are built-in and not custom ones.
Also, when creating a policy based on the anomaly template, none of the options in the second part of the question are valid options.
I think the answer is in the word 'custom'... The anomaly policies are just built-in policies that have no actions associated. An activity policy can
The question says "You need to create a custom template-based policy"...only Anomaly detection policy has template with which we can
Reason being that you cannot create an anomaly detection policy as it's a built-in policy. Also, you cannot filter the policy based on IP
If it asked for you to ensure that activity from Botnets were alerted on and it was to be filtered for specific users then yes this policy could
Description: Alert when a user logs on to your sanctioned apps from a risky IP address. By default, the Risky IP address category contains
authenticated user log-ins (Activity from suspicious IP addresses) - I'm going with the Anomaly Detection Policies. Anyone see where I'm
The key here is "all users who work remotely." Only a named location can be used to define a CA rule to allow MFA for everyone except a
Apologies, the answer provided is correct. I just now checked the site myself and the options exist; you add the IP address range and a
At least B makes sense as the question reads a custom policy based on a custom IP range is in place. So false positive alerts that are
E - Creating a new policy when there is already an existing one that you need to reduce the alerts from, would not reduce the number of
Best answer IMHO. Stop (it says configure, it should say untick) the enrichment (for the impossible travel) add the addresses of your US
B. Add the IP addresses to the corporate address range category. This will ensure that sign-ins from these IP addresses are not flagged as
E. Create an activity policy that has an exclusion for the IP addresses. This will allow you to exclude sign-ins from these IP addresses from
A and D is correct because of this statement "You need to prevent alerts for legitimate sign-ins from known locations." it's not a bespoke
You've discovered they are legitimate sign ins so basically you need to whitelist the PIP they are coming from and override what MCAS
Another reason for picking B over D (from https://docs.microsoft.com/en-us/defender-cloud-apps/ip-tags): Built-in IP address tags and
custom IP tags are considered hierarchically. Custom IP tags take precedence over built-in IP tags. For instance, if an IP address is tagged
https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/view-email-security-reports?view=o365-worldwide#threat-
Number of messages: Use the Mailflow view in the Mailflow status report to see the number of ZAP-affected messages for the specified
Message details: Use Threat Explorer (and real-time detections) to filter All email events by the value ZAP for the Additional action column.
The answer is A. The mail flow report does show the flow of all mail on aggregate - so you do see the number of mails moved by ZAP - but
Yes, the answer D said "mail flow report in Exchange", this is not the same as "Mail Status Report" at Microsoft Defender for Office 365.
"The report provides the count of email messages with malicious content, such as files or website addresses (URLs) that were blocked
by the anti-malware engine, zero-hour auto purge (ZAP), and Defender for Office 365 features like Safe Links, Safe Attachments, and
https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/zero-hour-auto-purge?view=o365-worldwide#how-to-see-if-
You can't choose 'Security Admin' because the key in the questions is 'at the subscription level'. Read the Security Admin section in the
However, only the Owner can 'Enable auto provisioning'... to be the owner of the extension you're deploying. "For auto provisioning, the
specific role required depends on the extension you're deploying." Check the section under the roles table https://docs.microsoft.com/en-
Contributor on subscription role for both. Remember that the big difference between Contributor and Owner is that Owner also has access to
For auto provisioning, the specific role required depends on the extension you're deploying. Half the available extensions require owner...
Currently studying and I looked at the link provided on this : https://docs.microsoft.com/en-us/azure/defender-for-cloud/permissions . The
The question states at the start that this is at the subscription level. Thus, the contributor role cannot Add/assign initiatives (including)
Correct answer should be, Contributor (least privileges at subscription level) for both users: https://docs.microsoft.com/en-us/azure
vulnerability management. The correct answer should be "Automation in Full mode", because it is the only correct answer since the last
provided answer is to set Automation to "Not automated" which is not correct as per Microsoft docs on Live Response, check it out here
"Ensure that the device has an Automation Remediation level assigned to it." https://docs.microsoft.com/en-us/microsoft-365/security
You'll need to enable, at least, the minimum Remediation Level for a given Device Group. Otherwise you won't be able to establish a Live
Second needs to be Full. The whole concept of REMEDIATION during live connect is based on the Remediation level assigned. If it's off, then Live connect
"Create a device group that contains the devices and set Automation level to full" is the only answer that has an automation remediation
This query retrieves a list of alerts that are related to devices where the user "user1" is logged on, and it includes the alert ID, timestamp,
title, severity, and category for each alert. The "join" and "project" operations in the query are used to combine and filter the data from the
https://learn.microsoft.com/en-us/microsoft-365/security/defender/advanced-hunting-query-emails-devices?view=o365-worldwide#get-
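A hedged sketch of such a query, assuming the AlertInfo/AlertEvidence schema and "user1" as the account name:

```kusto
// Alerts raised on devices where user1 has logged on.
AlertInfo
| join AlertEvidence on AlertId
| where DeviceId in ((
    DeviceLogonEvents
    | where AccountName =~ "user1"
    | distinct DeviceId))
| project AlertId, Timestamp, Title, Severity, Category
```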
https://learn.microsoft.com/en-us/microsoft-365/compliance/insider-risk-management-policies?view=o365-worldwide#data-theft-by-
When users leave your organization, there are specific risk indicators typically associated with data theft by departing users. This policy
My first day on my first job was to review the training session from Microsoft; they highlighted this "Insider Risk" feature several times, the
D & C are both reasonable answers. However, the question did not state that the deleted user is leaving the organization. Therefore, for
Generates an alert when an unusually large number of activities are performed on files in SharePoint or OneDrive by users outside of your
100% correct. The first option when creating an insider risk policy is data theft by departing users; the default policy timeframe is 30 days
The question emphasizes 'incident'. Though you can view affected entities by clicking the Alerts tab > Alert list, it will be for that particular
alert. One alert isn't necessarily an incident; an incident can have multiple alerts. So you need to click on the Incidents tab, open the
Correct answer is indeed C: when you click on an incident it will open the Summary tab; on the summary tab you can see the overview
The description even says 'Microsoft 365 Defender automatically investigates all the incidents' supported events and suspicious entities in
To identify all the entities affected by an incident in the Microsoft 365 Defender portal, you should use the Evidence and Response tab.
The Evidence and Response tab in the Microsoft 365 Defender portal provides a detailed view of an incident, including information about
the affected entities. When you select an incident in the Investigations tab, the Evidence and Response tab will display information about
the affected users, devices, applications, and other entities. You can use this information to understand the scope of the incident and to
The Devices, Alerts, and Investigations tabs may also contain information about affected entities, but the Evidence and Response tab
https://learn.microsoft.com/en-us/microsoft-365/security/defender/investigate-incidents?view=o365-worldwide#evidence-and-response
dashboard in the Microsoft 365 compliance center. This tab will display a list of all the events that triggered the alert, including the specific
entities (e.g. files, emails, etc.) that were affected. You can further investigate each event to identify the specific user, device and action
Answer is (B) No, because the question is asking for recommendations on the current alert that you are viewing(Mitigate the threat) and
(Mitigate the threat) - provides manual remediation steps for this security alert (Provides recommendations on what to do to resolve the
prevent future attacks.(Provides recommendations for security in general such as Remediate Vulnerabilities and win defender should be
Prevent future attacks - provides security recommendations to help reduce the attack surface, increase security posture, and thus prevent
The given answer is correct. You create firewall rules and add trusted ranges to ensure Key Vault can only be accessed from those trusted
Shouldn't the firewall already be turned on? Therefore, why would a solution be to turn the firewall on? This can't be the correct
Enable the Azure Key Vault firewall as described in Configure Azure Key Vault firewalls and virtual networks. Configure the firewall with
Open the key vault's access policy settings. Remove the corresponding security principal, or restrict the operations the security principal
Automate responses to Microsoft Defender for Cloud triggers using workflow automation - the trigger conditions selected is “Security
Agreeing with this answer, and add to it that you can actually find the "Trigger logic app" when you open both "Security alerts" and
"Recommendations". This is mentioned in the docs page "To manually run a Logic App, open an alert or a recommendation and click
suggestions for remediation or improvement that are provided by Azure Security Center, but they do not trigger the execution of a logic
Option 2, "When an Azure Security Centre is created or triggered," is also not a valid trigger. Azure Security Center is a service, not an
Option 3, "When a response to an Azure Security Center alert is triggered," is also not a valid trigger. Responses to alerts are actions taken
automatically-run. If you look at the 2nd or 3rd paragraph, it says logic apps can trigger from security alerts; it doesn't mention workflow
If you are using the legacy trigger "When a response to a Microsoft Defender for Cloud alert is triggered", your logic apps will not be
There are both offered answers in the Add workflow automation creation wizard, so "When an Azure Security Center Recommendation is
created or triggered" as well as "When an Azure Security Center alert is triggered". I can't find anything in the question that would imply
I think that the answer is "When an Azure Security Center Recommendation is created or triggered" because you want to remediate
To ensure that SecAdmin1 can apply quick fixes to the virtual machines by using Azure Defender, while also following the principle of least
The Contributor role for RG1 will allow SecAdmin1 to perform tasks such as deploying resources and modifying resource properties within
RG1, but it will not grant them access to perform administrative tasks at the subscription level. This will allow SecAdmin1 to apply quick
Why is it so important to copy the file ONLY as "asc_alerttest_662jfi039n"? Please consider that I am a newbie in security, and help guide
Correct, the renaming of the .exe is referenced directly in the documentation - https://docs.microsoft.com/en-us/azure/security-center
Answer A is correct: you go to your Log analytics workspace -> agent management -> Data collection rules, here you would create a
https://docs.microsoft.com/en-us/azure/security-center/security-center-alert-validation#simulate-alerts-on-your-azure-vms-linux- is not
https://learn.microsoft.com/en-us/defender-cloud-apps/connect-google-gcp#how-to-connect-gcp-security-configuration-to-defender-for-
We copy the GCP Cloud Shell script from the GCP Connector, then run it at GCP Cloud Shell, and then we have an auto-generated service
In addition to #2, you can either choose “Prevent future attacks” OR “Mitigate the threat” as options since the “Mitigate the threat”
The correct option would be to choose "Mitigate the threat", as the recommendations from this tab resolve the alert, whereas
Correct answer, from the provided link in the answer it is explained: Continuous export lets you fully customize what will be exported, and
Answer is correct. To be able to prevent unauthorized access to the key vault through suspicious IPs you have to change the networking
"Enabling it at the workspace level doesn't enable just-in-time VM access, adaptive application controls, and network detections for Azure
Seems correct you want a security trigger for the login and you use this trigger to start the automation workflow with the powershellscript
Nvm, JIT remediation only applies to "Security Control: Secure Management Ports", not to "Security Control: Restrict Unauthorized
Azure Security Center and Azure Defender are now called Microsoft Defender for Cloud. We've also renamed Azure Defender plans to
Sorry folks I mean it should be B - Under Mitigate the threat it gives you recommendations on how to resolve this particular alert.
The answer should be B, but one more point needs to be added to the answer: the alert needs to be triggered first; then select the alert and take
Prevent future attacks - provides security recommendations to help reduce the attack surface, increase security posture, and thus prevent
Answer B would prevent future alerts from being suppressed, but the question is asking to view alerts created in the last 5 days - these
Suppression rules don't work retroactively - they'll only suppress alerts triggered after the rule is created. Also, if a specific alert type has
Answer C may change the filter; however, changing the filter won't matter because the suppression rule is still active. Godsky is correct
...but if it's accessed by multiple Azure Function Apps -- each would have a different Resource ID, right? (So...you would need to know the
Defender for Cloud. It will use the Log Analytics agent to parse logs, but the Log Analytics agent itself will not connect the on-prem server to Defender.
I don't think the correct answer is here. It should be the Azure Guest agent, as that could then install/configure Windows Defender through the
You can only select one security alert and create a suppression rule for it. When selecting multiple security alerts and clicking 'Suppression rules',
then clicking 'Create new suppression rule', the drop-down menu under Alerts (when selecting 'Custom') would allow you to select only one
Look at the question. It states resources within a subscription. Without knowing the design of the subscription, only allocating a
Avoid assigning broader roles at broader scopes . By limiting roles and scopes, you limit what resources are at risk if the security
it is clearly mentioned in the following link that for disabling/enabling you can use "Security Admin" at least and for applying security
As per the given link, the answer should be Security Admin and Resource Group owner. Since in question, the ask is to apply security
Contributor at the subscription level allows you to enable/disable Defender plans... not what is requested. Owner/Contributor at the RG
Wrong way around. Security Admin can enable/disable Azure Defender but not make changes to resources. Subscription contributor
Correct answer. Microsoft docs say: "For a rule to suppress an alert on a specific subscription, that alert type has to have been triggered
at least once before the rule is created." https://docs.microsoft.com/en-us/azure/defender-for-cloud/alerts-suppression-rules#create-
You should first C. trigger a PowerShell alert on VM1 to create a custom alert suppression rule that will suppress false positive alerts for
suspicious use of PowerShell on VM1. After triggering the alert, you can use the information provided in the alert to create a suppression
"To connect hybrid machines, you install the Azure Connected Machine agent on each machine. This agent does not deliver any other
functionality, and it doesn't replace the Azure Log Analytics agent / Azure Monitor Agent. The Log Analytics agent or Azure Monitor Agent
B is Correct. The full correct answer should be "You enable Azure Arc to onboard the virtual machines to Azure Arc, then you enable auto-
NO: If you are going to onboard machines running in Amazon Web Services (AWS), Defender for Cloud's connector for AWS transparently and automatically handles the Azure Arc deployment. Learn more in Connect your AWS accounts to Microsoft
That's for AWS accounts; the question says VMs on AWS. This article is from this year, and it says you can use Azure Arc for VMs on AWS
A machine with Azure Arc-enabled servers becomes an Azure resource and - when you've installed the Log Analytics agent on it - appears
It's B because it doesn't mention Azure Arc, it just says Log Analytics agent (which, by the way, is going to be deprecated and replaced by the Azure Monitor Agent)
To use Microsoft Defender for Cloud to protect on-premises Linux servers, you should first install the Log Analytics agent on the servers.
To identify which blobs were deleted in the storage account, you should review the activity logs for the storage account in Azure Monitor.
unusually high volume of delete operations occurred. You can also use advanced filtering options to look for specific blobs. Additionally,
you can use Azure Monitor's Log Analytics to query and analyze the logs in more detail, which can help you identify patterns or trends in
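If resource logs are flowing to the workspace, a query along these lines could surface the deletions. This is a sketch: `StorageBlobLogs` and the `DeleteBlob` operation name are the documented names for blob resource logs, but your diagnostic settings must actually be streaming them to the workspace.

```kql
// Find blob delete operations in the storage resource logs.
// Requires a diagnostic setting that sends blob logs to the
// Log Analytics workspace.
StorageBlobLogs
| where TimeGenerated > ago(7d)
| where OperationName == "DeleteBlob"
| project TimeGenerated, AccountName, Uri, CallerIpAddress
| sort by TimeGenerated desc
```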
https://docs.microsoft.com/en-us/azure/sentinel/connect-common-event-format#designate-a-log-forwarder-and-install-the-log-analytics-
In the Timeline tab, review the timeline of alerts and bookmarks in the incident, which can help you reconstruct the timeline of attacker
I think the key word for the second one is "navigate": if you click on the timeline alerts, you won't navigate to an item; your graph could
Yes, as the logic app is already available and it is preconfigured to trigger manually ... now, when you connect it as a playbook, you need to
But how do you search for that activation event if the logs aren't coming to Sentinel? The key is "First": you need to get the events
Some people are saying this is correct as a logic app connector; actually, this is referring to the fact that you have literally just deployed Sentinel,
No. There is nothing to do with AAD Connector. This is not about threat hunting against AAD. It is about how to integrate Azure
Logic App to work with Azure Sentinel. You must modify existing Logic App and choose Azure Sentinel actions either the following
To use the existing logic app as a playbook in Azure Sentinel, you should first modify the trigger in the logic app to allow it to be triggered
Currently, the logic app is triggered manually, which means that it can only be initiated by a user manually clicking the "Run" button or by
an external system sending a request to the logic app's HTTP trigger. In order to use the logic app as a playbook in Azure Sentinel, you will
The data connector is already there because it is being deployed manually - if not, the playbook wouldn't work. Therefore, D is the right answer.
Sentinel. Microsoft Sentinel comes with many out of the box connectors for Microsoft services, which you can integrate in real time. For
Given answer is correct. The keyword in the question is "First". Don't assume the AAD connector is there. If you don't add it, or check it's
The question is not "How would you do it?" It is asking what you should do "first"... a common mistake analysts make is to assume the
connector is there and the KQL in the query will result in anything. If no results are generated, the analytics rule will not get triggered,
and the question said you just deployed Sentinel, so there is no ground for the assumption that AAD connector is already set. The
logic app was set to trigger manually, so the FIRST thing I would do after deploying Sentinel and before I assume my playbook will
The question says the logic app "is used" to block Azure Active Directory (Azure AD) users, so it assumes the connector is already set up,
From the question, the logic app existed before sentinel. It was just a standalone to disable accounts. You deploy Sentinel. With what
will you trigger this logic app? You've just deployed Sentinel - from where does it get its data to trigger? It's a bad question, but B or D
And since you already have a logic app which is manually triggered, it's asking how you automate this via playbook, so modify the trigger.
Ignore my previous statement. Regardless, I do not see the logic of using a manual response. The change must be on the trigger,
visualizations, and narrative text. These documents can be codified and served for specialized visualizations, an investigation guide, and
Because the question states that there is already an alert that "sends an email to a distribution group", you should add a parameter
It's not A or B because they mention a trigger. You can't send an email as a trigger, so it must be an action. It doesn't mention anything about
incidents or alerts, so I don't think it's D either. It must be C. I'm guessing you can create a condition where it matches the resource owner
the question states that there is already an alert that "sends an email to a distribution group", you should add a parameter and modify the
It appears to be a variable, but that's not in the provided answers, so I guess parameter? https://techcommunity.microsoft.com/t5/azure-
IP is not returned in the query. We can see that the Account and Computer were mapped to entities and were returned in the 'summarize'
Correct answer A & D: Tested the query on Sentinel. We only have the Account (user) and Host (computer) in the "incident settings (preview)" tab
What are the two primary drawbacks of implementing Microsoft Sentinel as single-tenant with regional workspaces in your environment as
Microsoft Sentinel supports data collection from Microsoft and Azure SaaS resources only within its own Azure Active Directory (Azure AD)
Definitely B and E. To query logs across multiple workspaces, all workspaces should have a sentinel solution deployed on top of them and
Click on View playbooks for the chosen alert. You will get a list of all playbooks that start with a "When an Azure Sentinel alert is triggered" trigger
D - https://docs.microsoft.com/en-us/azure/sentinel/tutorial-detect-threats-custom#issue-a-scheduled-rule-failed-to-execute-or-appears-
D is a "Permanent failure - rule auto-disabled" - "In consecutive permanent failures, Azure Sentinel stops trying to execute......"Adds the
Azure Sentinel Contributor can, in addition to the above, create and edit workbooks, analytics rules, and other Azure Sentinel resources.
Azure Sentinel Automation Contributor allows Azure Sentinel to add playbooks to automation rules. It is not meant for user accounts.
The question is about "Create and run playbooks". The Logic App Contributor role is not sufficient to run the playbook. You need at least
Use playbooks together with automation rules to automate your incident response and remediate security threats detected by Microsoft
The first scenario will not generate any alerts, as each series by Caller generates a single result; there is only one caller, therefore 1 result,
In the second scenario, there will be 3 results (one for each caller), so one alert will be generated (as this is above the threshold and the
make-series is going to make lists of all the EventSubmissionTimestamp values for each user, with each user being on a separate row. This
means that if 1 user creates 3 machines, it will aggregate them all into 1 row. And if 3 users create 1 virtual machine we will see 3 separate
The make-series operator creates a series of specified aggregated values along a specified axis. In this case, grouping by "Caller" will make 3
This will create a table that shows arrays of the ResourceID's of each query result from each "Caller" ordered by specified time range.
The rule specifies when the query returns 2 results in a 5 minute timespan, trigger the alert, in this case the first scenario would only
You will end up with 3 counts of computers being created through a single deployment; regardless, this should be visible under the 5 hours
I would say if all 3 individual users created a VM within 5 minutes of each other i.e. 3 VM's created within the 5 minute window, then an
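To make the scenario discussion above concrete, a rule query in this style might look like the sketch below. The table and column names are assumptions based on the Azure Activity connector; the actual exam query may differ.

```kql
// One row (series) per Caller: three users creating one VM each
// produce three results, while one user creating three VMs
// produces a single result.
AzureActivity
| where OperationNameValue =~ "Microsoft.Compute/virtualMachines/write"
| make-series VMCreations = count() default = 0
    on TimeGenerated from ago(1d) to now() step 5m
    by Caller
```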
So which ones?
Cross-workspace queries can now be included in scheduled analytics rules. You can use cross-workspace analytics rules in a central SOC,
* Alerts generated by a cross-workspace analytics rule, and the incidents created from them, exist only in the workspace where the rule
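A minimal sketch of such a cross-workspace query (the workspace names are placeholders):

```kql
// workspace() lets a single scheduled rule in the central SOC
// workspace query tables that live in other workspaces.
union
    workspace("soc-workspace-eu").SecurityEvent,
    workspace("soc-workspace-us").SecurityEvent
| where EventID == 4625  // failed logons
| summarize FailedLogons = count() by Computer
```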
You can create scheduled rules from Data connector pages (Next steps tab). But the bottom line is whoever wrote this question should be
After connecting your data sources to Microsoft Sentinel, create custom analytics rules to help discover threats and anomalous behaviors
Analytics rules search for specific events or sets of events across your environment, alert you when certain event thresholds or conditions
are reached, generate incidents for your SOC to triage and investigate, and respond to threats with automated tracking and remediation
I dunno... You can create alerts from scheduled queries. You can then create incidents from alerts. The question doesn't suggest you can't.
Creating a Microsoft incident creation rule for a data connector will not meet the goal of creating an incident in Azure Sentinel when a
An incident creation rule is used to create incidents in Azure Sentinel based on specific criteria, such as when a certain number of alerts
are triggered within a certain timeframe. While an incident creation rule can be used in conjunction with data connectors to analyze data
To detect sign-ins from malicious IP addresses, you would need to create an analytics rule that looks for specific signs of a malicious IP
address, such as a high number of failed login attempts or login attempts from a known malicious IP. Once the rule detects a sign-in from
automate and orchestrate your response, and can be set to run automatically when specific alerts or incidents are generated, by being
Playbooks in Azure Sentinel are based on workflows built in Azure Logic Apps, which means that you get all the power, customizability,
To send a Microsoft Teams message to a channel whenever a sign-in from a suspicious IP address is detected in Azure Sentinel, you will
Add a playbook: A playbook is a set of actions that can be triggered in response to an incident, such as sending a message to a channel in
Microsoft Teams. To add a playbook, you will need to navigate to the Playbooks tab in Azure Sentinel and create a new playbook that
Associate a playbook to an incident: After creating the playbook, you will need to associate it with an incident in Azure Sentinel. This can
be done by navigating to the Incidents tab in Azure Sentinel and selecting the incident that you want to associate the playbook with. Then,
The same API is also available for external tools such as Jupyter notebooks and Python. While many common tasks can be carried out in
the portal, Jupyter extends the scope of what you can do with this data. It combines full programmability with a huge collection of libraries
To visualize Azure Sentinel data and enrich it by using third-party data sources to identify indicators of compromise (IoC), you can use
Notebooks in Azure Sentinel are interactive documents that allow you to run queries, create visualizations, and perform data analysis on
your Azure Sentinel data. They also allow you to connect to other data sources, such as third-party threat intelligence feeds, to enrich the
Once you have connected to the third-party data source, you can use Azure Sentinel notebook to blend the data, and create visualizations,
The query returns two columns: Requests metric and Result category. Each value of the Result column will get its own bar in the chart with
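As a hedged illustration of that shape of result (the table and column names here are invented for the example):

```kql
// Two columns - a metric and a category - rendered as a bar
// chart, one bar per distinct Result value.
AppRequests
| summarize Requests = count() by Result = tostring(ResultCode)
| render barchart
```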
Use hunting livestream to create interactive sessions that let you test newly created queries as events occur, get notifications from the
The requirement says "You need to receive an alert in near real-time", making livestream the better option here. A and D are the answer.
The answer is definitely A & D; it's even there in the link they provided, plain as day, but they still show the wrong selection... I am paying for
Someone needs to start remediating some of these wrong answers and start providing proper, tangible
The correct answer should be A&B. B because you have to connect the "Azure Storage Account" and make sure the diagnostic settings are
set to send activities about the key enumerations to Sentinel workspace. I see people in discussion debates ignore the importance of
checking the data connector and jump into the discussion about whether the solution should be X or Y. Please check the data connector if
Livestream, not to hunting. With Livestream, any matching results will generate "Azure Alerts" which are shown at the Notification bell. So
incidents start as unassigned. You can also add comments so that other analysts will be able to understand what you investigated and
To send a Microsoft Teams message to a channel whenever an incident representing a sign-in risk event is activated in Azure Sentinel, you
Add a playbook: A playbook is a set of actions that can be triggered in response to an incident, such as sending a message to a channel in
Microsoft Teams. To add a playbook, you will need to navigate to the Playbooks tab in Azure Sentinel and create a new playbook that
Associate a playbook to the analytics rule that triggered the incident: After creating the playbook, you will need to associate it with the
analytics rule that triggered the incident. This can be done by navigating to the Analytics tab in Azure Sentinel, select the analytics rule
that you want to associate the playbook with, and then selecting the "Associate Playbook" button and selecting the playbook that you
I think there is a similar question, but it was about an incident from a suspicious IP address, and for that there was no need to enable entity behavior; hence
the steps were Associate playbook and Add playbook. But this issue is related to a sign-in risk event, which needs the sign-in logs data source
UEBA also requires data sources from Azure and/or Thirdparty and will be able to help analyze and investigate an incident. This is not
Surely would be easier to connect Identity Protection to Sentinel than use the UEBA stuff... Which seems more for intelligent insider risk (i
Correct answers A&B. You can't assume you have the data source needed to capture risky sign-ins. You should check whether you have
remaining correct answer. D on its own doesn't provide a complete solution (having a playbook on its own is not enough). Hence, answer B
I take it back. The correct answers should be A & D. See Question #22, which previously had the same options; the correct answers were "Add
It seems like answer A is related to the fact that it is "an incident representing a sign-in risk event" and sign in risks are in the category of
Send a message to your security operations channel in Microsoft Teams or Slack to make sure your security analysts are aware of the
Send all the information in the alert by email to your senior network admin and security admin. The email message will include Block and
If the admins have chosen Block, send a command to the firewall to block the IP address in the alert, and another to Azure AD to disable
Actually, they do create incidents. There is an option called "Create incidents from this query" which you can enable or disable.
You're right. "Create incidents from alerts triggered by this analytics rule" is set to Enabled by default. The other option, the type of
analytics rule called a "Create incident rule", is to create incidents based on alerts generated by other security products, like
Defender.
Thanks!
Answer is correct
For the correct events to be audited and included in the Windows Event Log, your domain controllers require accurate Advanced Audit
To enhance detection capabilities, Defender for Identity needs the Windows events listed in Configure event collection. These can either
be read automatically by the Defender for Identity sensor or in case the Defender for Identity sensor is not deployed, it can be forwarded
to the Defender for Identity standalone sensor in one of two ways, by configuring the Defender for Identity standalone sensor to listen for
Because livestream notifications for new events use Azure portal notifications, you see these notifications whenever you use the Azure
I follow A and D. ASC indeed shouldn't be in the equation when you are already using Sentinel. D for the logon events and A for the
You create an Azure Sentinel workspace named workspace1. In workspace1, you activate an Azure AD connector for contoso.com and
Thinking logically, the question gives no mention of MCAS and there are only two connectors active, AD (not Identity Protection) and
O365. AD connector can give only what happened not what's suspicious, so you would need Azure AD Identity protection connector for
B should not be the case, as we are talking about identity compromise and then actions followed in O365, not on-prem, so Azure Security
Coming to A, there are samples of analytics rules about Log4j activity and other types of suspicious patterns, but there is nothing
specific that covers everything, so there is nothing built-in; all the logs are there, though, to capture any kind of activity in O365, and their
To use the Fusion rule to detect multi-staged attacks that include suspicious sign-ins to contoso.com followed by anomalous Microsoft
Create custom rule based on the Office 365 connector templates: By creating a custom rule based on the Office 365 connector templates,
Create an Azure AD Identity Protection connector: Azure AD Identity Protection is a security solution that provides visibility, control, and
protection for Azure AD identities. By creating an Azure AD Identity Protection connector, you can monitor Azure AD activity for suspicious
Create an Azure AD Identity Protection connector: You can use the Azure AD Identity Protection connector to import data from Azure AD
Identity Protection and use it in your Azure Sentinel workspace. This will allow you to detect suspicious sign-ins to contoso.com and use
Create a Microsoft Cloud App Security connector: You can use the Microsoft Cloud App Security connector to import data from Microsoft
Cloud App Security and use it in your Azure Sentinel workspace. This will allow you to detect anomalous Microsoft Office 365 activity and
combining alerts from the scheduled analytics rules that detects specific events or sets of events across your environment, with alerts
Fusion Rule needs signals from Azure AD Identity Protection connector and from Microsoft Cloud App Security connector to generate the
https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-fusion-advanced-multistage-attack-detection-scenarios/ba-
So you only have to create your own rule from (editing, modifying) a rule template ("There are rule templates to create incidents in
Azure Sentinel based on alerts from Azure Security Center, Office 365 Advanced Threat Protection (Preview) and Microsoft Defender
You create an Azure Sentinel workspace named workspace1. In workspace1, you activate an Azure AD connector for contoso.com and an
Fusion would have everything enabled, the task is to set the detection for anomalous activity. One connector the Azure defender for the
As per the comments above, the guide recommends enabling extra connectors. To enable these detections, we recommend you configure the
If you wanted to return a list of all incidents sorted by their incident number but only wanted to return the most recent log per incident,
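A common way to express that in KQL (a sketch, assuming the built-in SecurityIncident table that Sentinel maintains):

```kql
// Keep only the most recent log entry per incident, then sort
// by incident number.
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| sort by IncidentNumber desc
```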
2) Server1: change to disable the Log Analytics agent from syncing with Microsoft Sentinel's Syslog configuration, so that changes made in
SW1 receives the data only from CEF1, CEF1 needs to send data collected in CEF message from Server1 and Plain Syslog from Server2.
Seeing that Server2 is configured to send plain Syslog to CEF1, there is no Log Analytics agent on this server to duplicate data. Everything
If you plan to use this log forwarder machine to forward Syslog messages as well as CEF, then in order to avoid the duplication of events
facilities that are being used to send CEF messages. This way, the facilities that are sent in CEF won't also be sent in Syslog. See Configure
b. You must run the following command on those machines to disable the synchronization of the agent with the Syslog configuration in
If you plan to use this log forwarder machine to forward Syslog messages as CEF, then to avoid the duplication of events to the Syslog and
On each source machine that sends logs to the forwarder in CEF format, you must edit the Syslog configuration file to remove the facilities
From <https://docs.microsoft.com/en-us/learn/modules/connect-common-event-format-logs-to-azure-sentinel/3-connect-your-external-
configuration file to remove the facilities that are being used to send CEF messages. This way, the facilities that are sent in CEF won't also
- Server1 change - You must run the following command on those machines (the ones you ran it on previously, i.e., Server1) to disable the
synchronization of the agent with the Syslog configuration in Microsoft Sentinel. This ensures that the configuration change you made in
If you plan to use this log forwarder machine to forward Syslog messages as well as CEF, then in order to avoid the duplication of events
**On each source machine that sends logs to the forwarder in CEF format**, you must edit the Syslog configuration file to remove the
facilities that are being used to send CEF messages. This way, the facilities that are sent in CEF won't also be sent in Syslog. See Configure
You must run the following command **on those machines** to disable the synchronization of the agent with the Syslog configuration in
Use playbooks together with automation rules to automate your incident response and remediate security threats detected by Microsoft
personnel, close noisy incidents or known false positives, change their severity, and add tags. They are also the mechanism by which you
The first one is kind of given away within the answer. The second keyword is "evaluate", which is the operator for the autocluster() plugin.
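For reference, plugins like autocluster() are invoked through the evaluate operator; a minimal sketch (the source table and filter are just examples):

```kql
// autocluster() finds common patterns of discrete attributes
// in the data; it is invoked via the evaluate operator.
SecurityEvent
| where EventID == 4625
| evaluate autocluster()
```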
Answer is correct.
The Potential malicious events map in the Overview section of Microsoft Sentinel can display geolocation information for incidents, such
as a brute force attack against an Azure Portal analytics rule. By reviewing this map, you can identify the location from where the attack is
Btw, the provided link in the answer seems quite out of date too. That overview map was removed in the "New" Overview version which
The correct answer is A - once you click on the red or orange circle within the map, it forwards you to Log Analytics, where the query is shown,
How will you do this from the overview? I think it must be answer B. You go to the incident and click the IP under Entities in the Details
To suppress specific Defender for Cloud security alerts at the root management group level, you can create an Azure Policy assignment.
This will allow you to apply a policy that will suppress the alerts across all subscriptions and management groups in the tenant. Azure
To ensure that specific Defender for Cloud security alerts are suppressed at the root management group level, you should create an Azure
https://docs.microsoft.com/en-us/azure/defender-for-cloud/continuous-export?tabs=azure-portal#manual-one-time-export-of-alerts-and-
To ensure that app1 launches when Microsoft Sentinel detects an Azure AD-generated alert, you should create an automation rule first.
Fusion rules in Microsoft Sentinel are rules that are used to detect advanced multistage attacks by combining alerts and activities from
administrative effort. You only need to configure the CEF server to listen for syslog from all the Linux VMs and then send the CEF data to
CEF is a standard format for log data that is used by many security and event management systems, including Microsoft Sentinel. By using
a CEF connector, the log data can be ingested into Sentinel with minimal parsing required, reducing administrative effort. Additionally, CEF
While the Common Event Format (CEF) connector can be used to collect log data from systems and devices, it may not be the best choice
for minimizing administrative effort and the parsing required to read log data in this scenario. CEF is a proprietary log format developed
by ArcSight, and it requires parsing to extract the relevant data from the logs. This can require additional effort and resources, particularly
In contrast, the Syslog connector is a widely-used standard for logging system events that is supported by many systems, including Linux
systems. Syslog uses a simple text-based format that is easy to parse and understand, minimizing the effort and resources required to
extract the relevant data from the logs. Therefore, using a Syslog connector may be a better choice for minimizing administrative effort
While CEF and Syslog are both log formats that can be used to collect log data from systems and devices, they are not interchangeable
As the question asks that 'the solution must minimize administrative effort' - configuring 100 subscriptions can't be correct, so the answer
To stream alerts into //Syslog servers// ,and other monitoring solutions, connect Defender for Cloud using continuous export and Azure
The question says to export to Syslog, and Syslog is not used by Microsoft Sentinel since it is already natively integrated; the answer should
Not sure about the options - You wouldn't add a syslog connector to the workspace, as it is on by default. - I'm wondering if they want you
The fact that I'm paying contributor access to have these easy questions with a wrong answer really triggers me... This being said, the
A Workbook is a collection of visualizations and data that can be used to analyze and report on data in Azure Sentinel. It can be used to
Agree: AUTOMATION RULES are a way to centrally manage automation in Microsoft Sentinel, by allowing you to define and coordinate a