sys.dm_exec_sql_text for stored procs


When looking at query plan execution stats and cross-referencing to the sys.dm_exec_sql_text function so that one can see the execution text, why is it that calls to execute stored procs have text that starts 'CREATE PROCEDURE...'?
This gives the impression that what is being executed is the creation of the SP, but really it is just being called.
Would this also mean there would be no way to distinguish the acts of calling and creating?
level 1
ScotJoplin
1 point·4 months ago
Because it gives you the whole text. You need to use the statement start and end offsets to get the actual statement.
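For illustration, a sketch of that offset arithmetic (a generic example, not from the thread; it joins sys.dm_exec_query_stats to sys.dm_exec_sql_text, and the offsets are byte positions into the Unicode batch/module text):

-- Extract the individual statement from the full module text.
-- statement_end_offset = -1 means "to the end of the text"; divide by 2 because the text is NVARCHAR.
SELECT
    qs.execution_count,
    SUBSTRING(st.text,
              (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st;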

level 2
chadbaldwin
SQL Server Developer2 points·4 months ago
I think what they're asking is: why does it contain the words "CREATE PROCEDURE" if it's not actually creating the procedure, but just executing its contents?
I wanted to reply earlier and basically say "that's just the way it is" lol, cause I've never actually thought about this. But that's not really a helpful comment, so I held back. I just figured it wasn't something I needed to worry about, because I know it's not actually creating the procedure every time.

level 3
ScotJoplin
1 point·4 months ago
In that case it's most likely because it's an object reference that looks in the metadata to pull the text. That includes the pre-create comment, if there is one, and all the text to the end of the creation batch.

SSIS Connecting to Live Data


Question

I'm working through Andy Leonard's Stairway to Integration Services, completed the first few steps, and built the incremental load package. As I don't have any databases that update regularly, I am trying to connect to a safe, live data source so that I can build the package from scratch and learn the concepts.
In Python, I would do this by accessing an API, but I have no idea how SSIS does this. I've read about OData and what it is capable of, but I don't know how to access anything other than .svc files.
What's the easiest/best way to pull live data into a database with SSIS?
level 1
burgerAccount
2 points·4 months ago
SSIS isn't really for live data, it's for scheduled data. You would still use an application, like a Python or C# one, for regular transactions. In terms of scheduling the SSIS packages, you would use SQL Agent. You would want to make sure Integration Services is configured, deploy your package to the Integration Services catalog, then schedule your job using SQL Agent.
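As a rough sketch of that last step, assuming the package has already been deployed to the SSISDB catalog (the folder, project, and package names below are placeholders):

-- Run a catalog-deployed package; a SQL Agent job step of type
-- "SQL Server Integration Services Package" can do the same thing on a schedule.
DECLARE @execution_id BIGINT;

EXEC SSISDB.catalog.create_execution
    @folder_name     = N'MyFolder',            -- placeholder
    @project_name    = N'MyProject',           -- placeholder
    @package_name    = N'IncrementalLoad.dtsx',-- placeholder
    @use32bitruntime = 0,
    @execution_id    = @execution_id OUTPUT;

EXEC SSISDB.catalog.start_execution @execution_id;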

level 1
Prequalified
2 points·4 months ago
An ETL tool or Azure Data Factory would be a better choice because of the built-in integrations. You could leverage your Python skills as well.

Query optimization inquiry

Answered.

EDIT: Thanks everyone for the feedback. I've gotten much more response than I intended with this
question. I've learned a lot and appreciate everyone's input.
Does using a subquery in a join speed up the overall execution, rather than using the WHERE clause later in the query? My thought was to do this because the Customers table is huge, and I wanted to filter it down beforehand.
Example:
LEFT JOIN (select * from dbo.Customers
where CustomerTypeID in(2,8,12)
AND EnrollerID <> 7
AND CustomerStatusID = 1
) as c
level 1
SQLBob

Microsoft Certified Master, MVP8 points·5 months ago

It can, but there are several variables, including indexes on your dbo.Customers table and whatever it's being joined to. The only way to tell for sure is to test with your dataset in your environment. The extra time you put in on testing now can save you tons of time later.
Other thoughts:
- Unless you're using all the columns in dbo.Customers, don't SELECT *. Just select the columns you need. This can affect which indexes are used and ultimately your performance (see the sketch after this comment).
- Inequalities (such as your EnrollerID <> 7) can force scans, which again can slow things down. Sometimes there's no way to avoid them though.
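For example, a sketch of the posted subquery with an explicit column list; the outer table (dbo.Orders), its columns, and the join condition are hypothetical and only there to make the fragment complete:

SELECT o.OrderID, c.CustomerID               -- hypothetical outer columns
FROM dbo.Orders AS o                         -- hypothetical outer table
LEFT JOIN (SELECT CustomerID                 -- list only the columns the outer query uses
           FROM dbo.Customers
           WHERE CustomerTypeID IN (2, 8, 12)
             AND EnrollerID <> 7
             AND CustomerStatusID = 1) AS c
    ON c.CustomerID = o.CustomerID;          -- assumed join condition; not shown in the post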

level 2
w_savage

1 point·5 months ago

Hey this is great feedback, thank you! I'll do some testing.



level 2
pooerh

SQL Server Consultant1 point·5 months ago

"Just select the columns you need. This can affect which indexes are used and ultimately your performance."

Just FYI, this only matters in the outer query; the query optimizer is smart enough to figure out which columns are necessary. If you SELECT * in the subquery and then only select columns that are part of an index in the outer query, it'll work just fine.

create index ix_y_l on y (l) include (k);

select x.id, suby.id, suby.k
from x
left outer join (select * from y where l = 5) suby
    on x.id = suby.id

This will correctly use the index on y, regardless of how many columns y has (assuming a clustered index on id, and good enough cardinality of the index, of course).
live demo with plan

level 2
ScaryDBA

Data Platform MVP1 point·5 months ago

HUGE emphasis on not using SELECT * here. Whoa boy could that really badly affect plan choices.

level 2
Baldie47

1 point·5 months ago

Even if he needs all the columns, it's good practice to name each column that you want. Not only is it better for future-proofing, it's also clearer and more efficient.

level 1
Voltrondemort

6 points·5 months ago

Abandon all hope, all ye who enter here. SQL query optimization is a black art that will take years off
your life, and then the new SQL server version will change the query planner on you anyways.
The principle is that you're supposed to say what you mean and the SQL server figures out how to
do that.
In practice? Not so much.

level 2
LorenzoValla

2 points·5 months ago

Agreed. It can be a deep dark tunnel of letdowns.


level 1
Euroranger

3 points·5 months ago

What are you left joining onto with your "c" table, and which columns of that join do you want to pull out in your original select? The SELECT * in the subquery is one place you can cut down on your execution time: only pull in the columns you actually need.
Another is to check whether the columns in your subquery's WHERE clause are indexed properly. You're filtering against them, so indexes on those columns might help.
Another possibility is to do the entire subquery as a previously declared temp table and then do the left join onto that. It may or may not help with execution time, but it pulls the subset of the data you're joining onto out of the main query, which could have a lot more than this going on. Hard to say seeing only a small snippet of the overall query.

level 2
w_savage

1 point·5 months ago

I'm using CustomerID as the key. I guess I should have asked: in general, is my thinking correct? The current query runs in about 25 seconds.

level 3
Euroranger

2 points·5 months ago

So, your "main" query will have a customerID value and you're left joining onto a table called
customers...presumably to retrieve customer information for that particular ID. If customerID is a
unique ID value for customers what's with the left join? Is it possible the customerID is not unique? If
it's not then pushing the where statement into the subquery makes some sense.
However, if you're doing the join on customerID because it is a unique value then the where clause
in the subquery isn't doing anything for you. I'm guessing the where statement isn't superfluous for
the overall query but it could be you should be starting with the customer table, doing a join onto the
rest of the tables through customerID and then applying the where statement to the main query
further down to qualify the customerIDs you're using to join with.

level 1
caveat_cogitor

2 points·5 months ago

Learn to check and interpret Query Execution Plans. It gets way more complex/dark-artsy depending
on how recent the version of SQL Server engine is. However, in some cases, you will end up with
the exact same execution plan. In others... not so much. Learn a bit of SQL Server internals, and
once you get a solid understanding of how the engine processes your code, you will hopefully start
to internalize how code becomes execution, and you will keep that in mind as you are writing
queries.

level 2
ScaryDBA

Data Platform MVP3 points·5 months ago

Agreed. And here's a free book that can help with that.



level 3
DrFishTickledMyToes

2 points·5 months ago

I forgot that you posted the .pdf for free. Thanks! For those that don't know, this is THE book on
execution plans.

level 1
phunkygeeza
Business Intelligence Specialist2 points·5 months ago

Like everyone says, it's complicated.


But as a rule of thumb try to stick to one query and avoid subqueries unless the needs of the logic
force you to.
For your example you should just join the table and put those predicates in the ON clause.

level 1
[deleted]

1 point·5 months ago

Post the full query, along with the index details. For a quick fix, I would put the subquery result into a temp table and join to it.

level 1
pooerh

SQL Server Consultant1 point·5 months ago·edited 5 months ago

It's hard to advise on the optimization when all we have is one subquery, without even the join conditions. There are certainly two issues here.
First, id IN (1, 2, 3) is short for id = 1 OR id = 2 OR id = 3, and logical OR is usually bad for performance because it heavily skews cardinality estimates. The query optimizer will have little idea how many rows this query returns, and thus may have issues selecting the correct physical operation for your join, the amount of memory granted to the query, and whether to parallelize. Eventually this can lead to a situation where your query returns 50 rows but the optimizer works under the assumption that it will return 718 billion rows and needs 2 TB of memory for it.
Second, as already mentioned, inequality operators are bad as well, because it's hard to select from
an index based on an inequality operator (how effective it is depends on some factors, mostly how
good your column statistics are). It may be hard to avoid though.
As I said, without the whole query, without knowing the database and its data, and without seeing the plan, it's kind of difficult to advise on the query alone. For code simplicity, this will have the exact same effect:
LEFT OUTER JOIN dbo.Customers c
ON c.CustomerTypeID IN (2, 8, 12)
AND c.EnrollerID <> 7
AND c.CustomerStatusID = 1
AND .... -- your regular join conditions
What you can do though, if this is an analytics database that you want to speed up, and possibly if
you use this kind of query often, is to create a materialized view of that subquery:
CREATE VIEW vSomeKindOfCustomers
WITH SCHEMABINDING
AS
SELECT .. -- columns that you need
FROM dbo.Customers c
WHERE c.CustomerTypeID IN (2, 8, 12)
AND c.EnrollerID <> 7
AND c.CustomerStatusID = 1
GO

CREATE UNIQUE CLUSTERED INDEX ixSomething ON vSomeKindOfCustomers (id)
GO
Then you join to this view instead of your table directly. Important note: Having a view like that will
slow down INSERT performance on the base table, because the view needs to be updated.
If materializing a view is not acceptable, or if the filters you have on that customers table are
dynamic, you may want to consider creating a temporary table, if you're in some kind of procedural
environment (e.g. stored procedure, or manually running these queries).
SELECT * -- or just the columns you need
INTO #filteredCustomers
FROM dbo.Customers
WHERE ... -- your conditions
Then you join to that #filteredCustomers temp table instead, it's already filtered. You may even
create a clustered unique index on it (just regular CREATE UNIQUE CLUSTERED INDEX ix ON
#filteredCustomers (id) or some such) and it'll give you extra performance if you're joining on
that id column. Depending on the size of your table, speed of IO, etc. it may speed up your queries
by orders of magnitude. Or slow them down. So test everything.

level 1
ScotJoplin

1 point·5 months ago

In almost all cases the answer is no. The optimiser can promote predicates into joins or apply them
later. It can make a difference but usually not with trivial rewrites.
How to Upload Files to Azure Storage via
PowerShell
techcommunity.microsoft.com/t5/ito...

Script Sharing


level 1
Method_Dev

12 points·4 months ago·edited 4 months ago

Here is a different way without the REST API.


EDIT:
I provided both the method to upload to a blob container and the method for a file share using the Az module instead of the REST API, and added a method that lets you upload files using the REST API without using -InFile.
#Blob Version Start
if(!(Get-Module -ListAvailable | ? { $_.Name -like 'Az*' }))
{Install-Module -Name Az -AllowClobber -Force}
$StorageAccountName = "{StorageAccountName}"
$StorageAccountKey = "{StorageAccountKey}"

$azure_StorageContextSplat = @{
StorageAccountName = $StorageAccountName
StorageAccountKey = $StorageAccountKey
}

$storageContext = New-AzStorageContext @azure_StorageContextSplat

$filesToUpload = GCI "C:\temp" -File | select FullName

$filesToUpload | % {

$azure_FileToUploadSplat = @{
Context = $storageContext
File = $_.FullName
Container = "{StorageContainerName}"
Force = $true
}

Set-AzStorageBlobContent @azure_FileToUploadSplat

}
#Blob Version End

#File Share Version Start


if(!(Get-Module -ListAvailable | ? { $_.Name -like 'Az*' }))
{Install-Module -Name Az -AllowClobber -Force}

$StorageAccountName = "{StorageAccountName}"
$StorageAccountKey = "{StorageAccountKey}"

$azure_StorageContextSplat = @{
StorageAccountName = $StorageAccountName
StorageAccountKey = $StorageAccountKey
}

$storageContext = New-AzStorageContext @azure_StorageContextSplat


$Container = Get-AzStorageShare -Name "{FileShareName}" -Context $storageContext

$filesToUpload = GCI "C:\Temp" -File | select FullName


$filesToUpload | % {
$azure_FileToUploadSplat = @{
Source = $_.FullName
Share = $Container
Force = $true
}

Set-AzStorageFileContent @azure_FileToUploadSplat
}
#File Share Version End
Here is a method without using InFile:
<# Get SAS Token Start #>
$StorageAccountName = "{StorageAccount}"
$StorageAccountKey = "{StorageAccountKey}"

$azure_StorageContextSplat = @{
StorageAccountName = $StorageAccountName
StorageAccountKey = $StorageAccountKey
}

$storageContext = New-AzStorageContext @azure_StorageContextSplat

$container = "{StorageContainerName}"
$expiryTime = (get-date).AddHours(1)
$permission = "rwa"

$SASToken = New-AzStorageContainerSASToken -Name "{StorageContainerName}" -Permission $permission -ExpiryTime $expiryTime -Context $storageContext
<# Get SAS Token End #>

<# Upload File Start #>

$StorageURL = "https://{StorageAccountName}.blob.core.windows.net/{StorageContainerName}"
$FileName = "{FileName}.jpg"
$FileToUpload = "C:\temp\{FileName}.jpg"
$Content = [System.IO.File]::ReadAllBytes($FileToUpload)

$blobUploadParams = @{
URI = "{0}/{1}{2}" -f $StorageURL, $FileName, $SASToken
Method = "PUT"
Headers = @{
'x-ms-blob-type' = "BlockBlob"
}
Body = $Content
}

Invoke-RestMethod @blobUploadParams

level 1
Wireless_Life

3 points·4 months ago

I was looking for a way to automate moving video content to cloud storage and cobbled together this
PowerShell script via research. Let me know if there is a better way to accomplish this.

level 2
Method_Dev

5 points·4 months ago·edited 4 months ago

That’s how you do it, at least this is how I do it. It’s simple.
You can also use Set-AzStorageBlobContent or Set-AzStorageFileContent

Interview query review for sql practice?


Discussion

Prepping for a data science interview, and I've been re-learning SQL. I've seen this site recommended by some people. Can anyone speak to whether it was worth it?
I've done a lot of the SQL questions on LeetCode, but they don't have much of the analysis component in them.
level 1
Past-Haunts

7 points·4 months ago

Not sure how much it’s worth, but it’s probably worth it if you can recoup the cost in a few hours on
the new job.

level 1
AdmiralAdama99

6 points·4 months ago

I like this free website for learning basic SQL and practicing basic questions. It's interactive too, which is nice.
https://sqlbolt.com/

level 1
Stev_Ma

2 points·4 months ago

Check out Hackerrank and Stratascratch



level 1
Kris_Tyhoney

3 points·4 months ago

IMO it's been mostly helpful for me in understanding the type of tech interview questions each company has used, and less so for knowing the questions themselves. Regardless, I did go through a lot of the SQL questions, and I thought the answers were usually thorough, with a nice step-by-step breakdown.

level 1
MinecraftBattalion

1 point·3 months ago

DataCamp, I think, has the best SQL courses for learning functions, and it gets as advanced as window functions.
That being said, since you can write a query a ton of different ways, it's just a matter of putting in the reps; HackerRank has good practice questions.

level 1
Laserpainter

1 point·4 months ago

I like it. The user interface doesn't look that great, but overall it's pretty helpful in terms of learning what's on the interview, based on my last round of interview cycles.

How to keep the database clean of unneeded tables

Hi all,
From time to time you have to create some custom tables within the production database, either to create some reports or as a backup when updating some data manually. What are the best practices for performing housekeeping on such tables so they are not present in the database forever?
What I was thinking is creating a table where I would define the list of tables which are needed by the application using the DB, and then some scheduled task which would, for example, notify me via email about all other tables which are not defined within that first table and are older than X days. That way I would easily be reminded to either add the table to the list of predefined tables or to drop it.
When you are dealing with PII data this is extra important, and you also don't want to drop tables automatically, as you could lose data that was important / needed for other purposes.
Thank you for any tips.
level 1
ihaxr
4 points·4 months ago
What are the best practices to perform housekeep on such tables so they are not present in
database forever?
We just don't create them in the Production database in the first place. We created another database
which contains all of our modified stored procedures, tables, views, quick backup tables, etc...

level 2
Bezdak
1 point·4 months ago·edited 4 months ago
Well, that doesn't really solve the issue if you are dealing with PII data and need to have all data properly managed due to GDPR. For example, when a customer requests deletion of their data, you need to be fully aware of all the places where you have their personal data in order to delete it; your suggestion would only complicate this procedure.

level 3
LaughterHouseV
1 point·4 months ago
Why must they be in production in the first place? This is a huge smell.

level 3
alinroc
1 point·4 months ago
you need to have all data properly managed due to GDPR
You're starting from the invalid assumption that what you're doing today is properly managing your
data. You can't scatter the data all over the place and then decide to clean it up.

level 3
rotist
1 point·4 months ago
If you don't want to create an additional database, then maybe create a schema dedicated to storing temporary data?
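For example, a minimal sketch of that idea (the schema and table names here are placeholders, not from the thread):

-- One-off setup: a dedicated schema for scratch/temporary objects
CREATE SCHEMA scratch AUTHORIZATION dbo;
GO
-- Ad-hoc backup/report tables then go there instead of dbo, so they're easy to find and review
SELECT *
INTO scratch.SomeTable_Backup_20200101   -- hypothetical names
FROM dbo.SomeTable;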

level 1
alinroc
3 points·4 months ago
either to create some reports
Temp tables, not regular tables. They disappear when their scope terminates. Problem solved.
as backup when updating some data manually
That's why you take transaction log backups throughout the day. Or better yet, you don't update data
manually in the first place.
I'm with /u/ihaxr. You keep the database clean of these tables by not creating them in the first
place. Everything you describe is a terrible band-aid on a very avoidable problem.

level 1
hunua
2 points·4 months ago
I keep all my DB schemas in a git repo. It's an easy way to keep track of everything.
Then I ran into a similar problem with a system I inherited from someone else. There were hundreds
of production DBs that were supposed to be identical, but all had so much temporary junk in them
we were drowning in it.
So I created a simple script to visualise the differences. Check out https://github.com/rimutaka/posts/tree/master/azure-sql-migration-series/visualising-differences-and-similarities-in-multiple-database-schemas if you want.

level 2
ScaryDBA
Data Platform MVP1 point·4 months ago
Source control to the rescue!
This is the right way to get the job done. Further, once you've got a database in source control, you
can start to develop and build the database from there. It gives a second kind of backup (structure
only, but, another backup), a way of auditing changes to know who did what, and, the ability to easily
undo changes because you can track those changes.

level 1
blobbleblab
1 point·4 months ago
I would do it the other way: define a list of tables that you want to drop, with timeouts and a daily cleanup task. The reason for defining tables that I know I want to clean up is that it sounds like your data requirements mean you will need to keep data which is new/less defined rather than delete it. Having a "positive" table where you delete the given tables, rather than a "negative" table, is much better. Having a negative table as you suggest means that if you create a new table which you want to keep, but forget to put it into the "negative" table, it will be deleted on a cleanup cycle. Imagine a scenario where you roll out some changes, then go on leave for a week (planned or unplanned) and forget to update the table which lists those you want to keep. Halfway through the week they get deleted and you are in the poop. Having a listing of objects you know you want to delete is much better, as you will be forced to put deletable objects into it. It would be a lot safer and you wouldn't be relying on getting emails to check and act on.
I would structure the table something like:
CREATE TABLE dbo.TablesToCleanup (
    TablesToCleanupID int IDENTITY(1,1) PRIMARY KEY,
    TableSchema varchar(100) NOT NULL,
    TablePattern varchar(200) NOT NULL,
    TimeOutDays smallint NOT NULL,
    TimeOutAction varchar(20) NOT NULL,
    Active bit NOT NULL DEFAULT 1
)
To get the tables to delete on a given day:
SELECT so.[name] AS [TableName], so.[crdate] AS [CreatedDate], t.TimeOutAction
FROM INFORMATION_SCHEMA.TABLES AS it
INNER JOIN sysobjects AS so ON it.[TABLE_NAME] = so.[name]
INNER JOIN dbo.TablesToCleanup t ON it.TABLE_SCHEMA = t.TableSchema
    AND it.TABLE_NAME LIKE t.TablePattern
WHERE DATEADD(day, t.TimeOutDays, so.[crdate]) < CURRENT_TIMESTAMP
    AND t.Active = 1
So a few things:
- I would have a table pattern because you may want to delete patterns like all tables beginning with a particular word or phrase, so you can do "TempReportForFinanceForDate*" or similar
- Have a way you can turn them off, hence an "Active" indicator
- Have an action indicator. Sometimes you may want to truncate the table, for instance, rather than delete it. Or archive it somewhere else
Likely you would need some dynamic SQL here to make it work well, or wrap the above in a loop. I would then create a SQL Agent Job that runs daily, cleaning up the tables.
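A minimal sketch of that dynamic SQL loop, reusing the table and query above (the action handling is simplified to just truncate vs. drop; run the whole batch from a daily SQL Agent job):

DECLARE @schemaName sysname, @tableName sysname, @action varchar(20), @sql nvarchar(max);

DECLARE cleanup_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT it.TABLE_SCHEMA, it.TABLE_NAME, t.TimeOutAction
    FROM INFORMATION_SCHEMA.TABLES AS it
    INNER JOIN sysobjects AS so ON it.TABLE_NAME = so.[name]
    INNER JOIN dbo.TablesToCleanup t ON it.TABLE_SCHEMA = t.TableSchema
        AND it.TABLE_NAME LIKE t.TablePattern
    WHERE DATEADD(day, t.TimeOutDays, so.[crdate]) < CURRENT_TIMESTAMP
        AND t.Active = 1;

OPEN cleanup_cursor;
FETCH NEXT FROM cleanup_cursor INTO @schemaName, @tableName, @action;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build the cleanup statement for this table; QUOTENAME guards against odd names.
    SET @sql = CASE @action
                   WHEN 'TRUNCATE' THEN N'TRUNCATE TABLE '
                   ELSE N'DROP TABLE '
               END + QUOTENAME(@schemaName) + N'.' + QUOTENAME(@tableName) + N';';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM cleanup_cursor INTO @schemaName, @tableName, @action;
END
CLOSE cleanup_cursor;
DEALLOCATE cleanup_cursor;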

level 1
dbxp
1 point·3 months ago
You could probably set up something like this with Redgate DLM running as a scheduled task, we
use it for deployments and it picks up any manually modified objects.
However, using manual data extracts is not a good backup strategy, as you're only backing up that one table. If the UI updates more than one table, or you have triggers, or the user can perform a second action after the change that they previously wouldn't have been able to, then reverting the data to match your backup will not result in the same data you had before the manual modification.

Creating personal database


Discussion

I am taking a SQL class on Udemy. The class provided SQL code to fill a database with data so I can work along with the class... but I need to create the database first. I'm using a MacBook. I was trying to use Microsoft SQL Server because that's what I learned on, but I've been jumping through a lot of hoops (downloading Docker to make a container, downloading Node.js for runtime), and I'm not sure it's the most efficient way.
Do you guys know the easiest way to make a database on a Mac?
level 1
pacman_and_dot

2 points·4 months ago

Can you run a VM and install it on that?



level 1
rajandatta

2 points·4 months ago

Use SQLite. By far the simplest install. One line to install, one line to create a database.

level 2
xpis2

Discussion1 point·4 months ago

I'll look into that, that sounds perfect. Thank you!

Is this stupid? Can PowerShell do automation like this?

Question

I fear I've gone in over my head here. Up until now, I believed PowerShell could do it... but now I'm wondering if it SHOULD do it.
I have created a script that mainly revolves around the FileSystemWatcher class. Quick rundown: it watches a folder location, and if anything is added, it does whatever action I want it to.
Short and sweet of my question: is PowerShell something that can run 24/7 on a host machine in the background so that my file system watcher is constantly active, or am I completely using the wrong tool?
level 1
Betterthangoku
20 points·4 months ago
Howdy,
If the folder resides on a Windows Server there is a better tool for you, File Server Resource
Manager. It does exactly what you describe, and it's really good.

level 2
MyOtherSide1984
5 points·3 months ago
Although probably a correct answer and a great tool, I don't think I have access to the servers in this
manner and they're making some big changes soon that'll likely just mean this'll be a bigger pain.
Tossing this in my back pocket though!

level 2
funkmesideways
2 points·3 months ago
True that. Forgot about that bad boy. Good call.

level 1
2PhatCC
35 points·4 months ago
I would think your best bet would be to set the script to run on a scheduled task. Does it need to run
every single second? Could it run every minute? Every 10 minutes?

level 2
MyOtherSide1984
10 points·4 months ago
Because of what it does (Watch the file system), it currently is set up to run continuously. So as long
as my PowerShell session exists, it sees what happens and reacts. Setting it on a scheduled event
wouldn't be impossible, but I genuinely also want to see if the file watcher can be used in this
manner.

To be honest, I could probably just set the script to run once per day and if the files in there are
created today, do my action, but for automation and instant update purposes, I'd like to see it use the
system watcher

level 3
gangculture
41 points·4 months ago

try it, see where it gets ya. take a rolling fuck at a donut, take a flying fuck at the moon. and let me
know how it goes.

level 4
nilly24
15 points·4 months ago
I have no useful input to OPs powershell question, but my goodness that comment cracked me right
up. Thanks for that!

level 4
MyOtherSide1984
5 points·4 months ago
I've never heard that expression before, but I'm still researching and seeing how this object works :)
level 5
gangculture
10 points·4 months ago
haha it’s a paraphrasing of an expression from some kurt vonnegut books, not sure if your 1984
name is reference to george orwell but you’d enjoy them books

level 6
MyOtherSide1984
4 points·4 months ago
kurt vonnegut
lol a quick search gave me some decent quotes. This makes more sense now

level 3
funkmesideways
5 points·4 months ago
If it's not a resource hog you could probably look at packaging it up and running it as a service. Done
that before. Can't remember how but sure Google does.

level 3
PMental
3 points·4 months ago
Another option is to have the script run every minute or so and log the current files to a file, then compare the current run with the logged one. Perhaps save the file listing as an object using Export-Clixml and import it for comparison using Import-Clixml.
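A minimal sketch of that approach, assuming a scheduled run every minute or so (the paths and snapshot file name are placeholders):

# Compare the current folder listing against the snapshot saved by the previous run.
$watchPath    = 'C:\Watched\Folder'        # placeholder
$snapshotFile = 'C:\Watched\snapshot.xml'  # placeholder

$current  = @(Get-ChildItem -Path $watchPath -File | Select-Object FullName, LastWriteTime)
$previous = @()
if (Test-Path $snapshotFile) {
    $previous = @(Import-Clixml -Path $snapshotFile)
}

# Anything in the current listing that wasn't in the previous one is treated as new.
$newFiles = $current | Where-Object { $_.FullName -notin $previous.FullName }
foreach ($file in $newFiles) {
    Write-Output "New file: $($file.FullName)"   # react to the new file here
}

# Save the current listing for the next scheduled run.
Export-Clixml -Path $snapshotFile -InputObject $current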

level 4
MyOtherSide1984
5 points·4 months ago
This seems like the route I'll have to go :/

level 3
DblDeuce22
3 points·4 months ago
I had a while loop going for over a day and there was no memory leak, was a constant 200MB or
something iirc. So I guess it would depend on the optimization of the watcher program you
mentioned, and what all the script is doing, would be my guess.

level 4
MyOtherSide1984
3 points·4 months ago
That's intriguing to know. Did you have to leave PS up?

level 5
tocano
3 points·4 months ago
Look into Start-Job / Get-Job. It's easy to run a script with a while ($true) loop that checks for changes and then pauses for __ seconds before the next iteration, and you don't have to keep an unusable console open.
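A rough sketch of that pattern (the watched path and the 30-second interval are just placeholders):

# Run the polling loop in a background job so no interactive console has to stay open.
$job = Start-Job -Name 'FolderWatcher' -ScriptBlock {
    while ($true) {
        # Check the folder for recent arrivals here, e.g.:
        Get-ChildItem -Path 'C:\Watched\Folder' -File |
            Where-Object { $_.CreationTime -gt (Get-Date).AddSeconds(-30) } |
            ForEach-Object { "New file: $($_.FullName)" }
        Start-Sleep -Seconds 30
    }
}

Get-Job -Name 'FolderWatcher'        # confirm the job is still running
Receive-Job -Name 'FolderWatcher'    # pull any output it has produced so far
# Stop-Job -Name 'FolderWatcher'; Remove-Job -Name 'FolderWatcher'   # clean up when done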

level 6
MyOtherSide1984
1 point·3 months ago
Wow, I forgot about jobs lol. 20,000 commands and I can't seem to remember them all ;P
Could you break down Jobs a bit more. Where do they run? How do they start/end? Do they ever
expire or break?

level 7
tocano
3 points·3 months ago
You might have to dive deeper than I have and understand how Runspaces work. But I was able to
create a scheduled task to run on startup that starts a background job which runs a persistent loop.
This allows the sched task job to complete while the loop process continues.
Because of what I was doing in the loop it would "break" occasionally after running for extended
periods of time (not a fault with PSJobs). So I added another Trigger to the scheduled task to run
once a week, kill the old loop process and create a new one.
This seems to work well for my need and it would seem to be an (albeit inelegant) option for your
need as well.

level 8
MyOtherSide1984
2 points·3 months ago
If it ain't broke, don't fix it :P. This wouldn't be an entirely final solution, but would be an acceptable
option for as long as I'm employed and no one else wants to take on the task. Since I've been
designated for this fix, I think I can just do w/e and if I have to explain it, I can, and I'll document it
well. This might be how I end up doing it tbh

level 9
tocano
2 points·3 months ago
Good luck!

level 5
ericm272
2 points·4 months ago
If what he's saying is correct - you could create a startup task to launch the script, then let it ride in
the loop. No need to run a PS prompt.

level 5
DblDeuce22
1 point·3 months ago
Yea, I left it minimized, no issues.

level 3
Awesome_Guy_Rain
1 point·4 months ago
I see what you're saying... but that's kinda where the transition from a script to a program/tool comes into play.
If you could do both, then you'd code a tool to do it and run that, not a script that runs indefinitely in its session.
level 4
MyOtherSide1984
1 point·3 months ago
To be completely honest, this IS a tool that exists, but my company would rather me re-engineer it
over paying for it :P. What would be a good starting point for figuring out how to make a script a
program?

level 3
j0hnnyrico
1 point·3 months ago
As per my understanding, the FileSystemWatcher class works in another way: it watches the Windows API for opened files. Hence, if you run it once per day you'll get only the results it finds on that one instance. So to respond to OP's question: yes, you can leave it running; at some point it will either crash the PowerShell session or who knows what. I was looking into a solution like that, but meh, too much testing. It depends on your purpose: whether you want instant alerts and whether they are essential or not.

level 1
anon_a_mouse2
9 points·4 months ago
You could run the script as a service: https://4sysops.com/archives/how-to-run-a-powershell-script-as-a-windows-service/

level 2
slvrmark4
2 points·4 months ago
I've used this before with file system watcher. Setup a jump sftp server. Counts pages and scans
before sending them on.

level 3
MyOtherSide1984
1 point·3 months ago
Any issues with crashing?
level 4
slvrmark4
1 point·3 months ago
I have a scheduled task to restart the service once a week. Never had trouble with it freezing.

level 1
timsstuff
11 points·4 months ago
The problem is you have to have a user session for it to run. I've never tried installing a PowerShell script as a Windows service, but I have created Windows services in .NET. I would look into that instead.

level 2
MyOtherSide1984
4 points·4 months ago
I know nothing about .net, but my coworker mentioned using a service instead as well, and the
filesystemwatcher is in C++ from what I can tell, so it should be pretty easy to put it all together. It
may be more work than it's worth though, we'll see

level 3
timsstuff
3 points·4 months ago
You can just download Visual Studio 2019 Community Edition for free and the Windows Service
project is under Visual C#, Windows Desktop. C# is a direct descendant of C++ so the code should
port pretty easily.
If I were you, I would start with a Console App in the same section and test it out. Once it works there, you can take the same code and apply it to the Windows Service project. It's much easier to debug and write to the console with a Console App; with a Windows service you would need to write to the Event Log or a text file or something, since there's no UI at all.

level 4
nightbladeofmalice
2 points·4 months ago
You should note here that VS Community is only available for 5 people within an organization with
less than 250 computers and $1m in revenue. If your workplace does not satisfy the above
conditions you can go for VS 2017 Express instead, which offers basically unlimited commercial use.

level 4
Collekt
1 point·4 months ago
I'm a noob but was just watching a video about creating Windows services using the Visual Studio
project. It said you can run it from within Visual Studio and it will give you a console for exactly this
reason, to test before you deploy it to run in the background. Do you know if this is the case?

level 5
alexuusr
1 point·4 months ago
I don't think you can directly test a service project in Visual Studio, at least I couldn't figure it out and
my experienced developer coworker said it wasn't possible.
What I did was create all the logic in a console app project as in the reply above, iron out the bugs, etc., then just copy the code into a Windows service project.

level 5
timsstuff
1 point·3 months ago
You can debug and watch any error messages but I would start it with a console project then just
copy the code into the service project when you're happy with it, much easier.

level 6
Collekt
1 point·3 months ago
Ah ok, cool. Thanks! :)

level 3
dathar
2 points·4 months ago
We've used FileSystemWatcher for both creation and modification events for smaller-scale temporary projects. It is run on startup via a scheduled task running as a service account, although it can also run as any type of account that has access to the thing you're watching and what you're actioning. One use was that if a file was modified or created, it would change the XML file for an internal Chocolatey package and build a new one, then send it upstream to an internal Chocolatey server. We haven't tried to run it for prolonged times (5+ days), but it does work.

level 4
MyOtherSide1984
1 point·3 months ago
I'm probably building EXACTLY what you built. I'm wanting to import new files from PatchMyPC into
SCCM for testing purposes. Cut out the first few steps and automate a handful of the task without
paying for the product. I don't want to ask for your solution, but if you have any guidance, I'd be
appreciative! :)

level 5
dathar
1 point·3 months ago
I think I stripped out all the crazy bits but basically we have some very large files that we have to
distribute to a ton of boxes. We just use an in-house torrent for the share and Chocolatey for
versioning. Puppet handles the Chocolatey aspect with its own package manager wanting the latest
version. Basically if there's a new torrent file dropped into a specific folder, it'll modify the nuspec file,
insert in a new version number, build it and send it to the package repository.
$scriptpath = Split-Path -Parent $MyInvocation.MyCommand.Definition

$TorrentFileFolderPath = "D:\Torrents"
$ChocoTemplatePath = "D:\Torrents\ChocoBase"

$filesystemwatcher = New-Object System.IO.FileSystemWatcher
$filesystemwatcher.Path = $TorrentFileFolderPath
$filesystemwatcher.IncludeSubdirectories = $false
$filesystemwatcher.EnableRaisingEvents = $true
$filesystemwatcher.Filter = "*.torrent"
$filesystemwatcher.NotifyFilter = [System.IO.NotifyFilters]::LastWrite -bor [System.IO.NotifyFilters]::FileName

# Action that runs whenever a new .torrent file is created in the watched folder.
$builderAction = {
    $torrentfile = Get-Item ($eventArgs.FullPath)
    $torrentitem = ($torrentfile.BaseName).Replace(".****","").Replace(".****","")

    $choconugetname = "***********-$torrentitem"
    $structure = Join-Path -Path $ChocoTemplatePath -ChildPath $choconugetname
    $nuspeccontent = [xml](Get-Content -Path (Join-Path -Path $structure -ChildPath "$choconugetname.nuspec"))
    $nuspeccontent.package.metadata.version = (Get-Date).ToString("yy.MM.dd.HHmmss")
    $nuspeccontent.Save((Join-Path -Path $structure -ChildPath "$choconugetname.nuspec"))
    Copy-Item -Path $torrentfile.FullName -Destination (Join-Path -Path $structure -ChildPath "tools\$($torrentfile.Name)") -Force -Verbose
    & choco pack "$structure\$choconugetname.nuspec"
}

$handlers = . {
    Register-ObjectEvent $filesystemwatcher "Created" -SourceIdentifier TorrentMaker -Action $builderAction
}

try
{
    do
    {
        Wait-Event -Timeout 1
    } while ($true)
}
finally
{
    Unregister-Event -SourceIdentifier TorrentMaker
    $handlers | Remove-Job
    $filesystemwatcher.EnableRaisingEvents = $false
    $filesystemwatcher.Dispose()
}

level 6
MyOtherSide1984
1 point·3 months ago
That doesn't seem to be exactly what I'm building, but still very cool! IDK what chocolatey all offers,
but they seem quite exhaustive and outside of the normal bounds of a corporate world. This is
roughly what I have, just unique to chocolatey. What you do have though is the trigger method, I'll
take a deeper look into that and strip out your action for my own and test it!

level 7
dathar
1 point·3 months ago
Chocolatey gives us the following:
1. Package repository/source of truth
2. Versioning
3. An installer of sorts that you can use, or trash the entire prebuilt installer and just write
straight PowerShell code. It is usually elevated so you have access to everything not user-
specific
4. Something that can be referenced and called upon by many things. Puppet can ensure
that the package is on, if it should keep the latest version or a really specific one, etc.
It is handy :)

level 8
MyOtherSide1984
1 point·3 months ago
So you're using Chocolatey as a repository and Puppet as a packager? Are you paying for both?
I dislike command line interfacing, so I strayed away from Chocolatey pretty quickly. It also didn't
have everything I wanted (surprisingly), but I may have to give it another look.

level 9
dathar
1 point·3 months ago
Using Puppet as a state config manager. I basically have a list of servers that I want to have specific
functions and content. They should have a list of things already installed and kept up to date. Puppet
helps out with that.
Puppet abstracts out the command line part so you don't even see it - Puppet is running it as its own
account or as Local System (depending how its agent service is installed).
In my line of work, I have temporary events with a lot of computers that demo software products.
Sometimes it is 30-60 computers at a small hotel or bldg. Sometimes it is 2000+ systems with
different software all throughout an entire convention center. It gets rather complex so we automate
out what we can so we don't end up putting all the work on the techs and QA guys :)

level 10
MyOtherSide1984
1 point·3 months ago
Ahhh okay this is making much more sense now, I can see why you're doing what you do. I was
missing some key components. We have different applications, but want the same end goal basics.
You've been very helpful! I am intrigued to hear of the different configurations that people use! It's
like a puzzle that doesn't have a right or wrong answer, it just depends on how you view it

level 3
brandeded
3 points·4 months ago
I was going to say, use c# and write a service. The learning curve exists, but I look at PowerShell as
the c# shell.

level 4
MyOtherSide1984
1 point·3 months ago
Intriguing honestly, learning C# doesn't seem like a bad career move ;P. Just lots of work

level 5
brandeded
2 points·3 months ago
Less work than you think.

level 6
MyOtherSide1984
1 point·3 months ago
Enticing ;)

level 2
motsanciens
3 points·4 months ago
This article is a pretty excellent write-up on creating a Windows service in .NET Core, and it happens to use FileSystemWatcher as part of its example: https://codeburst.io/create-a-windows-service-app-in-net-core-3-0-5ecb29fb5ad0

level 3
MyOtherSide1984
1 point·3 months ago
Thanks! :)

level 2
Fallingdamage
1 point·4 months ago
If they made it a service..

level 1
SeeminglyScience
9 points·4 months ago
You can, but PowerShell kinda sucks at running forever.
Also may or may not be relevant for what you're doing, but FileSystemWatcher isn't perfectly
reliable. If it's imperative that every single possible event fires, don't use it.
What's the scale? How many files do you expect to be added, and at what frequency? How many files will sit forever? Do actions really need to fire exactly when a file is created, or can it be 10 minutes later? I ask because it's generally better to just poll the directory, but that obviously doesn't scale well.

level 2
anomalous_cowherd
5 points·4 months ago
You also need to watch for files that haven't finished writing yet, and decide what to do about files that stay open for a very long time.

level 3
SeeminglyScience
3 points·4 months ago
Yeah you definitely need a while loop with a try catch for sharing violations. Probably need it even if
you're doing simple polling, but much less likely to happen.
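For example, a sketch of that retry loop (the path, retry count, and delay are arbitrary placeholders):

# Try to open a file that may still be locked by its writer, retrying on sharing violations.
$path = 'C:\Watched\Folder\incoming.msi'   # placeholder
$attempts = 0
while ($true) {
    try {
        # Request exclusive access; this throws while another process still has the file open.
        $stream = [System.IO.File]::Open($path, 'Open', 'Read', 'None')
        $stream.Close()
        break
    }
    catch [System.IO.IOException] {
        $attempts++
        if ($attempts -ge 10) { throw }    # give up eventually instead of spinning forever
        Start-Sleep -Seconds 5
    }
}
# Safe to copy/process the file here.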

level 2
MyOtherSide1984
3 points·4 months ago
It's not mission critical and I'd anticipate a maximum of 10 writes per day. The rest of my script cleans up old files and logs so that the directory stays clean (+-21 days). I noted that FileSystemWatcher isn't reliable if you're working with files that have temporary saves and such, but I'm only dealing with EXEs and MSIs. I honestly think you answered my question: PowerShell just isn't meant for running forever. I'm leaning towards just doing scheduled tasks instead, but can you expand on your last sentence about polling the directory?

level 3
SeeminglyScience
6 points·4 months ago
I noted that filesystemwatcher isn't reliable if you were working with files that offered temporary
saves and such, but I'm only dealing with exe's and MSI's
No it's not reliable in that sometimes it just won't fire. Usually that has to do with too many events
being triggered at the same time, but it's not impossible otherwise. It's a fantastic API when you're
doing something that has a separate refresh mechanism like checking a settings file for changes (so
if it misses an event, you can just reload the app or make another change).
but can you define your last sentence about polling the directory?
Yeah, like Get-ChildItem C:\path\to\directory\with\changes . If you don't need to detect
changes between invocations (which sounds like the case here), then that's really all you need to do.

level 4
MyOtherSide1984
1 point·3 months ago
So you mean storing an older version of the directory into a variable and then comparing it with a
new variable to find difference and act on those changes?
level 5
SeeminglyScience
1 point·3 months ago
If you are cleaning up the folder after each run, you don't need to store or compare anything.
If you aren't doing that, you'd need to store the results of the last run in a file and compare against
that (you won't be able to persist variables between invocations).

level 6
MyOtherSide1984
1 point·3 months ago
Okay, that makes sense. My action involves copying the file over. I could easily remove the file, but diagnostics would be much easier if the file remains. I'll check out exporting the file names and such and working with that. It's partially already doing that, so it shouldn't be hard to implement.

level 2
hire_a_wookie
2 points·4 months ago
Can just use a while(1) and include a sleep in the loop...

level 3
SeeminglyScience
4 points·4 months ago
Yeah for sure. The problem isn't that it's syntactically or conceptually difficult. It's more that it's
incredibly easy to create memory leaks and generally degrade performance because of how a lot of
things work behind the scenes. A whole lot of things are cached in ways that make clean up difficult
or impossible.
That's not to say you can't write a script that can run forever without running into those issues. I say it
sucks at running forever because it's really easy to accidentally cause performance problems, and
the only real way to find what's causing the problem is to have a boat load of knowledge about the
engine and a memory profiler.

level 4
hire_a_wookie
2 points·4 months ago
Sure, fair. It depends on how often he wants to check, I think. I run a lot of my dumb ad hoc scripts every ten minutes and they fail appropriately depending on what they are doing.

level 5
MyOtherSide1984
1 point·3 months ago
I think there's a large difference between a scheduled task and a continuously running script.
Especially with what I'm building, it creates LOTS of questions and issues, so I think the
filesystemwatcher is just a bad idea mixed with constantly running the application rather than once
every 10 minutes or so

level 1
DevinSysAdmin
3 points·4 months ago
Tell us the problem you are trying to solve with Powershell.

level 2
MyOtherSide1984
1 point·3 months ago
It's a lot bigger than what I've inquired, but essentially, I want to recreate PatchMyPC's paid
application on my own

level 3
DevinSysAdmin
1 point·3 months ago
Okay, and you’re wanting to use this personally or you work for a business with how many
computers that would use this?

level 4
MyOtherSide1984
1 point·3 months ago
I'm full time staff at one company with roughly 2000 nodes. My script runs fully in a test environment
and will never be exposed to production. It eliminates the manual process of creating the test
applications for SCCM

level 5
DevinSysAdmin
1 point·3 months ago
Can you list the applications?

level 6
MyOtherSide1984
1 point·3 months ago
Do you need an exhaustive list or just general top 10? I can provide both shortly

level 1
Sarumad
3 points·3 months ago
It is possible; I found myself in a similar situation a few years ago.
You can create your own functions to do whatever you need on created/deleted/changed/renamed events.
I used Task Scheduler to run the script on startup. This might not be the best way to do this anymore, but 4 years ago it was for my situation.
$Block = {
    Function Do-Something
    {
        param ($message, $event)

        # Do something here
    }

    $watchedFolder = "\\server\folder"
    $watcher = New-Object System.IO.FileSystemWatcher
    $watcher.Path = $watchedFolder

    Register-ObjectEvent -InputObject $watcher -EventName Created -SourceIdentifier File.Created -Action { Do-Something "Created" $event }
    Register-ObjectEvent -InputObject $watcher -EventName Deleted -SourceIdentifier File.Deleted -Action { Do-Something "Deleted" $event }
    Register-ObjectEvent -InputObject $watcher -EventName Changed -SourceIdentifier File.Changed -Action { Do-Something "Changed" $event }
    Register-ObjectEvent -InputObject $watcher -EventName Renamed -SourceIdentifier File.Renamed -Action { Do-Something "Renamed" $event }
}

$encodedBlock = [Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes($Block))

Start-Process PowerShell.exe -Verb RunAs -ArgumentList '-WindowStyle Hidden', '-NoExit', '-EncodedCommand', $encodedBlock

level 1
billy_teats
5 points·4 months ago
Powershell isn’t really meant to be used that way. You want a windows service that just tails a log
file and if a particular event happens, do a thing. C++ or c# are probably where you’ll end up

level 1
Comment deleted by user · 4 months ago
level 2
MyOtherSide1984
1 point·4 months ago
Yeah, the two things my co-worker mentioned were: how will I know if it crashes, and will it cause any memory leaks? I'm more so curious how far I can go with this concept at this point lol

level 3
lemon_tea
1 point·4 months ago
Run it. Write a second script that runs as a scheduled job that monitors the process and logs memory use and CPU use. Then you can check that file for stats until you're comfortable running it long term.
As for monitoring the health of the file monitor, access the file/dir being monitored to cause a log entry, then check the last write time on the log. If it's not recent enough, sound an alarm. This can also be a scheduled task.

level 1
_benp_
2 points·4 months ago
I've done similar things with a powershell script as a scheduled task that is run once per day and
restarted every 24 hours (in case you are worried about memory leaking). It worked just fine for me.
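A sketch of registering that kind of daily task with the ScheduledTasks module (the script path, time, and account are placeholders; this simply starts the script once a day rather than literally restarting a running instance):

# Register a task that launches the watcher script once a day.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Watch-Folder.ps1"'   # placeholder path
$trigger = New-ScheduledTaskTrigger -Daily -At '03:00'   # or -AtStartup
Register-ScheduledTask -TaskName 'FolderWatcher' -Action $action -Trigger $trigger -User 'SYSTEM'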

level 1
SomnambulicSojourner
2 points·4 months ago
I've been running Powershell as a service for a couple years now, doing exactly what you're doing. It
uses the FileSystemWatcher to watch for new files and when it sees them, it uses handbrake to
transcode them to a different format.

level 2
MyOtherSide1984
1 point·3 months ago
Are you doing this on a Plex server or something like that? It might be a little different than on an office server.

level 3
SomnambulicSojourner
1 point·3 months ago
No, it's in an enterprise environment for an internal web application.

level 1
TheGooOnTheFloor
2 points·4 months ago
It could be a scheduled task that's triggered at boot time. But I don't know about running the file
system watcher nonstop, early versions had a habit of running away with the CPU. I don't know if
that was fixed or not in more current versions.

level 1
Fallingdamage
2 points·4 months ago
Maybe some sort of infinite loop of If/then combined with a try/catch loop?

level 1
jimb2
2 points·4 months ago
I think you'd want to run this as a service. Services have more robust startup, are more isolated from
user activity like accidental termination, and can have their access rights pared for security. It's
possible to run PS as a service but it's a bit clunky. Probably better to write it in C# where there's
going to be a service template to customise.
If you need the flexibility of a PS script to easily change actions you might run a stub service that
monitors the file system and launches a script but that's messy and less secure too.

level 2
MyOtherSide1984
1 point·3 months ago
I like where you're going here since it sticks with PS in mind (which is the extent of my knowledge).
Lots of people saying C# is the way to go, I'll have to check it out.

Although less secure, it would be easy to pass it with parameters :/....idk



level 1
kcinybloc
2 points·4 months ago
Plenty of programs run as a service all the time. The question really comes down to what the most
efficient way to do that is. I admittedly have not researched this but I think something else in the
.NET framework would be better suited for this.

level 1
averburg1
2 points·4 months ago
Didn’t go through all the comments so sorry if it’s been said... you can make powershell run as a
windows service. So it can start automatically, stop, restart, etc. I used this function to monitor some
file shares on a sensitive folder. Best of luck, Andrew

level 1
128bitengine
2 points·4 months ago
Can you install it as a service? https://4sysops.com/archives/how-to-run-a-powershell-script-as-a-windows-service/

level 2
MyOtherSide1984
1 point·3 months ago
Gonna give this a look

level 1
AWDDude
2 points·4 months ago
Yes it can. I have a very similar script running 24/7 that watches for files being created on a network share, tracks how long they exist before being deleted, then sends the data to Splunk.

level 1
joerod
2 points·4 months ago
If you want to run PowerShell as a service, check out
https://nssm.cc/
I've used this with a PowerShell while loop; it works well
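For reference, the NSSM route looks roughly like this (service name, PowerShell path, and script path are assumptions):

    # Run from an elevated prompt: install the script as a Windows service, then start it
    nssm install FolderWatcher "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" "-NoProfile -ExecutionPolicy Bypass -File C:\scripts\watcher.ps1"
    nssm start FolderWatcher

    # watcher.ps1 then keeps itself alive with a simple loop:
    while ($true) {
        # ... FileSystemWatcher / polling logic here ...
        Start-Sleep -Seconds 30
    }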

level 1
pharnos
2 points·3 months ago
I wrote this years ago and it has been running ever since with no problems so far. There is a nightly
restart of the script just in case. It handles a very large media folder (hundreds of thousands of files,
100,000+ subfolders) and frequently added files that need to be synced to multiple destinations.
There might be some pieces you can use? Basically it uses FSW to watch for new files created on
Server01, and then builds RoboCopy commands to run that will copy the file to multiple destination
servers (i.e. Server02 and Server03).
It is no doubt really poorly written (always learning PowerShell), inefficient and laughable code - but
it has so far been working great for my application. I should really go back and refactor it at some
point :/
https://github.com/borough11/Folder-MirrorToMultipleLocations

Monitor master folder (including subfolders) and sync any changes to multiple remote shares using
RoboCopy.
Sets up a FileSystemWatcher on "mediafolder" that monitors the directory and all subdirectories for the created/changed/renamed/deleted file events only (not directory events). The check happens every x seconds (set in the variables below). When an event is noticed, it is processed, and if it looks OK a corresponding RoboCopy command is added to an array (this is so that if 100 files are copied in, we don't run a RoboCopy mirror 100 times on the same folder unnecessarily). Every x cycles of the event monitoring loop (also set in the variables below), the array is checked, and if it has any rows they are executed as RoboCopy commands. This mirrors "mediafolder" to all the remote folders and writes to the log file.
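Heavily simplified, that pattern looks something like this (paths, the flush interval, and the robocopy switches are assumptions; the real script is at the GitHub link above):

    $source       = '\\Server01\Media'
    $destinations = '\\Server02\Media', '\\Server03\Media'

    $watcher = New-Object System.IO.FileSystemWatcher $source
    $watcher.IncludeSubdirectories = $true
    $watcher.EnableRaisingEvents = $true
    # No -Action: let the events queue up so they can be batched
    Register-ObjectEvent $watcher Created -SourceIdentifier MediaCreated

    while ($true) {
        Start-Sleep -Seconds 300    # flush the queue every 5 minutes (assumed)
        $events = @(Get-Event | Where-Object SourceIdentifier -eq 'MediaCreated')
        if (-not $events) { continue }
        # Collapse 100 new files in one folder into a single mirror of that folder
        $folders = $events | ForEach-Object { Split-Path $_.SourceEventArgs.FullPath -Parent } | Sort-Object -Unique
        $events  | ForEach-Object { Remove-Event -EventIdentifier $_.EventIdentifier }
        foreach ($folder in $folders) {
            foreach ($dest in $destinations) {
                robocopy $folder ($folder.Replace($source, $dest)) /MIR /R:1 /W:5 /LOG+:C:\logs\mirror.log
            }
        }
    }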

level 1
AberonTheFallen
3 points·4 months ago
We ended up just making a script to watch a specific directory, compiled it as a Windows service
exe, and let it go. I made it so it has a config file that can change the interval, program/script it runs
when it finds a file, the file pattern match it's looking for, directory, etc. So far it's been working pretty
slick

level 2
MyOtherSide1984
1 point·3 months ago
This seems like a great solution, what path did you take from PS to EXE?

level 3
AberonTheFallen
1 point·3 months ago
I use the PowerShell Pro Tools extension in VS Code; another one of our guys has PowerShell Studio. Pro Tools comes with a free one-week or one-month trial, something like that. It's $100, but the ability to easily build a GUI and compile an exe that others can understand is worth it for me. PowerShell Studio was like $400 last I checked

level 4
MyOtherSide1984
1 point·3 months ago
I believe my co-worker got Powershell Studio licensed at work for like $250 during a sale a few
months back. He told me to request it, but I legitimately cannot justify my use case for it. I'd use it to
learn, and that's about the extent of it. I'll see if he'd be open to implementing these options as
services. I imagine there's an inherent benefit to a service over a scheduled task?

level 5
AberonTheFallen
1 point·3 months ago
Perhaps? The place I'm working at now mandates not using scheduled tasks for whatever reason, so this is the solution we used as a way around that. I've used scheduled tasks before for scripts and they worked just as well as this does

level 1
klik_klik1236
1 point·4 months ago
It sounds like you got this figured out but if I wanted to do something like that I'd do it the "redneck
way". Set up a machine that's not going to turn off and run a script that checks that file. I'd put the
script inside a do-until and have it do it until it can ping a non-existent IP!
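In that spirit, a rough sketch of the polling loop (the share path, filter, and interval are placeholders; 192.0.2.1 is a reserved test address that will never answer, so the loop runs forever):

    do {
        $recent = Get-ChildItem '\\server\share' -Filter *.exe -Recurse -File |
            Where-Object { $_.LastWriteTime -gt (Get-Date).AddMinutes(-5) }
        if ($recent) {
            # ... act on the new files ...
        }
        Start-Sleep -Seconds 300
    } until (Test-Connection -ComputerName 192.0.2.1 -Count 1 -Quiet)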

level 2
MyOtherSide1984
2 points·3 months ago
Lol exactly what I was thinking! Doesn't have to be on the server, just needs the same access to the
server. I could put it on a dummy headless laptop (no screen, w/e lol) and just run it there. If it
crashes, who cares, if it takes up CPU, just restart :P. I'm with you on the red-neck shit, if it finishes
the job, who cares

level 1
slvrmark4
1 point·4 months ago
I have used nssm and run PowerShell as a service. It utilized the FileSystemWatcher to monitor an SFTP folder. One thing that did hold me up: scanners will sometimes send the file as a temp file and then rename it when the transfer is complete, and I was monitoring for new files, not renamed files. FileSystemWatcher can also sometimes trigger too fast if you are transferring larger files.
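One way around the temp-file/rename problem (a sketch; the folder, filter, and wait logic are assumptions) is to subscribe to Renamed as well as Created, and to wait for the uploader to release its handle before touching the file:

    $watcher = New-Object System.IO.FileSystemWatcher '\\server\sftp-drop', '*.pdf'
    $watcher.EnableRaisingEvents = $true

    $handler = {
        $path = $Event.SourceEventArgs.FullPath
        # Retry until the file can be opened exclusively, i.e. the transfer is finished
        do {
            Start-Sleep -Seconds 2
            try {
                $stream = [System.IO.File]::Open($path, 'Open', 'Read', 'None')
                $stream.Close()
                $locked = $false
            } catch {
                $locked = $true
            }
        } while ($locked)
        # ... process $path ...
    }

    Register-ObjectEvent $watcher Created -SourceIdentifier DropCreated -Action $handler
    Register-ObjectEvent $watcher Renamed -SourceIdentifier DropRenamed -Action $handler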

level 2
MyOtherSide1984
1 point·3 months ago
This is an issue, but I think I bypass it by specifying a filter on *.exe files, which is the only one I'm
worried about for now

level 1
wavvo
1 point·4 months ago
Depending on what you are trying to do with the file system, you could use ms flow.
https://flow.microsoft.com/en-us/connectors/shared_filesystem/file-system/

level 2
MyOtherSide1984
1 point·3 months ago
This also intrigued me, but I wasn't too sure how to connect it with the server AND Teams, so I may revisit this with a more knowledgeable team member. The issue is that it'd have to be internet facing

level 3
wavvo
1 point·3 months ago
It doesn't need to be internet facing. You install the connector on a server that can talk to the
machine you want to check and the internet. All calls are outbound over 443. No inbound ports need
to be opened.

level 1
Lu12k3r
1 point·4 months ago
You could do a PS script to run robocopy, or just run robocopy with the /MON switch to monitor the directory. I don't know how much resource it'll consume, but it's worth a shot.
https://serverfault.com/questions/55733/the-job-and-monitoring-options-of-robocopy/723590#723590
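For reference, the monitoring switches look something like this (paths and thresholds are assumptions):

    # Re-run the copy whenever at least 1 change is seen, checking at most every 5 minutes; runs until killed
    robocopy \\Server01\Drop \\Server02\Drop /E /MON:1 /MOT:5 /R:1 /W:5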

level 2
MyOtherSide1984
1 point·3 months ago
Is this C#?

level 3
Lu12k3r
1 point·3 months ago
Sorry I’m not familiar with FileSystemWatcher, I just thought I’d offer up a suggestion with built-in
tools.

level 1
phileat
1 point·4 months ago
You could use Saltstack's Beacon and Reactors:
https://docs.saltstack.com/en/latest/topics/beacons/

level 1
purple8jello
1 point·4 months ago
Logic app is a good tool

level 1
jagallout
1 point·4 months ago
You might consider PowerShell's Desired State Configuration (DSC). It can watch a folder, and you can use a custom script to handle the action based on a condition. https://docs.microsoft.com/en-us/powershell/scripting/dsc/reference/resources/windows/fileresource?view=powershell-7
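A minimal sketch of the File resource from that doc (paths are assumptions); it keeps a folder in sync with a known source rather than reacting to arbitrary new files:

    Configuration MirrorDropFolder {
        Node 'localhost' {
            File DropFolder {
                Ensure          = 'Present'
                Type            = 'Directory'
                Recurse         = $true
                MatchSource     = $true
                SourcePath      = '\\Server01\Master'   # assumed
                DestinationPath = 'C:\Drop'             # assumed
            }
        }
    }
    MirrorDropFolder -OutputPath C:\dsc
    Start-DscConfiguration -Path C:\dsc -Wait -Verbose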

level 2
MyOtherSide1984
1 point·3 months ago
This appears to be slightly limited in that I have to KNOW what will be in there. I'm working with unexpected files, so I'm not positive this is the exact answer

level 1
iceph03nix
1 point·4 months ago
I've built stuff like this in PS and C#/.NET, and .NET is definitely the better tool for the job, but PS can get it done.
PS just seems to require more babysitting when set up for long-term use through scheduled tasks.
And if you're into PS, C#/.NET isn't much of a jump. You're already using a lot of the same tools, just wrapped in a different package.
https://youtu.be/PzrTiz_NRKA this is a good beginner run down of the worker services you can build.

level 2
MyOtherSide1984
1 point·3 months ago
<3 I appreciate that link! Gonna give it a look for sure. Lots of answers saying C# and .net are my
next move, and I really would love to learn them. It's great to hear that they are similar

level 1
mike-shore
1 point·4 months ago
Yes this can be done, and in real time. I had one set up to watch a folder that I didn't want people making changes to. If anyone changed or deleted a file, FSW would restore the original copy from a "master" folder. If someone added a file, FSW would delete it. This was a while ago, and I can't remember for sure if it ran as a system, but I believe so, and it didn't take much overhead.

level 1
ishboo3002
1 point·4 months ago
Why not just turn on file auditing and then trigger on the event?

level 2
MyOtherSide1984
1 point·3 months ago
Boss said no :P.

level 1
markdmac
1 point·4 months ago
WMI has the ability to watch a folder and take action and will remain persistent even after a reboot
with no need for a scheduled task or to keep PowerShell running on a box. Microsoft author Ed
Wilson has a script for this in his WMI book.
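The temporary (session-bound) version of that idea looks roughly like this; the variant that survives reboots registers a permanent WMI event filter/consumer pair instead, and the folder path here is an assumption:

    # Poll WMI every 10 seconds for new files appearing in c:\drop
    $query = "SELECT * FROM __InstanceCreationEvent WITHIN 10 WHERE TargetInstance ISA 'CIM_DirectoryContainsFile' AND TargetInstance.GroupComponent = 'Win32_Directory.Name=""c:\\\\drop""'"
    Register-WmiEvent -Query $query -SourceIdentifier NewFileInDrop -Action {
        # PartComponent is the CIM_DataFile reference for the file that appeared
        Write-Host $Event.SourceEventArgs.NewEvent.TargetInstance.PartComponent
    }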

level 2
MyOtherSide1984
1 point·3 months ago
Oh man, that has me VERY interested. Do you know if it uses WMI or a CIM object?

level 3
markdmac
1 point·3 months ago
Sorry I do not.

level 1
Shipdits
1 point·4 months ago
I feel like you might be able to use FSRM for this.

level 1
moldykobold
1 point·4 months ago
I do this to trigger the restarting of certain programs in our environment as we are operating from
Ansible which connects through a different session than the one that you’d see if you just RDP’d into
the system.
There is obviously a more ideal way to do this, but it works so, meh.

level 1
kf5ydu
1 point·4 months ago
https://ironmansoftware.com/universal-automation/ looks pretty cool. I've never tried it before but I
have powershell Pro tools and I like that a lot. If I had the money to spend I would probably get that.
A good alternative would be task scheduler or if you don't want to use that
perhaps https://www.splinterware.com/products/scheduler.html

level 1
Iguanas14
1 point·4 months ago
Yes, you can do it this way. I used this to have Microsoft Flow update an Excel file and used that as a trigger to fire a PowerShell script. You can set up an autologon account and the script will run using that. It was more of a proof of concept, though; I'm not sure I'd want to use it for anything production. There's probably a much better solution out there if you want to give details of what you're trying to accomplish.

level 1
melbourne_giant
1 point·4 months ago
Create a watcher script and an action script.
On event fire, create a job; the job calls the action script; on success, clear the job; on failure, do stuff.
This ensures that the watcher script does 2 tasks and only 2.
The action script is where everything could go wrong.
And FYI to those asking: you don't use a loop in some instances because it'll either read a file too soon (and cause locking issues) or read a file more than once across subsequent executions.
The FSW will only fire an event when the write is complete and the associated handle lets go of the file in question.
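A sketch of that watcher/action split (the folder and script path are placeholders): the watcher only hands files off, while the action script does the real work and can fail without taking the watcher down.

    $watcher = New-Object System.IO.FileSystemWatcher 'D:\Drop'
    $watcher.EnableRaisingEvents = $true

    Register-ObjectEvent $watcher Created -SourceIdentifier DropWatcher -Action {
        # Hand the new file to a separate action script as a background job
        Start-Job -FilePath 'C:\scripts\action.ps1' -ArgumentList $Event.SourceEventArgs.FullPath | Out-Null
    }

    # Reap finished jobs periodically; failed jobs could be logged or retried here
    while ($true) {
        Start-Sleep -Seconds 60
        Get-Job -State Completed | Remove-Job
    }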

level 1
Cleokwatro
1 point·4 months ago
How big is the folder? I'd imagine just calculating the hashes on the files? But it may be a different story if there are tons of files in the folder.

level 1
get-postanote
1 point·4 months ago
Set up your script to run as a service.
There are several articles on how to do that on the web.

level 1
poshftw
1 point·3 months ago
"Is PowerShell something that can run 24/7 on a host machine in the background so that my file system watcher is constantly active"
Absolutely.
"am I completely using the wrong tool"
Not quite. You could do that with FSRM/audit events, but in the end it boils down to what is better FOR YOUR CASE.
Some notes:
It is totally okay to run PS for as long as you want, with the usual programming caveats that aren't specific to PS: don't have leaking variables, don't have add-only variables, and clean up variables after loops.
Also, you can (and frankly, you should) restart your script every day/week. That helps greatly with memory (if for some reason you end up using much more than you should) and with confidence that the script is really running.
On how to do that: just use the Task Scheduler. It has logs and a customizable schedule with restart options. I have had a bunch of scripts running on it for years now and never had a problem with it, other than figuring out all the nuances the first time.
Some people suggest using NSSM, but I would advise against it, because it is only useful if you have some monitoring solution (e.g. Zabbix) which will trigger on a dead/stopped service.
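A sketch of that Task Scheduler setup with the ScheduledTasks cmdlets (task name, script path, and times are assumptions):

    $action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\scripts\watcher.ps1'
    $trigger = New-ScheduledTaskTrigger -Daily -At 5am

    # Stop after 23h so it restarts fresh each day, retry on failure, run missed starts when the box comes back, and never start a second copy
    $settings = New-ScheduledTaskSettingsSet -ExecutionTimeLimit (New-TimeSpan -Hours 23) -RestartCount 3 -RestartInterval (New-TimeSpan -Minutes 15) -StartWhenAvailable -MultipleInstances IgnoreNew

    Register-ScheduledTask -TaskName 'FolderWatcher' -Action $action -Trigger $trigger -Settings $settings -User 'SYSTEM' -RunLevel Highest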

level 2
MyOtherSide1984
1 point·3 months ago
Back to basics: how could I have my script figure out whether files have changed in the last 5-10 minutes, and only act on those? Would I just use creation time?

level 3
poshftw
1 point·3 months ago
Would I just use creation time?
If it is reliable (there is a difference between CreationTime and LastWriteTime; for reference, copy some old files to a new place and gci | select * on them), why not?
I wrote a script years ago for almost exactly this purpose, and it worked just fine.
Also, does the action on the new/modified files need to happen ASAP, or can it wait 15 minutes? If the former, you're better off using a filewatcher; if the latter, you'll be fine with a task on a 15-minute schedule.
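If the 15-minute schedule is enough, the check itself can be this small (share path, filter, and window are assumptions):

    # Runs from the scheduled task every 15 minutes; picks up anything written since the last run
    $cutoff = (Get-Date).AddMinutes(-15)
    Get-ChildItem '\\server\share' -Filter *.exe -Recurse -File |
        Where-Object { $_.LastWriteTime -gt $cutoff } |
        ForEach-Object {
            # ... act on $_.FullName ...
        }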

level 4
MyOtherSide1984
1 point·3 months ago
Can definitely wait 15 minutes, I was more so curious about the abilities of FSW when making this
post. I wanted it to be set it and forget it, but the scheduled tasks would end up doing the same thing
in the end.

level 5
poshftw
1 point·3 months ago
FSW works fine, but then you need to run it constantly. For such a low workload it doesn't make sense to bother, though you already wrote it and it works...
If you didn't have anything, I would say go with a scheduled task. For now... I guess I missed it: how do you run it now? Interactively? If so, you can easily move it to a scheduled task: run it every day at (e.g.) 5am, restart every 15 minutes (in case the server is rebooted in that window), do not launch if the task is already running, and stop the task after 23h (so it gets restarted and won't end up eating all memory if something goes wrong).

level 6
MyOtherSide1984
2 points·3 months ago
Currently it doesn't actually have a set "play" method. Basically I can either remove the action from the FSW and set it to a loop, or a task or job or whatever, so I'm gathering ideas on what I could use as a trigger as well as formulating how it'll find the changes. I honestly have both figured out, but I need the longevity question resolved. This is very helpful!

level 1
Exodus85
0 points·4 months ago
Hi, I handled it by creating a scheduled task pointed at my script. Wrap your check in a while ($true) loop and have fun.
Don't forget to fine-tune the scheduled task to catch reboots etc.
