CODE 2-2023 Web
MAR/APR 2023
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $8.95 Can $11.95
Introduction to Snowflake
SECURITY DEFENSES
IDENTIFY POTENTIAL SECURITY PROBLEMS
CODE Security offers application security reviews to help organizations uncover security vulnerabilities.
Let us help you identify critical vulnerabilities in complex applications and application architectures and
mitigate problems before cybercrime impacts your business.
• CODE Security offers three types of security testing for many applications and services,
including simulating a real-world attack scenario. We can test web applications, internal applications,
mobile applications and do code audits and test hardware. We can even reverse engineer software
if you no longer have the source code.
• After our security audit, our CODE Security experts provide a technical report including all identified vulnerabilities along with a severity rating according to industry standards.
Let 2023 be the year you mitigate cyber security risk in your applications.
CODE Security experts can help you identify and correct security vulnerabilities
in your products, services and your application’s architecture. Additionally,
CODE Training’s hands-on Secure Coding for Developers training courses educates
developers on how to write secure and robust applications.
Contact us today for a free consultation and details about our services.
codemag.com/security
832-717-4445 ext. 9 • info@codemag.com
TABLE OF CONTENTS
Features
Features

8 Chrome Debugging Tips
Oh, those other browsers are okay, but they have their limits. Sahil takes a look at debugging HTML pages and JavaScript in Chrome.
Sahil Malik

14 Create Your Own SQL Compare Utility Using GetSchema()
Paul shows you how to retrieve and store metadata using the GetSchema() method. It’s surprisingly easy.
Paul D. Sheriff

27 Some Overlooked EF Core 7 Changes and Improvements
You were probably swept up in the exciting new changes to Entity Framework that came out near the end of 2022. Julie told you about the big changes and now she’s going to show you how even the little changes are terrific.
Julie Lerman

38 Introduction to Snowflake
Cloud databases are both the latest thing and a great way to deal with legacy database services. Rod introduces you to a very clever tool, Snowflake, and shows you that it’s the next best thing.
Rod Paddock

60 Building an Event-Driven .NET Core App with Dapr in .NET 7 Core
If you need to be able to create event-driven applications quickly and efficiently, Joydip suggests exploring Distributed Application Runtime (Dapr).
Joydip Kanjilal

69 Architects: The Case for Software Leaders
Despite our best efforts, relatively few development projects are a raging success. Jeffrey talks about the various roles—even non-technical ones—that every project should have if there’s any hope of success.
Jeffrey Palermo

Departments

6 Editorial

US subscriptions are US $29.99 for one year. Subscriptions outside the US pay $50.99 USD. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards are accepted. Back issues are available. For subscription information, send e-mail to subscriptions@codemag.com or contact Customer Service at 832-717-4445 ext. 9.
Subscribe online at www.codemag.com
CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.
POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.
CUSTOM SOFTWARE DEVELOPMENT
STAFFING TRAINING/MENTORING SECURITY
Contact us today for a complimentary one hour tech consultation. No strings. No commitment. Just CODE.
codemag.com/code
832-717-4445 ext. 9 • info@codemag.com
ONLINE QUICK ID 2303021
Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.

etc. that only Safari offers. But when it comes to debugging stuff, I despise Safari dev tools. I only use Safari dev tools when I absolutely must. The dev tools I still prefer to use are those built into the Chrome browser. And, I understand, Edge dev tools are now also quite similar. Let me put it this way: I’m happy to use either Edge or Chrome, but my muscle memory still takes me to Chrome.

In this article, I’d like to share with you some beyond-the-obvious tips and tricks when it comes to debugging HTML pages and JavaScript in Chrome.

What Are Dev Tools?
Although I want to focus on the less obvious but highly useful tips, I must get this out of the way. The main characters in this article are the dev tools, so I must introduce the main character and how to launch it. Dev tools are built into the Chrome browser. You can launch the browser and simply launch them with a shortcut key (F12 on Windows or CMD_OPT_I on a Mac). Dev tools may launch either docked to a side of your browser or they can be free floating. This behavior can be toggled via the triple-dot menu in the top right-hand corner of dev tools, as shown in Figure 1.

As you can see, dev tools are fairly involved. They’re organized into various tabs, such as Elements, Console, Sources, Network, Performance, and so much more. Within each tab, there’s a specific user interface designed for that section. Each of these sections is loaded with tips and tricks. Some are obvious: you can use the console window to run arbitrary JavaScript commands in the scope of the page, or you can use the Network tab to record network activity.

I’ll assume that you’ve used dev tools in the past and are generally familiar with their basic usage already, so I won’t focus on a 101 introduction of dev tools in this article. My focus will be on some of the interesting arcane tricks, some of which will (hopefully) pleasantly surprise you and become a part of your dev tricks going forward. Let’s get rolling.

Supercharge with the Command Menu
One of the biggest frustrations I have with the dev tools is simply how powerful they are and how many commands and possibilities they have. Seriously, don’t get me wrong: you need all those facilities to do your work, but if someone wrote an end-to-end book on every feature in dev tools, they’d probably never finish the book. Sometimes I remember that there’s a way to do something in dev tools, but I just can’t remember where that particular command was.

For instance, when trying to understand another coder’s code, I frequently find code folding useful. I vaguely remember that there’s a feature in Chrome, off by default, that allows me to enable code folding. To be honest, I’ve no idea where to toggle that feature from, but I know I can easily find it using the command menu. Simply press CMD_SHIFT_P on Mac or CTRL_SHIFT_P on Windows to show the command menu and find “Enable code folding” from there. This can be seen in Figure 2.

As soon as you enable code folding, you’ll notice that all logical blocks of code shown to you can be collapsed or expanded. For instance, if you have an if/then/else statement, you can collapse the “if” part so you can focus on the “else” part. Or you can collapse a large switch statement, or function, or really any logical part of your code.

Load Any File Quickly with the Open Dialog
Modern websites are skyscrapers built with sticks, twigs, and cards stuck together with chewing gum and spit. You

Figure 1: Changing how dev tools appear
Figure 2: The command menu
Click on that button and voila! Just like magic, you get all the indentation, whitespace, and tabs back. Now enable code folding using the previous command menu tip, and you can start understanding the logic of this complex file. The same file after prettifying and enabling code folding looks like Figure 5.

Figure 5: Prettified and code folded is much easier to debug.

For instance, I went ahead and stored the above snippet to a directory called “temp”, and in the same “Sources” tab to the left of snippets, there’s another tab called “Filesystem” that allows you to add a local folder to a given site. You’ll be shown a warning like that in Figure 7.

The advantage now is that you can store any kind of file on your local file system. You can edit it in VSCode as you wish. And you can see the results directly in the production site without deploying to the server. I’ve found this tip extremely helpful in not only diagnosing issues, but also creating a bunch of helper functions. For instance, if a site has a bunch of download links, I could easily write a helper script with UX, if necessary, to help me download all files quickly. Or I could easily insert a debugging rendering UX with charts and graphs driven completely from the local operating system.

Keep an Eye on a Value
Let’s say that you’re writing a game and are trying to narrow down a difficult bug. Almost like you wish you had four hands as you played the game, you want to keep an eye on a value, such as:

(Math.random() + 1).toString(36).substring(7);

You open the dev tools and navigate to the “console” tab. Here, click on the “eye” to create a live expression, as shown in Figure 8. In the text box, go ahead and place your live expression text. This could be a variable or the output of a function. Now, Chrome dev tools pins this value on the top and updates it constantly. This is a great way for you to keep an eye on a value as the value changes.

The Chrome tool pins the value on the top and updates it constantly, allowing you to keep an eye on the value as it changes.

Screenshot Large Pages
Okay, this next one is a pretty amazing tip that I’ve seen developers install tools for or do all kinds of gymnastics for. Little do they know, this is built right inside of Chrome. Sometimes you have a long page, and no matter how big your monitor is, it requires you to scroll. What if you want to take a screenshot of the entire page, including the areas that aren’t visible? Like I said, I’ve literally seen developers painfully scroll, take pictures, and stitch them together.

But it’s built right into Chrome. Again, just open dev tools and press CMD_SHIFT_P (Mac) or CTRL_SHIFT_P (Windows) and choose to do a “full size” screenshot, as shown in Figure 9. Doing so creates a screenshot of the entire page and downloads it.

Figure 9: Various screenshot options

In Figure 10, you can also see the “Disable cache” and the “Throttling” UX. Those are great to diagnose hard client-side problems and get caching out of the picture, and to simulate low bandwidth situations.
Network tools are essential not just for performance but also for debugging your APIs. For instance, you can right

This really simplifies my task both for debugging and documentation.

Now, visit any site that doesn’t support dark mode. For instance, I visited https://developers.google.com/docs/api/samples. Once you’re subjected to the harsh sunlight, vampire style, click the bookmark you just created. It immediately turns it into a nice and readable version in dark mode, as shown in Figure 15.

I use this technique quite a bit, and it’s really helped me across many products. I highly recommend trying it if you’re a dark mode fan like I am.

Summary
A developer’s life isn’t easy. We deliver complex functionality through HTML and JavaScript, and we need some tools to help us out. Chrome dev tools are very powerful and have a seemingly infinite number of features. I’m frequently amazed at the new things I learn and discover every day. They’ve helped me out and solve some very hard problems. Sometimes I wonder how large the Chrome dev tools team is.

What are some of your favorite Chrome dev tool tricks? Would you like to see more such tricks? Do let me know. Until next time, happy debugging.

Sahil Malik
Paul D. Sheriff
psheriff@pdsa.com
www.pdsa.com

Paul Sheriff has been in the IT industry since 1985. In that time, he has successfully assisted hundreds of companies architect software applications to solve their toughest business problems. Paul has been a teacher and mentor through various media such as video courses, blogs, articles, and speaking engagements at user groups and at conferences around the world. Paul has multiple courses in the www.pluralsight.com library (https://bit.ly/3gvXgvj) and on Udemy.com (https://bit.ly/3WOK8kX) on topics ranging from C#, LINQ, JavaScript, Angular, MVC, WPF, XML, jQuery, and Bootstrap.

you select all tables using the sys.tables table or the Information_Schema.Tables view. To retrieve all tables on Oracle, you use the all_tables table. In PostgreSQL, you can use the Information_Schema.Tables view. The problem is that if you want code that’s consistent between different databases, you must write different SQL for each database system, then create an API for developers to call.

In this article, you’ll learn how to use the GetSchema() method on the DbConnection class to retrieve tables, views, columns, indexes, stored procedures, and more from any database system. This method is implemented by each data provider to retrieve schema information in a generic fashion. What do you do with this information? You can present column names to your user to let them select columns to filter on for a report. You can use it to build your own code generator. You can even use it to create a SQL comparison tool. Follow along with this article to see how easy it is to use GetSchema() to accomplish these various tasks.

Introducing the GetSchema() Method
GetSchema() is a virtual method on the DbConnection class in the .NET Framework. This method is overridden by the SQL Server, Oracle, ODBC, OLE DB, and other data providers. Each provider uses the appropriate metadata tables in their respective servers to retrieve the metadata for each call you make to the GetSchema() method. The following list is an example of some of the metadata that you may be able to return from your database server. How many of these items you can retrieve is dependent on the data provider you’re using.

• Columns
• Databases
• Data types
• Foreign keys
• Index columns
• Indexes
• Procedure parameters
• Procedures
• Reserved words
• Tables
• Users
• User-defined types
• Views
• View columns

To use the GetSchema() method, create an instance of the SqlConnection or OracleConnection, or your own provider’s implementation of the DbConnection class, and pass a valid connection string to the constructor. Open the connection and invoke the GetSchema() method passing in a string of the collection you wish to return, as shown in the following code snippet.

using SqlConnection cn = new("CONNECTION STRING");
cn.Open();
DataTable dt = cn.GetSchema("COLLECTION NAME");

The “COLLECTION NAME” you pass to GetSchema() determines which data you get back. For example, if you pass “Tables”, you get the list of tables within the database specified in your connection string. If you pass “Columns”, you get a list of all the columns within the database specified in your connection string.

The Same but Different
Although you can retrieve columns, tables, indexes, etc. from each provider, be aware that they don’t always return the same column names for each call. For example, when you request all tables from SQL Server, GetSchema() returns the columns TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, and TABLE_TYPE to describe the tables. In Oracle, the columns OWNER, TABLE_NAME, and TYPE are used to describe the tables in the database. The ODBC and OLE DB providers provide different column names. Not all providers return the same data. For a complete list for the SQL Server, Oracle, ODBC, and OLE DB providers, check out the Microsoft documentation at https://bit.ly/3V3CMZv.

I wish Microsoft had provided a specification for what was returned by each call, as that would have made it easier if you need to target different database systems at your work. I guess Microsoft thinks most developers typically only work with a single system. For those of us who must work with multiple database systems, I recommend mapping the data returned from each provider to your own schema classes. You can then create your own collection of schema classes for each type of metadata you need to retrieve. I’ll show you how to do this later in this article.

Adding Restrictions
Most of the calls to GetSchema() support a second parameter called restrictions. The restrictions are a string array with anywhere from one to five elements. For example, let’s say you wish to restrict the columns to retrieve from a single table: declare a four-element array, and fill in the second and third elements with the schema and table name, as shown in the following code.
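The code referred to here falls outside this excerpt, so what follows is a minimal sketch of such a restrictions array. It assumes the AdventureWorksLT sample database used in the other listings, with SalesLT.Customer as an illustrative target table:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

string conn = "Data Source=Localhost;" +
    "Initial Catalog=AdventureWorksLT;" +
    "Integrated Security=True;";

using SqlConnection cn = new(conn);
cn.Open();

// Restrictions for the "Columns" collection:
// element 0 = catalog, 1 = schema, 2 = table, 3 = column.
// A null element means "don't filter on that part".
string?[] restrictions = new string?[4];
restrictions[1] = "SalesLT";   // schema name (illustrative)
restrictions[2] = "Customer";  // table name (illustrative)

DataTable dt = cn.GetSchema("Columns", restrictions);
Console.WriteLine($"{dt.Rows.Count} columns in SalesLT.Customer");
```

Because the catalog and column elements stay null, the call returns every column of the one table named in elements 1 and 2.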
Get Tables within a Specific Schema
As mentioned previously, you can pass in an array of values to the second parameter of the GetSchema() method.

Try It Out
Add the restrictions array shown in Listing 2 to the code you wrote in the Program.cs and run the application to see that you now only return tables within the SalesLT schema. If you’re using a different database from AdventureWorksLT, adjust the schema/owner name to what is applicable to your database.

Listing 1: Display all tables in a database using the GetSchema() method

using System.Data;
using System.Data.SqlClient;

string conn = "Data Source=Localhost;
  Initial Catalog=AdventureWorksLT;
  Integrated Security=True;";

using SqlConnection cn = new(conn);
cn.Open();

// Get All Tables in the Database
DataTable dt = cn.GetSchema("Tables");

// Display Column Names
string format = "{0,-20}{1,-10}{2,-35}{3,-15}";
Console.WriteLine(format, "Catalog", "Schema",
  "Name", "Type");

// Display Data
foreach (DataRow row in dt.Rows) {
  Console.WriteLine(format,
    row["TABLE_CATALOG"], row["TABLE_SCHEMA"],
    row["TABLE_NAME"], row["TABLE_TYPE"]);
}

Get Views Only
There are two ways to retrieve views within a database. You can set the fourth element to “VIEW” in Listing 2, or you can call the GetSchema() method using “Views” as the first argument, as shown in Listing 3. Be aware that the columns returned when you use “Views” as the first argument are different than when you use “Tables”. You still get the catalog/database, schema, and view name, but you also get CHECK_OPTION and IS_UPDATABLE columns in SQL Server. In Oracle, you get many other columns as well, and you also get the SQL text for the view. That’s something I wish Microsoft had included in this call to GetSchema() for SQL Server.

Try It Out
Change the code you wrote in the Program.cs to match the code shown in Listing 3 and run the application to see just the views from your database.

Listing 3: Pass "Views" to the GetSchema() method to retrieve all views in the database

using System.Data;
using System.Data.SqlClient;

string conn = "Data Source=Localhost;
  Initial Catalog=AdventureWorksLT;
  Integrated Security=True;";

using SqlConnection cn = new(conn);
cn.Open();

Get Columns
When you request column information, 21 discrete pieces of information about the column are returned when using SQL Server, but the Oracle provider only returns nine pieces of information. The ODBC and OLE DB providers return 21 and 33 data points for a column, respectively. The data points in common among all providers are the catalog/database, owner/schema, column name, data type, length, precision, scale, and nullable. Listing 4 shows the code you write to retrieve column data from a SQL Server.

Try It Out
Change the code you wrote in the Program.cs to match the code shown in Listing 4 and run the application to see all the columns in your database.

Figure 1: GetSchema() can return a list of all tables and views in a database.

Get Columns for a Single Table
The code in Listing 5 shows the restrictions you use to filter the column data. The first element in the restrictions array is the catalog/database name. The second element is the schema name. The third element is the table name. In the code, I’m only setting the schema and table name to just list those columns that make up that one table. The fourth element can be a specific column name if you only need the data for a single column.
Listing 6: The call to get procedures returns both stored procedures and functions

using System.Data;
using System.Data.SqlClient;

string conn = "Data Source=Localhost;
  Initial Catalog=AdventureWorksLT;
  Integrated Security=True;";

using SqlConnection cn = new(conn);
cn.Open();

DataTable dt = cn.GetSchema("Procedures");

// Display Column Names
string format = "{0,-12}{1,-40}{2,-12}{3,-30}";
Console.WriteLine(format, "Schema",
  "Procedure Name", "Type", "Created");

string?[] restrictions = new string?[4];
restrictions[0] = null;
restrictions[1] = null;
restrictions[2] = null;
restrictions[3] = "PROCEDURE";

// Get Stored Procedures Only
DataTable dt = cn.GetSchema("Procedures", restrictions);

Try It Out
Add the code shown above into the appropriate location in the Program.cs file and run the application to see just the stored procedures within your database.

Get a List of Databases on SQL Server
In SQL Server, there’s the concept of a database within a server. In Oracle, the database is the instance of the Oracle server you connect to. So, the SQL Client provider is the only data provider to which you can pass the string “Databases” in the GetSchema() method. Listing 7 shows the code to get the list of databases within a SQL Server instance. Notice that the connection string does not require the Initial Catalog key/value pair; you just need to connect to the server with valid credentials to retrieve the list of databases. Only three columns are returned from this call: database_name, dbid, and create_date.

Listing 7: Only when using SQL Server can you get the complete list of databases from your server instance

using System.Data;
using System.Data.SqlClient;

string conn = "Data Source=Localhost;
  Integrated Security=True;";

using SqlConnection cn = new(conn);
cn.Open();

// Get the Databases in SQL Server
DataTable dt = cn.GetSchema("Databases");

// Display Column Names
string format = "{0,-25}{1,-10}{2,-25}";
Console.WriteLine(format, "Database Name",
  "DB ID", "Created");

// Display Data
foreach (DataRow row in dt.Rows) {
  Console.WriteLine(format,
    row["database_name"], row["dbid"],
    row["create_date"]);
}

Try It Out
If you’re using SQL Server, modify the code you wrote in the Program.cs to match the code shown in Listing 7 and run the application to see all the databases in your SQL Server instance.

Get Users
When using SQL Server or Oracle, you can retrieve the list of users by passing “Users” to the GetSchema() method. On each system, you get the user name, the user ID, and the creation date. On SQL Server, you also get the last time the user information was modified. The code in
In the code shown in Listing 10, you get five columns returned, as shown in Figure 3. The first column is the collection name, the second column is the restriction name that tells you what you would put into each element of the array. The third and fourth columns aren’t that important, but the fifth column tells you into which element number you place the data you want to filter upon.

Figure 2: A list of all the collections that GetSchema() can retrieve when using the SQL Server provider

Listing 10: The GetSchema() method tells you which restrictions are available for each collection

using System.Data;
using System.Data.SqlClient;

string conn = "Data Source=Localhost;
  Integrated Security=True;";

using SqlConnection cn = new(conn);
cn.Open();

// Get All Restrictions
DataTable dt = cn.GetSchema("Restrictions");

// Display Column Names
string format = "{0,-25}{1,-20}{2,-20}{3,-25}{4,-10}";
Console.WriteLine(format, "Collection Name",
  "Restriction Name", "Parameter Name",
  "Default", "Number");

// Display Data
foreach (DataRow row in dt.Rows) {
  Console.WriteLine(format,
    row["CollectionName"],
    row["RestrictionName"],
    row["ParameterName"],
    row["RestrictionDefault"],
    row["RestrictionNumber"]);
}

Try It Out
Change the code you wrote in the Program.cs to match the code shown in Listing 10 and run the application to see all the restrictions that are applicable for your data provider. Depending on what provider you’re running, you might see different values from those shown in Figure 3.

SQL Compare Utility
Now that you’ve seen how to retrieve metadata about your databases such as tables and columns, what can you do with this information? As I mentioned at the beginning of the article, you can present a list of fields to filter upon for reports or build a code generator. Another idea is to build a tool that checks to see what tables, columns, indexes, etc. are missing between a development, a QA, and a production database.

Think about a typical software development lifecycle. As you develop your application, you make database changes to a development database. When you’re ready to move your code to the QA team for them to check, you not only need to create a build of your application, but you also need to update the QA database so it matches your development database. Once your QA process is complete, you need to determine the differences between your QA database and the production database. There are a few tools on the market that you can purchase to accomplish this task. However, you have almost all the tools you need from the code shown in this article to create your own SQL compare utility. You just need a few classes and some LINQ queries to build this compare utility.

Create Schema Entity Classes
Because each data provider returns different columns in their DataTable objects, it’s a good idea to create some classes to hold the table and column information. You could also create classes to hold index, procedure, function, and view information as well, but let’s just build a couple so you can see the design pattern. You can then add additional classes as you need them.

Create a class named SchemaBase that holds common properties for different schema items, as shown in the following code snippet.
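The snippet referred to here isn’t reproduced in this excerpt. A minimal sketch of what such classes might look like, with the property set inferred from the columns mapped in Listings 11 and 12, is shown below; the exact shape of the author’s classes may differ:

```csharp
// SchemaBase holds the properties common to all schema items;
// TableSchema and ColumnSchema add the table- and column-specific
// properties used in the TableSchemaToList() and
// ColumnSchemaToList() methods.
public class SchemaBase
{
    public string? Catalog { get; set; }
    public string? Schema { get; set; }
}

public class TableSchema : SchemaBase
{
    public string? TableName { get; set; }
    public string? TableType { get; set; }
}

public class ColumnSchema : SchemaBase
{
    public string? TableName { get; set; }
    public string? ColumnName { get; set; }
    public int OrdinalPosition { get; set; }
    public string? DataType { get; set; }
}
```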
Listing 11: Map the columns from the DataTable to create a list of TableSchema objects

private static List<TableSchema>
  TableSchemaToList(DataTable dt)
{
  List<TableSchema> ret = new();

  foreach (DataRow row in dt.Rows) {
    TableSchema entity = new()
    {
      Catalog = row["TABLE_CATALOG"].ToString(),
      Schema = row["TABLE_SCHEMA"].ToString(),
      TableName = row["TABLE_NAME"].ToString(),
      TableType = row["TABLE_TYPE"].ToString()
    };

    ret.Add(entity);
  }

  return ret;
}
Add a method named ColumnSchemaToList() (Listing 12) to the SqlServerCompareHelper class. In this method, you take the results from the call to the GetSchema("Columns") method and turn each row of the DataTable into a generic list of TableColumn objects. If you’re using a data provider other than SQL Server, modify the column names in the DataTable to the ones returned from the GetSchema() method of your provider.

Create Table Compare Method
Add a method named CompareTables() (Listing 13) to perform the comparison of the tables between two different databases on two different servers. Pass in two different connection strings to this method. Create three generic lists of TableSchema classes. One is used as the return value; the other two hold the lists of TableSchema classes returned after building the DataTables from each database by calling GetSchema("Tables") and turning those into the generic lists by calling the TableSchemaToList() method.

Instead of using two different servers and two different databases, I’m going to use the same database, but I’m going to simulate that the target database is missing a few items. I do this by removing a few items using the RemoveRange() method on the targetList variable. The data within the sourceList and the targetList collections are now different.
Listing 12: Map the columns from the DataTable to create a list of TableColumn objects

private static List<ColumnSchema>
  ColumnSchemaToList(DataTable dt)
{
  List<ColumnSchema> ret = new();

  foreach (DataRow row in dt.Rows) {
    ColumnSchema entity = new()
    {
      Catalog = row["TABLE_CATALOG"].ToString(),
      Schema = row["TABLE_SCHEMA"].ToString(),
      TableName = row["TABLE_NAME"].ToString(),
      ColumnName = row["COLUMN_NAME"].ToString(),
      OrdinalPosition =
        Convert.ToInt32(row["ORDINAL_POSITION"]),
      DataType = row["DATA_TYPE"].ToString(),
    };

    ret.Add(entity);
  }

  return ret;
}

To retrieve the list of what items are missing from the target database, you can employ the LINQ ExceptBy() method. This method tells you which items are missing from one collection to another. The ExceptBy() method is a generic method, so the first generic type you supply is the type of data to return, in this case TableSchema. The second generic type to pass is the key type you’re going to be using to compare the data between the source and the target lists.

The ExceptBy() method needs two lists of data, so two arguments are passed to it. Create the first argument by selecting all rows from the targetList collection and turning each row into an anonymous object with all four properties. The second argument is a lambda expression created by turning each row from the sourceList into an anonymous object with four properties that match the first list. Remember that when using comparison methods in LINQ, unless you’ve implemented an EqualityComparer class for the TableSchema class, the method uses reference comparison to see if one object reference is equal to the other. That won’t work, so you must create anonymous objects to compare all four properties from one collection to the other.

Listing 13: Use LINQ to compare two collections to determine what tables are missing from a database

public static List<TableSchema>
  CompareTables(string connSource,
    string connTarget)
{
  List<TableSchema> ret = new();
  List<TableSchema> sourceList;
  List<TableSchema> targetList;

  sourceList = TableSchemaToList(
    GetData(connSource, "Tables"));
  targetList = TableSchemaToList(
    GetData(connTarget, "Tables"));

  // Simulate missing tables
  targetList.RemoveRange(2, 3);

Create Column Compare Method
Now that you’ve seen how to perform comparisons using the LINQ ExceptBy() method, create other comparison methods to find the differences between all the objects between one database and another. In Listing 14, you see a method named CompareColumns() that compares all the columns from one database to another. I’m just going to write these two comparison methods for you. You now have the design pattern you can follow to perform comparisons between other database objects such as indexes, stored procedures, views, etc. The column comparison builds its lists the same way:

sourceList = ColumnSchemaToList(
  GetData(connSource, "Columns"));
targetList = ColumnSchemaToList(
  GetData(connTarget, "Columns"));

Now, change the last few lines of code to retrieve the differences between columns and run the application to see the differences.
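The ExceptBy() call itself isn’t visible in the excerpt of Listing 13 above, so here is a standalone sketch of the anonymous-object comparison described in the text, with two small hand-built lists standing in for the lists pulled from real databases:

```csharp
using System;
using System.Linq;

var sourceList = new[]
{
    new { Catalog = "AW", Schema = "SalesLT",
          TableName = "Customer", TableType = "BASE TABLE" },
    new { Catalog = "AW", Schema = "SalesLT",
          TableName = "Product", TableType = "BASE TABLE" },
};

// The target list is missing the Product table.
var targetList = new[]
{
    new { Catalog = "AW", Schema = "SalesLT",
          TableName = "Customer", TableType = "BASE TABLE" },
};

// Anonymous objects use value equality, so projecting both sides
// to the same anonymous shape makes ExceptBy() compare all four
// properties instead of comparing object references.
var missing = sourceList.ExceptBy(
    targetList.Select(t => new { t.Catalog, t.Schema,
                                 t.TableName, t.TableType }),
    s => new { s.Catalog, s.Schema, s.TableName, s.TableType });

foreach (var table in missing)
    Console.WriteLine($"Missing: {table.Schema}.{table.TableName}");
```

Run against these sample lists, the loop reports SalesLT.Product as missing. In the article’s CompareTables() method, the same projection is applied to the sourceList and targetList built from the two databases. Note that ExceptBy() requires .NET 6 or later.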
Get Tables
Fill in the DisplayTables() method with the code shown parameter to see if it’s null, and if it isn’t, add the AND
in Listing 15. This method accepts three parameters: statement to check for where the TABLE_SCHEMA column
a connection string, an optional schema name, and an is like the value passed in. Add the percent sign (%) after
optional table name. For this sample, I’m using the In- the schema variable so it can find all schemas that match
formation_Schame.Tables view, but I could have used the beginning of the schema value. Feel free to modify
the sys.tables system table as well. Check the schema this wildcard to work as you see fit.
The last option is to pass both a schema name and a table name (or partial name) to only display tables that match the name passed within the schema "dbo".

// Get any Tables in the "dbo" Schema
// That Start with "Cust"
TSqlHelper.DisplayTables(conn, "dbo", "Cust");

Get Check Constraints
Check constraints are metadata that you can't retrieve from the GetSchema() method. Add another method to the TSqlHelper class named DisplayCheckConstraints(), as shown in Listing 16. In this method, use the Information_Schema.Check_Constraints view to retrieve all check constraints. You may also optionally pass in a schema name to only display those constraints within that schema. If you want, feel free to add a third parameter to filter on a constraint name, similar to what you did in the DisplayTables() method.

Try It Out
Open the Program.cs file and call this new method to display all the check constraints in your database. Next, try passing in a specific schema to retrieve only those check constraints within that schema.

// Get all Check Constraints in 'SalesLT' Schema
TSqlHelper.DisplayCheckConstraints(conn,
  "SalesLT");

Get Foreign Keys with Columns
Another set of data that's impossible to retrieve using the GetSchema() method is the foreign keys together with the column(s) that make up those foreign keys. Yes, you can get the foreign key names and the tables to which they belong, and you can get the index columns, but there's no way to get this data back in a single call. However, using the system tables in SQL Server, you can create a JOIN to retrieve this information, as shown in Figure 4.

Figure 4: Display the foreign key name, table name, and column used in the foreign key

Add a new method to the TSqlHelper class named DisplayForeignKeys(), as shown in Listing 17. Create a SQL JOIN to retrieve the key name, the parent table and column, and the referenced table and column for each foreign key in your database. You can also pass in a table name to retrieve the foreign keys for just that specific table. This SQL is specific to SQL Server, but you should be able to create similar SQL to retrieve the same information from any database server.

Try It Out
Now that you have the foreign key method created in the TSqlHelper class, open the Program.cs file and make the call to this method as follows. To view just a single table's foreign keys, pass in the table name as the second parameter.

// Get Foreign Keys for a Table
TSqlHelper.DisplayForeignKeys(conn,
  "SalesOrderDetail");
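The same idea — going straight to a database's own system catalog for metadata a generic API won't surface, such as foreign keys with their columns — can be sketched language-neutrally. Here SQLite's `sqlite_master` and `PRAGMA foreign_key_list` stand in for SQL Server's system views; the table names are illustrative:

```python
import sqlite3

# Sketch: query a database's own system catalog for metadata.
# SQLite's sqlite_master / PRAGMA foreign_key_list play the role that
# SQL Server's system tables play in the article; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customer (CustomerId INTEGER PRIMARY KEY);
    CREATE TABLE SalesOrder (
        OrderId INTEGER PRIMARY KEY,
        CustomerId INTEGER REFERENCES Customer(CustomerId));
""")

# List tables from the system catalog
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]

# Foreign keys with their columns, in one call per table; each row
# includes the referenced table (index 2) and the local column (index 3)
fks = conn.execute("PRAGMA foreign_key_list(SalesOrder)").fetchall()
```

Every database engine exposes a catalog like this; the column layouts differ, which is exactly why GetSchema() can only offer a lowest-common-denominator view.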
As you can see, you have much more flexibility when you use the system tables/views in your database system as compared to using the GetSchema() method.

Free SQL Compare and Code Generation Utility
Instead of having to build your own SQL Compare tool, you can download one for free. I created a set of developer utilities (Figure 5) years ago and they are available at https://github.com/PaulDSheriff/PDSC-Tools. Besides a SQL Server comparison tool, you also get a Computer Cleaner tool that helps clean up the multiple locations where Visual Studio and .NET leave a bunch of temporary files. There is also a Project Cleaner to remove folders and files that are unnecessary when you're doing an initial check-in to source control, or when you just want to zip up the project to send to a colleague. The next utility is a Property Generator that helps you build different types of properties for your classes. Included are auto-properties, full properties, and raise-property-changed properties. Simple text templates are available for you to modify these or add your own types of property generation. The code generator tool generates full CRUD classes using the Entity Framework. It can also generate full CRUD MVC pages using .NET 6/7. You can customize this code generator and add your own sets of classes, pages, or anything you want.

Summary
In this article, you learned how to retrieve metadata about your database using the .NET Framework's GetSchema() method. This method provides a standard way to retrieve data about your database objects. However, be aware that the data that's returned from each call differs from one data provider to another. Retrieving metadata from your database can be useful for creating all sorts of utilities, such as a SQL comparison tool or a code generator. You should take some time to understand the system tables in your specific database server, as they can provide you with much more information than the GetSchema() method.

Paul D. Sheriff
Julie Lerman
@julielerman
thedatafarm.com/contact

Julie Lerman is a Microsoft Regional Director, Docker Captain, and a long-time Microsoft MVP who now counts her years as a coder in decades. She makes her living as a coach and consultant to software teams around the world. You can find Julie presenting on Entity Framework, Domain-Driven Design, and other topics at user groups and conferences around the world. Julie blogs at thedatafarm.com/blog, is the author of the highly acclaimed "Programming Entity Framework" books, and has many popular videos on Pluralsight.com.

in the November 24, 2022 bi-weekly updates to get an idea of the broad scope of features affected. Unless you've experienced some of the things fixed or enhanced by these features, reading through the list doesn't always give you a good understanding of the change. I've selected some of the more interesting items from the list and experimented with the features to be sure that I truly understand what the problem was that they were solving and how they work now. This meant building a lot of demos, running them in EF Core 6, running them in EF Core 7, re-reading the discussions in the GitHub issues, reading code from the repository, and, in one case, emailing someone from the EF Core team for further information.

In this article, I'll share what I learned about these issues and hope that you find them as interesting and useful as I have. The relevant projects can be found in my GitHub repository: https://github.com/julielerman/CodeMagEFCore72023.

FromSQL: Back to the Future
The EF Core team divided the original FromSql method into two explicit methods—FromSqlRaw and FromSqlInterpolated—to overcome some issues that could occur if you were expressing your raw SQL as an interpolated string. The original FromSql method was removed. The same was done for ExecuteSql, which was replaced by ExecuteSqlRaw, ExecuteSqlInterpolated, and ExecuteSqlInterpolatedAsync.

Here's an example of FromSqlRaw that takes a stored procedure with placeholders along with parameters to populate the placeholders:

var authors = _context.Authors
  .FromSqlRaw(
    "AuthorsPublishedinYearRange {0}, {1}",
    2010, 2015)
  .ToList();

The interpolated versions of these methods accept only a FormattableString, and that string leverages interpolation to supply the parameters.

int start = 2010;
int end = 2015;
var authors = _context.Authors
  .FromSqlInterpolated
  ($"AuthorsPublishedinYearRange {start}, {end}")
  .ToList();

But oh boy, they are a PIA to type or discuss with your teammates! The EF Core team made a decision to bring back FromSql, ExecuteSql, and ExecuteSqlAsync to the lexicon of EF Core methods, replacing the Interpolated methods. They're identical to the methods they replace.

int start = 2010;
int end = 2015;
var authors = _context.Authors
  .FromSql($"AuthorsPublishedinYearRange
    {start}, {end}")
  .ToList();

The team considered removing the longer versions but left them in place to avoid breaking changes. Their guidance is to use the new methods.

Use Raw SQL to Query Scalar Values
Here's a wonderful addition to the APIs that I've already benefited from in my own work, as I'm building demos for a new EF Core and Domain-Driven Design course for Pluralsight. A new method of DbContext.Database, called SqlQuery, lets you pass in a raw SQL query to get scalar data directly from the database. You don't need a mapped entity to capture the results. Here's an example where I wanted to retrieve an int value from the database that's mapped to a private field in an entity.

var value = _context.Database.SqlQuery<int>
  (@"SELECT TOP 1 [_hasRevisedSpecSet]
     FROM [ContractVersions]")

This method isn't in the documented list of changes to EF Core 7 but can be found in the section on Raw SQL.

SqlClient Changed Connection String Encryption Defaults
This change is notable because it's a breaking change and the change wasn't to EF Core, but to Microsoft.Data.SqlClient, which is a dependency of EF Core's SQL Server provider. Version 7 of the provider depends on SqlClient 5.0, and it's this version of SqlClient that introduced the change.

Prior to this, the default value of a SQL Server connection string's Encrypt parameter was False. This made life easy during development because it meant that we didn't have to have a valid certificate installed on our development computer. But the team responsible for SqlClient decided it was time to make SqlClient more secure. If you take a look at the GitHub issue for this change (https://github.com/dotnet/SqlClient/pull/1210), you'll see that members of the EF Core team and community shared
Encrypt=True requires that the server is configured with a valid certificate and that the client trusts the certificate.

If you're targeting a local SQL Server instance on your development computer and don't happen to have a development certificate installed, you can simply add Encrypt=False to your connection string. Otherwise, you'll get a SqlException telling you that the certificate was issued by an untrusted certificate authority. In most other cases, it's a good thing to have the proper setup that aligns with the encryption. Hopefully, you're using separate connection strings for dev and production, but do be sure not to send that hack into production.

Orphaned Dependents Are Protected from Inadvertent Deletion
Although this is listed as a breaking change for EF Core 7, it's something that was working in EF Core 5, got broken in EF Core 6.0.0, and was fixed in 6.0.3. As I spent some time investigating it as an EF Core 7 change, I'll share it with you because it's still notable.

Although it's possible to define a nullable relationship in a few ways, this is specific to the case where you have a nullable foreign key property or a reference in a dependent.

public int? PatronId { get; set; }

When the book is returned, your code should set PatronId to null.

If a patron with a book moves away and you delete them from your system, moving them to a different system maintaining inactive patrons, by default, EF Core sets the value of that book's PatronId to null. This doesn't make sense because now the book will never get returned. The behavior that the librarian requested is that the book get deleted along with the patron. To achieve that, it's possible to override the default behavior by forcing the OnDelete method on the relationship in OnModelCreating to Cascade.

modelBuilder.Entity<Patron>()
  .HasMany(p => p.Books).WithOne()
  .OnDelete(DeleteBehavior.Cascade);

Unfortunately, there was a side effect created in EF Core 6 that, because of that explicit cascade delete, also deleted a book if you set its PatronId to null. And that's the scenario that was fixed in 6.0.3 and is listed as a change in 7.0.

Now, if you have a nullable relationship that you've overridden so that deleting the principal cascades and deletes the dependents, simply setting the foreign key to null will just update the FK to null in the database.
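The two delete behaviors being contrasted here can be sketched language-neutrally. This Python sketch models the semantics described above (SetNull orphans the dependent rows; Cascade removes them); the names are illustrative, not EF Core's implementation:

```python
# Sketch of the two delete behaviors described above. When a principal
# (patron) is deleted, "SetNull" keeps the dependent books but clears
# their foreign key, while "Cascade" deletes them outright.

def delete_patron(patron_id, books, behavior):
    """Apply a delete behavior to the patron's dependent books."""
    if behavior == "Cascade":
        return [b for b in books if b["patron_id"] != patron_id]
    # SetNull: keep the rows but orphan them
    return [dict(b, patron_id=None) if b["patron_id"] == patron_id else b
            for b in books]

books = [{"title": "Dune", "patron_id": 7}, {"title": "Emma", "patron_id": 9}]
```

The EF Core 6.0.0 bug was, in effect, applying the Cascade branch to a book whose foreign key had merely been set to null; the 6.0.3 fix restores the distinction between the two operations.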
Filtered Includes for Hidden Navigation Properties
Here's a feature that improves the experience of using EF Core to persist types that follow Domain-Driven Design guidance. In fact, the community member who requested this capability called out DDD in their request (https://github.com/dotnet/efcore/issues/27493).

The scenario provided was an entity with a one-to-many relationship where the dependents are encapsulated to protect how they are interacted with. This is an important capability. For example, a typical entity might totally expose the dependents for any type of operation, whether you are adding new books, removing a book, or editing a book. And this class doesn't at all express the true behavior: that the books are being checked out and returned, not added and removed.

public class Patron
{
  public int PatronId { get; set; }
  public string? Name { get; set; }
  public List<Book> Books { get; set; }
    = new List<Book>();
}

Alternatively, you can encapsulate the Books and ensure
that the Patron class (as an aggregate root) not only controls how the books are checked out and returned but that the Patron entity better describes what's happening. Expressing behavior is an important part of DDD.

There are different ways to achieve this, but one way is shown in Listing 1, where a _books private field allows special protected access to the books, but checking in and out is simply performed on the book in question. At the same time, the class also makes it easy to see the list of checked-out titles.

In a repository or other class you may use for persistence, it's still possible to query for a patron and books if needed. This is made possible by specifying a Books property in the data model with the following mapping in the DbContext:

modelBuilder.Entity<Patron>()
  .HasMany("_books").WithOne();

EF Core knows to tie the _books property to the Books property here because of the naming conventions I used. So now there's a secret Books property that's not exposed in the Patron class but is known to EF Core.

Given that setup, it was already possible to eager load those books using the same string specified in the mapping.

context.Patrons.Include("_books").ToList();

But when filtered includes (i.e., a way to sort or filter the dependent collection being loaded) were introduced in EF Core 6, there was no way to apply this capability to the above pattern using the string parameter.

The solution the team arrived at was to allow the use of EF.Property in an Include method. That means the above query can also be expressed as:

context.Patrons.Include(p =>
  EF.Property<Book>(p, "_books"))
  .ToList();

And the filtering or sorting methods can be appended to the EF.Property, as long as you first specify that the property is, in fact, a collection:

context.Patrons.Include(p =>
  EF.Property<ICollection<Book>>(p, "_books"))
  .ToList();

Entity Splitting
EF6 allowed us to split the properties of an entity across multiple tables, referred to as "entity splitting," but the capability didn't originally make it into EF Core. If you were to look at the original request in GitHub to support this in EF Core (https://github.com/dotnet/efcore/issues/620), you'd see that the team had to keep punting this change from one version to another. They finally enabled it in EF Core 7 with a new mapping method called SplitToTable.

Here's an example where I've added a property, MobileNumber, to the Patron class:

public string? MobileNumber { get; set; }

You can use the SplitToTable method to not only specify which properties of the type go to the alternate table(s), but you can also specify a name for the column if it differs from the property name, as I've done here.

modelBuilder.Entity<Patron>()
  .SplitToTable("PatronContactInfo",
    p => p.Property(c => c.MobileNumber)
      .HasColumnName("CellPhone"));

Along with this feature, the ability to specify the column name was also added to mappings for TPT and TPC inheritance, where multiple tables are also involved. You can see examples of these mappings in this GitHub issue: https://github.com/dotnet/efcore/issues/19811.

Unidirectional Many-to-Many
Another improvement that lends support to DDD entities is the ability to expose only one side of a many-to-many relationship. Quite often in your domain models, you might have such a relationship but only need to navigate in one direction.

For example, you can have categories for the library books. Each book may have one or more categories and each category may have one or more books. However, while getting a list of categories for a book is common, the library told us that it's highly unusual to want to find every book for a single category. So why complicate the category class with an unneeded Books property?

The book class has a Categories property:

public List<Category> Categories { get; set; }
  = new List<Category>();

Yet the category class has no property for books:

public class Category
{
  public int CategoryId { get; set; }
  public string? Name { get; set; }
}
The database schema reflects the many-to-many and EF Core respects it as well.

In my code logic, I can add categories to a book but I can't add books to categories. Yet because the many-to-many does exist in the database, I can access the data from other logic in my software as well as reporting solutions.

First, here's the ContactInfo class:

public class ContactInfo
{
  public string? MobileNumber { get; set; }
  public string? MainEmailAddress { get; set; }
}

Figure 1: Temporal table created by EF Core with columns from an owned entity
if (!EF.IsDesignTime)
{
//do something irrelevant to or in
// conflict with migrations
}
Not only can this flag help you avoid conflicts that could
arise between production code and migrations, you can
also use it for other purposes. For example, I’m having
my sample web app use Serilog, a .NET logging library
(http://serilog.net), instead of the simple logging in EF
Core. Therefore, I’m configuring Serilog and then adding
it to the web app’s services.
if (!EF.IsDesignTime)
{
_logfile = "logs/runtimelog.txt";
}
else {
_logfile = "logs/migrationlog.txt";
}
Log.Logger = new LoggerConfiguration()
.MinimumLevel.Debug()
.WriteTo.File(_logfile,
rollingInterval: RollingInterval.Day)
.CreateLogger();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Logging.AddSerilog();
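The branch above (one log file for design-time migrations, another at runtime) is just configuration keyed on a design-time flag. A minimal language-neutral sketch of the same selection logic, with illustrative file names:

```python
# Sketch of branching configuration on a design-time flag, mirroring
# the EF.IsDesignTime check above. File names are illustrative.

def choose_logfile(is_design_time: bool) -> str:
    """Pick a log destination depending on whether migrations tooling
    (design time) or the running app is executing the code."""
    if is_design_time:
        return "logs/migrationlog.txt"
    return "logs/runtimelog.txt"
```

Keeping migration output out of the runtime log makes it much easier to audit what the design-time tooling actually did.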
find a similarly powerful and easy-to-use tool to create web applications.

Shawn Wildermuth
shawn@wildermuth.com
wildermuth.com
@shawnwildermuth

Shawn Wildermuth has been tinkering with computers and software since he got a Vic-20 back in the early '80s. As a Microsoft MVP since 2003, he's also involved with Microsoft as an ASP.NET Insider and ClientDev Insider. He's the author of over twenty Pluralsight courses, has written eight books, is an international conference speaker, and is one of the Wilder Minds. You can reach him at his blog at http://wildermuth.com. He's also making his first, feature-length documentary about software developers today called "Hello World: The Film." You can see more about it at http://helloworldfilm.com.

Where You're Coming From
Back in the early 2000s, I was a C++ developer and one of those "you'll have to take my pointers out of my cold, dead hand" guys. But once I was introduced to how garbage collection worked in .NET, I was a convert. In those early days, I was writing ASP.NET (after my time writing components for ASP projects).

The reality was that I didn't understand how the web actually worked, but I was tasked with creating websites and web apps using ASP.NET. Microsoft came to my rescue by introducing Web Forms. Nowadays, Web Forms gets quite a lot of hate from many directions about how un-web-like it was. But it helped people like me dip a toe in the web world without the fear that comes from something new. Microsoft successfully turned desktop developers into web developers. But it wasn't without inherent risks.

Web Forms introduced drag-and-drop designing to web development. Under the covers, it was trying to hide the details of the web and make the server-side code feel like something akin to a stateful development solution. Add in ViewState and Session State, and lots of developers were able to accomplish a lot of value for their companies and employers.

But it's now 2023. We've been through a world of change since those early days. For many Web Forms developers, it can be overwhelming to be asked to learn JavaScript on the client, separate concerns into Controllers and Views, and write code that is truly stateless. But that's where we are now. There isn't a perfect upgrade path to ASP.NET Core for Web Forms developers. But there are some ways to apply our existing knowledge without throwing out the baby with the bathwater. In comes Razor Pages.

Introducing Razor Pages
As an answer to Web Pages, Microsoft introduced ASP.NET MVC as a Model-View-Controller framework that separated (and simplified the testability of) views and logic. This has been the prevailing framework for many projects, although it never did replace Web Forms. After .NET Core was introduced, Razor Pages was introduced to have a model closer to a page-by-page solution instead of complete separation. Now with Blazor, another solution has been added to the quiver of tools. For this article, I'm going to focus on Razor Pages themselves, as I think it's the most straightforward migration path for Web Forms developers.

Naming is the Hardest Job in Software Engineering
Before we get started, I want to define some terms. Razor is a language for adding logic to HTML markup (and was invented for ASP.NET MVC). Because this language is so useful, it's used by a number of technologies in the Microsoft stack, and that has led to everything being called Razor-something. In most cases, Razor files end in ".cshtml" or ".vbhtml". So, let's try to disambiguate:

• Razor View: The files associated with a View in ASP.NET MVC
• Razor Page: The files associated with a Page in Razor Pages
• Razor Component: A component used by the Blazor framework for WebAssembly-based web applications

Enough with definitions; let's see how Razor Pages work. Let's try to map Web Forms nomenclature to Razor Pages, as seen in Table 1.

Brief Overview
Razor Pages is based on two fairly simple concepts:

• Convention-based URLs
• Razor Pages to produce content

What I mean by this is that if you create a new project using Razor Pages, a new piece of middleware is added to handle requests:

var app = builder.Build();

// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
  app.UseExceptionHandler("/Error");
}

app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();

app.MapRazorPages();

app.Run();

MapRazorPages simply listens to requests and sees if there is a match to Razor Page files. If found, the middleware returns a rendered page, as seen in Figure 1.

How does it know if there's a Razor Page for the request? It uses a convention to find the files. Although the specific
Folders work in the same way. Any folders inside of the Pages folder map to URL fragments. For example, if you have a URL like /Sales/ProductList, the file matches, as seen in Figure 3.

Figure 1: Razor Pages middleware

Table 1: Translating Web Forms terms to Razor Pages

Web Forms Term        Razor Pages Term
Web Form (.aspx)      Razor Page (.cshtml/.vbhtml)
Web Control (.ascx)   Partial Page (also .cshtml/.vbhtml)
MasterPage            Layout
AJAX                  Just JavaScript
global.asax           Program.cs or Startup.cs

Okay, now that you can see how Razor Pages are mapped, let's dig into what makes a Razor Page.

Anatomy of a Razor Page
Although you'll often use scaffolding to create Razor Pages, let's look at what makes a Razor Page a Razor Page. A Razor Page is just a file that contains a @page declaration:
@page
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible"
content="IE=edge">
<meta name="viewport"
content="width=device-width,
initial-scale=1.0">
<title>Document</title>
</head>
<body>
<h1>Hello from Razor</h1>
</body>
</html>
Figure 2: Looking for a Razor Page

Figure 3: Folders in Razor Pages

This tells the middleware that this is a servable file. Your Pages folder might include other files, like layout files or partial files, that you don't want rendered as individual pages. This helps the middleware differentiate whether it's actually a Razor Page or not. The "@" sign isn't an accident. The Razor syntax uses the "@" symbol to indicate the start of a server-side code operation. For example, you can create an arbitrary code block like so:

@page
@{
  var title = "This is made with Razor Pages";
}
<!DOCTYPE html>

The curly braces (e.g., {}) are just to allow for multi-line code. This code is just plain C# (or VB.NET if you're using .vbhtml files). With that set, you can use it with the "@" symbol to start:

@page
@{
  var title = "CODE Magazine - Razor Pages";
}
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible"
    content="IE=edge">
  <meta name="viewport"
    content="width=device-width,
    initial-scale=1.0">
  <link href="~/css/site.css" rel="stylesheet" />
  <title>@title</title>
</head>
<body>
  <div>
    <h1>@title</h1>
  </div>
</body>
</html>
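The convention-based mapping described earlier — URL segments map to files under the Pages folder — can be sketched language-neutrally. This is an illustration of the convention only, not the framework's actual lookup code:

```python
# Sketch of Razor Pages' convention-based routing: each URL segment
# maps to a folder/file under the Pages folder, with "/" falling back
# to the Index page. Purely illustrative of the convention.

def map_url_to_page(url: str) -> str:
    """Translate a request path into the Razor Page file it matches."""
    path = url.strip("/") or "Index"
    return f"Pages/{path}.cshtml"
```

So a request for /Sales/ProductList would be served by Pages/Sales/ProductList.cshtml, which is the lookup Figure 3 depicts.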
The Page Model Class
When you create a new Razor Page using Visual Studio, it will automatically add a .cshtml.cs file of the same name. This is called the PageModel class. For example, the index page would look like this:

// Index.cshtml.cs
using Microsoft.AspNetCore.Mvc.RazorPages;

namespace CodeRazorPages.Pages;

public class IndexModel : PageModel
{
  public void OnGet()
  {
  }
}

This is a class that derives from the PageModel class. It's wired up to the cshtml file using the @model declaration on the Index page:

@page
@model IndexModel

By doing this, you have access to the PageModel instance. If you were to add a property to the PageModel class, you could access it here. For example:

public string Title { get; set; } = "Welcome";
public List<double> InvoiceTotals { get; set; }
  = new List<double>();

public void OnGet()
{
  for (var x = 0; x < 10; ++x)
  {
    InvoiceTotals.Add(
      Random.Shared.NextDouble() * 100);
  }
}

You can see here that you can create content (in this case, random invoice totals) and then use it:

<h1>@Model.Title</h1>
<p>Today is @DateTime.Now.ToShortDateString()</p>
<h3>Invoices</h3>
@foreach (var invoice in Model.InvoiceTotals)
{
  <div>$ @invoice.ToString("0.00")</div>
}

Although this is a convoluted example, you could imagine reading from a database to show information in this same way.
Source Code
The source code can be downloaded from https://github.com/wilder-minds/CodeRazorPages and from the www.CODEMag.com page associated with this article.

@addTagHelper *,
  Microsoft.AspNetCore.Mvc.TagHelpers

If you're not getting access to the tag helpers, this file is missing or incorrect. Now that you can see how individual pages work, let's look at how to better compose pages from individual components.

Composing Razor Pages
Like Web Forms, Razor Pages has the idea of breaking out individual parts of a page, like UserControls and MasterPages. Let's see how that looks in Razor Pages.

Using Layouts
Having the entire HTML page defined in every Razor Page would be a waste of time. Just like Web Forms, you can have master pages, which are called Layouts in Razor Pages. Layouts are kept in a folder called Shared (this folder is part of the search path for several kinds of files, e.g., partials, layouts, etc.). This way, they're available to every page that needs them. By convention, a Layout is called _Layout, as you can see in Figure 4.

The Layout will usually have the HTML boilerplate that you want on every page. You can call RenderBody() to tell the Layout where you want the page's content to appear:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible"
    content="IE=edge">
  <meta name="viewport"
    content="width=device-width,
    initial-scale=1.0">
  <link href="~/css/site.css" rel="stylesheet" />
  <title>Razor Page</title>
</head>
<body>
  <div>
    @RenderBody()
  </div>
</body>
</html>

A page then opts into that Layout:

@page
@{
  Layout = "_layout";
}
<h1>Razor Page</h1>
<p>Today is @DateTime.Now.ToShortDateString()</p>

Now that you have a Layout, you might find it useful to share information with the Layout. The most common scenario here is to set the title tag in the header. You can use a property bag called ViewData. If you set this on the Razor Page, you'll be able to access it in the Layout. For example, in your Razor Page:

@page
@{
  ViewData["title"] = "Razor Pages Example";
}
<h1>@ViewData["title"]</h1>
<p>Today is @DateTime.Now.ToShortDateString()</p>

Notice that even though you're setting it, you can use it on your page as well. In the Layout, you can just refer to the ViewData object:

<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible"
    content="IE=edge">
  <meta name="viewport"
    content="width=device-width,
    initial-scale=1.0">
  <link href="~/css/site.css" rel="stylesheet" />
  <title>
    CODE Magazine - @ViewData["title"]
  </title>
</head>

Notice how you can use the ViewData to insert into content, not just be the entire content. In this example, every page's title starts with some boilerplate, but then each page can specify its own title.

In addition, Layouts support the concept of Sections: an area of the page that the Razor Page itself can insert into the Layout. For example:

<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible"
Introduction to Snowflake
Cloud computing is now in its second decade of existence, and those two decades of development have been nothing but
astounding. In these two decades, we’ve moved way beyond virtual machines and cloud storage. We now have tools like cloud
functions (lambdas), containerization, and too many technologies to discuss in one article. One area that’s seen a huge amount
of innovation is cloud databases. We have legacy database services hosting familiar databases like MySQL, Postgres, and SQL Server. There are also offerings for many other styles of data, such as documents, columnar data, key-value data, etc.

One of the relatively new entrants to cloud computing, and the focus of this article, is called Snowflake. Snowflake is a unique offering because it provides many capabilities that developers need today. Need a cloud-based SQL database? Need a database capable of querying JSON data stored in a column? Need the ability to securely share data with external vendors without exposing your infrastructure to them? Snowflake handles many of these concepts very well.

Rod Paddock
rodpaddock@dashpoint.com

Rod Paddock founded Dash Point Software, Inc. in 2001 to develop high-quality custom software solutions. With 30+ years of experience, Rod's current and past clients include: Six Flags, First Premier Bank, Microsoft, Calamos Investments, The US Coast Guard, and US Navy. Along with developing software, Rod is a well-known author and conference speaker. Since 1995, Rod has given talks, training sessions, and keynotes in the US, Canada, and Europe. Rod has been Editor-in-Chief of CODE Magazine since 2001.

In my 30+ years of software development, I've seen many products come and go. Every once in a while, I come across a product that I consider a game changer. The product that I'm about to discuss is called Snowflake, and I can say without a doubt that this is a game-changing product.

Imagine a request from the marketing department: "We'll be receiving a huge dataset from Widgemo, Inc. We'll be performing analytics on it and a quick turnaround is paramount. This dataset will be around 100,000,000+ records and will be pushed to the usual Amazon bucket. I know that's not hard to handle but there's an exception. Isn't there always? This dataset has PII (personally identifiable information) in it and needs to be viewable only by the analytics team. Oh, and there's one more little item. We need to share the raw results with the client in real-time every time we process the data. Please get back to us with an estimate."

There are two ways to handle this request. The first way is a classic cloud process. You add resources to the EC2 (server) and RDS (database) instances to handle the load.

1. Create a process to download the data from S3 into a set of objects.
2. Send that data into Postgres (or SQL Server, MySQL, etc.).
3. Create a process for querying the data masked or unmasked.
4. Create a process to send data to the client's preferred sharing technology (downloadable file, SFTP, S3 bucket, Azure Storage, etc.).
5. Test and put the process into production.
The second way uses Snowflake:

1. Add resources.

ALTER WAREHOUSE SET WAREHOUSE_SIZE=LARGE;

2. Import data from S3.

COPY INTO FROM S3 <COMMAND OPTIONS HERE>

3. Control PII information.

CREATE OR REPLACE MASKING POLICY address_mask
  AS (VAL string) RETURNS string ->
  CASE
    WHEN current_role() IN ('MARKETING') THEN VAL
    ELSE '*********'
  END

4. Share data with the client.

CREATE SHARE <COMMAND OPTIONS HERE>

As you can see, each of these processes can satisfy the user's request. The big difference is the time from request to production. The first process could take a day or more, depending on the development team's backlog. The second process is only a few commands from start to finish. Speed is a definite competitive advantage, and it's the built-in features of Snowflake that enable it. Let's have a look.

Snowflake Features
Snowflake is a strange amalgamation of many modern divergent database concepts. The following list represents some of the high-level features:

• It's SQL compliant, offering SELECT, INSERT, UPDATE, DELETE, etc., and CREATE TABLE, CREATE VIEW, CREATE SCHEMA, and JSON querying.
• Snowflake can store and query JSON data stored in a specialized column type.
• It's cloud agnostic (Azure, AWS, GCP). I call this BYOC (bring your own cloud). You can have your Snowflake infrastructure set up on the cloud provider of your choice.
• Embedded Python. Snowflake can embed and call Python code from your queries and procedures.
• Secure data sharing. Snowflake can both share and consume shared data with other Snowflake instances. The cost of compute in shared environments is paid by the consumer of that data, not the host.
• Multiple client access. You can access your data using the technology of your choice. There are drivers for ODBC, Python, Node, and Go.
• Pay as you go with "infinite" scaling. With the flick of a switch (really, a simple command line), you can add or reduce your compute power for queries performed.

"Infinite" Computing with Snowflake Warehouses
When you're dealing with Snowflake, you must understand that the concept of the warehouse isn't what you normally think of: organized data storage. Instead, a warehouse in Snowflake represents the compute resources used to run your queries.

A good metaphor is that it's like a car into which you can add cylinders on the fly. Want to save gas ($$$)? Choose a small car. Want to head down the Autobahn at 200 km an hour? Make the car an eight cylinder. Snowflake COMPUTE is run in powers of 2: 1, 2, 4, 8, 16, 32, and on up to 256 cylinders of COMPUTE power. The unique aspect of Snowflake is that you can control how many cylinders you use for any given transaction. You pay for them, but the power of upgrading a single query or set of queries with the flick of a switch is compelling.

You pay for it using Snowflake's simple model to pay for credits based on the edition of Snowflake you create. Credits are used to pay for credit hours: one credit = one credit hour. Your warehouse size is how the system determines how credit hours are billed and you pay more credits for the larger warehouses. All queries are billed on a per-minute basis. Figure 1 shows the prices per credit hour based on edition.

To better understand how this works, let's look at an example. The following code is simple. It creates new tables using different warehouse sizes. The source table is called DEMO_DATA and has 100,000,000 records in it. This code creates new copies of that table using different warehouse sizes. The command is as follows:

ALTER WAREHOUSE SET WAREHOUSE_SIZE=SMALL;

CREATE TABLE DEMO_DATA_SMALL AS
SELECT * FROM
PRARIE_DEVCON_DATABASE
.PUBLIC.PEOPLE_HUGE;

ALTER WAREHOUSE SET WAREHOUSE_SIZE=MEDIUM;

CREATE TABLE DEMO_DATA_MEDIUM AS
SELECT * FROM
PRARIE_DEVCON_DATABASE
.PUBLIC.PEOPLE_HUGE;

ALTER WAREHOUSE SET WAREHOUSE_SIZE=LARGE;

CREATE TABLE DEMO_DATA_LARGE AS
SELECT * FROM
PRARIE_DEVCON_DATABASE
.PUBLIC.PEOPLE_HUGE;

ALTER WAREHOUSE SET WAREHOUSE_SIZE=XXLARGE;

CREATE TABLE DEMO_DATA_XXLARGE AS
SELECT * FROM
PRARIE_DEVCON_DATABASE
.PUBLIC.PEOPLE_HUGE;

The results are as follows:

• SMALL: 24 seconds
• MEDIUM: 19 seconds
• LARGE: 12 seconds
• XXLARGE: 8 seconds

As you can see, the performance gain for larger warehouse sizes is significant. In some cases, the tables contain over a billion records. The difference is astonishing.
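Note that the masking policy in step 3 only defines the masking rule; it takes effect once it's attached to a column. A minimal sketch of that attachment step, using a hypothetical CUSTOMERS table and ADDRESS column (both names are illustrative, not from the article's demo):

```sql
-- Attach the policy: the MARKETING role sees real values,
-- every other role sees the masked string.
ALTER TABLE CUSTOMERS
  MODIFY COLUMN ADDRESS
  SET MASKING POLICY address_mask;
```

Once attached, the policy is evaluated on every query against that column, which is why the shared data shown later in this article arrives masked for non-MARKETING roles.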
Figure 3: The Activity log is used to monitor the performance, success, and failure of commands executed against Snowflake.
Figure 4: The Admin screen is used to add Users, Groups, Warehouse, etc.
file_format=(type='csv'
COMPRESSION='AUTO'
FIELD_DELIMITER='|'
RECORD_DELIMITER = '\n'
FIELD_OPTIONALLY_ENCLOSED_BY= '"'
SKIP_HEADER = 1
TRIM_SPACE = FALSE
ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE
ESCAPE = 'NONE'
DATE_FORMAT = 'AUTO'
TIMESTAMP_FORMAT = 'AUTO'
NULL_IF = ('\\N','NULL')
)
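Rather than repeating these options on every COPY INTO, the same settings can be saved as a named, reusable file format. A sketch, assuming a hypothetical name my_csv_format (the name is illustrative):

```sql
-- Reusable named file format with the same CSV options as above.
CREATE OR REPLACE FILE FORMAT my_csv_format
  TYPE = 'csv'
  COMPRESSION = 'AUTO'
  FIELD_DELIMITER = '|'
  SKIP_HEADER = 1
  NULL_IF = ('\\N', 'NULL');
```

A COPY INTO can then reference it with file_format=(format_name='my_csv_format').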
namespace SnowflakeDemo;

public string GetCopyCommand(
    string databaseName,
    string tableName,
    string stageInfo,
    string formatInfo)
{
    // Builds the COPY INTO command from its parts.
    // The body past the opening of the raw string was cut off in
    // print; the remainder is reconstructed from the parameter list.
    var retval = $"""
        COPY INTO
        {databaseName}.PUBLIC.{tableName}
        FROM {stageInfo}
        {formatInfo}
        """;
    return retval;
}

The following code shows the final COPY INTO command that you can execute via a database connection within a Snowsight worksheet.

COPY INTO
CODE_MAGAZINE_DEMO
.PUBLIC.DEMO_DATA FROM
's3://dashpoint-demo/SampleData.csv'
CREDENTIALS = (AWS_KEY_ID='ACCESS_KEY'
AWS_SECRET_KEY='SECRET_KEY')
If you wish to execute this command from your applications, you need to add one more code fragment. The following code should be familiar. You simply open a connection, create a command object, and call ExecuteNonQuery.
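That connection-and-command step might look like the following sketch. It assumes the Snowflake .NET driver (the Snowflake.Data NuGet package); the connection string values and stage name are placeholders, and this isn't the article's exact listing.

```csharp
using Snowflake.Data.Client;

// Open a connection, create a command, and run the COPY INTO.
// Connection string values here are placeholders.
using var conn = new SnowflakeDbConnection();
conn.ConnectionString =
    "account=my_account;user=my_user;password=****;db=CODE_MAGAZINE_DEMO";
conn.Open();

using var cmd = conn.CreateCommand();
cmd.CommandText =
    "COPY INTO CODE_MAGAZINE_DEMO.PUBLIC.DEMO_DATA FROM @my_stage";
cmd.ExecuteNonQuery();
```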
Figure 9: A sample of data imported using the pipeline code in a masked state due to the ROLE selected via Snowsight
Sharing Data
Once you have data in Snowflake, you can share that data with other Snowflake users. The real benefit of this is that the consumer of that data pays for the compute costs. With other cloud databases, the owner of the data is responsible for its consumption costs.

You have a couple of choices when sharing data. The first choice is to share data with a consumer in the same region where your Snowflake instance was set up. This is by far the simplest mechanism for sharing data and, from my experience, the most common use case.

The second choice is to share data with a consumer residing in a different region or other cloud provider. This type of sharing uses Snowflake's replication tools. For this article, I'll be exploring the simple use case.

Outbound Shares and Inbound Shares
There are two basic share types. Outbound shares allow you to share data with other Snowflake users from one or more of your Snowflake instances. Inbound shares are data sources provided to you by other Snowflake users.

You have two ways to create an outbound share: via SQL code or via Snowsight. This article demonstrates the SQL code way to do this. The first step is to execute commands from a worksheet like this:

CREATE SHARE CODE_MAG_OUTBOUND;

GRANT USAGE ON DATABASE
CODE_MAGAZINE_DEMO TO
SHARE CODE_MAG_OUTBOUND;

GRANT USAGE ON SCHEMA
CODE_MAGAZINE_DEMO.PUBLIC
TO SHARE CODE_MAG_OUTBOUND;

GRANT SELECT ON
CODE_MAGAZINE_DEMO
.PUBLIC.DEMO_DATA
TO SHARE CODE_MAG_OUTBOUND;
ALTER SHARE CODE_MAG_OUTBOUND
ADD ACCOUNTS=XXXXXX;

Creating Databases from Shares
Once data has been shared with you from another Snowflake account, you can access that data via a database that's created from the share. There are two ways to create a database from a share. The first is Snowsight. You do this by selecting Data-Private Sharing from Snowsight. Then choose "Shared with You" on the data sharing screen. This brings up a screen with datasets shared with you, as shown in Figure 10.

Click on the share you wish to create a database for. You will be presented with the Get Data dialog, as shown in Figure 11.

From this dialog, you can specify the name of your database. You can also optionally specify which security group(s) can access this database.

The second way to create a database from a share is to execute a command from a Snowsight worksheet. To create a database from a share, issue this command:

CREATE DATABASE <DATABASE NAME>
FROM SHARE <ACCOUNT>.CODE_MAG_OUTBOUND;

The share name parameters for this command can be derived from the screen in Figure 11 from the previous example.

Once you've created a database from a share, you can access it via Snowsight, as shown in Figure 12.

Figure 13 shows data shared from the demos created earlier in this article. One thing you'll notice immediately is that the masking policy is in effect on shared data.

Figure 13: The shared data has the masking policy in effect

I've been working with Snowflake for several years now and it wasn't until 2022 that I realized that the ability to share data in such a simple fashion is monumental. We no longer need to set up S3 buckets or Azure Blob Storage accounts. We simply grant access to a set of tables/views and the customer can access them using the SQL commands they're already proficient in.

Conclusion
Snowflake is that rare product that possesses immense power yet can be used by mere mortals to access that power. This article only scratches the surface of what Snowflake is capable of. Other areas of interest to you might include Python integration, clean room technology, and more interesting use cases using another "snow" technology called Snowpipe. Give them a look. You won't be disappointed.
…that much? Kubernetes abstracts the entire infrastructure on which your services run. It doesn't matter whether your services run on bare-metal servers in-house or in a cloud environment. Developers don't have to change anything when switching from one to the other. This provides a lot of freedom. You move your cluster around as required. Scaling up your services also becomes trivial. If you need more services of a particular type to handle the workload—you just increase the number of Pods of that service and you're done. Of course, when you run out of physical resources, you must increase your worker nodes, but integrating them into the cluster is straightforward and Kubernetes takes care of scheduling new Pods to the new worker nodes.

That's great! Kubernetes provides a lot of flexibility to developers and DevOps engineers. Moving around the entire cluster, scaling up as necessary, there are so many good reasons to go for it. But what about security in this context? Is Kubernetes secure? Or, more precisely, did you set up your cluster in a secure way? Are there any loopholes?

Kubernetes provides a lot of configurations to make your cluster secure. The crucial thing is that you need to know about them. Let's look at a basic example. Think about a Kubernetes newbie who starts to configure the first deployments and services in a cluster. Because everything runs in a cluster, our newbie believes it's isolated from everything: the host, the network, the outside world, etc. Our newbie couldn't be more wrong.

By default, the pods can reach the internet without restrictions. They can access HTTP servers and so on. They also don't have a read-only file system. Hence the application within the container of the Pod, or anyone else in the container (I'll talk about this later at length), can modify files. Further, Kubernetes maps, by default, the service account token of the user running the Pod into the container. That's a very handy tool for anyone inside a container because it serves as an authorization token to the Kubernetes API server. Using that token, anyone inside the container could potentially start new Pods, delete existing deployments, alter ConfigMaps, and so on. What a nice, insecure, cluster!

As you see, by default, your cluster is anything but secure. Depending on the application you run in the cluster, this poses a significant problem. Suppose your application deals with medical data or credit card information. Having such security issues in a cluster may even prevent admission to the market.

Luckily, you found this article, because I'm here to help you out. I'll cover the security baseline for a Kubernetes cluster. I'll first explain at a very high level how Kubernetes works, what it does, and what the purpose of it is. Then I'll elaborate on the common issues I usually come across in my daily work as a senior security consultant. They range from broken network isolation over insecure host path mounts to missing resource limitations, just to name a few. Independent of your level of expertise with regard to security in Kubernetes, I'll provide you with (hopefully new) insights into Kubernetes security.

Let's Have a Look at What Kubernetes Is
Before I talk about security in Kubernetes, I'll give a brief introduction to what Kubernetes, also known as k8s, is, what it does, and how it works.

What's Kubernetes?
Kubernetes orchestrates the deployment of containers on computers down the line. It manages the entire lifecycle of all containers running in such a cluster, including their creation and destruction, and also inter-container communication. For that purpose, the Kubernetes API server manages, uses, and controls worker nodes. Such nodes can be seen as computational resources, like servers in a data center. Worker nodes just run containers and set up the network routing to enable inter-container and external communication. Of course, under the hood, they perform many more tasks, but for understanding Kubernetes, it suffices to know that worker nodes run the containers. To know which containers the operator wants to run, the Kubernetes API server uses a database, the famous etcd database. The operator of the cluster defines, in terms of YAML (yet another markup language) files, which containers, services, or other resources should run within the cluster. The Kubernetes API server persists those into the etcd database. The Kubernetes API server reads that configuration from the database and spawns instances of the required resources on worker nodes.

Let's walk through a concrete example to get more familiar with the terms. Suppose you develop a back-end system comprising three services, each of those corresponding to a single container. You have a billing service, a shipping service, and an order service. All three together make up your back-end application, as shown in Figure 1.

If you go with Kubernetes as your deployment infrastructure to run the entire back-end, you probably wrap each of your services first into a Deployment, DaemonSet, ReplicaSet, or similar resource. Those three types correspond to executable, small, individual applications that run in Kubernetes. The operator defines in a YAML file how the Deployment for instance looks. Specifically, the contents of this file tell Kubernetes where to find the container image(s) of the application, the ports it requires, the file system mounts, etc. The Kubernetes API server, in turn, runs such a Deployment as a Pod. A Pod corresponds to a running instance of a Deployment, DaemonSet, or ReplicaSet. Where do Pods run? They run on a worker node, …

Alexander Pirker, PhD
apirker.consultant@gmail.com

Alexander works as a senior security consultant. In his daily work, he performs security audits including assessments, penetration testing, and security reviews. He holds secure coding workshops, and also gives trainings for Kubernetes security and provides consulting services for software design and architecture. He has experience in designing microservices and desktop or mobile applications and also in writing or migrating them. He received a PhD in Physics from the University of Innsbruck and holds master's degrees in both Technical Mathematics and Biomedical Informatics. In his free time, he likes to go to the gym and also enjoys hiking in the Alps.
On top of Deployments and other Pod-creating resources (there are many more than just Deployment, DaemonSet, or ReplicaSet), operators define Services. Services group together Deployments containing a certain label, and they act as a load balancer to Pods comprising a Deployment. Services provide Pods with a single network identity, one that other Pods use for network connections. For a Service, it doesn't matter on which worker node the Pods run. Kubernetes forwards the traffic to Services that ultimately distribute the workload among their Pods. In total, there are four types of Services, including ClusterIP (only reachable within the same cluster) and LoadBalancer (reachable from outside the cluster).

To separate services, Kubernetes allows operators to define Namespaces. A Namespace in Kubernetes groups together related services and other components into a larger resource. In big companies that run huge back-end applications, very often one team owns a single Namespace and all the services within it. Namespaces allow you to group together services forming a cohesive part of your entire back-end system. In the example from Figure 1, I identify three Namespaces: billing, shipping, and order. Figure 3 shows how the billing Namespace looks.

You now understand the basic deployable unit in Kubernetes, the Pod, a bit better. You know when they come to life and what happens if one of them crashes. Kubernetes takes care of recreating your lost Pod to restore the state of the cluster as the operator configured. But Kubernetes also defines other interesting resources. For example, most back-end applications require some form of configuration. Very often, operators configure services through config files, like the appsettings.json file when you run an ASP.NET Web API. Instead of baking an appsettings.json file into your container image, it would be great to have it configurable. That allows you to change the configuration of your service on-the-fly without rebuilding a new Docker image.

For that purpose, Kubernetes provides the ConfigMaps resource. Operators use ConfigMaps to persist configuration files, or other configurations, to the etcd database of the Kubernetes API server. Deployments, for example, reference such ConfigMaps within their YAML file, and Kubernetes mounts them into the Pods of the Deployment. Yes, Kubernetes fully takes care of that. Even more, Kubernetes also allows you to store your secrets, like, for instance, a database password, in Secrets. Secrets work like ConfigMaps, but instead of storing the values in plaintext, Kubernetes base64-encodes them.
Code Sample

You find an example project implementing some of the security recommendations from here at https://github.com/apirker/k8s-sec. Some of the code is also available connected to this article on www.CODEMag.com.

Consider the setting that Figure 13 shows. The figure shows the example cluster with three Namespaces, namely the billing, shipping, and order Namespace. The shipping and order Namespaces both have ResourceQuota resources in place, essentially limiting the overall resources of all Pods within each of those Namespaces. By resources, I mean the sum of all CPU resources or the sum of the entire memory that all Pods of a Namespace occupy together. However, note that the billing Namespace doesn't have a ResourceQuota resource in place. That essentially allows the billing Namespace to bind all of the available resources of the cluster! So, in case an attacker manages to take over a Pod (whose resources are unconstrained through LimitRanges or similar) or even more within the billing Namespace, the attacker could monopolize all the physical resources of the cluster, thereby bringing the entire cluster into a Denial-of-Service situation due to resource exhaustion. That's bad for a cluster, because the goal of Kubernetes is to have your microservices up and running 24/7. Figure 13 depicts the situation.

Figure 13: If one namespace doesn't have a ResourceQuota in place, that namespace could monopolize resources.

Use ResourceQuotas and LimitRanges to limit the resources that a namespace or Pod uses, at most.

That was for entire Namespaces. Now let's have a look at one individual Namespace. Consider the shipping Namespace that has a ResourceQuota resource in place to limit the overall resources within the entire Namespace. Recall that there were two Services running in the shipping Namespace, the shipping Service and the logistic Service. Suppose each of those Services contains one Pod. Now, let's assume that the shipping Pod has a Limits configuration (in the pod definition) in place, whereas the logistic Pod does not. Well, that's quite an asymmetrical situation. If an attacker now manages to take over the logistic Pod, the attacker could monopolize all the resources of the shipping Namespace using the logistic Pod, thereby bringing the shipping Pod into an interesting situation when it comes to acquiring CPU or memory. It potentially results in a Denial-of-Service situation for the Services running in the shipping Namespace. In contrast to the previous example with ResourceQuotas, here, only the shipping Namespace will be affected. It's still quite a severe situation for the entire cluster if one Namespace isn't working as it should. Figure 14 shows the situation.

In summary, you'd better take care of the resources within your cluster. Worker nodes and their resources are precious because in Kubernetes, it's all about how to distribute workload across available resources in a smart way. The code snippet below shows the definition of a ResourceQuota.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu
  namespace: order
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi

Finally, There Are Also Users in Kubernetes
Kubernetes always requires operators in one or the other form to set up the cluster or intervene if something isn't working as it should. Operators specify the Services run…
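The per-Pod counterpart to a ResourceQuota is a LimitRange, which applies defaults and caps to containers that don't declare their own limits, closing the "logistic Pod has no Limits" hole described above. A sketch with illustrative values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: pod-limits
  namespace: shipping
spec:
  limits:
  - type: Container
    default:            # applied when a container declares no limits
      cpu: "500m"
      memory: 512Mi
    defaultRequest:     # applied when a container declares no requests
      cpu: "250m"
      memory: 256Mi
```

With this in place, even a compromised Pod without its own Limits configuration can't monopolize the Namespace's resources.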
Figure 14: Limit and LimitRanges restrict the resource consumption of individual Pods.

Roles and ClusterRoles build around resources and verbs. The resource corresponds to the resources of a certain type that the cluster runs, like, for example, Pods, ConfigMaps, or other kinds of Kubernetes resources. The verbs specify the action a certain Role or ClusterRole is allowed to perform for that resource. Verbs include "get" or "list," but also "create," for example. If a certain Role or ClusterRole should be able to read Secrets, you grant the "get" verb on Secrets. To associate a certain Role or ClusterRole with an operator, you specify RoleBindings or ClusterRoleBindings. Those two resources associate subjects, like a user, with a Role or ClusterRole. As you see, it's straightforward to set up the access control for your cluster.
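The resource/verb model just described can be sketched as a Role plus a RoleBinding; the role name, namespace, and user are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: billing
rules:
- apiGroups: [""]            # "" = the core API group
  resources: ["secrets"]
  verbs: ["get", "list"]     # read-only access to Secrets
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: billing
subjects:
- kind: User
  name: jane                 # hypothetical operator account
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole and ClusterRoleBinding follow the same shape, minus the namespace, and grant the permissions cluster-wide.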
In today’s digital age, it’s increasingly important for de- • Scalability: The extent to which the application can
velopers to be able to create event-driven applications serve requests by creating new instances of the ap-
quickly and efficiently. One technology that’s helping plication
make this possible is the Distributed Application Runtime • Synchronous model: The client sends an HTTP re-
(Dapr). This open-source framework enables developers to quest to the web server and then waits until it re-
build serverless and microservice applications using any ceives a response.
language and runtime, making it an ideal platform for
building modern cloud-native applications. The traditional communication pattern used in most web
applications is the request-response approach. This model
Joydip Kanjilal This article discusses event-driven architecture, its archi- involves the client making a synchronous HTTP request
joydipkanjilal@yahoo.com tectural components, the concepts related to Dapr, why to the web server and then waiting for a response. The
it’s useful, and how you can work with it in .NET 7 Core. request-response model is also used in a microservices
Joydip Kanjilal is an MVP architecture. Eventually, there’s a chain or series of HTTP
(2007-2012), software In order to work with the code examples discussed in this requests and responses working in a synchronous manner.
architect, author, and article, you need the following installed in your system:
speaker with more than
Now, let’s assume that one of the requests in the chain
20 years of experience.
• Visual Studio 2022 waiting for a response times out because it doesn’t re-
He has more than 16 years
• .NET 7.0 ceive a response from the web server. As a result, your
of experience in Microsoft
.NET and its related • ASP.NET 7.0 Runtime application becomes non-responsive or blocked. An in-
technologies. Joydip has • Dapr Runtime crease in the number of services increases synchronous
authored eight books, HTTP requests and responses, which explains why a failure
more than 500 articles, If you don’t already have Visual Studio 2022 installed, of one of the systems will also impact the other systems.
and has reviewed more you can download it here: https://visualstudio.microsoft.
than a dozen books. com/downloads/.
What Are Events and Notifications?
You’ll be building two event-driven applications using In the content of an event-driven applications, an event
Dapr. In this article, you’ll: is defined as a significant change of state that can trig-
ger an action. For example, a user enters login credentials
• Understand the importance of event-driven applica- in a login form and then presses the Login button. An
tions event occurs when something happens, like a user clicks
• Learn Dapr and its architectural building blocks on a button or presses a key. The application waits for
• Build a simple Minimal API application (named an event and when one occurs, it executes an appropri-
DaprDemo) in ASP.NET 7 Core ate handler. The handler can perform any task, including
• Configure the DaprDemo application to provide sup- changing the application’s state.
port for Dapr
• Connect to the DaprDemo Minimal API application An event has the following characteristics:
using Dapr HttpClient
• Connect to the DaprDemo Minimal API application • It serves as a record that something has occurred.
using .NET HttpClient • It’s lightweight, distributed, and encapsulates a
• Connect to the DaprDemo Minimal API application change of state.
using DaprClient • It can be distributed via channels that include
• Implement a Publish/Subscribe application using streaming and messaging.
Dapr • It’s immutable, which means that it can’t be altered
or removed.
• It can be persisted indefinitely.
From a Traditional Request- • It can be consumed an unlimited number of times
Response Model to an Event-Driven by an event consumer.
Approach
There are several challenges that modern-day web appli- A notification refers to a message used to inform the oc-
cations face: currence of an event within the application, such as a
new record added to the database, an email sent, etc.
• Availability: Whether one or more services are avail- Typically, this consists of a unique identifier, the details
able—up and running—to serve incoming requests of an event, and the context, such as the date, time, loca-
60 Building an Event-Driven .NET Core App with Dapr in .NET 7 Core codemag.com
tion, etc. An event consumer can determine whether the
event should be processed from this metadata.
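A notification carrying the identifier, event details, and context just described might look like the following; all field names and values are purely illustrative:

```json
{
  "notification_id": "a1b2c3d4",
  "event_type": "user_login",
  "timestamp": "2023-03-01T10:15:00Z",
  "source": "web",
  "details": { "email": "someuser@example.com" }
}
```

A consumer can inspect this metadata alone, without fetching the full event, to decide whether the event is relevant to it.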
Event Schema
An event schema denotes a specified format used to
define an event record. Although the producers publish
event records that comply with this schema, the consum-
ers should know this format to read one or more events.
Here’s how a typical event would look:
{
"message_type": "user_login", Figure 1: Demonstrating the components of an Event-Driven Architecture
"email": "someuser@example.com",
"content": "Thanks, you're now logged in!"
} consumer to process the messages asynchronously. Typi-
cally, an event-driven architecture is made up of the Pro-
ducer, the Message Broker, and the Consumer.
Event-Driven Architecture
An event-driven architecture is defined as a software A publisher publishes or creates a message to describe the
architectural pattern in which a conglomeration of de- event, converts it into a message, and presents it to the
coupled components is capable of publishing and sub- event router for further processing. The event producer
scribing to events asynchronously via an event broker. and the event consumer (also known as the event sub-
Event-driven architectures can create, identify, consume, scriber) are decoupled from each other.
and respond to events and can be asynchronous and mes-
sage-based. An event-driven architecture is a good choice A message broker acts as an intermediary to acquire,
for building applications that are loosely coupled, such as store, and deliver events to the event consumers. A mes-
microservices. Figure 1 shows the components of a typi- sage broker should be highly scalable and reliable and
cal event-driven architecture. ensure that events are not lost during a system failure.
It should be noted that the event broker is optional if
Benefits and Downsides you have a single event producer connected directly to a
The primary advantages of an event-driven architecture single event consumer, i.e., the event producer can send
include increased responsiveness, scalability, and agil- the messages to the event consumer directly.
ity. Responding to real-time information and integrating
additional services and analytics can instantly improve There are two types of message brokers: store-backed and
business processes and customer experiences. Most or- log-based. The former stores the events in a data store to
ganizations believe that the benefits of modernizing IT serve one or more consumers and then purges the events
infrastructure outweigh the costs associated with event- once they have been delivered, and the latter stores the
driven architecture. events in logs and persists the events even after they are
delivered.
In an event-driven architecture, because the event produc-
ers and consumers are decoupled from each other, the out- The consumer receives the message from the event broker
age of one service doesn’t affect the availability of other and performs the appropriate action, i.e., processes the
services. As a result, even when consumers are unavailable, events asynchronously. In other words, a consumer receives
producers can continue to produce event messages. Like- notifications of newly created events and processes those
wise, a consumer listens for the availability of new event events asynchronously. Figure 2 shows the producer, con-
messages but isn’t affected if the producer is down. sumer, and event broker in action where two of the three
consumers have subscribed to more than one producer.
Event-driven architecture provides several benefits:
An event-driven architecture works asynchronously and in
• Loose coupling a decoupled manner, which allows it to scale efficiently.
• Immutability The producer sends a notification when an event occurs
• Independent failure but it’s not bothered about the destination of the noti-
• Scalability fication, i.e., where the notification will eventually be
delivered.
However, there are certain downsides to using event-
driven architecture: Instead, the event broker (also known as an event router)
is responsible for distributing the events as appropriate.
• Increased complexity Event consumers (also known as sinks) start processing
• Difficulty in monitoring the events as they arrive. This allows all of the services
• Difficulty in debugging and troubleshooting to process the events asynchronously. Figure 3 illustrates
how an event broker works.
Components of Event-Driven
Architecture Event-Driven Architecture Patterns
In an event-driven architecture, the producers and con- Event-driven architectures have two different architec-
sumers are decoupled from one another, enabling the ture models: pub/sub and event streams.
codemag.com Building an Event-Driven .NET Core App with Dapr in .NET 7 Core 61
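The broker-centric dispatch described above can be sketched in a few lines of C#. This is a toy, in-memory stand-in for a real event broker (no Dapr, no persistence, no network), intended only to show how the producer stays unaware of its consumers; all names here are illustrative, not from the article.

```csharp
using System;
using System.Collections.Generic;

// Minimal in-memory event broker: producers publish to a topic,
// consumers (sinks) subscribe to it, and neither side knows the other.
public static class EventBroker
{
    // Topic name -> list of subscriber callbacks.
    private static readonly Dictionary<string, List<Action<string>>> _subscribers = new();

    public static void Subscribe(string topic, Action<string> handler)
    {
        if (!_subscribers.TryGetValue(topic, out var handlers))
            _subscribers[topic] = handlers = new List<Action<string>>();
        handlers.Add(handler);
    }

    // The producer only names the topic, never the consumers. Publishing
    // succeeds even when no consumer is registered, mirroring the
    // "producer isn't affected by consumer availability" property.
    public static void Publish(string topic, string payload)
    {
        if (_subscribers.TryGetValue(topic, out var handlers))
            foreach (var handler in handlers)
                handler(payload);
    }
}

public static class Demo
{
    public static void Main()
    {
        EventBroker.Subscribe("orders", msg => Console.WriteLine($"Consumer A got: {msg}"));
        EventBroker.Subscribe("orders", msg => Console.WriteLine($"Consumer B got: {msg}"));

        EventBroker.Publish("orders", "OrderCreated:1001");
    }
}
```

A real broker adds durability, ordering, and delivery guarantees on top of this shape; that is exactly the infrastructure Dapr's pub/sub building block provides later in the article.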
In a pub/sub model, each consumer gets messages in a topic in exactly the same order in which they were received. By subscribing to event streams, this messaging pattern enables asynchronous communication between disparate system components. When an event is published, an event notification is delivered to all subscribers that have subscribed to the event. Note that multiple event subscribers can listen to the same event.

An event streaming model involves processing a sequence of events asynchronously. Instead of delivering the events to the event subscribers, the published events are written to a stream store, usually a log. The event consumers check the stream store or the log to determine whether new messages are available to be processed. Additionally, because the events are persisted, the event consumers can join and start reading events from an event stream at any point in time.

Use Cases of Event-Driven Architecture
Some of the common use cases of event-driven architecture are applications that listen to, process, and transmit events using a decoupled architecture. In a typical event-driven architecture, the application is a conglomeration of different components that communicate with each other via events. Events are messages sent between the various components of the application. Events can be triggered by user interactions or by triggers from external services via webhooks.

Key Concepts: Publishers, Subscribers, Sources, and Sinks
In contrast to conventional architectures that react to user-initiated queries and deal with data in batches that are added and altered at predefined intervals, event-driven architecture-based applications respond to events as they occur. Publishers, subscribers, sources, and event consumers (sinks) are the essential concepts of event-driven architecture.

A publisher is the component responsible for capturing event data and storing it in a repository. The subscriber is responsible for consuming the event data and responding to the event. A source denotes the component where the event originates, and sinks represent the destinations to which the event subscribers send data.

Figure 3: Demonstrating how event dispatch works in an event-driven architecture

What's Dapr?
Dapr is an open-source, language-agnostic, event-driven runtime that enables developers to build resilient and distributed applications that are portable, reliable, and
scalable. Dapr's runtime handles the underlying infrastructure, so developers can concentrate on developing their applications rather than focusing on the infrastructure. Built on the actor model, it allows loose coupling of components and helps to make your app more resilient when a failure occurs.

Dapr is a distributed application runtime that simplifies the development of cloud-native applications, enabling developers to focus on business logic rather than on infrastructure components. Dapr has several architectural components that make it an ideal choice for developing applications in the cloud.

Sidecar Architecture
Dapr coexists with your service as a sidecar, running in its own memory process or container. A sidecar is a secondary piece of software deployed alongside the main application and run in tandem with it, typically in a separate process.

Because sidecars are external to the service they connect to, they offer isolation and encapsulation without impacting the service itself. This isolation enables you to create a sidecar using many different programming languages and platforms while allowing it to have its own runtime environment.

Why Use Dapr for Building Event-Driven Applications?
Dapr provides a number of benefits for developers building event-driven applications:

• Portability: Dapr applications can be deployed on any platform, including Kubernetes, Service Fabric, and Azure Functions.
• Scalability: Dapr can scale horizontally and vertically, meaning that applications can be scaled up or down as needed without requiring code changes.
• Resilience: The actor model used by Dapr helps to make your app more resilient against failures. If one component fails, the others can continue working, ensuring that your app remains responsive even in the face of adversity.
• Flexibility: Developers can choose from a variety of programming languages when building Dapr apps, including C#, Java, Node.js, and Python.
• Ease of use: Dapr apps can be deployed using a simple YAML file, making it easy to get started with event-driven development.
• Loose coupling: A Dapr-based application's components are loosely connected, meaning they can be built and deployed independently. As a result, your application will be more modular, manageable, and simpler to maintain and extend over time.

Dapr also provides a set of building blocks that address concerns common to distributed applications:

• Service-to-service invocation: Performs secure, direct, service-to-service calls
• State management: Supports the creation of long-running stateful and stateless services
• Publish and subscribe: Provides support for scalable, secure messaging between the services
• Bindings: Enables external resources to be attached to the application for the purpose of triggering a service or being called from a service
• Actors: Encapsulates the necessary logic and data to manage state in reusable actor objects
• Configuration: Used for easy sharing of application configuration changes and for providing notifications when changes to the configuration are made
• Observability: Used to monitor and quantify message calls across applications, services, and components deployed using a Dapr sidecar
• Secrets: Allows access to external secret stores in a secure manner

Using Dapr in Distributed Applications
Dapr is a great fit for event-driven architectures. Its pub/sub model makes it easy to decouple services and scale them independently. Dapr uses a pub/sub model for communication between services: each service can subscribe to events from other services and publish its own events. Dapr's building blocks provide capabilities that are common to distributed applications, such as state management, service-to-service invocation, and pub/sub messaging.

Dapr reduces the inherent complexity associated with distributed microservice applications. Being event-driven, Dapr plays an essential role in developing microservices-based applications because you can leverage it to design an application that reacts to events from external systems efficiently and produces events to notify other services of new facts.

Getting Started with Dapr
You can get started with Dapr locally using the Dapr Command-Line Interface (CLI). You can download a copy of it from here: https://docs.dapr.io/getting-started/install-dapr-cli/. You can configure Dapr to run in self-hosted mode, as a serverless solution such as Azure Container Apps, or deployed in Kubernetes. To verify whether the Dapr CLI has been installed properly, run the following command at the console window or the Windows PowerShell window:

dapr

Assuming Dapr has been installed successfully on your computer, when you run the Dapr executable at the command prompt, a list of all available commands of the Dapr executable is displayed. Figure 4 captures the list of these commands.

If you'd like to initialize Dapr without using Docker, you can use the following command instead:

dapr init --slim
Figure 4: Verifying the Dapr CLI installation
Building Event-Driven Applications Using Dapr in ASP.NET 7

In this section, you'll examine how to use Dapr to build event-driven applications in ASP.NET 7.

Create a New ASP.NET 7 Project in Visual Studio 2022
You can create a project in Visual Studio 2022 in several ways. When you launch Visual Studio 2022, you'll see the Start window. You can choose "Continue without code" to launch the main screen of the Visual Studio 2022 IDE.

To create a new ASP.NET 7 project in Visual Studio 2022:

1. Start the Visual Studio 2022 IDE.
2. In the Create a new project window, select ASP.NET Core Web API and click Next to move on.
3. Specify the project name as DaprDemo and the path where it should be created in the Configure your new project window. If you want the solution file and project to be created in the same directory, you can optionally check the "Place solution and project in the same directory" checkbox. Click Next to move on.
4. In the next screen, specify the target framework as .NET 7 (Standard Term Support) and the authentication type as well. Ensure that the "Configure for HTTPS," "Enable Docker Support," and "Enable OpenAPI support" checkboxes are unchecked because you won't use any of these in this example.
5. Because you'll be using minimal APIs in this example, uncheck the "Use controllers" checkbox.
6. Click Create to complete the process.
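Listing 1 itself isn't reproduced in this excerpt. Based on the description in the next section (a Product record returned from an HTTP GET /allproducts endpoint), a minimal sketch might look like the following; only the Product type name and the /allproducts route come from the article, and the individual properties are assumptions for illustration:

```csharp
// Hypothetical sketch of a Listing 1-style Minimal API (Program.cs).
using System.Collections.Generic;
using Microsoft.AspNetCore.Builder;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Returns a small, hard-coded list of Product records as JSON.
// The property names here are illustrative assumptions.
app.MapGet("/allproducts", () => new List<Product>
{
    new(1, "P0001", "Keyboard"),
    new(2, "P0002", "Mouse")
});

app.Run();

public record Product(int Id, string Product_Code, string Name);
```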
You’ll use this application in the subsequent sections in Alternatively, you can install these package(s) from the
this article. NuGet Package Manager Window. To install the required
packages into your project, right-click on the solution
Create the DaprDemo Minimal API and the select Manage NuGet Packages for Solution....
Open the Program.cs file or the Minimal API project you Now search for these package(s) one at a time in the
just created and replace the default-generated code with search box and install it/them.
the code shown in Listing 1.
Create a New Console Application Project in Visual
In Listing 1, I’ve created a record type named Product. Studio 2022
This record type is used in the HTTP GET endpoint /all- Let’s create a console application project that you’ll use
products that returns a list of records. You can update the for calling Dapr sidecar. You can create a project in Vi-
profiles section of the launchsettings.json file to launch sual Studio 2022 in several ways. When you launch Visual
the endpoint when your application starts, like this: Studio 2022, you’ll see the Start window. You can choose
Continue without code to launch the main screen of the
"profiles": { Visual Studio 2022 IDE.
"http": {
"commandName": "Project", To create a new Console Application Project in Visual Stu-
"dotnetRunMessages": true, dio 2022:
"launchBrowser": true,
"launchUrl": "allproducts", 1. Start the Visual Studio 2022 IDE.
"applicationUrl": "http://localhost:5239", 2. In the Create a new project window, select “Console
"environmentVariables": { App”, and click Next to move on.
"ASPNETCORE_ENVIRONMENT": "Development" 3. Specify the project name as DaprDemoClient and the
} path where it should be created in the Configure
}, your new project window.
"IIS Express": { 4. If you want the solution file and project to be cre-
"commandName": "IISExpress", ated in the same directory, you can optionally check
"launchBrowser": true, the “Place solution and project in the same direc-
"launchUrl": "allproducts", tory” checkbox. Click Next to move on.
"environmentVariables": { 5. In the next screen, specify the target framework you
"ASPNETCORE_ENVIRONMENT": "Development" would like to use for your console application.
} 6. Click Create to complete the process.
}
} You’ll use this application in the subsequent sections of
this article.
When you run the application, you’ll see the list of prod-
uct records displayed on your web browser. Install NuGet Package(s)
Because you’ll be using Dapr Client in this example, you
Install NuGet Package(s) should install the Dapr.Client package. You can do this
You can install the packages inside the Visual Studio 2022 from inside the Visual Studio 2022 IDE or by running the
IDE by running the following command(s) at the NuGet following command(s) at the NuGet Package Manager
Package Manager Console: Console:
Alternatively, you can install these package(s) from the NuGet Package Manager window. To install the required packages into your project, right-click on the solution and then select Manage NuGet Packages for Solution.... Now search for these package(s) one at a time in the search box and install them.

Using Dapr Sidecar with Dapr HttpClient
Lastly, open the Program.cs file of the console application project you just created and replace the default-generated code with the source code provided in Listing 2.

Execute the Application
Navigate to the directory that contains your Minimal API project file, and then execute the following command at the command window or the Windows PowerShell window to launch the productservice alongside a Dapr sidecar application:

dapr run --app-id productservice
  --dapr-http-port 3500
  --app-port 5239 -- dotnet run

Let's now understand the different options of the Dapr command you used:

• app-id: Specifies the application or service ID for service discovery
• dapr-http-port: Specifies the HTTP port that Dapr should listen on; 3500 in this example
• app-port: Specifies the port the application will listen on; 5239 in this example
• app-ssl: Optionally turns on HTTPS when Dapr invokes the application
• dotnet run: Optionally executes your Web API

Next, navigate to the directory where the DaprDemoClient project file resides and run the productserviceclient service alongside a Dapr sidecar:

dapr run --app-id productserviceclient
Figure 6: Connecting to Dapr Sidecar using Dapr HttpClient
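Listing 2 isn't reproduced in this excerpt. One way to do what this section describes (reaching the Minimal API through its Dapr sidecar from the console application) is the HttpClient factory in the Dapr.Client package. The sketch below is an assumption-laden illustration, not the article's actual listing; only the productservice app ID and the /allproducts route come from the surrounding text.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Dapr.Client;

// Sketch: call the Minimal API through its Dapr sidecar. The
// DaprClient.CreateInvokeHttpClient factory returns an HttpClient
// whose requests are routed via the sidecar to the given app-id.
class Program
{
    static async Task Main()
    {
        // "productservice" is the --app-id used when running the service.
        HttpClient client = DaprClient.CreateInvokeHttpClient("productservice");

        // The relative URI is resolved against the target app
        // through the sidecar's service-invocation building block.
        string json = await client.GetStringAsync("/allproducts");
        Console.WriteLine(json);
    }
}
```

Run it while both the service and its sidecar are up (the `dapr run` commands above), and the product list from the Minimal API is printed to the console.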
await app.RunAsync();

public record Order
{
    public int Id { get; set; }
    public string? Product_Code { get; set; }
}

Implementing a Publish and Subscribe (Pub/Sub) Application
In this section, you'll build a simple pub/sub application. The application consists of two projects: a Minimal API project and a Console Application project. Follow the steps mentioned earlier to create the Minimal API and Console Application projects.

Create the Subscriber
Replace the default-generated source code of the Program.cs file pertaining to the Minimal API project (the subscriber in this example) with the code given in Listing 5.

Create the Publisher
Replace the default-generated code of the Program.cs file pertaining to the Console Application with the piece of code shown in Listing 6.

Execute the Publish and Subscribe Application
Navigate to the folder where the project file of the Minimal API project resides and execute the following command at the command window or the Windows PowerShell window to launch the orderprocessing service alongside a Dapr sidecar application:

dapr run --app-id orderprocessing
  --app-port 7001 --dapr-http-port 3501
  -- dotnet run
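Listing 5 isn't reproduced in full in this excerpt. A minimal Dapr pub/sub subscriber along the lines the section describes could be sketched as follows; it assumes the Dapr.AspNetCore package, and the orderpubsub component and orders topic mirror the publisher. Treat everything else as illustrative rather than the article's actual listing.

```csharp
// Hypothetical sketch of a Dapr pub/sub subscriber (Program.cs of the
// Minimal API project), assuming the Dapr.AspNetCore package.
using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Unwrap CloudEvents envelopes and expose the endpoint the sidecar
// calls to discover this app's topic subscriptions.
app.UseCloudEvents();
app.MapSubscribeHandler();

// Handle messages published to the "orders" topic on the
// "orderpubsub" pub/sub component.
app.MapPost("/orders", (Order order) =>
{
    Console.WriteLine($"Subscriber received Order Id: {order.Id}");
    return Results.Ok(order);
}).WithTopic("orderpubsub", "orders");

await app.RunAsync();

public record Order
{
    public int Id { get; set; }
    public string? Product_Code { get; set; }
}
```

The closing lines of this sketch intentionally match the record definition visible in the excerpt's listing fragment.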
Listing 6: The Publisher Console application

using System.Text;
using System.Text.Json;

var baseURL = "http://localhost:3500";
const string PUBSUB = "orderpubsub";
const string TOPIC = "orders";
const string APP_ID = "orderprocessing";

Console.WriteLine($"Publishing to baseURL: {baseURL}, Pubsub Name: {PUBSUB}, Topic: {TOPIC}");

var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Accept.Add(
    new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));
httpClient.DefaultRequestHeaders.Add("dapr-app-id", APP_ID);

for (int i = 1; i <= 5; i++)
{
    var order = new Order() { Id = i,
        Product_Code = "P000" + i.ToString() };
    var content = new StringContent(
        JsonSerializer.Serialize<Order>(order),
        Encoding.UTF8, "application/json");
    var response = await httpClient.PostAsync($"{baseURL}/orders", content);
    Console.WriteLine($"Published Order Id: {order.Id}");
}

public record Order
{
    public int Id { get; set; }
    public string Product_Code { get; set; }
}
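Listing 6 drives the sidecar with a raw HttpClient and a dapr-app-id header. For comparison, here is a sketch of the same publish loop using the typed client from the Dapr.Client package, whose PublishEventAsync method sends each order to the pub/sub component and topic directly. This is an alternative illustration under that assumption, not the article's listing.

```csharp
using System;
using System.Threading.Tasks;
using Dapr.Client;

// Alternative publisher sketch using the Dapr.Client typed client:
// PublishEventAsync serializes the payload and hands it to the
// sidecar for the "orderpubsub" component's "orders" topic.
class Publisher
{
    static async Task Main()
    {
        using var client = new DaprClientBuilder().Build();

        for (int i = 1; i <= 5; i++)
        {
            var order = new Order { Id = i, Product_Code = "P000" + i };
            await client.PublishEventAsync("orderpubsub", "orders", order);
            Console.WriteLine($"Published Order Id: {order.Id}");
        }
    }
}

public record Order
{
    public int Id { get; set; }
    public string? Product_Code { get; set; }
}
```

The typed client removes the manual header and JSON plumbing, at the cost of taking a dependency on the Dapr SDK in the publisher.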
You can hit this endpoint using Postman or Fiddler or any other HTTP client. Alternatively, open your web browser and browse this link to register the Dapr pub/sub subscription: http://localhost:7001/orderprocessing/subscribe

When you run both applications, you'll be able to see orders published in one command window and subscribed to in another, as shown in Figure 7.

Note that, for the sake of simplicity, I've used the Order and Product record types twice in the code examples. You can create a class library that contains these two types and then import them into your projects to avoid code redundancy.

Joydip Kanjilal
ONLINE QUICK ID 2303091
Architects: The Case for Software Leaders
The software industry has a poor success record. In fact, all of IT does. Throughout phases of various types of formal leaders, including CIOs, project managers, and scrummasters, the project success rate struggles to get to 20%. In this article, you'll explore the multi-decade history of the software industry, some of the attempts at progress, and the results. You'll also explore what it takes to be a leader in software. At the end of this article, you will be able to:

• Identify the main eras of the software industry
• Understand the average level of success and failure over time
• List the responsibilities of a software leader
• Decide whether the calling of software leadership is for you

Jeffrey Palermo
jeffrey@clear-measure.com
JeffreyPalermo.com
@JeffreyPalermo

Jeffrey Palermo is the Chief Architect and Chairman of Clear Measure, Inc., a software architecture company that empowers its clients' development teams to be self-sufficient: moving fast, delivering quality, and running their systems with confidence.

Jeffrey has been recognized as a Microsoft MVP since 2006 and has spoken at national conferences such as Microsoft Ignite, TechEd, VS Live, and DevTeach. He has founded and run several software user groups and is the author of several print books, video books, and many articles. A Christian, a graduate of Texas A&M University (BA) and the Jack Welch Management Institute (MBA), an Eagle Scout, and an Iraq war veteran, Jeffrey likes to spend time with his family of five, camping and riding dirt bikes.

Custom Software: A History of Failure
For as many technical advances as the software (and IT) industries have seen over the decades, the rate of failure, outages, business disruptions, and lost investment is staggering. To illustrate this, consider the Standish Group. This organization has been publishing studies for decades, beginning with the original CHAOS report of 1994: https://www.standishgroup.com/sample_research_files/chaos_report_1994.pdf. In 1994, the CHAOS report contended that only 16.2% of software projects were deemed a success. Success means that the initial promise of the system was delivered on time, on budget, and with the needed level of stability while running the system. The Standish Group has continued to catalog projects and publish reports, roughly every five years, and the 2020 report shows an alarming result. The success rate overall is still only 16.2%! Although the report studies a wide range of IT projects, this is alarming for our industry. For readers interested in the source reports, I encourage the reading of the Standish Group material.

Even after 25 years since the original CHAOS report, software project success, on average, struggles to achieve 20%.

Let's now take a look at some major trends in the software industry according to the Standish Group.

1960-1980: The Wild West
Our industry has now topped 60 years since inception. Arguably, the late 1950s saw the first business software systems, but the cases to study are few. Fred Brooks is one of the early computer programmers (before the more common terms, software developer or software engineer, came about). In his book, The Mythical Man-Month, he writes a series of essays in which he reasons about some of the natural forces at play in software projects.

This twenty-year period from 1960 to 1980 was the early part of Mr. Brooks' career, and throughout the book, he shares the nature of early software systems and computers. These computers didn't come with a general-purpose operating system. The programmers loaded every bit of code the computer was to run. Software applications weren't a thing. We just called them "programs." Operating systems were still in development. Moreover, every programmer did as he saw fit. There were no norms. No methodologies. No standard processes. No large technology vendors suggesting a "normal" way to proceed. The name of the game was to figure out how to make the computer do something.

1980-2000: The Waterfall Period
The Standish Group has labeled the period from 1980 to around 2000 as the Waterfall Period. This period saw the rise of software processes. Universities began computer science curriculums. CASE (computer-aided software engineering) tools were developed and taught as being the wave of the future, where software programs would be programmed automatically from visual specifications of the required behavior. These two decades saw the rise of the IT project manager and the establishment of the PMBOK, PMI's Project Management Body of Knowledge, with the first report in 1987 and then the official first version in 1996.

During this period, the mainframe computer introduced segregation into businesses and gave rise to the IT department. After all, if a business was going to invest in one large expensive computer that multiple people could share, then someone had to be responsible for keeping it available and in good repair. At this point, programmers could concentrate on writing their program and then on running it through a remote terminal. But because any program had the capability to crash the shared mainframe server, IT administrators began developing restrictions to protect the many from the few. In larger companies, this resulted in IT departments that separated the programmer from the server administrator. This segregation would take the next few decades to undo.

As these departments grew, IT project managers rose as a common role in the industry, and PMI served as an educating and certifying body to give credibility to the role. The processes espoused by the PMI's PMBOK are now known as Waterfall. Essentially, the concept of "gathering requirements" became the root cause for long project phases and rigidity downstream. In this period, the Standish Group published its first CHAOS reports showing the world
These figures are for 2020 alone and are for just the United States. If the technology industry as a whole is around $1.5T, these figures suggest a very upside-down return on investment equation.

John Maxwell, author of 70+ books on leadership, says: "Everything rises or falls on leadership."

Anecdotally, I've been involved in many conversations over 25 years in the profession. These conversations involve software developers lamenting that they spend too much time fixing bugs or investigating production issues. Others lament that the IT group is seen just as a cost center, only to have hiring freezes and layoffs in a downturn. When you look at these statistics, and if they correlate to similar things, you might conclude that overall, in the United States, the IT industry as a whole is, indeed, a cost center. Now, I haven't cited statistics on revenues produced by software and IT investments. Those are hard to come by. Because companies are still aggressively investing in technology, you can conclude that even with all the mess, it's still worth it. But from within the industry, we must do better.

Over the 60+ years of our software industry, much has been established. Many processes and paradigms have been tried. Many have succeeded. Most of them have failed at least once. The knowledge of how to succeed is there. Unfortunately, that knowledge isn't universally applied. I have a hypothesis: It's because of uneven leadership within organizations producing and running custom software.

What Software Leaders Actually Do
Some people reading this article are currently software leaders. Some aspire to be. Some work under the guidance of one. Some wish their organization had one.

It's not hard to spot a leader. Others around the leader are better because of the presence or influence of the leader. Software developers are neither soldiers going to war nor sheep producing wool, but in both analogies, the leader
(Continued from page 74)

a two-way street because it isn't in our nature to dredge up past transgressions to be used against us! Therefore, to be responsible about putting people on the path so that they may be responsible requires a culture of transparency. This is the essence of what continuous improvement is all about. Such efforts require egos being checked at the door. Because the serious fact is this: The work we do matters. It matters to other people because they depend on our technical work. To support such a culture, it requires the buy-in and trust of senior leadership. That same senior leadership needs to finally listen to the technical staff that has been trying to tell them the facts for years.

Ask yourself if you would accept shoddy work from a contractor in your home. Of course, you wouldn't. Nobody would or should. But we collectively tolerate it in our projects. Responsibility requires oversight and leadership. Consider that Southwest Airlines and the FAA have different perspectives but with much in common.

Let's take the FAA scenario first. That situation, based on news reports, appears to be an oversight/contractor scenario. A public agency, the FAA will need to be transparent about whatever after-action report is published. The point here is that although the C-suite may find it beneficial to financially and legally organize in a way that uses non-geographically co-located entities as opposed to differently situated employees, that doesn't absolve executive management from its core oversight duties and responsibilities. What often happens is a slow depletion of what I refer to as "organizational knowledge." Such knowledge, generally, is, at the very least, an understanding of how things operate. When we staff things out, over time, organizations often defer the requirement for such direct knowledge in favor of the external entities. Once organizations start to lose their grip on how their technical infrastructure operates, they become captive to these external entities. It's the height of irresponsibility on the part of management when it causes an organization to lose its grip on its internal understanding of IT matters. Despite the difference in organizations, the FAA and Southwest
CODA:
On Responsibility: Part II
As I sit down to write this next CODA installment, the January/February 2023 issue is available for
consumption, as are the events incident to Southwest Airlines and the FAA. In the printed physical
magazine world, content must be assembled in a layout long in advance of the publication date.
That’s how we ensure a quality CODE Magazine sponsible decision is how technical debt may umn, Ted called out to me over the issue of
product. Our magazine analog is not so differ- be quantified. Technical debt is an insidious ethics, the law, and liability. Although I’ve
ent from a software analog where we must also thing because it’s invisible. It isn’t recorded generally written about those things in the
take care as we proceed along the software de- in the financials in the sense that debt is usu- past, with the new Southwest Airlines and FAA
velopment journey for a given project. ally referenced. Nevertheless, you know that context, in conjunction with this once, pres-
technical debt exists. It exists in the corpus ent, and perhaps future topic of responsibility,
The goal for software is the same as with the of our software in how it was designed, built, regardless of what was written in the past, it
magazine, to deliver a quality product. Quality tested, and deployed. To the extent that our bears repeating in a new way with the basic
isn’t some abstract thing in this case, because existing platform isn’t receptive to changes question of how we can improve our condi-
to make some determination regarding quality, that reflect current innovation that competi- tion. How do we become more responsible?
it must be measurable. Quality is real, not an tive, top players in an industry implement, Perhaps it’s a matter of first learning how to
abstract thing. The quality of the software we technical debt exists. One way to quantify be responsible. And before that, we must each
build is entirely dependent on how we build it. technical debt is to equate it to the annual be willing to accept responsibility and be ac-
EBITA from such foregone opportunities. The countable; first to ourselves, and then to the
The recent events of Southwest Airlines and the implication is that every year, an organization team and the organization. It is important to
FAA, and the not-so-distant events with the accrues more technical debt, even if it hasn’t remember that “we” is just a group of indi-
Boeing 737 Max-8, require me to look at this spent another cent on that project. Technical vidual “I”s.
topic anew because although we understand debt is about opportunity costs. Organizations
what quality is, collectively, when confronted that squander corporate opportunities are ir- The difficulty with large systems that have ex-
with scenarios that require attention to es- responsible. isted for a long time (like Southwest’s) is that
tablish, maintain, and enhance quality, we’re many, many people; past, present, and future,
not interested in investing in such support have or will have an impact on that system.
infrastructure. Nevertheless, quality is desired Why and how things are as they are isn’t nearly
because of the benefits that are incident to For an industry that prides as important as the fact that they are this way
quality. It doesn’t seem very responsible to, on itself on its analytical and our collective response to that fact. That
one hand, desire the benefits of quality with- collective response is just an aggregation of
out doing the things necessary to earn a posi- ability and abstract mental individual responses, from a variety of profes-
tive quality designation. Whether something is processing, we often don’t sionals and disciplines that have been brought
of good quality is in the eye of the beholder. to bear on a solution. At a fundamental level,
Quality is a report card. It’s a judgment. And we do a great job applying those individual responses are best served when
can’t be the judges of our own work. That’s what that mental skill to the most they’re in sync with some stated overarching
makes a good measure, because it can’t be tar- principles. Think of these principles as a com-
geted. At its most abstract, quality is simply
important element of pass or North Star to serve as a guide.
the result of something else. the programmer’s tool
chest – ourselves. What if such stated principles exist or exist but
Goals are important. They help a group focus aren’t known? That’s why the most important
on some common thing. But that isn’t enough. tool in the programmer’s toolbox is ourselves.
We can’t just focus on the end. How we get We’re the most important tool because we each
there matters because it has a knock-on effect That’s the reason I decided to scour the CODE possess the ability to act appropriately at the
to what we end up with. It may very well be Magazine archives to get a sense of what’s “last responsible moment.” Whether or not we
that instead of focusing on quality, perhaps we been written before in these pages. I encoun- act appropriately at the last responsible mo-
should focus on those things we have control tered some of my previous work and that of ment is another question. If it seems in your
over that, when practiced, tend to make qual- others. One article that stuck out was from projects that you’re always reacting, and put-
ity enhancement more likely, not less probable. my back-page predecessor Ted Neward. In his ting out fires, this applies to you. To be re-
That’s what taking care is about. Managed Coder Column from May/June 2016 sponsible, we must be responsible about being
(https://www.codemag.com/Article/1605121/ responsible.
With all the bad news recently about South- Managed-Coder-On-Responsibility), he wrote
west Airlines and the FAA and their antiquated On Responsibility wherein he raised the fol- Another twist on that is that serious people
manual process, the first thing I thought of lowing assertion: treat serious matters seriously. This is where
was technical debt. I hold to the notion that rigorous honesty must be embraced. But that’s
every irresponsible decision an IT organization As insightful as that quote was seven years
makes, the hard financial costs with that irre- ago, it’s even more so now. Later in his col- (Continued on page 73)
OLD
TECH HOLDING
YOU BACK?
Are you being held back by a legacy application that needs to be modernized? We can help.
We specialize in converting legacy applications to modern technologies. Whether your application
is currently written in Visual Basic, FoxPro, Access, ASP Classic, .NET 1.0, PHP, Delphi…
or something else, we can help.
codemag.com/legacy
832-717-4445 ext. 9 • info@codemag.com