
Data Management with SciData

How to Keep Organized and Automate Your Data Analysis and Reporting

Brad Carman
February 21, 2013

Contents

1 Example Scenario: Spring Testing with a Compression Test
  1.1 How the Sample Data is Organized
2 Quick Overview of SciData
  2.1 Installation
  2.2 Communication with Several Mathematical Programs
    2.2.1 Scilab - Free
    2.2.2 MathCAD - Easy/Intuitive
    2.2.3 MATLAB - Professional
  2.3 Data and Math are Separate
  2.4 Column Types
  2.5 Row Operations vs. Table Operations
    2.5.1 Row-by-Row Mode
    2.5.2 Table Mode
3 Importing Data
  3.1 Start a New SciData Project
  3.2 Copying Data Into the Data Folder
  3.3 The Import Button
4 Scientific Dataset Data File
  4.1 Test Data File Structure
  4.2 Using the Data Import Wizard
  4.3 Running the DataImport.sce Script
5 Organizing the Data
  5.1 Column Notes
  5.2 Categories
  5.3 Applying Filters
6 Adding Row-by-Row Calculations
  6.1 Calculating the Spring Rate, k
    6.1.1 Calculation Script
    6.1.2 Tagging Variables for SciData Import
    6.1.3 Filtering Data in Scilab
    6.1.4 Advanced Steps
7 Table Operations
  7.1 Plotting Springs k Values
    7.1.1 Using AutoPlot
    7.1.2 Saving an AutoPlot Script
    7.1.3 Editing an AutoPlot Script
8 Summary
9 Appendix

1 Example Scenario: Spring Testing with a Compression Test

As a way to demonstrate the benefits of Data Management, an example scenario will be used in which a batch of springs is tested with a compression test. The batch consists of New and Used springs, and the New springs were tested at both room temperature (25 °C) and 70 °C.

[Figure: (a) Compression Test with Environmental Chamber; (b) Test Conditions (New and Used springs at 25 °C and 70 °C)]

1.1 How the Sample Data is Organized

As the data is collected, it is stored in folders named for the spring name/number, nested inside a folder representing the test temperature, as shown in Fig. 1. Also note that the New springs are numbered 1-10 and the Used springs are numbered like 00##. This is exactly the type of information that is easily lost as time passes. Clear documentation of your data is key to Data Management, and is therefore one of the main goals of SciData.

Figure 1: Organization of Data
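A layout of the kind described looks like the sketch below; the names are illustrative (the temperature folder and the spring folder correspond to the folder columns F1 and F2 used later in this tutorial):

Data\
    25C\
        Spring 1\
        ...
        Spring 10\
        Spring 0001\
        ...
    70C\
        Spring 1\
        ...
        Spring 10\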

2 Quick Overview of SciData

Data Management will be achieved in this example using the program SciData (Fig. 2). This program serves
as a flexible database for organization of almost any type of data. The following sections detail the benefits
of using this program, especially in contrast with a spreadsheet application, such as Excel.

Figure 2: SciData Screenshot

2.1 Installation

SciData is found at:
sourceforge.net/projects/scidata
Notes:
1. Scilab must be installed before running the setup.exe file for SciData. Please ensure that the correct version of Scilab is installed, i.e. if you're running a 64-bit machine, you must install the 64-bit version of Scilab!
2. Do not download the DAQ Flex installer into the same folder as setup.exe; there is an error that triggers the wrong executable.

Installation Steps:
1. Install Prerequisites (scilab.org)
2. Install SciData (sourceforge.net/projects/scidata)

Software:
- Prerequisites:
  - Scilab
- Optional:
  - DAQ Flex
    Note: DAQ Flex is used to provide integrated data acquisition. SciData DAQ operation works with any DAQ Flex supported product (http://www.mccdaq.com/daq-software/DAQFlex.aspx). Can be downloaded at ftp://ftp.mccdaq.com/downloads/DAQFlex.
  - MathCAD (supports v14-15)
  - MATLAB
  - SDS: Scientific Dataset library and tools
    Note: The latest version (1.3) offers an updated dataset viewer (http://research.microsoft.com/en-us/project).

2.2 Communication with Several Mathematical Programs

One of the challenges of data management is the difficulty of consuming the data in an application for further analysis, reporting, etc. By default SciData communicates with Scilab, a free, open-source mathematical software package which can perform the great majority of tasks required for most scientific/statistical needs. The communication with Scilab is done with the click of a button in SciData, so there is no need to save and structure files, manipulate data, or manually copy/paste to exchange data to and from Scilab. In addition to Scilab, SciData also works with MathCAD and MATLAB. A quick overview of the different mathematical packages is given below.
2.2.1 Scilab - Free

Scilab (Fig. 3) is a console-based mathematical software package modeled after MATLAB. Commands can be typed one by one into the console window, or stored in a script (*.sce file) and executed line by line. To get started with Scilab, the help menu contains a tutorial. The present SciData tutorial also uses Scilab exclusively and will help you get started.
Notes:
- Scilab is required to be installed in order to use SciData.
- Scilab can be downloaded for free from www.scilab.com.

Figure 3: Scilab 5.4.0


2.2.2 MathCAD - Easy/Intuitive

MathCAD (Fig. 4) is a whiteboard-style application that displays math in its natural form. From the MathCAD website:

    PTC Mathcad combines the ease and familiarity of an engineering notebook with the powerful features of a dedicated engineering calculations application. Its document-centric, WYSIWYG interface gives you the ability to solve, share, re-use and analyze your calculations without having to learn a new programming language. The result? You can spend more time engineering and less time documenting.
Notes:
- MathCAD is unfortunately not free, but it is priced well.
- SciData does not work with MathCAD Prime. Instead you will need to use MathCAD v14 or v15.
- MathCAD is not required to use SciData.

Figure 4: MathCAD v14


2.2.3 MATLAB - Professional

MATLAB is fundamentally very similar to Scilab, but unlike Scilab it is not free and open source; in exchange it offers the benefits of a professional software package: good documentation and support. More simply put, MATLAB is more powerful and robust.
Notes:
- MATLAB is not free and can be expensive depending on the need.
- The MATLAB and Scilab languages are very similar. Scilab can open MATLAB files (scripts and data
files).
- MATLAB is not required to use SciData.

Figure 5: MATLAB 2012b

2.3 Data and Math are Separate

Another benefit of SciData comes from the fact that Data and Math are separated. This allows a single math script to be applied across all the datasets in a collection (a set of experimental tests, for example) or a filtered subset of datasets. In contrast, Excel requires formulas to be copied for each additional set of data included in a workbook. In SciData, edits, fixes, changes, etc. are made in one source; in Excel, a simple change to the analysis could require many formulas to be edited, which is difficult to track and often leads to mistakes.

2.4 Column Types

The Database Table in SciData contains several different column types. The table below (Tbl. 1) describes their differences. Columns are either Locked, Input, or Result:

Locked - Cells cannot be edited. For example ID, which is controlled by SciData.
Input - Cells which can be edited and are used to describe the row, such as a category or a number recording a variable test condition.
Result - Cells which collect information generated from an analysis.

Table 1: Column Types

Name         Data Type             Example                       Locked, Input, or Result
-----------  --------------------  ----------------------------  ------------------------
Default      Text (ID is Number)   Name, Date, Notes             Input (ID is Locked)
Folder       Text                  Data\Springs\Test1            Locked
                                   (F1 = Springs, F2 = Test1)
Category     Text                  Type=New                      Input
Constant     Number                x = 1                         Input
Result       Number                k = 2                         Result
Text Result  Text                  abc                           Result
Array        Array                 x = [1;2;3;4;5]               Input & Result

2.5 Row Operations vs. Table Operations

The fundamental purpose of SciData is to easily send data to memory from the source database. You can easily filter the database to send only the data of interest, and you can send the data in two different modes:

- Row-by-Row
- Table

The SciData Database Table contains several different types of information: numbers, arrays, and text. In Row-by-Row mode, the information is sent as-is. In Table mode, the information is sent stacked, as shown in Fig. 6: numbers and text (items that are single values) are stacked into arrays, and information that starts as arrays is stacked side by side into a matrix.

Figure 6: How Table Data is Sent


In SciData, the last two tabs in the toolbar are Row and Table, for Row-by-Row operations and Table operations respectively. We can now explore how these two modes work.
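As a hypothetical Scilab console sketch of the difference (the name Spring 1 and the sizes are taken from the examples later in this section):

--> // Row-by-Row mode: a single data file is in memory
--> Name
 Name  =
 Spring 1
--> size(Load)
 ans  =
   626.    1.

--> // Table mode with two rows filtered: single values stack into
--> // arrays, and arrays stack side by side into a matrix
--> size(Name)
 ans  =
   2.    1.
--> size(Load)
 ans  =
   626.    2.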

2.5.1 Row-by-Row Mode

The Row-by-Row mode is made up of 3 steps:

Step 1 - Data is sent to memory (a row in the Database Table represents a data file, which can contain numbers, text, or arrays)
Step 2 - A script (Scilab, MathCAD, or MATLAB) is executed using the data in memory
Step 3 - Results are retrieved by SciData and stored in the Database Table and in the respective SDS data file

These steps are visualized in Fig. 7. Each row represents a whole data file; whatever that file contains is sent to memory. The selected script is then executed using the data in memory. Any results that are generated can then be retrieved and stored in SciData. How to tell SciData which results to retrieve is discussed later.

Figure 7: Data is sent row by row to a single math script


The Row tab offers different methods for Row-by-Row memory operations. As can be seen (Fig. 8), there are the buttons Send Row [1], Process Row [2], and All Filtered Rows [3]. When using Send Row, the information from the currently selected row is sent to the memory of the designated program. By default Scilab is the designated program and always runs in the background of SciData, but if a different script file is created and selected [4], this designates which of the three programs (Scilab, MathCAD, or MATLAB) to target. The Process Row button also targets the selected script file, but it runs through all 3 steps. Finally, All Filtered Rows will batch process the Database Table (in its filtered state), processing each row one by one.

Figure 8: Buttons of the Row Tab


Fig. 9 illustrates an example of the Row-by-Row mode. By moving to the Row tab [1], selecting a row [2], and clicking Send Row [3], data is sent to Scilab's memory. Note that all of the values sent to memory from the Database Table turn pink. We can see in the Console Window [9] that the memory was first cleared [4b] (controlled by the Clear Memory Pre Calc option [4a]), followed by row ID 1 loading to memory [5]. We can check the information in memory by executing some Scilab commands using the Command Box [8]. By typing Name into the Command Box, we can see its value [6]. Additionally, we can use the size() function on the variable Load to see that it is an array of 626 rows [7].

Figure 9: Sending a Row from SciData


2.5.2 Table Mode

The Table tab is shown in Fig. 10, which offers 4 different buttons for memory operations. The table below the buttons shows their different functions. As can be seen, the buttons Send Lite and Process Lite do not send arrays. This is to conserve memory if needed: a large Database Table with large arrays can easily consume all the available memory, so if the arrays are not needed, the Lite option is available. Also, similar to the Row tab, the Process buttons both send data to memory and execute the target script, but notice that Step 3 is not available in this mode; any results generated in Table mode need to be saved by other means.


Figure 10: Buttons of the Table Tab


Fig. 11 illustrates an example of the Table mode. The Database Table is first filtered [1] to ID = 1 & 2. After clicking Send Full [2], the table turns pink [3], indicating which values have been loaded to memory. We can then check how the variables exist in memory using the Command Box. First type Name to see that it contains an array of text values [4]. Second, using the size() function again, we check the Load variable to see it is now a matrix with 626 rows and 2 columns, one column per database row.

Figure 11: Sending the Filtered Database Table from SciData

3 Importing Data

Now that the basic operation of SciData is understood, the example analysis with the spring data will begin.
The first step to analyzing data is to import it.

3.1 Start a New SciData Project

Before importing data we must create a project. Simply start SciData (Start > SciData), which brings up the start screen (Fig. 12). Choose Start a New File and save the file as Spring Analysis.sdat.


Figure 12: SciData Start Screen


Understand that the structure of SciData is as follows (referring to Fig. 13):
Project Folder [7] Root folder that contains the SciData file (*.sdat) along with the Data and Math
folders. The SciData file must always be shadowed by a Math and Data folder (which are created
automatically).
SciData File [1] The *.sdat file (contains database structure information)
Data Folder [6] Contains the data files that SciData manages
Math Folder [5] Contains the scripts (either Scilab, MathCAD, or MATLAB files)
Row Folder [3] Row-by-Row scripts, intended to process a single row of data
Table Folder [2] Table scripts, intended to process many rows of data
DAQ Folder [4] Scripts for Data Acquisition

Figure 13: SciData File Structure

3.2 Copying Data Into the Data Folder

The first way to import data into SciData for this example is to copy the contents of the Spring folder (Fig. 1) into the Data folder (note the Open Data Folder button, Fig. 14 [5], is available to quickly navigate there). When the copy operation is complete, click the Scan Data Folder button [2] under the Data tab [1]. As can be seen in [3], 28 rows are imported. With each row that is imported, a note is shown in the Console Window that reads:

>> CSV file is not a SDS Standard file, renaming to ... *.dat
When the Scan Data Folder button is used, SciData searches the Data folder for the following:

- Scientific Dataset (SDS) Files - A data file standard developed by Microsoft Research:
  - *.csv - Comma separated text file
  - *.nc - Binary (NetCDF)
  - *.sod - Binary (Scilab)
- *.dat Files - Standard data files
- User Defined Files - Any additional extension added in the Ext: text box [6]
- Folders - If Mark Folders is checked, SciData will add a data file for every folder found that does not already contain a data file. Useful for adding data grouped by folders in several different files.

Figure 14: Scanning the Data Folder


If a *.csv file is found, SciData attempts to read it as an SDS file. If that fails, the *.csv file is copied to a *.dat file and a clean shadow SDS file is created. The Data folder therefore ends up with both *.dat and *.csv files, as shown below in Fig. 15. At this point the *.csv file is empty; in the next step it will be populated from the original source.

Figure 15: Shadow File


Note, at any time additional data can be added to the Data folder. To keep the Database Table in sync with changes made (either adding or removing), simply run the Scan Data Folder command again.

3.3 The Import Button

It is also possible to use the Import Data button (Fig. 16 [1]). In this case a dialog is presented [2] to copy
selected files and folders into the Data folder followed by an automatic scan.


Figure 16: Import Data Dialog

4 Scientific Dataset Data File

As mentioned, SciData stores information using the Scientific Dataset (SDS) standard developed by Microsoft Research. Storing data using an established standard has the benefit of making it easier to share and access. Furthermore, if used correctly, each data file should be self-descriptive and contain all the important related data. For our example, it is important to store the two main conditions: temperature and spring condition (New vs. Used). The SDS format allows for this. It should also be noted that the SDS package contains a viewer tool (Fig. 17) which can be useful for sharing your data.

Figure 17: SDS Viewer


4.1 Test Data File Structure

The data saved from the experiment is structured as shown below (Fig. 18a). If we were using Excel to analyze this data, we would need to import each file and define all the data ranges, which is time consuming since each data file is a little different in length (not to mention that all of this would need to be done manually). In contrast, SciData automates this process and converts all these data files to the SDS format, as shown in Fig. 18b. Note that the Single Values area contains the extra metadata that makes the file self-descriptive: we can see the test temperature is 25 °C and the spring condition, or Type, was Used.

(a) Instron Data File

(b) SDS Data File

Figure 18: Data File Structures

4.2 Using the Data Import Wizard

To properly import the data from the *.dat file to the SDS *.csv file, a wizard is provided to make this
process much easier. The following figures walk through the steps.
Step 1 First go to the Data tab (Fig. 20 [1]) and click the Build an Import Script button [2].
Step 2 Next browse [3] for an example data file. Choose any of the converted data files. The files are
comma delimited, but note this can be changed (Fig. 21a [4]).
Step 3 Now, to import the Time column, click the first cell of the data column [6] and then click OK (Fig.
21b [7]).
Step 4 Specify as a data column [8], name it [9], add the appropriate unit [10] (optional), and click Add
(Fig. 22a [11]).
Step 5 Notice a line is added to the code window (Fig. 22b [12]).
Repeat 3-5 Add the Load and Ext columns by following the same process.
Save - Finally, click the Save button. A script named DataImport.sce is saved to the Math\Row folder.


When completed, the generated script should look as shown in Fig. 19.

if atomsIsInstalled("csv_readwrite")
    // build the full file path from the folder columns F1, F2, ...
    fileName = data_path + "\";
    i = 1;
    while exists("F" + string(i))
        fileName = fileName + evstr("F" + string(i)) + "\";
        i = i + 1;
    end
    fileName = fileName + Name;
    // read the raw text of the data file
    fn = mopen(fileName, "rt");
    dataText = mgetl(fn);
    mclose(fn);
else
    disp("csv_readwrite is not installed, installing now, please wait...")
    atomsInstall("csv_readwrite")
end

// Import Values...
Time = findvalue(9, "column", 1, ","); // [sec] [@]
Load = findvalue(9, "column", 2, ","); // [N] [@]
Ext  = findvalue(9, "column", 3, ","); // [mm] [@]

Figure 19: Import Script Code

Figure 20: Data Import Script Wizard (Part A)


(a) Part B

(b) Part C

Figure 21: Data Import Wizard

(a) Part D

(b) Part E

Figure 22: Data Import Script Wizard

4.3 Running the DataImport.sce Script

After hitting Save (Fig. 22b [13]) in the Import Script Wizard, SciData adds the necessary columns (Fig. 23 [1]) and displays a message [2] to run the script [4]. Moving to the Row tab [3], you can see that the DataImport.sce file is selected [4]. Simply hit the All Filtered Rows button [5] and all the filtered rows in the Database Table will be batch processed with the selected script, one by one.

Figure 23: After Using the Import Script Wizard


Fig. 24 shows the result after batch processing the script across all the rows. The array variables Time, Load, and Ext are populated, and what is shown [1] is the number of points in each array. Note that the arrays are currently only populated in memory, as indicated by the disk icons [2]. After clicking the Save button [3], the respective SDS data files are populated on disk.

Figure 24: After Running DataImport.sce

5 Organizing the Data

Now that our data is prepared, it is important to add some descriptive information. This is helpful for defining what the data is for future reference, and it also helps us sort and filter for the data we want.

5.1 Column Notes

There are several places to add notes with extra useful information. There is a default Notes column that allows for row-specific notes, such as describing which experimental runs had anomalies. The second place to add notes is the columns themselves, to describe what they mean. This can be done using the Column Editor (Fig. 25); all of the columns can have notes added. When using folders in the Data folder, it is a good practice to give notes describing what each folder represents. The first folder in the present example represents the environmental test temperature, the second folder the spring designation. Therefore, click Edit Columns [1], select Folder [2] and add notes [3] to describe F1 and F2.

Figure 25: Column Editor


The notes given to the columns show up when the mouse hovers over the column header (Fig. 26 [1]), or when looking at the column list [2-3].


Figure 26: Column Notes

5.2 Categories

It is also possible to add additional columns to describe the data. For the present example, the dataset does not yet describe which tests were done on New springs and which on Used springs. Therefore, we can add a category Type, as shown in Fig. 27. First, select Category [1] from the column type list. Then type in the category name Type [2]. Click Add [3] and the column will be added to the list. Category values can be added for later selection using the text box [4]: type a value and then click Add [5]. Add New and Used to the list [6].

Figure 27: Categories


To apply the category values, the rows are first sorted by the Date column (Fig. 28 [1]), since it is known that the New springs were tested first and the Used ones last. It is also known that the New springs were numbered 1-10; therefore, the rows after the first 10 must be the Used springs. These rows are selected [2] and the Used value is set with a right mouse click [3-4].

Figure 28: Setting Category Values

5.3 Applying Filters

To apply a filter, click the filter button either from the column header (Fig. 29 [1]) or in the column list [3].
Note, to get to the Column List, click the button [2] to expand the control. The filter editor for the particular
column shows at the bottom right [4]. To filter the New springs, simply check the appropriate box [5] and
click OK [6].


Figure 29: Applying a Filter


Once a filter is applied, it shows above the Database Table (Fig. 30 [1]), and only the rows meeting the filter criteria are shown [2]. The filter can be edited by clicking the filter button to show the filter editor [3].

Figure 30: Applied Filter

6 Adding Row-by-Row Calculations

SciData allows you to process each row in the data table individually, or the whole Database Table at once.
In our case we need to calculate the spring rate for each test, which would be a Row-by-Row process.


6.1 Calculating the Spring Rate, k

The goal of the present example is to compare spring rates, so we need to calculate this value for each of the datasets. First, focus on a single dataset. Plotting the data is a good first step, so click the first row in the table and click Send Row.
6.1.1 Calculation Script

Previously we used a Row-by-Row script to import the data; that script was generated automatically, so we did not need to write any code. In this step we will add another script file and write the Scilab code to calculate the spring rate manually. The first step is to add a new Scilab file by clicking the Add New File button (Fig. 31 [1]) and choosing Add Scilab File [2]. Choose the file name k Calculation [3] and click OK [4].

Figure 31: Adding a Scilab Script


As can be seen in Fig. 32, a script is added [1] to SciData. Included is a list of the variables available from SciData [2].


Figure 32: Scilab Script Editor


Before the script is applied, Scilab must be loaded with data. We can select the first row in the table and
click Send Row (Fig. 33[1]). As can be seen [2], the row will turn pink to indicate it is loaded in memory.
This is also reflected in the Console Window [3].

Figure 33: Sending Data to Memory


To test the script, we can simply add the line plot(Ext, Load) (Fig. 34 [1]). If we execute the file by
clicking the Execute button [2] (after saving the file), a plot of Load vs. Ext will appear [3].


Figure 34: Testing the Script


We can continue to edit the file in SciData, but we also have the option to open the file in Scilab for a better editing experience. By editing in Scilab we can see more information about any errors that occur, as well as use a more detailed code editor. By clicking the Open Externally button (Fig. 35 [1]), Scilab opens with the current memory state. Note, it is possible to launch many Scilab instances with different memory states; be careful to keep track of this. Best practice is to close Scilab when you are finished editing a script. The external Scilab will not affect the memory of SciData.

Figure 35: Opening Scilab Editor


Scilab 5.4.0 offers a docked environment which shows the Console (Fig. 36 [1]), Script Editor [2] (note the improved syntax highlighting, including coloring variables red [3]), and the Variable Browser [4]. As can be seen in the Variable Browser, Scilab is indeed loaded with the current memory state from SciData.

Figure 36: Scilab Loaded From SciData


We now jump into writing a script that can calculate the spring rate. Fig. 37 shows an appropriate method to achieve this; we will go through this script to explain what is being done. The strategy for calculating the spring rate is to apply a least squares curve fit to the data, of the form

    y = x·k + i

where
    y - the measured Load [N]
    x - the spring displacement [mm]
    k - the spring constant (the slope of Load vs. displacement) [N/mm]
    i - the intercept (acts as an error indicator, since the intercept should be zero) [N]

Scilab has the ability to do a least squares fit to any function, which is great, but the downside is that it is not very user friendly. The code below in Fig. 37 provides a clean example of how to apply the least squares fit.
The steps to set this up are as follows:
1. Create a function to apply the curve fit. For this example we name the function line(). The function has the arguments x, coeffs, and params [lines 1-5]:
   - x is the independent variable.
   - coeffs is an array of size 1 to n holding the coefficients to be solved for.
   - params is an array of extra parameters used by the function; it is not used in the current example.
2. Create an error function err() which calculates the difference between the line() function and the data [lines 7-9].
3. Prepare the inputs x_data and y_data by setting them to Ext and Load, respectively. We must also prepare guess values, which can often be set to zeros. In our case we are solving for 2 coefficients, so we set guess_coeffs to [0;0]. In cases where leastsq() fails to solve, better guess values must be supplied [lines 11-14].
   - Note: To find good guess values and/or a good fit function, try www.zunzun.com.
4. Use leastsq() to calculate the optimal coefficients of the line() function. Note this function returns 3 values, but we are only concerned with coeff_opt, which holds the optimal solved coefficients [line 16].
5. The spring coefficient and intercept are extracted from coeff_opt [lines 18-19].
6. Finally, line() is plotted against the data to check the result [lines 21-23].

 1  function y=line(x, coeffs, params)
 2      i = coeffs(1); // intercept
 3      k = coeffs(2); // slope
 4      y = x*k + i;
 5  endfunction
 6
 7  function y=err(coeffs, x_data, y_data, params)
 8      y = line(x_data, coeffs, params) - y_data;
 9  endfunction
10
11  x_data = Ext;
12  y_data = Load;
13  params = [];
14  guess_coeffs = [0;0];
15
16  [f, coeff_opt, g] = leastsq(list(err, x_data, y_data, params), guess_coeffs);
17
18  k = coeff_opt(2); // solved spring constant
19  i = coeff_opt(1); // solved intercept
20
21  y_test = line(x_data, [i;k], []);
22  plot(x_data, [y_data, y_test])
23  legend(["data"; "fun"])

Figure 37: Calculating the Spring Constant, k


The result from the solved spring coefficient, k, is shown in Fig. 38. As can be seen, a good curve fit is found. What we would now like to do is store the spring coefficient value in SciData so it can be calculated for all the datasets.


Figure 38: Least Squares Fit Script Result
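As an aside, for the special case of a straight-line fit, the same least squares solution can be obtained more directly with Scilab's backslash operator, which solves an overdetermined linear system in the least squares sense. A minimal sketch (not part of the tutorial's script):

A = [x_data, ones(x_data)]; // one column per coefficient: slope term, intercept term
c = A \ y_data;             // least squares solution of A*c = y_data
k = c(1);                   // slope (spring constant)
i = c(2);                   // intercept

The tutorial uses leastsq() instead because it generalizes to arbitrary, nonlinear fit functions.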


6.1.2 Tagging Variables for SciData Import

In the Row-by-Row mode of SciData, data can not only be sent to Scilab/MathCAD/MATLAB, it can also be retrieved and stored. Three types of information can be retrieved:

- Single Numbers - tagged in Scilab with #
- Arrays - tagged in Scilab with @
- Non-Numeric (Strings) - tagged in Scilab with $

After SciData runs a script, it looks for results to be retrieved. There are 3 result column types to match the types listed above, each with its respective icon. These columns can either be added manually using the Column Editor, or added automatically by tagging variables in the script.
In the current example, we need to store the spring coefficient, k, which is a Single Number, so we tag the variable with #. Tagging in Scilab is done with a comment after the variable, structured as shown in Fig. 39.

variable = ...; // [optional unit] [#]

Figure 39: Variable Tagging Structure
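For instance, each of the three result types might be tagged as follows (a sketch; these variable names are illustrative, not from the tutorial's data):

k_mean = mean(k_all); // [N/mm] [#]  single number result
Load_f = Load(rws);   // [N] [@]     array result
status = "pass";      // [$]         text (string) result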


The code is therefore modified as shown in Fig. 40 [1]. After this is done, clicking the Scan File button [2] imports k and i into SciData, adding the columns to the Database Table [3]. Information about the scanning process is also shown in the log window [4].


Figure 40: Scanning for Tagged Variables


Now that the columns are added to SciData, we can select the first row (Fig. 41 [1]) and click the Process Row
button [2]. As can be seen, the columns k and i will be populated [3 & 4].

Figure 41: Processing a Row


We can now click the All Filtered Rows button (Fig. 42 [1]) and the whole filtered Database Table will be processed [2]. In this way, SciData is always structured and ready to batch process your analysis. Batch processing an analysis has many benefits:

- Fast and efficient
- Automatic: for more time-consuming analysis files, the computer can continue to process the data without needing any manual input
- Ensures the same consistent analysis across all the data; furthermore, changes to the analysis can easily be refreshed consistently across all the data

Figure 42: Processing all Filtered Rows


6.1.3 Filtering Data in Scilab

We can see from each of the data files that the ends of the Ext vs. Load curve are not very linear (Fig. 43). If we are after the general spring constant and want to ignore the effects at the ends of the curve, we can easily filter the data first. What we will do, then, is modify our k Calculation script file to first filter the arrays Ext and Load so that the first and last 2 mm are removed; the curve fit is then applied to the filtered arrays. Furthermore, for later use, the filtered arrays will be stored in SciData.

Figure 43: Ext vs. Load
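The core of that modification is a three-line filter, shown in context in Fig. 45 (lines 14-16):

rws = find(Ext > 2 & Ext < 16); // keep only 2 mm < Ext < 16 mm
Ext_f = Ext(rws);               // [mm] [@]
Load_f = Load(rws);             // [N] [@]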


To accomplish this, the k Calculation script is modified as shown in Fig. 44 [1]. As can be seen, line 14 stores the filtered rows in the variable rws using the convenient Scilab/MATLAB function find(). The rows of rws are then extracted from Ext and Load, as shown in lines 15 and 16. Finally, the variables x_data and y_data are updated with the new filtered arrays. We then tag the new filtered arrays as shown in [2]. After the changes are made, the file must be saved [3]. After clicking Scan File, the new array columns are added as shown in [5]. To check the new script, Process Row is clicked and the results are updated. Notice that k went from 0.144 to 0.142. Also notice that the filtered arrays are populated [5], with 5830 rows compared to the original array size of 7522 rows.

Figure 44: Filtering and Storing Arrays


6.1.4 Advanced Steps

Saving the Plot - It is possible to use the xs2png() function to save the generated plot to a picture file. An example is shown here where the plot is saved to a file alongside the data file. Several changes are made to the code (Fig. 45), as listed below:

line 30 - xdel() is used to clear any previous plots
line 34 - To thin out the data points plotted, every 50th data point is plotted using the index expression 1:50:$, which is interpreted as index/row 1, stepping by every 50th index/row, to the last row ($)
lines 34 & 35 - Note the calls to plot() format the lines using the extra inputs "o" and "--", in addition to the LineWidth and Color properties (Color accepts a color name or an [R,G,B] array)
lines 36 & 37 - The x-axis and y-axis are labeled
line 39 - Using msscanf(), the spring description in F2 is decomposed
line 40 - The spring number is extracted and stored
lines 41 & 42 - Using msprintf(), the plot title is built from the spring number, temperature, Type, and k value
line 43 - The title is added to the plot
line 47 - The file name and path are created using the automatic data_path variable, F1, F2, and Name
line 49 - The plot is printed to file using xs2png(); the call to gcf() grabs a reference to the current plot


 1  function y=line(x, coeffs, params)
 2      b = coeffs(1); // intercept
 3      m = coeffs(2); // slope
 4      y = m*x + b;
 5  endfunction
 6
 7  function y=err(coeffs, x_data, y_data, params)
 8      y = line(x_data, coeffs, params) - y_data;
 9  endfunction
10
11  // filter the data: remove the first and last 2 mm of extension
12  // so that the fit uses only the linear region
13
14  rws = find(Ext > 2 & Ext < 16);
15  Ext_f = Ext(rws);   // [mm] [@]
16  Load_f = Load(rws); // [N] [@]
17
18  x_data = Ext_f;
19  y_data = Load_f;
20  params = [];
21  guess_coeffs = [0;0];
22
23  [f, coeff_opt, g] = leastsq(list(err, x_data, y_data, params), guess_coeffs);
24
25  // solved spring constant
26  k = coeff_opt(2); // [N/mm] [#]
27  // solved intercept
28  i = coeff_opt(1); // [mm] [#]
29
30  xdel(winsid()) // clear previous plots
31
32  x_test = 0:1:18;
33  y_test = line(x_test, [b;m], []);
34  plot(x_data(1:50:$), y_data(1:50:$), "o")
35  plot(x_test, y_test, "--", "LineWidth", 2, "Color", "black")
36  xlabel("Extension [mm]")
37  ylabel("Load [N]")
38
39  F2_Parts = msscanf(F2, "%s %i"); // extract spring description parts
40  SpringNum = F2_Parts(2);
41  SpringTitle = msprintf("Spring=%i, Temperature=%s, Type=%s, k=%0.3f [N/mm]", ...
42                         SpringNum, F1, Type, k);
43  title(SpringTitle)
44  legend(["data"; "fun"])
45
46  // file path and name
47  plot_path = data_path + "\" + F1 + "\" + F2 + "\" + Name + ".png";
48  // write the plot to a picture file
49  xs2png(gcf(), plot_path);

Figure 45: Advanced Edits to k Calculation.sce


By running this script across all the filtered rows, plots are generated and saved for the respective datasets, creating a whole set of plots.
Writing a LaTeX Document Test Summary - It could be useful to have all the plots together in a grid for reference. This can be done with LaTeX, generated by a Scilab Table Operation script (see the next section). Images are included in a LaTeX document using the \includegraphics{filename} command. Therefore we create a Scilab script that builds the paths to all the plot images we just created and writes the commands to a LaTeX file. This file is read into a master LaTeX document using the \input command. The code to write the LaTeX files is shown in Fig. 46.

The code begins with a function, create_figures(), which helps write out the LaTeX code by building an array of LaTeX code lines. The hot and cold rows are filtered using the find() function and the respective LaTeX code is written to file, so two separate LaTeX files are created. These files are then inserted into a master document, as shown in Fig. 47. The resulting document is shown in the appendix.
function y=create_figures(rws)
    n = length(rws);
    tex = [];
    for i = 1:n
        // one \includegraphics command per plot, split across two lines
        tex = [tex; "\includegraphics[height=2.5in, type=png, ext=.png, read=.png]"];
        tex = [tex; msprintf("{../Data/%s/%s/%s}", F1(rws(i)), F2(rws(i)), Name(rws(i)))];
    end
    y = tex;
endfunction

// Write the 25C file
rws = find(F1 == "25C");
tex = create_figures(rws);
file_name = data_path + "\..\tex\plots_25C.tex";
mdelete(file_name);
write(file_name, tex);

// Write the 70C file
rws = find(F1 == "70C");
tex = create_figures(rws);
file_name = data_path + "\..\tex\plots_70C.tex";
mdelete(file_name);
write(file_name, tex);

Figure 46: Code to Create LaTeX Files
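For reference, each loop iteration writes a pair of lines like the following into the generated .tex file (the path shown here is hypothetical):

\includegraphics[height=2.5in, type=png, ext=.png, read=.png]
{../Data/25C/Spring 1/Spring 1}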


\documentclass[10pt,letterpaper]{article}
\usepackage{fullpage}
\usepackage{graphicx}
\usepackage{multicol}
\author{Brad Carman}
\title{Test Summary}
\begin{document}
\maketitle
\pagebreak
\section{New Springs}
\subsection{Temperature = 25$^{\circ}$C}
\begin{multicols}{2}
\input{plots_25C.tex}
\end{multicols}
\pagebreak
\subsection{Temperature = 70$^{\circ}$C}
\begin{multicols}{2}
\input{plots_70C.tex}
\end{multicols}
\end{document}
Figure 47: Code to Create LaTeX Image Grid

7 Table Operations

7.1 Plotting Springs k Values

Now that we have calculated the k values for all the datasets, we can plot them and make comparisons. First, move to the Table tab (Fig. 48 [1]) and click Send Full [2]. Note that the whole table turns pink, indicating that everything was sent to memory. Single values are stacked and sent as arrays, and arrays are stacked (by column) into matrices, as shown in Fig. 6.


Figure 48: Sending Table Data


As a quick visual check of what is in memory, type plot(k) [3] into the Command Box, or simply k [4], and hit Enter. As can be seen, k is plotted [2] and printed in the Console Window [5].
7.1.1 Using AutoPlot

Our goal now is to plot the k values to compare Hot vs. Cold. There are two options: 1) write a script that filters and plots the data, or 2) use the SciData feature AutoPlot. To try AutoPlot, simply click the AutoPlot button (Fig. 49 [3]). As can be seen, an AutoPlot sheet is added [4]. The column k can be selected for the y value [5]. Since we want a comparison across temperature, and we know the datasets were grouped by folder F1, we choose F1 in the Group 1 By selection box [6]. Replace the Group 1 Header with Temperature [7]. Now click the Test button [8]. You will see the plot now shows the k values separated by temperature [9]. Note that the plot automatically labels the y-axis and legend. Compared to how this would be done in Excel, this is a major time saver.


Figure 49: Using AutoPlot


7.1.2 Saving an AutoPlot Script

Once you have the AutoPlot set to your liking, you can save it as a script: simply add a title (Fig. 50 [1]) and click the Save button [2].

Figure 50: Saving an AutoPlot


When the AutoPlot is saved, it opens as another script, as shown in Fig. 51 [1]. This script can then be edited to make additional changes to the plot.


Figure 51: Saved AutoPlot Script


7.1.3 Editing an AutoPlot Script

If we want to add a few custom items to the plot generated by AutoPlot, we can do so by editing the script. As can be seen in Fig. 52 [1], a few lines of code have been added with the goal of drawing a line at the mean of each of the hot and cold groups. We can use the find() function again to get the rows of the hot and cold groups. Then we can plot a line from the minimum to the maximum x-axis values, as shown in lines 44 and 47. The min and max are found from the a structure variable: by opening and running the script in Scilab, it can be seen what information is stored in a (as shown in Fig. 53). Notice that the axis limits are available, along with all the plot settings.

Figure 52: Editing an AutoPlot Script
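The added lines are of the following kind (a minimal sketch assuming the group labels 25C/70C and the fields of the structure a shown in Fig. 53; the exact code in Fig. 52 may differ):

rws_cold = find(a.group1 == "25C");
rws_hot  = find(a.group1 == "70C");
// horizontal line at each group mean, spanning the x-axis limits
plot([a.x_min; a.x_max], mean(a.y(rws_cold)) * [1;1], "--")
plot([a.x_min; a.x_max], mean(a.y(rws_hot))  * [1;1], "--")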


-->a
 a  =
   y: [20x1 constant]
   group1: [20x1 string]
   group1Header: "Temperature"
   g1t: "m"
   group2Header: ""
   g2t: "c"
   name: "k Plot"
   lgnd_pos: "in_upper_right"
   x_text: ""
   y_text: "k [N/mm]"
   z_text: ""
   font_size: 3
   mark_size: 5
   grid: [1,1,1]
   log_flags: "nnn"
   show_legend: 1
   show_lines: "on"
   show_markers: "on"
   bar: 0
   figure_size: [600,600]
   plt: [1x1 handle]
   lgnd: [2x1 string]
   lgndH: [2x1 handle]
   x_min: 1
   y_min: 0.1351258
   x_max: 10
   y_max: 0.1430398

Figure 53: AutoPlot Structured Variable a


8 Summary

In summary, this tutorial shows the following benefits of Data Management combined with script-based analysis:

- A flexible database of experiments/data files/folders/information that clearly documents the stored data
- Filtering capability to quickly find and process information
- Separated Data and Math: one source to edit, keeping analysis and results in sync
- Script-based plotting: one script can quickly create many plots, with no time-consuming manual work
- Combined with LaTeX, reports can be automated

9 Appendix


[Generated Test Summary document: 1 New Springs - 1.1 Temperature = 25 °C, 1.2 Temperature = 70 °C]
