Data Management With SciData
Brad Carman
February 21, 2013
How to Keep Organized and Automate Your Data Analysis and Reporting
Contents

1 Example Scenario: Spring Testing with a Compression Test
  1.1 How the Sample Data is Organized
2 Quick Overview of SciData
  2.1 Installation
  2.2 Communication with Several Mathematical Programs
    2.2.1 Scilab - Free
    2.2.2 MathCAD - Easy/Intuitive
    2.2.3 MATLAB - Professional
  2.3 Data and Math are Separate
  2.4 Column Types
  2.5 Row Operations vs. Table Operations
    2.5.1 Row-by-Row Mode
    2.5.2 Table Mode
3 Importing Data
  3.1 Start a New SciData Project
  3.2 Copying Data Into the Data Folder
  3.3 The Import Button
4 Scientific Dataset Data File
  4.1 Test Data File Structure
  4.2 Using the Data Import Wizard
  4.3 Running the DataImport.sce Script
5 Organizing the Data
  5.1 Column Notes
  5.2 Categories
  5.3 Applying Filters
6 Adding Row-by-Row Calculations
  6.1 Calculating the Spring Rate, k
    6.1.1 Calculation Script
    6.1.2 Tagging Variables for SciData Import
    6.1.3 Filtering Data in Scilab
    6.1.4 Advanced Steps
7 Table Operations
  7.1 Plotting Springs k Values
    7.1.1 Using AutoPlot
    7.1.2 Saving an AutoPlot Script
    7.1.3 Editing an AutoPlot Script
8 Summary
9 Appendix
1 Example Scenario: Spring Testing with a Compression Test

To demonstrate the benefits of Data Management, an example scenario is used in which a batch of springs is tested with a compression test. The batch consists of New and Used springs, and the New springs were tested at both room temperature (25°C) and 70°C:

- New Springs: tested at 25°C and 70°C
- Used Springs: tested at 25°C
1.1 How the Sample Data is Organized
As the data is collected, it is stored in folders named by spring name/number, which are placed inside a folder representing the test temperature, as shown in Fig. 1. Also note that the New springs are numbered 1-10 and the Used springs are numbered in the form 00##. This is exactly the type of information that is easily lost as time passes. Clear documentation of your data is key to Data Management and is therefore one of the main goals of SciData.
2 Quick Overview of SciData

Data Management will be achieved in this example using the program SciData (Fig. 2). This program serves as a flexible database for organizing almost any type of data. The following sections detail the benefits of using this program, especially in contrast with a spreadsheet application such as Excel.
2.1 Installation
2.2 Communication with Several Mathematical Programs
One of the challenges of data management is the difficulty of consuming the data in an application for further analysis, reporting, etc. By default SciData communicates with Scilab, a free, open-source mathematical package that can perform the great majority of tasks required for most scientific and statistical needs. The communication with Scilab is done with the click of a button in SciData, so there is no need to save and structure files, manipulate data, or manually copy/paste to exchange data with Scilab. In addition to Scilab, SciData also works with MathCAD and MATLAB. A quick overview of the different mathematical packages is given below.
2.2.1 Scilab - Free
Scilab (Fig. 3) is a console-based mathematical package modeled after MATLAB. Commands can be typed one by one into the console window, or stored in a script (*.sce file) and executed line by line. To get started with Scilab, the help menu contains a tutorial; the present tutorial on SciData uses Scilab exclusively and will also help you get started.
Notes:
- Scilab must be installed in order to use SciData.
- Scilab can be downloaded for free from www.scilab.com.
2.2.2 MathCAD - Easy/Intuitive
MathCAD (Fig. 4) is a whiteboard-style application that displays math in its natural form. From the MathCAD website:

PTC Mathcad combines the ease and familiarity of an engineering notebook with the powerful features of a dedicated engineering calculations application. Its document-centric, WYSIWYG interface gives you the ability to solve, share, re-use and analyze your calculations without having to learn a new programming language. The result? You can spend more time engineering and less time documenting.
Notes:
- Unfortunately, MathCAD is not free, but it is reasonably priced.
- SciData does not work with MathCAD Prime. Instead you will need to use MathCAD v14 or v15.
- MathCAD is not required to use SciData.
2.2.3 MATLAB - Professional
MATLAB is fundamentally very similar to Scilab. Unlike Scilab it is not free and open source, but in return it offers the benefits of a professional software package: good documentation and support. More simply put, MATLAB is more powerful and robust.
Notes:
- MATLAB is not free and can be expensive depending on the need.
- The MATLAB and Scilab languages are very similar. Scilab can open MATLAB files (scripts and data
files).
- MATLAB is not required to use SciData.
2.3 Data and Math are Separate
Another benefit of SciData comes from the fact that Data and Math are separated. This allows a single math script to be applied across all the datasets in a collection (a set of experimental tests, for example) or across a filtered subset of datasets. In contrast, Excel requires formulas to be copied for each additional set of data included in a workbook. In SciData, edits, fixes, changes, etc. are made in one source; in Excel, a simple change to the analysis could require many formulas to be edited, which can be difficult to track and often leads to mistakes.
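This separation can be sketched in plain Python. This is a hypothetical illustration, not SciData's actual mechanism: the dataset names and numbers are invented, but the point stands that one analysis function is the single source of truth, applied across every dataset.

```python
# Hypothetical sketch of "one script, many datasets" (invented names/values;
# SciData does this by sending each filtered row to the same script).
def spring_rate(ext, load):
    """Least-squares slope of load vs. extension."""
    n = len(ext)
    mx, my = sum(ext) / n, sum(load) / n
    num = sum((x - mx) * (y - my) for x, y in zip(ext, load))
    den = sum((x - mx) ** 2 for x in ext)
    return num / den

datasets = {
    "Spring 1": ([0, 1, 2, 3], [0.00, 0.15, 0.30, 0.45]),
    "Spring 2": ([0, 1, 2, 3], [0.00, 0.14, 0.28, 0.42]),
}

# The single function is the one source to edit; a fix here refreshes every
# result, with no per-dataset formulas to track down.
results = {name: spring_rate(ext, load)
           for name, (ext, load) in datasets.items()}
```

A change to `spring_rate` updates every entry of `results` on the next run, which is exactly the behavior the spreadsheet approach lacks.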
2.4 Column Types
The Database Table in SciData contains several different column types. The table below (Tbl. 1) describes their differences. Columns are either Locked, Input, or Result:

- Locked: cells cannot be edited; for example the ID column, which is controlled by SciData.
- Input: cells which can be edited and are used to describe the row, such as a category or a number capturing a variable test condition.
- Result: cells which collect information generated from an analysis.
Column Type   Data Type   Example                                        Default
Folder        Text        Data\Springs\Test1 (F1 = Springs, F2 = Test1)  Locked
Category      Text        Type=New                                       Input
Constant      Number      x=1                                            Input
Result        Number      k=2                                            Result
Text Result   Text        abc                                            Result
Array         Array       x = [1;2;3;4;5]                                Result
2.5 Row Operations vs. Table Operations
The fundamental purpose of SciData is to easily send data to memory from the source database. You can filter the database to send only the data of interest, and you can send the data in two different modes:

- Row-by-Row
- Table

The SciData Database Table contains several different types of information: numbers, arrays, and text. In Row-by-Row mode, the information is sent as-is. In Table mode, the information is sent stacked, as shown in Fig. 6: numbers and text (single-value items) are stacked into arrays, and information starting as arrays is stacked side by side into a matrix.
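The two modes can be illustrated with a small Python sketch. The rows below are invented; SciData performs this stacking internally when sending a Table.

```python
# Sketch of the two send modes (invented rows). Row-by-Row sends one row's
# values as-is; Table mode stacks single values into arrays and stacks the
# per-row arrays side by side into a matrix (a list of columns here).
rows = [
    {"k": 0.144, "Type": "New", "Ext": [0, 1, 2]},
    {"k": 0.139, "Type": "Used", "Ext": [0, 1, 2]},
]

# Table mode: single values stack into arrays...
k = [row["k"] for row in rows]
Type = [row["Type"] for row in rows]

# ...and per-row arrays stack column by column into a matrix
Ext = list(zip(*(row["Ext"] for row in rows)))  # 3 rows x 2 columns
```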
2.5.1 Row-by-Row Mode
2.5.2 Table Mode
The Table tab is shown in Fig. 10 and offers 4 different buttons for memory operations. The table below the buttons shows their different functions. As can be seen, the Send Lite and Process Lite buttons do not send arrays. This is to conserve memory if needed: in some cases a large Database Table with large arrays can easily consume all the available memory, so if the arrays are not needed, the Lite option is available. Also, similar to the Row tab, the Process buttons both send data to memory and execute the target script, but notice that Step 3 is not available in this mode. Any results generated in Table mode need to be saved by other means.
3 Importing Data
Now that the basic operation of SciData is understood, the example analysis with the spring data will begin.
The first step to analyzing data is to import it.
3.1 Start a New SciData Project

Before importing data we must create a project. Simply start SciData (Start > SciData), which brings up the start screen (Fig. 12). Choose Start a New File and save the file as Spring Analysis.sdat.
3.2 Copying Data Into the Data Folder
The first way to import data into SciData for this example is to copy the contents of the Spring folder (Fig. 1) into the Data folder (note the Open Data Folder button, Fig. 14 [5], is available to quickly navigate there). When the copy operation is complete, click the Scan Data Folder button [2] under the Data tab [1]. As can be seen in [3], 28 rows are imported. With each row that is imported, a note is shown in the Console Window that reads:

>> CSV file is not a SDS Standard file, renaming to ... *.dat

When the Scan Data Folder button is used, SciData searches the Data folder for the following:

- Scientific Dataset (SDS) Files: a data file standard developed by Microsoft Research
- *.csv: comma-separated text file
- *.nc: binary (NetCDF)
3.3 The Import Button
It is also possible to use the Import Data button (Fig. 16 [1]). In this case a dialog is presented [2] to copy
selected files and folders into the Data folder followed by an automatic scan.
4 Scientific Dataset Data File
As mentioned, SciData stores information using the Scientific Dataset (SDS) standard developed by Microsoft Research. Storing data using an established standard has the benefit of making it easier to share and access. Furthermore, if used correctly, each data file should be self-descriptive and contain all the important related data. For our example, it is important to store the two main conditions: temperature and spring condition (New vs. Used). The SDS format allows for this. It should also be noted that the SDS package contains a viewer tool (Fig. 17) which can be useful for sharing your data.
4.1 Test Data File Structure
The data saved from the experiment is structured as shown below (Fig. 18a). If we were using Excel to analyze this data, we would need to import each file and define all the data ranges, which is time consuming since each data file is a little different in length (not to mention that all of this would need to be done manually). In contrast, SciData automates this process and converts all these data files to the SDS format, as shown in Fig. 18b. Note that the Single Values area contains the extra metadata that makes the file self-descriptive: we can see the test temperature was 25°C and the spring condition, or Type, was Used.
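The idea of a self-descriptive file can be sketched in Python. Note the layout below is invented for illustration and is not the exact SDS *.csv layout: the point is that single-value metadata travels with the measured columns, so the file carries its own test conditions.

```python
# Sketch of a self-descriptive data file (invented layout, not the exact SDS
# *.csv format): metadata "single values" sit next to the measured columns.
raw = """Temperature,25
Type,Used
Time,Load,Ext
0.0,0.00,0.0
0.1,0.15,1.0
0.2,0.29,2.0
"""

lines = raw.strip().splitlines()
meta = dict(line.split(",") for line in lines[:2])                  # single values
header = lines[2].split(",")                                        # column names
data = [[float(v) for v in line.split(",")] for line in lines[3:]]  # measurements
```

Because the conditions ride along inside the file, no external notes are needed to recall that this run was a Used spring at 25°C.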
4.2 Using the Data Import Wizard
To properly import the data from the *.dat files to the SDS *.csv format, a wizard is provided to make this process much easier. The following figures walk through the steps.

Step 1: First go to the Data tab (Fig. 20 [1]) and click the Build an Import Script button [2].

Step 2: Next, browse [3] for an example data file. Choose any of the converted data files. The files are comma delimited, but note this can be changed (Fig. 21a [4]).

Step 3: Now, to import the Time column, click the first cell of the data column [6] and then click OK (Fig. 21b [7]).

Step 4: Specify it as a data column [8], name it [9], add the appropriate unit [10] (optional), and click Add (Fig. 22a [11]).

Step 5: Notice a line is added to the code window (Fig. 22b [12]).

Repeat 3-5: Add the Load and Ext columns by following the same process.

Save: Finally, click the Save button. A script named DataImport.sce is saved to the Math\Row folder.
When completed, the generated script should look like the one shown in Fig. 19.
if atomsIsInstalled("csv_readwrite") then
    i = i+1;
else
    disp("csv_readwrite is not installed, installing now, please wait...")
    atomsInstall("csv_readwrite")
end

// Import Values...
Time = find_value(9, "column", 1, ","); // [sec] [@]
Load = find_value(9, "column", 2, ","); // [N] [@]
Ext = find_value(9, "column", 3, ","); // [mm] [@]
4.3 Running the DataImport.sce Script
After hitting Save (Fig. 22b [13]) in the Import Script Wizard, SciData adds the necessary columns (Fig. 23 [1]) and displays the message [2] to run the script [4]. Moving to the Row tab [3], you can see that the DataImport.sce file is selected [4]. We simply need to hit the All Filtered Rows button [5], and all the filtered rows in the Database Table will be batch processed with the selected script, one by one.
5 Organizing the Data
Now that our data is prepared, it is important to add some descriptive information. This helps define what the data is for future reference, and it also helps us sort and filter for the data we want.
5.1 Column Notes
There are several places to add notes with extra useful information. There is a default Notes column to allow for row-specific notes, such as describing which experimental runs had anomalies. The second place to add notes is the columns themselves, to describe what they mean; this can be done using the Column Editor (Fig. 25). All of the columns can have notes added. When using folders in the Data folder, it is a good practice to add notes describing what each folder represents. The first folder in the present example represents the environmental test temperature, and the second folder represents the spring designation. Therefore, click Edit Columns [1], select Folder [2], and add notes [3] to describe F1 and F2.
5.2 Categories
It is also possible to add additional columns to describe the data. For the present example, the dataset does not yet describe which tests were done on New springs and which on Used springs. Therefore, we can add a category Type as shown in Fig. 27. First, select Category [1] from the column type list. Then type in the category name Type [2]. Click Add [3] and the column will be added to the list. Category values can be added for later selection using the text box [4]: type a value and then click Add [5]. Add New and Used to the list [6].
To apply the category values, the rows are first sorted by the date column (Fig. 28 [1]), since it is known that the New springs were tested first and the Used springs last. It is also known that the New springs were numbered 1-10; therefore, the rows after the first 10 springs must be the Used springs. These rows are selected [2] and the Used value is set with a right mouse click [3-4].
5.3 Applying Filters
To apply a filter, click the filter button either in the column header (Fig. 29 [1]) or in the Column List [3]. Note, to get to the Column List, click the button [2] to expand the control. The filter editor for the particular column appears at the bottom right [4]. To filter for the New springs, simply check the appropriate box [5] and click OK [6].
6 Adding Row-by-Row Calculations
SciData allows you to process each row in the data table individually, or the whole Database Table at once.
In our case we need to calculate the spring rate for each test, which would be a Row-by-Row process.
6.1 Calculating the Spring Rate, k
The goal of the present example is to compare spring rates, so we need to calculate this value for each of the datasets. First, focus on a single dataset. Plotting the data is a good first step, so click the first row in the table and click Send Row.
6.1.1 Calculation Script
Previously we used a row-by-row script to import the data; that script was generated automatically, so we did not need to write any code. In this step we will add another script file and write the Scilab code to calculate the spring rate manually. The first step is to add a new Scilab file by clicking the Add New File button (Fig. 31 [1]) and choosing Add Scilab File [2]. Choose the file name k Calculation [3] and click OK [4].
3. As can be seen in the Variable Browser, Scilab is indeed loaded with the current memory state from SciData.
4. Use the leastsq() function to calculate the optimal coefficients for the line() function. Note this function returns 3 variables, but we are only concerned with coeff_opt, which holds the optimal solved coefficients [line 19].

5. The spring coefficient and intercept are extracted from coeff_opt [lines 21-22].

6. Finally, line() is plotted against the data to check the result [lines 24-27].
 1  function y=line(x, coeffs, params)
 2      i = coeffs(1); // intercept
 3      k = coeffs(2); // slope
 4      y = x*k + i;
 5  endfunction
 6
 7  function y=err(coeffs, x_data, y_data, params)
 8      y = line(x_data, coeffs, params) - y_data;
 9  endfunction
10
11  x_data = Ext;
12  y_data = Load;
13
14  params = [];
15  guess_coeffs = [0;0];
16
17
18
19  [f, coeff_opt, g] = leastsq(list(err, x_data, y_data, params), guess_coeffs)
20
21  k = coeff_opt(2); // solved spring constant
22  i = coeff_opt(1); // solved intercept
23
24  y_test = line(x_data, [i;k], []);
25
26  plot(x_data, [y_data y_test])
27  legend(["data"; "fun"])
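As a cross-check of the fit above, the same straight-line model y = k*x + i can be solved in closed form. The Python sketch below uses the least-squares normal equations with invented data; the document's own analysis uses Scilab's leastsq().

```python
# Cross-check of the straight-line fit in pure Python (invented data; the
# real analysis fits the imported Ext/Load arrays in Scilab).
def fit_line(x, y):
    """Return (i, k): least-squares intercept and slope."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    i = (sy - k * sx) / n                          # intercept
    return i, k

Ext = [0.0, 2.0, 4.0, 6.0, 8.0]        # [mm]
Load = [0.05, 0.33, 0.61, 0.89, 1.17]  # [N], exactly 0.05 + 0.14*Ext

i, k = fit_line(Ext, Load)  # k is the spring rate in N/mm
```

Since the invented data is exactly linear, the fit recovers the slope 0.14 N/mm and intercept 0.05 N; real test data will of course scatter around the line.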
6.1.2 Tagging Variables for SciData Import
In Row-by-Row mode, data can not only be sent to Scilab/MathCAD/MATLAB, it can also be retrieved and stored. Three types of information can be retrieved:

- Single Numbers
- Text
- Arrays

After SciData runs a script, it looks for results to be retrieved. There are 3 different result column types to match the types listed above, with the respective icons. These columns can either be added manually using the Column Editor, or they can be added automatically by tagging variables in the script. In the current example, we need to store the spring coefficient, k, which is a Single Number, so we tag the variable with #. Tagging in Scilab is done with a comment after the variable, structured as shown in Fig. 39.
variable = ...; // [optional unit] [#]
- Ensures the same consistent analysis across all the data. Furthermore, changes to the analysis can then be easily refreshed consistently across all the data.
6.1.3 Filtering Data in Scilab

We can see from each of the data files that the ends of the Ext vs. Load curve are not very linear (Fig. 43). If we are after the general spring constant and want to ignore the effects at the ends of the curve, we can easily filter the data first. What we will do, then, is modify our k Calculation script file to first filter the arrays Ext and Load so that the first and last 2 mm are removed. The curve fit is then applied to the filtered arrays. Furthermore, for later use, the filtered arrays will be stored in SciData.
Once the changes are made, the file must be saved [3]. After clicking Scan File, the new array columns are added as shown in [5]. To check the new script, Process Row is clicked and the results are updated. Notice that k went from 0.144 to 0.142. Also notice that the filtered arrays are populated [5]: the number of rows is 5830, compared with the original array size of 7522 rows.
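The filtering step can be sketched in Python as well. The data below is invented; the real script operates on the imported Ext and Load arrays using Scilab's find().

```python
# Sketch of the filtering step (invented data). Points outside 2-16 mm are
# dropped before the curve fit, mirroring find(Ext > 2 & Ext < 16) in Scilab.
Ext = [0.5, 1.0, 3.0, 6.0, 9.0, 12.0, 15.0, 17.0]        # [mm]
Load = [0.20, 0.25, 0.44, 0.86, 1.28, 1.70, 2.12, 2.60]  # [N], ends deviate

rws = [j for j, e in enumerate(Ext) if 2 < e < 16]  # surviving row indices
Ext_f = [Ext[j] for j in rws]    # filtered extension
Load_f = [Load[j] for j in rws]  # filtered load; fit these instead
```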
6.1.4 Advanced Steps
Saving the Plot: It is possible to use the xs2png() function to save the generated plot to a picture file. An example is shown here where the plot is saved alongside the data file. Several changes are made to the code (Fig. 53), as listed below:

- line 30: xdel() is used to clear any previous plots.
- line 34: To thin out the plotted data points, we plot every 50th data point using the call 1:50:$, which is interpreted as index/row 1, spaced by every 50th index/row, until the last row ($).
- lines 34 & 35: Note the calls to plot() format the lines using the extra inputs "o" and "--", in addition to the "LineWidth" and "Color" inputs. The "Color" input can also be supplied an [R,G,B] array.
- lines 36 & 37: The x-axis and y-axis are labeled.
- line 39: Using msscanf(), the spring description from F2 is decomposed.
- line 40: The Spring Number is extracted and stored.
- line 41: Using msprintf(), the plot title is built from the Spring Number, Temperature, Type, and k value.
- line 43: The title is added to the plot.
- line 47: The file name and path are created using the automatic data_path variable, F1, F2, and Name.
- line 49: The plot is printed to file using xs2png(). The call to gcf() grabs a reference to the current plot.
 1  function y=line(x, coeffs, params)
 2      b = coeffs(1); // intercept
 3      m = coeffs(2); // slope
 4      y = m*x + b;
 5  endfunction
 6
 7  function y=err(coeffs, x_data, y_data, params)
 8      y = line(x_data, coeffs, params) - y_data;
 9  endfunction
10
11  rws = find(Ext > 2 & Ext < 16);
12  Ext_f = Ext(rws);   // [mm] [@]
13  Load_f = Load(rws); // [N] [@]
14
15  x_data = Ext_f;
16  y_data = Load_f;
17
18  params = [];
19  guess_coeff = [0;0];
20
21  [f, coeff_opt, g] = leastsq(list(err, x_data, y_data, params), guess_coeff)
22
23  // solved spring constant
24  k = coeff_opt(2); // [N/mm] [#]
25  // solved intercept
26  i = coeff_opt(1); // [mm] [#]
27
28
29
30  xdel(winsid()) // clear previous plots
31
32  x_test = 0:1:18;
33  y_test = line(x_test, [i;k], []);
34  plot(x_data(1:50:$), y_data(1:50:$), "o")
35  plot(x_test, y_test, "--", "LineWidth", 2, "Color", "black")
36  xlabel("Extension [mm]")
37  ylabel("Load [N]")
38
39  F2_Parts = msscanf(F2, "%s%i"); // Extract Spring Description Parts
40  SpringNum = F2_Parts(2);
41  SpringTitle = msprintf("Spring=%i, Temperature=%s, Type=%s, k=%0.3f [N/mm]", ..
42      SpringNum, F1, Type, k)
43  title(SpringTitle)
44  legend(["data"; "fun"])
45
46  // file path and name
47  plot_path = data_path + "\" + F1 + "\" + F2 + "\" + Name + ".png";
48  // write the plot to a picture file
49  xs2png(gcf(), plot_path);
The resulting report is shown in the appendix.
function y=create_figures(rws)
    n = length(rws);
    tex = [];
    for i = 1:n
        tex = [tex; "\includegraphics[height=2.5in,"];
        tex = [tex; "type=png,"];
        tex = [tex; "ext=.png,"];
        tex = [tex; "read=.png]"];
        tex = [tex; msprintf("{../Data/%s/%s/%s}", F1(rws(i)), F2(rws(i)), Name(rws(i)))];
    end
    y = tex;
endfunction

// Write the 25C File
rws = find(F1 == "25C")
tex = create_figures(rws);
file_name = data_path + "\..\tex\plots_25C.tex";
mdelete(file_name);
write(file_name, tex);

// Write the 70C File
rws = find(F1 == "70C")
tex = create_figures(rws);
file_name = data_path + "\..\tex\plots_70C.tex";
mdelete(file_name);
write(file_name, tex);
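The same report-fragment idea can be sketched in Python. The rows and paths below are hypothetical, and the type/ext/read options of the Scilab version are omitted for brevity: one \includegraphics entry is built per filtered row and joined into a .tex fragment.

```python
# Python sketch of the LaTeX image-grid fragment (hypothetical rows/paths).
rows = [("25C", "Spring 1", "test1"), ("25C", "Spring 2", "test1")]

tex = ["\\includegraphics[height=2.5in]{../Data/%s/%s/%s.png}" % r
       for r in rows]
content = "\n".join(tex)  # one \includegraphics entry per filtered row
```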
\documentclass[10pt,letterpaper]{article}
\usepackage{fullpage}
\usepackage{graphicx}
\usepackage{multicol}
\author{Brad Carman}
\title{Test Summary}
\begin{document}
\maketitle
\pagebreak
\section{New Springs}
\subsection{Temperature = 25$^{\circ}$C}
\begin{multicols}{2}
\input{plots_25C.tex}
\end{multicols}
\pagebreak
\subsection{Temperature = 70$^{\circ}$C}
\begin{multicols}{2}
\input{plots_70C.tex}
\end{multicols}
\end{document}
Figure 47: Code to Create LaTeX Image Grid
7 Table Operations

7.1 Plotting Springs k Values
Now that we have calculated the k values for all the datasets, we can plot them and make comparisons. First, move to the Table tab (Fig. 48 [1]) and click Send Full [2]. Note that the whole table turns pink, representing that everything was sent to memory. Single values are stacked and sent as arrays, and arrays are stacked (by column) into matrices, as shown in Fig. 6.
7.1.1 Using AutoPlot
Our goal now is to plot the k values to compare Hot vs. Cold. There are two options: 1) write a script that filters and plots the data, or 2) use the SciData feature AutoPlot. To try out AutoPlot, simply click the AutoPlot button (Fig. 49 [3]). As can be seen, an AutoPlot sheet is added [4]. The column k can be selected for the y value [5]. If we want to see a comparison by temperature, we know that the datasets were grouped by the folder F1; therefore, choose F1 in the Group 1 By selection box [6]. Replace the Group 1 Header with Temperature [7]. Now click the Test button [8]. You will see the plot now shows the k values separated by temperature [9]. Note that the plot automatically labels the y-axis and legend. Compared to how this would be done in Excel, this is a major time saver.
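What the Group 1 By selection does can be sketched by hand in Python (the k values below are invented): split the stacked k array according to the matching F1 entry, then summarize each group.

```python
# Hand-rolled version of AutoPlot's "Group 1 By" (invented k values): split
# the stacked k array by the matching F1 (temperature folder) entry.
k = [0.144, 0.142, 0.139, 0.137]
F1 = ["25C", "25C", "70C", "70C"]

groups = {}
for temp, value in zip(F1, k):
    groups.setdefault(temp, []).append(value)

# One summary number per group, e.g. for labeling or comparison
means = {temp: sum(v) / len(v) for temp, v in groups.items()}
```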
7.1.2 Saving an AutoPlot Script
Once you have the AutoPlot set to your liking, you can save it as a script. Simply add a title (Fig. 50 [1])
then click the Save button [2].
7.1.3 Editing an AutoPlot Script
If we want to add a few custom items to the plot generated by AutoPlot, we can edit the script. As can be seen in Fig. 52 [1], a few lines of code have been added with the goal of drawing a line at the mean of the hot and cold groups. We can use the find() function again to get the rows of the hot and cold groups. Then we can plot a line from the minimum to the maximum x-axis value, as shown in lines 44 and 47. The min and max are found from the structure variable a. By opening and running the script in Scilab, it can be seen what information is stored in a (as shown in Fig. 53). Notice that we have the axis limits available, along with all the plot settings.
--> a
 a  =

   y: [20x1 constant]
   group1: [20x1 string]
   group1Header: "Temperature"
   g1t: "m"
   group2Header: ""
   g2t: "c"
   name: "k Plot"
   lgnd_pos: "in_upper_right"
   x_text: ""
   y_text: "k [N/mm]"
   z_text: ""
   font_size: 3
   mark_size: 5
   grid: [1,1,1]
   log_flags: "nnn"
   show_legend: 1
   show_lines: "on"
   show_markers: "on"
   bar: 0
   figure_size: [600,600]
   plt: [1x1 handle]
   lgnd: [2x1 string]
   lgndH: [2x1 handle]
   x_min: 1
   y_min: 0.1351258
   x_max: 10
   y_max: 0.1430398
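The mean-line construction can be sketched in Python (invented values; in the real script the x endpoints come from the AutoPlot structure's stored axis limits, a.x_min and a.x_max, while here they are plain variables).

```python
# Sketch of the added mean lines (invented k values and limits). Each line is
# horizontal at the group mean and spans the plot's stored x-axis limits.
x_min, x_max = 1, 10
cold = [0.144, 0.142, 0.143]  # k values, 25C group
hot = [0.139, 0.137, 0.138]   # k values, 70C group

def mean_line(values, x_min, x_max):
    """Endpoints of a horizontal line at the mean, spanning the axis."""
    m = sum(values) / len(values)
    return [(x_min, m), (x_max, m)]

cold_line = mean_line(cold, x_min, x_max)
hot_line = mean_line(hot, x_min, x_max)
```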
8 Summary
In summary, this tutorial shows the following benefits of Data Management combined with script-based analysis:

- A flexible database of experiments/data files/folders/information that clearly documents the stored data
- Filtering capability to quickly find and process information
- Separated Data and Math: one source to edit, keeping analysis and results in sync
- Script-based plotting: one script can quickly create many plots, with no time-consuming manual work needed
- Combined with LaTeX, reports can be automated
9 Appendix
1 New Springs
1.1 Temperature = 25°C
1.2 Temperature = 70°C