
FOSS (Free and Open Source Software)

INTRODUCTION
• Free and open-source software (FOSS) is software that
can be classified as both free software(freedom to users)
and open-source software
• It is a combination of free software and open-source software.
• Anyone is freely licensed to use, copy, study, and change
the software in any way, and the source code is openly
shared so that people are encouraged to voluntarily
improve the design of the software.
WHY FOSS
• S/w users have freedom to run, copy, distribute, study,
change and improve the software.
• Decreased software costs.
• Personal control, customizability and freedom.
• Privacy and security.
• Quality, collaboration and efficiency
• Gives users more control over their own hardware.
FREE SOFTWARE
• Richard Stallman's Free Software Definition, adopted by
the Free Software Foundation (FSF), defines free
software as a matter of liberty not price.
• Proposed in 1986
• It upholds the Four Essential Freedoms.
• (0) to run the program, (1) to study and change the
program in source code form, (2) to redistribute exact
copies, and (3) to distribute modified versions.
FREEDOM 0
• The freedom to run the program as you wish, for any
purpose
• It means the freedom for any kind of person or
organization to use it on any kind of computer system, for
any kind of overall job without being required to
communicate about it with the developer.
• It is the user's purpose that matters, not the developer's
purpose.
• The program's functionality must not be restricted: users may run it for any purpose they choose.
FREEDOM 1
• The freedom to study how the program works, and
change it so it does your computing as you wish.
• Access to the source code is a precondition for this.
• Freedom 1 includes the freedom to use your changed
version in place of the original.
• One important way to modify a program is by merging in
available free subroutines and modules.
• If the program's license says that you cannot merge in a
suitably licensed existing module, or if it requires you to be
the copyright holder of any code you add, then the
license is too restrictive to qualify as free.
FREEDOM 2
• The freedom to redistribute copies so you can help others
• The freedom to redistribute copies must include binary or
executable forms of the program, as well as source code,
for both modified and unmodified versions.
FREEDOM 3
• The freedom to distribute copies of your modified versions
to others.
• Freedom 3 includes the freedom to release your modified
versions as free software. A free license may also permit
other ways of releasing them; in other words, it does not
have to be a copyleft license.
• By doing this you can give the whole community a chance
to benefit from your changes.
OPEN SOURCE SOFTWARE
• Open-source software (OSS) is a type of computer
software in which source code is released under a license
in which the copyright holder grants users the rights to
use, study, change, and distribute the software to anyone
and for any purpose.
GNU HISTORY
• The name “GNU” is a recursive acronym for “GNU's Not
Unix!”
• GNU was launched by Richard Stallman (rms) in 1983.
• The Free Software Foundation was founded in October
1985, initially to raise funds to help develop GNU
• The primary and continuing goal of GNU is to offer a Unix-
compatible system that would be 100% free software.
• Unlike Unix, GNU gives its users freedom.
• GNU packages include user-oriented applications,
utilities, tools, libraries, even games—all the programs
that an operating system can usefully offer to its users.
• The GNU packages have been designed to work together
so we could have a functioning GNU system.
• The ultimate goal is to provide free software to do all of
the jobs computer users want to do
• By the 1980s, almost all software was proprietary, which
means that it had owners who forbid and prevent
cooperation by users. This made the GNU Project
necessary.
• Every computer user needs an operating system; if there
is no free operating system, then you can't even get
started using a computer without resorting to proprietary
software. So the first item on the free software agenda
obviously had to be a free operating system.
• We decided to make the operating system compatible
with Unix because the overall design was already proven
and portable, and because compatibility makes it easy for
Unix users to switch from Unix to GNU.
• By 1990 we had either found or written all the major
components except one—the kernel.
• Then Linux, a Unix-like kernel, was developed by Linus
Torvalds in 1991 and made free software in 1992.
• Combining Linux with the almost-complete GNU system
resulted in a complete operating system: the GNU/Linux
system.
• The principal version of Linux now contains non-free
firmware “blobs”; free software activists now maintain a
modified free version of Linux, called Linux-libre.
• We aim to provide a whole spectrum of software,
whatever many users want to have. This includes
application software. See the Free Software Directory for
a catalogue of free software application programs.
• We also want to provide software for users who are not
computer experts. Therefore we developed a graphical
desktop (called GNOME) to help beginners use the GNU
system.
LICENSING FREE SOFTWARE
• The GNU General Public License (GNU GPL or simply
GPL) is a series of widely-used free software licenses that
guarantee end users the freedom to run, study, share,
and modify the software.
• The licenses were originally written by Richard Stallman,
founder and former president of the Free Software Foundation (FSF), for
the GNU Project, and grant the recipients of a computer
program the rights of the Free Software Definition
• The GPL series are all copyleft licenses, which means
that any derivative work must be distributed under the
same or equivalent license terms.
COPYLEFT LICENSES
• Copyleft is the practice of granting the right to freely
distribute and modify intellectual property with the
requirement that the same rights be preserved in
derivative works created from that property
• It is a protective license: it grants use rights but forbids
proprietization.
VARIOUS LINUX DISTRIBUTIONS
• There are commercially-backed distributions, such as
Fedora (Red Hat), openSUSE (SUSE) and Ubuntu
(Canonical Ltd.), and entirely community-driven
distributions, such as Debian, Slackware, Gentoo and
Arch Linux.
UBUNTU BASED
RED HAT BASED
UNIT 2
• GNU and linux installation – Boot process, Commands
Using bash features, The man pages, files and file
systems, File security, Partitions, Processes, Managing
processes, I/O redirection, Graphical environment,
Installing software, Backup techniques.
SYSTEM ADMINISTRATION
Boot process, Commands Using bash features
• To boot a computer is to load an operating system into
the computer's main memory, or random access memory
(RAM).
• The boot sequence starts when the computer is turned
on and is completed when the kernel is initialized and the
system is launched.
• The startup process then takes over and finishes the
task of getting the Linux computer into an operational
state.
• On some platforms (Android devices, for example), the
bootloader is a vendor-provided image responsible for
bringing up the kernel on the device.
• There it also guards the device state and is responsible for
initializing the Trusted Execution Environment (TEE) and
binding its root of trust.
• The GRUB (Grand Unified Bootloader) is
a bootloader available from the GNU project.
• A bootloader is very important, as it is impossible to start
an operating system without it. It is the first program that
runs when the computer is switched on.
• The bootloader transfers the control to the
operating system kernel.
• The bootloader comprises several components, including
the splash screen.
• On modern Linux systems, the boot-up sequence is handled
by the GRUB2 bootloader and the startup sequence by the
systemd initialization system.
• Once your Linux system is installed, rebooting the system
is generally straightforward.
• There are several possibilities for configuring your boot
process. The most common choices are:
1. Boot Linux from a bootable disk, most likely a CD or an
installation CD/DVD, leaving another operating system to
boot from the hard drive.
2. Use the Linux Loader, LILO. This used to be the
traditional method of booting and lets you boot both
Linux and other operating systems.
3. Use GRUB (Grand Unified Bootloader), the GNU
graphical boot loader and command shell. Like LILO, GRUB
lets you boot both Linux and other operating systems.
GRUB, which has additional functionality not found in LILO,
is now the de facto Linux boot loader.
• The first sector of every hard disk is known as the boot
sector and contains the partition table for that disk and
possibly also code for booting an operating system.
• The boot sector of the first hard disk is known as the
master boot record (MBR), because when you boot the
system, the BIOS transfers control to a program that lives
on that sector along with the partition table.
LILO (Linux Loader)
• LILO can boot other operating systems, such as Windows
or any of the BSD systems.
• During installation, some Linux distributions provide the
opportunity to install LILO (most now install GRUB by
default). LILO can also be installed later if necessary.
• LILO can be installed on the MBR of your hard drive or
as a secondary boot loader on the Linux partition.
• LILO consists of several pieces, including the boot loader
itself, a configuration file (/etc/lilo.conf), a map file
(/boot/map) containing the location of the kernel, and
the lilo command (/sbin/lilo), which reads the configuration
file and uses the information to create or update the map
file and to install the files LILO needs.
• One thing to remember about LILO is that it has two
aspects: the boot loader and the lilo command.
• The lilo command configures and installs the boot loader
and updates it as necessary.
• The boot loader is the code that executes at system boot
time and boots Linux or another operating system.
• You can make a rescue CD for LILO with the command
mkrescue --iso, which makes an image that can be
burned to CD.
• Use mkrescue by itself or with other options to make a
rescue floppy disk. See the mkrescue manpage for more
information.
The lilo Command
• You need to run the lilo command to install the LILO boot
loader and to update it whenever the kernel changes or to
reflect changes to /etc/lilo.conf. Note that if you replace
your kernel image without rerunning lilo, your system may
be unable to boot.
• The path to the lilo command is usually /sbin/lilo. The
syntax of the command is:
– lilo [options]
LILO COMMAND OPTIONS
-C config-file
• Specify an alternative to the default configuration file
(/etc/lilo.conf). lilo uses the configuration file to determine which
files to map when it installs LILO.
-I label
• Print the path to the kernel specified by label to standard output,
or an error message if no matching label is found. For example:
• $ lilo -I linux
• /boot/vmlinuz-2.0.34-0.6
-q
• List the currently mapped files. lilo maintains a file (/boot/map by
default) containing the name and location of the kernel(s) to boot.
Running lilo with this option prints the names of the files in the
map file to standard output, as in this example (the asterisk
indicates that linux is the default):
• $ lilo -q
• linux *
-r root-directory
• Specify that before doing anything else, lilo should chroot to the
indicated directory. Used for repairing a setup from a boot CD or
floppy; you can boot from that disk but have lilo use the boot files
from the hard drive. For example, if you issue the following
commands, lilo will get the files it needs from the hard drive:
• $ mount /dev/hda2 /mnt
• $ lilo -r /mnt
-R command-line
• Set the default command for the boot loader the next time it
executes. The command executes once and then is removed by the
boot loader. This option typically is used in reboot scripts, just
before calling shutdown -r.
-t
• Indicate that this is a test; do not really write a new boot sector
or map file. Can be used with -v to find out what lilo would do
during a normal run.
-u device-name
• Uninstall lilo by restoring the saved boot sector from
/boot/boot.nnnn, after validating it against a timestamp.
device-name is the name of the device on which LILO is
installed, such as /dev/hda2.
-U device-name
• Like -u, but do not check the timestamp.
-V
• Print the lilo version number.
LILO BOOT ERRORS
• As LILO loads itself, it displays the letters of the word LILO, one at
a time as it proceeds. Once LILO is correctly loaded, you’ll see
the full word printed on the screen. If nothing prints, then LILO has
not been loaded at all; most likely LILO isn’t installed or it is
installed, but on a partition that is not active.
• If LILO started loading, but there was a problem, you can see how
far it got by how many letters printed:
L
• The first stage boot loader is loaded and running, but it can’t load
the second stage. There should be an error code indicating the
type of problem; usually the problem is a media failure or bad disk
parameters. See the LILO User’s Guide for the meaning of the
error codes.
LI
• The first stage boot loader loaded the second stage but was not
able to run it. The problem is most likely bad disk parameters or
the file /boot/boot.b (the boot sector) was moved but
the lilo command wasn’t run.
LIL
• The second stage boot loader was run, but it couldn’t load the
descriptor table from the map file. This is usually caused by a
media failure or bad disk parameters.
LIL?
• The second stage boot loader was loaded at an incorrect address,
probably because of bad disk parameters or
by moving /boot/boot.b without running lilo.
LIL-
• The descriptor table is corrupt. The problem is probably bad disk
parameters or moving /boot/map without running lilo.
LILO
• LILO was successfully loaded.
GRUB
• Like LILO, the GRUB boot loader can load other operating
systems in addition to Linux. GRUB has become the default
bootloader for most Linux variants.
• It was written by Erich Boleyn to boot operating systems on PC-
based hardware and is now developed and maintained by the
GNU project.
• GRUB can boot directly into Linux, FreeBSD, OpenBSD, and
NetBSD. It can also boot other operating systems such as
Microsoft Windows indirectly, through the use of a chainloader.
The chainloader loads an intermediate file, and that file loads the
operating system’s boot loader.
• GRUB provides a graphical menu interface. It also provides a
command interface that is accessible both while the system is
booting (the native command environment) and from the
command line once Linux is running.
ADVANTAGES OVER LILO
• The graphical menu interface shows you exactly what your
choices are for booting, so you don’t have to remember them. It
also lets you easily edit an entry on the fly, or drop down into the
command interface.
• If you are using the menu interface and something goes wrong,
GRUB automatically puts you into the command interface so you
can attempt to recover and boot manually.
• Another advantage of GRUB is that if you install a new kernel or
update the configuration file, that’s all you have to do; with LILO,
you also have to remember to rerun the lilo command to reinstall
the boot loader.
• A GRUB installation consists of at least two and sometimes three
executables, known as stages. The stages are:
• Stage 1
• Stage 1 is the piece of GRUB that resides in the MBR or the boot
sector of another partition or drive. Since the main portion of
GRUB is too large to fit into the 512 bytes of a boot sector, Stage
1 is used to transfer control to the next stage, either Stage 1.5 or
Stage 2.
• Stage 1.5
• Stage 1.5 is loaded by Stage 1 only if the hardware requires
it. Stage 1.5 is filesystem-specific; that is, there is a different
version for each filesystem that GRUB can load. The name of
the filesystem is part of the filename
(e2fs_stage1_5, fat_stage1_5, etc.). Stage 1.5 loads Stage 2.
• Stage 2
• Stage 2 runs the main body of the GRUB code. It displays
the menu, lets you select the operating system to be run, and
starts the system you’ve chosen.
• Files are specified either by the filename or by blocklist, which is
used to specify files such as chainloaders that aren’t part of a
filesystem. A filename looks like a standard Unix path specification
with the GRUB device name prepended; for example:
• (hd0,0)/grub/grub.conf
• When you use blocklist notation, you tell GRUB which blocks on
the disk contain the file you want. Each section of a file is
specified as the offset on the partition where the block begins plus
the number of blocks in the section. The offset starts at 0 for the
first block on the partition. The syntax for blocklist notation is:
• [device][offset]+length[,[offset]+length]...
• The device name is optional for a file on the root device. With
blocklist notation, you can also omit the offset if it is 0.
• A typical use of blocklist notation is when using a chainloader to
boot Windows.
• If GRUB is installed in the MBR, you can chainload Windows by
setting the root device to the partition that has the Windows boot
loader, making it the active partition, and then using
the chainloader command to read the Windows boot sector:
• rootnoverify (hd0,0)
• makeactive
• chainloader +1
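Putting these commands in context: a legacy GRUB configuration file (/boot/grub/grub.conf or menu.lst) groups them into one stanza per bootable system. The kernel image name, initrd name, and partition layout below are illustrative assumptions, not taken from any particular installation:

```
default=0        # boot the first entry unless the user chooses otherwise
timeout=5        # seconds to wait before booting the default entry

title Linux
    root (hd0,1)                         # assumed /boot partition
    kernel /vmlinuz ro root=/dev/hda2    # assumed kernel image and root device
    initrd /initrd.img

title Windows
    rootnoverify (hd0,0)    # partition holding the Windows boot loader
    makeactive              # mark it as the active partition
    chainloader +1          # read its boot sector (blocklist notation)
```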
• GRUB also includes a device map. The device map is an ASCII
file, usually /boot/grub/device.map. Since the operating system
isn’t loaded yet when you use GRUB to boot Linux (or any other
operating system), GRUB knows only the BIOS drive names. The
purpose of the device map is to map the BIOS drives to Linux
devices. For example:
• (fd0) /dev/fd0
• (hd0) /dev/hda
BASH SHELL
• Short for "Bourne-Again Shell," bash is a Unix shell. Originally
released in 1989 as a free replacement for the Bourne
Shell, bash is part of the GNU project.
• The shell is a program that acts as a buffer between you and the
operating system. In its role as a command interpreter, it should
(for the most part) act invisibly.
• There are three main uses for the shell: interactive use;
customizing your Linux session by defining variables and startup
files; and programming, by writing and executing shell scripts.
Shell Prompt
• The prompt, $, which is called the command prompt, is issued
by the shell. While the prompt is displayed, you can type a
command.
• The shell reads your input after you press Enter. It determines the
command you want executed by looking at the first word of your
input. A word is an unbroken set of characters. Spaces and tabs
separate words.
• Following is a simple example of the date command, which
displays the current date and time −
• $ date
• Thu Jun 25 08:30:19 MST 2009
Shell Types
In Unix, there are two major types of shells −
• Bourne shell − If you are using a Bourne-type shell,
the $ character is the default prompt.
• C shell − If you are using a C-type shell, the % character is the
default prompt.
The Bourne Shell has the following subcategories −
• Bourne shell (sh)
• Korn shell (ksh)
• Bourne Again shell (bash)
• POSIX shell (sh)
The different C-type shells follow
• C shell (csh)
BASH
• Bash is an sh-compatible command language interpreter
that executes commands read from the standard input or
from a file. Bash also incorporates useful features from
the Korn and C shells (ksh and csh).
• Bash is intended to be a conformant implementation of
the Shell and Utilities portion of the IEEE POSIX
specification (IEEE Standard 1003.1). Bash can be
configured to be POSIX-conformant by default.
Overview of Features
The Bash shell provides the following features:
• Input/output redirection
• Wildcard characters (metacharacters) for filename
abbreviation
• Shell variables and options for customizing your
environment
• A built-in command set for writing shell programs
• Shell functions, for modularizing tasks within a shell program
• Job control
• Command-line editing (using the command syntax of
either vi or emacs)
• Access to previous commands (command history)
• Integer arithmetic
• Arrays and arithmetic expressions
• Command-name abbreviation (aliasing)
• Upward compliance with POSIX
• Internationalization facilities
• An arithmetic for loop
• More ways to substitute variables
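Several of these features can be seen in one short script. The sketch below uses a one-dimensional array, an arithmetic for loop, and integer arithmetic (the variable names are arbitrary):

```shell
#!/bin/bash
# A one-dimensional array of values
nums=(3 1 4 1 5)

# Arithmetic for loop summing the array elements
sum=0
for ((i = 0; i < ${#nums[@]}; i++)); do
    ((sum += nums[i]))
done

echo "count=${#nums[@]} sum=$sum"   # prints: count=5 sum=14
```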
Invoking the Shell
• The command interpreter for the Bash shell (bash) can
be invoked as follows:
• bash [options] [arguments]
• Bash can execute commands from a terminal, from a file
(when the first argument is an executable script), or from
standard input (if no arguments remain or if -s is
specified). Bash automatically prints prompts if standard
input is a terminal, or if -i is given on the command line.
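Each invocation style can be tried directly; the script name used here is an arbitrary example:

```shell
# 1. Commands from a string, with -c
bash -c 'echo hello'

# 2. Commands from standard input, with -s
echo 'echo from-stdin' | bash -s

# 3. Commands from a file (hello.sh is created here just for the demo)
printf 'echo from-file\n' > hello.sh
bash hello.sh
```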
BASH COMMANDS
1.ls — List directory contents
• ls is probably the most common command. A lot of times, you’ll be working in a directory and you’ll
need to know what files are located there. The ls command allows you to quickly view all files within the
specified directory.
• Syntax: ls [option(s)] [file(s)]
• Common options: -a, -l
2. echo — Prints text to the terminal window
• echo prints text to the terminal window and is typically used in shell scripts and batch files to output
status text to the screen or a computer file. Echo is also particularly useful for showing the values of
environmental variables, which tell the shell how to behave as a user works at the command line or in
scripts.
• Syntax: echo [option(s)] [string(s)]
• Common options: -e, -n
3. exit — Exit the shell
• The exit command will close a terminal window, end the execution of a shell script, or log you out of
an SSH remote access session.
• Syntax: exit
4. touch — Creates a file
• touch is going to be the easiest way to create new files, but it can also be used to change timestamps
on files and/or directories. You can create as many files as you want in a single command without
worrying about overwriting files with the same name.
• Syntax: touch [option(s)] file_name(s)
• Common options: -a, -m, -r, -d
5. mkdir — Create a directory
mkdir is a useful command you can use to create directories. Any number of directories can be created
simultaneously, which can greatly speed up the process.
• Syntax: mkdir [option(s)] directory_name(s)
• Common options: -m, -p, -v
6. pwd — Print working directory
pwd is used to print the current directory you’re in. As an example, if you have multiple terminals going and
you need to remember the exact directory you’re working within, then pwd will tell you.
• Syntax: pwd [option(s)]
7. cd — Change directory
cd will change the directory you’re in so that you can get info, manipulate, read, etc. the different files and
directories in your system.
• Syntax: cd [option(s)] directory
8. mv — Move or rename files and directories
• mv is used to move or rename files and directories. Without this command, you would have to individually
rename each file, which is tedious. mv allows you to do batch file renaming, which can save you loads of
time.
• Syntax: mv [option(s)] argument(s)
• Common options: -i, -b
9. rmdir — Remove directory
• rmdir will remove empty directories. This can help clean up space on your computer and keep files
and folders organized. It’s important to note that there are two ways to remove directories: rm and
rmdir. The distinction between the two is that rmdir will only delete empty directories, whereas rm will
remove directories and files regardless of whether they contain data.
• Syntax: rmdir [option(s)] directory_names
• Common options: -p
10. locate — Locate a specific file or directory
This is by far the simplest way to find a file or directory. You can keep your search broad if you don’t know
what exactly it is you’re looking for, or you can narrow the scope by using wildcards or regular
expressions.
• Syntax: locate [option(s)] file_name(s)
• Common options: -q, -n, -i
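The commands above can be combined into a short session; the directory and file names are arbitrary examples (locate is left out, since it needs a pre-built file database):

```shell
mkdir -p demo            # create a directory (-p: no error if it exists)
cd demo
touch notes.txt todo.txt # create two empty files in one command
ls                       # lists: notes.txt  todo.txt
pwd                      # prints the absolute path, ending in /demo
mv notes.txt ideas.txt   # rename a file
cd ..
rmdir demo 2>/dev/null || echo "demo is not empty, rmdir refuses"
```

The last line shows the rm/rmdir distinction: rmdir refuses to delete the directory because it still contains files.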
FEATURES OF BASH
• Bash is sh-compatible, as it is derived from the original UNIX Bourne shell. It incorporates the
best and most useful features of the Korn and C shells, such as directory manipulation, job control, and aliases.
• Bash can be invoked with single-character command-line options (-a, -b, -c, -i, -l, -r, etc.) as well as
with multi-character command-line options such as --debugger, --help, and --login.
• Bash Start-up files are the scripts that Bash reads and executes when it starts. Each file has its
specific use, and the collection of these files is used to help create an environment.
• Bash consists of Key bindings by which one can set up customized editing key sequences.
• Bash contains one-dimensional arrays using which you can easily reference and manipulate the lists
of data.
• Bash provides control structures such as the select construct, which is specially suited
for menu generation.
• The directory stack in Bash records recently visited directories in a
list. Example: the pushd built-in adds a directory to the stack, popd removes a directory from
the stack, and the dirs built-in displays the contents of the directory stack.
• Bash also provides a restricted mode for environment security. A shell becomes restricted if Bash
is started under the name rbash, or with the --restricted or -r option passed at invocation.
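The directory stack built-ins can be exercised in a script; the directory name is an arbitrary example:

```shell
#!/bin/bash
mkdir -p projdir
pushd projdir > /dev/null   # enter projdir and push the previous directory
pwd                         # now inside projdir
dirs                        # shows the stack: projdir and the previous directory
popd > /dev/null            # pop the stack and return
pwd                         # back where we started
```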
FILE SYSTEMS AND FILE PERMISSIONS
• The Filesystem is a kind of structure organized with the collection of files or folders. It determines
control over data
• Linux Filesystem is a tree-like structure comprised of lots of directories. These directories are just the
files containing the list of other files.
• Linux makes little distinction between files and directories: a directory is simply a file
containing the names of other files. Files in the Linux filesystem are categorized as follows:
Ordinary files that contain data, text, images, program instructions.
Special files that give access to hardware devices.
Directories that contain both the ordinary and special files.
•In a long listing (ls -l), the first column represents the file type and file permissions. Every file row begins with the file type and then
specifies the access permissions associated with the file. These are the file types with their
specific characters:
•Regular file (-)
•Directory (d)
•Link (l)
•Character special file (c)
•Socket (s)
•Named pipe (p)
•Block device (b)
•The second column represents the number of links (hard links) to the file.
•The third column represents the owner of the file.
•The fourth column represents the group that owns the file.
•The fifth column represents the file size.
•The sixth column represents the date and time when the file was last modified.
•The last column represents the name of the file or the directory.
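The columns can be matched against a real listing. The file name is an arbitrary example, and the owner, group, and date shown in the sample output will differ on your system:

```shell
touch report.txt        # create an empty file for the demo
chmod 644 report.txt    # rw-r--r--, so the first column is predictable
ls -l report.txt
# Sample output:
# -rw-r--r-- 1 alice users 0 Jun 25 08:30 report.txt
#   -rw-r--r--   -> column 1: file type (-) and permissions
#   1            -> column 2: number of hard links
#   alice users  -> columns 3-4: owner and group
#   0            -> column 5: size in bytes
#   Jun 25 08:30 -> column 6: date/time of last modification
#   report.txt   -> last column: file name
```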
FILE PERMISSIONS
• Linux-based Operating System requires file permissions to secure its filesystem, as there are file
permission based issues that occur when a user assigns improper permissions to the files and
directories. These issues may cause malicious or accidental tampering to the filesystem. So Linux
secures its filesystem with two authorization attributes as follows:

1. Permissions
There are three types of permissions associated with the files as follows:
• Read (r) permission by which you can view the content of the file.
• Write (w) permission by which you can modify the file content.
• Execute (x) permission by which one can run the programming file or script.

2. Ownership
There are three types of Linux users as follows:
• Owner is the user who creates the file. The owner can hold all the permissions associated with the file,
including reading, modifying, and running it.
• Group is a set of users (multi-users). Every member of a group
has the same access permissions associated with the file.
• Other users, i.e., third-party users, are anybody else: neither the owner
nor a member of the owning group. They get only the permissions granted to the
"other" class on a file or directory.
•The first slot (-) gives the file type of a file named aa.sh: a regular file.
•Next three slots (rw-) specify the permissions used by the assigned owner.
These permissions include read and write. Here, execute permission is
denied.
•Next three slots (rw-) specify the permissions used by the group members
who own the directory. These permissions include read and write, but do
not include execute permission.
•Next three slots (r--) specify the permissions used by third-party
users. These permissions include read permission only. Here, write and
execute permissions have been denied.
CHANGING PERMISSIONS
You can alter the file permissions for each class (user/group/others) by
using chmod command. The basic form to remove or add any permission for any class is:

1. chmod [class][operator][permission] file_name


2. chmod [ugoa][+or-][rwx] file_name
where
• class is represented by the indicators - u, g, o, and a such that u for the user, g for the
group, o for the other, and a for all the classes.
• operator ( + or - ) is used to add or remove the permission.
• permission is represented by the indicators r, w, x to allow access for reading,
modifying, or running the script respectively.
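For example (the script name myscript.sh is an assumption for the demo, and the file is first set to a known mode so each step's result is predictable):

```shell
touch myscript.sh
chmod 644 myscript.sh   # start from rw-r--r--
chmod u+x myscript.sh   # add execute for the user (owner):  -rwxr--r--
chmod g+w myscript.sh   # add write for the group:           -rwxrw-r--
chmod o-r myscript.sh   # remove read from other users:      -rwxrw----
ls -l myscript.sh       # first column now shows -rwxrw----
```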
SORTS OF FILES
• Most files are just files, called regular files; they contain normal
data, for example text files, executable files or programs, input for
or output from a program and so on.
• While it is reasonably safe to suppose that everything you
encounter on a Linux system is a file, there are some exceptions.
• Directories: files that are lists of other files.
• Special files: the mechanism used for input and output. Most
special files are in /dev.
• Links: a system to make a file or directory visible in multiple parts
of the system's file tree.
• (Domain) sockets: a special file type, similar to TCP/IP sockets,
providing inter-process networking protected by the file system's
access control.
• Named pipes: act more or less like sockets and form a way for
processes to communicate with each other, without using network
socket semantics.
FILE PARTITIONING
• One of the goals of having different partitions is to achieve higher data
security in case of disaster.
• By dividing the hard disk in partitions, data can be grouped and separated.
When an accident occurs, only the data in the partition that got the hit will be
damaged, while the data on the other partitions will most likely survive.
• This principle dates from the days when Linux didn't have journaled file
systems and power failures might have led to disaster.
• The use of partitions remains for security and robustness reasons, so a
breach on one part of the system doesn't automatically mean that the whole
computer is in danger. This is currently the most important reason for
partitioning.
• A simple example: a user creates a script, a program or a
web application that starts filling up the disk. If the disk contains
only one big partition, the entire system will stop functioning if the
disk is full. If the user stores the data on a separate partition, then
only that (data) partition will be affected, while the system
partitions and possible other data partitions keep functioning.
• Mind that having a journaled file system only provides data
security in case of power failure and sudden disconnection of
storage devices. This does not protect your data against bad
blocks and logical errors in the file system. In those cases, you
should use a RAID (Redundant Array of Inexpensive Disks)
solution.
PARTITION TYPES AND LAYOUT
There are two kinds of major partitions on a Linux system:
• data partition: normal Linux system data, including the root
partition containing all the data to start up and run the system;
and
• swap partition: expansion of the computer's physical memory,
extra memory on hard disk.
• Most systems contain a root partition, one or more data
partitions and one or more swap partitions. Systems in mixed
environments may contain partitions for other system data,
such as a partition with a FAT or VFAT file system for MS
Windows data.
I/O REDIRECTION
• This feature of the command line enables you to redirect the input
and/or output of commands from and/or to files, or join multiple
commands together using pipes to form what is known as a
“command pipeline”.
All the commands that we run fundamentally produce two kinds of
output:
• the command result – data the program is designed to produce,
and
• the program status and error messages that informs a user of the
program execution details.
• In Linux and other Unix-like systems, there are three default files
named below which are also identified by the shell using file
descriptor numbers:
• stdin or 0 – it’s connected to the keyboard, most programs read
input from this file.
• stdout or 1 – it’s attached to the screen, and all programs send
their results to this file and
• stderr or 2 – programs send status/error messages to this file
which is also attached to the screen.
• Therefore, I/O redirection allows you to alter the input source of a
command as well as where its output and error messages are
sent to. And this is made possible by the “<” and “>” redirection
operators.
Standard Input
• The standard input stream typically carries data from a user to a
program. Programs that expect standard input usually receive input
from a device, such as a keyboard. Standard input is terminated by
reaching EOF (end-of-file). As described by its name, EOF indicates that
there is no more data to be read.
• To see standard input in action, run the cat program. Cat stands for
concatenate, which means to link or combine something. It is commonly
used to combine the contents of two files. When run on its own, cat
opens a looping prompt.
• $ cat
• After opening cat, type a series of numbers as it is running, pressing Enter after each:
1
2
3
ctrl-d
• When you type a number and press enter, you are
sending standard input to the running cat program, which
is expecting said input. In turn, the cat program is sending
your input back to the terminal display as standard output.
• EOF can be input by the user by pressing ctrl-d. After the
cat program receives EOF, it stops.
Standard Output
• Standard output writes the data that is generated by a program. When
the standard output stream is not redirected, it will output text to the
terminal. Try the following example:
• echo Sent to the terminal through standard output
Sent to the terminal through standard output
• When used without any additional options, the echo command displays any argument that
is passed to it on the command line. An argument is something that is
received by a program.
• Run echo without any arguments:
• echo
• It will return an empty line, since there are no arguments.
Standard Error
• Standard error writes the errors generated by a program that has
failed at some point in its execution. Like standard output, the
default destination for this stream is the terminal display.
• When a program’s output is piped to a second program, only standard
output travels through the pipe; the standard error stream (consisting of
program errors) is still sent to the terminal.
• When run without an argument, ls lists the contents within the
current directory. If ls is run with a directory as an argument, it will
list the contents of the provided directory.
• ls %
• Since % is not an existing directory, this will send the following
text to standard error:
• ls: cannot access %: No such file or directory
• Stream Redirection
• Linux includes redirection commands for each stream. These
commands write standard output to a file. If a non-existent file
is targeted (either by a single-bracket or double-bracket
command), a new file with that name will be created prior to
writing.
• Commands with a single bracket overwrite the destination’s
existing contents.
• Overwrite:
• > - standard output
• < - standard input
• 2> - standard error
• Append:
• >> - standard output (appends to the file instead of overwriting)
• cat > write_to_me.txt
• a
• b
• c
• ctrl-d
• Here, cat is being used to write to a file, which is created as a result of the redirection.
• View the contents of write_to_me.txt using cat:
• cat write_to_me.txt
• Redirect cat to write_to_me.txt again, and enter three numbers.
• cat > write_to_me.txt
• 1
• 2
• 3
• ctrl-d
• The prior contents are no longer there, as the file was overwritten by the single-
bracket command.
Do one more cat redirection, this time using double brackets:
cat >> write_to_me.txt
a
b
c
ctrl-d
This time the new lines are appended, so the file keeps its previous contents.
Pipes
• Pipes are used to redirect a stream from one program to another.
When a program’s standard output is sent to another through a pipe,
the first program’s data, which is received by the second program, will
not be displayed on the terminal. Only the filtered data returned by
the second program will be displayed.
• The Linux pipe is represented by a vertical bar: |
• An example of a command using a pipe:
• ls | less
This takes the output of ls, which displays the contents of
your current directory, and pipes it to the less program. less displays
the data sent to it one line at a time.
• ls normally displays directory contents across multiple rows. When
you run it through less, each entry is placed on a new line.
Filters
• Filters are commands that alter piped redirection and output.
Note that filter commands are also standard Linux commands
that can be used without pipes.
• find - Find returns files with filenames that match the
argument passed to find.
• grep - Grep returns text that matches the string pattern
passed to grep.
• tee - Tee redirects standard input to both standard output and
one or more files.
• tr - tr finds-and-replaces one string with another.
• wc - wc counts characters, lines, and words.
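Put together, the filters above can be chained with pipes. A short sketch (the word list and the file name fruits.txt are invented for the example):

```shell
# grep + wc: count how many lines of the input contain the string "an"
printf 'apple\nbanana\ncherry\nmango\n' | grep an | wc -l   # prints 2

# tr: translate lowercase letters to uppercase
printf 'apple\n' | tr 'a-z' 'A-Z'                           # prints APPLE

# tee: write the stream to fruits.txt AND pass it on to wc
printf 'apple\nbanana\n' | tee fruits.txt | wc -l           # prints 2
```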
LINUX BACKUP TECHNIQUES
Full Backups
As the name suggests, full backups make a complete copy of all the data on your system.
Some Linux admins do a full backup by default for smaller folders or data sets that don’t
eat up a lot of storage space. Because full backups tend to require a significant amount of
space, admins responsible for larger sets of data usually run them only
periodically. The problem with this approach is that it can create lengthy gaps that put
your data at greater risk.
Full Linux Backup
Pros:
• All data is centralized in one backup set
• Readily available data makes recovery operations fast and easy
• Version control is easy to manage
Cons:
• Backup operations are slower as you continue to execute full backups and accumulate more data
• Requires the most storage space
• Makes inefficient use of resources, as the same files are continuously copied
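A full backup of a directory is commonly taken with tar. A minimal sketch (the directory and file names are examples, not part of the original text):

```shell
# Create some sample data to protect
mkdir -p data
echo "one" > data/a.txt
echo "two" > data/b.txt

# Full backup: every file goes into one compressed archive
tar -czf full-backup.tar.gz data

# List the archive contents to verify the backup set
tar -tzf full-backup.tar.gz
```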
Incremental Backups
Incremental backups record all data that has changed since performing your last backup –
full or incremental. If you perform a full backup on Sunday evening, you can run an
incremental backup on Tuesday evening to hit all the files that changed since that first job.
Then on Thursday, you run a job that copies all changes made since Tuesday, so on and
so forth. In a nutshell, the incremental method creates a chain of backups. These backups
are stacked in order from your original starting point.
Incremental Backups
Pros:
• Takes up considerably less space than full backups
• Using less space results in leaner backup images and faster backup operations
• Aids retention efforts by creating multiple versions of the same files
Cons:
• The need to restore all recorded changes results in slower recovery operations
• The need to search multiple backup sets results in slower recovery of individual files
• The initial full backup and all incremental backups thereafter are needed for complete recovery
Differential Backups
Differential backups record all changes made since your last full backup. So let’s say you run a
full backup Sunday night. Then on the following Tuesday, you run a differential backup to
record all the changes made since Sunday’s job. The job you run on Thursday only records
changes made since Sunday and the cycle continues until running your next full backup. You
can call this method a middle ground between full and incremental backups.
Differential Backups
Pros:
• Makes more efficient use of storage space than full backups
• Performs backups faster than full backups
• Recovers data faster than incremental backups
Cons:
• Backup process is slower than incremental backups
• Recovery process is slower than full backups
• The initial full backup and the most recent differential backup are needed for complete recovery
Network Backups
Network backups use the client-server model to send data across the network to backup
destinations. In a networked configuration, multiple computers can act as clients and backup
data to one centralized server or multiple servers. You can easily manage network backups
with a comprehensive disaster recovery solution. For example, an organization can purchase
10 licenses for ShadowProtect SPX and provide access to each user with a single
registration key. From there system admins can install the software on all 10 machines and
backup each individual system accordingly.
Network Backups
Pros:
• Can be deployed for onsite and offsite backup operations alike
• Compatible with full, incremental, and differential backup technologies
• Supports a wide variety of storage mediums
Cons:
• Presents additional network-related management challenges
• Can be a costly operation when backing up large sets of data
• Reliability may be dependent on the Internet connection and third-party infrastructures
FTP Backups
FTP backups leverage the client-server architecture to facilitate backups over the Internet via
File Transfer Protocol. This method can play an integral role in your data protection strategy by
allowing you to transfer mission-critical data to an offsite facility. Many web hosting providers
offer FTP capabilities. Linux provides convenient access to a number of free FTP clients from
the software repositories bundled in numerous distributions.
FTP Backups
Pros:
• Performs backup and recovery operations in an easy and affordable fashion
• Helps protect data from fire, floods, vandalism, and other onsite disasters
• Supports a large number of users on a single FTP account
Cons:
• FTP’s lack of encryption makes security a concern
• Backups are confined to file size limitations
• Speed and reliability of backup and recovery operations depend on the Internet connection
Linux Backup Tools
• Tar
• Dump
• dd
• cpio
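Minimal sketches of two of these tools (the directory, file names, and dd sizes are chosen just for the example):

```shell
# tar: archive a directory tree into a single file
mkdir -p project && echo "code" > project/main.c
tar -cf project.tar project
tar -tf project.tar

# dd: copy raw blocks; here, create a 1 MiB file of zeros.
# The same command form is used to image whole devices, e.g. if=/dev/sda
dd if=/dev/zero of=disk.img bs=1M count=1 2> /dev/null
ls -l disk.img
```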
GCC
Name : gcc
Synopsis : gcc [options] files
• GNU Compiler Collection. gcc, formerly known as the
GNU C Compiler, compiles multiple languages (C, C++,
Objective-C, Ada, FORTRAN, and Java) to machine
code.
• Here we document its use to compile C, C++, or
Objective-C code. gcc compiles one or more
programming source files; for example, C source files
(file.c), assembler source files (file.s), or preprocessed C
source files (file.i).
• If the file suffix is not recognizable, gcc assumes that the file is
an object file or library.
• gcc normally invokes the C preprocessor, compiles the
preprocessed code to assembly language code, assembles it,
and then links it with the link editor.
• This process can be stopped at one of these stages
using the -c, -S, or -E option. The steps may also differ
depending on the language being compiled.
• By default, output is placed in a.out. In some
cases, gcc generates an object file having a .o suffix and
a corresponding root name.
GNU DEBUGGER
• A debugger is a program that runs other programs,
allowing the user to exercise control over these programs,
and to examine variables when problems arise.
• GNU Debugger, which is also called gdb, is the most
popular debugger for UNIX systems to debug C and C++
programs.
• GNU Debugger helps you in getting information about
the following:
• If a core dump happened, then what statement
or expression did the program crash on?​
• If an error occurs while executing a function, what line
of the program contains the call to that function, and
what are the parameters?​
• What are the values of program variables at a
particular point during execution of the program?​
• What is the result of a particular expression in a program?​
How GDB Debugs?
• GDB allows you to run the program up to a certain point,
then stop and print out the values of certain variables at
that point, or step through the program one line at a time
and print out the values of each variable after executing
each line.
• GDB uses a simple command line interface.
Points to Note
• Even though GDB can help you find memory-leakage-related
bugs, it is not a tool to detect memory
leakages.
• GDB cannot be used for programs that compile with
errors and it does not help in fixing those errors.
GDB INSTALLATION
• Before you go for installation, check if you already have gdb installed on your
Unix system by issuing the following command:
• $gdb -help
• If GDB is installed, then it will display all the available options within your
GDB. If GDB is not installed, then proceed for a fresh installation.
• You can install GDB on your system by following the simple steps discussed
below.
step 1: Make sure you have the prerequisites for installing gdb:
• An ANSI-compliant C compiler (gcc is recommended; note that gdb can
also debug code generated by other compilers)
• 115 MB of free disk space is required on the partition on which you're going
to build gdb.
• 20 MB of free disk space is required on the partition on which you're going to
install gdb.
step 2: Use the following command to
install gdb on linux machine.
• $ sudo apt-get install libc6-dbg gdb valgrind
• step 3: Now use the following command to find the help
information.
• $gdb -help
• You now have gdb installed on your system and it is ready
to use.
• A Debugging Symbol Table maps instructions in the compiled
binary program to their corresponding variable, function, or line in
the source code. This mapping could be something like:
• Program instruction ⇒ item name, item type, original file, line
number defined.
• Symbol tables may be embedded into the program or stored as a
separate file. So if you plan to debug your program, then it is
required to create a symbol table which will have the required
information to debug the program.
• We can infer the following facts about symbol tables:
• A symbol table works for a particular version of the program – if
the program changes, a new table must be created.
• Debug builds are often larger and slower than retail (non-debug)
builds; debug builds contain the symbol table and other ancillary
information.
• If you wish to debug a binary program you did not compile
yourself, you must get the symbol tables from the author.
• To let GDB be able to read all that information line by line
from the symbol table, we need to compile it a bit
differently. Normally we compile our programs as:
• gcc hello.c -o hello
• Instead of doing this, we need to compile with the -g flag
as shown below:
• gcc -g hello.c -o hello
GDB offers a big list of commands, however the following
commands are the ones used most frequently:
• b main - Puts a breakpoint at the beginning of the
program
• b - Puts a breakpoint at the current line
• b N - Puts a breakpoint at line N
• b +N - Puts a breakpoint N lines down from the current
line
• b fn - Puts a breakpoint at the beginning of function "fn"
• d N - Deletes breakpoint number N
• info break - list breakpoints
• r - Runs the program until a breakpoint or error
• c - Continues running the program until the next breakpoint or
error
• finish - Runs until the current function is finished
• s - Runs the next line of the program
• s N - Runs the next N lines of the program
• n - Like s, but it does not step into functions
• u N - Runs until you get N lines in front of the current line
• p var - Prints the current value of the variable "var"
• bt - Prints a stack trace
• up - Goes up a level in the stack
• down - Goes down a level in the stack
• q - Quits gdb
VERSION CONTROL
• Version control is a way to keep a track of the changes in the
code so that if something goes wrong, we can make
comparisons in different code versions and revert to any
previous version that we want.
• It is very much required where multiple developers are
continuously working on /changing the source code.
• version control (also known as revision control, source
control, or source code management) is a class of systems
responsible for managing changes to computer programs,
documents, large web sites, or other collections of
information.
VERSIONING TOOLS
• CVS. CVS may very well be where version
control systems started. ...
• SVN. ...
• GIT. ...
• Mercurial. ...
• Bazaar.
Git
• Git is one of the best version control tools that is available in the
present market.
Features
• Provides strong support for non-linear development.
• Distributed repository model.
• Compatible with existing systems and protocols like HTTP, FTP,
ssh.
• Capable of efficiently handling small to large sized projects.
• Cryptographic authentication of history.
• Pluggable merge strategies.
• Toolkit-based design.
• Periodic explicit object packing.
• Garbage accumulates until collected.
Pros
• Super-fast and efficient performance.
• Cross-platform
• Code changes can be very easily and clearly tracked.
• Easily maintainable and robust.
• Offers an amazing command line utility known as git bash.
• Also offers GIT GUI where you can very quickly re-scan,
state change, sign off, commit & push the code quickly with just a
few clicks.
Cons
• A complex and large history log can become difficult to understand.
• Does not support keyword expansion and
timestamp preservation.
• Open Source: Yes
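A minimal local Git workflow, assuming git is installed (the directory name, file name, identity, and commit message are arbitrary examples):

```shell
mkdir repo && cd repo
git init                       # create a new local repository
echo "hello" > file.txt
git add file.txt               # stage the change (add it to the index)
git -c user.name="Demo" -c user.email="demo@example.com" \
    commit -m "first commit"   # record the change in history
git log --oneline              # one line per commit
```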
GIT REFERENCE LINKS
• https://git-scm.com/video/what-is-version-control
• https://git-scm.com/docs/user-manual
• https://git-scm.com/docs/git#_git_commands
• https://developer.ibm.com/technologies/web-development/tutorials/d-learn-workings-git/
--version
• Prints the Git suite version that the git program came from.
--help
• Prints the synopsis and a list of the most commonly used commands. If
the option --all or -a is given then all available commands are printed. If
a Git command is named this option will bring up the manual page for that
command.
-C <path>
• Run as if git was started in <path> instead of the current working
directory. When multiple -C options are given, each subsequent non-absolute -C <path> is interpreted relative to the preceding -C <path>.
If <path> is present but empty, e.g. -C "", then the current working
directory is left unchanged.
--exec-path[=<path>]
• Path to wherever your core Git programs are installed. This can also be
controlled by setting the GIT_EXEC_PATH environment variable. If no path is
given, git will print the current setting and then exit.
--html-path
• Print the path, without trailing slash, where Git’s HTML documentation is
installed and exit.
--man-path
• Print the manpath (see man(1)) for the man pages for this version of Git and
exit.
--info-path
• Print the path where the Info files documenting this version of Git are
installed and exit.
-p
--paginate
• Pipe all output into less (or if set, $PAGER) if standard output is a
terminal. This overrides the pager.<cmd> configuration options (see
the "Configuration Mechanism" section below).
-P
--no-pager
• Do not pipe Git output into a pager.
--git-dir=<path>
• Set the path to the repository (".git" directory). This can also be
controlled by setting the GIT_DIR environment variable. It can be an
absolute path or relative path to current working directory.
GIT COMMANDS
• We divide Git into high level ("porcelain") commands and
low level ("plumbing") commands.
High-level commands (porcelain)
• We separate the porcelain commands into the main
commands and some ancillary user utilities.
• Main porcelain commands
• git-add[1] Add file contents to the index
• git-am[1] Apply a series of patches from a mailbox
• git-archive[1] Create an archive of files from a named tree
• git-bisect[1] Use binary search to find the commit that introduced a bug
• git-branch[1] List, create, or delete branches
• git-bundle[1] Move objects and refs by archive
• git-checkout[1] Switch branches or restore working tree files
• git-cherry-pick[1] Apply the changes introduced by some existing commits
• git-citool[1] Graphical alternative to git-commit
• git-clean[1] Remove untracked files from the working tree
• git-clone[1] Clone a repository into a new directory
• git-commit[1] Record changes to the repository
• git-describe[1] Give an object a human readable name based on an available ref
• git-diff[1] Show changes between commits, commit and working tree, etc
• git-fetch[1] Download objects and refs from another repository
• git-format-patch[1] Prepare patches for e-mail submission
• git-gc[1] Cleanup unnecessary files and optimize the local repository
• git-grep[1] Print lines matching a pattern
ANCILLARY COMMANDS IN GIT
Manipulators:
• git-config[1] Get and set repository or global options
• git-fast-export[1] Git data exporter
• git-fast-import[1] Backend for fast Git data importers
• git-filter-branch[1] Rewrite branches
• git-mergetool[1] Run merge conflict resolution tools to resolve merge conflicts
• git-pack-refs[1] Pack heads and tags for efficient repository access
• git-prune[1] Prune all unreachable objects from the object database
• git-reflog[1] Manage reflog information
• git-remote[1] Manage set of tracked repositories
• git-repack[1] Pack unpacked objects in a repository
• git-replace[1] Create, list, delete refs to replace objects
Interrogators:
• git-annotate[1] Annotate file lines with commit information
• git-blame[1] Show what revision and author last modified each line of a file
• git-bugreport[1] Collect information for user to file a bug report
• git-count-objects[1] Count unpacked number of objects and their disk consumption
• git-difftool[1] Show changes using common diff tools
• git-fsck[1] Verifies the connectivity and validity of the objects in the database
• git-help[1] Display help information about Git
• git-instaweb[1] Instantly browse your working repository in gitweb
• git-show-branch[1] Show branches and their commits
• git-verify-commit[1] Check the GPG signature of commits
• git-verify-tag[1] Check the GPG signature of tags
• gitweb[1] Git web interface (web frontend to Git repositories)
• git-whatchanged[1] Show logs with the difference each commit introduces
• git-merge-tree[1] Show three-way merge without touching index
• git-rerere[1] Reuse recorded resolution of conflicted merges
Reset, restore and revert
• There are three commands with similar names: git reset, git
restore and git revert.
• git-revert[1] is about making a new commit that reverts the
changes made by other commits.
• git-restore[1] is about restoring files in the working tree from
either the index or another commit. This command does not
update your branch. The command can also be used to restore
files in the index from another commit.
• git-reset[1] is about updating your branch, moving the tip in order
to add or remove commits from the branch. This operation
changes the commit history.
• git reset can also be used to restore the index, overlapping with git
restore.
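The difference between the three commands can be sketched in a scratch repository (assumes git is installed; git_c is a small helper defined here only to set a committer identity, and the commit messages are arbitrary):

```shell
mkdir demo && cd demo && git init
git_c() { git -c user.name=Demo -c user.email=demo@example.com "$@"; }

echo "v1" > f.txt && git add f.txt && git_c commit -m "one"
echo "v2" > f.txt && git add f.txt && git_c commit -m "two"

# git revert: adds a NEW commit undoing commit "two"; history grows,
# and f.txt is back to "v1"
git_c revert --no-edit HEAD

# git restore: bring a file back from a given commit; the branch tip
# does not move (f.txt becomes "v2" again, taken from commit "two")
git restore --source=HEAD~1 f.txt

# git reset: move the branch tip itself, rewriting the commit history
git reset --hard HEAD~1
git log --oneline
```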
LOW LEVEL COMMANDS
Manipulation commands
• git-apply[1] Apply a patch to files and/or to the index
• git-checkout-index[1] Copy files from the index to the working tree
• git-commit-graph[1] Write and verify Git commit-graph files
• git-commit-tree[1] Create a new commit object
• git-hash-object[1] Compute object ID and optionally create a blob from a file
• git-index-pack[1] Build pack index file for an existing packed archive
• git-merge-file[1] Run a three-way file merge
• git-merge-index[1] Run a merge for files needing merging
• git-multi-pack-index[1] Write and verify multi-pack-indexes
• git-mktag[1] Creates a tag object
• git-mktree[1] Build a tree-object from ls-tree formatted text
• git-pack-objects[1] Create a packed archive of objects
• git-prune-packed[1] Remove extra objects that are already in pack files
• git-read-tree[1] Reads tree information into the index
• git-symbolic-ref[1] Read, modify and delete symbolic refs
• git-unpack-objects[1] Unpack objects from a packed archive
• git-update-index[1] Register file contents in the working tree to the index
Interrogation commands
• git-cat-file[1] Provide content or type and size information for repository objects
• git-cherry[1] Find commits yet to be applied to upstream
• git-diff-files[1] Compares files in the working tree and the index
• git-diff-index[1] Compare a tree to the working tree or index
• git-diff-tree[1] Compares the content and mode of blobs found via two tree objects
• git-for-each-ref[1] Output information on each ref
• git-get-tar-commit-id[1] Extract commit ID from an archive created using git-archive
• git-ls-files[1] Show information about files in the index and the working tree
• git-ls-remote[1] List references in a remote repository
• git-ls-tree[1] List the contents of a tree object
SOURCE CODE MANAGEMENT
• Two popular source code management systems for Linux: Subversion (SVN)
and Git.
• Source code management systems let you store and retrieve multiple
versions of a file.
• While originally designed for program source code, they can be used for any
kind of file: source code, documentation, configuration files, and so on.
Modern systems allow you to store binary files as well, such as image or
audio data.
• Source code management systems let you compare different versions of a
file, as well as do “parallel development.”
• In other words, you can work on two different versions of a file at the same
time, with the source code management system storing both versions. You
can then merge changes from two versions into a third version.
TERMINOLOGIES
Repository
• A repository is where the source code management system stores its copy of
your file. Usually one file in the source code management system is used to
hold all the different versions of a source file. Each source code management
system uses its own format to allow it to retrieve different versions easily and
to track who made what changes, and when.
Sandbox
• A sandbox is your personal, so-called “working copy” of the program or set of
documents under development. You edit your private copy of the file in your
own sandbox, returning changes to the source code management system
when you’re satisfied with the new version.
Check in, check out
• You “check out” files from the repository, edit them, and then “check them in”
• There are several source code management systems
used in the Unix community:
• SCCS : The Source Code Control System
• RCS : The Revision Control System.
• CVS : The Concurrent Versions System
• Arch
• Codeville
• CSSC
SVN
• Subversion is a free/open source version control system. That is,
Subversion manages files and directories, and the changes made
to them, over time.
• This allows you to recover older versions of your data or examine
the history of how your data changed. In this regard, many people
think of a version control system as a sort of “time machine.”
• Subversion can operate across networks, which allows it to be
used by people on different computers. At some level, the ability
for various people to modify and manage the same set of data
from their respective locations fosters collaboration.
• Progress can occur more quickly without a single conduit through
which all modifications must occur.
• Some version control systems are also software configuration management
(SCM) systems.
• These systems are specifically tailored to manage trees of source code and
have many features that are specific to software development—such as
natively understanding programming languages, or supplying tools for
building software. Subversion, however, is not one of these systems.
• Subversion is a version-control system. It lets you track changes to an entire
project directory tree. Every change made to the tree is recorded and can be
retrieved.
WHY SUBVERSION?
• If you need to archive old versions of files and directories, possibly resurrect
them, or examine logs of how they’ve changed over time, then Subversion is
exactly the right tool for you.
• If you need to collaborate with people on documents (usually over a
network) and keep track of who made which changes, then Subversion is
also appropriate.
DIFFERENCES BETWEEN GIT AND SVN
• Git is a distributed version control system; SVN is centralized. There are
also key differences in repositories, branching, and more.
1. Server Architecture
• SVN has a separate server and client. Only the files a developer
is working on are kept on the local machine, and the developer
must be online, working with the server. Users check out files and
commit changes back to the server.
• Git software is installed on a workstation and acts as a client and
a server. Every developer has a local copy of the full version
history of the project on their individual machine.
• Git changes happen locally. So, the developer doesn’t have to be
connected all the time. Once all the files are downloaded to the
developer’s workstation, local operations are faster
2. Branching
• SVN branches are created as directories inside a repository.
This directory structure is the core pain point with SVN
branching. When the branch is ready, you commit back to the
trunk.
• Git branches are only references to a certain commit. They
are lightweight — yet powerful. You can create, delete, and
change a branch at any time, without affecting the commits.
• If you need to test out a new feature or you find a bug, you
can make a branch, make the changes, push the commit to
the central repo, and then delete the branch.
3. Storage Requirements
• Storage is similar in Git and SVN: disk space usage
is roughly equal for both kinds of repositories. The difference
is what type of files can be stored in the repositories.
• Git repositories handle large binary files poorly.
• SVN repositories can handle large binary files in
addition to code. Storing large binary files in SVN takes
up less space than in Git.
4. Commands
SUBVERSION ARCHITECTURE
• Subversion has one repository, which acts as the server: it stores the data
and provides an interface for clients. A client can be a GUI or a simple CLI,
based on a Subversion library for transferring data.
• Subversion usually uses a structure with 3 folders:
• trunk: contains latest source code, which is on development
• tags: contains snapshot of project. For example: Project A
releases version 1.0, all sources inside trunk will be tagged into
1.0 tag, and later, when we need to build/deploy or review version
1.0 again, we will get the tag 1.0
• branches: contains different branches of project. Developers can
work on multiple, simultaneous features without affecting others.
Branches can be merged later after feature has been
implemented
Definitions within SVN
• repository: the server which contains the sources and their version history
• HEAD: the top (latest) position of the SVN history
• trunk: the default, main line of development, created when the repository is set up
• change: the difference between a revision and the previous revision
• working copy: a copy of all the versioned sources on the local computer
• conflict: when many people work simultaneously on the same files.
For example: A and B check out revision 40 of file config.js; then A
updates function update() and commits it to SVN, making revision 41,
while B is updating the same function. When B updates the local working
copy, SVN cannot know which version is the right one: A’s version or B’s
version. That situation is a conflict.
• resolve: B should now review the code inside the function (which will
be marked as conflicted inside file config.js), keep A’s code or
update it to work with B’s changes, and then
mark the conflict as resolved.
• commit: when a developer changes code, architecture, structure,
documentation, etc. and puts it on the server, this is called a commit.
A commit should contain a message describing the change;
every commit increases the revision number of the SVN repository by 1.
• checkout: a user with credentials to access the SVN repository gets the content
at a specific revision, usually the latest one; this revision is
called HEAD, the top position of the SVN structure.
• revert: after making some changes, developers may decide they were a mistake
and want to discard all the changes they had made; they revert,
restoring the state of one or many documents to a specific revision, usually
the revision they are currently working on.
• merge: merging code from different branches into one; the changes of the
second branch are applied to the first branch.
• update: updating a working copy brings it to the selected revision,
usually HEAD.