HR MANAGEMENT
CONTENTS
ACKNOWLEDGEMENT
SYNOPSIS
1. INTRODUCTION
1.1 SYSTEM SPECIFICATION
2. SYSTEM STUDY
2.2.1 FEATURES
3. LOGICAL DESIGN
FUTURE ENHANCEMENT
6. BIBLIOGRAPHY
7. APPENDICES
B. TABLE STRUCTURE
C. SAMPLE CODING
D. SAMPLE INPUT
E. SAMPLE OUTPUT
SYNOPSIS
This paper describes a project focused on Human Resource Management (HR) in the marketing industry. It deals with the possibilities of and approaches to supporting HR via mobile services. The key profiles of mobile communication are interactive broadband protocols, location-based services, and individualized/personalized services based mainly on multimedia information. These profiles are embedded in a three-layer communication model. The proposed HR system not only provides order processing but can also track a salesperson's movements to the customer using a geo-tracking system, so that no salesperson can cheat the manager by not visiting customers. The system also provides an HR order-processing module for distributors and salespeople. In this manner, the new mobile HR solutions can represent a specific kind of collaborative application. The paper starts with an overview of some basic concepts. We then present our approach to web-based project management, which can be used for various forms of collaborative activity over the Internet, including customer relationship management. We have successfully used this approach for the projects of a digital photography firm and a middle-class marketing agency. Currently, we are working on an extended platform for virtual enterprises in the area of facility management, with many customers as well as sub-contractors.
1. INTRODUCTION
The Internet allows new ways of communication among companies as well as between clients and companies, e.g., business-to-business (b2b) marketplaces. New ways of collaboration make virtual enterprises possible. Customer relationship management covers the relations of companies to their clients and the activities that support client and service processes. Today's communication and collaboration facilities allow clients to participate in the creative and inventive development of individual products. This opens up new doors for effective HR on a project basis, since the key factors of successful HR are interaction with, and identification of, the customer. Both are guaranteed in modern project management platforms, combined with the voluntary cooperation of the client and his ambition to design a customized product.
1.1.1 HARDWARE SPECIFICATION
RAM : 64 MB to 256 MB
1.1.2 SOFTWARE SPECIFICATION
Front End : OpenERP
Modules / components
The main OpenERP components are the Open Object framework, about 30 core
modules (also called official modules) and more than 3000 community modules.
Educational use
OpenERP has been used as a component of university courses, and it became a
compulsory subject for the baccalaureate in France, just like Word, Excel and PowerPoint. A
study on experiential learning suggested that OpenERP provides a suitable alternative to
proprietary systems to supplement teaching. OpenERP also offers a completely free program called OpenERP Education, which allows teachers and students to create an OpenERP database for academic purposes.
Software & architecture
OpenERP uses Python scripting and Postgres database. A development repo is
on GitHub.
Vendor support. The last three LTS versions are supported in parallel. This means that when a new LTS version is released, the oldest supported version reaches its end of life and is no longer supported. For example, 8.0 LTS will be supported alongside 9.0 LTS and 10.0 LTS, but will reach end of life when 11.0 LTS is released.
In 2005, Fabien Pinckaers, the founder and current CEO of OpenERP, started to
develop his first software product, TinyERP. His dream was for his product and company to
become a major player in the enterprise world with a cool, innovative, open source product.
However, three years later he came to realize that having the word "tiny" in the product name was not the right approach if he wanted to change the enterprise world. The name was then changed to OpenERP. The company evolved quickly, and by 2010 OpenERP had become a company of more than 100 employees. The OpenERP product was powerful, but Fabien Pinckaers felt that the company had become so distracted by providing services to customers that the product had suffered and become unattractive. He wanted to make sure that the product came first, in order to offer an exceptional product. Therefore, the decision was made to redirect the company's main focus towards software publishing rather than services, and the business model changed accordingly, with an increased focus on building a strong partner network and maintenance offers.
Schemas
In PostgreSQL, a schema holds all objects (with the exception of roles and table
spaces). Schemas effectively act like namespaces, allowing objects of the same name to co-
exist in the same database. By default, newly created databases have a schema called "public",
but any additional schemas can be added, and the public schema isn't mandatory.
A "search path" setting determines the order in which PostgreSQL checks schemas for
unqualified objects (those without a prefixed schema). By default, it is set to "$user, public"
($user refers to the currently connected database user). This default can be set on a database or
role level, but as it is a session parameter, it can be freely changed (even multiple times)
during a client session, affecting that session only.
Non-existent schemas listed in the search path are silently skipped during object lookup. New objects are created in whichever valid schema (one that presently exists) appears first in the search path.
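As a sketch of the behaviour described above (assuming a running PostgreSQL server; the schema and table names are purely illustrative):

```sql
-- Two schemas can hold a table with the same name.
CREATE SCHEMA app;
CREATE SCHEMA archive;
CREATE TABLE app.employees     (id int, name text);
CREATE TABLE archive.employees (id int, name text);

-- With "app" first in the search path, unqualified names resolve there.
SET search_path TO app, public;
INSERT INTO employees VALUES (1, 'Alice');   -- goes into app.employees

-- Non-existent schemas in the path are silently skipped;
-- new objects land in the first schema that actually exists.
SET search_path TO missing_schema, archive;
CREATE TABLE notes (body text);              -- created as archive.notes
```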
Data types
A wide variety of native data types are supported, including:
Boolean
Arbitrary precision numerics
Character (text, varchar, char)
Binary
Date/time (timestamp/time with/without timezone, date, interval)
Money
Enum
Bit strings
Text search type
Composite
HStore (a key-value store within PostgreSQL, enabled via an extension)
Arrays (variable length and can be of any data type, including text and composite
types) up to 1 GB in total storage size
Geometric primitives
IPv4 and IPv6 addresses
CIDR blocks and MAC addresses
XML supporting XPath queries
UUID
In addition, users can create their own data types, which can usually be made fully indexable via PostgreSQL's indexing infrastructures: GiST, GIN and SP-GiST. Examples include the geographic information system (GIS) data types from the PostGIS project for PostgreSQL.
There is also a data type called a "domain", which is the same as any other data type
but with optional constraints defined by the creator of that domain. This means any data
entered into a column using the domain will have to conform to whichever constraints were
defined as part of the domain.
Starting with PostgreSQL 9.2, data types that represent a range of data, called range types, can be used. These can be discrete ranges (e.g. all integer values 1 to 10) or continuous ranges (e.g. any point in time between 10:00 am and 11:00 am). The built-in range types include ranges of integers, big integers, decimal numbers, time stamps (with and without time zone) and dates.
Custom range types can be created to make new types of ranges available, such as IP
address ranges using the inet type as a base, or float ranges using the float data type as a base.
Range types support inclusive and exclusive range boundaries using the [] and () characters respectively (e.g. '[4,9)' represents all integers starting from and including 4, up to but not including 9). Range types are also compatible with existing operators used to check for overlap, containment, right-of, and so on.
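For illustration, the boundary and operator behaviour described above can be exercised as follows (illustrative statements, assuming a PostgreSQL 9.2+ server; the inetrange name is hypothetical):

```sql
-- '[' marks an inclusive bound, ')' an exclusive one:
SELECT int4range(4, 9, '[)') @> 8;          -- containment: true (8 is inside)
SELECT int4range(4, 9, '[)') @> 9;          -- false (upper bound is excluded)
SELECT int4range(1, 5) && int4range(4, 8);  -- overlap test: true

-- A custom range type over inet, as mentioned above:
CREATE TYPE inetrange AS RANGE (subtype = inet);
```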
User-defined objects
New types of almost all objects inside the database can be created, including:
Casts
Conversions
Data types
Domains
Functions, including aggregate functions and window functions
Indexes including custom indexes for custom types
Operators (existing ones can be overloaded)
Procedural languages
Inheritance
Tables can be set to inherit their characteristics from a "parent" table. Data in child
tables will appear to exist in the parent tables, unless data is selected from the parent table
using the ONLY keyword, i.e. SELECT * FROM ONLY parent_table;. Adding a column in
the parent table will cause that column to appear in the child table.
Inheritance can be used to implement table partitioning, using either triggers or rules
to direct inserts to the parent table into the proper child tables.
As of 2010, this feature was not yet fully supported; in particular, table constraints are not fully inheritable. All check constraints and not-null constraints on a parent table are automatically inherited by its children, but other types of constraints (unique, primary key, and foreign key constraints) are not.
Inheritance provides a way to map the features of generalization hierarchies depicted
in entity relationship diagrams (ERDs) directly into the PostgreSQL database.
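A minimal sketch of the inheritance behaviour described above (table names are illustrative):

```sql
-- A child table inherits the parent's columns; its rows are visible
-- through the parent unless ONLY is used.
CREATE TABLE cities   (name text, population int);
CREATE TABLE capitals (country text) INHERITS (cities);

INSERT INTO capitals VALUES ('Paris', 2100000, 'France');

SELECT name FROM cities;        -- includes 'Paris'
SELECT name FROM ONLY cities;   -- excludes rows stored in capitals
```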
Other storage features
Referential integrity constraints including foreign key constraints, column constraints,
and row checks
Binary and textual large-object storage
Table spaces
Per-column collation
Online backup
Point-in-time recovery, implemented using write-ahead logging
In-place upgrades with pg_upgrade for less downtime (supports upgrades from 8.3.x
and later)
Control and connectivity
Foreign data wrappers
PostgreSQL can link to other systems to retrieve data via foreign data wrappers
(FDWs). These can take the form of any data source, such as a file system, another RDBMS,
or a web service. This means that regular database queries can use these data sources like
regular tables, and even join multiple data-sources together.
Interfaces
PostgreSQL has several interfaces available and is also widely supported among
programming language libraries. Built-in interfaces include libpq (PostgreSQL's official C
application interface) and ECPG (an embedded C system). External interfaces include:
libpqxx: C++ interface
PostgresDAC: PostgresDAC (for Embarcadero RadStudio/Delphi/CBuilder XE-XE3)
DBD::Pg: Perl DBI driver
JDBC: JDBC interface
Lua: Lua interface
Npgsql: .NET data provider
ST-Links SpatialKit: Link Tool to ArcGIS
Procedural languages
Procedural languages allow developers to extend the database with
custom subroutines (functions), often called stored procedures. These functions can be used to
build triggers (functions invoked upon modification of certain data) and custom aggregate
functions. Procedural languages can also be invoked without defining a function, using the
"DO" command at SQL level.
Languages are divided into two groups: "Safe" languages are sandboxed and can be
safely used by any user. Procedures written in "unsafe" languages can only be created
by superusers, because they allow bypassing the database's security restrictions, but can also
access sources external to the database. Some languages like Perl provide both safe and unsafe
versions.
PostgreSQL has built-in support for three procedural languages:
Plain SQL (safe). Simpler SQL functions can get expanded inline into the calling
(SQL) query, which saves function call overhead and allows the query optimizer to "see
inside" the function.
PL/pgSQL (safe), which resembles Oracle's PL/SQL procedural language
and SQL/PSM.
C (unsafe), which allows loading custom shared libraries into the database. Functions
written in C offer the best performance, but bugs in code can crash and potentially corrupt the
database. Most built-in functions are written in C.
In addition, PostgreSQL allows procedural languages to be loaded into the database
through extensions. Three language extensions are included with PostgreSQL to
support Perl, Python and Tcl. There are external projects to add support for many other
languages, including Java, JavaScript (PL/V8), R, Ruby, and others.
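As a sketch, a PL/pgSQL function and an anonymous DO block might look like this (the add_tax function and its rate are purely illustrative):

```sql
-- A simple PL/pgSQL function (a "safe" language):
CREATE FUNCTION add_tax(amount numeric) RETURNS numeric AS $$
BEGIN
    RETURN amount * 1.20;   -- hypothetical 20% tax rate
END;
$$ LANGUAGE plpgsql;

SELECT add_tax(100);        -- 120

-- The DO command runs procedural code without defining a function:
DO $$
BEGIN
    RAISE NOTICE 'current time is %', now();
END;
$$;
```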
Triggers
Triggers are events fired by the action of SQL DML statements. For example, an INSERT statement might activate a trigger that checks whether the values of the statement are valid. Most triggers are activated only by INSERT or UPDATE statements. Triggers are fully supported and can be attached to tables. Triggers can be per-column and conditional, in that UPDATE triggers can target specific columns of a table, and triggers can be told to execute under a set of conditions as specified in the trigger's WHEN clause. Triggers can be attached to views by using the INSTEAD OF condition. Multiple triggers are fired in alphabetical order. In addition to calling functions written in the native PL/pgSQL, triggers can also invoke functions written in other languages like PL/Python or PL/Perl.
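A sketch of a per-column, conditional trigger of the kind described above (the accounts table and function are illustrative):

```sql
CREATE TABLE accounts (id int, balance numeric);

CREATE FUNCTION log_balance_change() RETURNS trigger AS $$
BEGIN
    RAISE NOTICE 'balance of % changed to %', NEW.id, NEW.balance;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Fires only for UPDATEs that target the balance column, and only
-- when the value actually changes (the WHEN condition):
CREATE TRIGGER balance_audit
    AFTER UPDATE OF balance ON accounts
    FOR EACH ROW
    WHEN (OLD.balance IS DISTINCT FROM NEW.balance)
    EXECUTE PROCEDURE log_balance_change();
```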
Asynchronous notifications
PostgreSQL provides an asynchronous messaging system that is accessed through the
NOTIFY, LISTEN and UNLISTEN commands. A session can issue a NOTIFY command,
along with the user-specified channel and an optional payload, to mark a particular event
occurring. Other sessions are able to detect these events by issuing a LISTEN command,
which can listen to a particular channel. This functionality can be used for a wide variety of
purposes, such as letting other sessions know when a table has updated or for separate
applications to detect when a particular action has been performed. Such a system removes the need for applications to continuously poll to see whether anything has changed, reducing unnecessary overhead. Notifications are fully transactional, in that messages are not sent until the transaction they were sent from is committed. This eliminates the problem of messages being sent for an action that is then rolled back.
Many of the connectors for PostgreSQL provide support for this notification system
(including libpq, JDBC, Npgsql, psycopg and node.js) so it can be used by external
applications.
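A sketch of the notification flow between two database sessions (the orders_changed channel and orders table are hypothetical):

```sql
-- Session A subscribes to a channel:
LISTEN orders_changed;

-- Session B marks an event, with an optional payload; the message is
-- delivered only when B's transaction commits:
BEGIN;
UPDATE orders SET status = 'shipped' WHERE id = 42;
NOTIFY orders_changed, 'order 42 shipped';
COMMIT;
-- Session A now receives a notification on channel "orders_changed"
-- carrying the payload 'order 42 shipped'.
```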
Rules
Rules allow the "query tree" of an incoming query to be rewritten. Rules, or more
properly, "Query Re-Write Rules", are attached to a table/class and "Re-Write" the incoming
DML (select, insert, update, and/or delete) into one or more queries that either replace the
original DML statement or execute in addition to it. Query Re-Write occurs after DML
statement parsing, but before query planning.
Other querying features
Transactions
Full text search
Views
Materialized views
Updateable views
Recursive views
Inner, outer (full, left and right), and cross joins
Sub-selects
Correlated sub-queries
Regular expressions
Common table expressions and writable common table expressions
Encrypted connections via TLS (current versions do not use vulnerable SSL, even with that configuration option)
Domains
Savepoints
Two-phase commit
TOAST (The Oversized-Attribute Storage Technique) is used to transparently store
large table attributes (such as big MIME attachments or XML messages) in a separate area,
with automatic compression.
Embedded SQL is implemented using a preprocessor. SQL code is first written embedded in C code, then run through the ECPG preprocessor, which replaces the SQL with calls to a code library. The result can then be compiled with a C compiler. Embedding also works with C++, but the preprocessor does not recognize all C++ constructs.
Security
PostgreSQL manages its internal security on a per-role basis. A role is generally
regarded to be a user (a role that can log in), or a group (a role of which other roles are
members). Permissions can be granted or revoked on any object down to the column level,
and can also allow/prevent the creation of new objects at the database, schema or table levels.
PostgreSQL's SECURITY LABEL feature (an extension to the SQL standard) allows for additional security, with a bundled loadable module that supports label-based mandatory access control (MAC) based on SELinux security policy.
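A sketch of role-based, column-level permissions as described above (role and table names are illustrative):

```sql
-- A group role and a login role (user):
CREATE ROLE hr_staff;                          -- group (cannot log in)
CREATE ROLE alice LOGIN PASSWORD 'secret';     -- user
GRANT hr_staff TO alice;

-- Grant SELECT on only two columns of a hypothetical employees table:
GRANT SELECT (name, department) ON employees TO hr_staff;

-- Prevent ordinary roles from creating new objects in a schema:
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
```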
PostgreSQL natively supports a broad number of external authentication mechanisms,
including:
password (either MD5 or plain-text)
GSSAPI
SSPI
Kerberos
ident (maps O/S user-name as provided by an ident server to database user-name)
peer (maps local user name to database user name)
LDAP
Active Directory
RADIUS
certificate
PAM
2. SYSTEM STUDY
In the present system, all work is handled manually and has to be noted down in registers, and care must be taken of that documentation. Meetings are arranged by phone call, and if any update occurs the client has to be called again to update the meeting schedule, which wastes time as well as money.
2.1.1 DRAWBACKS
The proposed system focuses on mobile Human Resource Management (HR) in the marketing industry and deals with the possibilities of and approaches to supporting HR via mobile services. The key profiles of mobile communication are interactive broadband protocols, location-based services, and individualized/personalized services based mainly on multimedia information. These profiles are embedded in a three-layer communication model. The system not only provides order processing but can also track a salesperson's movements to the customer using a geo-tracking system, so that no salesperson can cheat the manager by not visiting customers. It also provides an HR order-processing module for distributors and salespeople. In this manner, the new mobile HR solutions can represent a specific kind of collaborative application.
2.2.1 FEATURES
Files are protected using the username and password as a key, together with a transposition cipher technique in which the contents of a file are converted into cipher text that cannot be understood even by the administrator. In this way the data is completely secured. Only after decrypting the file with the same key and password can the administrator read the data.
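A minimal sketch of a columnar transposition cipher of the kind described above (the actual system's key handling is not specified in this report, so the key here is just an illustrative string, and "_" is used as padding):

```python
def transpose_encrypt(plaintext: str, key: str) -> str:
    """Columnar transposition: write the text in rows under the key,
    then read the columns in the alphabetical order of the key letters."""
    cols = len(key)
    # Pad so the grid is rectangular.
    padded = plaintext + "_" * (-len(plaintext) % cols)
    rows = [padded[i:i + cols] for i in range(0, len(padded), cols)]
    # Read columns in key order (ties broken by position).
    order = sorted(range(cols), key=lambda i: (key[i], i))
    return "".join(row[c] for c in order for row in rows)

def transpose_decrypt(ciphertext: str, key: str) -> str:
    """Reverse the transposition: refill the columns in key order,
    then read the grid back row by row and strip the padding."""
    cols = len(key)
    nrows = len(ciphertext) // cols
    order = sorted(range(cols), key=lambda i: (key[i], i))
    grid = [[""] * cols for _ in range(nrows)]
    it = iter(ciphertext)
    for c in order:
        for r in range(nrows):
            grid[r][c] = next(it)
    return "".join("".join(row) for row in grid).rstrip("_")
```

For example, `transpose_encrypt("SECRET", "key")` writes the grid rows "SEC"/"RET" and reads the columns in the order e, k, y, producing "EESRCT"; decrypting with the same key recovers the original text.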
3.2 INPUT DESIGN
[Figure: iterative proposal workflow between agency and customer: 1st draft and proposal, customer comment, 2nd draft and final version.]
The model of collaboration among companies differs from the client-integrated case in that the participants act with equal rights. Therefore, all actors are equipped with equivalent access rights and contribute to the project by developing parts of it (see Fig. 2). They do not necessarily interact in all segments of the project. As in the previously mentioned b2c case, project participants have the latest information and the history of the ongoing project at their fingertips, at any time and in any place, as long as an Internet connection is available.
At first sight, a b2b collaboration setting does not require any customer relationship management, because there are no customers involved. However, if both the virtual enterprise and its customers use the same Internet platform, individual companies can access HR data both about the customers and about the other participating companies that they are responsible for. This is achieved by security mechanisms and the distribution of different access rights. For reasons of acceptance, it is appropriate to store project information with independent service providers or a superordinate association which can provide the needed infrastructure.
[Figure: hierarchical contracting structure. A prime contractor (PC) receives a request and exchanges proposals with first-level subcontractors (SC1, SC2), which in turn exchange proposals with second-level subcontractors (SC21, SC22).]
Companies in the middle of the hierarchy are involved in communication with their subcontractors, of which they are the clients, and with their prime contractor, which is their client. HR is complex in this scenario: nobody has a full view of the complete project structure or of all the HR information available. HR principles can be applied at any stage, as any company, except those at the bottom, can act as a client for a subordinate company in the hierarchy.
3.5 SYSTEM DEVELOPMENT
Login module
Customer module
Reports module
Login Module
It is used for logging in to the system and verifying the user. Once the user is authenticated, they can access the system. To view and modify the details stored in the proposed system, the user must have privileges; once the user logs in, privileges such as view, add, delete and modify are granted. Only the admin, who may be a staff member or the root user handling the system, has the authority to perform these operations.
The first module is the admin module, which has the right to create space for a new batch. Any entry of a new faculty member or update to a subject is done by the admin, and sending notices is also possible. The admin can access the entire system; students can only view the student reports and the attendance report. The second module is handled by the user, who can be a faculty member or an operator. Staff have the right to mark daily attendance, which is done in the attendance module. User verification is done via the login module.
Customer Module
This module describes customer details such as customer name, e-mail ID and phone number. The administrator interacts with customers through this module; a customer can easily raise an issue and obtain a solution.
After the system was implemented and conversion completed, a review with the personnel was carried out. They are satisfied with the software: it requires less manpower, provides information on time, and saves data entry and duplicated work, closing the gaps between data-entry tasks in terms of timing and resource allocation. It also provides a locking system and password protection, so it is reliable.
TESTING
Software Validation
Validation is the process of examining whether the software satisfies the user requirements. It is carried out at the end of the SDLC: if the software matches the requirements for which it was made, it is validated. Validation ensures that the product under development meets the user requirements; it answers the question "Are we developing the product which attempts all that the user needs from this software?".
Software Verification
Targets of the test are:
Errors - These are actual coding mistakes made by developers. In addition, a difference between the software's output and the desired output is considered an error.
Fault - When an error exists, a fault occurs. A fault, also known as a bug, is the result of an error and can cause the system to fail.
Failure - Failure is the inability of the system to perform a desired task. Failure occurs when a fault exists in the system.
Manual - This testing is performed without the help of automated testing tools. The software tester prepares test cases for different sections and levels of the code, executes the tests and reports the results to the manager. Manual testing is time- and resource-consuming, and the tester needs to confirm whether or not the right test cases are used. A major portion of testing involves manual testing.
Automated - This testing is performed with the aid of automated testing tools, which overcome the limitations of manual testing. For example, checking that a web page opens in Internet Explorer can easily be done manually, but checking whether the web server can take the load of one million users is practically impossible to test by hand. There are software and hardware tools which help the tester conduct load testing, stress testing and regression testing.
Testing Approaches
Functionality testing
Implementation testing
When functionality is tested without taking the actual implementation into account, it is known as black-box testing. The other side is white-box testing, where not only the functionality is tested but the way it is implemented is also analyzed. Exhaustive testing, in which every single possible value in the range of the input and output values is tested, would be the ideal method, but it is not feasible in real-world scenarios when the range of values is large.
Black-box testing
It is carried out to test the functionality of the program, and is also called behavioral testing. The tester has a set of input values and the corresponding desired results. On providing an input, if the output matches the desired result, the program is considered correct; otherwise it is problematic. In this testing method, the design and structure of the code are not known to the tester; testing engineers and end users conduct this test on the software.
Equivalence class - The input is divided into similar classes. If one element of a class passes the test, the whole class is assumed to pass.
Boundary values - The input is divided into higher- and lower-end values. If these values pass the test, it is assumed that all values in between may pass too.
Cause-effect graphing - In both previous methods, only one input value at a time is tested. Cause-effect graphing is a testing technique where combinations of input values (causes) and output values (effects) are tested in a systematic way.
State-based testing - The system changes state on provision of input. These systems
are tested based on their states and input.
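The equivalence-class and boundary-value ideas above can be sketched in Python against a hypothetical validator (the 18-60 rule is an assumption for illustration, not part of the project's actual code):

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical validator: an applicant must be 18-60 inclusive."""
    return 18 <= age <= 60

# Equivalence classes: one representative per class stands in for the rest.
assert is_valid_age(30) is True      # class: valid ages
assert is_valid_age(10) is False     # class: too young
assert is_valid_age(70) is False     # class: too old

# Boundary values: test at and just outside each edge of the valid range.
for age, expected in [(17, False), (18, True), (60, True), (61, False)]:
    assert is_valid_age(age) is expected
```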
White-box testing
In this testing method, the design and structure of the code are known to the tester.
Programmers of the code conduct this test on the code.
Control-flow testing - The purpose of control-flow testing is to set up test cases which cover all statements and branch conditions. Each branch condition is tested for being both true and false, so that all statements are covered.
Data-flow testing - This technique emphasizes covering all the data variables included in the program. It tests where the variables were declared and defined, and where they were used or changed.
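A control-flow testing sketch in Python: a toy function with a single branch (an illustrative assumption, not project code), exercised with the branch condition both true and false so every statement runs at least once:

```python
def classify(balance: float) -> str:
    """Toy function with one branch condition, used only to illustrate
    branch coverage."""
    if balance < 0:
        return "overdrawn"
    return "ok"

# Control-flow testing: cover both outcomes of the branch.
assert classify(-5.0) == "overdrawn"   # branch condition true
assert classify(100.0) == "ok"         # branch condition false
```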
TEST CASE

Test ID | Description | Input | Expected Output | Actual Output | Result
TC_01 | To check username and password match with the database | ... | ... | ... | ...
TC_02 | To check username and password do not match with the database | Username: xxxx, Password: yyyy | Should not login | "Username or password is incorrect" | Pass
TC_03 | To check username matches and password does not match with the database | Username: abc, Password: yyyy | Should not login | "Username or password is incorrect" | Pass
TC_04 | To check username does not match and password matches with the database | Username: xxxx, Password: 123 | Should not login | "Username or password is incorrect" | Pass
SCREEN NAME: REGISTRATION

Test ID | Description | Input | Expected Output | Actual Output | Result
TC_02 | To check that the password is of integer type | Password: 12345 | Should move to next line | Moves to next line | PASS
TC_03 | To check that the username is unique | Username: text | Should move to next line | Moves to next line | PASS
TC_04 | To check that the password is unique | Password: 12345 | Should move to next line | Moves to next line | PASS
TC_05 | To check that the Email ID has the @ symbol | Email: abc@gmail.com | Should accept the Email | Accepts Email ID | PASS
TC_06 | To check an Email ID without the @ symbol | Email: Abc1gmail.com | Should not accept the Email | Shows error in Email ID | PASS
5.CONCLUSION
A human resources information system (HRIS) can help both employer and employee do their jobs, and can help an organization run smoothly using technology. Organizations can move their management system from a traditional approach to a modern, technology-based one. In addition, an organization can gain an advantage over its competition as it becomes more advanced.
There are some benefits of implementing HRIS:
1. Standardization
An HRIS provides uniformity through templates and predetermined procedures for uploading
data and downloading reports. It also means that data retrieved and viewed is in a format that
is easily identifiable and user friendly.
2. Knowledge management
Lastly, I enjoyed this subject, which helped me understand human resources information systems. I can use this knowledge in the future: for a human resources practitioner, knowledge of information systems is important in order to manage employees.
6. BIBLIOGRAPHY
7. APPENDICES
A data flow diagram (DFD) is a graphical tool used to describe and analyze the movement of data through a system. DFDs are the central tool, and the basis from which the other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; such diagrams are known as logical data flow diagrams. Physical data flow diagrams, in contrast, show the actual implementation and movement of data between people, departments and workstations. A full description of a system actually consists of a set of data flow diagrams, developed using one of two familiar notations: Yourdon or Gane and Sarson.
The idea behind the explosion of a process into more processes is that the understanding at one level of detail is exploded into greater detail at the next level. This is done until no further explosion is necessary and an adequate amount of detail is described for the analyst to understand the process. Larry Constantine first developed the DFD as a way of expressing system requirements in a graphical form; this led to modular design. A DFD, also known as a bubble chart, has the purpose of clarifying system requirements and identifying major transformations that will become programs in system design, so it is the starting point of the design, down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.
DFD symbols
Data flow - an arrow identifies a data flow, the pipeline through which the information flows.
Process - a circle or a bubble represents a process that transforms incoming data flows into outgoing data flows.
Data store - represents a repository of data in the system.
Constructing a DFD
Processes should be named and numbered for easy reference, and each name should be representative of the process.
The direction of flow is from top to bottom and from left to right. Data traditionally flow from the source to the destination, although they may flow back to the source. One way to indicate this is to draw a long flow line back to the source; an alternative is to repeat the source symbol as a destination, marked with a short diagonal since it is used more than once in the DFD.
When a process is exploded into lower-level details, the sub-processes are numbered.
The names of data stores and destinations are written in capital letters. Process and data flow names have the first letter of each word capitalized.
A DFD typically shows the minimum contents of a data store; each data store should contain all the data elements that flow in and out. Missing interfaces, redundancies and the like are then accounted for, often through interviews.
The DFD shows the flow of data, not of control: loops and decisions are control considerations and do not appear on a DFD.
The DFD does not indicate the time factor involved in any process, i.e. whether the data flows take place daily, weekly, monthly or yearly.
Current physical
In a current physical DFD, process labels include the names of people or their positions, or
the names of the computer systems that provide some of the overall system processing;
the label includes an identification of the technology used to process the data. Similarly, data
flows and data stores are often labelled with the names of the actual physical media on which
data are stored, such as file folders, computer files, business forms or computer tapes.
New logical
This would be exactly like the current logical model if the user were completely happy with
the functionality of the current system but had problems with how it was implemented.
Typically, though, the new logical model will differ from the current logical model:
additional functions are added, obsolete functions are removed and inefficient flows are
reorganized.
New physical
The new physical represents only the physical implementation of the new system.
Current logical
The physical aspects of the system are removed as much as possible, so that the current
system is reduced to its essence: the data and the processes that transform them, regardless
of their actual physical form.
Process
No process can have only outputs, and no process can have only inputs. If an object has
only inputs, then it must be a sink. A process has a verb-phrase label.
Data store
Data cannot move directly from one data store to another data store; a process must
move the data. Data cannot move directly from an outside source to a data store; a process
must receive the data from the source and place it into the data store. A data store has a
noun-phrase label. A source is the origin or destination of data. Data cannot move directly from a
source to a sink; it must be moved by a process. A source or sink has a noun-phrase label.
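These construction rules lend themselves to a mechanical check. The sketch below, in plain Python with made-up node and flow names (not from the project), validates the rules just stated: every process needs both inputs and outputs, data cannot move store-to-store without a process, and data cannot move source-to-sink without a process.

```python
# Minimal DFD validity check. Nodes are tagged as process, store,
# source or sink; flows are (from, to) pairs. Example data is hypothetical.
NODES = {
    "Customer": "source",
    "Raise Issue": "process",
    "Complaint File": "store",
    "Customer Service": "sink",
}
FLOWS = [
    ("Customer", "Raise Issue"),
    ("Raise Issue", "Complaint File"),
    ("Complaint File", "Raise Issue"),
    ("Raise Issue", "Customer Service"),
]

def validate(nodes, flows):
    errors = []
    for name, kind in nodes.items():
        outs = [f for f in flows if f[0] == name]
        ins = [f for f in flows if f[1] == name]
        # A process with no inputs or no outputs violates the DFD rules.
        if kind == "process" and (not ins or not outs):
            errors.append("process '%s' must have inputs and outputs" % name)
    for src, dst in flows:
        # Data may not move store-to-store or source-to-sink directly.
        if nodes[src] == "store" and nodes[dst] == "store":
            errors.append("store-to-store flow %s -> %s needs a process" % (src, dst))
        if nodes[src] == "source" and nodes[dst] == "sink":
            errors.append("source-to-sink flow %s -> %s needs a process" % (src, dst))
    return errors

print(validate(NODES, FLOWS))  # [] means the diagram obeys the rules
```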
LEVEL 0
[Context diagram, recovered from the original figure: the CUSTOMER sends Register
Complaint and Payment flows to the HR Management process; HR Management sends Check
Complaint and Check Status flows to CUSTOMER SERVICE.]
LEVEL 1
[Level-1 diagram: 1.0 Raise Issue (CUSTOMER to CUSTOMER SERVICE), 2.0 Rectify,
3.0 Update Status, 4.0 Payment.]
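Since the original level-0 figure did not survive extraction cleanly, the recovered flows can also be recorded as plain data; the flow directions below are an assumption read off the figure labels.

```python
# Level-0 context flows of the HR management DFD, recorded as
# (source, data flow, destination) triples. Directions are assumptions
# reconstructed from the figure labels.
LEVEL0_FLOWS = [
    ("Customer", "Register Complaint", "HR Management"),
    ("Customer", "Payment", "HR Management"),
    ("HR Management", "Check Complaint", "Customer Service"),
    ("HR Management", "Check Status", "Customer Service"),
]

def flows_from(entity):
    """Return the (data, destination) pairs leaving a given entity."""
    return [(data, dst) for src, data, dst in LEVEL0_FLOWS if src == entity]

print(flows_from("Customer"))
```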
B.TABLE DESIGN
Customer Registration
2. Customer_Name Text Name of the customer
C.SAMPLE CODING
import logging

from openerp import addons, tools
from openerp.osv import fields, osv

_logger = logging.getLogger(__name__)
class hr_employee_category(osv.osv):
_name = "hr.employee.category"
_description = "Employee Category"
_columns = {
'name': fields.char("Category", size=64, required=True),
'complete_name': fields.function(_name_get_fnc, type="char", string='Name'),
'parent_id': fields.many2one('hr.employee.category', 'Parent Category', select=True),
'child_ids': fields.one2many('hr.employee.category', 'parent_id', 'Child Categories'),
'employee_ids': fields.many2many('hr.employee', 'employee_category_rel',
'category_id', 'emp_id', 'Employees'),
}
_constraints = [
(_check_recursion, 'Error! You cannot create recursive Categories.', ['parent_id'])
]
hr_employee_category()
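The _check_recursion constraint, inherited from the osv base class, rejects category trees whose parent_id chain loops back on itself. A standalone sketch of the same idea over a plain parent map (not the ORM API):

```python
def has_recursion(parent_of, start):
    """Follow parent links from `start`; True if a node repeats (a cycle)."""
    seen = set()
    node = start
    while node is not None:
        if node in seen:
            return True
        seen.add(node)
        node = parent_of.get(node)
    return False

# A valid tree: Staff -> Management -> (no parent)
parents = {"Staff": "Management", "Management": None}
print(has_recursion(parents, "Staff"))   # False

# An invalid tree: A -> B -> A
bad = {"A": "B", "B": "A"}
print(has_recursion(bad, "A"))           # True
```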
class hr_job(osv.osv):
def _no_of_employee(self, cr, uid, ids, name, args, context=None):
res = {}
for job in self.browse(cr, uid, ids, context=context):
nb_employees = len(job.employee_ids or [])
res[job.id] = {
'no_of_employee': nb_employees,
'expected_employees': nb_employees + job.no_of_recruitment,
}
return res
_name = "hr.job"
_description = "Job Description"
_inherit = ['mail.thread']
_columns = {
'name': fields.char('Job Name', size=128, required=True, select=True),
'expected_employees': fields.function(_no_of_employee, string='Total Forecasted Employees',
help='Expected number of employees for this job position after new recruitment.',
store = {
'hr.job': (lambda self,cr,uid,ids,c=None: ids, ['no_of_recruitment'], 10),
'hr.employee': (_get_job_position, ['job_id'], 10),
},
multi='no_of_employee'),
'no_of_employee': fields.function(_no_of_employee, string="Current Number of Employees",
help='Number of employees currently occupying this job position.',
store = {
'hr.employee': (_get_job_position, ['job_id'], 10),
},
multi='no_of_employee'),
'no_of_recruitment': fields.float('Expected in Recruitment', help='Number of new employees you expect to recruit.'),
'employee_ids': fields.one2many('hr.employee', 'job_id', 'Employees',
groups='base.group_user'),
'description': fields.text('Job Description'),
'requirements': fields.text('Requirements'),
'department_id': fields.many2one('hr.department', 'Department'),
'company_id': fields.many2one('res.company', 'Company'),
'state': fields.selection([('open', 'No Recruitment'), ('recruit', 'Recruitment in Progress')], 'Status', readonly=True, required=True,
help="By default 'In position', set it to 'In Recruitment' if recruitment process is going on for this job position."),
}
_defaults = {
'company_id': lambda self,cr,uid,c:
self.pool.get('res.company')._company_default_get(cr, uid, 'hr.job', context=c),
'state': 'open',
}
_sql_constraints = [
('name_company_uniq', 'unique(name, company_id)', 'The name of the job position must be unique per company!'),
]
def on_change_expected_employee(self, cr, uid, ids, no_of_recruitment, no_of_employee, context=None):
if context is None:
context = {}
return {'value': {'expected_employees': no_of_recruitment + no_of_employee}}
hr_job()
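Stripped of the ORM, the head-count arithmetic of _no_of_employee above is simple addition; a plain-Python sketch with a made-up job record:

```python
def job_headcount(employee_ids, no_of_recruitment):
    """Mirror of _no_of_employee: current head count plus planned hires."""
    nb_employees = len(employee_ids or [])
    return {
        "no_of_employee": nb_employees,
        "expected_employees": nb_employees + no_of_recruitment,
    }

# Hypothetical job with three linked employees and two planned hires.
print(job_headcount([101, 102, 103], 2))
# {'no_of_employee': 3, 'expected_employees': 5}
```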
class hr_employee(osv.osv):
_name = "hr.employee"
_description = "Employee"
_inherits = {'resource.resource': "resource_id"}
def _set_image(self, cr, uid, id, name, value, args, context=None):
return self.write(cr, uid, [id], {'image': tools.image_resize_image_big(value)},
context=context)
_columns = {
#we need a related field in order to be able to sort the employee by name
'name_related': fields.related('resource_id', 'name', type='char', string='Name',
readonly=True, store=True),
'country_id': fields.many2one('res.country', 'Nationality'),
'birthday': fields.date("Date of Birth"),
'ssnid': fields.char('SSN No', size=32, help='Social Security Number'),
'sinid': fields.char('SIN No', size=32, help="Social Insurance Number"),
'identification_id': fields.char('Identification No', size=32),
'otherid': fields.char('Other Id', size=64),
'gender': fields.selection([('male', 'Male'),('female', 'Female')], 'Gender'),
'marital': fields.selection([('single', 'Single'), ('married', 'Married'), ('widower', 'Widower'), ('divorced', 'Divorced')], 'Marital Status'),
'department_id':fields.many2one('hr.department', 'Department'),
'address_id': fields.many2one('res.partner', 'Working Address'),
'address_home_id': fields.many2one('res.partner', 'Home Address'),
'bank_account_id':fields.many2one('res.partner.bank', 'Bank Account Number',
domain="[('partner_id','=',address_home_id)]", help="Employee bank salary account"),
'work_phone': fields.char('Work Phone', size=32, readonly=False),
'mobile_phone': fields.char('Work Mobile', size=32, readonly=False),
'work_email': fields.char('Work Email', size=240),
'work_location': fields.char('Office Location', size=32),
'notes': fields.text('Notes'),
'parent_id': fields.many2one('hr.employee', 'Manager'),
'category_ids': fields.many2many('hr.employee.category', 'employee_category_rel',
'emp_id', 'category_id', 'Tags'),
'child_ids': fields.one2many('hr.employee', 'parent_id', 'Subordinates'),
'resource_id': fields.many2one('resource.resource', 'Resource', ondelete='cascade',
required=True),
'coach_id': fields.many2one('hr.employee', 'Coach'),
'job_id': fields.many2one('hr.job', 'Job'),
# image: all image fields are base64 encoded and PIL-supported
'image': fields.binary("Photo",
help="This field holds the image used as photo for the employee, limited to 1024x1024px."),
'image_medium': fields.function(_get_image, fnct_inv=_set_image,
string="Medium-sized photo", type="binary", multi="_get_image",
store = {
'hr.employee': (lambda self, cr, uid, ids, c={}: ids, ['image'], 10),
},
help="Medium-sized photo of the employee. It is automatically "\
"resized as a 128x128px image, with aspect ratio preserved. "\
"Use this field in form views or some kanban views."),
'image_small': fields.function(_get_image, fnct_inv=_set_image,
string="Small-sized photo", type="binary", multi="_get_image",
store = {
'hr.employee': (lambda self, cr, uid, ids, c={}: ids, ['image'], 10),
},
help="Small-sized photo of the employee. It is automatically "\
"resized as a 64x64px image, with aspect ratio preserved. "\
"Use this field anywhere a small image is required."),
'passport_id':fields.char('Passport No', size=64),
'color': fields.integer('Color Index'),
'city': fields.related('address_id', 'city', type='char', string='City'),
'login': fields.related('user_id', 'login', type='char', string='Login', readonly=1),
'last_login': fields.related('user_id', 'date', type='datetime', string='Latest Connection',
readonly=1),
}
_order='name_related'
def onchange_address_id(self, cr, uid, ids, address, context=None):
if address:
address = self.pool.get('res.partner').browse(cr, uid, address, context=context)
return {'value': {'work_phone': address.phone, 'mobile_phone': address.mobile}}
return {'value': {}}
def _get_default_image(self, cr, uid, context=None):
image_path = addons.get_module_resource('hr', 'static/src/img', 'default_image.png')
return tools.image_resize_image_big(open(image_path, 'rb').read().encode('base64'))
_defaults = {
'active': 1,
'image': _get_default_image,
'color': 0,
}
_constraints = [
(_check_recursion, 'Error! You cannot create recursive hierarchy of Employee(s).',
['parent_id']),
]
hr_employee()
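The image_medium and image_small fields rely on OpenERP's tools.image_resize_* helpers to shrink the stored photo to 128x128 and 64x64 while preserving aspect ratio. The fitting arithmetic can be sketched on its own (pure Python, no imaging library; treating small images as never upscaled is an assumption here):

```python
def fit_within(width, height, box=128):
    """Scale (width, height) to fit inside a box x box square,
    preserving aspect ratio and never upscaling."""
    scale = min(box / float(width), box / float(height), 1.0)
    return (int(round(width * scale)), int(round(height * scale)))

print(fit_within(1024, 768))   # (128, 96)
print(fit_within(50, 40))      # (50, 40) -- already fits, left alone
```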
class hr_department(osv.osv):
_description = "Department"
_inherit = 'hr.department'
_columns = {
'manager_id': fields.many2one('hr.employee', 'Manager'),
'member_ids': fields.one2many('hr.employee', 'department_id', 'Members',
readonly=True),
}
class res_users(osv.osv):
_name = 'res.users'
_inherit = 'res.users'
def create(self, cr, uid, data, context=None):
# Reconstructed method opening (hypothetical): a page break in the
# original listing dropped the header and the try: that the except
# below closes.
user_id = super(res_users, self).create(cr, uid, data, context=context)
data_obj = self.pool.get('ir.model.data')
try:
data_id = data_obj._get_id(cr, uid, 'hr', 'ir_ui_view_sc_employee')
view_id = data_obj.browse(cr, uid, data_id, context=context).res_id
self.pool.get('ir.ui.view_sc').copy(cr, uid, view_id, default = {
'user_id': user_id}, context=context)
except:
# Tolerate a missing shortcut. See product/product.py for similar code.
_logger.debug('Skipped meetings shortcut for user "%s".',
data.get('name','<new'))
return user_id
_columns = {
'employee_ids': fields.one2many('hr.employee', 'user_id', 'Related employees'),
}
res_users()
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
D.SAMPLE INPUT
E.SAMPLE OUTPUT