
CONTENTS

ACKNOWLEDGEMENT
CONTENTS
SYNOPSIS

1. INTRODUCTION

1.1 SYSTEM SPECIFICATION

1.1.1 HARDWARE CONFIGURATION

1.1.2 SOFTWARE SPECIFICATION

2. SYSTEM STUDY

2.1 EXISTING SYSTEM

2.1.1 DRAWBACKS

2.2 PROPOSED SYSTEM

2.2.1 FEATURES

3. LOGICAL DESIGN

3.1 FILE DESIGN

3.2 INPUT DESIGN

3.3 OUTPUT DESIGN

3.4 DATABASE DESIGN

3.5 SYSTEM DEVELOPMENT

3.5.1 DESCRIPTION OF MODULES

4. TESTING AND IMPLEMENTATION

5. CONCLUSION

FUTURE ENHANCEMENT

6. BIBLIOGRAPHY

7. APPENDICES

A. DATA FLOW DIAGRAM

B. TABLE STRUCTURE

C. SAMPLE CODING

D. SAMPLE INPUT

E. SAMPLE OUTPUT

SYNOPSIS
This paper describes a project focused on Human Resource Management (HR) in the marketing industry. It deals with the possibilities and aspects of supporting HR via mobile services. The key profiles of mobile communication are Interactive Broadband Protocols, Location Based Services and Individualized/Personalized Services, mainly based on multimedia information. These profiles are embedded in a three-layer communication model. The aim is to provide an HR system that not only supports order processing but can also track the movements of salespersons to customers using a geo-tracking system, so that no salesperson can deceive the manager by not visiting customers. The system also provides an HR order-processing module for the distributor/salesperson. In this manner, the new mobile HR solutions can represent a specific kind of collaborative application. We start with an overview of some basic concepts. We then present our approach to web-based project management, which can be used for various forms of collaborative activities over the Internet, including customer relationship management. We have successfully used our approach for projects of a digital photography company and a middle-class marketing agency. Currently, we are working on an extended platform for virtual enterprises in the area of facility management with many customers as well as sub-contractors.

1. INTRODUCTION

The Internet allows new ways of communication among companies as well as between clients and companies, e.g., business-to-business (b2b) market places. New ways of collaboration make virtual enterprises possible. Customer relationship management includes the relations of companies to their clients and activities for the consequent support of client and service processes. Today's communication and collaboration facilities allow clients to participate in the creative and inventive development of individual products. This opens up new doors for effective HR on a project basis, since the key factors of successful HR are interaction with and identification of the customer. Both are guaranteed in modern project management platforms, linked with the voluntary cooperation of the client and his ambition to design a customized product.

In this paper we describe a web-based project management system that enables companies to plan and execute business processes both with other companies and with clients.
In Section 2 we describe basic concepts. Section 3 outlines our approach of web-based project
management. In Sections 4 and 5, we introduce customer relationship management and depict
consequences to virtual enterprises, respectively. In Section 6, we describe new collaborative
HR approaches. An example is presented in Section 7. In Section 8 we describe a prototype.
In Section 9, we draw conclusions and outline future work.
1.1 SYSTEM SPECIFICATION

1.1.1 HARDWARE SPECIFICATION

PROCESSOR : INTEL P-III BASED SYSTEM

PROCESSOR SPEED : 250 MHz TO 833 MHz

RAM : 64 MB TO 256 MB

HARD DISK : 2GB to 30GB

KEYBOARD : 104 KEYS

1.1.2 SOFTWARE SPECIFICATION

FRONT END : Open ERP 7.0

BACK END : PostgreSQL

OPERATING SYSTEM : WINDOWS NT/95/98/2000


SOFTWARE DESCRIPTION

Front End

OpenERP

OpenERP is an all-in-one management software suite that offers a range of business applications forming a complete set of enterprise management tools. The OpenERP solution is ideal for SMEs, but fits both small and large companies alike. It is capable of covering all business needs, including HR, website/e-commerce, billing, accounting, manufacturing, warehouse and project management, and inventory, all seamlessly integrated.
OpenERP offers three separate versions of the solution: OpenERP Enterprise, OpenERP Online SaaS (Software as a Service), and the OpenERP Community version. The Enterprise version is self-hosted, includes all the apps, and its pricing starts at $360 per user per year, with a minimum of 5 users. The OpenERP Online version is hosted in the cloud, and the first app is offered for free as a standalone app for unlimited users. After the first app, there is a fixed monthly subscription fee for the apps used and the number of users. The Community version is the open-source version. The source code for the OpenObject framework and core ERP (enterprise resource planning) modules is curated by the Belgium-based OpenERP S.A. The last fully featured open-source version was 8.0 (LTS).
Source code model
From inception, OpenERP S.A. released its software as open source, but starting with the V9.0 release, the company transitioned to an open-core model which provides subscription-based proprietary Enterprise software and cloud-hosted Software as a Service, along with a cut-down Community version.
Community & network
In 2013, the not-for-profit OpenERP Community Association was formed to promote
the widespread use of OpenERP and to support the collaborative development of OpenERP
features. This organisation has over 150 members who are a mix of individuals and
organisations. However, there are over 20,000 people that contribute to the OpenERP
community.
OpenERP S.A. switched its focus from being a service company to focus more on
software publishing and the SaaS business. Customized programming, support, and other
services, are provided by an active global community and a network of over 700
official partners and integrators.

Modules / components
The main OpenERP components are the Open Object framework, about 30 core
modules (also called official modules) and more than 3000 community modules.
Educational use
OpenERP has been used as a component of university courses, and it became a
compulsory subject for the baccalaureate in France, just like Word, Excel and PowerPoint. A
study on experiential learning suggested that OpenERP provides a suitable alternative to
proprietary systems to supplement teaching. OpenERP also offers a completely free
programme called OpenERP Education, which allows teachers and/or students to create an
OpenERP database for academic purposes.
Software & architecture
OpenERP uses Python scripting and a PostgreSQL database. A development repository is
available on GitHub.
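Since OpenERP is built on Python, a small Python sketch of how an external script might connect to an OpenERP 7 server through its standard XML-RPC endpoints can illustrate the architecture. The host, port, database name and credentials below are placeholder assumptions, not real deployment values.

```python
import xmlrpc.client

# Hypothetical connection details, for illustration only.
HOST, PORT, DB = "localhost", 8069, "demo_db"
USER, PASSWORD = "admin", "admin"

def endpoints(host, port):
    """Build the two standard OpenERP XML-RPC endpoint URLs:
    /xmlrpc/common for authentication, /xmlrpc/object for model calls."""
    base = "http://%s:%d/xmlrpc" % (host, port)
    return base + "/common", base + "/object"

common_url, object_url = endpoints(HOST, PORT)

def fetch_partner_names():
    """Log in, then read partner names via the 'res.partner' model.
    (Requires a running server, so it is defined but not called here.)"""
    common = xmlrpc.client.ServerProxy(common_url)
    uid = common.login(DB, USER, PASSWORD)   # returns a numeric user id
    models = xmlrpc.client.ServerProxy(object_url)
    ids = models.execute(DB, uid, PASSWORD, "res.partner", "search", [])
    return models.execute(DB, uid, PASSWORD, "res.partner", "read", ids, ["name"])
```

Any OpenERP model (products, sales orders, employees) can be queried the same way by substituting the model name and method.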
Vendor support. The last three LTS versions are supported in parallel. This means that
when a new LTS version is released, the oldest supported version reaches its end-of-life and is
no longer supported. As an example, 8.0 LTS will be supported along with 9.0 LTS and 10.0
LTS, but will reach end-of-life when 11.0 LTS is released.
In 2005, Fabien Pinckaers, the founder and current CEO of OpenERP, started to develop his first software product, TinyERP. His dream was for his product and company to become a major player in the enterprise world with a cool, innovative, open-source product. However, three years later he came to realize that having the word "tiny" in the product name was not the right approach if he wanted to change the enterprise world. The name was then changed to OpenERP. The company started to evolve quickly, and by 2010 OpenERP had become a 100+ employee company. The OpenERP product was powerful, but Fabien Pinckaers felt that the company had become so distracted by providing services to customers that the product had suffered and become unattractive. He wanted to make sure that the product came first in order to be able to offer an exceptional product. Therefore, the decision was made to redirect the company's main focus towards software publishing rather than services, and the business model changed accordingly, with increased focus on building a strong partner network and maintenance offers.

Back End

PostgreSQL


PostgreSQL, often simply called Postgres, is an object-relational database management system (ORDBMS), i.e., an RDBMS with additional (optionally used) "object" features, with an emphasis on extensibility and standards compliance. As a database server, its primary function is to store data securely and to allow retrieval at the request of other software applications. It can handle workloads ranging from small single-machine applications to large Internet-facing applications (or data warehousing) with many concurrent users; on macOS Server, PostgreSQL is the default database, and it is also available for Microsoft Windows and Linux (supplied in most distributions).
PostgreSQL is developed by the PostgreSQL Global Development Group, a diverse
group of many companies and individual contributors. It is free and open-source software,
released under the terms of the PostgreSQL License, a permissive free-software license.
Indexes
PostgreSQL includes built-in support for regular B-tree and hash indexes, and four
index access methods: generalized search trees (GiST), generalized inverted indexes (GIN),
Space-Partitioned GiST (SP-GiST) and Block Range Indexes (BRIN). Hash indexes are
implemented, but discouraged because they cannot be recovered after a crash or power loss.
In addition, user-defined index methods can be created, although this is quite an involved
process. Indexes in PostgreSQL also support the following features:
Expression indexes can be created with an index of the result of an expression or
function, instead of simply the value of a column.
Partial indexes, which only index part of a table, can be created by adding
a WHERE clause to the end of the CREATE INDEX statement. This allows a smaller index to
be created.
The planner is capable of using multiple indexes together to satisfy complex queries,
using temporary in-memory bitmap index operations (useful in data warehousing applications
for joining a large fact table to smaller dimension tables such as those arranged in a star
schema).
k-nearest neighbors (k-NN) indexing (also referred to as KNN-GiST) provides efficient
searching of "closest values" to the one specified, useful for finding similar words, or close
objects or locations in geospatial data. This is achieved without exhaustive matching of values.
In PostgreSQL 9.2 and later, index-only scans often allow the system to fetch data
from indexes without ever having to access the main table.
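The expression-index and partial-index features described above can be sketched in runnable form. PostgreSQL's syntax for these two index forms is demonstrated here against SQLite (whose syntax for them is compatible) so the example is self-contained; the orders table and its columns are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, customer TEXT)")

# Partial index: only rows WHERE status = 'open' are indexed,
# which keeps the index small when most orders are closed.
conn.execute("CREATE INDEX idx_open ON orders(id) WHERE status = 'open'")

# Expression index: indexes the result of lower(customer), so a
# case-insensitive lookup on lower(customer) can use the index.
conn.execute("CREATE INDEX idx_customer_ci ON orders(lower(customer))")

conn.execute("INSERT INTO orders VALUES (1, 'open', 'Acme')")
rows = conn.execute(
    "SELECT id FROM orders WHERE lower(customer) = 'acme'").fetchall()
```

In PostgreSQL the statements would be identical apart from the connection code; the planner decides at query time whether the partial or expression index applies.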

Schemas
In PostgreSQL, a schema holds all objects (with the exception of roles and table
spaces). Schemas effectively act like namespaces, allowing objects of the same name to co-
exist in the same database. By default, newly created databases have a schema called "public",
but any additional schemas can be added, and the public schema isn't mandatory.
A "search path" setting determines the order in which PostgreSQL checks schemas for
unqualified objects (those without a prefixed schema). By default, it is set to "$user, public"
($user refers to the currently connected database user). This default can be set on a database or
role level, but as it is a session parameter, it can be freely changed (even multiple times)
during a client session, affecting that session only.
Non-existent schemas listed in the search path are silently skipped during object lookup.
New objects are created in whichever valid schema (one that presently exists) appears first in
the search path.
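The lookup order described above can be modelled with a short Python sketch. The schema and table names are made up for illustration; this is a toy model of the behaviour, not PostgreSQL's implementation.

```python
def resolve(schemas, search_path, name):
    """Return (schema, object) for an unqualified name, or None."""
    for schema in search_path:
        if schema not in schemas:      # non-existent schemas are silently skipped
            continue
        if name in schemas[schema]:
            return schema, name
    return None

def create(schemas, search_path, name):
    """Create an object in the first schema on the path that exists."""
    for schema in search_path:
        if schema in schemas:
            schemas[schema].add(name)
            return schema
    raise LookupError("no valid schema in search_path")

# Two schemas exist; "missing" is listed on the path but does not exist.
schemas = {"public": {"invoices"}, "hr": {"employees"}}
path = ["missing", "hr", "public"]
```

Resolving "employees" finds it in "hr" before "public" is consulted, and a newly created object lands in "hr", the first schema on the path that exists.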
Data types
A wide variety of native data types are supported, including:
Boolean
Arbitrary precision numerics
Character (text, varchar, char)
Binary
Date/time (timestamp/time with/without timezone, date, interval)
Money
Enum
Bit strings
Text search type
Composite
HStore (an extension enabled key-value store within PostgreSQL)
Arrays (variable length and can be of any data type, including text and composite
types) up to 1 GB in total storage size
Geometric primitives
IPv4 and IPv6 addresses
CIDR blocks and MAC addresses
XML supporting XPath queries
UUID

In addition, users can create their own data types which can usually be made fully
indexable via PostgreSQL's indexing infrastructures GiST, GIN, SP-GiST. Examples of
these include the geographic information system (GIS) data types from the PostGIS project
for PostgreSQL.
There is also a data type called a "domain", which is the same as any other data type
but with optional constraints defined by the creator of that domain. This means any data
entered into a column using the domain will have to conform to whichever constraints were
defined as part of the domain.
Starting with PostgreSQL 9.2, data types that represent a range of data, called range
types, can be used. These can be discrete ranges (e.g. all integer values 1 to 10) or
continuous ranges (e.g. any point in time between 10:00 am and 11:00 am). The built-in range
types available include ranges of integers, big integers, decimal numbers, time stamps (with
and without time zone) and dates.
Custom range types can be created to make new types of ranges available, such as IP
address ranges using the inet type as a base, or float ranges using the float data type as a base.
Range types support inclusive and exclusive range boundaries using the [] and () characters respectively (e.g., '[4,9)' represents all integers starting from and including 4 up to but not including 9). Range types are also compatible with existing operators used to check for overlap, containment, right-of, etc.
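The inclusive/exclusive boundary semantics can be illustrated with a minimal Python analogue of a range type. This is a toy model of the '[4,9)' notation, not PostgreSQL's implementation.

```python
class Range:
    """A range with '[' / ']' inclusive and '(' / ')' exclusive bounds,
    mirroring PostgreSQL's range-type notation, e.g. '[4,9)'."""

    def __init__(self, lower, upper, bounds="[)"):
        self.lower, self.upper = lower, upper
        self.lower_inc = bounds[0] == "["   # '[' includes the lower bound
        self.upper_inc = bounds[1] == "]"   # ']' includes the upper bound

    def __contains__(self, value):
        above = value >= self.lower if self.lower_inc else value > self.lower
        below = value <= self.upper if self.upper_inc else value < self.upper
        return above and below
```

With this model, Range(4, 9, "[)") contains 4 through 8 but not 9, exactly as '[4,9)' does for integers.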
User-defined objects
New types of almost all objects inside the database can be created, including:
Casts
Conversions
Data types
Domains
Functions, including aggregate functions and window functions
Indexes including custom indexes for custom types
Operators (existing ones can be overloaded)
Procedural languages
Inheritance
Tables can be set to inherit their characteristics from a "parent" table. Data in child
tables will appear to exist in the parent tables, unless data is selected from the parent table
using the ONLY keyword, i.e. SELECT * FROM ONLY parent_table;. Adding a column in
the parent table will cause that column to appear in the child table.
Inheritance can be used to implement table partitioning, using either triggers or rules
to direct inserts to the parent table into the proper child tables.
As of 2010, this feature was not yet fully supported; in particular, not all table constraints
are inheritable. Check constraints and not-null constraints on a parent table are
automatically inherited by its children, while other types of constraints (unique, primary key,
and foreign key constraints) are not inherited.
Inheritance provides a way to map the features of generalization hierarchies depicted
in entity relationship diagrams (ERDs) directly into the PostgreSQL database.
Other storage features
Referential integrity constraints including foreign key constraints, column constraints,
and row checks
Binary and textual large-object storage
Table spaces
Per-column collation
Online backup
Point-in-time recovery, implemented using write-ahead logging
In-place upgrades with pg_upgrade for less downtime (supports upgrades from 8.3.x
and later)
Control and connectivity
Foreign data wrappers
PostgreSQL can link to other systems to retrieve data via foreign data wrappers
(FDWs). These can take the form of any data source, such as a file system, another RDBMS,
or a web service. This means that regular database queries can use these data sources like
regular tables, and even join multiple data-sources together.
Interfaces
PostgreSQL has several interfaces available and is also widely supported among
programming language libraries. Built-in interfaces include libpq (PostgreSQL's official C
application interface) and ECPG (an embedded C system). External interfaces include:
libpqxx: C++ interface
PostgresDAC: PostgresDAC (for Embarcadero RadStudio/Delphi/CBuilder XE-XE3)
DBD::Pg: Perl DBI driver
JDBC: JDBC interface
Lua: Lua interface
Npgsql: .NET data provider
ST-Links SpatialKit: Link Tool to ArcGIS

Procedural languages
Procedural languages allow developers to extend the database with
custom subroutines (functions), often called stored procedures. These functions can be used to
build triggers (functions invoked upon modification of certain data) and custom aggregate
functions. Procedural languages can also be invoked without defining a function, using the
"DO" command at SQL level.
Languages are divided into two groups: "Safe" languages are sandboxed and can be
safely used by any user. Procedures written in "unsafe" languages can only be created
by superusers, because they allow bypassing the database's security restrictions, but can also
access sources external to the database. Some languages like Perl provide both safe and unsafe
versions.
PostgreSQL has built-in support for three procedural languages:
Plain SQL (safe). Simpler SQL functions can get expanded inline into the calling
(SQL) query, which saves function call overhead and allows the query optimizer to "see
inside" the function.
PL/pgSQL (safe), which resembles Oracle's PL/SQL procedural language
and SQL/PSM.
C (unsafe), which allows loading custom shared libraries into the database. Functions
written in C offer the best performance, but bugs in code can crash and potentially corrupt the
database. Most built-in functions are written in C.
In addition, PostgreSQL allows procedural languages to be loaded into the database
through extensions. Three language extensions are included with PostgreSQL to
support Perl, Python and Tcl. There are external projects to add support for many other
languages, including Java, JavaScript (PL/V8), R, Ruby, and others.
Triggers
Triggers are events that fire upon the action of SQL DML statements. For example,
an INSERT statement might activate a trigger that checks whether the values of the statement
are valid. Most triggers are only activated by either INSERT or UPDATE statements.
Triggers are fully supported and can be attached to tables. Triggers can be per-column and
conditional, in that UPDATE triggers can target specific columns of a table, and triggers can
be told to execute under a set of conditions as specified in the trigger's WHERE clause.
Triggers can be attached to views by using the INSTEAD OF condition. Multiple triggers are
fired in alphabetical order. In addition to calling functions written in the native PL/pgSQL,
triggers can also invoke functions written in other languages like PL/Python or PL/Perl.
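The validation example above (an INSERT trigger checking values) can be sketched in runnable form. SQLite's inline trigger syntax is used here so the demo is self-contained; a PostgreSQL trigger would instead attach a PL/pgSQL function via CREATE TRIGGER ... EXECUTE PROCEDURE, and the orders table below is hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (qty INTEGER)")

# Reject any insert whose quantity is not positive.
conn.execute("""
    CREATE TRIGGER qty_must_be_positive
    BEFORE INSERT ON orders
    WHEN NEW.qty <= 0
    BEGIN
        SELECT RAISE(ABORT, 'qty must be positive');
    END
""")

conn.execute("INSERT INTO orders VALUES (5)")        # passes the check
try:
    conn.execute("INSERT INTO orders VALUES (0)")    # trigger aborts this
    rejected = False
except sqlite3.DatabaseError:
    rejected = True
```

The invalid row never reaches the table: the trigger fires before the insert and aborts the statement.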
Asynchronous notifications
PostgreSQL provides an asynchronous messaging system that is accessed through the
NOTIFY, LISTEN and UNLISTEN commands. A session can issue a NOTIFY command,
along with the user-specified channel and an optional payload, to mark a particular event
occurring. Other sessions are able to detect these events by issuing a LISTEN command,
which can listen to a particular channel. This functionality can be used for a wide variety of
purposes, such as letting other sessions know when a table has updated or for separate
applications to detect when a particular action has been performed. Such a system avoids the
need for continuous polling by applications to see if anything has changed yet, reducing
unnecessary overhead. Notifications are fully transactional, in that messages are not sent until
the transaction they were sent from is committed. This eliminates the problem of messages
being sent for an action being performed which is then rolled back.
Many of the connectors for PostgreSQL provide support for this notification system
(including libpq, JDBC, Npgsql, psycopg and node.js) so it can be used by external
applications.
Rules
Rules allow the "query tree" of an incoming query to be rewritten. Rules, or more
properly, "Query Re-Write Rules", are attached to a table/class and "Re-Write" the incoming
DML (select, insert, update, and/or delete) into one or more queries that either replace the
original DML statement or execute in addition to it. Query Re-Write occurs after DML
statement parsing, but before query planning.
Other querying features
Transactions
Full text search
Views
Materialized views
Updateable views
Recursive views
Inner, outer (full, left and right), and cross joins
Sub-selects
Correlated sub-queries
Regular expressions
Common table expressions and writable common table expressions
Encrypted connections via TLS (current versions do not use vulnerable SSL, even with
that configuration option)
Domains
Savepoints
Two-phase commit
TOAST (The Oversized-Attribute Storage Technique) is used to transparently store
large table attributes (such as big MIME attachments or XML messages) in a separate area,
with automatic compression.
Embedded SQL is implemented using a preprocessor. SQL code is first written
embedded into C code. The code is then run through the ECPG preprocessor, which replaces
the SQL with calls to a code library. The code can then be compiled using a C compiler.
Embedding also works with C++, but the preprocessor does not recognize all C++ constructs.
Security
PostgreSQL manages its internal security on a per-role basis. A role is generally
regarded to be a user (a role that can log in), or a group (a role of which other roles are
members). Permissions can be granted or revoked on any object down to the column level,
and can also allow/prevent the creation of new objects at the database, schema or table levels.
PostgreSQL's SECURITY LABEL feature (an extension to the SQL standard) allows for additional
security; a bundled loadable module supports label-based mandatory access
control (MAC) based on SELinux security policy.
PostgreSQL natively supports a broad number of external authentication mechanisms,
including:
password (either MD5 or plain-text)
GSSAPI
SSPI
Kerberos
ident (maps O/S user-name as provided by an ident server to database user-name)
peer (maps local user name to database user name)
LDAP
Active Directory
RADIUS
certificate
PAM
2. SYSTEM STUDY

2.1 EXISTING SYSTEM

In the present system, all the work is handled manually and has to be noted down in registers, which also requires taking care of that documentation. Meetings are arranged by phone call, and if any update occurs, the client has to be called again to update the meeting schedule, wasting time as well as money.

2.1.1 DRAWBACKS

The existing system is manual.


The manual system is more error prone.
Immediate response to the queries is difficult and time consuming.

2.2 PROPOSED SYSTEM

The proposed system focuses on mobile Customer Relationship Management in the marketing
industry and deals with the possibilities and aspects of supporting CRM via mobile
services. The key profiles of mobile communication are Interactive Broadband Protocols,
Location Based Services and Individualized/Personalized Services, mainly based on
multimedia information. These profiles are embedded in a three-layer communication model.
The system not only provides order processing but can also track the movements of
salespersons to customers using a geo-tracking system, so that no salesperson can deceive the
manager by not visiting customers. It also provides an order-processing module for the
distributor/salesperson. In this manner, the new mobile solutions can represent a specific kind
of collaborative application.

2.2.1 FEATURES

To provide a computerised data storage facility.
Any record can be searched easily.
The new system requires less time for the completion of any work.
All stock is updated automatically in the new system.
3. LOGICAL DESIGN

3.1 FILE DESIGN

The files are been protected using the username and password as key and by using the
transposition cipher technique where the content of the file will be changed into cipher data
which cannot be understood even by the administrator. By doing this the data is completely
secured. Only after decrypting the file the data can be understood by the administrator by
using the same key and password.
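As an illustration of the transposition idea described above, the following Python sketch performs a simple columnar transposition keyed by the password. This is a simplified stand-in for the project's actual routine, not its real code, and it assumes the plaintext does not end with '_' since that character is used as padding. (A production system should use a vetted cipher such as AES instead.)

```python
def key_order(password):
    """Column read-out order implied by sorting the password's characters."""
    return sorted(range(len(password)), key=lambda i: (password[i], i))

def encrypt(text, password):
    cols = len(password)
    # Pad with '_' so the text fills a whole number of rows.
    text += "_" * (-len(text) % cols)
    rows = [text[i:i + cols] for i in range(0, len(text), cols)]
    # Read each column top-to-bottom, columns taken in key order.
    return "".join(row[c] for c in key_order(password) for row in rows)

def decrypt(cipher, password):
    cols = len(password)
    rows_n = len(cipher) // cols
    grid = [[None] * cols for _ in range(rows_n)]
    pos = 0
    # Refill the columns in the same key order, then read row by row.
    for c in key_order(password):
        for r in range(rows_n):
            grid[r][c] = cipher[pos]
            pos += 1
    return "".join("".join(row) for row in grid).rstrip("_")
```

Encrypting and then decrypting with the same password recovers the original text, while the ciphertext alone reveals only a scrambled character order.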

3.2 INPUT DESIGN

Various forms of project collaboration exist, e.g., business-to-consumer (b2c) between companies and clients, business-to-business (b2b) among companies organized, for example, as virtual enterprises, and vertical hierarchic b2b between a prime contractor and its legally independent subcontractors.

The pure act of commercial transaction in typical b2c markets is short. Communication is usually reduced to filling out online forms. In the context of projects,
where the product is particularly designed for the client, the need for exchange of information
is considerably higher. Possible fields of applications in this area include all forms of digital
products. The clients can be interactively integrated in the product development cycle via the
Internet, due to its foremost characteristics like availability, low costs, speed, audio and visual
capabilities and ease of use. The agency will publish its work to their clients, whereas the
clients just contribute to the process by giving comments and proposals, see Fig. 1. Agencies
state that they do not want the clients to change drafts by themselves, but rather want the
clients to submit change proposals.

Figure 1: Customer Integration (the agency sends a first draft/proposal to the customer, the customer returns comments, and the agency delivers a second, final draft)

This form of interaction between agency and customer enhances and simultaneously documents the creation process of the desired product. Incoming drafts and
comments as well as the complete history and progress of the development process are being
documented. Thus, we not only have collaborative HR by using a communication center and
communication bridges between company and customer. We additionally have integrated
the customer in the production process, which further enhances analytical HR providing
crucial customer data.

The model of collaboration among companies differs from the client-integrated case in
such a way that the participants are meant to act with equal rights. Therefore, all actors are
equipped with equivalent access rights and contribute to the project by developing parts of the
project, see Fig. 2. They do not necessarily interact in all segments of the project. As in the
previously mentioned b2c case, project participants have the latest information and the history
of the ongoing project at their fingertips, at any time and at any place, as long as there is an
Internet connection available.
At first sight, a b2b collaboration setting does not require any customer-relationship
management, because there are no customers involved. However, if both the virtual
enterprise and its customers use the same Internet platform, individual companies can access
HR data both about the customers and about other participating companies.

Hierarchical b2b is characterized by precisely defined contracts with definite structures of responsibilities and dependencies. Tasks are passed on top-down. The flow of
execution reports from the bottom up. This arrangement usually occurs in the context of
proposal requests, e.g., for the planning of construction projects. In the example in Fig. 3, a
prime contractor initiates an invitation to bid for several possible subcontracts. Potential
subcontractors have the possibility to submit their proposal or pass a complete or part of the
requested workload to further subcontractors. The prime contractor does not have insight into
the complete process. After receiving all proposals from potential direct subcontractors, the
prime contractor accepts favored proposals and the contracts may be signed. It is important to
ensure that all contractors are just exposed to the part of the process and the project structure

that they are responsible for. This is achieved by security mechanisms and the distribution of
different access rights. Due to reasons of acceptance it is appropriate to store project
information at independent service providers or a superordinate association which may
provide the needed infrastructure.

Figure 3: Collaboration with Subcontractors (a prime contractor issues a request for proposals; potential subcontractors SC1 and SC2 submit proposals, and may pass parts of the workload on to further subcontractors such as SC21 and SC22)

In this scenario, we have a net of company-client relationships, where none of the participants may have full information about the entire setting. Communication to and information about the client is restricted to the prime contractor. Any subcontractors are only involved in communication with their own subcontractors, of which they are the clients, and with their prime contractor, which is their client. HR is complex in this scenario, with nobody having a full view of the complete project structure and of the complete HR information that is available. In this case HR principles can be applied at any stage, as any company, except the ones residing at the bottom, can act as a client for a subordinated company in the hierarchy.

3.4 DATABASE DESIGN


All operations relating to the database are carried out in this module, including:
Create company
Edit/delete company
Back up
Restore

This module also stores the initial details needed, including:
Account group
Account ledger
Shelf
Generic name
Product group
Product batch
Unit
Product
Manufacturer
Vendor
Salesman
Daily customer
Greeting text

3.5 SYSTEM DEVELOPMENT

3.5.1 DESCRIPTION OF MODULES

The major modules of the proposed system are

Login module

Admin / Staff module

Customer module

Reports module

Login Module

It is used for logging in to the system and for verifying the user. Once the user is authenticated, they can access the system. For viewing and modifying the details stored in the proposed system, the user must have privileges. Once the user logs in, all privileges such as view, add, delete and modify operations will be granted. Only the admin has the authority to do this; the admin may be a staff member or the root user who is handling the system.
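The login and privilege checks described above can be sketched as follows. The user records, password hashes and privilege names are illustrative placeholders, not the system's actual data.

```python
import hashlib

USERS = {
    # username: (sha256 hash of password, role) -- placeholder records
    "admin": (hashlib.sha256(b"secret").hexdigest(), "admin"),
    "staff1": (hashlib.sha256(b"pass123").hexdigest(), "staff"),
}

ADMIN_PRIVILEGES = {"view", "add", "delete", "modify"}

def login(username, password):
    """Return the user's role if the credentials match, else None."""
    record = USERS.get(username)
    if record is None:
        return None
    digest, role = record
    if hashlib.sha256(password.encode()).hexdigest() != digest:
        return None
    return role

def privileges(role):
    """Admins get every operation; other authenticated users may only view."""
    return ADMIN_PRIVILEGES if role == "admin" else {"view"}
```

After a successful login, the returned role decides which operations the interface exposes, matching the description above where only the admin is granted add, delete and modify.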

Admin / Staff Module

The first module is the admin module, which has the right to create space for a new batch. Any entry of a new faculty member or update to a subject is done by the admin. Sending notices is also possible. The admin can access the entire system. Students can only view the student reports and the attendance report. The second module is handled by the user, who can be a faculty member or an operator. Staff have the right to take daily attendance, which is done in the attendance module. User verification is done using the login module.

Customer Module

This module describes customer details such as the customer name, e-mail id and phone number. The administrator interacts with customers through this module. Customers can easily raise an issue and easily get a solution.

After the system was implemented and conversion was completed, a review with the personnel was positive. They are satisfied with this software facility: it needs less manpower, provides information in a timely manner, and saves data entry and duplication work. It also provides a lock system and password protection, so it is reliable.

4. TESTING AND IMPLEMENTATION

TESTING

Software testing is the evaluation of the software against requirements gathered from users and system specifications. Testing is conducted at the phase level in the software development life cycle or at the module level in program code. Software testing comprises validation and verification.

Software Validation

Validation is the process of examining whether or not the software satisfies the user
requirements. It is carried out at the end of the SDLC. If the software matches the
requirements for which it was made, it is validated.

Validation ensures the product under development meets the user requirements.

Validation answers the question "Are we developing the product which attempts all
that the user needs from this software?".

Validation emphasizes user requirements.

Software Verification

Verification is the process of confirming that the software meets the business requirements and is developed by adhering to the proper specifications and methodologies.

Verification ensures that the product being developed conforms to the design specifications.

Verification answers the question: "Are we developing this product by firmly following all design specifications?"

Verification concentrates on the design and system specifications.

The targets of testing are:

Errors - These are actual coding mistakes made by developers. A difference between the software's output and the desired output is also considered an error.

Fault - A fault, also known as a bug, is the result of an error and can cause the system to fail.

Failure - A failure is the inability of the system to perform a desired task. A failure occurs when a fault exists in the system.
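The error/fault/failure distinction can be illustrated with a small, hypothetical example (the `is_passing` function and its pass mark are made up for illustration, not part of the project):

```python
# A deliberately faulty implementation. The developer's error (typing > instead
# of >=) introduces a fault; the fault surfaces as a failure only for the
# boundary input 50.
def is_passing(mark, pass_mark=50):
    return mark > pass_mark  # fault: should be `mark >= pass_mark`

print(is_passing(80))  # True - no failure for this input
print(is_passing(50))  # False - the fault causes a failure here (50 should pass)
```

For most inputs the fault stays hidden; only a test at the boundary exposes it as a failure.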

Manual vs. Automated Testing

Testing can either be done manually or using an automated testing tool:

Manual - This testing is performed without the help of automated testing tools. The software tester prepares test cases for different sections and levels of the code, executes the tests and reports the results to the manager.

Manual testing is time- and resource-consuming, and the tester needs to confirm that the right test cases are used. Even so, a major portion of testing involves manual testing.

Automated - This testing is carried out with the aid of automated testing tools, which overcome the limitations of manual testing.

For example, a test may need to check that a web page opens in Internet Explorer; this is easily done manually. But checking whether the web server can take the load of one million users is practically impossible to test manually.

There are software and hardware tools that help the tester conduct load testing, stress testing and regression testing.

Testing Approaches

Tests can be conducted based on two approaches

Functionality testing

Implementation testing

When functionality is tested without taking the actual implementation into account, it is known as black-box testing. The other approach is white-box testing, where not only the functionality is tested but the way it is implemented is also analyzed. Exhaustive testing is the ideal method for perfect testing: every possible value in the range of the input and output values is tested. In real-world scenarios, however, it is not possible to test every value when the range is large.

Black-box testing

It is carried out to test the functionality of the program, and is also called behavioral testing. The tester in this case has a set of input values and the corresponding desired results. On providing the input, if the output matches the desired results, the program is considered correct; otherwise it is problematic.

In this testing method, the design and structure of the code are not known to the tester,
and testing engineers and end users conduct this test on the software.

Black-box testing techniques:

Equivalence class - The input is divided into similar classes. If one element of a class passes the test, it is assumed that the whole class passes.

Boundary values - The input is divided into higher- and lower-end values. If these values pass the test, it is assumed that all values in between may pass too.

Cause-effect graphing - In both of the previous methods, only one input value is tested at a time. Cause (input)-effect (output) graphing is a technique in which combinations of input values are tested in a systematic way.

Pair-wise testing - The behavior of software depends on multiple parameters. In pair-wise testing, the multiple parameters are tested pair-wise for their different values.

Pair-wise Testing - The behavior of software depends on multiple parameters. In


pairwise testing, the multiple parameters are tested pair-wise for their different values.

State-based testing - The system changes state on provision of input. These systems
are tested based on their states and input.
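As a rough sketch, the first two techniques can be applied to a hypothetical age-validation rule (the `is_valid_age` function and the 18-60 range are assumptions for illustration, not part of the project):

```python
# Hypothetical rule under test: an age is valid if it lies in 18..60.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence classes: one representative value per class.
assert is_valid_age(10) is False  # class: below the valid range
assert is_valid_age(35) is True   # class: inside the valid range
assert is_valid_age(70) is False  # class: above the valid range

# Boundary values: test the edges where classes meet.
for age, expected in [(17, False), (18, True), (60, True), (61, False)]:
    assert is_valid_age(age) is expected
print("all black-box cases passed")
```

Three class representatives plus four boundary checks replace testing all possible ages, which is the point of both techniques.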

White-box testing

It is conducted to test the program and its implementation, in order to improve code efficiency or structure. It is also known as structural testing.

In this testing method, the design and structure of the code are known to the tester, and the programmers of the code conduct this test on the code.

Below are some white-box testing techniques:

Control-flow testing - The purpose of control-flow testing is to set up test cases that cover all statements and branch conditions. The branch conditions are tested for both true and false outcomes, so that all statements can be covered.

Data-flow testing - This technique emphasizes covering all the data variables included in the program. It tests where the variables were declared and defined, and where they were used or changed.
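A minimal sketch of control-flow (branch) coverage, using a made-up `shipping_fee` function as the code under test:

```python
# Hypothetical function with a single branch condition.
def shipping_fee(order_total, is_member):
    if is_member:       # branch condition under test
        return 0        # branch taken: members ship free
    return 50           # branch not taken: flat fee

# Control-flow testing: the condition is exercised for both outcomes,
# so every statement in the function is executed at least once.
assert shipping_fee(500, True) == 0
assert shipping_fee(500, False) == 50
```

A single test would leave one of the two return statements uncovered; the pair of cases covers both branches.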

TEST CASE

SCREEN NAME: LOGIN

TEST ID | TEST DESCRIPTION                                                      | TEST ACTION                    | EXPECTED RESULT   | ACTUAL RESULT                       | STATUS
TC_01   | Check that username and password match the database                   | Username: abc, Password: 123   | Should log in     | Logged in                           | Pass
TC_02   | Check that username and password do not match the database            | Username: xxxx, Password: yyyy | Should not log in | "Username or password is incorrect" | Pass
TC_03   | Check that username matches and password does not match the database  | Username: abc, Password: yyyy  | Should not log in | "Username or password is incorrect" | Pass
TC_04   | Check that username does not match and password matches the database  | Username: xxxx, Password: 123  | Should not log in | "Username or password is incorrect" | Pass
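The four login cases above can be expressed as one parameterized check. This is a sketch only: the `authenticate` helper and its credential store are assumptions standing in for the project's actual database lookup.

```python
# One stored credential pair, matching the data used in the table (abc / 123).
STORED_CREDENTIALS = {"abc": "123"}

def authenticate(username, password):
    # True only when the username exists and the password matches.
    return STORED_CREDENTIALS.get(username) == password

# TC_01..TC_04 as a parameterized check.
cases = [
    ("abc", "123", True),     # TC_01: both match
    ("xxxx", "yyyy", False),  # TC_02: neither matches
    ("abc", "yyyy", False),   # TC_03: password does not match
    ("xxxx", "123", False),   # TC_04: username does not match
]
for username, password, expected in cases:
    assert authenticate(username, password) is expected
```

Driving the four rows through one loop keeps the test data and the expected results in a single place, exactly as in the table.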

SCREEN NAME:REGISTRATION

TEST ID | TEST DESCRIPTION                               | TEST ACTION          | EXPECTED RESULT             | ACTUAL RESULT           | STATUS
TC_01   | Check that the username is of varchar datatype | Username: abc        | Should move to next line    | Moved to next line      | Pass
TC_02   | Check that the password is of integer type     | Password: 12345      | Should move to next line    | Moved to next line      | Pass
TC_03   | Check that the username is unique              | Username: text       | Should move to next line    | Moved to next line      | Pass
TC_04   | Check that the password is unique              | Password: 12345      | Should move to next line    | Moved to next line      | Pass
TC_05   | Check that the Email ID has the @ symbol       | Email: abc@gmail.com | Should accept the Email     | Email ID accepted       | Pass
TC_06   | Check the Email ID without the @ symbol        | Email: Abc1gmail.com | Should not accept the Email | Error shown in Email ID | Pass
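TC_05 and TC_06 could be backed by a simple e-mail check along these lines (the `is_valid_email` helper and its pattern are assumptions for illustration):

```python
import re

# A deliberately simple pattern: something before the @, something after it,
# and a dot in the domain. This is not a full RFC 5322 validator.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(email):
    return bool(EMAIL_RE.match(email))

assert is_valid_email("abc@gmail.com") is True    # TC_05: contains @
assert is_valid_email("Abc1gmail.com") is False   # TC_06: @ missing
```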

5.CONCLUSION

A human resources information system can help both employer and employees do their jobs. It helps the organization run smoothly using technology: an organization can move its management system from a traditional approach to a modern, technology-based approach, and can gain a competitive advantage as it becomes more advanced.

There are some benefits of implementing HRIS:

1. Standardization

An HRIS provides uniformity through templates and predetermined procedures for uploading data and downloading reports. It also means that the data retrieved and viewed are in a format that is easily identifiable and user-friendly.

2. Knowledge management

Knowledge management is an important element of successful HRM. The HRIS becomes a store of important information on various aspects of an employee's history within the company.

Lastly, I enjoyed this subject, which helped me understand human resources information systems. I can use this knowledge in the future: for a human resources practitioner, knowledge of information systems is important in order to manage employees.

6.BIBLIOGRAPHY

BOOKS
1. Book - Single Author.
Adler, N.J. (1991). International dimensions of organizational behavior. Boston:
PWS-Kent Publishing Company.

2. Book - Multiple Authors, Second or Subsequent Editions.
Aron, A., & Aron, E.N. (1999). Statistics for psychology. (2nd ed.). New Jersey:
Prentice-Hall International, Inc.
3. Chapter in Edited Book.
Hartmann, L.C. (1998). The impact of trends in labour-force participation in
Australia. In M. Patrickson & L. Hartmann (Eds.), Managing an ageing
workforce (3-25). Warriewood, Australia: Woodslane Pty Limited.
4. Chapter in Edited Book, Several Volumes.
Adams, J.S. (1965). Inequity in social exchange. In L. Berkowitz (Ed.), Advances in
experimental social psychology (Vol. 2, 267-299). New York: Academic Press.
5. Chapter in Edited Book - Two Authors, Second or Subsequent Edition.
Forteza, J.A., & Prieto, J.M. (1994). Aging and work behaviour. In H.C. Triandis,
D. Dunnette, & L.M. Hough (Eds.), Handbook of industrial and organizational
psychology. (2nd ed., Vol. 4, 447-483). Palo Alto, CA: Consulting Psychologists
Press.
6. Edited Book - One or more Authors.
Hewstone, M., & Brown, R. (Eds.). (1986). Contact and conflict in intergroup
encounters. Oxford: Basil Blackwell Ltd.

JOURNALS
1. Journal Article.
Kawakami, K., & Dovidio, J.F. (2001). The reliability of implicit
stereotyping. Personality and Social Psychology Bulletin, 27(2), 212-225.
2. Journal Article - No Volume Number.
Schizas, C.L. (1999). Capitalizing on a generation gap. Management Review,
(June), 62-63.

7.APPENDICES

A.DATA FLOW DIAGRAM

A data flow diagram (DFD) is a graphical tool used to describe and analyze the movement of data through a system. DFDs are the central tool and the basis from which other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; such diagrams are known as logical data flow diagrams. Physical data flow diagrams show the actual implementation and movement of data between people, departments and workstations. A full description of a system actually consists of a set of data flow diagrams, developed using one of two familiar notations: Yourdon, or Gane and Sarson.

Each component in a DFD is labeled with a descriptive name, and each process is further identified with a number used for identification purposes. DFDs are developed in several levels: each process in a lower-level diagram can be broken down into a more detailed DFD at the next level. The top-level diagram, often called the context diagram, consists of a single process, which plays a vital role in studying the current system. The process in the context-level diagram is exploded into other processes in the first-level DFD.

The idea behind exploding a process into more processes is that the understanding at one level of detail is expanded into greater detail at the next level. This is repeated until no further explosion is necessary and an adequate amount of detail is described for the analyst to understand the process. Larry Constantine first developed the DFD as a way of expressing system requirements in graphical form, which led to modular design. A DFD, also known as a bubble chart, has the purpose of clarifying system requirements and identifying the major transformations that will become programs in system design; it is therefore the starting point of design, down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.

DFD symbols

In the DFD there are four symbols:

A square defines a source (originator) or destination of system data.

An arrow identifies a data flow; it is the pipeline through which information flows.

A circle or a bubble represents a process that transforms incoming data flows into outgoing data flows.

An open rectangle is a data store: data at rest, or a temporary repository of data.


Constructing a DFD

Several rules of thumb are used in drawing DFDs:

Processes should be named and numbered for easy reference, and each name should be representative of the process.

The direction of flow is from top to bottom and from left to right. Data traditionally flow from the source to the destination, although they may flow back to the source. One way to indicate this is to draw a long flow line back to the source; an alternative is to repeat the source symbol as a destination, in which case, since it is used more than once in the DFD, it is marked with a short diagonal.

When a process is exploded into lower-level details, the sub-processes are numbered.

The names of data stores and destinations are written in capital letters; process and data-flow names have the first letter of each word capitalized.

A DFD typically shows the minimum contents of a data store: each data store should contain all the data elements that flow in and out. Missing interfaces, redundancies and the like are then accounted for, often through interviews.

Salient features of DFDs

The DFD shows the flow of data, not of control: loops and decisions are control considerations and do not appear on a DFD.

The DFD does not indicate the time factor involved in any process - whether the data flow takes place daily, weekly, monthly or yearly.

The sequence of events is not brought out on the DFD.

Current physical

In the current physical DFD, process labels include the names of people or their positions, or the names of computer systems that might provide some of the overall system processing; the labels identify the technology used to process the data. Similarly, data flows and data stores are often labeled with the names of the actual physical media on which data are stored, such as file folders, computer files, business forms or computer tapes.

New logical

This is exactly like the current logical model if the user were completely happy with the functionality of the current system but had problems with how it was implemented. Typically, though, the new logical model will differ from the current logical model in having additional functions, obsolete functions removed and inefficient flows reorganized.

New physical

The new physical represents only the physical implementation of the new system.

Current logical

The physical aspects of the system are removed as much as possible, so that the current system is reduced to its essence: the data and the processes that transform them, regardless of actual physical form.

Process

No process can have only outputs, and no process can have only inputs; if an object has only inputs, then it must be a sink. A process has a verb-phrase label.

Data store

Data cannot move directly from one data store to another; a process must move the data. Data cannot move directly from an outside source to a data store; a process must receive the data from the source and place it into the data store. A data store has a noun-phrase label. A source is the origin or destination of data. Data cannot move directly from a source to a sink; it must be moved by a process. A source or sink has a noun-phrase label.

LEVEL 0

[Context diagram] A single process, "HR management", sits between the CUSTOMER and CUSTOMER SERVICE entities. The customer registers a complaint and makes a payment; the system checks the complaint, sets its priority, generates the service, and lets the customer check the status.

LEVEL 1

[Level 1 diagram] The context process is exploded into four processes: the customer raises an issue (1.0), customer service rectifies it (2.0), the status is updated (3.0), and payment is handled (4.0).

B.TABLE DESIGN

Customer Registration

S.NO | FIELD NAME    | DATATYPE | DESCRIPTION
1    | Customer_Id   | Text     | Unique key for each customer
2    | Customer_Name | Text     | Stores the name of the customer
3    | Department    | Text     | The department to which the customer's issue belongs

Customer Login

S.NO | FIELD NAME    | DATATYPE | DESCRIPTION
1    | Customer_ID   | Text     | Unique key for each customer
2    | Customer_Name | Text     | Name of the customer
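The two tables above could be realized with DDL along these lines. This is only a sketch: SQLite is used as a stand-in for the project's actual database, and the column names are adapted from the design above.

```python
import sqlite3

# In-memory database used purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer_registration (
        customer_id   TEXT PRIMARY KEY,  -- unique key for each customer
        customer_name TEXT,              -- name of the customer
        department    TEXT               -- department the issue belongs to
    );
    CREATE TABLE customer_login (
        customer_id   TEXT PRIMARY KEY,  -- unique key for each customer
        customer_name TEXT               -- name of the customer
    );
""")
conn.execute("INSERT INTO customer_registration VALUES (?, ?, ?)",
             ("C001", "abc", "Support"))
row = conn.execute(
    "SELECT customer_name FROM customer_registration WHERE customer_id = ?",
    ("C001",)).fetchone()
print(row[0])  # abc
```

The PRIMARY KEY constraint enforces the "unique key for each customer" requirement from the table design.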

C.SAMPLE CODING

import logging

from openerp import addons
from openerp import tools
from openerp.osv import fields, osv
from openerp.tools.translate import _

_logger = logging.getLogger(__name__)

class hr_employee_category(osv.osv):

    def name_get(self, cr, uid, ids, context=None):
        if not ids:
            return []
        reads = self.read(cr, uid, ids, ['name', 'parent_id'], context=context)
        res = []
        for record in reads:
            name = record['name']
            if record['parent_id']:
                name = record['parent_id'][1] + ' / ' + name
            res.append((record['id'], name))
        return res

    def _name_get_fnc(self, cr, uid, ids, prop, unknow_none, context=None):
        res = self.name_get(cr, uid, ids, context=context)
        return dict(res)

    _name = "hr.employee.category"
    _description = "Employee Category"
    _columns = {
        'name': fields.char("Category", size=64, required=True),
        'complete_name': fields.function(_name_get_fnc, type="char", string='Name'),
        'parent_id': fields.many2one('hr.employee.category', 'Parent Category', select=True),
        'child_ids': fields.one2many('hr.employee.category', 'parent_id', 'Child Categories'),
        'employee_ids': fields.many2many('hr.employee', 'employee_category_rel',
            'category_id', 'emp_id', 'Employees'),
    }

    def _check_recursion(self, cr, uid, ids, context=None):
        level = 100
        while len(ids):
            cr.execute('select distinct parent_id from hr_employee_category where id IN %s',
                       (tuple(ids),))
            ids = filter(None, map(lambda x: x[0], cr.fetchall()))
            if not level:
                return False
            level -= 1
        return True

    _constraints = [
        (_check_recursion, 'Error! You cannot create recursive Categories.', ['parent_id'])
    ]

hr_employee_category()

class hr_job(osv.osv):

    def _no_of_employee(self, cr, uid, ids, name, args, context=None):
        res = {}
        for job in self.browse(cr, uid, ids, context=context):
            nb_employees = len(job.employee_ids or [])
            res[job.id] = {
                'no_of_employee': nb_employees,
                'expected_employees': nb_employees + job.no_of_recruitment,
            }
        return res

    def _get_job_position(self, cr, uid, ids, context=None):
        res = []
        for employee in self.pool.get('hr.employee').browse(cr, uid, ids, context=context):
            if employee.job_id:
                res.append(employee.job_id.id)
        return res

    _name = "hr.job"
    _description = "Job Description"
    _inherit = ['mail.thread']
    _columns = {
        'name': fields.char('Job Name', size=128, required=True, select=True),
        'expected_employees': fields.function(_no_of_employee,
            string='Total Forecasted Employees',
            help='Expected number of employees for this job position after new recruitment.',
            store={
                'hr.job': (lambda self, cr, uid, ids, c=None: ids, ['no_of_recruitment'], 10),
                'hr.employee': (_get_job_position, ['job_id'], 10),
            },
            multi='no_of_employee'),
        'no_of_employee': fields.function(_no_of_employee,
            string="Current Number of Employees",
            help='Number of employees currently occupying this job position.',
            store={
                'hr.employee': (_get_job_position, ['job_id'], 10),
            },
            multi='no_of_employee'),
        'no_of_recruitment': fields.float('Expected in Recruitment',
            help='Number of new employees you expect to recruit.'),
        'employee_ids': fields.one2many('hr.employee', 'job_id', 'Employees',
            groups='base.group_user'),
        'description': fields.text('Job Description'),
        'requirements': fields.text('Requirements'),
        'department_id': fields.many2one('hr.department', 'Department'),
        'company_id': fields.many2one('res.company', 'Company'),
        'state': fields.selection(
            [('open', 'No Recruitment'), ('recruit', 'Recruitement in Progress')],
            'Status', readonly=True, required=True,
            help="By default 'In position', set it to 'In Recruitment' if recruitment process is going on for this job position."),
    }
    _defaults = {
        'company_id': lambda self, cr, uid, c:
            self.pool.get('res.company')._company_default_get(cr, uid, 'hr.job', context=c),
        'state': 'open',
    }

    _sql_constraints = [
        ('name_company_uniq', 'unique(name, company_id)',
         'The name of the job position must be unique per company!'),
    ]

    def on_change_expected_employee(self, cr, uid, ids, no_of_recruitment,
                                    no_of_employee, context=None):
        if context is None:
            context = {}
        return {'value': {'expected_employees': no_of_recruitment + no_of_employee}}

    def job_recruitement(self, cr, uid, ids, *args):
        for job in self.browse(cr, uid, ids):
            no_of_recruitment = job.no_of_recruitment == 0 and 1 or job.no_of_recruitment
            self.write(cr, uid, [job.id],
                       {'state': 'recruit', 'no_of_recruitment': no_of_recruitment})
        return True

    def job_open(self, cr, uid, ids, *args):
        self.write(cr, uid, ids, {'state': 'open', 'no_of_recruitment': 0})
        return True

hr_job()

class hr_employee(osv.osv):
    _name = "hr.employee"
    _description = "Employee"
    _inherits = {'resource.resource': "resource_id"}

    def _get_image(self, cr, uid, ids, name, args, context=None):
        result = dict.fromkeys(ids, False)
        for obj in self.browse(cr, uid, ids, context=context):
            result[obj.id] = tools.image_get_resized_images(obj.image)
        return result

    def _set_image(self, cr, uid, id, name, value, args, context=None):
        return self.write(cr, uid, [id],
                          {'image': tools.image_resize_image_big(value)}, context=context)

    _columns = {
        # we need a related field in order to be able to sort the employee by name
        'name_related': fields.related('resource_id', 'name', type='char', string='Name',
            readonly=True, store=True),
        'country_id': fields.many2one('res.country', 'Nationality'),
        'birthday': fields.date("Date of Birth"),
        'ssnid': fields.char('SSN No', size=32, help='Social Security Number'),
        'sinid': fields.char('SIN No', size=32, help="Social Insurance Number"),
        'identification_id': fields.char('Identification No', size=32),
        'otherid': fields.char('Other Id', size=64),
        'gender': fields.selection([('male', 'Male'), ('female', 'Female')], 'Gender'),
        'marital': fields.selection([('single', 'Single'), ('married', 'Married'),
            ('widower', 'Widower'), ('divorced', 'Divorced')], 'Marital Status'),
        'department_id': fields.many2one('hr.department', 'Department'),
        'address_id': fields.many2one('res.partner', 'Working Address'),
        'address_home_id': fields.many2one('res.partner', 'Home Address'),
        'bank_account_id': fields.many2one('res.partner.bank', 'Bank Account Number',
            domain="[('partner_id','=',address_home_id)]", help="Employee bank salary account"),
        'work_phone': fields.char('Work Phone', size=32, readonly=False),
        'mobile_phone': fields.char('Work Mobile', size=32, readonly=False),
        'work_email': fields.char('Work Email', size=240),
        'work_location': fields.char('Office Location', size=32),
        'notes': fields.text('Notes'),
        'parent_id': fields.many2one('hr.employee', 'Manager'),
        'category_ids': fields.many2many('hr.employee.category', 'employee_category_rel',
            'emp_id', 'category_id', 'Tags'),
        'child_ids': fields.one2many('hr.employee', 'parent_id', 'Subordinates'),
        'resource_id': fields.many2one('resource.resource', 'Resource',
            ondelete='cascade', required=True),
        'coach_id': fields.many2one('hr.employee', 'Coach'),
        'job_id': fields.many2one('hr.job', 'Job'),
        # image: all image fields are base64 encoded and PIL-supported
        'image': fields.binary("Photo",
            help="This field holds the image used as photo for the employee, limited to 1024x1024px."),
        'image_medium': fields.function(_get_image, fnct_inv=_set_image,
            string="Medium-sized photo", type="binary", multi="_get_image",
            store={
                'hr.employee': (lambda self, cr, uid, ids, c={}: ids, ['image'], 10),
            },
            help="Medium-sized photo of the employee. It is automatically "
                 "resized as a 128x128px image, with aspect ratio preserved. "
                 "Use this field in form views or some kanban views."),
        'image_small': fields.function(_get_image, fnct_inv=_set_image,
            string="Small-sized photo", type="binary", multi="_get_image",
            store={
                'hr.employee': (lambda self, cr, uid, ids, c={}: ids, ['image'], 10),
            },
            help="Small-sized photo of the employee. It is automatically "
                 "resized as a 64x64px image, with aspect ratio preserved. "
                 "Use this field anywhere a small image is required."),
        'passport_id': fields.char('Passport No', size=64),
        'color': fields.integer('Color Index'),
        'city': fields.related('address_id', 'city', type='char', string='City'),
        'login': fields.related('user_id', 'login', type='char', string='Login', readonly=1),
        'last_login': fields.related('user_id', 'date', type='datetime',
            string='Latest Connection', readonly=1),
    }

    _order = 'name_related'

    def copy_data(self, cr, uid, ids, default=None, context=None):
        if default is None:
            default = {}
        default.update({'child_ids': False})
        return super(hr_employee, self).copy_data(cr, uid, ids, default, context=context)

    def create(self, cr, uid, data, context=None):
        employee_id = super(hr_employee, self).create(cr, uid, data, context=context)
        try:
            (model, mail_group_id) = self.pool.get('ir.model.data').get_object_reference(
                cr, uid, 'mail', 'group_all_employees')
            employee = self.browse(cr, uid, employee_id, context=context)
            self.pool.get('mail.group').message_post(cr, uid, [mail_group_id],
                body=_('Welcome to %s! Please help him/her take the first steps with OpenERP!') % (employee.name),
                subtype='mail.mt_comment', context=context)
        except:
            pass  # group deleted: do not push a message
        return employee_id

    def unlink(self, cr, uid, ids, context=None):
        resource_ids = []
        for employee in self.browse(cr, uid, ids, context=context):
            resource_ids.append(employee.resource_id.id)
        super(hr_employee, self).unlink(cr, uid, ids, context=context)
        return self.pool.get('resource.resource').unlink(cr, uid, resource_ids,
                                                         context=context)

    def onchange_address_id(self, cr, uid, ids, address, context=None):
        if address:
            address = self.pool.get('res.partner').browse(cr, uid, address, context=context)
            return {'value': {'work_phone': address.phone, 'mobile_phone': address.mobile}}
        return {'value': {}}

    def onchange_company(self, cr, uid, ids, company, context=None):
        address_id = False
        if company:
            company_id = self.pool.get('res.company').browse(cr, uid, company,
                                                             context=context)
            address = self.pool.get('res.partner').address_get(cr, uid,
                [company_id.partner_id.id], ['default'])
            address_id = address and address['default'] or False
        return {'value': {'address_id': address_id}}

    def onchange_department_id(self, cr, uid, ids, department_id, context=None):
        value = {'parent_id': False}
        if department_id:
            department = self.pool.get('hr.department').browse(cr, uid, department_id)
            value['parent_id'] = department.manager_id.id
        return {'value': value}

    def onchange_user(self, cr, uid, ids, user_id, context=None):
        work_email = False
        if user_id:
            work_email = self.pool.get('res.users').browse(cr, uid, user_id,
                                                           context=context).email
        return {'value': {'work_email': work_email}}

    def _get_default_image(self, cr, uid, context=None):
        image_path = addons.get_module_resource('hr', 'static/src/img', 'default_image.png')
        return tools.image_resize_image_big(open(image_path, 'rb').read().encode('base64'))

    _defaults = {
        'active': 1,
        'image': _get_default_image,
        'color': 0,
    }

    def _check_recursion(self, cr, uid, ids, context=None):
        level = 100
        while len(ids):
            cr.execute('SELECT DISTINCT parent_id FROM hr_employee WHERE id IN %s AND parent_id!=id',
                       (tuple(ids),))
            ids = filter(None, map(lambda x: x[0], cr.fetchall()))
            if not level:
                return False
            level -= 1
        return True

    _constraints = [
        (_check_recursion, 'Error! You cannot create recursive hierarchy of Employee(s).',
         ['parent_id']),
    ]

hr_employee()

class hr_department(osv.osv):
    _description = "Department"
    _inherit = 'hr.department'
    _columns = {
        'manager_id': fields.many2one('hr.employee', 'Manager'),
        'member_ids': fields.one2many('hr.employee', 'department_id', 'Members',
            readonly=True),
    }

    def copy_data(self, cr, uid, ids, default=None, context=None):
        if default is None:
            default = {}
        default['member_ids'] = []
        return super(hr_department, self).copy_data(cr, uid, ids, default, context=context)

class res_users(osv.osv):
    _name = 'res.users'
    _inherit = 'res.users'

    def copy_data(self, cr, uid, ids, default=None, context=None):
        if default is None:
            default = {}
        default.update({'employee_ids': False})
        return super(res_users, self).copy_data(cr, uid, ids, default, context=context)

    def create(self, cr, uid, data, context=None):
        user_id = super(res_users, self).create(cr, uid, data, context=context)

        # add shortcut unless 'noshortcut' is True in context
        if not (context and context.get('noshortcut', False)):
            data_obj = self.pool.get('ir.model.data')
            try:
                data_id = data_obj._get_id(cr, uid, 'hr', 'ir_ui_view_sc_employee')
                view_id = data_obj.browse(cr, uid, data_id, context=context).res_id
                self.pool.get('ir.ui.view_sc').copy(cr, uid, view_id,
                    default={'user_id': user_id}, context=context)
            except:
                # Tolerate a missing shortcut. See product/product.py for similar code.
                _logger.debug('Skipped meetings shortcut for user "%s".',
                              data.get('name', '<new'))

        return user_id

    _columns = {
        'employee_ids': fields.one2many('hr.employee', 'user_id', 'Related employees'),
    }

res_users()

# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:

D.SAMPLE INPUT

E.SAMPLE OUTPUT

