NIRMAL KRUSHI - MCA
1.
INTRODUCTION
1.1
PROJECT OVERVIEW
The project entitled "Nirmal Krushi" is enhanced with features that reflect a real-time environment. It includes all the information needed for marketing food grain products, containing detailed information about the food grain market along with daily product rates, which are updated as market values rise or fall. This website improves the way food products are marketed. It helps customers as well as farmers and avoids broker control. By registering on this site, a farmer can get the daily market value of the respective food grain product. It builds a strong relationship among customers, retailers, farmers and APMC[1] employees.
This project allows farmers to upload the details of their products, view the customers and retailers, and view the requests made by the retailers and the customers.
The retailers are provided
with an easy interface between the farmers and the customers.
Many farmers practice organic agriculture but do not get proper returns or a market for their products; this project is very useful for such farmers. It helps the customer place orders for naturally grown food grains directly with the farmers, without involving a commission agent or middleman, and targets buyers who want to buy naturally grown food grains while eliminating middlemen and commission agents.
1.2
OBJECTIVE AND SCOPE OF THE PROJECT
According to Bill Gates, everyone must take the help of Information Technology to increase and promote their business. Agriculture is a sector where Information Technology is not yet sufficiently utilized, so the objective of "Nirmal Krushi" is to help our farmers make use of Information Technology.
This website can be used by any APMC office employee, retailer or farmer. It stores and presents information related to the agriculture division, and it can be used across the world.
2.
SYSTEM ANALYSIS
2.1
PROBLEM STATEMENT
Requirement analysis involves obtaining a clear and thorough understanding of the product to be developed, with a view to removing all ambiguities and inconsistencies from the initial customer perception of the problem. Requirement analysis enables the system engineer to specify software function and performance, indicate the software's interface with other system elements, and establish design constraints that the software must meet.
Requirement analysis allows the analyst to refine the software allocation and build models of the process, data and behavioral domains that will be treated by the software. It provides the software engineer with a representation of information and function that can be translated into data, architectural and procedural design.
Problem definition states the problem and the user requirements that must be fulfilled. It also discusses the goals the user wants to achieve. We are developing "Nirmal Krushi". In the existing system, all operations are carried out manually. The system study covers four important points:
Ø Existing system
Ø Limitations of the existing system
Ø Proposed system
Ø Advantages of the proposed system
2.2
EXISTING SYSTEM AND ITS LIMITATION
There is a great demand for organically grown food grains. Many farmers practice this kind of agriculture but face problems getting returns from their products and finding a market for them. This project is very useful for such farmers: it helps the customer place orders for naturally grown food grains directly with the farmers, without involving a commission agent or middleman, and targets buyers who want to buy naturally grown food grains.
Limitations of the Existing System
The maintenance of various records and the procedures for reporting are done manually in the entire department. This leads to many drawbacks, some of which are:
· At present, farmers sell their organically grown food grains directly to a middleman or commission agent.
· Farmers do not get proper returns for their products.
· Farmers do not have a proper market in which to sell their products.
2.3
PROPOSED SYSTEM
In order to overcome the limitations of the existing system, "Nirmal Krushi" is proposed, through which customers can place orders for naturally grown food grains directly with the farmers, without involving a commission agent or middleman. The project targets buyers who want to buy naturally grown food grains and tries to eliminate middlemen and commission agents.
All the farmers in the village who follow organic farming form a group and sell their products online, directly to the customers.
Advantages of the Proposed System
· All farmers sell their products online.
· Farmers get the best possible price because they fix the prices of their products themselves.
· No middlemen or commission agents are involved.
· Farmers get a proper market in which to sell their products.
2.4
FEASIBILITY STUDY
A feasibility study is an important phase in the software development process. It enables the developer to assess the product being developed. It examines the feasibility of the product in terms of its outcomes, its operational use and the technical support required for implementing it.
Feasibility study
should be performed on the basis of various criteria and parameters. The
various feasibility studies are
Ø Technical
Feasibility
Ø Operational
Feasibility
Ø Economic
feasibility
2.4.1
TECHNICAL FEASIBILITY
It
refers to whether the software that is available in the market fully supports
the present application. It studies the pros and cons of using particular
software for the development and its feasibility. It also studies the
additional training needed to be given to the people to make the application
work.
2.4.2
OPERATIONAL FEASIBILITY
It
refers to the feasibility of the product to be operational. Some products may
work very well at design and implementation but may fail in the real time
environment.
It includes the study of additional human
resource required and their Technical expertise.
2.4.3
ECONOMIC FEASIBILITY
It refers to the benefits or outcomes we derive from the product as compared to the total cost we spend on developing it. If the product is more or less the same as the older system, then it is not feasible to develop it.
3.
PROJECT PLANNING
3.1
PERT Chart
PERT is a method to analyze the tasks involved in completing a given project, especially the time needed to complete each task, and to identify the minimum time needed to complete the total project.
PERT was developed primarily to simplify the planning and
scheduling of large and complex projects.
- A
PERT chart is a tool that facilitates decision making. The first draft of
a PERT chart will number its events sequentially in 10s (10, 20, 30, etc.)
to allow the later insertion of additional events.
- Two consecutive events in a PERT chart are linked by activities, which are conventionally represented as arrows.
- The
events are presented in a logical sequence and no activity can commence
until its immediately preceding event is completed.
- The
planner decides which milestones should be PERT events and also decides
their “proper” sequence.
- A
PERT chart may have multiple pages with many sub-tasks.
Terminologies
- PERT
event: a point that marks the start or completion of one or more
activities. It consumes no time and uses no resources. When it marks the
completion of one or more tasks, it is not “reached” (does not occur)
until all of the activities leading to that event have
been completed.
- Predecessor
event: an event that immediately precedes some other event without any
other events intervening. An event can have multiple predecessor events
and can be the predecessor of multiple events.
- Successor
event: an event that immediately follows some other event without any
other intervening events. An event can have multiple successor events and
can be the successor of multiple events.
- PERT
activity: the actual performance of a task which consumes time and
requires resources (such as labor, materials, space, machinery). It can be
understood as representing the time, effort, and resources required to
move from one event to another. A PERT activity cannot be performed until
the predecessor event has occurred.
- Optimistic
time (O): the minimum possible time required to accomplish a task,
assuming everything proceeds better than is normally expected
- Pessimistic
time (P): the maximum possible time required to accomplish a task,
assuming everything goes wrong (but excluding major catastrophes).
- Most
likely time (M): the best estimate of the time required to accomplish
a task, assuming everything proceeds as normal.
- Expected
time (TE): the best estimate of the time required to
accomplish a task, accounting for the fact that things don't always
proceed as normal (the implication being that the expected time is the
average time the task would require if the task were repeated on a number
of occasions over an extended period of time).
TE = (O + 4M + P) ÷ 6
- Float or slack is
a measure of the excess time and resources available to complete a task.
It is the amount of time that a project task can be delayed without causing
a delay in any subsequent tasks (free float) or the whole project (total
float). Positive slack would indicate ahead of schedule;
negative slack would indicate behind schedule; and zero slack
would indicate on schedule.
- Critical path:
the longest possible continuous pathway taken from the initial event to
the terminal event. It determines the total calendar time required for the
project; and, therefore, any time delays along the critical path will
delay the reaching of the terminal event by at least the same amount.
- Critical
activity: An activity that has total float equal to zero. An activity with
zero float is not necessarily on the critical path since its path may not
be the longest.
- Lead time:
the time by which a predecessor event must be completed
in order to allow sufficient time for the activities that must elapse
before a specific PERT event reaches completion.
- Lag
time: the earliest time by which a successor event can
follow a specific PERT event.
- Fast tracking: performing more critical
activities in parallel
- Crashing critical path:
Shortening duration of critical activities
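To make the terminology above concrete, here is a small sketch (the task names and durations are invented for illustration) that computes PERT expected times from three-point estimates and then finds the project duration along the critical path with a forward pass over the activity network:

```java
import java.util.*;

// Illustrative sketch only: the PERT expected-time formula plus a
// forward pass over a tiny activity network to find the
// critical-path (longest-path) duration.
public class PertSketch {
    // Expected time TE = (O + 4M + P) / 6
    public static double expectedTime(double o, double m, double p) {
        return (o + 4 * m + p) / 6.0;
    }

    // Forward pass over a DAG given in topological order: a task may
    // start only when all its predecessor activities have finished.
    public static double projectDuration(List<String> topoOrder,
                                         Map<String, Double> duration,
                                         Map<String, List<String>> predecessors) {
        Map<String, Double> finish = new HashMap<>();
        double total = 0.0;
        for (String task : topoOrder) {
            double start = 0.0;
            for (String pred : predecessors.getOrDefault(task, List.of())) {
                start = Math.max(start, finish.get(pred));
            }
            finish.put(task, start + duration.get(task));
            total = Math.max(total, finish.get(task));
        }
        return total;
    }

    public static void main(String[] args) {
        // TE for (O=2, M=4, P=9) is (2 + 16 + 9) / 6 = 4.5
        System.out.println(expectedTime(2, 4, 9));

        Map<String, Double> d = Map.of("design", 3.0, "code", 5.0,
                                       "docs", 1.0, "test", 2.0);
        Map<String, List<String>> pred = Map.of(
                "code", List.of("design"),
                "docs", List.of("design"),
                "test", List.of("code", "docs"));
        // critical path design -> code -> test: 3 + 5 + 2 = 10
        System.out.println(projectDuration(
                List.of("design", "code", "docs", "test"), d, pred));
    }
}
```

Any delay to a task on the longest path ("design", "code" or "test" here) delays the whole project, whereas "docs" carries slack.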
Advantages
- PERT
chart explicitly defines and makes visible dependencies between the work breakdown structure
elements
- PERT
facilitates identification of the critical path and makes this visible
- PERT
facilitates identification of early start, late start, and slack for each
activity,
- PERT
provides for potentially reduced project duration due to better understanding
of dependencies leading to improved overlapping of activities and tasks
where feasible.
- The
large amount of project data can be organized & presented in diagram
for use in decision making.
Disadvantages
- There
can be potentially hundreds or thousands of activities and individual
dependency relationships
- PERT
is not easily scalable for smaller projects
- The
network charts tend to be large and unwieldy requiring several pages to
print and requiring special size paper
- The
lack of a timeframe on most PERT/CPM charts makes it harder to show status
although colors can help (e.g., specific color for completed nodes)
- When
the PERT/CPM charts become unwieldy, they are no longer used to manage the
project.
Cost
Cost/Benefit Analysis
is a systematic approach to estimating the strengths and weaknesses of
technology alternatives that satisfy agency business requirements.
The Cost Benefit Analysis Method (CBAM) is an architecture-centric
method for analyzing the costs, benefits, and schedule implications of architectural
decisions. It also enables assessment of the uncertainty surrounding judgments
of costs and benefits, thereby providing a basis for informed decision making
about architectural design/upgrade.
3.2 Gantt Chart
A Gantt chart is a type of bar chart, developed by Henry Gantt in the 1910s, that illustrates a project schedule. Gantt charts illustrate the start and finish dates of the terminal
elements and summary elements of a project.
Terminal elements and summary elements comprise the work breakdown structure of the project. Some Gantt charts also show the dependency (i.e. precedence network) relationships between activities.
Gantt charts can be used to show current schedule status using percent-complete shadings and a vertical "TODAY" line.
Although now regarded as a common charting technique, Gantt
charts were considered revolutionary when first introduced.[1] In recognition of Henry Gantt's contributions, the Henry Laurence Gantt Medal is awarded for distinguished achievement in management and
in community service. This chart is also used in information
technology to
represent data that has been collected.
Gantt charts have become a common technique for representing
the phases and activities of a project work breakdown structure (WBS), so they can be understood by a wide audience all over
the world. The technique is frequently used in Project
Management to help
breakdown the project.[5]
A common error made by those who equate Gantt chart design
with project design is that they attempt to define the project work breakdown
structure at the same time that they define schedule activities. This practice
makes it very difficult to follow the 100% Rule.
Instead the WBS should be fully defined to follow the 100% Rule, then the
project schedule can be designed.
Although a Gantt chart is useful and valuable for small
projects that fit on a single sheet or screen, they can become quite unwieldy
for projects with more than about 30 activities. Larger Gantt charts may not be
suitable for most computer displays.
A related criticism is that Gantt charts communicate
relatively little information per unit area of display. That is, projects are
often considerably more complex than can be communicated effectively with a
Gantt chart.
Gantt charts only represent part of the triple
constraints (cost,
time and scope) on projects, because they focus primarily on schedule
management. Moreover, Gantt charts do not represent the size of a project or
the relative size of work elements, therefore the magnitude of a
behind-schedule condition is easily miscommunicated. If two projects are the
same number of days behind schedule, the larger project has a larger effect on
resource utilization, yet the Gantt does not represent this difference.
Although project management software can show schedule
dependencies as lines between activities, displaying a large number of
dependencies may result in a cluttered or unreadable chart.
Because the horizontal bars of a Gantt chart have a fixed
height, they can misrepresent the time-phased workload (resource requirements)
of a project, which may cause confusion especially in large projects.
Two activities may appear to be the same size on the chart, yet in reality be different orders of magnitude. A related criticism is that all activities of a Gantt chart show planned workload as constant. In practice, many activities (especially summary elements) have front-loaded or back-loaded work plans, so a Gantt chart with percent-complete shading may actually miscommunicate the true schedule performance status.
Coding Analysis
Software quality measurement is about quantifying the extent to which a system or software possesses desirable characteristics. This can be performed through qualitative or quantitative means, or a mix of both. In either case, for each desirable characteristic there is a set of measurable attributes whose presence in a piece of software or system tends to be correlated and associated with that characteristic. For example, an attribute associated with portability is the number of target-dependent statements in a program. More precisely, using the Quality Function Deployment approach, these measurable attributes are the "hows" that need to be enforced to enable the "whats" in the software quality definition above.
The structure, classification and
terminology of attributes and metrics applicable to software quality management
have been derived or extracted from the ISO 9126-3
and the subsequent ISO 25000:2005
quality models. The main focus is on internal structural quality. Subcategories
have been created to handle specific areas like business application
architecture and technical characteristics such as data access and manipulation
or the notion of transactions.
There is a dependence tree between software quality characteristics and their measurable attributes: each of the five characteristics that matter for the user or owner of the business system depends on measurable attributes such as:
- Application Architecture Practices
- Coding Practices
- Application Complexity
- Documentation
- Portability
- Technical & Functional Volume
One of the founding members of the Consortium for IT Software Quality, the OMG (Object Management Group), has published an article on "How to Deliver Resilient, Secure, Efficient, and Easily Changed IT Systems in Line with CISQ Recommendations" which states that correlations between programming errors and production defects reveal that basic code errors account for 92% of the total errors in the source code. These numerous code-level issues eventually account for only 10% of the defects in production. Bad software engineering practices at the architecture level account for only 8% of total defects, but consume over half the effort spent on fixing problems, and lead to 90% of the serious reliability, security, and efficiency issues in production.
Code-based
analysis
Many of the existing software measures count structural elements of the application that result from parsing the source code, such as individual instructions (Park, 1992), tokens (Halstead, 1977), control structures (McCabe, 1976), and objects (Chidamber & Kemerer, 1994).
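One of the structural measures mentioned above is McCabe's count of control structures. A toy estimator in that spirit might look like the following; note that a real analyzer parses the source rather than scanning for keywords, so this sketch will miscount tokens that appear inside strings or identifiers:

```java
// Toy illustration of a structural measure in the spirit of McCabe
// (1976): cyclomatic complexity approximated by counting decision
// points. The token list and scanning approach are simplifications
// made for illustration only.
public class ComplexityEstimate {
    private static final String[] DECISION_TOKENS =
            {"if", "for", "while", "case", "catch", "&&", "||"};

    public static int cyclomatic(String source) {
        int complexity = 1; // the single path through decision-free code
        for (String token : DECISION_TOKENS) {
            int idx = 0;
            // each occurrence of a decision token adds one path
            while ((idx = source.indexOf(token, idx)) != -1) {
                complexity++;
                idx += token.length();
            }
        }
        return complexity;
    }
}
```

For example, `if (a && b) { while (c) { run(); } }` contains three decision points (`if`, `&&`, `while`), giving an estimated complexity of 4.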
Software quality measurement is
about quantifying to what extent a system or software rates along these
dimensions. The analysis can be performed using a qualitative or quantitative
approach or a mix of both to provide an aggregate view [using for example
weighted average(s) that reflect relative importance between the factors being
measured].
This view of software quality on a
linear continuum has to be supplemented by the identification of discrete Critical Programming
Errors. These vulnerabilities may not fail a test case,
but they are the result of bad practices that under specific circumstances can
lead to catastrophic outages, performance degradations, security breaches,
corrupted data, and myriad other problems (Nygard, 2007) that make a given
system de facto unsuitable for use regardless of its rating based on aggregated
measurements. A well-known example of vulnerability is the Common Weakness Enumeration
(Martin, 2001), a repository of vulnerabilities in the source code that make
applications exposed to security breaches.
The measurement of critical application characteristics involves measuring structural attributes of the application's architecture, coding, and in-line documentation. Each characteristic is thus affected by attributes at numerous levels of abstraction in the application, all of which must be included in calculating the characteristic's measure if it is to be a valuable predictor of quality outcomes that affect the business. This layered approach to calculating characteristic measures was first proposed by Boehm and his colleagues at TRW (Boehm, 1978) and is the approach taken in the ISO 9126 and 25000 series standards. These attributes can be measured from the parsed results of a static analysis of the application source code. Even dynamic characteristics of applications such as reliability and performance efficiency have their causal roots in the static structure of the application.
Structural quality analysis and
measurement is performed through the analysis of the source code,
the architecture, software framework, database schema
in relationship to principles and standards that together define the conceptual
and logical architecture of a system.
This is distinct from the basic,
local, component-level code analysis typically performed by development tools
which are mostly concerned with implementation considerations and are crucial
during debugging
and testing activities.
Reliability
The root causes of poor reliability
are found in a combination of non-compliance with good architectural and
coding practices. This non-compliance can be detected by measuring the static
quality attributes of an application. Assessing the static attributes
underlying an application’s reliability provides an estimate of the level of
business risk and the likelihood of potential application failures and defects
the application will experience when placed in operation.
Assessing reliability requires
checks of at least the following software engineering best practices and
technical attributes:
- Application Architecture Practices
- Coding Practices
- Complexity of algorithms
- Complexity of programming practices
- Compliance with Object-Oriented and Structured
Programming best practices (when applicable)
- Component or pattern re-use ratio
- Dirty programming
- Error & Exception handling (for all layers
- GUI, Logic & Data)
- Multi-layer design compliance
- Resource bounds management
- Software avoids patterns that will lead to
unexpected behaviors
- Software manages data integrity and consistency
- Transaction complexity level
Depending on the application architecture
and the third-party components used (such as external libraries or frameworks),
custom checks should be defined along the lines drawn by the above list of best
practices to ensure a better assessment of the reliability of the delivered
software.
Efficiency
As with Reliability, the causes of
performance inefficiency are often found in violations of good architectural
and coding practice which can be detected by measuring the static quality
attributes of an application. These static attributes predict potential
operational performance bottlenecks and future scalability problems, especially
for applications requiring high execution speed for handling complex algorithms
or huge volumes of data.
Assessing performance efficiency
requires checking at least the following software engineering best practices
and technical attributes:
- Application Architecture Practices
- Appropriate interactions with expensive and/or
remote resources
- Data access performance and data management
- Memory, network and disk space management
- Coding Practices
- Compliance with Object-Oriented and Structured
Programming best practices (as appropriate)
- Compliance with SQL programming best practices
Security
Most security vulnerabilities
result from poor coding and architectural practices such as SQL injection or
cross-site scripting. These are well documented in lists maintained by CWE, and
the SEI/Computer Emergency Center (CERT) at Carnegie Mellon
University.
Assessing security requires at
least checking the following software engineering best practices and technical
attributes:
- Application Architecture Practices
- Multi-layer design compliance
- Security best practices (Input Validation, SQL
Injection, Cross-Site Scripting, etc. )
- Programming Practices (code level)
- Error & Exception handling
- Security best practices (system functions
access, access control to programs)
Maintainability
Maintainability includes concepts
of modularity, understandability, changeability, testability, reusability, and
transferability from one development team to another. These do not take the
form of critical issues at the code level. Rather, poor maintainability is
typically the result of thousands of minor violations of best practices in
documentation, complexity avoidance strategy, and basic programming practices
that make the difference between clean and easy-to-read code vs. unorganized
and difficult-to-read code.
Assessing maintainability requires
checking the following software engineering best practices and technical
attributes:
- Application Architecture Practices
- Architecture, Programs and Code documentation
embedded in source code
- Code readability
- Complexity level of transactions
- Complexity of algorithms
- Complexity of programming practices
- Compliance with Object-Oriented and Structured
Programming best practices (when applicable)
- Component
or pattern re-use ratio
- Controlled level of dynamic coding
- Coupling ratio
- Dirty programming
- Documentation
- Hardware, OS, middleware, software components
and database independence
- Multi-layer design compliance
- Portability
- Programming Practices (code level)
- Reduced duplicated code and functions
- Source code file organization cleanliness
Maintainability is closely related to Ward Cunningham's concept of technical debt, which is an expression of the costs resulting from a lack of maintainability. Reasons why maintainability is low can be classified as reckless vs. prudent and deliberate vs. inadvertent, and often have their origin in developers' inability, lack of time and goals, their carelessness, and discrepancies between the creation cost of and benefits from documentation and, in particular, maintainable source code.
Size
Measuring software size requires
that the whole source code be correctly gathered, including database structure
scripts, data manipulation source code, component headers, configuration files
etc. There are essentially two types of software sizes to be measured, the
technical size (footprint) and the functional size:
- There are several software
technical sizing methods that have been widely
described. The most common technical sizing method is number of Lines Of
Code (#LOC) per technology, number of files, functions, classes, tables,
etc., from which backfiring Function Points can be computed;
- The most common for measuring functional size
is Function Point Analysis.
Function Point Analysis measures the size of the software deliverable from
a user’s perspective.
Function Point
sizing is done based on user requirements and provides an accurate
representation of both size for the developer/estimator and value
(functionality to be delivered) and reflects the business functionality being
delivered to the customer. The method includes the identification and weighting
of user recognizable inputs, outputs and data stores. The size value is then
available for use in conjunction with numerous measures to quantify and to
evaluate software delivery and performance (Development Cost per Function
Point; Delivered Defects per Function Point; Function Points per Staff Month.).
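As an illustration of how such a count turns into a size figure, the sketch below applies the standard IFPUG component types with their "average" complexity weights. A real count classifies each component as low, average or high complexity; the all-average assumption here, and the sample figures in the derived metric, are simplifications for illustration:

```java
// Sketch of an unadjusted function point count. The weights are the
// standard IFPUG "average" complexity values (EI=4, EO=5, EQ=4,
// ILF=10, EIF=7); assuming average complexity for every component
// is a simplification made for this example.
public class FunctionPointSketch {
    public static int unadjusted(int externalInputs, int externalOutputs,
                                 int externalInquiries, int internalFiles,
                                 int externalInterfaceFiles) {
        return externalInputs * 4
             + externalOutputs * 5
             + externalInquiries * 4
             + internalFiles * 10
             + externalInterfaceFiles * 7;
    }

    // one of the derived delivery metrics mentioned above
    public static double perStaffMonth(int functionPoints, double staffMonths) {
        return functionPoints / staffMonths;
    }
}
```

For example, a system with 3 inputs, 2 outputs, 1 inquiry, 2 internal files and 1 external interface file counts 12 + 10 + 4 + 20 + 7 = 53 unadjusted function points.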
The Function Point Analysis sizing standard
is supported by the International Function Point Users Group (IFPUG) (www.ifpug.org). It can be applied
early in the software development life-cycle and it is not dependent on lines
of code like the somewhat inaccurate Backfiring method.
The method is technology agnostic
and can be used for comparative analysis across organizations and across
industries.
Since the inception of Function
Point Analysis, several variations have evolved and the family of functional
sizing techniques has broadened to include such sizing measures as COSMIC,
NESMA, Use Case Points, FP Lite, Early and Quick FPs, and most recently Story
Points. However, Function Points has a history of statistical accuracy, and has
been used as a common unit of work measurement in numerous application
development management (ADM) or outsourcing engagements, serving as the
"currency" by which services are delivered and performance is
measured.
One common limitation of the Function Point methodology is that it is a manual process and can therefore be labor-intensive and costly in large-scale initiatives such as application development or outsourcing engagements. This negative aspect of applying the methodology may be what motivated industry IT leaders to form the Consortium for IT Software Quality (www.it-cisq.org), focused on introducing a computable metrics standard for automating the measurement of software size, while the IFPUG (www.ifpug.org) keeps promoting a manual approach, as most of its activity relies on FP counter certifications.
In November 2011, CISQ announced the availability of its first metric standard, Automated Function Points, to the CISQ membership, in CISQ Technical Report 2011-01, available at http://www.it-cisq.org/cisqwiki/images/a/a2/CISQ_Function_Point_Specification.pdf. These recommendations have been developed in OMG's Request for Comment format and submitted to OMG's process for standardization.
3.3
Identifying critical programming errors
Critical Programming Errors are
specific architectural and/or coding bad practices that result in the highest,
immediate or long term, business disruption risk.
These are quite often technology-related and depend heavily on the context, business objectives and risks. Some may consider respect for naming conventions a minor concern, while others (those preparing the ground for a knowledge transfer, for example) will consider it absolutely critical.
Critical Programming Errors can
also be classified per CISQ Characteristics. Basic example below:
- Reliability
- Avoid software patterns that will
lead to unexpected behavior (Uninitialized variable, null pointers, etc.)
- Methods, procedures and functions
doing Insert, Update, Delete, Create Table or Select must include error
management
- Multi-thread functions should be
made thread safe, for instance servlets or struts
action classes must not have instance/non-final static fields
- Efficiency
- Ensure centralization of client
requests (incoming and data) to reduce network traffic
- Avoid SQL queries that don’t use
an index against large tables in a loop
- Security
- Avoid fields in servlet classes
that are not final static
- Avoid data access without
including error management
- Check control return codes and
implement error handling mechanisms
- Ensure input validation to avoid
cross-site scripting flaws or SQL injections flaws
- Maintainability
- Deep inheritance trees and nesting
should be avoided to improve comprehensibility
- Modules should be loosely coupled (fan-out, intermediaries) to avoid propagation of modifications
- Enforce homogeneous naming
conventions
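The input-validation rule above can be illustrated with a small check. The decisive fix for SQL injection is a parameterized query (for example a JDBC PreparedStatement); the whitelist below, whose allowed character set is a made-up rule for this example, additionally rejects suspicious input before it ever reaches a query:

```java
import java.util.regex.Pattern;

// Illustration of the input-validation rule above. The real defence
// against SQL injection is a parameterized query (PreparedStatement);
// this whitelist check rejects suspicious input early. The allowed
// character set is a hypothetical rule chosen for this example.
public class InputValidation {
    // letters, digits, spaces and hyphens only, 1 to 50 characters
    private static final Pattern PRODUCT_NAME =
            Pattern.compile("[A-Za-z0-9 \\-]{1,50}");

    public static boolean isValidProductName(String input) {
        return input != null && PRODUCT_NAME.matcher(input).matches();
    }
}
```

A payload such as `x'; DROP TABLE users;--` fails the check because of the quote and semicolon characters, while an ordinary name like `Organic Wheat` passes.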
3.4
Error Handling
Code is often written without
considering the potential that an error might occur. When events occur that an
application is not expecting, problems arise. Then, during the debugging phase,
an attempt is made to go back to the code and implement some error traps and
correction. However, this is usually not sufficient. Exception handling must be
taken into account during the early stages of application development. The
implementation of an error handler leads to more robust code. This chapter
discusses errors and the topic of exception handling in LabVIEW. First,
exception handling will be defined along with its role in applications.
This explanation will also clarify the
importance of exception handling. Next, the different types of errors that can
occur will be discussed. This will be followed by a description of the
available LabVIEW tools for exception handling, as well as some of the
debugging tools. Finally, several different ways to deal with errors in
applications will be demonstrated.
Many programs have more code to handle error conditions than to solve the problem for which the program was written. There are many different classes of errors that can occur:
1. User input errors (e.g., mistyped
input, wrong filename given, wrong mouse button pressed, etc.)
2. Device errors (e.g., network
disconnect, disk crash, modem not turned on, etc.)
3. System resource limitations (e.g.,
disk is full, heap memory exhausted, file does not exist)
4. Software and hardware component
failures (e.g., DNS not available, invalid input, etc.)
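The four error classes above can be mapped onto Java's exception hierarchy. The sketch below is illustrative only; the category names and the mapping are assumptions, not a standard API.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.UnknownHostException;

// Illustrative mapping of the four error classes onto exception types.
// Order matters: the more specific exceptions are checked before the
// general IOException they extend.
public class ErrorClassifier {

    public static String classify(Exception e) {
        if (e instanceof IllegalArgumentException) return "user input error";
        if (e instanceof UnknownHostException)     return "device/network error";
        if (e instanceof FileNotFoundException)    return "resource limitation";
        if (e instanceof IOException)              return "component failure";
        return "unexpected error";
    }

    public static void main(String[] args) {
        System.out.println(classify(new IllegalArgumentException("bad filename")));
        System.out.println(classify(new UnknownHostException("apmc.example")));
    }
}
```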
The java.net.Socket class is a good
example. The class implements an object-oriented wrapper onto Unix sockets for
networking (using Java native functions). However, you don’t need to know all
the low-level details of sockets when implementing a network client
application, but you do need to be made aware of error conditions that may
arise within the class library. These include I/O errors and network errors,
such as unknown host names, broken network connections, etc.
A class library can sometimes handle errors internally and do something sensible. Other times, the user of the class must be notified of the error. Exceptions provide a structured way to communicate error information across a class or procedure abstraction boundary. Many programs are written that do not perform much error checking: programmers often assume that a program is always given correct input, that there will be no device errors, that system resources are always available, and that component failures simply cannot happen.
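The idea of communicating errors across an abstraction boundary can be sketched with checked exceptions: a low-level method declares what can go wrong, and its caller decides how to recover. The file and method names below are hypothetical.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Sketch of error communication across an abstraction boundary.
public class RateFileReader {

    // The boundary: callers see "throws IOException", not the
    // low-level details of file access.
    public static String readFirstLine(String path) throws IOException {
        try (BufferedReader r = new BufferedReader(new FileReader(path))) {
            return r.readLine();
        }
    }

    // The caller handles the error in a way that makes sense for it:
    // here, falling back to a default value.
    public static String firstLineOrDefault(String path, String fallback) {
        try {
            String line = readFirstLine(path);
            return line != null ? line : fallback;
        } catch (IOException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(firstLineOrDefault("no-such-file.txt", "no rates available"));
    }
}
```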
3.5
Security
The first thing that we must do to
facilitate our discussion of Java security is to discuss just what Java's
security goals are.
The term "security" is
somewhat vague unless it is discussed in some context; different expectations
of the term "security" might lead us to expect that Java programs
would be:
- Safe from malevolent programs : Programs should
not be allowed to harm a user's computing environment. This includes
Trojan horses as well as harmful programs that can replicate
themselves--computer viruses.
- Non-intrusive : Programs should be prevented
from discovering private information on the host computer or the host
computer's network.
- Authenticated: The identity of parties involved
in the program should be verified.
- Encrypted: Data that the program sends and
receives should be encrypted.
- Audited: Potentially sensitive operations
should always be logged.
- Well-defined: A well-defined security
specification would be followed.
- Verified: Rules of operation should be set and
verified.
- Well-behaved: Programs should be prevented from
consuming too many system resources.
- C2 or B1 certified: Programs should have
certification from the U.S. government that certain security procedures
are included.
In fact, while all of these
features could be part of a secure system, only the first two were within the
province of Java's 1.0 default security model. Other items in the list have
been introduced in later versions of Java: authentication was added, encryption
is available as an extension and auditing can be added to any Java program by
providing an auditing security manager. Still others of these items will be
added in the future. But the basic premise remains that Java security was
originally and fundamentally designed to protect the information on a computer
from being accessed or modified (including a modification that would introduce
a virus) while still allowing the Java program to run on that computer.
The point driving this notion of
security is the new distribution model for Java programs. One of the driving
forces behind Java, of course, is its ability to download programs over a
network and run those programs on another machine within the context of a
Java-enabled browser (or within the context of other Java applications).
Coupled with the widespread growth of Internet use--and the public-access
nature of the Internet--Java's ability to bring programs to a user on an
as-needed, just-in-time basis has been a strong reason for its rapid deployment
and acceptance.
The nature of the Internet created
a new and largely unprecedented requirement for programs to be free of viruses
and Trojan horses. Computer users had always been used to purchasing
shrink-wrapped software. Many soon began downloading software via ftp or other
means and then running that software on their machines. But widespread
downloading also led to a pervasive problem of malevolent attributes both in
free and (ironically) in commercial software (a problem which continues
unabated). The introduction of Java into this equation had the potential to
multiply this problem by orders of magnitude, as computer users now download
programs automatically and frequently.
For Java to succeed, it needed to
circumvent the virus/trojan horse problems that plagued other models of
software distribution. Hence, the early work on Java focused on just that
issue: Java programs are considered safe because they cannot install, run, or
propagate viruses, and because the program itself cannot perform any action
that is harmful to the user's computing environment. And in this context,
safety means security. This is not to say that the other issues in the above list are not important; each has its place and its importance (in fact, the third and fourth topics in that list merit close attention). But the issues of protecting information and preventing viruses were considered most important; hence, features to provide that level of security were the first to be adopted. Like all parts of Java, its security model is evolving (and has evolved through its various releases); many of the notions about security in the list above will eventually make their way into Java.
One of the primary goals of this chapter, then, is to explain Java's security model and its evolution through releases. In the final analysis, whether or not Java is secure is a subjective judgment that individual users will have to make based on their own requirements.
If all you want from Java is freedom from viruses, any release of Java should meet your needs. If you need to introduce authentication or encryption into your program, you will need to use a 1.1 or later release of Java. If you have a requirement that all operations be audited, you will need to build that auditing into your applications. If you really need conformance with a U.S. government-approved definition of security, Java is not the platform for you. We take a very pragmatic view of security here: the issue is not whether a system that lacks a particular feature qualifies as "secure" according to someone's definition of security. The issue is whether Java possesses the features that meet your needs.
When Java security is discussed,
the discussion typically centers around Java's applet-based security model--the
security model that is embodied by Java-enabled browsers. This model is
designed for the Internet. For many users, this is not necessarily the most
appropriate model: it is somewhat restrictive, and the security concerns on a
private, corporate network are not the same as those on the Internet.
Here we take a different tack: the goal is to show how to use the security model and how to write your own secure Java applications. While some of the information presented will be applicable to a browser environment, the security of any particular browser is ultimately up to the provider of the browser. Some browsers allow us to change the security policy the browser uses, but many do not. Hence, reading about the security manager here may help you understand how a particular browser works (and why it works that way), but that won't necessarily allow you to change the security model provided by that browser.
4.
SYSTEM SPECIFICATION
4.1
HARDWARE SPECIFICATION
Processor : Pentium IV
Memory : 256 MB RAM
Hard disk : 40 GB
Mouse : Optical mouse
Monitor : 15" color
Keyboard : 102 keys
4.2
SOFTWARE SPECIFICATION
Operating System : Windows XP
Technology : Java Server Pages (JSP)
Database : MySQL 5.0
Scripting Language : JavaScript
Coding Language : Java
Front End Tool : Dreamweaver 8.0
Web Server : Tomcat 6.0
4.3
BRIEF OVERVIEW OF SOFTWARE TOOLS
JSP
Java Server Pages are a way of providing server-side executable content in a web page; in other words, a way of providing a web page that varies depending on conditions on the server, information filled into a form, and so on. The original way of providing server-side executable content was through the Common Gateway Interface (CGI) and a variety of programming languages such as C, C++ and (most prevalently) Perl. Indeed, Perl and CGI are still growing, though not to the same extent as some other technologies. More recently, Java servlets have been introduced; they allow a similar approach to writing server-side executable content: a program which produces an HTML page as its output.
Java
servlets are more efficient in operation than CGI programs, and for heavily
used servers they provide an excellent solution. You'll probably want to choose
between modPerl, servlets, and your own server written in C or C++ for such
applications. But for many server-side applications, the number of changes made on a reply page is really quite small, and the work involved in calculating the changes is nearly insignificant. It is a great shame, then, to have to write a program to spit out a huge chunk of non-varying text with just a little changing within it. Many web servers can support "Server Side Includes" (SSI), where a page is parsed by the web server on its way from the document directory to the browser, and substitutions of certain variables are made.
Using
SSI, operating system commands can even be run and their outputs written in to
the page sent to the browser - such a web page looks different if you examine
the source on the server's discs and if you ask your browser to "view
source".
Active Server Pages (ASP) from Microsoft takes a similar approach to SSI. You write web pages which include chunks of one or more of VBScript, PerlScript and JavaScript, and the page is parsed and the script run as the server feeds the page through to the browser. The facilities provided are much more extensive than SSI, but with the "interpret every time" approach, efficiency of operation is not a strong point of this scheme; even the Microsoft documentation warns you of the fact!
The SSI/ASP approach is a good one, but there is a requirement for something that works along the same lines as far as the provider is concerned ("a page of HTML that changes is not a program") but doesn't have the same run-time resource inefficiencies. Of course, to make it portable, a language like Java would be nice, especially if your programmers already know Java.
The OO abilities and large class libraries will minimize what's needed in each individual web page, and so came about Java Server Pages, or JSP. JSP is much more recent than ASP (SSI has been around for a very long time), with much of the documentation dated early 2000; at the time of writing, anyone already using it is an "early adopter", whereas ASP, servlets, etc. are already well established. Time will tell us whether the design promise of JSP gets translated into a heavily used product.
The
JSP specification was written by Sun, and they provide a test reference server.
However, you'll probably find that Apache "Tomcat" will become the
big kid on the block as a JSP Server; it's open source, freely available, and
we see no reason why it shouldn't be just as robust as the rest of Apache's Web
server!
The JSP specification provides:
• A language for developing JSP pages, which are text-based documents that describe how to process a request and construct a response
• Constructs for accessing server-side objects
• Mechanisms for defining extensions to the JSP language
A
JSP page is a text-based document that contains two types of text:
static template data, which can be expressed in any text-based format, such as
HTML, SVG, WML, and XML; and JSP elements, which construct dynamic content. A
syntax card and reference for the JSP elements are available at
http://java.sun.com/products/jsp/technical.html#syntax
The
Life Cycle of a JSP Page
A JSP page
services requests as a servlet. Thus, the life cycle and many of the
capabilities of JSP pages (in particular the dynamic aspects) are determined by
Java Servlet technology.
When
a request is mapped to a JSP page, it is handled by a special servlet that
first checks whether the JSP page’s servlet is older than the JSP page. If it
is, it translates the JSP page into a servlet class and compiles the class.
During development, one of the advantages of JSP pages over servlets is that
the build process is performed automatically.
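The staleness check described above can be sketched roughly as a timestamp comparison. This is a deliberate simplification of what a real container does; the class and file names are illustrative.

```java
import java.io.File;
import java.io.IOException;

// Rough sketch of the check a container performs: retranslate when the
// generated servlet is missing or older than the JSP source.
public class JspStalenessCheck {

    public static boolean needsRecompile(File jspSource, File servletClass) {
        // A missing servlet class means the page was never translated.
        if (!servletClass.exists()) return true;
        return jspSource.lastModified() > servletClass.lastModified();
    }

    public static void main(String[] args) throws IOException {
        File jsp = File.createTempFile("page", ".jsp");
        File cls = File.createTempFile("page_jsp", ".class");
        jsp.deleteOnExit();
        cls.deleteOnExit();
        cls.setLastModified(jsp.lastModified() - 10_000); // servlet is older
        System.out.println(needsRecompile(jsp, cls));     // page must be retranslated
    }
}
```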
Apache Tomcat is an open source
software implementation of the Java Servlet and Java Server Pages technologies.
The Java Servlet and Java Server Pages specifications are developed under the Java
Community Process.
Apache Tomcat is developed in an open
and participatory environment and released under the Apache License
version 2.
Apache Tomcat powers numerous large-scale, mission-critical web applications across a diverse range of industries and organizations.
Translation and
Compilation
During the
translation phase each type of data in a JSP page is treated differently. Template
data is transformed into code that will emit the data into the stream that
returns data to the client. JSP elements are treated as follows:
• Directives are
used to control how the Web container translates and executes the JSP page.
• Scripting elements
are inserted into the JSP page’s servlet class. See JSP Scripting Elements for
details.
• Elements of
the form <jsp:XXX ... /> are converted into method calls to
JavaBeans components or invocations of the Java Servlet API. For a JSP page
named pageName, the source for a JSP page’s servlet is kept in the file:
<S1AS7_HOME>/domains/domain1/server1/applications/j2eemodules/context_root_n/pageName$jsp.java
Both the
translation and compilation phases can yield errors that are only observed when
the page is requested for the first time. If an error occurs while the page is
being translated (for example, if the translator encounters a malformed JSP
element), the server will return a ParseException, and the servlet class source
file will be empty or incomplete. If an error occurs while the JSP page is
being compiled (for example, there is a syntax error in a scriptlet), the
server will return a JasperException and a message that includes the name of
the JSP page’s servlet and the line where the error occurred.
Once the page has been translated and
compiled, the JSP page’s servlet for the most part follows the servlet life
cycle described in Servlet Life Cycle.
1. If an
instance of the JSP page’s servlet does not exist, the container
a. Loads the JSP
page’s servlet class
b. Creates an instance of the servlet class
c. Initializes
the servlet instance by calling the jspInit method
2. The container
invokes the _jspService method, passing a request and response object. If the
container needs to remove the JSP page’s servlet, it calls the jspDestroy
method.
Architecturally, JSP
may be viewed as a high-level abstraction of Java servlets.
JSPs are translated into servlets at
runtime; each JSP's servlet is cached and re-used until the original JSP is
modified.
JSP can be used
independently or as the view component of a server-side model–view–controller design,
normally with JavaBeans as the model and Java servlets (or
a framework such as Apache Struts)
as the controller. This is a type of Model 2 architecture.
JSP allows Java code
and certain pre-defined actions to be interleaved with static web markup
content, with the resulting page being compiled and executed on the server to
deliver a document. The compiled pages, as well as any dependent Java
libraries, use Java bytecode rather than a native software format.
Like any other Java
program, they must be executed within a Java virtual machine (JVM) that
integrates with the server's host operating system to
provide an abstract platform-neutral environment.
JSPs
are usually used to deliver HTML and XML documents, but through the use of
OutputStream, they can deliver other types of data as well.
The Web container creates
JSP implicit objects like pageContext, servletContext, session, request &
response.
JSP
pages use several delimiters for scripting functions. The most basic is <%
... %>, which encloses a JSP scriptlet. A
scriptlet is a fragment of Java code that is run when the user requests the
page. Other common delimiters include <%= ... %> for expressions, where
the value of the expression is placed into the page delivered to the user,
and directives, denoted with <%@ ... %>.
Java code is not required to be complete or self-contained within
its scriptlet element block, but can straddle markup content providing the page
as a whole is syntactically correct. For example, any Java if/for/while blocks
opened in one scriptlet element must be correctly closed in a later element for
the page to successfully compile. Markup which falls inside a split block of
code is subject to that code, so markup inside an if block
will only appear in the output when the if condition evaluates
to true; likewise, markup inside a loop construct may appear multiple times in
the output depending upon how many times the loop body runs.
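As a hypothetical fragment (not taken from the project's pages), an if/else block split across scriptlet elements might look like this:

```jsp
<%-- The if opens in one scriptlet and closes in another; the markup
     in between is emitted only when the condition is true. --%>
<% if (session.getAttribute("user") != null) { %>
    <p>Welcome back, <%= session.getAttribute("user") %>!</p>
<% } else { %>
    <p>Please log in to see daily market rates.</p>
<% } %>
```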
JSP
scripting elements are used to create and access objects, define methods, and
manage the flow of control. Since one of the goals of JSP technology is to
separate static template data from the code needed to dynamically generate
content, very sparing use of JSP scripting is recommended.
Much
of the work that requires the use of scripts can be eliminated by using custom
tags, described in Custom Tags in JSP Pages. JSP technology allows a container
to support any scripting language that can call Java objects. If you wish to
use a scripting language other than the default, java, you must specify it in a
page directive at the beginning of a JSP page:
<%@ page
language="scripting language" %>
Since scripting
elements are converted to programming language statements in the JSP page’s
servlet class, you must import any classes and packages used by a JSP page. If
the page language is java, you import a class or package with the page
directive:
<%@
page import="packagename.*, fully_qualified_classname"
%>
For
example, the bookstore example page showcart.jsp imports the classes needed to
implement the shopping cart with the following directive:
<%@
page import="java.util.*, cart.*" %>
A
JSP scriptlet is used to contain any code fragment that is valid for the
scripting language used in a page. The syntax for a scriptlet is as follows:
<%scripting
language statements%>
When
the scripting language is set to java, a scriptlet is transformed into a Java
programming language statement fragment and is inserted into the service method
of the JSP page’s servlet. A programming language variable created within a
scriptlet is accessible from anywhere within the JSP page. The JSP page showcart.jsp
contains a scriptlet that retrieves an iterator from the collection of items
maintained by a shopping cart and sets up a construct to loop through all the
items in the cart. Inside the loop, the JSP page extracts properties of the
book objects and formats them using HTML markup. Since the while loop opens a
block, the HTML markup is followed by a scriptlet that closes the block.
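A simplified sketch of such a page fragment (not the actual showcart.jsp source; the bean names are assumed from the description above):

```jsp
<%-- Retrieve an iterator from the cart and loop over the items; the
     while block opens in one scriptlet and is closed in a later one. --%>
<% java.util.Iterator i = cart.getItems().iterator();
   while (i.hasNext()) {
       BookDetails book = (BookDetails) i.next(); %>
    <tr>
        <td><%= book.getTitle() %></td>
        <td><%= book.getPrice() %></td>
    </tr>
<% } %>
```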
Expressions
A
JSP expression is used to insert the value of a scripting language
expression, converted into a string, into the data stream returned to the
client. When the scripting language is the Java programming language, an
expression is transformed into a statement that converts the value of the
expression into a String object and inserts it into the implicit out object.
The syntax for an expression is as follows:
<%=
scripting language expression %>
Note
that a semicolon is not allowed within a JSP expression, even if the same
expression has a semicolon when you use it within a scriptlet.
The standard JSP tags for invoking operations on JavaBeans components and performing request dispatching simplify JSP page development and maintenance.
JSP technology also provides a mechanism for encapsulating other types of
dynamic functionality in custom tags, which are extensions to the JSP
language. Custom tags are usually distributed in the form of a tag library,
which defines a set of related custom tags and contains the objects that
implement the tags. Some examples of tasks that can be performed by custom tags
include operations on implicit objects, processing forms, accessing databases
and other enterprise services such as e-mail and directories, and performing
flow control.
JSP
tag libraries are created by developers who are proficient at the Java
programming language and expert in accessing data and other services, and are
used by Web application designers who can focus on presentation issues rather
than being concerned with how to access enterprise services. As well as
encouraging division of labor between library developers and library users,
custom tags increase productivity by encapsulating recurring tasks so that they
can be reused across more than one application.
The uses of JSP are as follows. JavaServer Pages often serve the same purpose as programs implemented using the Common Gateway Interface (CGI), but JSP offers several advantages in comparison with CGI:
·
Performance is significantly better because JSP allows embedding dynamic elements in the HTML page itself instead of having separate CGI files.
·
JSP pages are always compiled before they are processed by the server, unlike CGI/Perl, which requires the server to load an interpreter and the target script each time the page is requested.
·
JavaServer Pages are built on top of the Java Servlets API, so, like servlets, JSP also has access to all the powerful Enterprise Java APIs, including JDBC, JNDI, EJB, JAXP, etc.
·
JSP pages can be used in combination
with servlets that handle the business logic, the model supported by Java
servlet template engines.
Finally,
JSP is an integral part of J2EE, a complete platform for enterprise class
applications. This means that JSP can play a part in the simplest applications
to the most complex and demanding.
A
custom tag is a user-defined JSP language element. When a JSP page containing a
custom tag is translated into a servlet, the tag is converted to operations on
an object called a tag handler. The Web container then invokes those
operations when the JSP page’s servlet is executed. Custom tags have a rich set
of features. They can
•
Be customized via attributes passed from the calling page.
•
Access all the objects available to JSP pages.
•
Modify the response generated by the calling page.
•
Communicate with each other. You can create and initialize a JavaBeans
component, create a variable that refers to that bean in one tag, and then use
the bean in another tag.
•
Be nested within one another, allowing for complex interactions within a JSP
page.
The Struts tag library provides a framework for building internationalized Web applications that implement the Model-View-Controller design pattern. Struts includes a comprehensive set of utility custom tags for handling:
• HTML forms
• Templates
• JavaBeans
components
• Logic
processing
The interaction between the client and the web application involves the Apache web server and the Tomcat servlet container, where the client is typically a browser displaying dynamic HTML pages.
JSP Processing:
The following steps explain how the web server creates the web
page using JSP:
·
As with a normal page, your
browser sends an HTTP request to the web server.
·
The web server recognizes
that the HTTP request is for a JSP page and forwards it to a JSP engine. This
is done by using the URL or JSP page which ends with .jsp instead of .html.
·
The JSP engine loads the JSP page from disk and converts it into servlet content. This conversion is very simple: all template text is converted to println() statements, and all JSP elements are converted to Java code that implements the corresponding dynamic behavior of the page.
·
The JSP engine compiles the servlet into an
executable class and forwards the original request to a servlet engine.
·
A part of the web server
called the servlet engine loads the Servlet class and executes it. During
execution, the servlet produces an output in HTML format, which the servlet
engine passes to the web server inside an HTTP response.
·
The web server forwards the HTTP response to
your browser in terms of static HTML content.
·
Finally, the web browser handles the dynamically generated HTML page inside the HTTP response exactly as if it were a static page.
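The third step above, where template text becomes println() calls, can be loosely illustrated in plain Java. The class below only mimics, in highly simplified form, what a JSP engine might generate for a one-line page; it does not match any real container's generated code.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Rough illustration of JSP-to-servlet translation for the page
// "<p>Today is <%= new java.util.Date() %></p>".
public class GeneratedPageSketch {

    public static String render() {
        StringWriter buffer = new StringWriter();
        PrintWriter out = new PrintWriter(buffer);
        out.print("<p>Today is ");         // template text -> print call
        out.print(new java.util.Date());   // JSP expression -> Java code
        out.println("</p>");               // template text -> print call
        out.flush();
        return buffer.toString();
    }

    public static void main(String[] args) {
        System.out.println(render());
    }
}
```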
JSP
Expression:
A
JSP expression element contains a scripting language expression that is
evaluated, converted to a String, and inserted where the expression appears in
the JSP file. Because the value of an
expression is converted to a String, you can use an expression within a line of
text, whether or not it is tagged with HTML, in a JSP file.
The expression element
can contain any expression that is valid according to the Java Language
Specification but you cannot use a semicolon to end an expression.
Following is the syntax
of JSP Expression:
<%= expression %>
JSP Comments:
JSP comment marks text
or statements that the JSP container should ignore. A JSP comment is useful
when you want to hide or "comment out" part of your JSP page.
Following is the syntax
of JSP comments:
<%-- This is JSP
comment --%>
JSP Directives:
A JSP directive affects
the overall structure of the servlet class. It usually has the following form:
<%@ directive
attribute="value" %>
JSP Actions:
JSP actions use
constructs in XML syntax to control the behavior of the servlet engine. You can
dynamically insert a file, reuse JavaBeans components, forward the user to
another page, or generate HTML for the Java plugin.
There is only one
syntax for the Action element, as it conforms to the XML standard:
<jsp:action_name
attribute="value" />
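As a hedged illustration (the bean class and property names are hypothetical), standard actions might be combined like this:

```jsp
<%-- Instantiate (or reuse) a session-scoped bean, read one of its
     properties, and include another page in the output. --%>
<jsp:useBean id="cart" class="cart.ShoppingCart" scope="session" />
<p>Items in cart: <jsp:getProperty name="cart" property="itemCount" /></p>
<jsp:include page="footer.jsp" />
```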
JSP Operators:
JSP supports all the logical and arithmetic operators supported by Java. The following table lists the operators, with the highest precedence at the top of the table and the lowest at the bottom. Within an expression, higher-precedence operators are evaluated first.
Category          Operator                    Associativity
Postfix           ()  []  .  (dot operator)   Left to right
Unary             ++  --  !  ~                Right to left
Multiplicative    *  /  %                     Left to right
Additive          +  -                        Left to right
Shift             <<  >>  >>>                 Left to right
Relational        <  <=  >  >=                Left to right
Equality          ==  !=                      Left to right
Bitwise AND       &                           Left to right
Bitwise XOR       ^                           Left to right
Bitwise OR        |                           Left to right
JSP Implicit Objects:

Object         Description
request        The HttpServletRequest object associated with the request.
response       The HttpServletResponse object associated with the response to the client.
out            The PrintWriter object used to send output to the client.
session        The HttpSession object associated with the request.
application    The ServletContext object associated with the application context.
config         The ServletConfig object associated with the page.
JSP Literals:
The JSP expression
language defines the following literals:
·
Boolean: true
and false
·
Integer: as
in Java
·
Floating point: as
in Java
·
String: with
single and double quotes; " is escaped as \", ' is escaped as \', and
\ is escaped as \\.
·
Null: null
Setting up JSP
Environment
·
This step involves downloading an
implementation of the Java Software Development Kit (SDK) and setting up PATH
environment variable appropriately.
·
You can download the SDK from Oracle's Java site: Java SE Downloads.
·
Once you download your Java
implementation, follow the given instructions to install and configure the
setup.
·
Finally set PATH and JAVA_HOME environment
variables to refer to the directory that contains java and javac, typically
java_install_dir/bin and java_install_dir respectively.
·
If you are running Windows and installed the SDK in C:\jdk1.5.0_20, you would put the following lines in your C:\autoexec.bat file:
set PATH=C:\jdk1.5.0_20\bin;%PATH%
set JAVA_HOME=C:\jdk1.5.0_20
Setting up Web Server:
Tomcat
A number of Web Servers
that support JavaServer Pages and Servlets development are available in the
market. Some web servers are freely downloadable and Tomcat is one of them.
Apache Tomcat is an
open source software implementation of the JavaServer Pages and Servlet
technologies and can act as a standalone server for testing JSP and Servlets
and can be integrated with the Apache Web Server. Here are the steps to setup
Tomcat on your machine:
·
Download latest version of Tomcat from
http://tomcat.apache.org/.
·
Once you have downloaded the distribution, unpack it into a convenient location, for example C:\apache-tomcat-5.5.29 on Windows or /usr/local/apache-tomcat-5.5.29 on Linux/Unix, and create a CATALINA_HOME environment variable pointing to that location.
Tomcat can be started by executing the following command on a Windows machine:
%CATALINA_HOME%\bin\startup.bat
or
C:\apache-tomcat-5.5.29\bin\startup.bat
Further information about configuring and running Tomcat can be found in the documentation included with Tomcat, as well as on the Tomcat web site: http://tomcat.apache.org
ABOUT RDBMS
A database management system, or DBMS, gives the user access to their data and helps them transform the data into information. Such database management systems include dBase, Paradox, IMS and SQL Server. These systems allow users to create, update and extract information from their databases.
A database is a structured collection of data. Data refers to the characteristics of people, things and events. SQL Server stores each data item in its own field. In SQL Server, the fields relating to a particular person, thing or event are bundled together to form a single complete unit of data, called a record (it can also be referred to as a row or an occurrence). Each record is made up of a number of fields. No two fields in a record can have the same field name.
During an SQL
Server Database design project, the analysis of your business needs identifies
all the fields or attributes of interest.
If your business needs change over time, you define any additional
fields or change the definition of existing fields.
SQL server tables
SQL Server stores
records relating to each other in a table.
Different tables are created for the various groups of information.
Related tables are grouped together to form a database.
Primary key
Every table in SQL
Server has a field or a combination of fields that uniquely identifies each
record in the table. The Unique
identifier is called the Primary Key, or simply the Key. The primary key provides the means to
distinguish one record from all other in a table. It allows the user and the database system to
identify, locate and refer to one particular record in the database.
Relational database
Sometimes all the information of interest to a business operation can be stored in one table. When it cannot, SQL Server makes it very easy to link the data in multiple tables. Matching an employee to the department in which they work is one example. This is what makes SQL Server a relational database management system, or RDBMS: it stores data in two or more tables and enables you to define relationships between the tables.
Foreign key
When a field in one table matches the primary key of another table, that field is referred to as a foreign key. A foreign key is a field or a group of fields in one table whose values match those of the primary key of another table.
Referential integrity
Not only does SQL
Server allow you to link multiple tables, it also maintains consistency between
them. Ensuring that the data among
related tables is correctly matched is referred to as maintaining referential
integrity.
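A minimal sketch of such a consistency check in Java, assuming hypothetical branch and farmer tables where each farmer row carries a branch id as its foreign key:

```java
import java.util.List;
import java.util.Set;

public class ReferentialIntegrityDemo {
    // Referential integrity holds when every foreign key value in the
    // referencing table matches some primary key in the referenced table.
    static boolean referencesValid(Set<Integer> branchIds, List<Integer> farmerBranchIds) {
        return branchIds.containsAll(farmerBranchIds);
    }
}
```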
Data abstraction
A major purpose of
a database system is to provide users with an abstract view of the data. This system hides certain details of how the
data is stored and maintained. Data abstraction is divided into three levels.
Physical level: This is the lowest level of abstraction at
which one describes how the data are actually stored.
Conceptual Level: At this level of database abstraction, all the attributes of interest and what data are actually stored are described, along with the entities and the relationships among them.
View level: This is the highest level of abstraction at
which one describes only part of the database.
Advantages of RDBMS
·
Redundancy can be avoided
·
Inconsistency can be
eliminated
·
Data can be Shared
·
Standards can be enforced
·
Security restrictions can be
applied
·
Integrity can be maintained
·
Conflicting requirements can
be balanced
·
Data independence can be
achieved.
Disadvantages of DBMS
A significant disadvantage of the DBMS approach is cost. In addition to the cost of purchasing or developing the software, the hardware has to be upgraded to allow for the extensive programs and the workspace required for their execution and storage. While centralization reduces duplication, the lack of duplication requires that the database be adequately backed up so that in case of failure the data can be recovered.
Features of SQL server (RDBMS)
SQL SERVER is one
of the leading database management systems (DBMS) because it is the only
Database that meets the uncompromising requirements of today’s most demanding
information systems. From complex
decision support systems (DSS) to the most rigorous online transaction
processing (OLTP) application, even application that require simultaneous DSS
and OLTP access to the same critical data, SQL Server leads the industry in
both performance and capability.
SQL SERVER is a truly portable, distributed, and open DBMS that
delivers unmatched performance, continuous operation and support for every
database.
SQL SERVER RDBMS is high performance fault tolerant DBMS which is
specially designed for online transactions processing and for handling large
database application.
SQL SERVER with the transaction processing option offers two features which contribute to very high levels of transaction processing throughput:
·
The row level lock manager
·
Enterprise wide data sharing
The unrivaled
portability and connectivity of the SQL SERVER DBMS enables all the systems in
the organization to be linked into a singular, integrated computing resource.
Portability
SQL SERVER is fully portable to more than 80 distinct hardware and operating system platforms, including UNIX, MS-DOS, OS/2, Macintosh and dozens of proprietary platforms. This portability gives complete freedom to choose the database server platform that meets the system requirements.
Open systems
SQL SERVER offers a leading implementation of industry-standard SQL. SQL Server's open architecture integrates SQL SERVER and non-SQL SERVER DBMSs with the industry's most comprehensive collection of tools, applications and third-party software products. SQL Server's open architecture provides transparent access to data from other relational databases and even non-relational databases.
Distributed data sharing
SQL Server's networking and distributed database capabilities allow you to access data stored on a remote server with the same ease as if the information were stored on a single local computer. A single SQL statement can access data at multiple sites. You can store data where system requirements such as performance, security or availability dictate.
Unmatched performance
The most advanced
architecture in the industry allows the SQL SERVER DBMS to deliver unmatched
performance.
Sophisticated concurrency control
Real world applications demand concurrent access to critical data. With most database systems, applications become "contention bound", where performance is limited not by CPU power or by disk I/O, but by users waiting on one another for data access. SQL Server employs full, unrestricted row-level locking and contention-free queries to minimize, and in many cases entirely eliminate, contention wait times.
No I/O bottlenecks
SQL Server's fast commit, group commit and deferred write technologies dramatically reduce disk I/O bottlenecks. While some databases write whole data blocks to disk at commit time, SQL Server commits transactions with at most one sequential write to the log file. On high throughput systems, a single sequential write typically group-commits multiple transactions. Data read by a transaction remains in shared memory so that other transactions may access that data without reading it again from disk. Since fast commits write all data necessary for recovery to the log file, modified blocks are written back to the database independently of the transaction commit.
5.
SYSTEM DESIGN
5.1
OBJECT ORIENTED ANALYSIS AND DESIGN
Object-oriented analysis and design (OOAD) is a software engineering approach that models a system as a
group of interacting objects. Each object represents some entity
of interest in the system being modeled, and is characterized by its class, its
state (data elements), and its behavior. Various models can be created to show
the static structure, dynamic behavior, and run-time deployment of these
collaborating objects. There are a number of different notations for
representing these models, such as the Unified
Modeling Language (UML).
Object-oriented analysis (OOA) applies object-modeling
techniques to analyze the functional
requirements for a system. Object-oriented
design (OOD) elaborates the analysis models to produce implementation
specifications. OOA focuses on what the system does, OOD on how the system does it.
Object-oriented analysis (OOA) is the process of analyzing a
task (also known as a problem domain) to develop a conceptual model that can then be used to
complete the task. A typical OOA model would describe computer software that
could be used to satisfy a set of customer-defined requirements. During the
analysis phase of problem-solving, the analyst might consider a written
requirements statement, a formal vision document, or interviews with stakeholders
or other interested parties. The task to be addressed might be divided into
several subtasks (or domains), each representing a different business,
technological, or other areas of interest.
Each subtask would be analyzed separately. Implementation
constraints, (e.g., concurrency, distribution, persistence, or how the system is to be built)
are not considered during the analysis phase; rather, they are addressed during
object-oriented design (OOD).
The conceptual model that results from OOA will typically
consist of a set of use cases, one or more UML class diagrams, and a number of interaction diagrams. It may also include some kind of user interface mock-up.
5.1.1
CLASS DIAGRAM
The class diagram is the main building block of object-oriented modeling. It is used both for general conceptual modeling of the systematics of the application and for detailed modeling, translating the models into programming code. Class diagrams can also be used for data modeling. The classes in a class diagram represent both the main objects and interactions in the application and the classes to be programmed.
In the diagram, classes are represented with boxes which
contain three parts:
·
The upper part holds the name of the
class
·
The middle part contains the attributes
of the class
·
The bottom part gives the methods or
operations the class can take or undertake
In the design of a system, a number of classes are identified and grouped together in a class diagram which helps to determine the static relations between those objects. With detailed modeling, the classes of the conceptual design are often split into a number of subclasses.
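The three compartments map directly onto the parts of a class in code. As a hypothetical illustration (not the project's actual source), a Farmer class from this system might be drawn and then programmed as:

```java
// Upper compartment: the class name.
public class Farmer {
    // Middle compartment: the attributes of the class.
    private final String name;
    private final double landAreaAcres;

    public Farmer(String name, double landAreaAcres) {
        this.name = name;
        this.landAreaAcres = landAreaAcres;
    }

    // Bottom compartment: the operations the class can undertake.
    public String getName() { return name; }
    public double getLandAreaAcres() { return landAreaAcres; }
}
```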
7.
PSEUDO CODE
Adminregbranch.jsp
<%@ page
contentType="text/html; charset=iso-8859-1" language="java"
import="java.sql.*" errorPage="" %>
<!DOCTYPE html
PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html
xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta
http-equiv="Content-Type" content="text/html;
charset=iso-8859-1" />
<title>registerBranch</title>
<%
classfile.Design ds=new classfile.Design();
String head=ds.gethead();
String ubody=ds.adminUpperBody();
String lbody=ds.getlbody();
%>
<%=head%>
<script type="text/javascript"
language="javascript" src="myscript.js"></script>
<script
language="javascript" type="text/javascript">
function datadelete()
{
var chk=confirm("Sure to
Delete?");
if(chk==true)
{
form.action="adminregbranchdelete.jsp";
}
else
return chk
}
function
databranchinsert()
{
if(form.ddldistrict.value=="0")
{
form.ddldistrict.focus();
alert("Select District");
return false;
}
else if(form.txtpincode.value=="")
{
form.txtpincode.focus();
alert("Enter Pin Code");
return false;
}
else if(form.ddlstate.value=="0")
{
form.ddlstate.focus();
alert("Select State");
return false;
}
else if(form.txtcountry.value=="")
{
form.txtcountry.focus();
alert("Enter Country");
return false;
}
else if(form.txtaddress.value=="")
{
form.txtaddress.focus();
alert("Enter Address");
return false;
}
else
{
form.action="adminregbranchsave.jsp";
}
}
function
databranchupdate()
{
form.action="branchupdateupdate.jsp";
}
</script>
</head>
<body>
<form name="form"
method="post">
<table align="center">
<tr>
<td><%=ubody%>
<br /><br /><br />
<h2
align="center">RegisterBranch</h2>
<table align="center"
border="2">
<tr>
<td>District</td>
<td><select
name="ddldistrict">
<option
value="0">Select</option>
<option>Gulbarga</option>
<option>Bidar</option>
<option>Hubli</option>
</select>
</td>
</tr>
<tr>
<td>Pin
Code</td>
<td><input
name="txtpincode" onkeypress="return isNumber(event)"
type="text"/></td>
</tr>
<tr>
<td>State</td>
<td><select
name="ddlstate">
<option
value="0">Select</option>
<option>Karnataka</option>
<option>Andra
Pradesh</option>
<option>Uttar
Pradesh</option>
<option>Kerela</option>
</select></td>
</tr>
<tr>
<td>Country</td>
<td><input
value="India" readonly type="text"
name="txtcountry"/>
</td>
</tr>
<tr>
<td>Address</td>
<td><textarea name="txtaddress"
></textarea></td>
</tr>
<tr>
<td
colspan="5" align="center">
<input onclick="return databranchinsert()"
name="btnregister" type="submit"
value="Register"/>
<input type="reset" name="rstreset"
value="Reset"/>
</td> </tr>
</table> <br/><br/>
<%
try
{
Class.forName("org.gjt.mm.mysql.Driver");
Connection
cn=DriverManager.getConnection("jdbc:mysql://localhost:3306/nirmalkrushi","","");
Statement st=cn.createStatement();
ResultSet
rs=st.executeQuery("select * from tblbranch;");
%>
<table align="center" border="2">
<tr>
<td><b>Branch Id</b></td>
<td><b>District</b></td>
<td><b>Pin Code</b></td>
<td><b>State</b></td>
<td><b>Country</b></td>
<td><b>Address</b></td>
<td><input type="submit"
onclick="datadelete()" value="Delete"/></td>
<td><input type="submit"
onclick="databranchupdate()" value="Update"/></td>
</tr>
<%
while(rs.next())
{
%>
<tr>
<td><%=rs.getInt(1)%></td>
<td><%=rs.getString(2)%></td>
<td><%=rs.getInt(3)%></td>
<td><%=rs.getString(4)%></td>
<td><%=rs.getString(5)%></td>
<td><%=rs.getString(6)%></td>
<td><input
type="checkbox" value="<%=rs.getString(1)%>"
name="chkdelete"/></td>
<td><input
type="checkbox" value="<%=rs.getString(1)%>"
name="chkupdate"/></td>
</tr>
<%
}
}
catch(Exception exe)
{
System.out.print(exe);
}
%>
</table><%=lbody%></td>
</tr>
</table>
</form>
</body>
</html>
Farmerupdate.jsp
<%@ page
contentType="text/html; charset=iso-8859-1" language="java"
import="java.sql.*" errorPage="" %>
<!DOCTYPE html
PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta
http-equiv="Content-Type" content="text/html;
charset=iso-8859-1" />
<title>Farmer
Update</title>
<%
classfile.Design ds=new classfile.Design();
String head=ds.gethead();
String ubody=ds.adminUpperBody();
String lbody=ds.getlbody();
%>
<%= head %>
<script
type="text/javascript" language="javascript">
function dataupdate()
{
var chk=confirm("Sure want to Update?");
if(chk==true)
{
form.action="farmerupdateupdate.jsp";
}
else
return chk
}
</script>
</head>
<body>
<form
name="form" method="post">
<table
align="center">
<tr>
<td><%=ubody%>
<br/><br/><br/>
<%
try
{
Class.forName("org.gjt.mm.mysql.Driver");
Connection
cn=DriverManager.getConnection("jdbc:mysql://localhost:3306/nirmalkrushi","","");
Statement st=cn.createStatement();
ResultSet rs=st.executeQuery("select * from
admin_register_farmers;");
%>
<div
style="overflow:scroll;width:600px;height:250px;">
<br/><br/><br/>
<table align="center" border="2">
<tr>
<td>Farmer ID</td>
<td>Farmer Name</td>
<td>Last Name</td>
<td>Address</td>
<td>Contact Number</td>
<td>Landline Number</td>
<td>LandArea(Acres)</td>
<td>Owner</td>
<td>Paddy</td>
<td>Cereals</td>
<td>Pulse</td>
<td>Wheat</td>
<td>Jowar</td>
<td>Cotton</td>
<td>Sugarcane</td>
<td>Sunflower</td>
</tr>
<%
while(rs.next())
{
%>
<tr>
<td><input
type="text" value="<%=rs.getInt(1)%>"
onKeyPress="return isNumber(event)" name="txtid" /></td>
<td><input
onkeypress="return isCharacter(event)" type="text"
value="<%=rs.getString(2)%>" name="txfarmername" /></td>
<td><input
onkeypress="return isCharacter(event)" type="text"
value="<%=rs.getString(3)%>" maxlength="10"
name="txtlastname"
/></td>
<td><input
type="text" value="<%=rs.getString(4)%>"
name="txtaddress"
/></td>
<td><input
type="text" value="<%=rs.getString(5)%>"
name="txtcontact"
/></td>
<td><input
type="text" value="<%=rs.getString(6)%>"
name="txtlandline"
/></td>
<td><input
type="text" value="<%=rs.getFloat(7)%>"
name="txtlandarea"
/></td>
<td><input
type="text" value="<%=rs.getString(8)%>"
name="sltowner"
/></td>
<td><input
type="text" value="<%=rs.getString(9)%>"
name="chkpaddy"
/></td>
<td><input
type="text" value="<%=rs.getString(10)%>"
name="chkcereals"
/></td>
<td><input
type="text" value="<%=rs.getString(11)%>"
name="chkpulse"
/></td>
<td><input
type="text" value="<%=rs.getString(12)%>"
name="chkwheat"
/></td>
<td><input
type="text" value="<%=rs.getString(13)%>"
name="chkjowar"
/></td>
<td><input
type="text" value="<%=rs.getString(14)%>"
name="chkcotton"
/></td>
<td><input
type="text" value="<%=rs.getString(15)%>"
name="chksugarcane"
/></td>
<td><input
type="text" value="<%=rs.getString(16)%>" name="chksunflower" /></td>
</tr>
</table>
</div> <table
align="center" border="2">
<tr>
<td><input
type="submit" value="Update" onclick="return
dataupdate()"/></td>
<td><input
type="reset" value="Cancel" /></td>
</tr>
</table>
<%
}
} catch(Exception
exe)
{
System.out.print(exe);
}
%>
<%=lbody%></td>
</tr>
</table>
</form>
</body></html>
8.
TESTING AND IMPLEMENTATION
Introduction
Testing is a process which reveals errors in the program. It is the major quality measure employed during software development. During testing, the program is executed with a set of test cases and the output of the program for the test cases is evaluated to determine whether the program is performing as it is expected to perform.
The importance of software testing and its implications with respect to software quality cannot be overemphasized. The development of a software system involves a series of production activities where opportunities for the injection of human error are enormous. Requirements may be stated erroneously or imperfectly, and errors may also be introduced at later development stages. Because of the human inability to perform and communicate with perfection, software development is accompanied by a quality assurance activity.
Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and code generation. The increasing visibility of software as a system element and the attendant "costs" associated with software failure are motivating forces for well-planned, thorough testing. It is not unusual for a software development organization to expend between 30 and 40 percent of total project effort on testing.
Once source code has been generated, software must be tested to uncover and correct as many errors as possible before delivering it to the customer.
Principles
of testing
·
All tests should be traceable to
customer requirement.
·
Tests should be planned before testing
begins.
·
The Pareto principle applies to software testing.
·
Testing should begin in small scale and
progress towards testing in large scale.
·
Exhaustive testing is not possible.
·
To be most effective, an independent
third party should conduct testing.
Attributes
of good Testing
·
A good test has a high probability of finding an error. To achieve this goal, the tester must understand the software and attempt to develop a mental picture of how the software might fail.
·
A good test is not redundant. Testing time and resources are limited. There is no point in conducting a test that has the same purpose as another test. Every test should have a different purpose.
·
A good test should be "best of breed". In a group of tests that have a similar intent, time and resource limitations may militate towards the execution of only a subset of these tests. In such cases, the test that has the highest likelihood of uncovering a whole class of errors should be used.
·
A good test should be neither too simple nor too complex. Although it is sometimes possible to combine a series of tests into one test case, the possible side effects associated with this approach may mask errors. In general, each test should be executed separately.
Testing
Objective
The
following are the objectives of the testing.
·
Finding recognizable errors.
·
Tracing and correcting undiscovered
errors.
·
To uncover different classes of errors
with minimum amount of time and effort.
Test
Approaches
Black
Box Testing
Black
box testing is done to find
·
Incorrect or missing functions.
·
Interface errors.
·
Errors in external database access.
·
Performance errors.
·
Initialization and termination errors.
White
Box Testing
This
test allows the tester to
·
Check whether all independent paths within a module have been exercised at least once.
·
Exercise all logical decision on their
true and false sides.
·
Exercise all loops at their boundaries and within their bounds.
·
Exercise the internal data structure to
ensure their validity.
It also ensures that the necessary validity checks and validity lookups have been provided to validate data entry.
Testing
Strategies
Unit
Testing
Individual components are tested to
ensure that they operate correctly. Each component is tested independently
without other system components.
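For example, the pin-code check that the registration page performs in JavaScript could be isolated into a small server-side method and unit-tested on its own; the class and method below are a hypothetical sketch, not part of the project's code:

```java
public class PincodeValidator {
    // A valid Indian pin code is exactly six digits.
    static boolean isValidPincode(String s) {
        return s != null && s.matches("\\d{6}");
    }
}
```

Testing this component requires no web server, no database and no other module: the unit is exercised entirely in isolation.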
Module
Testing
A module is a collection of dependent components such as an object class, an abstract data type, or some looser collection of procedures and functions. A module encapsulates related components, so it can be tested without other system modules.
Subsystem
Testing
This phase involves testing collection
of modules, which have been integrated into subsystems. Subsystems may be
independently designed and implemented. The most common problems that arise in
the large software system are subsystems interface mismatches. The subsystem
test process should therefore concentrate on the detection of interface errors
by rigorously exercising these interfaces.
System
Testing
The subsystems are integrated to make up the entire system. The testing process is concerned with finding errors which result from unanticipated interactions between subsystems and system components. It is also concerned with validating that the system meets its functional and non-functional requirements.
Integration
Testing
Top-down integration starts with the main routine and its immediate subordinate routines in the system structure. After this top-level "skeleton" has been thoroughly tested, it becomes the test harness for its immediately subordinate routines. Top-down integration requires the use of program stubs to simulate the effect of lower-level routines that are called by those being tested.
Top-down
integration testing offers several advantages
·
System integration is distributed
throughout the implementation phase.
·
Modules are integrated as they are
developed.
·
Top-level interface are tested first and
most often.
·
The top-level routines provide a natural
test harness for lower-level routines.
·
Errors are localized to the new modules
and interface that are being added.
While it may appear that top-down integration is always preferable, there are many situations in which it is not possible to adhere to a strict top-down approach. It may be necessary to test certain critical low-level modules first. The sandwich testing strategy may be preferred in these situations.
Sandwich integration is predominately
top-down but bottom-up techniques are used on some modules and subsystems. This
mix alleviates many of the problems encountered in pure top-down testing and
retains the advantage of top-down integration at the subsystem and system
level.
Acceptance
Testing
This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by the system procurer rather than simulated test data.
Acceptance testing may reveal errors and omissions in the system requirements definition, because the real data exercise the system in different ways from the test data.
Acceptance
testing may also reveal the requirements problems where the system facilities
do not really meet the user’s need or system performance is unacceptable.
Example:
We tested for all the objectives that were stated in the project statement
whether they meet the requirements or not.
OTHER TESTING APPROACHES
Testing
can be done in two ways:
Ø Bottom
up approach
Ø Top
down approach
Bottom
up approach:
Testing can be performed starting from the smallest and lowest level modules, processing one at a time. For each module in bottom-up testing, a short driver program executes the module and provides the needed data, so that the module is asked to perform the way it will when embedded within the larger system.
Once the bottom-level modules are tested, attention turns to the modules on the next level that use them. These are tested individually and then linked with the previously examined lower-level modules.
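Such a driver can be as small as a main method that supplies known data and checks the result; in this sketch, BranchModule stands in for a hypothetical low-level module of the system:

```java
public class BottomUpDriver {
    // Hypothetical low-level module under test.
    static class BranchModule {
        static String formatAddress(String district, String pincode) {
            return district + " - " + pincode;
        }
    }

    public static void main(String[] args) {
        // The driver supplies the data the module would normally receive
        // from higher-level parts of the system, then checks the result.
        String out = BranchModule.formatAddress("Gulbarga", "585101");
        if (!out.equals("Gulbarga - 585101")) {
            throw new AssertionError("unexpected output: " + out);
        }
        System.out.println("BranchModule driver test passed");
    }
}
```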
Top
down approach:
This type of testing starts from the upper-level modules. Since the detailed activities usually performed in the lower-level routines are not yet available, stubs are written. A stub is a module shell called by an upper-level module; when reached, it returns a message to the calling module indicating that proper interaction occurred. No attempt is made to verify the correctness of the lower-level module.
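A stub of this kind can be sketched in Java; the rate-lookup interface and its canned return value are hypothetical, not taken from the project:

```java
public class StubDemo {
    // Lower-level routine that the upper-level module depends on,
    // but which is not yet implemented.
    interface RateLookup {
        double dailyRate(String product);
    }

    // The stub: a module shell that signals it was reached and returns
    // a canned value; its correctness is deliberately not verified.
    static class RateLookupStub implements RateLookup {
        public double dailyRate(String product) {
            System.out.println("stub reached for product: " + product);
            return 100.0;
        }
    }

    // Upper-level module under test, exercised against the stub.
    static double billFor(String product, int quintals, RateLookup lookup) {
        return lookup.dailyRate(product) * quintals;
    }
}
```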
Validation:
The system has been tested and implemented successfully, ensuring that all the requirements listed in the software requirements specification are completely fulfilled. In case of erroneous input, corresponding error messages are displayed.
IMPLEMENTATION
Implementation is the process of converting a new system design into operation. It is the phase that focuses on user training, site preparation, and file conversion for installing the system under consideration.
The important factor that should be considered here is that the conversion should not disrupt the functioning of the organization.
The objective is to put the tested system into
operation while holding costs, risks, and personnel irritation to a minimum.
In our project the conversion involves following
steps:
1. Conversion
begins with a review of the project plan, the system test documentation, and
the implementation plan. The parties involved are the user, the project team,
programmers, and operators.
2. The conversion portion of the implementation plan is finalized and approved.
3. Files
are converted.
4. Parallel processing between the existing and the new system is initiated.
5. Results
of the computer runs and operations for the new system are logged on a special
form.
6. Assuming
no problems, parallel processing is continued. Implementation details are
documented for reference.
7. Conversion
is completed at this stage. Plans for the post implementation review are
prepared. Following the review, the new system is officially operational.
The prime concern during the
conversion process is copying the old files into the new system.
Once a particular file is selected,
the next step is to specify the data to be converted. A file comparison program
is best used for verifying the accuracy of the copying process.
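A minimal file-comparison check of the kind described can be sketched in Java (the class and method names are hypothetical):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class FileCompare {
    // True only when the copied file is byte-for-byte identical to the
    // original; an unreadable file counts as a failed verification.
    static boolean sameContents(Path original, Path copy) {
        try {
            return Arrays.equals(Files.readAllBytes(original), Files.readAllBytes(copy));
        } catch (java.io.IOException e) {
            return false;
        }
    }
}
```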
Well-planned test files are
important for successful conversion. An audit trail was performed on the system
since it is the key to detect errors and fraud in the new system.
During implementation, user training is most important. In our web-based project no heavy training is required: only training in how to design and post the files, how to use the administration tools, and how to retrieve files.
A post-implementation review is an evaluation of a system in terms of the extent to which the system accomplishes its stated objectives and whether actual project costs exceed initial estimates. It is usually a review of major problems that need correcting and those that surfaced during the implementation phase. The team prepares a review plan around the type of evaluation to be done and the time frame for its completion. The plan considers administrative and personnel issues, system performance, and the changes that are likely to take place through maintenance.
9.
CONCLUSION
The project "Nirmal Krushi" provides a web-based application for the agriculture field. Farmers are allowed to upload the products grown by them so that retailers and customers can buy the products directly by ordering online. Here the farmers can view information on agricultural products, tools and government schemes and take benefit of them. With the help of this application, customers and retailers can purchase products online directly from farmers. This reduces the price of the products and benefits both the farmers and the customers.
10.
BIBLIOGRAPHY
·
FOR DEPLOYMENT AND PACKING ON SERVER
·
FOR SQL
·
FOR Templates
www.1000templates.com
·
SOFTWARE ENGINEERING (ROGER’S
PRESSMAN)
·
Java: The Complete Reference, Herbert Schildt.
·
Java, E. Balagurusamy.