STOP GLOBAL WARMING - (MCA)
STOP GLOBAL WARMING
ABSTRACT
This is a social website for encouraging people to abstain from various pollution causatives. It works on the principles of health promotion and strengthening society. It not only makes the users aware of the diseases caused by pollution but also of how to prevent them. It encourages, conducts and participates in investigations and research relating to problems of water, land and air pollution and their prevention, control and abatement.
Global warming refers to an average
increase in the Earth's temperature, which in turn causes changes in climate. A
warmer Earth may lead to changes in rainfall patterns, a rise in sea level, and
a wide range of impacts on plants, wildlife, and humans. When scientists talk
about the issue of climate change, their concern is about global warming caused
by human activities.
While the Earth's
climate has always changed naturally, for the first time human activity is now
a major force affecting the process, with potentially drastic consequences.
Huge volumes of fossil fuels in the form of gasoline, oil, coal and natural gas are used every day, releasing carbon dioxide. This, together with other emissions generated by human activity, such as methane and nitrous oxide, accentuates the natural 'greenhouse effect' that makes the Earth habitable. Carbon dioxide is the most important anthropogenic greenhouse gas, with annual emissions growing by about 80 per cent between 1970 and 2004.
INTRODUCTION
2.1.
INTRODUCTION TO PROJECT
· An increase in the average temperature of the earth's atmosphere (especially a sustained increase that causes climatic changes).
· An increase in the earth's atmospheric and oceanic temperatures, widely predicted to occur due to an increase in the greenhouse effect resulting especially from pollution.
· The progressive, gradual rise of the earth's surface temperature, thought to be caused by the greenhouse effect and responsible for changes in global climate patterns; an increase in the near-surface temperature of the Earth.
· The change in surface air temperature, referred to as the global temperature, brought about by the enhanced greenhouse effect, which is induced by emissions of greenhouse gases into the air.
· An increase in the average worldwide temperature primarily caused by fossil fuel burning and an increase of carbon dioxide in the atmosphere.
· Global warming refers to an average increase in the Earth's temperature, which in turn causes changes in climate. A warmer Earth may lead to changes in rainfall patterns, a rise in sea level, and a wide range of impacts on plants, wildlife, and humans. When scientists talk about the issue of climate change, their concern is global warming caused by human activities.
SYSTEM OVERVIEW
3.
SDLC METHODOLOGIES
3.1.
ANALYSIS MODEL
Mainly
there are four phases in the "Spiral
Model":
Ø Planning
Ø Risk Analysis
Ø Engineering
Ø Customer Evaluation
Planning: In this phase, the aims, options and constraints of the project are determined and documented. The aims and other specifications are fixed so as to determine the strategies/approaches to follow during the project life cycle.
Risk Analysis: This is the most significant phase of the Spiral Model. In this phase all the possible options that are available and helpful in developing a cost-efficient project are analyzed, and strategies are determined to employ the available resources. This phase has been added particularly to recognize and resolve all the possible risks in the Stop Global Warming project. If any indication shows some uncertainty in the requirements, prototyping may be used to proceed with the available data and find a possible software development solution so as to deal with potential modifications in the requirements.
Engineering: In this phase, the actual software development of the project is carried out. The output of the modules developed is passed through all the phases iteratively so as to obtain improvements in the same.
Customer Evaluation: In this phase, before releasing the developed product, the product is passed on to the customer so as to obtain the customer's views and suggestions. If something is missing or the desired result is not achieved, then the needs are identified again and all the possible problems/errors in the Stop Global Warming project are resolved. One can compare this with the TESTING phase.
The spiral
model, illustrated in below figure, combines the iterative nature of
prototyping with the controlled and systematic aspects of the waterfall model,
therein providing the potential for rapid development of incremental versions
of the software. In this model the
software is developed in a series of incremental releases with the early stages
being either paper models or prototypes. Later iterations become increasingly
more complete versions of the product.
Depending on the model it may have three to six task regions; our case will consider a six-task-region model.
These regions are:
1. The user communication task – to establish effective communication between developer and user.
2. The planning task – to define resources, timelines and other project-related information.
3. The risk analysis task – to assess both technical and management risks.
4. The engineering task – to build one or more representations of the application.
5. The construction and release task – to construct, test, install and provide user support (e.g., documentation and training).
6. The user evaluation task – to obtain customer feedback based on the evaluation of the software representation created during the engineering stage and implemented during the install stage.
The evolutionary
process begins at the centre position and moves in a clockwise
direction. Each traversal of the spiral typically results in a deliverable. For
example, the first and second spiral traversals may result in the production of
a product specification and a prototype, respectively. Subsequent traversals
may then produce more sophisticated versions of the software.
An important
distinction between the spiral model and other software models is the explicit
consideration of risk. There are no fixed phases such as specification or
design phases in the model and it encompasses other process models. For
example, prototyping may be used in one spiral to resolve requirement
uncertainties and hence reduce risks. This may then be followed by a
conventional waterfall development.
Ø Note
that each passage through the planning stage results in an adjustment to the
project plan.
Ø Each
of the regions is populated by a set of work tasks called a task set that are
adapted to characteristics of the project to be undertaken. For small projects
the number of tasks and their formality is low. Conversely, for large projects
the reverse is true.
Advantages
of the Spiral Model
Ø The spiral model is
a realistic approach to the development of large-scale software products
because the software evolves as the process progresses. In addition, the
developer and the client better understand and react to risks at each
evolutionary level.
Ø The model uses
prototyping as a risk reduction mechanism and allows for the development of
prototypes at any stage of the evolutionary development.
Ø It maintains a systematic stepwise approach, like the classic life cycle model, but incorporates it into an iterative framework that more closely reflects the real world.
Ø If employed correctly, this model should reduce risks before they become problematic, as technical risks are considered at all stages.
Disadvantages of the
Spiral Model
Ø Demands
considerable risk-assessment expertise
Ø It has not been employed as much as proven models (e.g. the waterfall model), and hence it may prove difficult to convince the client that this model is controllable and efficient.
3.2.
PROCESS MODEL
The
process model is typically used in structured analysis and design methods. Also called a data flow diagram (DFD), it
shows the flow of information through a system.
Each process transforms inputs into outputs.
The model generally starts with a
context diagram showing the system as a single process connected to external
entities outside of the system boundary.
This process explodes to a lower level DFD that divides the system into
smaller parts and balances the flow of information between parent and child
diagrams. Many diagram levels may be
needed to express a complex system.
Primitive processes, those that don't explode to a child diagram, are
usually described in a connected textual specification.
SYSTEM DEVELOPMENT
ENVIRONMENT
4.
SYSTEM REQUIREMENT SPECIFICATIONS
4.1
Software Requirements:
·
WINDOWS OS (XP / 2000 / 2000 Server / 2003 Server)
·
Visual Studio .Net 2008 Enterprise
Edition
·
Internet Information Server 5.0 (IIS)
·
Visual Studio .Net Framework (Minimal
for Deployment) version 3.5
·
SQL Server 2005 Enterprise Edition
4.2
Hardware Requirements:
·
PIV 2.8 GHz Processor and Above
·
RAM 512MB and Above
·
HDD 40 GB Hard Disk Space and Above
4.3.
INTRODUCTION
TO .NET FRAMEWORK
The Microsoft .NET Framework is a software
technology that is available with several Microsoft Windows operating systems.
It includes a large library of pre-coded solutions to common programming
problems and a virtual machine that manages the execution of
programs written specifically for the framework. The .NET Framework is a key
Microsoft offering and is intended to be used by most new applications created
for the Windows platform.
The
pre-coded solutions that form the framework's Base Class Library cover a large
range of programming needs in a number of areas,
including user interface, data access, database connectivity, cryptography, web application development, numeric algorithms, and network
communications. The
class library is used by programmers, who combine it with their own code to produce applications.
Programs
written for the .NET Framework execute in a software environment that manages the program's runtime requirements. Also part of the .NET Framework, this runtime
environment is known as the Common Language
Runtime (CLR). The CLR
provides the appearance of an application virtual
machine so that
programmers need not consider the capabilities of the specific CPU that will execute the program. The
CLR also provides other important services such as security, memory management, and exception handling. The class library and the CLR
together compose the .NET Framework.
Principal design features
Interoperability
Because
interaction between new and older applications is commonly required, the .NET
Framework provides means to access functionality that is implemented in
programs that execute outside the .NET environment. Access to COM components is provided in the
System.Runtime.InteropServices and System.EnterpriseServices namespaces of the
framework; access to other functionality is provided using the P/Invoke feature.
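As a brief, hedged illustration of the P/Invoke feature (the Win32 MessageBox call is used purely as a familiar example and is not part of this project):

using System;
using System.Runtime.InteropServices;

class PInvokeDemo
{
    // P/Invoke declaration: calls the unmanaged MessageBox function in user32.dll.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        // Managed code calling into native Windows code through the declared stub.
        MessageBox(IntPtr.Zero, "Hello from managed code", "P/Invoke sample", 0);
    }
}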
Common Runtime Engine
The
Common Language Runtime (CLR) is the virtual machine component of the .NET
framework. All .NET programs execute under the supervision of the CLR,
guaranteeing certain properties and behaviors in the areas of memory
management, security, and exception handling.
Base Class Library
The
Base Class Library (BCL), part of the Framework Class Library (FCL), is a
library of functionality available to all languages using the .NET Framework.
The BCL provides classes which encapsulate a number of common functions,
including file reading and writing, graphic rendering, database interaction and
XML document manipulation.
Simplified Deployment
Installation
of computer software must be carefully managed to ensure that it does not
interfere with previously installed software, and that it conforms to security
requirements. The .NET framework includes design features and tools that help
address these requirements.
Security
The
design is meant to address some of the vulnerabilities, such as buffer
overflows, that have been exploited by malicious software. Additionally, .NET
provides a common security model for all applications.
Portability
The
design of the .NET Framework allows it to theoretically be platform agnostic,
and thus cross-platform compatible. That is, a program written to use the
framework should run without change on any type of system for which the
framework is implemented. Microsoft's commercial implementations of the
framework cover Windows, Windows CE, and the Xbox 360. In addition, Microsoft submits the
specifications for the Common Language Infrastructure (which includes the core
class libraries, Common Type System, and the Common Intermediate Language), the
C# language, and the C++/CLI language to both ECMA and the ISO, making them
available as open standards. This makes it possible for third parties to create
compatible implementations of the framework and its languages on other
platforms.
Architecture
Visual
overview of the Common Language Infrastructure (CLI)
The core
aspects of the .NET framework
lie within the Common Language Infrastructure, or CLI. The purpose of the CLI is to provide a language-neutral
platform for application development and execution, including functions for
exception handling, garbage collection, security, and interoperability.
Microsoft's implementation of the CLI is called the Common Language Runtime or CLR.
Assemblies
The
intermediate CIL code is housed in .NET assemblies. As mandated by
specification, assemblies are stored in the Portable Executable (PE) format,
common on the Windows platform for all DLL and EXE files. The assembly consists
of one or more files, one of which must contain the manifest, which has the
metadata for the assembly. The complete name of an assembly (not to be confused
with the filename on disk) contains its simple text name, version number,
culture, and public key token. The public key token is a unique hash generated
when the assembly is compiled, thus two assemblies with the same public key
token are guaranteed to be identical from the point of view of the framework. A
private key can also be specified known only to the creator of the assembly and
can be used for strong naming and to guarantee that the assembly is from the
same author when a new version of the assembly is compiled (required to add an
assembly to the Global Assembly Cache).
All CIL code is self-describing through .NET metadata. The CLR checks the metadata to
ensure that the correct method is called. Metadata is usually generated by
language compilers but developers can create their own metadata through custom
attributes. Metadata contains information about the assembly, and is also used
to implement the reflective programming capabilities of .NET Framework.
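For illustration only, the following C# sketch (the attribute and class names are hypothetical, not part of this project) shows how a custom attribute adds metadata to an assembly and how reflection reads it back:

using System;

[AttributeUsage(AttributeTargets.Class)]
public class AuthorAttribute : Attribute
{
    public string Name { get; private set; }
    public AuthorAttribute(string name) { Name = name; }
}

[Author("Project Team")]
public class PollutionReport
{
    public static void Main()
    {
        // Reflection reads the attribute back from the assembly metadata.
        object[] attrs = typeof(PollutionReport).GetCustomAttributes(typeof(AuthorAttribute), false);
        foreach (AuthorAttribute a in attrs)
        {
            Console.WriteLine("Author recorded in metadata: " + a.Name);
        }
    }
}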
.NET
has its own security mechanism with two general features: Code Access Security
(CAS), and validation and verification. Code Access Security is based on evidence
that is associated with a specific assembly. Typically the evidence is the
source of the assembly (whether it is installed on the local machine or has
been downloaded from the intranet or Internet). Code Access Security uses
evidence to determine the permissions granted to the code. Other code can
demand that calling code is granted a specified permission. The demand causes
the CLR to perform a call stack walk: every assembly of each method in the call
stack is checked for the required permission; if any assembly is not granted
the permission a security exception is thrown.
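As a rough sketch of a Code Access Security demand (the folder path is an assumption for illustration), the Demand call below triggers the stack walk described above; if any caller lacks the permission, a SecurityException is thrown:

using System;
using System.Security.Permissions;

class CasDemo
{
    static void Main()
    {
        // Demand read access to a folder; the CLR walks the call stack to verify
        // that every caller has been granted this permission.
        FileIOPermission permission =
            new FileIOPermission(FileIOPermissionAccess.Read, @"C:\Data");
        permission.Demand();
        Console.WriteLine("Read permission granted for C:\\Data");
    }
}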
When an assembly is loaded the CLR
performs various tests. Two such tests are validation and verification. During
validation the CLR checks that the assembly contains valid metadata and CIL,
and whether the internal tables are correct. Verification is not so exact. The
verification mechanism checks to see if the code does anything that is
'unsafe'. The algorithm used is quite conservative; hence occasionally code
that is 'safe' does not pass. Unsafe code will only be executed if the assembly
has the 'skip verification' permission, which generally means code that is
installed on the local machine.
.NET Framework uses appdomains as a
mechanism for isolating code running in a process. Appdomains can be created
and code loaded into or unloaded from them independent of other appdomains.
This helps increase the fault tolerance of the application, as faults or
crashes in one appdomain do not affect rest of the application. Appdomains can
also be configured independently with different security privileges. This can
help increase the security of the application by isolating potentially unsafe
code. The developer, however, has to split the application into sub domains; it
is not done by the CLR.
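A minimal, illustrative sketch of creating and unloading an application domain (the domain name is arbitrary); code loaded into the new domain would not bring down the default domain if it crashed:

using System;

class AppDomainDemo
{
    static void Main()
    {
        // Create an isolated application domain, then unload it.
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox");
        Console.WriteLine("Created domain: " + sandbox.FriendlyName);
        AppDomain.Unload(sandbox);
        Console.WriteLine("Domain unloaded.");
    }
}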
Class library
Namespaces in the BCL:
· System
· System.CodeDom
· System.Collections
· System.Diagnostics
· System.Globalization
· System.IO
· System.Resources
· System.Text
· System.Text.RegularExpressions
Microsoft
.NET Framework includes a set of standard class libraries. The class library is
organized in a hierarchy of namespaces. Most of the built-in APIs are part of
either
System.*
or Microsoft.*
namespaces. It
encapsulates a large number of common functions, such as file reading and
writing, graphic rendering, database interaction, and XML document
manipulation, among others. The .NET class libraries are available to all .NET
languages. The .NET Framework class library is divided into two parts: the Base Class Library and the Framework Class Library.
The Base Class Library (BCL) includes a small subset of the entire
class library and is the core set of classes that serve as the basic API of the
Common Language Runtime. The
classes in
mscorlib.dll
and some
of the classes in System.dll
and System.core.dll
are
considered to be a part of the BCL. The BCL classes are available in both the .NET Framework and its alternative implementations, including the .NET Compact Framework, Microsoft Silverlight and Mono.
The Framework Class Library (FCL) is a superset of the BCL classes and
refers to the entire class library that ships with .NET Framework. It includes
an expanded set of libraries, including Windows Forms, ADO.NET, ASP.NET, Language
Integrated Query, Windows
Presentation Foundation, Windows
Communication Foundation among others. The FCL is much larger in scope than standard libraries
for languages like C++, and comparable in scope to the standard libraries
of Java.
The
.NET Framework CLR frees the developer from the burden of managing memory
(allocating and freeing up when done); instead it does the memory management
itself. To this end, memory for instantiations of .NET types (objects) is allocated contiguously from the managed heap, a pool of memory managed
by the CLR. As long as there exists a reference to an object, which might be
either a direct reference to an object or via a graph of objects, the object is
considered to be in use by the CLR. When there is no reference to an object,
and it cannot be reached or used, it becomes garbage. However, it still holds
on to the memory allocated to it. .NET Framework includes a garbage collector
which runs periodically, on a separate thread from the application's thread,
that enumerates all the unusable objects and reclaims the memory allocated to
them.
The .NET Garbage Collector (GC) is a
non-deterministic, compacting, mark-and-sweep garbage collector. The GC runs
only when a certain amount of memory has been used or there is enough pressure
for memory on the system. Since it is not guaranteed when the conditions to
reclaim memory are reached, the GC runs are non-deterministic. Each .NET
application has a set of roots, which are pointers to objects on the managed
heap (managed objects). These include references to static objects and
objects defined as local variables or method parameters currently in scope, as
well as objects referred to by CPU registers.
When the GC runs, it pauses the application, and for each object
referred to in the root, it recursively enumerates all the objects reachable
from the root objects and marks them as reachable. It uses .NET metadata and
reflection to discover the objects encapsulated by an object, and then
recursively walk them. It then enumerates all the objects on the heap (which
were initially allocated contiguously) using reflection. All objects not marked
as reachable are garbage. This
is the mark phase. Since
the memory held by garbage is not of any consequence, it is considered free
space. However, this leaves chunks of free space between objects which were
initially contiguous. The objects are then compacted together by copying them over to the free space to make them contiguous again. Any
reference to an object invalidated by moving the object is updated to reflect
the new location by the GC. The
application is resumed after the garbage collection is over.
The GC used by .NET Framework is
actually generational. Objects are assigned a generation;
newly created objects belong to Generation 0. The objects that survive a
garbage collection are tagged as Generation 1, and the Generation 1
objects that survive another collection are Generation 2 objects. The
.NET Framework uses up to Generation 2 objects.
Higher generation objects are garbage collected less frequently than
lower generation objects. This helps increase the efficiency of garbage
collection, as older objects tend to have a larger lifetime than newer
objects. Thus, by removing older (and
thus more likely to survive a collection) objects from the scope of a
collection run, fewer objects need to be checked and compacted.
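The generational behaviour can be observed with a small, illustrative C# sketch (forcing a collection here is purely for demonstration; production code normally leaves collection timing to the runtime):

using System;

class GcGenerationDemo
{
    static void Main()
    {
        object data = new object();
        // A newly created object normally starts in Generation 0.
        Console.WriteLine("Generation after creation: " + GC.GetGeneration(data));

        GC.Collect();                  // force a collection (demonstration only)
        GC.WaitForPendingFinalizers();

        // A surviving object is typically promoted to a higher generation.
        Console.WriteLine("Generation after a collection: " + GC.GetGeneration(data));
    }
}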
Versions:
Microsoft started development on the .NET Framework in the late 1990s
originally under the name of Next Generation Windows Services (NGWS). By late
2000 the first beta versions of .NET 1.0 were released.
Version    Version Number    Release Date
1.0        1.0.3705.0        2002-01-05
1.1        1.1.4322.573      2003-04-01
2.0        2.0.50727.42      2005-11-07
3.0        3.0.4506.30       2006-11-06
3.5        3.5.21022.8       2007-11-09
ASP.NET
SERVER APPLICATION DEVELOPMENT
Server-side
applications in the managed world are implemented through runtime hosts.
Unmanaged applications host the common language runtime, which allows your
custom managed code to control the behavior of the server. This model provides
you with all the features of the common language runtime and class library
while gaining the performance and scalability of the host server.
The following
illustration shows a basic network schema with managed code running in
different server environments. Servers such as IIS and SQL Server can perform
standard operations while your application logic executes through the managed
code.
SERVER-SIDE
MANAGED CODE
ASP.NET is
the hosting environment that enables developers to use the .NET Framework to
target Web-based applications. However, ASP.NET is more than just a runtime
host; it is a complete architecture for developing Web sites and
Internet-distributed objects using managed code. Both Web Forms and XML Web
services use IIS and ASP.NET as the publishing mechanism for applications, and
both have a collection of supporting classes in the .NET Framework.
XML Web
services, an important evolution in Web-based technology, are distributed,
server-side application components similar to common Web sites. However, unlike
Web-based applications, XML Web services components have no UI and are not
targeted for browsers such as Internet Explorer and Netscape Navigator. Instead,
XML Web services consist of reusable software components designed to be
consumed by other applications, such as traditional client applications,
Web-based applications, or even other XML Web services. As a result, XML Web
services technology is rapidly moving application development and deployment
into the highly distributed environment of the Internet.
If you have
used earlier versions of ASP technology, you will immediately notice the
improvements that ASP.NET and Web Forms offers. For example, you can develop
Web Forms pages in any language that supports the .NET Framework. In addition,
your code no longer needs to share the same file with your HTTP text (although
it can continue to do so if you prefer). Web Forms pages execute in native
machine language because, like any other managed application, they take full
advantage of the runtime. In contrast, unmanaged ASP pages are always scripted
and interpreted. ASP.NET pages are faster, more functional, and easier to
develop than unmanaged ASP pages because they interact with the runtime like
any managed application.
The .NET
Framework also provides a collection of classes and tools to aid in development
and consumption of XML Web services applications. XML Web services are built on
standards such as SOAP (a remote procedure-call protocol), XML (an extensible
data format), and WSDL (the Web Services Description Language). The .NET
Framework is built on these standards to promote interoperability with
non-Microsoft solutions.
For example,
the Web Services Description Language tool included with the .NET Framework SDK
can query an XML Web service published on the Web, parse its WSDL description,
and produce C# or Visual Basic source code that your application can use to
become a client of the XML Web service. The source code can create classes
derived from classes in the class library that handle all the underlying
communication using SOAP and XML parsing. Although you can use the class
library to consume XML Web services directly, the Web Services Description Language
tool and the other tools contained in the SDK facilitate your development
efforts with the .NET Framework.
If you
develop and publish your own XML Web service, the .NET Framework provides a set
of classes that conform to all the underlying communication standards, such as
SOAP, WSDL, and XML. Using those classes enables you to focus on the logic of
your service, without concerning yourself with the communications
infrastructure required by distributed software development.
Finally, like Web Forms pages in the
managed environment, your XML Web service will run with the speed of native
machine language using the scalable communication of IIS.
ACTIVE SERVER PAGES.NET
ASP.NET
is a programming framework built on the common language runtime that can be used
on a server to build powerful Web applications. ASP.NET offers several
important advantages over previous Web development models:
·
Enhanced
Performance. ASP.NET is compiled common language
runtime code running on the server. Unlike its interpreted predecessors,
ASP.NET can take advantage of early binding, just-in-time compilation, native
optimization, and caching services right out of the box. This amounts to
dramatically better performance before you ever write a line of code.
·
World-Class
Tool Support. The ASP.NET framework is complemented
by a rich toolbox and designer in the Visual Studio integrated development
environment. WYSIWYG editing, drag-and-drop server controls, and automatic
deployment are just a few of the features this powerful tool provides.
·
Power
and Flexibility. Because ASP.NET is based on
the common language runtime, the power and flexibility of that entire platform
is available to Web application developers. The .NET Framework class library,
Messaging, and Data Access solutions are all seamlessly accessible from the
Web. ASP.NET is also language-independent, so you can choose the language that
best applies to your application or partition your application across many
languages. Further, common language runtime interoperability guarantees that
your existing investment in COM-based development is preserved when migrating
to ASP.NET.
·
Simplicity. ASP.NET makes it easy to perform common tasks, from simple form
submission and client authentication to deployment and site configuration. For
example, the ASP.NET page framework allows you to build user interfaces that
cleanly separate application logic from presentation code and to handle events
in a simple, Visual Basic - like forms processing model. Additionally, the
common language runtime simplifies development, with managed code services such
as automatic reference counting and garbage collection.
·
Manageability. ASP.NET employs a text-based, hierarchical configuration system,
which simplifies applying settings to your server environment and Web applications.
Because configuration information is stored as plain text, new settings may be
applied without the aid of local administration tools. This "zero local
administration" philosophy extends to deploying ASP.NET Framework
applications as well. An ASP.NET Framework application is deployed to a server
simply by copying the necessary files to the server. No server restart is
required, even to deploy or replace running compiled code.
·
Scalability
and Availability. ASP.NET has been designed
with scalability in mind, with features specifically tailored to improve
performance in clustered and multiprocessor environments. Further, processes
are closely monitored and managed by the ASP.NET runtime, so that if one
misbehaves (leaks, deadlocks), a new process can be created in its place, which
helps keep your application constantly available to handle requests.
·
Customizability
and Extensibility. ASP.NET delivers a
well-factored architecture that allows developers to "plug-in" their
code at the appropriate level. In fact, it is possible to extend or replace any
subcomponent of the ASP.NET runtime with your own custom-written component.
Implementing custom authentication or state services has never been easier.
·
Security. With built-in Windows authentication and per-application
configuration, you can be assured that your applications are secure.
LANGUAGE
SUPPORT
The Microsoft .NET Platform currently offers built-in support for three languages: C#, Visual Basic, and JScript.
The ASP.NET Web Forms page framework is a
scalable common language runtime programming model that can be used on the
server to dynamically generate Web pages.
Intended as a logical evolution of ASP
(ASP.NET provides syntax compatibility with existing pages), the ASP.NET Web
Forms framework has been specifically designed to address a number of key
deficiencies in the previous model. In particular, it provides:
·
The ability to create and
use reusable UI controls that can encapsulate common functionality and thus
reduce the amount of code that a page developer has to write.
·
The ability for developers
to cleanly structure their page logic in an orderly fashion (not
"spaghetti code").
·
The ability for development
tools to provide strong WYSIWYG design support for pages (existing ASP code is
opaque to tools).
ASP.NET Web Forms pages
are text files with an .aspx file name extension. They can be deployed
throughout an IIS virtual root directory tree. When a browser client requests
.aspx resources, the ASP.NET runtime parses and compiles the target file into a
.NET Framework class. This class can then be used to dynamically process
incoming requests. (Note that the .aspx file is compiled only the first time it
is accessed; the compiled type instance is then reused across multiple
requests).
An ASP.NET page can be
created simply by taking an existing HTML file and changing its file name
extension to .aspx (no modification of code is required). For example, the
following sample demonstrates a simple HTML page that collects a user's name
and category preference and then performs a form post back to the originating
page when a button is clicked:
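The sample itself is not reproduced in this report; the following sketch (file name and field names are illustrative assumptions) shows the kind of page being described — a plain HTML form, saved with an .aspx extension, that posts back to itself:

<!-- intro.aspx : illustrative page, not part of the project source -->
<html>
  <body>
    <form action="intro.aspx" method="post">
      Name: <input name="Name" type="text" />
      Category:
      <select name="Category" size="1">
        <option>Air</option>
        <option>Water</option>
        <option>Land</option>
      </select>
      <input type="submit" value="Lookup" />
    </form>
  </body>
</html>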
ASP.NET provides syntax
compatibility with existing ASP pages. This includes support for <% %>
code render blocks that can be intermixed with HTML content within an .aspx
file. These code blocks execute in a top-down manner at page render time.
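For illustration, a minimal .aspx fragment (hypothetical, not from the project) with a <% %> code render block that executes top-down at render time:

<%@ Page Language="C#" %>
<html>
  <body>
    <%-- The loop below runs on the server when the page is rendered. --%>
    <% for (int i = 1; i <= 3; i++) { %>
        <p>Rendered line <%= i %></p>
    <% } %>
  </body>
</html>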
CODE-BEHIND
WEB FORMS
ASP.NET supports two methods of authoring
dynamic pages. The first is the method shown in the preceding samples, where
the page code is physically declared within the originating .aspx file. An
alternative approach--known as the code-behind method--enables the page code to
be more cleanly separated from the HTML content into an entirely separate file.
INTRODUCTION TO ASP.NET SERVER CONTROLS
In
addition to (or instead of) using <% %> code blocks to program dynamic
content, ASP.NET page developers can use ASP.NET server controls to program Web
pages. Server controls are declared within an .aspx file using custom tags or
intrinsic HTML tags that contain a runat="server"
attributes value. Intrinsic HTML tags are handled by one of the controls in the
System.Web.UI.HtmlControls
namespace. Any tag that doesn't explicitly map to one of the controls is
assigned the type of System.Web.UI.HtmlControls.HtmlGenericControl.
Server controls automatically maintain any
client-entered values between round trips to the server. This control state is
not stored on the server (it is instead stored within an <input type="hidden">
form field that is round-tripped between requests). Note also that no client-side
script is required.
In addition to supporting standard HTML
input controls, ASP.NET enables developers to utilize richer custom controls on
their pages. For example, the following sample demonstrates how the <asp:adrotator> control can be
used to dynamically display rotating ads on a page.
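The referenced sample is not included here; a hedged sketch of such a page follows (the advertisement file name is an assumption; it would list the banner images to rotate):

<%@ Page Language="C#" %>
<html>
  <body>
    <form runat="server">
      <%-- Server control declared with runat="server"; the control reads
           its rotation list from the XML file named below. --%>
      <asp:AdRotator id="BannerAd" runat="server" AdvertisementFile="~/Ads.xml" />
    </form>
  </body>
</html>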
2. ASP.NET Web Forms pages can target any browser client (there are
no script library or cookie requirements).
3. ASP.NET Web Forms pages provide syntax compatibility with existing
ASP pages.
4. ASP.NET server controls provide an easy way to encapsulate common
functionality.
5. ASP.NET ships with 45 built-in server controls. Developers can
also use controls built by third parties.
6. ASP.NET server controls can automatically project both uplevel and
downlevel HTML.
7. ASP.NET templates provide an easy way to customize the look and
feel of list server controls.
8. ASP.NET validation controls provide an easy way to do declarative
client or server data validation.
C#.NET
ADO.NET OVERVIEW
ADO.NET
is an evolution of the ADO data access model that directly addresses user
requirements for developing scalable applications. It was designed specifically
for the web with scalability, statelessness, and XML in mind.
ADO.NET uses some ADO
objects, such as the Connection
and Command objects, and also
introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and DataAdapter.
The
important distinction between this evolved stage of ADO.NET and previous data
architectures is that there exists an object -- the DataSet -- that is separate and distinct from any data stores.
Because of that, the DataSet
functions as a standalone entity. You can think of the DataSet as an always
disconnected recordset that knows nothing about the source or destination of
the data it contains. Inside a DataSet,
much like in a database, there are tables, columns, relationships, constraints,
views, and so forth.
A
DataAdapter is the object that
connects to the database to fill the DataSet.
Then, it connects back to the database to update the data there, based on
operations performed while the DataSet
held the data. In the past, data processing has been primarily
connection-based. Now, in an effort to make multi-tiered apps more efficient,
data processing is turning to a message-based approach that revolves around
chunks of information. At the center of this approach is the DataAdapter, which provides a bridge
to retrieve and save data between a DataSet
and its source data store. It accomplishes this by means of requests to the
appropriate SQL commands made against the data store.
The
XML-based DataSet object
provides a consistent programming model that works with all models of data
storage: flat, relational, and hierarchical. It does this by having no 'knowledge'
of the source of its data, and by representing the data that it holds as
collections and data types. No matter what the source of the data within the DataSet is, it is manipulated through
the same set of standard APIs exposed through the DataSet and its subordinate objects.
While the DataSet
has no knowledge of the source of its data, the managed provider has detailed
and specific information. The role of the managed provider is to connect, fill,
and persist the DataSet to and
from data stores. The OLE DB and SQL Server .NET Data Providers
(System.Data.OleDb and System.Data.SqlClient) that are part of the .Net
Framework provide four basic objects: the Command, Connection,
DataReader and DataAdapter. In the remaining sections
of this document, we'll walk through each part of the DataSet and the OLE DB/SQL Server .NET Data Providers explaining
what they are, and how to program against them.
The following sections
will introduce you to some objects that have evolved, and some that are new.
These objects are:
·
Connections: For connection to and
managing transactions against a database.
·
Commands: For issuing SQL commands against a database.
·
DataReaders: For reading a forward-only stream of data records from a SQL
Server data source.
·
DataSet: For storing, Remoting and
programming against flat data, XML data and relational data.
·
DataAdapters: For pushing data into a DataSet, and reconciling data against
a database.
When
dealing with connections to a database, there are two different options: SQL
Server .NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider
(System.Data.OleDb). In these samples we will use the SQL Server .NET Data
Provider. These are written to talk directly to Microsoft SQL Server. The OLE
DB .NET Data Provider is used to talk to any OLE DB provider (as it uses OLE DB
underneath).
Connections:
Connections are used to
'talk to' databases, and are represented by provider-specific classes such as SqlConnection. Commands travel over
connections and resultsets are returned in the form of streams which can be
read by a DataReader object, or
pushed into a DataSet object.
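A minimal, illustrative sketch of opening and closing a SqlConnection (the connection string is an assumption, not the project's actual configuration):

using System;
using System.Data.SqlClient;

class ConnectionDemo
{
    static void Main()
    {
        // Hypothetical connection string; replace with the real server/database.
        string connStr = "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();                      // 'talk to' the database
            Console.WriteLine("State: " + conn.State);
        }                                     // Dispose() closes the connection
    }
}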
Commands:
Commands contain the
information that is submitted to a database, and are represented by
provider-specific classes such as SqlCommand.
A command can be a stored procedure call, an UPDATE statement, or a statement
that returns results. You can also use input and output parameters, and return
values as part of your command syntax. The example below shows how to issue an
INSERT statement against the Northwind
database.
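The report does not include the referenced listing; the following hedged C# sketch issues an INSERT against the sample Northwind database (table and column names follow the standard Northwind schema; the connection string is an assumption):

using System;
using System.Data.SqlClient;

class CommandDemo
{
    static void Main()
    {
        string connStr = "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True";

        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Customers (CustomerID, CompanyName) VALUES (@id, @name)", conn))
        {
            // Parameters handle quoting and help prevent SQL injection.
            cmd.Parameters.AddWithValue("@id", "GRNWR");
            cmd.Parameters.AddWithValue("@name", "Green World");

            conn.Open();
            int rows = cmd.ExecuteNonQuery();
            Console.WriteLine(rows + " row(s) inserted.");
        }
    }
}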
DataReaders:
The DataReader object is somewhat synonymous with a
read-only/forward-only cursor over data. The DataReader API supports flat as well as hierarchical data. A DataReader object is returned after
executing a command against a database. The format of the returned DataReader object is different from a
recordset. For example, you might use the DataReader to show the results of a search list in a web page.
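As an illustration (the query and column names are assumptions based on the Northwind sample database), a SqlDataReader streams rows in a forward-only, read-only fashion:

using System;
using System.Data.SqlClient;

class ReaderDemo
{
    static void Main()
    {
        string connStr = "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True";

        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT CustomerID, CompanyName FROM Customers", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Forward-only traversal of the result stream.
                while (reader.Read())
                {
                    Console.WriteLine(reader["CustomerID"] + " - " + reader["CompanyName"]);
                }
            }
        }
    }
}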
DATASETS AND DATAADAPTERS
DataSets:
The DataSet object is similar to the ADO Recordset object, but more powerful, and with one other important distinction: the DataSet is always disconnected. The DataSet object represents a cache of data, with database-like structures such as tables, columns, relationships, and constraints. However, though a DataSet can and does behave much like a database, it is important to remember that DataSet objects do not interact directly with databases or other source data. This allows the developer to work with a programming model that is always consistent, regardless of where the source data resides. Data coming from a database, an XML file, from code, or from user input can all be placed into DataSet objects. Then, as changes are made to the DataSet, they can be tracked and verified before updating the source data. The GetChanges method of the DataSet object actually creates a second DataSet that contains only the changes to the data. This DataSet is then used by a DataAdapter (or other objects) to update the original data source.
The DataSet has many XML characteristics,
including the ability to produce and consume XML data and XML schemas. XML
schemas can be used to describe schemas interchanged via WebServices. In fact,
a DataSet with a schema can
actually be compiled for type safety and statement completion.
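A small, illustrative sketch of the DataSet's XML abilities (the table, columns and file names are hypothetical, chosen only to fit this project's theme):

using System;
using System.Data;

class DataSetXmlDemo
{
    static void Main()
    {
        DataSet ds = new DataSet("Emissions");
        DataTable table = ds.Tables.Add("Readings");
        table.Columns.Add("City", typeof(string));
        table.Columns.Add("CO2ppm", typeof(double));
        table.Rows.Add("Hyderabad", 412.5);

        // Produce XML data and an XML schema from the same DataSet.
        ds.WriteXml("readings.xml");
        ds.WriteXmlSchema("readings.xsd");
        Console.WriteLine("XML and schema written.");
    }
}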
DATAADAPTERS (OLEDB/SQL)
The DataAdapter object works as a bridge
between the DataSet and the
source data. Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and SqlConnection)
can increase overall performance when working with a Microsoft SQL Server
databases. For other OLE DB-supported databases, you would use the OleDbDataAdapter object and its
associated OleDbCommand and OleDbConnection objects.
The DataAdapter object uses commands to
update the data source after changes have been made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command;
using the Update method calls
the INSERT, UPDATE or DELETE command for each changed row. You can explicitly
set these commands in order to control the statements used at runtime to
resolve changes, including the use of stored procedures. For ad-hoc scenarios,
a CommandBuilder object can
generate these at run-time based upon a select statement. However, this
run-time generation requires an extra round-trip to the server in order to
gather required metadata, so explicitly providing the INSERT, UPDATE, and
DELETE commands at design time will result in better run-time performance.
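To make the Fill/Update flow concrete, here is a hedged sketch using a SqlDataAdapter with a SqlCommandBuilder (the connection string and table are assumptions; as noted above, explicitly supplied commands would perform better than the run-time generated ones shown here):

using System;
using System.Data;
using System.Data.SqlClient;

class AdapterDemo
{
    static void Main()
    {
        string connStr = "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True";

        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT CustomerID, CompanyName FROM Customers", conn))
        {
            // CommandBuilder derives INSERT/UPDATE/DELETE commands at run time.
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

            DataSet ds = new DataSet();
            adapter.Fill(ds, "Customers");               // SELECT into the DataSet

            ds.Tables["Customers"].Rows[0]["CompanyName"] = "Updated Name";
            adapter.Update(ds, "Customers");             // push changes back

            Console.WriteLine("Changes reconciled with the database.");
        }
    }
}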
1. ADO.NET is the next evolution of ADO for the .Net Framework.
2. ADO.NET was created with n-Tier, statelessness and XML in the
forefront. Two new objects, the DataSet
and DataAdapter, are provided
for these scenarios.
3. ADO.NET can be used to get data from a stream, or to store data in
a cache for updates.
4. There is a lot more information about ADO.NET in the
documentation.
5. Remember, you can execute a command directly against the database
in order to do inserts, updates, and deletes. You don't need to first put data
into a DataSet in order to
insert, update, or delete it.
Also, you can use a DataSet to bind to the data, move through the data, and navigate data relationships.
SQL
SERVER -2005
A database management system, or DBMS, gives the user access to their data and helps them transform the data into information. Such database management systems include dBASE, Paradox, IMS and SQL Server. These systems allow users to create, update and extract information from their databases.
A database is a structured collection of data. Data refers to the characteristics of people, things and events. SQL Server stores each data item in its own field. In SQL Server, the fields relating to a particular person, thing or event are bundled together to form a single complete unit of data, called a record (it can also be referred to as a row or an occurrence). Each record is made up of a number of fields. No two fields in a record can have the same field name.
During an SQL
Server Database design project, the analysis of your business needs identifies
all the fields or attributes of interest.
If your business needs change over time, you define any additional
fields or change the definition of existing fields.
SQL SERVER TABLES
SQL Server stores
records relating to each other in a table.
Different tables are created for the various groups of information.
Related tables are grouped together to form a database.
PRIMARY KEY
Every table in SQL
Server has a field or a combination of fields that uniquely identifies each
record in the table. The Unique
identifier is called the Primary Key, or simply the Key. The primary key provides the means to
distinguish one record from all other in a table. It allows the user and the database system to
identify, locate and refer to one particular record in the database.
RELATIONAL DATABASE
Sometimes all the information of interest to a business operation
can be stored in one table. SQL Server
makes it very easy to link the data in multiple tables. Matching an employee to
the department in which they work is one example. This is what makes SQL Server a relational
database management system, or RDBMS. It stores data in two or more tables and enables you to define relationships between the tables.
FOREIGN KEY
When a field in one table matches the primary key of another table, that field is referred to as a foreign key. A foreign key is a field or a group of fields in one table whose values match those of the primary key of another table.
REFERENTIAL INTEGRITY
Not only does SQL
Server allow you to link multiple tables, it also maintains consistency between
them. Ensuring that the data among
related tables is correctly matched is referred to as maintaining referential
integrity.
DATA ABSTRACTION
A major purpose of a database system is to provide users with an
abstract view of the data. This system
hides certain details of how the data is stored and maintained. Data
abstraction is divided into three levels.
Physical level: This is the lowest level
of abstraction at which one describes how the data are actually stored.
Conceptual Level: At this level of database abstraction, what data are actually stored, the attributes of the entities, and the relationships among them are described.
View level: This is the highest level
of abstraction at which one describes only part of the database.
ADVANTAGES OF RDBMS
·
Redundancy can be avoided
·
Inconsistency can be
eliminated
·
Data can be Shared
·
Standards can be enforced
·
Security restrictions can be applied
·
Integrity can be maintained
·
Conflicting requirements can
be balanced
·
Data independence can be
achieved.
DISADVANTAGES OF DBMS
A significant
disadvantage of the DBMS system is cost.
In addition to the cost of purchasing or developing the software, the
hardware has to be upgraded to allow for the extensive programs and the
workspace required for their execution and storage. While centralization reduces duplication, the
lack of duplication requires that the database be adequately backed up so that
in case of failure the data can be recovered.
FEATURES OF SQL SERVER
(RDBMS)
SQL SERVER is one
of the leading database management systems (DBMS) because it is the only
Database that meets the uncompromising requirements of today’s most demanding
information systems. From complex
decision support systems (DSS) to the most rigorous online transaction
processing (OLTP) application, even application that require simultaneous DSS
and OLTP access to the same critical data, SQL Server leads the industry in
both performance and capability.
SQL SERVER is a truly portable, distributed, and open DBMS that
delivers unmatched performance, continuous operation and support for every
database.
SQL SERVER RDBMS is a high-performance, fault-tolerant DBMS which is specially designed for online transaction processing and for handling large database applications.
SQL SERVER with the transaction processing option offers features which contribute to a very high level of transaction processing throughput, including:
· The row-level lock manager
ENTERPRISE WIDE DATA
SHARING
The unrivaled
portability and connectivity of the SQL SERVER DBMS enables all the systems in
the organization to be linked into a singular, integrated computing resource.
PORTABILITY
SQL SERVER is fully
portable to more than 80 distinct hardware and operating systems platforms,
including UNIX, MSDOS, OS/2, Macintosh and dozens of proprietary
platforms. This portability gives
complete freedom to choose the database server platform that meets the system
requirements.
OPEN SYSTEMS
SQL SERVER offers a leading implementation of industry-standard SQL. SQL Server's open architecture integrates SQL SERVER and non-SQL SERVER DBMSs with the industry's most comprehensive collection of tools, applications, and third-party software products. SQL Server's open architecture provides transparent access to data from other relational databases and even non-relational databases.
DISTRIBUTED DATA
SHARING
SQL Server's networking and distributed database capabilities enable access to data stored on remote servers with the same ease as if the information were stored on a single local computer. A single SQL statement can access data at multiple sites. You can store data where system requirements such as performance, security or availability dictate.
UNMATCHED PERFORMANCE
The most advanced architecture in the industry allows the SQL
SERVER DBMS to deliver unmatched performance.
SOPHISTICATED
CONCURRENCY CONTROL
Real-world applications demand access to critical data. With most database systems, applications become "contention bound", where performance is limited not by CPU power or by disk I/O, but by users waiting on one another for data access. SQL Server employs full, unrestricted row-level locking and contention-free queries to minimize, and in many cases entirely eliminate, contention wait times.
NO I/O BOTTLENECKS
SQL Server's fast commit, group commit and deferred write technologies dramatically reduce disk I/O bottlenecks. While some databases write whole data blocks to disk at commit time, SQL Server commits transactions with at most a single sequential write to the log file at commit time. On high-throughput systems, one sequential write typically commits multiple transactions as a group. Data read by a transaction remains in shared memory so that other transactions may access that data without reading it again from disk. Since fast commits write all data necessary for recovery to the log file, modified blocks are written back to the database independently of the transaction commit, when they are written from memory to disk.
SYSTEM
ANALYSIS
5.1.
PROBLEMS AND WEAKNESS IN EXISTING SYSTEM
·
It
is important to understand and discuss the significance of global warming.
Global warming is also known as the "Greenhouse effect". The
"Greenhouse Earth" is surrounded by a shield of atmospheric gases,
rather than a glass or a plastic cover. The air that makes up our atmosphere
consists primarily of nitrogen and oxygen molecules (N2 at 78% and O2 at 21%).
A large number of "trace gases" make up the remainder of air's
composition. Many of these, including carbon dioxide (CO2) and methane (CH4)
are the so called "greenhouse" gases. Our sun, powered by its hot,
nuclear fusion reaction, produces radiant energy in the visible and ultraviolet
regions with relatively short wavelengths. Of the sunlight that strikes the
earth, about 70% is absorbed by the planet and its atmosphere, while the other
30% is immediately reflected. If the earth did not re-radiate most of this newly
absorbed energy back into space the world would continue to get warmer.
Instead, an energy balance is maintained.
·
The
earth is about 60 degrees Fahrenheit (33 degrees Celsius) warmer than it would
be if it did not have the atmospheric blanket of greenhouse gases and clouds
around it. Clouds and greenhouse gases keep the earth warm. Once warmed, their
molecules then radiate a portion of this heat energy back to earth, creating
more warming on the surface of our planet. It is this re-radiation of heat by atmospheric gases back to earth that scientists call the "greenhouse effect".
·
Carbon
dioxide (CO2) gas generated by man's burning of fossil fuels and the forests is
responsible for about half the greenhouse gas warming. Other gases (CFCs,
methane, nitrous oxide, troposphere ozone) are responsible for the rest.
Increases in all these gases are due to mankind's explosive population growth
over the last century, and increased industrial expansion.
·
There
is no facility available for getting the information regarding atmospheric
greenhouse gases, result for climate change, long term droughts and raising sea
levels in the existing system.
·
National and international news about new rules, amendments and laws regarding global warming policies is not circulated across the globe in the existing system.
5.2.
REQUIREMENTS OF NEW SYSTEM
It's time we take responsibility for our planet and give future
generations the opportunity to inhabit the earth the fortunate way their
predecessors have. Are we really that selfish? Why not make a collective commitment to something more important? We can be the generation that shows leadership and shapes our children's lifestyles into a positive, sustainable force that prolongs the life of our planet.
·
The world’s leading science journals report that glaciers are
melting ten times faster than previously thought, that atmospheric greenhouse
gases have reached levels not seen for millions of years, and that species are
vanishing as a result of climate change. They also report of extreme weather
events, long-term droughts, and rising sea levels.
·
People need to be able to find sources of information on the internet that they know they can trust, by using websites like this Stop Global Warming site. Stop Global Warming is a reliable source of information on environmental topics that is available to anyone. The goal is to make environmental information about the Earth and its ecosystems accessible, both in terms of the ability to get to it for free through the internet and in presenting the information in a way people can understand and use.
·
The Stop Global Warming site takes material from original peer-reviewed articles (by organizations that allow their work to be republished) and "free and open content sources," such as various government agencies' publications. These sources are edited for length and style, and then added to the application. More articles are being added continually.
·
The
goal of the service is to provide a definitive and authoritative reference for
environmental information, authored by people who know what they're
referencing. The NGOs behind the site were tired of scouring search engines for articles on topics of interest to them, only to find sites of dubious scientific merit.
5.3.
FEATURES
OF NEW SYSTEM
5.3.1.
PURPOSE
OF THE PROJECT
This is a social website
for encouraging people to abstain from various pollution causatives. It works
on the principles of health promotion and strengthening the society. It not
only makes the users aware of the diseases caused but also of how to prevent them. It encourages, conducts and participates in investigations and research relating to problems of water, land and air pollution and their prevention, control and abatement.
FEASIBILITY
STUDY:
Preliminary investigation examines project feasibility, the likelihood that the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational and economical feasibility of adding new modules and debugging the old running system. All systems are feasible if there are unlimited resources and infinite time. There are three aspects in the feasibility study portion of the preliminary investigation:
·
Technical
Feasibility
·
Operational
Feasibility
·
Economical
Feasibility
TECHNICAL FEASIBILITY
Technical Feasibility centers on the existing
computer system hardware, software, etc. and to some extent how it can support
the proposed addition. This involves financial considerations to accommodate
technical enhancements. Technical support is also a reason for the success of
the project. The techniques needed for
the system should be available and it must be reasonable to use. Technical
Feasibility is mainly concerned with the study of function, performance, and
constraints that may affect the ability to achieve the system. By conducting an
efficient technical feasibility we need to ensure that the project works to
solve the existing problem area.
Since the project is designed with ASP.NET with C# as the front end and SQL Server 2000 as the back end, it is easy to install on all the systems wherever needed. It is efficient, easy and user-friendly enough to be understood by almost everyone. Huge amounts of data can be handled efficiently using SQL Server as the back end. Hence this project has good technical feasibility.
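As a rough, illustrative sketch of this front end / back end split (not the project's actual code), the fragment below shows how a small C# data-access class could read article records from the SQL Server back end using ADO.NET. The connection string and the Articles table with its columns are assumptions made purely for the example.

```
using System.Data;
using System.Data.SqlClient;

// Minimal data-access sketch: fetch article rows from the SQL Server back end.
public class ArticleRepository
{
    // Illustrative connection string; a real project would keep this in Web.config.
    private const string ConnectionString =
        "Server=localhost;Database=StopGlobalWarming;Integrated Security=true;";

    public DataTable GetArticles()
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        using (SqlCommand command = new SqlCommand(
                   "SELECT ArticleId, Title, Summary FROM Articles", connection))
        using (SqlDataAdapter adapter = new SqlDataAdapter(command))
        {
            DataTable articles = new DataTable();
            adapter.Fill(articles);   // opens the connection, runs the query, fills the table
            return articles;
        }
    }
}
```

An .aspx page could then bind the result to a grid, for example GridView1.DataSource = new ArticleRepository().GetArticles(); followed by GridView1.DataBind();.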
OPERATIONAL FEASIBILITY
People are inherently resistant to change, and computers have been known to facilitate change. An estimate should be made of how strong a reaction the user staff is likely to have towards the development of the computerized system.
The staff is accustomed to computerized systems, and systems of this kind are becoming more common day by day. Hence, this system is operationally feasible. As the system is technically, economically and operationally feasible, it is judged feasible overall.
ECONOMICAL FEASIBILITY
Economic feasibility is concerned with whether the designed system can meet the end users' requirements economically, at a cost the organization can afford. It compares the development cost with the income or benefit derived from the developed system. From this we need to establish how the project will help the management to take effective decisions.
Economic feasibility is mainly concerned with the cost incurred in implementing the software. Since this project is developed using ASP.NET with C# and SQL Server, which are commonly available, the cost involved in the installation process is not high. Similarly, it is easy to recruit people to operate the software, since most people are familiar with ASP.NET with C# and SQL Server. Even if we need to train people in these areas, the cost involved in training is very low. Hence this project has good economic feasibility.
The system, once developed, must be used efficiently; otherwise there is no point in developing it. For this, a careful study of the existing system and its drawbacks is needed. The user should be able to distinguish the existing system from the proposed one, so as to appreciate the characteristics of the proposed system: the manual system is not highly reliable and is also considerably slow, whereas the proposed system is efficient, reliable and quick to respond.
6.3.
DATAFLOW DIAGRAM
A data flow diagram (DFD) is a graphical tool used to describe and analyze the movement of data through a system. DFDs are the central tool and the basis from which the other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; these are known as logical data flow diagrams. Physical data flow diagrams show the actual implementation and movement of data between people, departments and workstations. A full description of a system actually consists of a set of data flow diagrams, developed using one of two familiar notations: Yourdon or Gane and Sarson. Each component in a DFD is labeled with a descriptive name, and each process is further identified with a number that is used for identification purposes. The development of DFDs is done in several levels: each process in a lower-level diagram can be broken down into a more detailed DFD at the next level. The top-level diagram is often called the context diagram. It consists of a single process bubble, which plays a vital role in studying the current system. The process in the context-level diagram is exploded into other processes in the first-level DFD.
The idea behind exploding a process into more processes is that the understanding at one level of detail is expanded into greater detail at the next level. This is done until no further explosion is necessary and an adequate amount of detail is described for the analyst to understand the process.
Larry Constantine first developed the DFD as a way of expressing system requirements in a graphical form; this led to modular design.
A DFD, also known as a "bubble chart", has the purpose of clarifying system requirements and identifying the major transformations that will become programs in system design. It is therefore the starting point of the design, down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.
DFD SYMBOLS:
In the DFD, there are four symbols:
1. A square defines a source (originator) or destination of system data.
2. An arrow identifies a data flow; it is the pipeline through which information flows.
3. A circle or a bubble represents a process that transforms incoming data flows into outgoing data flows.
4. An open rectangle is a data store: data at rest, or a temporary repository of data.
(Figure: DFD symbol legend — process that transforms data flow; source or destination of data; data flow; data store.)
CONSTRUCTING A DFD:
Several rules of thumb are used in drawing DFDs:
1. Process
should be named and numbered for an easy reference. Each name should be representative of the
process.
2. The direction of flow is from top to bottom and from left to right. Data traditionally flow from the source to the destination, although they may flow back to the source. One way to indicate this is to draw a long flow line back to the source. An alternative way is to repeat the source symbol as a destination; since it is used more than once in the DFD, it is marked with a short diagonal.
3. When
a process is exploded into lower level details, they are numbered.
4. The names of data stores and destinations are written in capital letters. Process and data flow names have the first letter of each word capitalized.
A DFD typically shows the minimum contents of a data store. Each data store should contain all the data elements that flow in and out. Questionnaires should likewise cover all the data elements that flow in and out; missing interfaces, redundancies and the like are then accounted for, often through interviews.
SALIENT FEATURES OF DFDs
1. The DFD shows the flow of data, not of control; loops and decisions are control considerations and do not appear on a DFD.
2. The DFD does not indicate the time factor involved in any process, i.e. whether the data flows take place daily, weekly, monthly or yearly.
3. The
sequence of events is not brought out on the DFD.
TYPES OF DATA FLOW DIAGRAMS
1. Current
Physical
2. Current
Logical
3. New
Logical
4. New
Physical
CURRENT PHYSICAL:
In a current physical DFD, process labels include the names of people or their positions, or the names of the computer systems that provide some of the overall system processing; the label also identifies the technology used to process the data. Similarly, data flows and data stores are often labeled with the names of the actual physical media on which data are stored, such as file folders, computer files, business forms or computer tapes.
CURRENT LOGICAL:
The physical aspects of the system are removed as much as possible, so that the current system is reduced to its essence: the data and the processes that transform them, regardless of their actual physical form.
NEW LOGICAL:
This is exactly like the current logical model if the user is completely happy with the functionality of the current system but has problems with how it is implemented. Typically, though, the new logical model will differ from the current logical model in having additional functions, obsolete functions removed, and inefficient flows reorganized.
NEW PHYSICAL:
The
new physical represents only the physical implementation of the new system.
RULES GOVERNING THE DFDs
PROCESS
1) No
process can have only outputs.
2) No process can have only inputs. If an object has only inputs, then it must be a sink.
3) A
process has a verb phrase label.
DATA STORE
1) Data cannot move directly from one data store to another data store; a process must move the data.
2) Data cannot move directly from an outside source to a data store; a process must receive the data from the source and place it into the data store.
3) A
data store has a noun phrase label.
SOURCE OR SINK
The origin and/or destination of data.
1) Data cannot move directly from a source to a sink; it must be moved by a process.
2) A source and/or sink has a noun phrase label.
DATA FLOW
1) A data flow has only one direction of flow between symbols. It may flow in both directions between a process and a data store to show a read before an update; the latter is usually indicated, however, by two separate arrows, since these happen at different times.
2) A join in a DFD means that exactly the same data comes from any of two or more different processes, data stores or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There must be at least one other process that handles the data flow, produces some other data flow, and returns the original data to the beginning process.
4) A data flow to a data store means update (delete or change).
5) A data flow from a data store means retrieve or use.
A data flow has a noun phrase label. More than one data flow noun phrase can appear on a single arrow, as long as all of the flows on the same arrow move together as one package.
SYSTEM TESTING
8.1.
INTRODUCTION
Software
testing is a critical element of software quality assurance and represents the
ultimate review of specification, design and coding. In fact, testing is the
one step in the software engineering process that could be viewed as
destructive rather than constructive.
A
strategy for software testing integrates software test case design methods into
a well-planned series of steps that result in the successful construction of
software. Testing is the set of activities that can be planned in advance and
conducted systematically. The underlying motivation of program testing is to affirm software quality with methods that can be applied economically and effectively to both large and small-scale systems.
8.2. STRATEGIC APPROACH TO SOFTWARE TESTING
The
software engineering process can be viewed as a spiral. Initially system
engineering defines the role of software and leads to software requirement
analysis where the information domain, functions, behavior, performance,
constraints and validation criteria for software are established. Moving inward
along the spiral, we come to design and finally to coding. To develop computer
software we spiral in along streamlines that decrease the level of abstraction
on each turn.
A
strategy for software testing may also be viewed in the context of the spiral.
Unit testing begins at the vertex of the spiral and concentrates on each unit
of the software as implemented in source code. Testing progresses by moving outward along the spiral to integration testing, where the focus is on the design and the construction of the software architecture. Taking another turn outward on the spiral, we encounter validation testing, where requirements established as part of software requirements analysis are validated against the software that has been constructed. Finally we arrive at system testing, where the software and other system elements are tested as a whole.
8.3.
UNIT TESTING
Unit testing focuses verification effort on the smallest unit of software design, the module. Our unit testing is white-box oriented, and for some modules the steps are conducted in parallel.
1. WHITE BOX TESTING
This
type of testing ensures that
·
All independent paths have been
exercised at least once
·
All logical decisions have been
exercised on their true and false sides
·
All loops are executed at their
boundaries and within their operational bounds
·
All internal data structures have been
exercised to assure their validity.
To follow the concept of white box testing, we tested each form we created independently, to verify that the data flow is correct, that all conditions are exercised to check their validity, and that all loops are executed at their boundaries.
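As a small illustration of exercising a logical decision on both its true and false sides, the sketch below tests a hypothetical e-mail validation helper. The report does not name a test framework, so NUnit is assumed here purely for the example, and the InputValidator class is invented for illustration rather than taken from the project.

```
using NUnit.Framework;
using System.Text.RegularExpressions;

// Hypothetical helper used by a registration form.
public static class InputValidator
{
    public static bool IsValidEmail(string email)
    {
        if (string.IsNullOrEmpty(email))
            return false;                                            // empty-input path
        return Regex.IsMatch(email, @"^[^@\s]+@[^@\s]+\.[^@\s]+$");  // pattern match path
    }
}

[TestFixture]
public class InputValidatorTests
{
    [Test]
    public void ValidAddress_ReturnsTrue()
    {
        Assert.IsTrue(InputValidator.IsValidEmail("user@example.com"));   // true side
    }

    [Test]
    public void EmptyAndMalformedAddresses_ReturnFalse()
    {
        Assert.IsFalse(InputValidator.IsValidEmail(""));              // empty-input branch
        Assert.IsFalse(InputValidator.IsValidEmail("not-an-email"));  // failed-match branch
    }
}
```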
2. BASIC PATH TESTING
The established technique of the flow graph with cyclomatic complexity was used to derive test cases for all the functions. The main steps in deriving test cases were:
1. Use the design of the code and draw the corresponding flow graph.
2. Determine the cyclomatic complexity of the resulting flow graph, using one of the formulas:
V(G) = E - N + 2, or
V(G) = P + 1, or
V(G) = number of regions,
where V(G) is the cyclomatic complexity, E is the number of edges, N is the number of flow graph nodes, and P is the number of predicate nodes.
3. Determine the basis set of linearly independent paths.
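As an illustrative example (not taken from the project's own modules): a flow graph with E = 9 edges, N = 7 nodes and P = 3 predicate nodes gives V(G) = E - N + 2 = 9 - 7 + 2 = 4, which agrees with V(G) = P + 1 = 3 + 1 = 4 and with the 4 regions such a graph encloses. The basis set therefore contains 4 linearly independent paths, so at least 4 test cases are needed to cover them.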
3. CONDITIONAL TESTING
In this part of the testing, each of the conditions was tested for both its true and false outcomes, and all the resulting paths were tested, so that every path that may be generated by a particular condition is traced to uncover any possible errors.
4. DATA FLOW TESTING
This type of testing selects the paths of the program according to the locations of the definitions and uses of variables. This kind of testing was used only where local variables were declared. The definition-use chain method was used in this type of testing, and it was particularly useful in nested statements.
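The fragment below is a hypothetical illustration (not project code) of a definition-use chain: the local variable discount is defined in two places and used at the return statement, and data flow testing selects paths that cover each definition-use pair.

```
// Illustrative only: a definition-use (DU) chain for the local variable 'discount'.
public static decimal FinalPrice(decimal price, bool isMember)
{
    decimal discount = 0m;            // definition D1 of 'discount'
    if (isMember)
    {
        if (price > 100m)
        {
            discount = price * 0.1m;  // definition D2 of 'discount' (nested statement)
        }
    }
    return price - discount;          // use of 'discount' (pairs with D1 or D2)
}
```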
5. LOOP TESTING
In this type of testing all the loops are tested at all possible limits. The following exercise was adopted for all loops:
All the loops were tested at their limits, just above them and just below them.
All the loops were skipped at least once.
For nested loops, the innermost loop was tested first, working outwards.
For concatenated loops, the values of dependent loops were set with the help of the connected loop.
Unstructured loops were resolved into nested or concatenated loops and tested as above.
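As an illustration of the checklist above, the sketch below shows a hypothetical method containing a single loop, together with the boundary values that would be exercised. The method, its ten-reading limit and the chosen values are assumptions made for the example, not part of the project.

```
// Illustrative loop under test: averages the first 'count' readings (at most 10).
public static double AverageReadings(double[] readings, int count)
{
    double sum = 0.0;
    for (int i = 0; i < count; i++)   // the loop being boundary-tested
    {
        sum += readings[i];
    }
    return count == 0 ? 0.0 : sum / count;
}

// Boundary values exercised, following the rules above (illustrative only):
//   count = 0          -> the loop is skipped entirely
//   count = 1          -> exactly one pass through the loop
//   count = 9 and 10   -> just below and at the operational limit
//   count = 11         -> just above the limit; should be rejected by input
//                         validation, otherwise it surfaces an index error
```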
Each unit has been separately tested by the development team itself, and all the inputs have been validated.
CONCLUSION
It has been a great pleasure for me to work on this exciting and challenging project. This project proved valuable for me, as it provided practical knowledge not only of programming web-based applications in ASP.NET and C#.NET and, to some extent, Windows applications and SQL Server, but also of all the handling procedures related to “Stop Global Warming”. It also provided knowledge about the latest technology used in developing web-enabled applications and about client-server technology, which will be in great demand in the future. This will provide better opportunities and guidance for developing projects independently in the future.
BENEFITS:
The
project is identified by the merits of the system offered to the user. The
merits of this project are as follows: -
·
It’s
a web-enabled project.
·
This project allows the user to enter data through simple and interactive forms. This makes it very easy for the client to enter the desired information.
·
The user is mainly concerned about the validity of the data he is entering. There are checks at every stage of any new creation, data entry or updation, so that the user cannot enter invalid data, which could create problems at a later date (a sketch of such a check is given after this list).
·
Sometimes the user finds, in the later stages of using the project, that he needs to update some of the information he entered earlier. There are options by which he can update the records. Moreover, there is a restriction that he cannot change the primary data field. This preserves the validity of the data for longer.
·
The user is provided with the option of reviewing the records he entered earlier. He can see the desired records using the variety of options provided to him.
·
From every part of the project the user is provided with links, through framing, so that he can move from one option of the project to another as required. This is simple and friendly as far as the user is concerned; that is, we can say that the project is user-friendly, which is one of the primary concerns of any good project.
·
Data
storage and retrieval will become faster and easier to maintain because data is
stored in a systematic manner and in a single database.
·
The decision-making process would be greatly enhanced because of faster processing of information, since collecting data from information available on the computer takes much less time than in a manual system.
·
Collating sample results becomes much faster, because the user can see the records of previous years at once.
·
Easier and faster data transfer through the latest computer and communication technology.
·
Through these features the system will increase efficiency, accuracy and transparency.
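As referenced in the list above, here is a minimal, illustrative sketch of the kind of server-side validation a data-entry form could perform before saving a record. The page, the control names (txtEmail, txtCity, lblMessage) and the SaveRecord helper are assumptions made for the example, not the project's actual code.

```
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical code-behind for a data-entry page of the site.
public class AddRecordPage : Page
{
    // In a real Web Forms page these controls are declared in the .aspx markup;
    // they are declared here only so the sketch is self-contained.
    protected TextBox txtEmail = new TextBox();
    protected TextBox txtCity = new TextBox();
    protected Label lblMessage = new Label();

    protected void btnSave_Click(object sender, EventArgs e)
    {
        if (!Page.IsValid)                       // a validator on the page failed
        {
            lblMessage.Text = "Please correct the highlighted fields.";
            return;
        }
        if (txtCity.Text.Trim().Length == 0)     // additional server-side check
        {
            lblMessage.Text = "City cannot be blank.";
            return;
        }
        SaveRecord(txtEmail.Text.Trim(), txtCity.Text.Trim());
        lblMessage.Text = "Record saved successfully.";
    }

    private void SaveRecord(string email, string city)
    {
        // Persist the record via ADO.NET, as sketched in the technical
        // feasibility section (hypothetical helper).
    }
}
```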
LIMITATIONS:
·
The
size of the database increases day-by-day, increasing the load on the database
back up and data maintenance activity.
·
Training
for simple computer operations is necessary for the users working on the system.
BIBLIOGRAPHY
·
FOR .NET
INSTALLATION
·
FOR DEPLOYMENT AND PACKAGING ON SERVER
·
FOR SQL
·
FOR ASP.NET
ASP.NET 3.5 Unleashed
·
Software Engineering (Roger S. Pressman)