Credit Card Fraud Detection

 

1. INTRODUCTION

PROJECT INTRODUCTION

Due to the rapid advancement of electronic commerce technology, the use of credit cards has increased dramatically. As the credit card becomes the most popular mode of payment for both online and regular purchases, cases of fraud associated with it are also rising. In this paper, we model the sequence of operations in credit card transaction processing using a Hidden Markov Model (HMM) and show how it can be used for the detection of frauds. An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected. We present detailed experimental results to show the effectiveness of our approach and compare it with other techniques available in the literature.
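
To make the idea concrete, the following is a minimal sketch, in C#, of the probability test described above: the forward algorithm computes the likelihood of an observation sequence under the trained model, and a sequence whose likelihood falls below a threshold is treated as suspicious. The class name, the matrices A, B and Pi, and the threshold are illustrative assumptions, not this project's actual source code; in practice the model parameters would be estimated from the cardholder's transaction history.

    // Minimal sketch of an HMM likelihood check (illustrative only).
    // A  : state transition probabilities   [N x N]
    // B  : observation probabilities        [N x M]
    // Pi : initial state distribution       [N]
    // Observation symbols (0..M-1) stand for spending categories,
    // e.g. low / medium / high purchase amounts.
    public class HmmFraudCheck
    {
        private readonly double[,] A;
        private readonly double[,] B;
        private readonly double[] Pi;

        public HmmFraudCheck(double[,] a, double[,] b, double[] pi)
        {
            A = a; B = b; Pi = pi;
        }

        // Forward algorithm: returns P(observation sequence | model).
        public double SequenceProbability(int[] obs)
        {
            int n = Pi.Length;
            double[] alpha = new double[n];

            // Initialization with the first observation.
            for (int i = 0; i < n; i++)
                alpha[i] = Pi[i] * B[i, obs[0]];

            // Induction over the remaining observations.
            for (int t = 1; t < obs.Length; t++)
            {
                double[] next = new double[n];
                for (int j = 0; j < n; j++)
                {
                    double sum = 0.0;
                    for (int i = 0; i < n; i++)
                        sum += alpha[i] * A[i, j];
                    next[j] = sum * B[j, obs[t]];
                }
                alpha = next;
            }

            // Termination: sum over all final states.
            double probability = 0.0;
            for (int i = 0; i < n; i++)
                probability += alpha[i];
            return probability;
        }

        // A sequence is treated as suspicious when its probability under
        // the model trained on normal behavior falls below a threshold.
        public bool IsSuspicious(int[] obs, double threshold)
        {
            return SequenceProbability(obs) < threshold;
        }
    }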

 

Overview

Credit-card-based purchases can be categorized into two types: physical-card and virtual-card purchases. In a physical-card-based purchase, the cardholder presents his card physically to a merchant to make a payment. To carry out fraudulent transactions in this kind of purchase, an attacker has to steal the credit card. If the cardholder does not realize the loss of the card, it can lead to a substantial financial loss to the credit card company. In the second kind of purchase, only some important information about the card (card number, expiration date, secure code) is required to make the payment.

Such purchases are normally done on the Internet or over the telephone. To commit fraud in these types of purchases, a fraudster simply needs to know the card details. Most of the time, the genuine cardholder is not aware that someone else has seen or stolen his card information. The only way to detect this kind of fraud is to analyze the spending patterns on every card and to figure out any inconsistency with respect to the “usual” spending patterns. Fraud detection based on the analysis of the cardholder's existing purchase data is a promising way to reduce the rate of successful credit card frauds. Since humans tend to exhibit specific behavioral profiles, every cardholder can be represented by a set of patterns containing information about the typical purchase category, the time since the last purchase, the amount of money spent, etc. Deviation from such patterns is a potential threat to the system.
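
One simple way to build such a profile, sketched below as an assumption rather than as this project's actual design, is to quantize each transaction amount into a small set of observation symbols (for example low, medium and high spending ranges) that the HMM can consume. The boundary values used here are purely illustrative; a real system could derive per-cardholder ranges from past spending behavior.

    // Illustrative quantization of a transaction amount into an HMM
    // observation symbol. The boundaries (1,000 and 5,000) are assumed
    // values, not figures taken from this project.
    public static class SpendingProfile
    {
        public const int Low = 0;
        public const int Medium = 1;
        public const int High = 2;

        public static int ToObservationSymbol(decimal amount)
        {
            if (amount < 1000m) return Low;
            if (amount < 5000m) return Medium;
            return High;
        }
    }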

 


2. SYSTEM ANALYSIS

 

EXISTING SYSTEM

In the existing system, fraud is reported only after it has occurred: the credit card user has to report that his card was misused, and only then is any action taken. The cardholder therefore faces considerable trouble until the investigation is finished. Because all transactions are maintained in a log, a huge volume of data must be stored. Moreover, since many purchases are now made online, the person using the card cannot be identified directly; only the IP address is captured for verification, so help from cyber crime authorities is needed to investigate the fraud. To avoid all of the above disadvantages, we propose a system that detects fraud in a simpler and more effective way.

 

PROPOSED SYSTEM

In the proposed system, we present a Hidden Markov Model (HMM) that does not require fraud signatures and yet is able to detect frauds by considering a cardholder's spending habits. We model the credit card transaction processing sequence as the stochastic process of an HMM. The details of items purchased in individual transactions are usually not known to a Fraud Detection System (FDS) running at the bank that issues credit cards to the cardholders. Hence, we feel that an HMM is an ideal choice for addressing this problem. Another important advantage of the HMM-based approach is a drastic reduction in the number of false positives: transactions identified as malicious by an FDS although they are actually genuine. The FDS runs at the credit card issuing bank. Each incoming transaction is submitted to the FDS for verification. The FDS receives the card details and the value of the purchase to verify whether the transaction is genuine or not.

The types of goods bought in that transaction are not known to the FDS. It tries to find any anomaly in the transaction based on the cardholder's spending profile, shipping address, billing address, etc.
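
A hedged sketch of this verification step is given below. It assumes the HmmFraudCheck and SpendingProfile sketches from the earlier sections are available, keeps a sliding window of the cardholder's recent observation symbols, and flags an incoming transaction if appending it causes a sharp drop in the sequence probability. The window length and drop threshold are illustrative values, not figures taken from this project.

    using System.Collections.Generic;

    // Sketch of the FDS decision step (illustrative only).
    public class FraudDetectionSystem
    {
        private readonly HmmFraudCheck model;
        private readonly Queue<int> window;      // recent observation symbols
        private readonly int windowLength;       // e.g. 10 recent transactions
        private readonly double dropThreshold;   // e.g. 0.5 = a 50% relative drop

        public FraudDetectionSystem(HmmFraudCheck model, int windowLength,
                                    double dropThreshold)
        {
            this.model = model;
            this.windowLength = windowLength;
            this.dropThreshold = dropThreshold;
            this.window = new Queue<int>();
        }

        // Returns true when the incoming transaction looks fraudulent.
        public bool Verify(decimal amount)
        {
            int symbol = SpendingProfile.ToObservationSymbol(amount);

            if (window.Count < windowLength)
            {
                window.Enqueue(symbol);          // still building the profile
                return false;
            }

            double oldProbability = model.SequenceProbability(window.ToArray());

            Queue<int> candidate = new Queue<int>(window);
            candidate.Dequeue();                 // drop the oldest symbol
            candidate.Enqueue(symbol);           // append the incoming transaction
            double newProbability = model.SequenceProbability(candidate.ToArray());

            double relativeDrop = oldProbability > 0.0
                ? (oldProbability - newProbability) / oldProbability
                : 0.0;
            bool suspicious = relativeDrop > dropThreshold;

            if (!suspicious)
            {
                window.Dequeue();                // accept: slide the window forward
                window.Enqueue(symbol);
            }
            return suspicious;
        }
    }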

 

Advantages

1.     Fraudulent use of the card is detected much faster than in the existing system.

2.     In the existing system, even the genuine cardholder is checked during fraud detection. In this system there is no need to check the genuine user again, since a log is maintained.

3.     The log that is maintained also serves as proof for the bank of the transactions made.

4.     This technique gives more accurate detection.

5.     It reduces the tedious work of bank employees.

6.     Be more convenient to carry than cash.

7.     Help you establish a good credit history.

8.     Provide a convenient payment method for purchases made on the Internet and over the telephone.

9.     Give you incentives, such as reward points, that you can redeem.

 

 


MODULES AND THEIR DESCRIPTION

This project includes two modules:

1.     Administrator

2.     User

1. Administrator:

          In the administrator module, the administrator is an authenticated user who logs in with his username and password. After login he enters the admin home page, where he can add new users, view the users who have already taken credit cards, and view the blocked users.

 

2. User:

          In the user module, a new user first registers and then logs in with the registered username and password. The user can view his details, the log details, and the transaction details.

 

FEASIBILITY STUDY

            The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company.  For feasibility analysis, some understanding of the major requirements for the system is essential. Three key considerations in this study are:

·      Economic feasibility

·      Technical feasibility

·      Social feasibility

Economic Feasibility:

            Economic analysis is the most frequently used method for evaluating the effectiveness of a candidate system. More commonly known as cost/benefit analysis, the procedure is to determine the benefits and savings that are expected from a candidate system and compare them with costs. If benefits outweigh costs, then the decision is made to design and implement the system.

Technical Feasibility:     

   This involves questions such as whether the technology needed for the system exists, how difficult it will be to build, and whether the firm has enough experience using that technology. The assessment is based on an outline design of system requirements in terms of Input, Processes, Output, Fields, Programs, and Procedures. This can be quantified in terms of volumes of data, trends, frequency of updating, etc in order to estimate if the new system will perform adequately or not.

Social Feasibility:

   This determines whether the proposed system conflicts with legal requirements (e.g., a data processing system must comply with the local data protection acts). When an organization has either internal or external legal counsel, such reviews are typically standard. However, a project may face legal issues after completion if this factor is not considered at this stage. In essence, it concerns whether the system is properly authorized.

 


SYSTEM DESIGN

Design Overview

          Design is concerned with identifying the software components, specifying the relationships among them, specifying the software structure, and providing a blueprint for the implementation phase. Modularity is one of the desirable properties of large systems. It implies that the system is divided into several parts in such a manner that the interaction between the parts is minimal and clearly specified. The design explains the software components in detail. This helps in the implementation of the system and also guides further changes to the system to satisfy future requirements.

 

Input Design:   

     Input design is the process of converting user-originated inputs into a computer-based format. Input design is one of the most expensive phases of the operation of a computerized system and is often a major problem area of a system.

 

Inputs:

§        Import Test case file into Test Suite tool.

§        Function level calculation

§        Statement level calculation

§        Error Calculation in the Source code

 


Output Design

          Output design generally refers to the results and information generated by the system. For many end-users, the output is the main reason for developing the system and the basis on which they evaluate the usefulness of the application. In any system, the output design determines the input to be given to the application.

 

Expected Outputs:

·        Find out the number of statements.

·        Function level calculation in the source code.

·        Find out the errors during compilation.

·        We have empirically evaluated several test case filtering techniques that are based on exercising complex information flows; these include both coverage-based and profile-distribution-based filtering techniques. They were compared, with respect to their effectiveness for revealing defects, to simple random sampling and to filtering techniques based on exercising simpler program elements, including basic blocks, branches, function calls, call pairs, and def-use pairs.

·        Both coverage-maximization and distribution-based filtering techniques were more effective overall than simple random sampling, although the latter performed well in one case in which failures comprised a relatively large proportion of the test suite.

 


Normalization

          Normalization is the process of converting a relation to a standard form.  The process is used to handle the problems that can arise due to data redundancy, i.e., repetition of data in the database, to maintain data integrity, and to handle the problems that can arise from insertion, update, and deletion anomalies.

 

          Decomposition is the process of splitting relations into multiple relations to eliminate anomalies and maintain data integrity.  To do this we use normal forms, or rules for structuring relations.

 

Insertion anomaly: Inability to add data to the database due to absence of other data.

 

Deletion anomaly: Unintended loss of data due to deletion of other data.

 

Update anomaly: Data inconsistency resulting from data redundancy and partial update

 

Normal Forms:  These are the rules for structuring relations that eliminate anomalies.

 

First normal form:

A relation is said to be in first normal form if the values in the relation are atomic for every attribute in the relation.  By this we mean simply that no attribute value can be a set of values or, as it is sometimes expressed, a repeating group.

Second normal form:

      A relation is said to be in second normal form if it is in first normal form and satisfies any one of the following rules:

1)    The primary key is not a composite primary key.

2)    No non-key attributes are present.

3)    Every non-key attribute is fully functionally dependent on the full set of primary key attributes.

 

Third normal form:

A relation is said to be in third normal form if there exist no transitive dependencies.

 

Transitive Dependency:  If two non-key attributes depend on each other as well as on the primary key, then they are said to be transitively dependent.

 

          The above normalization principles were applied to decompose the data into multiple tables, thereby ensuring that the data is maintained in a consistent state.

 

 

 

 

 

 

3. SOFTWARE / HARDWARE REQUIREMENTS

 

 

Hardware Requirements

         SYSTEM                       : Pentium IV 2.4 GHz

         HARD DISK                 : 40 GB

         RAM                              : 256 MB

 

Software Requirements

         Operating system            : Windows XP Professional

         Technology                    : Microsoft Visual Studio .Net 2008

         Coding Language           : C#

         Front End                       : ASP.Net

         Back End                       : SQL Server 2005

 

SOFTWARE REQUIREMENT SPECIFICATIONS

Introduction

Scope: The main scope of the project is to detect credit card fraud using a Hidden Markov Model.

 

Purpose: The purpose of this project is to provide more security for credit cards and to prevent frauds carried out through them.

 

Objective: In this paper, we model the sequence of operations in credit card transaction processing using a Hidden Markov Model (HMM) and show how it can be used for the detection of frauds. An HMM is initially trained with the normal behavior of a cardholder.

          If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected.

 

Overview

Credit-card-based purchases can be categorized into two types: physical-card and virtual-card purchases. In a physical-card-based purchase, the cardholder presents his card physically to a merchant to make a payment. To carry out fraudulent transactions in this kind of purchase, an attacker has to steal the credit card. If the cardholder does not realize the loss of the card, it can lead to a substantial financial loss to the credit card company. In the second kind of purchase, only some important information about the card (card number, expiration date, secure code) is required to make the payment. Such purchases are normally done on the Internet or over the telephone.

 

To commit fraud in these types of purchases, a fraudster simply needs to know the card details. Most of the time, the genuine cardholder is not aware that someone else has seen or stolen his card information. The only way to detect this kind of fraud is to analyze the spending patterns on every card and to figure out any inconsistency with respect to the “usual” spending patterns. Fraud detection based on the analysis of the cardholder's existing purchase data is a promising way to reduce the rate of successful credit card frauds.


Since humans tend to exhibit specific behavioral profiles, every cardholder can be represented by a set of patterns containing information about the typical purchase category, the time since the last purchase, the amount of money spent, etc. Deviation from such patterns is a potential threat to the system.

 

Tools: In this project we used the Pacestar UML diagramming tool.

 

E-R DIAGRAM

 

·        The relations within the system are structured through a conceptual ER diagram, which specifies not only the existing entities but also the standard relationships through which the system operates and the cardinalities that are necessary for the system state to be maintained.

 

·        The Entity Relationship Diagram (ERD) depicts the relationships between the data objects. The ERD is the notation used to conduct the data modeling activity; the attributes of each data object noted in the ERD can be described using a data object description.

 

·        The primary components identified by the ERD are:

·        Data object           

·        Relationships

·        Attributes             

·        Various types of indicators.

 

          The primary purpose of the ERD is to represent data objects and their relationships.

E-R Diagram

 

 

 


DATA FLOW DIAGRAMS

A data flow diagram is a graphical tool used to describe and analyze the movement of data through a system.  It is the central tool and the basis from which the other components are developed.  The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system.  Such diagrams are known as logical data flow diagrams.  The physical data flow diagrams show the actual implementation and movement of data between people, departments, and workstations. 

 

A full description of a system actually consists of a set of data flow diagrams.  The diagrams are developed using two familiar notations: Yourdon and Gane & Sarson.  Each component in a DFD is labeled with a descriptive name, and each process is further identified with a number that is used for identification purposes.  DFDs are developed in several levels: each process in a lower-level diagram can be broken down into a more detailed DFD at the next level.  The top-level diagram is often called the context diagram. It consists of a single process, which plays a vital role in studying the current system.  The process in the context-level diagram is exploded into other processes in the first-level DFD.

DFD SYMBOLS:

In the DFD, there are four symbols:

1.     A square defines a source (originator) or destination of system data.

2.     An arrow identifies data flow.  It is the pipeline through which the information flows

3.     A circle or a bubble represents a process that transforms incoming data flow into outgoing data flows.

4.     An open rectangle is a data store, data at rest or a temporary repository of data

CONSTRUCTING A DFD:

Several rules of thumb are used in drawing DFDs:

1.     Processes should be named and numbered for easy reference.  Each name should be representative of the process.

 

2.     The direction of flow is from top to bottom and from left to right.  Data traditionally flow from the source to the destination, although they may flow back to the source.  One way to indicate this is to draw a long flow line back to the source.  An alternative way is to repeat the source symbol as a destination; since it is used more than once in the DFD, it is marked with a short diagonal.

 

3.     When a process is exploded into lower level details, they are numbered.

 

4.     The names of data stores and destinations are written in capital letters. Process and data flow names have the first letter of each word capitalized.

 

          A DFD typically shows the minimum contents of a data store.  Each data store should contain all the data elements that flow in and out.

 

Questionnaires should contain all the data elements that flow in and out.  Missing interfaces, redundancies, and the like are then accounted for, often through interviews.

 


Salient Features of DFDs

1.     The DFD shows the flow of data, not of control; loops and decisions are control considerations and do not appear on a DFD.

2.     The DFD does not indicate the time factor involved in any process, i.e., whether the data flow takes place daily, weekly, monthly, or yearly.

3.     The sequence of events is not brought out on the DFD.

 

 

UML DIAGRAMS

The Unified Modeling Language (UML) is a standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems.  The UML is a very important part of developing object-oriented software and the software development process.  The UML uses mostly graphical notations to express the design of software projects.  Using the UML helps project teams communicate, explore potential designs, and validate the architectural design of the software.

Goals of UML

The primary goals in the design of the UML were:

1.     Provide users with a ready-to-use, expressive visual modeling language so they can develop and exchange meaningful models.

2.     Provide extensibility and specialization mechanisms to extend the core concepts.

3.     Be independent of particular programming languages and development processes.

4.     Provide a formal basis for understanding the modeling language.

5.     Encourage the growth of the OO tools market.

6.     Support higher-level development concepts such as collaborations, frameworks, patterns and components.

7.     Integrate best practices.

  

Use Case Diagrams:

          A use case is a set of scenarios describing an interaction between a user and a system.  A use case diagram displays the relationships among actors and use cases.  The two main components of a use case diagram are use cases and actors.

 

Class Diagram:

          Class diagrams are widely used to describe the types of objects in a system and their relationships.  Class diagrams model class structure and contents using design elements such as classes, packages and objects.  Class diagrams describe three different perspectives when designing a system: conceptual, specification, and implementation.

          These perspectives become evident as the diagram is created and help solidify the design.

 

Sequence diagrams:

          Sequence diagrams demonstrate the behavior of objects in a use case by describing the objects and the messages they pass.  The diagrams are read left to right and descending.  For example, an object of class 1 may start the behavior by sending a message to an object of class 2, with messages passing between the different objects until the object of class 1 receives the final message.

 

Collaboration diagrams:

          Collaboration diagrams are also relatively easy to draw.  They show the relationship between objects and the order of messages passed between them.  The objects are listed as icons and arrows indicate the messages being passed between them. The numbers next to the messages are called sequence numbers.  As the name suggests, they show the sequence of the messages as they are passed between the objects.  There are many acceptable sequence numbering schemes in UML.  A simple 1, 2, 3... format can be used.

 

State Diagrams:

          State diagrams are used to describe the behavior of a system.  State diagrams describe all of the possible states of an object as events occur.  Each diagram usually represents objects of a single class and tracks the different states of its objects through the system. 


Activity Diagrams:

          Activity diagrams describe the workflow behavior of a system.  Activity diagrams are similar to state diagrams because activities are the state of doing something.  The diagrams describe the state of activities by showing the sequence of activities performed.  Activity diagrams can show activities that are conditional or parallel.

 

 

 

 

 

DATA DICTIONARY

          The data dictionary consists of descriptions of all the data used in the system. It contains the logical characteristics of the current system's data stores, including name, description, aliases, contents, and organization. The data dictionary serves as the basis for identifying database requirements during system design. It is a catalog, a repository of the elements in the system.

 

          The data dictionary is used to manage the details of a large system, to communicate a common meaning for all system elements, to document the features of the system, and to locate errors and omissions in the system. The data dictionary contains two types of descriptions for the data flowing through the system: attributes and tables. Attributes are grouped together to make up tables. The most fundamental data level is the attribute; a table is a set of data items, related to one another, that collectively describe a component of the system. The description of an attribute consists of the data name, data description, aliases, length, and data values. The description of a data structure consists of sequence, selection, iteration, and optional relationships.


 4. SYSTEM IMPLEMENTATION

 

Methodology:

Waterfall - Software Development Model

Software products are oriented towards customers like any other engineering products. A product is either driven by the market or it drives the market. Customer satisfaction was the main aim in the 1980s; customer delight is today's logo, and customer ecstasy is the new buzzword of the new millennium. Products which are not customer oriented have no place in the market, even if they are designed using the best technology. The front end of the product is as crucial as the internal technology of the product.

 

A market study is necessary to identify potential customers' needs. This process is also called market research. The already existing needs and the possible future needs are combined together for study. A lot of assumptions are made during a market study, and assumptions are very important factors in the development or start of a product's development. Assumptions that are not realistic can cause the entire venture to nosedive. Although assumptions are conceptual, there should be an effort to develop tangible assumptions in order to move towards a successful product.

Once the market study is done, the customer's needs are given to the Research and Development department to develop a cost-effective system that could potentially solve the customer's needs better than the competitors. Once the system is developed and tested in a hypothetical environment, the development team takes control of it. The development team adopts one of the software development models to develop the proposed system and delivers it to the customers.

 

This model has the following activities.

1.     Software Requirements Analysis

2.     Systems Analysis and Design

3.     Code Generation

4.     Testing

5.     Maintenance

 

 

 

1) Software Requirement Analysis

Software requirement analysis is also known as the feasibility study. In this requirement analysis phase, the development team visits the customer and studies their system requirements. They examine the need for possible software automation in the given software system. After the feasibility study, the development team provides a document that holds the specific recommendations for the candidate system. It also contains personnel assignments, costs of the system, the project schedule, and target dates.


          The requirements analysis and information gathering process is intensified and focused specially on software.

          To understand what type of program is to be built, the system analyst must study the information domain for the software as well as understand the required function, behavior, performance, and interfacing. The main purpose of the requirement analysis phase is to find the need and to define the problem that needs to be solved.


2) System Analysis and Design

In the system analysis and design phase, the whole software development process, the overall software structure, and its layout are defined. In the case of client/server processing technology, the number of tiers required for the package architecture, the database design, the data structure design, etc. are all defined in this phase. After the design part, a software development model is created. Analysis and design are very important in the whole development cycle; any fault in the design phase can be very expensive to fix later in the software development process. In this phase, the logical system of the product is developed.




3) Code Generation

In the code generation phase, the design must be translated into a machine-readable form. If the design of the software product is done in a detailed manner, code generation can be achieved without much complication. For code generation, programming tools like compilers, interpreters, and debuggers are used. For coding purposes, different high-level programming languages like C, C++, Pascal, and Java are used. The right programming language is chosen according to the type of application.

 

4) Testing

          After the code generation phase, software program testing begins. Different testing methods are available to detect the bugs that were introduced during the previous phases. A number of testing tools and methods are already available for this purpose.


5) Maintenance

          Software will definitely undergo change once it is delivered to the customer. There are many reasons for change; for example, change could happen due to some unexpected input values into the system.

          In addition, changes in the system directly affect the software operations. The software should be implemented to accommodate changes that could happen during the post-development period.

 

MAINTENANCE

          The objective of this maintenance work is to make sure that the system keeps working at all times without any bugs. Provision must be made for environmental changes which may affect the computer or the software system. This is called the maintenance of the system. Nowadays there is rapid change in the software world, and the system should be capable of adapting to these changes. In our project, processes can be added without affecting other parts of the system. Maintenance plays a vital role. The system is liable to accept any modification after its implementation. This system has been designed to accommodate all new changes, and doing so will not affect the system’s performance or its accuracy.

 

 

 

TECHNOLOGIES USED:

Overview of the .NET Framework

          The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet. The .NET Framework is designed to fulfill the following objectives:

·        To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.

 

·        To provide a code-execution environment that minimizes software deployment and versioning conflicts.

 

·        To provide a code-execution environment that guarantees safe execution of code, including code created by an unknown or semi-trusted third party.

 

·        To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.

 

·        To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.

 

·        To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.

 

The .NET Framework has two main components: the common language runtime and the .NET Framework class library. The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that ensure security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code. The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services.

 

          The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts.

 

          For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable Web Forms applications and XML Web services, both of which are discussed later in this topic.

          Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and secure isolated file storage.

 

Features of the Common Language Runtime

          The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime.

 

          With regards to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application.

 

          The runtime enforces code access security. For example, users can trust that an executable embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature rich.

          The runtime also enforces code robustness by implementing a strict type- and code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing. The various Microsoft and third-party language compilers generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety.

 

          In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. This automatic memory management resolves the two most common application errors, memory leaks and invalid memory references.

 

          The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of choice, yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so. Language compilers that target the .NET Framework make the features of the .NET Framework available to existing code written in that language, greatly easing the migration process for existing applications.

 

          While the runtime is designed for the software of the future, it also supports software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs.

          The runtime is designed to enhance performance. Although the common language runtime provides many standard runtime services, managed code is never interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in the native machine language of the system on which it is executing. Meanwhile, the memory manager removes the possibilities of fragmented memory and increases memory locality-of-reference to further increase performance.

 

          Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure enables you to use managed code to write your business logic, while still enjoying the superior performance of the industry's best enterprise servers that support runtime hosting.

 

Common Type System

          The common type system defines how types are declared, used, and managed in the runtime, and is also an important part of the runtime's support for cross-language integration. The common type system performs the following functions:

 

          Establishes a framework that enables cross-language integration, type safety, and high performance code execution. Provides an object-oriented model that supports the complete implementation of many programming languages.

 

          Defines rules that languages must follow, which helps ensure that objects written in different languages can interact with each other.

In This Section Common Type System Overview

          Describes concepts and defines terms relating to the common type system.

 

Type Definitions

Describes user-defined types.

Type Members

          Describes events, fields, nested types, methods, and properties, and concepts such as member overloading, overriding, and inheritance.

 

Value Types: Describes built-in and user-defined value types.

 

Classes: Describes the characteristics of common language runtime classes.

 

Delegates: Describes the delegate object, which is the managed alternative to unmanaged    function pointers.

 

Arrays: Describes common language runtime array types.

 

Interfaces: Describes characteristics of interfaces and the restrictions on interfaces imposed by the common language runtime.

 

Pointers: Describes managed pointers, unmanaged pointers, and unmanaged function pointers.

Related Sections

.NET Framework Class Library

          Provides a reference to the classes, interfaces, and value types included in the Microsoft .NET Framework SDK.

 

Common Language Runtime

          Describes the run-time environment that manages the execution of code and provides application development services.

 

Cross-Language Interoperability

          The common language runtime provides built-in support for language interoperability. However, this support does not guarantee that developers using another programming language can use code you write. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.

 

          This section describes the common language runtime's built-in support for language interoperability and explains the role that the CLS plays in enabling guaranteed cross-language interoperability. CLS features and rules are identified and CLS compliance is discussed.

 

In This Section

Language Interoperability

          Describes built-in support for cross-language interoperability and introduces the Common Language Specification.

What is the Common Language Specification?

          Explains the need for a set of features common to all languages and identifies CLS rules and features.

 

Writing CLS-Compliant Code

          Discusses the meaning of CLS compliance for components and identifies levels of CLS compliance for tools.

 

Common Type System

          Describes how types are declared, used, and managed by the common language runtime.

 

Metadata and Self-Describing Components

          Explains the common language runtime's mechanism for describing a type and storing that information with the type itself.

 

.NET Framework Class Library

          The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is object oriented, providing types from which your own managed code can derive functionality. This not only makes the .NET Framework types easy to use, but also reduces the time associated with learning new features of the .NET Framework. In addition, third-party components can integrate seamlessly with classes in the .NET Framework.

 

          For example, the .NET Framework collection classes implement a set of interfaces that you can use to develop your own collection classes. Your collection classes will blend seamlessly with the classes in the .NET Framework.

ADO.NET Overview

          ADO.NET is an evolution of the ADO data access model that directly addresses customer requirements for developing scalable applications. It was designed specifically for the web with scalability, statelessness, and XML in mind.

 

          ADO.NET uses some ADO objects, such as the Connection and Command objects, and also introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and DataAdapter.

 

          The important distinction between this evolved stage of ADO.NET and previous data architectures is that there exists an object -- the Dataset -- that is separate and distinct from any data stores. Because of that, the Dataset functions as a standalone entity. You can think of the DataSet as an always disconnected record set that knows nothing about the source or destination of the data it contains. Inside a Dataset, much like in a database, there are tables, columns, relationships, constraints, views, and so forth.

 

          A Data Adapter is the object that connects to the database to fill the Dataset. Then, it connects back to the database to update the data there, based on operations performed while the Dataset held the data. In the past, data processing has been primarily connection-based. Now, in an effort to make multi-tiered apps more efficient, data processing is turning to a message-based approach that revolves around chunks of information. At the center of this approach is the Data Adapter, which provides a bridge to retrieve and save data between a Dataset and its source data store.

          It accomplishes this by means of requests to the appropriate SQL commands made against the data store.

 

          The XML-based Dataset object provides a consistent programming model that works with all models of data storage: flat, relational, and hierarchical. It does this by having no 'knowledge' of the source of its data, and by representing the data that it holds as collections and data types. No matter what the source of the data within the Dataset is, it is manipulated through the same set of standard APIs exposed through the Dataset and its subordinate objects.

 

          While the Dataset has no knowledge of the source of its data, the managed provider has detailed and specific information. The role of the managed provider is to connect, fill, and persist the Dataset to and from data stores. The OLE DB and SQL Server .NET Data Providers (System.Data.OleDb and System.Data.SqlClient) that are part of the .Net Framework provide four basic objects: the Command, Connection, DataReader and DataAdapter. In the remaining sections of this document, we'll walk through each part of the DataSet and the OLE DB/SQL Server .NET Data Providers explaining what they are, and how to program against them.

 

          The following sections will introduce you to some objects that have evolved, and some that are new. These objects are:

·         Connections. For connection to and managing transactions against a database.

·         Commands. For issuing SQL commands against a database.

·         DataReaders. For reading a forward-only stream of data records from a SQL Server data source.

·         DataSets. For storing, remoting and programming against flat data, XML data and relational data.

·         DataAdapters. For pushing data into a DataSet, and reconciling data against a database.

 

When dealing with connections to a database, there are two different options: SQL Server .NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider (System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider. These are written to talk directly to Microsoft SQL Server. The OLE DB .NET Data Provider is used to talk to any OLE DB provider (as it uses OLE DB underneath).

 

Connections

          Connections are used to 'talk to' databases, and are represented by provider-specific classes such as SqlConnection. Commands travel over connections, and result sets are returned in the form of streams which can be read by a DataReader object or pushed into a DataSet object.
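
As a minimal illustration (the server and database names in the connection string are placeholders, not this project's configuration), a connection with the SQL Server .NET Data Provider can be opened and disposed as follows:

    using System.Data.SqlClient;

    public static class ConnectionExample
    {
        // The data source and catalog below are placeholders.
        private const string ConnectionString =
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI";

        public static void Open()
        {
            using (SqlConnection connection = new SqlConnection(ConnectionString))
            {
                connection.Open();
                // ... issue commands or fill a DataSet here ...
            }   // the connection is closed automatically when disposed
        }
    }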

 

Commands

          Commands contain the information that is submitted to a database, and are represented by provider-specific classes such as SQLCommand. A command can be a stored procedure call, an UPDATE statement, or a statement that returns results. You can also use input and output parameters, and return values as part of your command syntax. The example below shows how to issue an INSERT statement against the Northwind database.
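
The original sample is not reproduced in this document; the following is a comparable sketch that issues a parameterized INSERT against the standard Northwind Customers table. The connection string and the inserted values are illustrative.

    using System.Data.SqlClient;

    public static class InsertExample
    {
        public static int InsertCustomer()
        {
            string connectionString =
                "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI";

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(
                "INSERT INTO Customers (CustomerID, CompanyName) VALUES (@id, @name)",
                connection))
            {
                // Parameters handle quoting and help prevent SQL injection.
                command.Parameters.AddWithValue("@id", "NEWCO");
                command.Parameters.AddWithValue("@name", "New Company");

                connection.Open();
                return command.ExecuteNonQuery();   // number of rows inserted
            }
        }
    }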

DataReaders

          The DataReader object is somewhat synonymous with a read-only/forward-only cursor over data. The DataReader API supports flat as well as hierarchical data. A DataReader object is returned after executing a command against a database. The format of the returned DataReader object is different from a recordset. For example, you might use the DataReader to show the results of a search list in a web page.
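
A comparable sketch of reading such a forward-only stream with a SqlDataReader is shown below; it reuses the illustrative Northwind connection from the previous example.

    using System;
    using System.Data.SqlClient;

    public static class DataReaderExample
    {
        public static void ListCustomers()
        {
            string connectionString =
                "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI";

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM Customers", connection))
            {
                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    // Forward-only, read-only traversal of the result stream.
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}: {1}",
                            reader["CustomerID"], reader["CompanyName"]);
                    }
                }
            }
        }
    }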

 

DataSets and DataAdapters

DataSets
         
The DataSet object is similar to the ADO Recordset object, but more powerful, and with one other important distinction: the DataSet is always disconnected. The DataSet object represents a cache of data, with database-like structures such as tables, columns, relationships, and constraints. However, though a DataSet can and does behave much like a database, it is important to remember that DataSet objects do not interact directly with databases or other source data. This allows the developer to work with a programming model that is always consistent, regardless of where the source data resides. Data coming from a database, an XML file, from code, or from customer input can all be placed into DataSet objects. Then, as changes are made to the DataSet, they can be tracked and verified before updating the source data. The GetChanges method of the DataSet object actually creates a second DataSet that contains only the changes to the data. This DataSet is then used by a DataAdapter (or other objects) to update the original data source.


          The DataSet has many XML characteristics, including the ability to produce and consume XML data and XML schemas. XML schemas can be used to describe schemas interchanged via WebServices. In fact, a DataSet with a schema can actually be compiled for type safety and statement completion.

 

DataAdapters (OLEDB/SQL)

          The DataAdapter object works as a bridge between the DataSet and the source data. Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and SqlConnection) can increase overall performance when working with Microsoft SQL Server databases. For other OLE DB-supported databases, you would use the OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection objects.

 

          The DataAdapter object uses commands to update the data source after changes have been made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command; using the Update method calls the INSERT, UPDATE or DELETE command for each changed row. You can explicitly set these commands in order to control the statements used at runtime to resolve changes, including the use of stored procedures. For ad-hoc scenarios, a Command Builder object can generate these at run-time based upon a select statement. However, this run-time generation requires an extra round-trip to the server in order to gather required metadata, so explicitly providing the INSERT, UPDATE, and DELETE commands at design time will result in better run-time performance.
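
The sketch below ties these pieces together under the same illustrative Northwind assumptions: a SqlDataAdapter fills a disconnected DataSet, a row is modified in memory, and Update pushes the change back. A SqlCommandBuilder generates the modification commands at run time, which is the ad-hoc approach mentioned above; supplying explicit commands at design time would perform better.

    using System.Data;
    using System.Data.SqlClient;

    public static class DataAdapterExample
    {
        public static void RenameFirstCustomer()
        {
            string connectionString =
                "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI";

            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                SqlDataAdapter adapter = new SqlDataAdapter(
                    "SELECT CustomerID, CompanyName FROM Customers", connection);

                // Generates INSERT/UPDATE/DELETE commands at run time (ad-hoc scenario).
                SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

                DataSet dataSet = new DataSet();
                adapter.Fill(dataSet, "Customers");      // SELECT executes here

                // Work with the disconnected data in memory.
                DataRow firstRow = dataSet.Tables["Customers"].Rows[0];
                firstRow["CompanyName"] = "Renamed Company";

                adapter.Update(dataSet, "Customers");    // UPDATE executes here
            }
        }
    }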

 

1.     ADO.NET is the next evolution of ADO for the .Net Framework.

2.     ADO.NET was created with n-Tier, statelessness and XML in the forefront. Two new objects, the DataSet and DataAdapter, are provided for these scenarios.

3.     ADO.NET can be used to get data from a stream, or to store data in a cache for updates.

4.     There is a lot more information about ADO.NET in the documentation.

5.     Remember, you can execute a command directly against the database in order to do inserts, updates, and deletes. You don't need to first put data into a DataSet in order to insert, update, or delete it.

6.     Also, you can use a DataSet to bind to the data, move through the data, and navigate data relationships

 

ASP.NET

Server Application Development

Server-side applications in the managed world are implemented through runtime hosts. Unmanaged applications host the common language runtime, which allows your custom managed code to control the behavior of the server. This model provides you with all the features of the common language runtime and class library while gaining the performance and scalability of the host server.

 

The following illustration shows a basic network schema with managed code running in different server environments. Servers such as IIS and SQL Server can perform standard operations while your application logic executes through the managed code.

SERVER-SIDE MANAGED CODE

ASP.NET is the hosting environment that enables developers to use the .NET Framework to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a complete architecture for developing Web sites and Internet-distributed objects using managed code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing mechanism for applications, and both have a collection of supporting classes in the .NET Framework.

 

XML Web services, an important evolution in Web-based technology, are distributed, server-side application components similar to common Web sites. However, unlike Web-based applications, XML Web services components have no UI and are not targeted for browsers such as Internet Explorer and Netscape Navigator. Instead, XML Web services consist of reusable software components designed to be consumed by other applications, such as traditional client applications, Web-based applications, or even other XML Web services. As a result, XML Web services technology is rapidly moving application development and deployment into the highly distributed environment of the Internet.

 

If you have used earlier versions of ASP technology, you will immediately notice the improvements that ASP.NET and Web Forms offer. For example, you can develop Web Forms pages in any language that supports the .NET Framework. In addition, your code no longer needs to share the same file with your HTTP text (although it can continue to do so if you prefer). Web Forms pages execute in native machine language because, like any other managed application, they take full advantage of the runtime.

In contrast, unmanaged ASP pages are always scripted and interpreted. ASP.NET pages are faster, more functional, and easier to develop than unmanaged ASP pages because they interact with the runtime like any managed application.

 

The .NET Framework also provides a collection of classes and tools to aid in development and consumption of XML Web services applications. XML Web services are built on standards such as SOAP (a remote procedure-call protocol), XML (an extensible data format), and WSDL ( the Web Services Description Language). The .NET Framework is built on these standards to promote interoperability with non-Microsoft solutions.

 

For example, the Web Services Description Language tool included with the .NET Framework SDK can query an XML Web service published on the Web, parse its WSDL description, and produce C# or Visual Basic source code that your application can use to become a client of the XML Web service. The source code can create classes derived from classes in the class library that handle all the underlying communication using SOAP and XML parsing. Although you can use the class library to consume XML Web services directly, the Web Services Description Language tool and the other tools contained in the SDK facilitate your development efforts with the .NET Framework.

 

If you develop and publish your own XML Web service, the .NET Framework provides a set of classes that conform to all the underlying communication standards, such as SOAP, WSDL, and XML. Using those classes enables you to focus on the logic of your service, without concerning yourself with the communications infrastructure required by distributed software development.

          Finally, like Web Forms pages in the managed environment, your XML Web service will run with the speed of native machine language using the scalable communication of IIS.

 

ACTIVE SERVER PAGES.NET

ASP.NET is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications. ASP.NET offers several important advantages over previous Web development models:

 

·        Enhanced Performance. ASP.NET is compiled common language runtime code running on the server. Unlike its interpreted predecessors, ASP.NET can take advantage of early binding, just-in-time compilation, native optimization, and caching services right out of the box. This amounts to dramatically better performance before you ever write a line of code.

 

·        World-Class Tool Support. The ASP.NET framework is complemented by a rich toolbox and designer in the Visual Studio integrated development environment. WYSIWYG editing, drag-and-drop server controls, and automatic deployment are just a few of the features this powerful tool provides.

 

·        Power and Flexibility. Because ASP.NET is based on the common language runtime, the power and flexibility of that entire platform is available to Web application developers. The .NET Framework class library, Messaging, and Data Access solutions are all seamlessly accessible from the Web. ASP.NET is also language-independent, so you can choose the language that best applies to your application or partition your application across many languages. Further, common language runtime interoperability guarantees that your existing investment in COM-based development is preserved when migrating to ASP.NET.

 

·        Simplicity. ASP.NET makes it easy to perform common tasks, from simple form submission and client authentication to deployment and site configuration. For example, the ASP.NET page framework allows you to build user interfaces that cleanly separate application logic from presentation code and to handle events in a simple, Visual Basic - like forms processing model. Additionally, the common language runtime simplifies development, with managed code services such as automatic reference counting and garbage collection.

 

·        Manageability. ASP.NET employs a text-based, hierarchical configuration system, which simplifies applying settings to your server environment and Web applications. Because configuration information is stored as plain text, new settings may be applied without the aid of local administration tools. This "zero local administration" philosophy extends to deploying ASP.NET Framework applications as well. An ASP.NET Framework application is deployed to a server simply by copying the necessary files to the server. No server restart is required, even to deploy or replace running compiled code.

 

 

·        Scalability and Availability. ASP.NET has been designed with scalability in mind, with features specifically tailored to improve performance in clustered and multiprocessor environments. Further, processes are closely monitored and managed by the ASP.NET runtime, so that if one misbehaves (leaks, deadlocks), a new process can be created in its place, which helps keep your application constantly available to handle requests.

 

·        Customizability and Extensibility. ASP.NET delivers a well-factored architecture that allows developers to "plug-in" their code at the appropriate level. In fact, it is possible to extend or replace any subcomponent of the ASP.NET runtime with your own custom-written component. Implementing custom authentication or state services has never been easier.

 

·        Security. With built in Windows authentication and per-application configuration, you can be assured that your applications are secure.

 

LANGUAGE SUPPORT

The Microsoft .NET Platform currently offers built-in support for three languages: C#, Visual Basic, and JScript.

 

WHAT IS ASP.NET WEB FORMS?

The ASP.NET Web Forms page framework is a scalable common language runtime programming model that can be used on the server to dynamically generate Web pages.

 

Intended as a logical evolution of ASP (ASP.NET provides syntax compatibility with existing pages), the ASP.NET Web Forms framework has been specifically designed to address a number of key deficiencies in the previous model. In particular, it provides:

 

·        The ability to create and use reusable UI controls that can encapsulate common functionality and thus reduce the amount of code that a page developer has to write.

·        The ability for developers to cleanly structure their page logic in an orderly fashion (not "spaghetti code").

·        The ability for development tools to provide strong WYSIWYG design support for pages (existing ASP code is opaque to tools).

 

ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be deployed throughout an IIS virtual root directory tree. When a browser client requests .aspx resources, the ASP.NET runtime parses and compiles the target file into a .NET Framework class. This class can then be used to dynamically process incoming requests. (Note that the .aspx file is compiled only the first time it is accessed; the compiled type instance is then reused across multiple requests).

 

An ASP.NET page can be created simply by taking an existing HTML file and changing its file name extension to .aspx (no modification of code is required). For example, the following sample demonstrates a simple HTML page that collects a user's name and category preference and then performs a form postback to the originating page when a button is clicked:
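A minimal sketch of such a page is given below; the file name intro.aspx and the field names Name and Category are illustrative only, and the markup is ordinary HTML that needs no changes once the file is renamed with the .aspx extension:

<!-- intro.aspx: a plain HTML form that posts back to the originating page -->
<html>
  <body>
    <form action="intro.aspx" method="post">
      Name: <input name="Name" type="text" />
      Category:
      <select name="Category">
        <option>psychology</option>
        <option>business</option>
        <option>popular_comp</option>
      </select>
      <input type="submit" value="Lookup" />
    </form>
  </body>
</html>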

ASP.NET provides syntax compatibility with existing ASP pages. This includes support for <% %> code render blocks that can be intermixed with HTML content within an .aspx file. These code blocks execute in a top-down manner at page render time.
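As a rough illustration (the loop and message are chosen only for the example), a code render block can be intermixed with HTML like this:

<%@ Page Language="C#" %>
<html>
  <body>
    <!-- The block below runs top-down each time the page is rendered -->
    <% for (int i = 1; i <= 3; i++) { %>
      <font size="<%=i%>">Hello World</font> <br />
    <% } %>
  </body>
</html>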

 

CODE-BEHIND WEB FORMS

ASP.NET supports two methods of authoring dynamic pages. The first is the method shown in the preceding samples, where the page code is physically declared within the originating .aspx file. An alternative approach--known as the code-behind method--enables the page code to be more cleanly separated from the HTML content into an entirely separate file.
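A minimal sketch of the code-behind split is shown below; the file names SamplePage.aspx and SamplePage.aspx.cs and the control names are hypothetical, not taken from this project:

<%@ Page Language="C#" Inherits="SamplePage" Src="SamplePage.aspx.cs" %>
<html>
  <body>
    <form runat="server">
      <asp:Label id="MessageLabel" runat="server" />
      <asp:Button id="SubmitButton" Text="Submit" OnClick="SubmitButton_Click" runat="server" />
    </form>
  </body>
</html>

// SamplePage.aspx.cs -- page logic kept out of the HTML
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class SamplePage : Page
{
    // Fields whose names match the control IDs declared in the .aspx file
    protected Label MessageLabel;
    protected Button SubmitButton;

    protected void SubmitButton_Click(object sender, EventArgs e)
    {
        // Handle the button's server-side Click event
        MessageLabel.Text = "Submitted at " + DateTime.Now;
    }
}

With this split, designers can edit the .aspx markup freely while the event-handling logic stays in the separate class file.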

 

INTRODUCTION TO ASP.NET SERVER CONTROLS

In addition to (or instead of) using <% %> code blocks to program dynamic content, ASP.NET page developers can use ASP.NET server controls to program Web pages. Server controls are declared within an .aspx file using custom tags or intrinsic HTML tags that contain a runat="server" attribute value. Intrinsic HTML tags are handled by one of the controls in the System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of the controls is assigned the type System.Web.UI.HtmlControls.HtmlGenericControl.
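For example (the control IDs below are illustrative), a single server-side form can mix an intrinsic HTML tag promoted to a server control, a tag with no explicit mapping, and a custom Web server control tag:

<form runat="server">
  <!-- intrinsic HTML tag: handled by System.Web.UI.HtmlControls.HtmlInputText -->
  <input id="UserName" type="text" runat="server" />
  <!-- no explicit mapping: handled as HtmlGenericControl -->
  <span id="Greeting" runat="server">Welcome</span>
  <!-- custom tag: an ASP.NET Web server control -->
  <asp:Button id="GoButton" Text="Go" runat="server" />
</form>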

 

Server controls automatically maintain any client-entered values between round trips to the server. This control state is not stored on the server (it is instead stored within an <input type="hidden"> form field that is round-tripped between requests). Note also that no client-side script is required.
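In the rendered HTML this round-tripped state appears as a single hidden field named __VIEWSTATE; the value below is truncated and purely illustrative:

<input type="hidden" name="__VIEWSTATE" value="dDwtMTA4NzczMzUzMzs+..." />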

In addition to supporting standard HTML input controls, ASP.NET enables developers to utilize richer custom controls on their pages. For example, the following sample demonstrates how the <asp:adrotator> control can be used to dynamically display rotating ads on a page.
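A minimal sketch of such a page follows; the advertisement file name, image paths and URLs are illustrative. The control reads an XML advertisement file and selects an ad on each request:

<!-- Ads.xml: the advertisement file read by the control -->
<Advertisements>
  <Ad>
    <ImageUrl>images/banner1.gif</ImageUrl>
    <NavigateUrl>http://www.example.com</NavigateUrl>
    <AlternateText>Example banner</AlternateText>
    <Impressions>80</Impressions>
  </Ad>
  <Ad>
    <ImageUrl>images/banner2.gif</ImageUrl>
    <NavigateUrl>http://www.example.org</NavigateUrl>
    <AlternateText>Another banner</AlternateText>
    <Impressions>20</Impressions>
  </Ad>
</Advertisements>

<!-- In the .aspx page -->
<form runat="server">
  <asp:AdRotator id="BannerAd" AdvertisementFile="Ads.xml" runat="server" />
</form>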

 

1.     ASP.NET Web Forms provide an easy and powerful way to build dynamic Web UI.

2.     ASP.NET Web Forms pages can target any browser client (there are no script library or cookie requirements).

3.     ASP.NET Web Forms pages provide syntax compatibility with existing ASP pages.

4.     ASP.NET server controls provide an easy way to encapsulate common functionality.

5.     ASP.NET ships with 45 built-in server controls. Developers can also use controls built by third parties.

6.     ASP.NET server controls can automatically project both uplevel and downlevel HTML.

7.     ASP.NET templates provide an easy way to customize the look and feel of list server controls.

8.     ASP.NET validation controls provide an easy way to do declarative client or server data validation.

 

 

 

 

 

 


5. SYSTEM TESTING

 

Introduction

          Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. Testing presents an interesting anomaly for software development: during the earlier definition and development phases the effort is to build the software from an abstract concept into a tangible implementation, whereas testing deliberately tries to expose its faults. No system is ever completely error free; it only appears so until the next error crops up during some phase of the development or use of the product. A sincere effort, however, needs to be made to bring out a product that is satisfactory.

 

          The testing phase involves testing the developed system using various test data. Preparation of the test data plays a vital role in system testing. After preparing the test data, the system under study was tested using that data. While testing the system with the test data, errors were found and corrected using the testing steps described below, and the corrections were noted for future use. Thus, a series of tests is performed on the proposed system before it is ready for implementation.

 

TEST PLAN:

          The importance of software testing and its implications cannot be overemphasized. Software testing is a critical element of Software Quality Assurance and represents the ultimate review of the specifications, design and coding.

 

Software Testing:

          As the coding is completed according to the requirements, we have to test the quality of the software. Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. Although testing is meant to uncover errors, it also demonstrates that the software functions appear to be working as per the specification and that the performance requirements appear to have been met. In addition, data collected while testing is conducted provide a good indication of software reliability and some indication of software quality as a whole. To assure software quality we conduct both white box testing and black box testing.

 

White box testing

          White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. As we are using a non-procedural language, there is only limited scope for white box testing. Wherever necessary, the control structures were tested, and all of them passed with a minimum of errors.

 

Black box testing

          Black box testing focuses on the functional requirements of the software. It enables us to derive sets of input conditions that will fully exercise all the functional requirements of a program. Black box testing attempts to find errors such as incorrect or missing functions, interface errors, errors in accessing the database and performance errors. In black box testing we use two techniques: equivalence partitioning and boundary value analysis.

 

System testing:

System testing is designed to uncover weaknesses that were not detected in the earlier tests. The total system is tested for recovery and fallback after various major failures to ensure that no data are lost. An acceptance test is done to confirm the validity and reliability of the system. The philosophy behind testing is to find errors in the project, and many test cases are designed with this in mind. The flow of testing is as follows:

 

§        Code Testing :

          Code testing examines the logic of the program. Here the syntax of the code is tested; in code testing, syntax errors are corrected to ensure that the code is sound.

 

§        Unit Testing :

     The first level of testing is called unit testing. Here the different modules are tested against the specifications produced during the design of the modules. Unit testing is done to verify the working of individual modules with test oracles. It comprises the set of tests performed by an individual programmer prior to integration of the units into a larger system. A program unit is usually small enough that the programmer who developed it can test it in great detail. Unit testing focuses first on the modules to locate errors; these errors are then verified and corrected so that each unit fits properly into the project.
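As a small illustration of testing at this level, the sketch below exercises a hypothetical helper that maps a transaction amount to a low/medium/high observation symbol of the kind used by the HMM in this report. Both the helper and its range boundaries are assumptions made only for this example, not the project's actual code:

using System;

static class SpendingProfile
{
    // Hypothetical helper: map a transaction amount to an observation symbol.
    // 0 = low, 1 = medium, 2 = high (boundaries chosen only for illustration).
    public static int ToSymbol(decimal amount)
    {
        if (amount < 100m) return 0;
        if (amount < 500m) return 1;
        return 2;
    }
}

static class SpendingProfileTests
{
    static void Main()
    {
        // Each case pairs an input amount with the symbol the unit is expected to return.
        AssertEqual(0, SpendingProfile.ToSymbol(25m));
        AssertEqual(1, SpendingProfile.ToSymbol(250m));
        AssertEqual(2, SpendingProfile.ToSymbol(2500m));
        Console.WriteLine("All unit tests passed.");
    }

    static void AssertEqual(int expected, int actual)
    {
        if (expected != actual)
            throw new Exception("Expected " + expected + " but got " + actual);
    }
}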

 

§        System Testing :

          The next levels of testing are system testing and acceptance testing. This testing is done to check whether the system has met its requirements and to observe the external behavior of the system. System testing involves two kinds of activities:

§        Integration testing

§        Acceptance testing

 

     The next level of testing is called integration testing. Here, the individually tested modules are combined into subsystems, which are then tested. Test case data is prepared to check the control flow of all the modules and to exercise all possible inputs to the program. Situations such as a module receiving no data in an input field are also tested.

 

     This testing strategy dictates the order in which modules must be available, and it exerts a strong influence on the order in which the modules must be written, debugged and unit tested. In integration testing, all modules on which unit testing has been performed are integrated together and tested.

 

Acceptance testing:

This testing is finally performed by the user to demonstrate that the implemented system satisfies its requirements. The user gives various inputs to obtain the required outputs.

 


Specification Testing:

This is done to check whether the program does what it should do and how it behaves under various conditions and combinations of inputs submitted for processing in the system, and whether any overlaps occur during the processing.

 

Performance Time Testing:

This is done to determine how long the system takes to accept a request and respond, i.e., the total processing time when it has to handle quite a large number of records. It is essential to check the execution speed of the system: a system that runs well with only a handful of test transactions might be slow when fully loaded, so testing is done by supplying a large volume of data for processing. System testing is designed to uncover weaknesses that were not detected in the earlier tests.

 

The total system is tested for recovery and fallback after various major failures to ensure that no data is lost during an emergency. An acceptance test is done to confirm the validity and reliability of the system.


 

 

 

 

 

6. FUTURE ENHANCEMENT

 

We have used the ranges of transaction amount as the observation symbols, while the types of item have been considered to be states of the HMM. We have suggested a method for finding the spending profile of cardholders as well as application of this knowledge in deciding the value of observation symbols and initial estimate of the model parameters. It has also been explained how the HMM can detect whether an incoming transaction is fraudulent or not.
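As a rough illustration of how such a check could be implemented (a sketch under assumptions, not the exact code of the system), the fragment below computes the probability of the recent observation sequence with the forward algorithm and flags the new transaction when appending its symbol causes the probability to drop by more than a tunable threshold:

using System;
using System.Collections.Generic;

class HmmFraudCheck
{
    // Model parameters, assumed to be estimated from the cardholder's
    // past transactions (training is not shown here).
    double[,] A;    // state transition probabilities, N x N
    double[,] B;    // observation (symbol emission) probabilities, N x M
    double[] pi;    // initial state distribution, length N

    public HmmFraudCheck(double[,] a, double[,] b, double[] initial)
    {
        A = a; B = b; pi = initial;
    }

    // Forward algorithm: probability of the observation sequence under the model.
    public double SequenceProbability(IList<int> obs)
    {
        int n = pi.Length;
        double[] alpha = new double[n];
        for (int i = 0; i < n; i++) alpha[i] = pi[i] * B[i, obs[0]];

        for (int t = 1; t < obs.Count; t++)
        {
            double[] next = new double[n];
            for (int j = 0; j < n; j++)
            {
                double sum = 0.0;
                for (int i = 0; i < n; i++) sum += alpha[i] * A[i, j];
                next[j] = sum * B[j, obs[t]];
            }
            alpha = next;
        }

        double prob = 0.0;
        for (int i = 0; i < n; i++) prob += alpha[i];
        return prob;
    }

    // Flag the new transaction if sliding the window forward by one symbol
    // lowers the sequence probability by more than the chosen threshold.
    // The threshold value is a tunable assumption, not taken from this report.
    public bool IsFraudulent(List<int> recentSymbols, int newSymbol, double threshold)
    {
        double before = SequenceProbability(recentSymbols);
        if (before <= 0.0) return true;   // sequence already implausible for this cardholder

        List<int> shifted = recentSymbols.GetRange(1, recentSymbols.Count - 1);
        shifted.Add(newSymbol);
        double after = SequenceProbability(shifted);

        return (before - after) / before > threshold;
    }
}

In the report's terms, the observation symbols stand for the spending ranges derived from the transaction amount, and the hidden states for the types of items purchased.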

 

Experimental results show the performance and effectiveness of our system and demonstrate the usefulness of learning the spending profile of the cardholders. Comparative studies reveal that the accuracy of the system is close to 80 percent over a wide variation in the input data. The system is also scalable for handling large volumes of transactions.

 


 

 

 

 

 

 

 

 

 

 

 

7. CONCLUSION

 

In this paper, we have proposed an application of HMM in credit card fraud detection. The different steps in credit card transaction processing are represented as the underlying stochastic process of an HMM. We have used the ranges of transaction amount as the observation symbols, whereas the types of item have been considered to be the states of the HMM. We have suggested a method for finding the spending profile of cardholders, as well as an application of this knowledge in deciding the value of observation symbols and the initial estimate of the model parameters. It has also been explained how the HMM can detect whether an incoming transaction is fraudulent or not. Experimental results show the performance and effectiveness of our system and demonstrate the usefulness of learning the spending profile of the cardholders. Comparative studies reveal that the accuracy of the system is close to 80 percent over a wide variation in the input data. The system is also scalable for handling large volumes of transactions.


8. BIBLIOGRAPHY

 

 

·        “Statistics for General and On-Line Card Fraud,” http://www.epaynews.com/statistics/fraud.html, Mar. 2007.

·        S. Ghosh and D.L. Reilly, “Credit Card Fraud Detection with a Neural-Network,” Proc. 27th Hawaii Int’l Conf. System Sciences: Information Systems: Decision Support and Knowledge-Based Systems, vol. 3, pp. 621-630, 1994.

·        W. Fan, A.L. Prodromidis, and S.J. Stolfo, “Credit Card Fraud Detection,” IEEE Intelligent Systems, vol. 14, no. 6, pp. 67-74, 1999.

 

 

 
