Thursday, April 30, 2020

Rural Banking in Nigeria, Issues and Challenges (A Case Study of Wema Bank of Nigeria Plc)

Chapter One

1.0 Introduction

Database systems developed because of the need to store large amounts of data and to retrieve that data quickly and accurately. For example, a university library stores details about the books held and the loans taken out by students. Not very long ago, this information about the books and loans might have been stored in a box card index; nowadays, only a few decades later, students are able to view their loans online, see whether a book is available and reserve it. The library staff can quickly access statistics on overdue books and on popular books that are never on the shelves.

Another example is a company that accepts customer orders, for instance orders for spare parts for electrical goods. Originally, orders might have been created when a customer telephoned the company to place the order. If information about the customer already existed in a paper file, then his or her details would be requested and recorded. An order form would have been filled in and copied: one copy being stored in a filing cabinet; for the other, information on stock held would need to be accessed. Eventually, the order entry system was computerized, so that by the 1960s the data about customers and orders might have been stored in a computer file, first a magnetic tape file and later a magnetic disk file. These files were processed by computer programs. Other application programs were used which could create invoices, orders to suppliers and so on. Although different application software would at times require similar data, the data would be kept on different files. In both types of system, the paper one and the file system, processing was slow and inconsistencies of data could easily develop.

The introduction of shared files, whereby different applications shared some of the same files, solved some of the problems described earlier and was good for providing routine data. For example, a customer order application and an invoicing application might both use the customer and stock files and, in addition, their own files. As only one copy of each file was made available, the inconsistencies were avoided. However, this method was not efficient, as a shared file would only be available to one application at a time. Shared-file systems were also not effective in providing data for the planning and control of an organization.

In the 1960s, database systems began to emerge with the release of the IBM product IMS, a system in which the user viewed the data as a hierarchical tree. In the late sixties, database systems based on a different data model were developed; this time the user's view of the data was a network of data records. In both cases skilled programmers were required, and the users tended to be large organizations. The database approach was an improvement on the shared-file solution, as the software used to control the data was quite powerful. The software consisted of a number of components which provided facilities for acquiring data, for data security and integrity, and for simultaneous access to the data by different users. Another characteristic of database systems is that the underlying structure of the data is isolated from the actual data itself. The specification of the entire database is called the schema. There are various levels of schema; the conceptual schema, or model, is discussed below. If there is a requirement to change the structure of the data, the change is made at the schema level. Such changes are independent of both the physical storage level and the level seen by individual users. Returning to our brief history, by the 1970s the study of database systems had become a major academic and research area.
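The idea that structural changes are made once, at the schema level, without disturbing existing data or applications can be sketched with Python's built-in `sqlite3` module (the table and column names here are purely illustrative):

```python
import sqlite3

# In-memory database: the schema (structure) is declared separately
# from the data that populates it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer (name) VALUES ('Ada')")

# A structural change is made once, at the schema level; the existing
# row is untouched and simply gets NULL for the new column.
conn.execute("ALTER TABLE customer ADD COLUMN phone TEXT")

print(conn.execute("SELECT name, phone FROM customer").fetchall())
# [('Ada', None)]
```

Queries written against the old columns continue to work unchanged, which is exactly the data independence the text describes.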
The relational model was first proposed in 1970 by Ted Codd in a series of pioneering papers. The theory underpinning relational databases is derived from the mathematical principles of set theory and predicate logic. The model is based on the familiar concepts of tables, rows and columns, and the manipulation of these tables is achieved through a collection of simple and well-understood set-theory operators. The query language SQL, based on relational algebra, was developed and has become the most important query language for relational databases. The first commercial relational product was Oracle's DBMS, released in 1980. The relational model has been successfully adopted for transaction processing in numerous organizations and supports most of the major database systems in commercial use today. Its ability to handle simple data types efficiently, its powerful query language, and its good protection of data from programming errors make it an effective model.

1.1 Statement of the Problem/Limitations

• DBMSs are expensive products, complex and quite different from many other software tools. Their introduction therefore represents a considerable investment, both direct and indirect.
• DBMSs provide, in standardized form, a whole set of services, which necessarily carry a cost. In cases where some of these services are not needed, it is difficult to extract the services actually required from the others, and this can generate inefficiencies.

1.2 Purpose of the Study

Why use a database system? What are the advantages? To some extent the answer to these questions depends on whether the system is single-user or multi-user. Let us use a wine cellar as an example of a single-user case.
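The tables, rows and columns of the relational model, and SQL as its query language, can be illustrated with a small `sqlite3` sketch built around the customer-orders example from the introduction (names and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders   (order_id INTEGER PRIMARY KEY, cust_id INTEGER, part TEXT);
    INSERT INTO customer VALUES (1, 'Ada'), (2, 'Bob');
    INSERT INTO orders   VALUES (10, 1, 'fuse'), (11, 1, 'plug');
""")

# A join: one of the set-theoretic operators the relational model rests on.
# The query is declarative -- it says WHAT rows are wanted, not how to find them.
rows = sorted(conn.execute("""
    SELECT customer.name, orders.part
    FROM customer JOIN orders ON customer.cust_id = orders.cust_id
""").fetchall())
print(rows)  # [('Ada', 'fuse'), ('Ada', 'plug')]
```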
Its database is so small and simple that the advantages might not be all that obvious, but imagine a similar database for a large restaurant, with a stock of perhaps thousands of bottles and very frequent changes to that stock, or think of a liquor store, with again a very large stock and a high turnover on that stock. The advantages of a database system over traditional, paper-based methods of record-keeping are perhaps easier to see in these cases.

• Compactness: there is no need for possibly voluminous paper files.
• Speed: the machine can retrieve and update data far faster than a human can, in particular for ad hoc, spur-of-the-moment queries.
• Less drudgery: much of the sheer tedium of maintaining records by hand is eliminated; mechanical tasks are always better done by machines.
• Currency: accurate, up-to-date information is available on demand at any time.

1.3 Research Question

Consider a hospital information system with the following characteristics. A patient can either be a resident patient, who is admitted to the hospital, or an outpatient, who comes to the hospital for an outpatient clinic.

• For both types of patient we will need to hold the patient's name, telephone number, address, date of birth and the patient's doctor.
• For resident patients we will need to hold the name of the ward in which the patient is currently residing, the admission date of the patient, and also information about any operations that the patient has had.
• For outpatients, we will need to hold information about the outpatient appointments: the appointment date and time.

1.4 Significance of the Study

• Redundancy can be reduced: in non-database systems each application has its own private files. This fact can often lead to considerable redundancy in stored data, with a resultant waste in storage space. For example, two different applications might each own a file that includes departmental information for employees.
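One possible schema for the hospital system described above, sketched in `sqlite3`, puts the shared attributes in a `patient` table and the type-specific attributes in their own tables (all table and column names are assumptions for illustration, not a prescribed answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- attributes common to every patient
    CREATE TABLE patient (
        patient_id    INTEGER PRIMARY KEY,
        name          TEXT,
        telephone     TEXT,
        address       TEXT,
        date_of_birth TEXT,
        doctor        TEXT
    );
    -- attributes held only for resident patients
    CREATE TABLE resident_patient (
        patient_id     INTEGER REFERENCES patient(patient_id),
        ward_name      TEXT,
        admission_date TEXT
    );
    -- attributes held only for outpatients
    CREATE TABLE outpatient_appointment (
        patient_id       INTEGER REFERENCES patient(patient_id),
        appointment_date TEXT,
        appointment_time TEXT
    );
""")
```

Information about operations for resident patients could be modelled the same way, as a further table referencing `patient_id`.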
As suggested, those two files can be integrated and the redundancy eliminated, so long as the data administrator is aware of the data requirements of both applications, i.e. so long as the enterprise has the necessary overall control. Incidentally, we do not mean to suggest that all redundancy can or necessarily should be eliminated. Sometimes there are sound business or technical reasons for maintaining several distinct copies of the same data. However, we do mean to suggest that any such redundancy should be carefully controlled; that is, the DBMS should be aware of it, if it exists, and should assume responsibility for propagating updates.

• Inconsistency can be avoided: this is really a corollary of the previous point. Suppose a given fact about the real world, say the fact that employee E3 works in department D8, is represented by two distinct entries in the database. Suppose also that the DBMS is not aware of this duplication. There will necessarily be occasions on which the two entries do not agree, namely when one of the two has been updated and the other has not. At such times the database is said to be inconsistent. Clearly, a database that is in an inconsistent state is capable of supplying incorrect or contradictory information to its users.

• Transaction support can be provided: a transaction is a logical unit of work, typically involving several database operations. The standard example involves the transfer of a cash amount from one account A to another account B. Clearly two updates are required here, one to withdraw the cash from account A and the other to deposit it to account B. If the user has stated that the two updates are part of the same transaction, then the system can effectively guarantee that either both of them are done or neither is, even if the system fails halfway through the process.
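The account-transfer example above can be sketched with `sqlite3`: the two updates are committed together, and on any failure the whole unit of work is rolled back, so the database never holds only one half of the transfer (account names and balances are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [("A", 100), ("B", 50)])
conn.commit()

def transfer(amount):
    """Withdraw from A and deposit to B as one logical unit of work."""
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE name = 'A'",
                     (amount,))
        conn.execute("UPDATE account SET balance = balance + ? WHERE name = 'B'",
                     (amount,))
        conn.commit()    # reached only if BOTH updates succeeded
    except Exception:
        conn.rollback()  # otherwise NEITHER update takes effect
        raise

transfer(30)
print(dict(conn.execute("SELECT name, balance FROM account")))
# {'A': 70, 'B': 80}
```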
• Security can be enforced: having complete jurisdiction over the database, the DBA can ensure that the only means of access to the database is through the proper channels, and hence can define security constraints or rules to be checked whenever access is attempted to sensitive data. Different constraints can be established for each type of access to each piece of information in the database.

Chapter Two

2.0 Literature Review: Explanation of Boyce-Codd Normal Form

Definition of Boyce-Codd normal form. In this section, we will formalize the ideas illustrated in section 8.1, in the light of what we have said on functional dependencies. Let us start by observing that, in our example, the two properties causing anomalies correspond exactly to attributes involved in functional dependencies:

• The property "the salary of each employee is unique and depends only on the employee, independently of the project on which he or she is working" can be formalized by means of the functional dependency Employee → Salary.
• The property "the budget of each project is unique and depends on the project, independently of the employees who are working on it" corresponds to the functional dependency Project → Budget.

Furthermore, it is appropriate to note that the Function attribute indicates, for each tuple, the role played by the employee in the project. This role is unique for each employee-project pair. We can model this property too using a functional dependency.

• The property "in each project, each of the employees involved can carry out only one function" corresponds to the functional dependency Employee Project → Function. As we have mentioned in the previous section, this is also a consequence of the fact that the attributes Employee and Project form the key of the relation.

We saw in section 8.1 how the first two properties (and thus the corresponding functional dependencies) generate undesirable redundancies and anomalies. The third dependency is different.
It never generates redundancies because, having Employee and Project as a key, the relation cannot contain two tuples with the same values of these attributes (and thus of the Function attribute). Also, from a conceptual point of view, we can say that it cannot generate anomalies, because each employee has a salary (and only one) and each project has a budget (and only one), and thus for each employee-project pair we can have unique values for all the other attributes of the relation. In some cases, such values might not be available. In these cases, since they are not part of the key, we could simply replace them with null values without any problem. We can thus conclude that the dependencies

Employee → Salary
Project → Budget

cause anomalies, whereas the dependency

Employee Project → Function

does not. The difference, as we have mentioned, is that Employee Project is a superkey of the relation. All the reasoning that we have developed with reference to this specific example is more general. Indeed, redundancies and anomalies are caused by the functional dependencies X → Y that allow the presence of many tuples with equal values on the attributes in X, that is, by the functional dependencies X → Y such that X does not contain a key. We will formalize this idea by introducing the notion of Boyce-Codd normal form (BCNF), which takes its name from its inventors. A relation r is in Boyce-Codd normal form if, for every (non-trivial) functional dependency X → Y defined on it, X contains a key K of r; that is, X is a superkey for r. Anomalies and redundancies, as discussed above, do not appear in databases with relations in Boyce-Codd normal form, because the independent pieces of information are separate, one per relation.

Decomposition into Boyce-Codd normal form. Given a relation that does not satisfy Boyce-Codd normal form, we can often replace it with one or more normalized relations using a process called normalization.
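The two checks behind the BCNF definition, whether a functional dependency X → Y holds in a relation and whether X is a superkey, can be sketched in plain Python (the sample tuples are hypothetical values for the employee-project relation of the text):

```python
def fd_holds(rows, lhs, rhs):
    """X -> Y holds if tuples equal on the X attributes are equal on Y."""
    seen = {}
    for row in rows:
        x = tuple(row[a] for a in lhs)
        y = tuple(row[a] for a in rhs)
        if seen.setdefault(x, y) != y:
            return False
    return True

def is_superkey(rows, attrs):
    """attrs is a superkey if no two tuples share the same values on it."""
    keys = [tuple(row[a] for a in attrs) for row in rows]
    return len(keys) == len(set(keys))

r = [
    {"employee": "E1", "project": "P1", "salary": 50, "budget": 100, "function": "analyst"},
    {"employee": "E1", "project": "P2", "salary": 50, "budget": 200, "function": "tester"},
    {"employee": "E2", "project": "P1", "salary": 60, "budget": 100, "function": "manager"},
]

print(fd_holds(r, ["employee"], ["salary"]))    # True: Employee -> Salary holds
print(is_superkey(r, ["employee"]))             # False: so BCNF is violated
print(is_superkey(r, ["employee", "project"]))  # True: Employee Project is a key
```

Employee → Salary holds but Employee is not a superkey, which is precisely the BCNF violation that causes the redundancy discussed above.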
This process is based on a simple criterion: if a relation represents many real-world concepts, then it is decomposed into smaller relations, one for each concept. Let us show the normalization process by means of an example. We can eliminate redundancies and anomalies for the relation in figure 8.1 if we replace it with the three relations in figure 8., obtained by projections on the sets of attributes corresponding respectively to the three items of information mentioned above. The three relations are defined so that each dependency corresponds to a different relation, the key of which is actually the left-hand side of that dependency. In this way, the satisfaction of Boyce-Codd normal form is guaranteed, by the definition of the normal form itself.

2.1 Normalization

Introduction. One of the principal objectives of relational databases is to ensure that each item of data is held only once within the database. For instance, if we hold customers' addresses, then the address of any one customer is represented only once throughout all the tables of the application. The reasons for this are, first, simply to minimize the amount of space required to hold the database, but also, and more importantly, to simplify maintenance of the data. If the same information is held in two or more places, then each time the data changes, each occurrence of the data must be located and amended. Also, having two copies of the same data gives rise to the possibility of their being different. In many cases, it is relatively easy to arrange the tables to meet this objective. There is, however, a more formal procedure called normalization that can be followed to organize data into a standard format which avoids many processing difficulties. The process of normalizing tables is described in this chapter.
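The decomposition by projection used in the employee-project example can be sketched directly: each dependency gets its own relation, built by projecting onto its attributes and discarding duplicate tuples, which is where the redundancy disappears (sample values are hypothetical):

```python
def project(rows, attrs):
    """Project a relation onto attrs, deduplicating the resulting tuples."""
    seen, out = set(), []
    for row in rows:
        t = tuple(row[a] for a in attrs)
        if t not in seen:
            seen.add(t)
            out.append(dict(zip(attrs, t)))
    return out

r = [
    {"employee": "E1", "project": "P1", "salary": 50, "budget": 100, "function": "analyst"},
    {"employee": "E1", "project": "P2", "salary": 50, "budget": 200, "function": "tester"},
]

salaries  = project(r, ["employee", "salary"])               # Employee -> Salary
budgets   = project(r, ["project", "budget"])                # Project -> Budget
functions = project(r, ["employee", "project", "function"])  # Employee Project -> Function

print(salaries)  # [{'employee': 'E1', 'salary': 50}] -- the repeated salary is gone
```

In each projected relation the left-hand side of the dependency is the key, so each is in BCNF by construction.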
Overview of the normalization process. In order to understand the process of normalization, it is necessary to refer back to the concept, mentioned earlier, of the 'ruling part' and 'dependent part' of the rows. The ruling part, also known as the key value of the table, is the column or columns that specify or identify the entity being described by the row. For instance, the key of the project table is the project code, since this value uniquely specifies the project being described by the other columns of the row, the dependent columns. The purposes of normalization are:

• To put data into a form that conforms to relational principles, e.g. single-valued columns, each relation representing one entity.
• To avoid redundancy by storing each fact within the database only once.
• To put the data into a form that is more able to accommodate change.
• To avoid certain difficulties in updating (so-called anomalies, described later).
• To facilitate the enforcement of constraints on the data.

Normalization involves checking that the tables conform to certain rules and, if not, re-organizing the data. This will mean creating new tables containing data drawn from the original table. Normalization is a multi-stage process, the result of each stage being called a 'normal form'; successive stages produce a greater degree of normalization. There are a total of seven normal forms, named in increasing degree and grouped here for convenience of description:

• First, second and third normal forms (abbreviated to 1NF, 2NF and 3NF)
• Boyce-Codd normal form (BCNF)
• Fourth normal form (4NF)
• Fifth normal form (5NF) and domain-key normal form (DK/NF)

The normal forms 1NF, 2NF and 3NF are the most important, and all practical database applications would be expected to conform to these. The likelihood of a set of tables requiring modification to comply with these is quite high. The Boyce-Codd normal form is a more stringent form of 3NF and again should be applied systematically.
There is less chance of this normal form affecting the structure of the tables. The fourth and fifth normal forms are unlikely to be significant in a practical system that has been designed, say, using the EB approach. The highest normal form, the domain-key, was devised by Fagin in 1981 (Fagin 1981). Fagin proved that this normal form is the last: no higher form is possible or necessary, since a relation in DK/NF can have no modification anomalies. However, this is mostly of theoretical interest, since there is no known procedure for converting to this form. The first three normal forms are the most significant and are usually sufficient for most applications. These will be described in some detail in the following section; the other normal forms will be covered in the subsequent sections in somewhat less detail.

Normal forms 1NF, 2NF and 3NF. The normalization process assumes that you start with some informal description of all the data attributes that the database application appears to require; this is often called 'un-normalized data'. This set of attributes is then tested using criteria defined by each of the normalization stages. If the data fails the criteria, there is a prescribed procedure for correcting the structure of the data; this inevitably involves the creation of additional tables. The overall process of normalization for the first three stages is summarized in figure 5.1. To understand what these steps imply, we can return again to the example initially introduced in Chapter One concerning a correspondence college. For convenience, the specification of the example is reproduced below. A small correspondence college offers courses in a range of topics. For each course, students complete a series of assignments which are sent to the office. The assignments are gathered into batches of up to ten assignments and sent to tutors for marking (i.e. complete batches of up to ten assignments are sent to tutors).
Assume that there can be an unlimited number of tutors. These tutors mark the assignments and then return them, retaining them within the same batches. A system is required that enables 'tracking' of the assignments, so that the college knows which assignments have been received, sent to tutors or marked. Also, the system should keep a running total of the number of assignments that have been marked by each tutor.

UN-NORMALIZED DATA
(remove all repeating groups)
FIRST NORMAL FORM
(if the primary key has more than one field, ensure that all other fields are functionally dependent on the whole key)
SECOND NORMAL FORM
(remove all transitive dependencies, i.e. ensure that no fields are dependent on non-key fields)
THIRD NORMAL FORM

As we did in Chapter One, we can represent the data diagrammatically as shown in figure 5.2. We view this data design as an attempt at forming a relational table to represent the application data. Naturally, we would prefer as few tables as possible, so we have combined all the data into one tentative table design. The attribute Batch Number will be used as a provisional primary key.
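The first of the steps above, removing a repeating group to reach first normal form, can be sketched for the correspondence-college data; the field names and sample values here are hypothetical, since the original table design is only shown as a figure:

```python
# Un-normalized record: one batch row carries a repeating group
# of assignments, which violates the single-valued-column principle.
unnormalized = {
    "batch_number": 101,
    "tutor": "T1",
    "assignments": [("S1", "Maths"), ("S2", "Physics")],  # repeating group
}

# Step to 1NF: split the repeating group out into its own table,
# carrying the key of the parent (batch_number) with every row.
batches = [{"batch_number": unnormalized["batch_number"],
            "tutor": unnormalized["tutor"]}]
assignments = [
    {"batch_number": unnormalized["batch_number"], "student": s, "course": c}
    for s, c in unnormalized["assignments"]
]

print(assignments[0])
# {'batch_number': 101, 'student': 'S1', 'course': 'Maths'}
```

Each assignment row now stands on its own, keyed by batch number plus student, which is the starting point for the 2NF and 3NF checks that follow.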
