Securing your client-server or multi-tier application.


Nowadays, when mankind has sent probes to Mars, it might seem that the problem of information protection should have been solved long ago. Yet we read about new viruses, security holes and break-ins all the time. We cannot fix other people's mistakes, but we can try to avoid our own.

In modern society everyone specializes: some people program well, others repair cars well. This division of labor has a large disadvantage: we become dependent on other people, and we sometimes have to entrust them with very important information. Banks know about our accounts and purchases, hospitals know everything about our health, and so on.

Probably everyone has some information which he would not like to make public or to lose. But you cannot be sure that the software installed in banks is free of mistakes, can you? So we hope for the best, choose the best firms, and trust that the best will never let us down. To some extent this is true: reputable firms take the security of their clients' and employees' information seriously. Thus, if you want to see your company on that list, you have to think about security. Are the products you make secure? How safe can your clients feel?


Data security problem in multi-tier, client-server and network applications

A long time ago, when Novell DOS 7.0 was installed on my computer, I used its outstanding feature of locking files with a password. And how upset I was when I accidentally got access to the protected files from Windows for Workgroups 3.11 (with 32-bit disk access enabled). In general, to open password-protected files, all you needed was an MS-DOS diskette to boot the computer. But that was long ago. How do matters stand now? Do you think anything has changed? Take Windows XP, for example: many people have probably heard of, and even used, programs that let one access data by bypassing the operating system's protection.

When working with network applications, the user exposes much more information to danger, since an attacker can access not only stored data but also information transferred over the network.

The main security threats are described below.

Unauthorized data access is a threat in which an unauthorized person gets access to confidential information. It can lead to a situation where such information becomes public or is used against its owner.

Companies and private users transfer data over open communication channels, so such transfers urgently need protection in order to preserve confidentiality.

Possible causes of unauthorized access to secret data are:

  1. Network traffic transferred in clear (unencrypted) form;
  2. Absence of authorization mechanisms for access to secret data;
  3. Absence of access isolation mechanisms.

Unauthorized data modification is a threat in which data can be changed or deleted, accidentally or intentionally, by a person who has no permission for such actions.

A threat of this type can damage data integrity or affect information that is not directly linked to the modified data. Such modifications are especially dangerous because they can go unnoticed for a long time.

Possible causes of unauthorized modification:

  1. Absence of data integrity verification in software;
  2. Password sharing or leakage;
  3. Easily guessed passwords;
  4. Passwords kept in easily accessible places;
  5. Absent or weak identification and authentication schemes.

Users of the Internet and other communication channels run the greatest risk when those channels are not controlled by the company that uses them. Even on a company LAN (local area network), which might seem protected from outside attacks, it can turn out that some employee would like to use secret information for his own needs.

The worst and most dangerous thing in such a situation is not the poor security itself, but the fact that the user believes he is protected when he is not. Most users do not know computers and software well enough to tell whether the system is secure or their data is in danger of unauthorized access. So the developer must take care of the user's security. Developers must foresee the possibility of an attack on the data stored on the user's computer as well as on the data during network operations.

Data encoding and encryption

One of the most necessary steps in data protection is encryption. Encryption is the process of transforming data into a sequence of bytes using an encryption algorithm. The primary goal of encryption is to make the data inaccessible to anyone who does not have the key. Very often, data is "protected" simply by keeping the transformation algorithm secret; in other words, the author of such "protection" thinks that if the algorithm is not known, the data is safe. This is not encryption but encoding. Revealing the algorithm defeats such "encryption" easily, and the algorithm can be recovered from the software that uses the encoded data. Sometimes it is even possible to recover the data without knowing the algorithm's details.

Encryption proper is done with encryption algorithms. These algorithms are well known and have been carefully analyzed by cryptographers and mathematicians, and their strength is tested and proved again and again. The only secret part of encryption is the key used to encrypt and/or decrypt the data.

The level of protection is determined not only by the algorithm itself but also by the way the algorithm is applied. Internet security protocols, for example, take special care over how keys are created and used.

Symmetric encryption algorithms

Special algorithms and keys are used for encryption, and the same algorithms and keys are used for decryption. That is why this encryption method is called "symmetric". Another name for it is secret-key cryptography.

Let us assume that you want to hide some important information from an attacker. You take a program with one of the most popular cryptographic algorithms built in and tell it to encrypt the data. When the program finishes, you get an encrypted file and a set of bytes, which is the key. The key is usually small, and sometimes it can be presented as text for easier handling. Now it is very important to store the key in a secure place: the fact that the data is encrypted does not by itself stop a hacker from getting hold of the file; what protects it is that he cannot read it without the key.

When you want to decrypt the data, you just give the program the encrypted file and the key.

Fig. 1 Symmetric key is used for both encryption and decryption

The advantage of this method is that you need to keep safe only the key, not the whole data set; the key size does not depend on the size of the encrypted data. But this encryption method becomes useless when you need to pass data over open communication channels. If you transfer the secret key over the same channel, there is no sense in encryption (everyone who can intercept the information can get the key as well). And if you have a channel secure enough to pass the key, you could use it to transfer the data itself without encryption. Special key-exchange algorithms are used to solve this problem; we will talk about them later.

Key creation

As almost any sequence of bytes can be used as a key (assuming that its length matches the requirements of the algorithm), random-number generators are used for key creation. The main task during key generation is to create a unique key, since security depends heavily on key uniqueness. The better the generator, the less likely it is that someone will be able to guess what numbers will be generated next. To check how good a generator is, and whether the sequence it generates is really random, cryptographers use statistical tests for randomness.
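In practice, a sketch of key generation can be as simple as asking the operating system's cryptographic random-number generator for the required number of bytes. A minimal example using Python's standard secrets module:

```python
import secrets

# Generate a 256-bit symmetric key from the OS cryptographic RNG.
key = secrets.token_bytes(32)

# Keys must be unique: two independently generated keys should never collide.
another = secrets.token_bytes(32)
assert key != another
print(len(key))  # 32 bytes = 256 bits
```

The secrets module draws from the operating system's entropy pool, which is exactly the kind of generator discussed below.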

Random-number generators. A truly random number can be generated only by using special devices. Such generators collect unpredictable data from the environment: for example, parameters of radioactive decay, surrounding atmospheric conditions, or minor fluctuations of electric current. It is easy to see that replicating the conditions under which such a random number was generated is practically impossible, which is why these generators are good enough. An alternative is to collect random data from a computer's input devices, such as the mouse (by asking the user to move it for some time).

Pseudo-random-number generators. A pseudo-random number is generated in two steps. First, the program collects some parameters that change with time, for example the system time or the cursor position. Second, the program calculates a digest (hash function) of them. A digest algorithm produces a new sequence of bytes from the given data: the same input always yields the same digest, but changing even one bit of the input produces a very different digest.

The question arises: why do we need the second step when we already obtained "random" numbers in the first one? The answer is that parameters such as the time or the cursor position can easily be enumerated and tested one by one, so such data, without further processing, cannot be called really random.
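The two-step scheme can be illustrated with a short sketch: collect changing parameters, then hash them so that flipping a single input bit changes the whole output. The seed material below is purely illustrative; a real application should use the operating system RNG rather than time-based seeds.

```python
import hashlib

def derive_bytes(seed: bytes) -> bytes:
    # Step 2: hash the collected material so the output looks random
    # and a one-bit change in the input changes the whole digest.
    return hashlib.sha256(seed).digest()

# Step 1 (illustrative only): parameters that change with time.
seed = b"system time: 12:00:00.001, cursor: (314, 159)"
d1 = derive_bytes(seed)
d2 = derive_bytes(seed[:-1] + b"2")  # change one character of the input
assert d1 != d2                      # completely different digest
```

This is why the hash step matters: an attacker who can guess the rough range of the raw parameters still cannot predict the digest without trying every candidate input.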

Not every hash algorithm is usable for cryptographic purposes; only specially designed digest (hash) algorithms are. Several such algorithms are popular today; here is a short description of each.

MD2. Ron Rivest first created a digest algorithm named MD, then found ways to improve it and produced the next variant, MD2. This algorithm returns a 128-bit digest, so the number of possible values is 2^128. Unfortunately, weaknesses were later found in this algorithm, and it is no longer recommended.

MD5. After less successful attempts such as MD3 and MD4, Ron Rivest produced a really good new algorithm, MD5, which gained popularity. It is more secure and faster than MD2, and it also creates a 128-bit digest.

SHA-1. This algorithm is similar to MD5 (Ron Rivest contributed to SHA-1 too) but has a better internal structure and returns a longer digest of 160 bits. Cryptanalysts approved it, and the cryptography community strongly recommended it when choosing between MD5 and SHA-1. However, it was later discovered that SHA-1 can be attacked, so a stronger algorithm (such as SHA-2) should be used if possible.

SHA-2. This family supports hash lengths of 256, 384 and 512 bits and is the preferred choice at the moment.
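The digest lengths just mentioned are easy to confirm with Python's standard hashlib module, which implements MD5, SHA-1 and the SHA-2 family:

```python
import hashlib

data = b"attack at dawn"

md5 = hashlib.md5(data).digest()        # 128-bit digest (legacy, avoid for security)
sha1 = hashlib.sha1(data).digest()      # 160-bit digest (deprecated for signatures)
sha256 = hashlib.sha256(data).digest()  # SHA-2 family member with a 256-bit digest

print(len(md5), len(sha1), len(sha256))  # 16 20 32 (bytes)
```

Note that the same input always produces the same digest, which is what makes digests useful for integrity checks and signatures later in this article.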

Block and stream encryption in symmetric algorithms

You already know how to get a key, and your data is ready for encryption. Algorithms of two types are used: block algorithms and stream algorithms.

Block encryption. Such algorithms split the data into blocks and encrypt each block separately with the same key. If the data size is not a multiple of the required block size, the last block is padded up to the necessary size with some value. With block algorithms, encrypting the same data with the same key produces identical results. Such algorithms are usually used to encrypt files, databases and e-mail messages. There are also variations (chaining modes) in which the encryption of each block depends on the output of the previous blocks.

Stream encryption. Unlike block encryption, such algorithms encrypt each byte separately. Pseudo-random numbers generated from the key are used for the encryption, and the result for each byte usually depends on the result for the previous byte. This method has high throughput and is used to encrypt information transferred over communication channels.
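The stream idea can be sketched with a toy cipher: derive a keystream from the key (here, by hashing the key with a counter) and XOR it with the data. This is an illustration of the principle only, not a vetted cipher; real applications should use an established algorithm such as AES.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: hash the key together with a running counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"\x01" * 32
ciphertext = xor_stream(key, b"secret message")
assert xor_stream(key, ciphertext) == b"secret message"  # decrypt = re-encrypt
```

Because XOR is its own inverse, the same function both encrypts and decrypts, which is characteristic of stream ciphers.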

Attacks on encrypted information

There are two ways to recover encrypted information: you can either try to find the key or exploit the algorithm's vulnerabilities.

Key search. No matter what algorithm is used, it is always possible to decrypt the data by trying all possible keys one by one. This is called a "brute-force attack". The only problem is the time that must be spent on the exhaustive search, so the longer the key, the better the data is protected. For example, an exhaustive search over 128-bit keys would take trillions of millennia. Of course, as computing power increases the search time shrinks, but in the foreseeable future a 128-bit key will remain secure enough.
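The arithmetic behind that claim is simple to check. Assuming a (very generous) rate of a trillion key trials per second:

```python
# Rough brute-force estimate for a 128-bit key space.
keys = 2 ** 128
rate = 10 ** 12                  # assumed keys tried per second
seconds = keys // rate
years = seconds // (365 * 24 * 3600)
print(years)                     # on the order of 10**19 years
assert years > 10 ** 18          # vastly longer than the age of the universe
```

Even if the assumed rate were a million times higher, the search would still take billions of years, which is why key length is the first line of defense against brute force.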

Use of algorithm vulnerabilities. Unlike the previous method, this one is based on discovering and exploiting weaknesses of the algorithm itself. In other words, if the attacker can find some regularity in the encrypted text, or can bypass the protection in some other way, he can reduce the time needed to find the key or decrypt the data. Since most encryption algorithms are published, cryptanalysts all over the world study them, trying to find any vulnerability. As long as no such vulnerabilities have been found in the popular algorithms, they can be considered secure.

RC4 - a stream algorithm. It is the most widely used cipher in the SSL (Secure Sockets Layer) protocol.

DES (Data Encryption Standard) - a block algorithm which uses a 56-bit key. It was designed in the late seventies by researchers from IBM and the NSA (National Security Agency). The algorithm was investigated thoroughly, and in the 1980s experts came to the conclusion that it had no weak points. However, computer processing speed increased enough in the nineties to attack this algorithm by complete key enumeration: in 1999 the Electronic Frontier Foundation decrypted DES-encrypted information in less than 24 hours.

Triple DES - this block algorithm replaced DES. The principle of operation did not change, but each block of data is encrypted three times with different keys, giving a 168-bit key. However, attacks were later found that reduce its effective strength to roughly that of a 112-bit key. In general this is enough for today, but in the future it may not be. The algorithm has one more problem: low processing speed.

AES (Advanced Encryption Standard) - NIST (the National Institute of Standards and Technology) announced a contest for a new algorithm. One of the main conditions was that the developers had to renounce their intellectual property rights, which made it possible to create a standard that everyone could use without royalties. All candidate algorithms were investigated thoroughly by the worldwide community, and on the 2nd of October, 2000 NIST announced the winner: the Rijndael algorithm, designed by two Belgian researchers, Vincent Rijmen and Joan Daemen. Since then this algorithm has become the world's cryptography standard, supported by most applications.

Other algorithms developed by various cryptography companies include Blowfish by Counterpane Systems, Safer by Cylink, RC2 and RC5 by RSA Data Security, IDEA by Ascom and CAST by Entrust.

As you can see, there are many encryption algorithms to choose from. When choosing a symmetric algorithm, the speed and the key length are usually the deciding factors.

Asymmetric (Public Key Encryption) Algorithms

Secret-key algorithms can encrypt data, but they are hard to use when you need to pass encrypted data to someone else, because you need to pass the key too. Transferring the key over a public channel is no better than transferring the clear data over that channel. The solution to this problem is asymmetric cryptography (public-key encryption), developed in the 1970s.

While symmetric cryptography is based on the principle that one key is used for both encryption and decryption, in asymmetric cryptography one key is used for encryption and another for decryption. These keys form a pair, and keys from different pairs never match each other.

Fig. 2 Asymmetric key consists of two parts - one for encryption and another for decryption.

One key is called private, and only its owner must have access to it; it must be kept strictly secret. The second key is called public, and it is not a secret: everyone can use your public key. Suppose you want to encrypt some data for another person. All you have to do is encrypt the data with his or her public key. Now no one but that person will be able to read it; even you cannot decrypt it back (for example, if you have deleted the original). So, if you want to receive important information, you generate two keys, store the private key in a secure place, and distribute the public key in any way you like: for example, you can place it on your website. Now anyone can send you secret data encrypted with the public key you provide, and you just use your private key to decrypt it.
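The key-pair idea can be shown with textbook RSA on tiny, deliberately insecure numbers. Everything here is illustrative: real keys are 2048 bits or more and use padding schemes; never use such toy code for real data.

```python
# Textbook RSA with tiny primes, purely to illustrate the public/private pair.
p, q = 61, 53
n = p * q                       # public modulus (3233)
e = 17                          # public exponent
d = 2753                        # private exponent: (e * d) % ((p-1)*(q-1)) == 1

def encrypt(m: int) -> int:     # anyone with the public key (n, e) can encrypt
    return pow(m, e, n)

def decrypt(c: int) -> int:     # only the private key (n, d) can decrypt
    return pow(c, d, n)

message = 65                    # must be smaller than n in this toy setting
assert decrypt(encrypt(message)) == 65
```

The point the numbers make: encryption needs only (n, e), which can be published, while decryption needs d, which stays with the owner.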

But public-key encryption has one disadvantage: an asymmetric algorithm works much slower than a symmetric one. So, when large amounts of secret data are transferred, they are encrypted with a symmetric algorithm (using a symmetric key), and then the key that was used is encrypted with an asymmetric algorithm using a public key. Thus the encryption is fast enough, because a symmetric algorithm is used, and there is no need to transfer a secret key in clear text. Usually each symmetric key is used only once, and a new secret key is generated for the next document. Since the symmetric key is used in only one encryption session, it is often called a session key. In fact, the user usually has no idea that a session key was involved: he only gave the public key to the program, and the program did everything else itself.

Fig. 3 After the data is encrypted, the symmetric key is encrypted with the public key and merged with the encrypted data.
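The hybrid scheme just described can be sketched end to end with toy components: a fresh random session key, a toy XOR-keystream cipher for the bulk data, and textbook RSA on tiny primes to wrap the session key. All of the cryptographic pieces here are deliberately simplified stand-ins for real algorithms.

```python
import hashlib
import secrets

# Toy textbook RSA parameters (illustrative only).
p, q, e, d = 61, 53, 17, 2753
n = p * q

def keystream(key: bytes, length: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def xor_stream(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Sender: one-time session key, bulk data encrypted symmetrically,
# session key wrapped byte-by-byte with the recipient's public key (n, e).
session_key = secrets.token_bytes(16)
ciphertext = xor_stream(session_key, b"quarterly report")
wrapped_key = [pow(b, e, n) for b in session_key]

# Recipient: unwrap the session key with the private key (n, d),
# then decrypt the bulk data.
recovered = bytes(pow(c, d, n) for c in wrapped_key)
assert xor_stream(recovered, ciphertext) == b"quarterly report"
```

The slow asymmetric operation touches only the 16-byte session key, while the fast symmetric cipher handles the data, which is exactly the trade-off the text describes.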

Asymmetric encryption systems are based on one-way mathematical functions: knowing the result, you cannot recover the input data. For instance, if you only have the sum of two numbers, you cannot tell which two numbers were added.

Public key algorithm security

As you already know, there are two possible ways to recover encrypted data: find the key or exploit a vulnerability of the algorithm.

Key search. If the message is encrypted as described above, we have two parts: the message itself, encrypted with the (symmetric) session key, and the session key, encrypted with the public key. We have already discussed attacks on symmetric algorithms and keys, and recovering an asymmetric private key is an even harder task, because asymmetric keys are much longer than symmetric ones.

Attackers can try to use the fact that exactly one private key corresponds to a known public key and attempt to find that key. But such an attack takes even more time: it involves the factorization of a very large number, and no algorithm is known today that can perform such calculations in any reasonable time. So, until such an algorithm is invented, public-key cryptography can be considered secure.

Use of algorithm vulnerabilities. This attack method is probably the most effective where public keys are concerned. The fact is that no public-key algorithm known today is entirely without weak points: for all asymmetric algorithms there are methods that recover the key faster than direct enumeration. But this is not critical, since it has been shown that even using these weak points an attack takes far too much time, and the probability of being lucky enough to hit the correct value early tends to zero. So asymmetric encryption can be treated as secure enough for all modern practical purposes. The only thing to remember is that the longer the key, the better your data is protected.

DH (Diffie-Hellman). Stanford graduate student Whitfield Diffie and Professor Martin Hellman researched cryptographic methods and the key-exchange problem. As a result, they proposed a scheme that allows the creation of a common secret key based on an open exchange of information. This scheme does not encrypt anything; it only makes it possible for two (or more) sides to generate a secret key that depends on all participants' contributions but is not revealed to any third party.

This algorithm is not used for encryption; its aim is to generate a secret session key. Each communicating side has a secret number, and there are also several public values, known to everyone, which can be transferred over open channels. To obtain the secret session key, each side combines the public values with its own secret one.

Fig. 4 Diffie-Hellman algorithm. One secret value is created using different keys.
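The exchange above can be sketched with plain modular exponentiation. The prime and generator here are illustratively small; real deployments use standardized 2048-bit groups or elliptic curves.

```python
import secrets

# Public parameters, known to everyone (illustratively small).
p = 4294967291          # a prime modulus
g = 5                   # a generator

# Each side picks a private value and publishes g^private mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)        # Alice sends A over the open channel
B = pow(g, b, p)        # Bob sends B over the open channel

# Both sides compute the same shared secret without ever transmitting it.
shared_alice = pow(B, a, p)   # B^a = g^(ab) mod p
shared_bob = pow(A, b, p)     # A^b = g^(ab) mod p
assert shared_alice == shared_bob
```

An eavesdropper sees p, g, A and B, but recovering a or b from them is the discrete logarithm problem, which is what makes the scheme secure at realistic key sizes.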

RSA. After Diffie and Hellman published their article in 1976, Ron Rivest (a professor at the Massachusetts Institute of Technology) took an interest in the idea and brought in two colleagues, Adi Shamir and Len Adleman. In 1978 they published a new algorithm, named after the authors' initials. It is often used with a 1024-bit or 2048-bit key and has become quite widespread.

ECDH (Elliptic Curve Diffie-Hellman). In 1985 Neal Koblitz and Victor Miller, working independently, came to the conclusion that a little-known field of mathematics, elliptic curves, could be useful in public-key cryptography. Algorithms based on elliptic curves began to spread in the nineties, and today they appear in several national information security standards.


After an application exchanges keys, it can encrypt the data being sent. But can you be sure that the application will send the data exactly where it should? An attacker could substitute his own server for the real one and simply send his key during the key exchange. And how can you be sure that the message you received really came from the person you think sent it?

Digital signatures are used to confirm message authorship. As you already know, to encrypt a message so that only one person can read it, you encrypt it with that person's public key; such a message can be decrypted only with the recipient's private key. But what happens if you encrypt a message with your private key? It can be read by anyone who has your public key, so it is not secret at all. But at the same time, nobody else can produce data that decrypts correctly with your public key. So only you can perform that encryption, and anyone who reads the message can be sure it was sent by you.

As you remember, public-key algorithms are rather slow, so it makes no sense to encrypt the whole message this way; instead, only the message digest is encrypted with your private key. The procedure consists of two steps: first you calculate the message digest and encrypt it with your private key; then, when sending the message, you attach the encrypted digest to it. The recipient calculates the message digest using the same algorithm as you did, decrypts the attached digest, and compares the two. If the digests are equal, he can be sure that the message was sent by you and was not altered in transit.
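The hash-then-sign procedure can be sketched by combining a real digest (SHA-256) with the same toy textbook RSA used earlier; the tiny key is illustrative only and offers no real security.

```python
import hashlib

# Toy textbook RSA key pair (illustrative only).
p, q, e, d = 61, 53, 17, 2753
n = p * q

def sign(message: bytes) -> list[int]:
    digest = hashlib.sha256(message).digest()
    return [pow(b, d, n) for b in digest]           # private-key operation

def verify(message: bytes, signature: list[int]) -> bool:
    digest = hashlib.sha256(message).digest()
    recovered = bytes(pow(s, e, n) for s in signature)  # public-key operation
    return recovered == digest

msg = b"wire 100 EUR to account 42"
sig = sign(msg)
assert verify(msg, sig)                              # genuine message passes
assert not verify(b"wire 999 EUR to account 13", sig)  # altered message fails
```

Only the short digest is run through the slow asymmetric operation, and any change to the message changes the digest, so verification fails for tampered data.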

An attentive reader may ask: how can we be sure that the public key we have really belongs to the specified person? Somebody could break into the server holding public keys and put his key in place of your partner's.

Digital certificates are used for authentication purposes.

In brief, a certificate can be represented as a set of records containing information about its owner plus certain cryptographic information. The owner information is usually human-readable, for example a name or passport data. The cryptographic information consists of a public key and the digital signature of a certificate authority (CA). This signature confirms that the certificate belongs to the person named in it.

You can see that the scheme has become more complicated, but also more secure. Suppose you want to get a digital certificate. Depending on the required level of certificate security, you can either create a certificate request and send it to a CA, or go there in person so that they can make sure you are the one they are issuing the certificate to. Then the CA combines the information about you and your public key into one certificate and signs it with its private key.

To make sure that the message was sent by you, the message recipient has to do the following:

  1. get CA's public key;

  2. verify digital signature of your certificate using the public key of CA.

If the signature matches the CA's, then the information contained in the certificate is valid and can be trusted. And in case of problems, the CA is responsible for the information contained in the certificate.

But the next question is: how can we know that the signature really belongs to the CA? Presumably it must have its own certificate confirming its public key. Self-signed certificates are used for this purpose: a self-signed certificate is signed with its owner's own digital signature. This means that you, too, can create a self-signed certificate, but it does not mean that other people will trust it. By the same token, you should not trust most self-signed certificates unless they belong to a root CA.

If you create a self-signed certificate for your company, you can use it to sign other certificates: for instance, you can generate certificates for all company employees (and for them only). This practice lets you obtain as many certificates as you need without spending much, and also increases the level of security inside your company. Certificates can be used not only by people but by applications as well, which is especially useful when information is transferred between applications over open channels.

If you develop a complex software application and want to protect the transferred data, you will most likely have to create a certificate infrastructure. Using certificates, client applications can check that they have connected to the intended server; at the same time, server applications can check whether the client has the right to connect. If you think that certificate support is too complicated a task, don't worry: several reusable security libraries help you deal with certificate management, and one such product is SecureBlackbox. The main task when integrating certificate support into your application is to do everything with security in mind and to avoid mistakes that would create security flaws. The best approach, of course, is to involve security specialists in the process.

Today the most commonly used standard for certificates is X.509, which describes the certificate format and distribution principles. Other certificate formats exist and are used in various communication protocols.

Secure transport protocols

The growth of the Internet made secure data transfer a necessity. One of the first engineering solutions was SSL (Secure Sockets Layer), developed by Netscape in 1994. It remains widespread today and is integrated into most browsers, web servers, and other software and hardware systems dealing with the Internet. Several versions of this protocol exist: SSLv2, SSLv3 and TLSv1, of which TLSv1 is the most popular. SSLv2 is no longer used, due to several vulnerabilities discovered in it.

Secure Sockets Layer (SSL) is a protocol for authentication and encryption at the session level which provides a secure communication channel between two sides (client and server). SSL provides confidentiality by generating a secret shared by the client and server. It supports server authentication and optional client authentication in order to resist eavesdropping, message substitution and tampering in client-server applications. SSL sits at the transport level (below the application level), so most application-level protocols (such as HTTP, FTP, TELNET and so on) can run transparently over it.

Let's look at a simplified client-server communication scheme for a better understanding of how SSL works.

Before establishing a connection, the client composes a "client hello" message. This message contains information about the supported protocol versions and encryption methods, a random number and a session identifier. The message is then sent to the server.

The server answers either with its own hello message or with an error message. The server hello message is similar to the client's, but the server selects the encryption method that will be used, based on the information it received from the client.

After its hello message, the server can send its certificate or a certificate chain (several certificates, each signing the next) for authentication. Authentication is required for key exchange except when the Anonymous Diffie-Hellman algorithm is used. Key exchange is performed with the help of certificates corresponding to the encryption algorithms specified during connection establishment; usually X.509 v3 certificates are used. At this stage the client obtains the server's public key, which can be used to encrypt the session key.

After the certificate is sent, the server can optionally issue a certificate request message to ask for the client's certificate if necessary.

After the last hello message, the server sends a handshake completion message. When the client receives it, it must check the server's certificates and send a finalizing message confirming that the handshake is complete. Now the two sides can start exchanging encrypted data.

Both the server and the client can send a finalization (goodbye) message before the communication session ends. After such a message is received, a similar message must be sent in response, and the connection is closed. Finalization messages protect against truncation attacks. If this message was sent before the connection was shut down, the client can resume the session later; resuming a session takes less time than establishing a new one.
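From the application developer's point of view, all of this handshaking is handled by the TLS library; the developer mostly configures a context and wraps a socket. A minimal client-side sketch using Python's standard ssl module (the hostname in the comment is hypothetical):

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    # create_default_context() enables certificate verification
    # and hostname checking with sensible defaults.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse old SSL/TLS versions
    return ctx

ctx = make_client_context()
# Typical use (not executed here; "example.com" is a placeholder):
# with socket.create_connection(("example.com", 443)) as raw:
#     with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
#         tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

assert ctx.check_hostname                    # server name is verified
assert ctx.verify_mode == ssl.CERT_REQUIRED  # server certificate is required
```

The two assertions at the end reflect the certificate checks described above: the library refuses connections whose certificate chain or hostname does not verify.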

It is also worth mentioning the SSH (Secure Shell) protocol. It resembles SSL in general but has some differences: SSH was designed for message exchange between UNIX servers, it requires authentication of both sides, it supports logical channels inside one secured session, and it uses key pairs rather than certificates for authentication.

Secure transport protocols are an effective and well-tested means of transferring data over public communication channels, and these technologies are already widely used. The SSL protocol is an efficient solution for developing secure client-server applications that must use open communication channels. But keep in mind that SSL only encrypts the data during transfer; the data becomes accessible in unprotected form on the client and the server. Security must be comprehensive and well designed, and the communication channels must not be the only secured element.

Security in client-server and network applications

Having reviewed the main principles of cryptographic protection, we can now look at how to use cryptography in practice.

First, let's consider data transfer over a network. When the Internet appeared, its main goal was to make information available to everyone. Times change, and today we want to protect most of the information we transfer. We book plane tickets or hotel rooms and want to keep the credit card number, and sometimes the destination or dates of our trip, secret. On the one hand, new technologies provide us with numerous opportunities and conveniences; on the other hand, we face the danger of our data being intercepted and possibly altered. Many servers still use insecure protocols for data transmission. Data transferred between local clients and servers is also at risk: anyone on a local network can intercept it, and the unauthorized person usually turns out to be a company employee. Most employees have all the necessary capabilities; all they have to do is install a couple of programs to access data that belongs to other workers. Statistics say that insiders are responsible for about 90 out of 100 unauthorized access cases.

Using the SSL/TLS protocol is enough to secure data transferred over a network: as you already know, even if someone intercepts such data, decryption will take too much time to be practical. How do you apply it? There are several ways to protect transferred data with SSL.

The cheapest way is to use the Stunnel application, which creates a secure channel between two computers. Such a channel is almost always transparent to the application that uses it, but it requires configuration and does not work with every protocol. The main disadvantage of this mechanism is that an attacker can access the unprotected data on the user's computer while it travels between the application and Stunnel.
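The setup described above can be sketched as a minimal client-side `stunnel.conf`. The host names, ports, and file paths below are hypothetical placeholders, not a recommended production configuration:

```ini
; Client-side stunnel configuration sketch (hypothetical hosts and paths).
client = yes

[mail-tls]
; The local application connects here in plain text...
accept  = 127.0.0.1:143
; ...and Stunnel forwards the traffic to the server over TLS.
connect = mail.example.com:993
; Verify the server certificate against a trusted CA bundle.
CAfile  = /etc/ssl/certs/ca-certificates.crt
verify  = 2
```

The local application is configured to talk to `127.0.0.1:143` as if the server were unencrypted; the encryption happens entirely inside Stunnel, which is exactly why the short plain-text hop on the local machine remains exposed.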

Fig. 5. If an application exchanges unencrypted data, a third-party application can gain access to that data.

It should be said that this is the best option when the client and/or server software cannot be changed, that is, when you have only the executable modules and not their source code. Although an attacker can still reach the data on the user's computer, such protection is better than none at all. Stunnel can also be useful if you have integrated SSL support into the client-side application but, for some reason, cannot do the same on the server. In that case Stunnel can be installed on the server side. You must then check the security of the server itself, but in general you will get a secure system.

A more secure way is to use components that let you integrate SSL right into your application, for example SecureBlackbox. This approach is suitable when you develop the application yourself. The point is that integrating the protocol into the application increases security. Use an integrated solution whenever the operational environment is unknown or insecure.
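To illustrate what "integrating SSL into the application" means in practice (independently of any particular component library), here is a minimal sketch using Python's standard `ssl` module; the host name is an assumption for the example:

```python
import socket
import ssl

# A client-side TLS context with secure defaults: the server certificate
# is validated against the system CA store and the hostname is checked.
context = ssl.create_default_context()

def connect_over_tls(host: str, port: int = 443) -> str:
    """Open a TLS-protected connection and return the negotiated protocol version."""
    with socket.create_connection((host, port), timeout=10) as raw_sock:
        # Wrap the plain TCP socket; from this point on, all traffic
        # sent through tls_sock is encrypted inside the application itself.
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            return tls_sock.version()  # e.g. "TLSv1.2" or "TLSv1.3"
```

Because the socket is wrapped inside the application's own process, the data never appears in plain text on the wire or in a separate tunnel process, which is exactly the advantage over the Stunnel scheme above.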

Remember that you should use an SSL connection not only when data is transferred over the Internet but also on local networks. If even one channel is insecure, an attacker can use it to obtain the information he needs, or at least something that makes decrypting that information easier. So if your system transfers important data over a network, or even data that could merely assist an attack, you must use a secure connection. It will help you protect the data from both unauthorized access and modification. Always remember the rule: any system is only as strong as its weakest part.

An attacker can try to access data not only during transfer but also when it rests on some medium such as a hard disk or tape drive, and he can get at the data on both the client side and the server side.

Let's examine possible threats to the server. We cannot rely on the server's own protection, even though operating system developers release patches when security problems are discovered; this does not always save the situation and can sometimes even make it worse. Additional protection mechanisms are therefore used alongside the OS built-in facilities for server-side data protection. While careful system hardening is up to the administrator, we can examine database protection in more detail.

A database can be protected in two ways. The first is to control access through the database server: the server checks all passwords and access rights. The disadvantage of this scheme is that an attacker who gains access to the server also gains access to the database. For example, create a database on one computer and protect it with a password using the database server. Then create a database with the same name on another computer and protect it the same way, but with a different password. Now copy the first database to the second location: you can open the first database using the password set for the second one. This happens because the access control information is stored not in the database itself but in the database server's configuration. So if you use such a database server, you must protect it very well and think about preventing not only database modifications but also copying of the data files.

Another way to protect the data is encryption. Some servers have built-in encryption capabilities, and there are even special SQL commands for this purpose. Note, however, that encryption slows down performance and has its own specifics.

The most attention should be paid to the security of software installed on the client side. As mentioned before, the user may have minimal or no knowledge of computer operations, so he can use a computer infected with a Trojan for a long time and never notice it. When developing a client application, you must therefore be ready for the situation where the client computer is controlled by third parties. If your application stores data that might turn out to be important, that data should be encrypted.
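As a sketch of encrypting locally stored client data, here is a stdlib-only Python illustration of the principle: a key derived from the user's password with PBKDF2, a PRF-based keystream (HMAC-SHA256 in counter mode), and a separate authentication tag so tampering is detected. This hand-rolled construction is for illustration only; in a real application you should use a vetted library (for example libsodium or the `cryptography` package):

```python
import hashlib
import hmac
import os

def derive_keys(password: str, salt: bytes) -> tuple:
    """Derive independent encryption and authentication keys from a password."""
    material = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000, dklen=64)
    return material[:32], material[32:]

def _keystream(key: bytes, length: int) -> bytes:
    """HMAC-SHA256 of an incrementing counter, a CTR-mode-style keystream."""
    blocks = []
    for counter in range(-(-length // 32)):  # ceil(length / 32)
        blocks.append(hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest())
    return b"".join(blocks)[:length]

def encrypt(password: str, plaintext: bytes) -> bytes:
    """Return salt | ciphertext | MAC; a fresh salt yields fresh keys each call."""
    salt = os.urandom(16)
    enc_key, mac_key = derive_keys(password, salt)
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, len(plaintext))))
    tag = hmac.new(mac_key, salt + ciphertext, hashlib.sha256).digest()
    return salt + ciphertext + tag

def decrypt(password: str, blob: bytes) -> bytes:
    """Verify the MAC first; reject wrong passwords or modified data."""
    salt, ciphertext, tag = blob[:16], blob[16:-32], blob[-32:]
    enc_key, mac_key = derive_keys(password, salt)
    expected = hmac.new(mac_key, salt + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong password or data tampered with")
    return bytes(c ^ k for c, k in zip(ciphertext, _keystream(enc_key, len(ciphertext))))
```

The authentication tag matters as much as the encryption: without it, a Trojan on the client machine could silently modify the stored ciphertext and the application would decrypt garbage without noticing.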

User authentication is the keystone of security, and it must be foolproof. Using the user's ID or account name as a password, or using short passwords, is unacceptable from a security point of view. If the authentication system is badly designed, an attacker will have no problem finding a password quickly, and weak authentication can nullify everything achieved with cryptography. It is recommended to choose passwords longer than eight characters that mix digits and letters. Of course, such passwords are not easy to remember, especially if they have no meaning, but this problem is easily solved with external password management applications.
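The password policy just described (longer than eight characters, mixing letters and digits) can be sketched with Python's standard `secrets` module; the function names are illustrative:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def meets_policy(password: str) -> bool:
    """Longer than eight characters and containing both letters and digits."""
    return (
        len(password) > 8
        and any(c.isalpha() for c in password)
        and any(c.isdigit() for c in password)
    )

def generate_password(length: int = 12) -> str:
    """Generate a cryptographically random password satisfying the policy."""
    while True:
        candidate = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if meets_policy(candidate):
            return candidate
```

Note the use of `secrets` rather than `random`: the latter is a predictable pseudo-random generator and is unsuitable for anything security-related.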

For example, it has become popular to keep passwords on USB drives and flash cards, and you can store a certificate or other useful information next to the password list. Note that there are special smart cards and USB dongles for keeping X.509 certificates; such devices can increase security, but they store only certificates and therefore cannot serve as password keepers. Keeping passwords on an external card has several advantages: you can carry your passwords with you, and in case of danger the medium can be destroyed relatively easily. You can use a different password for each application or system, and you can easily use long passwords that are hard to guess or find by brute force. In effect, only the person who holds the device can access the system, which protects the computer not only from an outside attacker but also from anyone who tries to use it during the owner's absence.

Multi-tier application architecture itself allows you to build one more barrier against unauthorized access. You can restrict user access depending on the tasks the user performs, and you can go further: develop the client modules so that operations available to people with limited access are limited right in the application. For example, in some bank branches the set of operations performed by clerks is limited to one or two; in that case the clerk's client module should only be able to perform those operations, while the branch manager uses an advanced version of the application that allows altering the database. Thus security can be increased further by segmenting the application according to the tasks performed.
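The bank-branch example above can be sketched as a simple role-to-operation mapping checked inside the client module; the roles and operation names here are invented for illustration:

```python
# Hypothetical role-to-operation mapping for a bank-branch client module.
PERMISSIONS = {
    "clerk":   {"accept_payment", "check_balance"},
    "manager": {"accept_payment", "check_balance", "open_account", "alter_records"},
}

def authorize(role: str, operation: str) -> bool:
    """Allow an operation only if the user's role explicitly includes it."""
    return operation in PERMISSIONS.get(role, set())
```

Because unknown roles fall back to an empty set, the default is to deny: an operation is available only when it has been explicitly granted, which is the safe direction for such checks.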

You can use a simple scheme to analyze potential gaps in your system's security:

  • analyze the security of data storage and of data transfer channels;
  • check whether there are moments when the data is not encrypted;
  • if the data is not encrypted, check whether it is freely accessible;
  • if the data is encrypted, check whether an attacker can obtain anything usable for recovering the encryption keys.

If you follow these steps and use the information above, you will be able to find the weak places in your security system yourself and create a secure application.


While this article describes only security basics, it is enough to understand the level of modern security systems. That level is high enough to assure protection of your data from the main attacks for a reasonable time. Unfortunately, many IT companies lack even basic knowledge about securing distributed computer systems, and as a result we have a lot of vulnerable and insecure systems. We hope this article has shown you the importance of security in modern life and will help you create truly secure applications. Care about your clients' security, and your effort will be rewarded.

Ready to get started?

Learn more about SecureBlackbox or download a free trial.

Download Now