In most enterprises there are two types of passwords: local and domain. Domain passwords are centralized passwords that are authenticated at an authentication server (e.g., a Lightweight Directory Access Protocol server, an Active Directory server). Local passwords are passwords that are stored and authenticated on the local system (e.g., a workstation or server). Although most local passwords can be managed using centralized password management mechanisms, some can only be managed through third-party tools, scripts, or manual means. A common example is built-in administrator and root accounts. Having a common password shared among all local administrator or root accounts on all machines within a network simplifies system maintenance, but it is a widespread weakness. If a single machine is compromised, an attacker may be able to recover the password and use it to gain access to all other machines that use the shared password. Organizations should avoid using the same local administrator or root account password across many systems. Also, built-in accounts are often not affected by password policies and filters, so it may be easier to just disable the built-in accounts and use other administrator-level accounts instead.
A solution to this local password management problem is the use of randomly generated passwords, unique to each machine, and a central password database that is used to keep track of local passwords on client machines. Such a database should be strongly secured and access to it limited to only the minimum needed. Specific security controls to implement include only permitting authorized administrators on authorized hosts to access the data, requiring strong authentication to access the database (for example, multi-factor authentication), storing the passwords in the database in a protected form (e.g., encrypted or as salted cryptographic hashes), and requiring administrators to verify the identity of the database server before providing authentication credentials to it.
Another solution to management of local account passwords is to generate passwords based on system characteristics such as machine name or media access control (MAC) address. For example, the local password could be based on a cryptographic hash of the MAC address and a standard password. A machine’s MAC address, “00:16:59:7F:2C:4D”, could be combined with the password “N1stSPsRul308” to form the string “00:16:59:7F:2C:4D N1stSPsRul308”. This string could be hashed using SHA and the first 20 characters of the hash used as the password for the machine. This would create a pseudo-salt that would prevent many attackers from discovering that there is a shared password. However, if an attacker recovers one local password, the attacker would be able to determine other local passwords relatively easily.
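A sketch of this derivation in Python, assuming SHA-256 (the text does not name a specific SHA variant) and hexadecimal output; as the text notes, an attacker who recovers the standard password from one machine can derive every other machine's password the same way:

```python
import hashlib

def machine_password(mac: str, shared_secret: str, length: int = 20) -> str:
    """Derive a per-machine local password by hashing the MAC address
    combined with a standard (shared) password, as described above."""
    combined = f"{mac} {shared_secret}"  # e.g. "00:16:59:7F:2C:4D N1stSPsRul308"
    digest = hashlib.sha256(combined.encode()).hexdigest()
    return digest[:length]               # first 20 characters of the hash

print(machine_password("00:16:59:7F:2C:4D", "N1stSPsRul308"))
```

Each machine gets a distinct password, but the "salt" (the MAC address) is public, which is why the text treats this only as a pseudo-salt.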
a. Stolen Assertion
Threat: If an eavesdropper can copy the real user’s SAML (Security Assertion Markup Language) response and included assertions, then the eavesdropper could construct an appropriate POST body and be able to impersonate the user at the destination site.
Countermeasures: Confidentiality MUST be provided whenever a response is communicated between a site and the user’s browser. This provides protection against an eavesdropper obtaining a real user’s SAML response and assertions.
If an eavesdropper defeats the measures used to ensure confidentiality, additional countermeasures are available:
- The Identity Provider and Service Provider sites SHOULD make some reasonable effort to ensure that clock settings at both sites differ by at most a few minutes. Many forms of time synchronization service are available, both over the Internet and from proprietary sources.
- When a non-SSO SAML profile uses the POST binding it must ensure that the receiver can perform timely subject confirmation. To this end, a SAML authentication assertion for the principal MUST be included in the POSTed form response.
- Values for NotBefore and NotOnOrAfter attributes of SSO (Single Sign-On) assertions SHOULD have the shortest possible validity period consistent with successful communication of the assertion from Identity Provider to Service Provider site. This is typically on the order of a few minutes. This ensures that a stolen assertion can only be used successfully within a small time window.
- The Service Provider site MUST check the validity period of all assertions obtained from the Identity Provider site and reject expired assertions. A Service Provider site MAY choose to implement a stricter test of validity for SSO assertions, such as requiring the assertion’s IssueInstant or AuthnInstant attribute value to be within a few minutes of the time at which the assertion is received at the Service Provider site.
- If a received authentication statement includes a <saml:SubjectLocality> element with the IP address of the user, the Service Provider site MAY check the browser IP address against the IP address contained in the authentication statement.
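The validity-window countermeasures above can be sketched as follows (the three-minute skew tolerance is an assumption for illustration, not a value from the specification):

```python
from datetime import datetime, timedelta, timezone

# Assumed tolerance for clock drift between Identity Provider and
# Service Provider; both sites should keep clocks within a few minutes.
CLOCK_SKEW = timedelta(minutes=3)

def assertion_is_valid(not_before, not_on_or_after, now=None):
    """Reject assertions outside their NotBefore/NotOnOrAfter window,
    allowing a small clock skew on either side."""
    now = now or datetime.now(timezone.utc)
    return (not_before - CLOCK_SKEW) <= now < (not_on_or_after + CLOCK_SKEW)

issue = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(assertion_is_valid(issue, issue + timedelta(minutes=5),
                         now=issue + timedelta(minutes=2)))
```

A stricter Service Provider could additionally compare IssueInstant or AuthnInstant against the arrival time, as described above.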
b. Man-in-the-Middle Attack
Threat: Since the Service Provider site obtains bearer SAML assertions from the user by means of an HTML form, a malicious site could impersonate the user at some new Service Provider site. The new Service Provider site would believe the malicious site to be the subject of the assertion.
Countermeasures: The Service Provider site MUST check the Recipient attribute of the SAML response to ensure that its value matches the assertion consumer service URL, https://<assertion consumer host name and path>. Because the response is digitally signed, the Recipient value cannot be altered by the malicious site.
c. Forged Assertion
Threat: A malicious user, or the browser user, could forge or alter a SAML assertion.
Countermeasures: The browser/POST profile requires the SAML response carrying SAML assertions to be signed, thus providing both message integrity and authentication. The Service Provider site MUST verify the signature and authenticate the issuer.
d. Browser State Exposure
Threat: The browser/POST profile involves uploading of assertions from the web browser to a Service Provider site. This information is available as part of the web browser state and is usually stored in persistent storage on the user system in a completely unsecured fashion. The threat here is that the assertion may be “reused” at some later point in time.
Countermeasures: Assertions communicated using this profile must always have short lifetimes and should include a <OneTimeUse> element within the SAML assertion's <Conditions> element. Service Provider sites are expected to ensure that the assertions are not re-used.
e. Replay
Threat: Replay attacks amount to re-submission of the form in order to access a protected resource fraudulently.
Countermeasures: The profile mandates that the assertions transferred have the one-use property at the Service Provider site, preventing replay attacks from succeeding.
f. Modification or Exposure of State Information
Threat: Relay state tampering or fabrication. Some of the messages may carry a <RelayState> element, which is recommended to be integrity-protected by the producer and optionally confidentiality-protected. If these practices are not followed, an adversary could trigger unwanted side effects. In addition, if the value of this element is not confidentiality-protected, a legitimate system entity could inadvertently expose information to the identity provider or a passive attacker.
Countermeasure: Follow the recommended practice of confidentiality- and integrity-protecting the RelayState data. Note: because the value of this element is both produced and consumed by the same system entity, symmetric cryptographic primitives can be utilized.
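A sketch of the symmetric approach in Python (the key and encoding are hypothetical; this shows integrity protection only, since confidentiality would additionally require symmetric encryption of the value):

```python
import base64
import hashlib
import hmac

# Hypothetical key, held only by the entity that both produces and
# consumes the RelayState value, so a symmetric primitive suffices.
KEY = b"relay-state-secret"

def protect(relay_state: bytes) -> bytes:
    """Attach an HMAC-SHA-256 tag so tampering is detectable."""
    tag = hmac.new(KEY, relay_state, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(tag + relay_state)

def verify(token: bytes) -> bytes:
    """Recompute the tag; reject the value if it was altered in transit."""
    raw = base64.urlsafe_b64decode(token)
    tag, relay_state = raw[:32], raw[32:]
    expected = hmac.new(KEY, relay_state, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("RelayState integrity check failed")
    return relay_state

print(verify(protect(b"target=/inbox")))
```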
By providing non-reproducible technological features, nanotechnology-based developments are expected to offer a significant step forward in preventing illicit copying of intellectual property and products. Ultimately, the implementation of these novel techniques should considerably reduce tax revenue losses from counterfeiting and improve citizens' safety and quality of life.
Holograms, tamper-evident closures, tags and markings, and RFID labels are the most widely known anti-counterfeiting technologies. The key limitation of these methods is that they can be copied. Innovations exploiting the intrinsic nature of nanomaterials to give items complex and unique 'fingerprints' result both in the development of new approaches and in the improvement of existing techniques.
Holography - easily identifiable holograms, for example those showing the manufacturer's logo, are primarily used as first-level identification devices. Two-dimensional nanoscale gratings, photopolymers, and luminescent nanoparticles can be utilized to provide an additional level of security for holograms.
Laser surface authentication - a laser is used to examine the surface roughness of an object. The complexity and uniqueness of the surface roughness code are comparable to iris scans and fingerprints. The advantage of the technique is that surface roughness at the nanoscale cannot be replicated; therefore, a much higher level of security is offered to products compared with holograms and watermarks.
Radio frequency identification (RFID) - a form of automatic identification and data capture technology in which data stored on a tag is transferred via a radio frequency link. An RFID reader is used to extract this data from tags. New developments exploit nanoscale variations, naturally produced during the manufacture of RFIDs, that are unique to individual integrated circuits and can be verified during data transfer. This is known as a Physically Unclonable Function (PUF).
Nano barcodes - three-dimensional polymer patterns on the order of tens of nanometres can be made on silicon substrates to provide a 3D nanoscale data encryption key, similar to a barcode. The advantages over conventional barcodes/markings are the difficulty of detecting their presence (covert marking) and of duplicating them. These can be applied to banknotes, security papers, art, jewellery, and gemstones.
SERS and quantum dot tags - metal nanoparticles produce unique electromagnetic spectra (known as surface-enhanced Raman scattering), while certain semiconductor nanoparticles (known as quantum dots) fluoresce differently depending on size and chemical composition. Both can be exploited as identification tools. They are difficult to reproduce owing to the practically unlimited number of combinations, serve as a covert security feature, and offer non-toxicity and multi-functionality. These nanoscale tags can be applied in inks, adhesives, laminates, paper, packaging, textiles, glass, and other materials.
Nano composite tags - consist of a materials-based pattern (with magnetic and/or optical features) that forms part of a label, tag, or embedded portion of an item. The nanometre-sized magnetic and optical features are generated randomly during manufacturing, constituting a unique 'fingerprint' that is read and stored in a central database. The result is a secure identity for an individual item that is prohibitively expensive and difficult to copy. This technology can be applied in the pharmaceutical, spare parts, fashion, and food and beverage industries. Incorporating encapsulated and functionalized (e.g., thermochromic) nanoparticles in labels is another promising solution based on the use of nanocomposites.
Since the SOAP binding requires conformant applications to support HTTP over TLS/SSL with a number of different bilateral authentication methods (such as Basic authentication over server-side SSL and certificate-backed authentication over server-side SSL), these methods are always available to mitigate threats in cases where other lower-level mechanisms are not available and the attacks listed above are considered significant threats.
This does not mean that use of HTTP over TLS with some form of bilateral authentication is mandatory. If an acceptable level of protection from the various risks can be arrived at through other means (for example, by an IPsec tunnel), full TLS with certificates is not required. However, in the majority of cases for SOAP over HTTP, using HTTP over TLS with bilateral authentication will be the appropriate choice.
The HTTP Authentication RFC describes possible attacks in the HTTP environment when basic or message-digest authentication schemes are used. Note, however, that the use of transport-level security (such as the SSL or TLS protocols under HTTP) only provides confidentiality and/or integrity and/or authentication for "one hop". For models where there may be intermediaries, or where the assertions in question need to live over more than one hop, the use of HTTP with TLS/SSL does not provide adequate security.
Quantum cryptography involves a surprisingly elaborate suite of specialized protocols, which we term "QKD protocols." Many aspects of these protocols are unusual – both in motivation and in implementation – and may be of interest to specialists in communications protocols.
Fig.1 Effects of an unbalanced interferometer on a photon
We have designed this engine so it is easy to "plug in" new protocols, and expect to devote considerable time in coming years to inventing new QKD (Quantum Key Distribution) protocols and trying them in practice. As shown in Fig. 1, these protocols are best described as sub-layers within the QKD protocol suite. Note, however, that these layers do not correspond in any obvious way to the layers in a communications stack, e.g., the OSI layers. As will be seen, they are in fact closer to being pipeline stages.
Sifting is the process whereby Alice and Bob winnow away all the obvious "failed qubits" from a series of pulses. As described in the introduction to this section, these failures include those qubits where Alice's laser never transmitted, Bob's detectors didn't work, photons were lost in transmission, and so forth. They also include those symbols where Alice chose one basis for transmission but Bob chose the other for receiving.
At the end of this round of protocol interaction – i.e., after a sift and sift response transaction – Alice and Bob discard all the useless symbols from their internal storage, leaving only those symbols that Bob received and for which Bob's basis matches Alice's. In general, sifting dramatically prunes the number of symbols held by Alice and Bob. For instance, assume that 1% of the photons that Alice tries to transmit are actually received at Bob and that the system noise rate is 0. On average, Alice and Bob will happen to agree on a basis 50% of the time in BB84. Thus only 50% × 1% of Alice's photons give rise to a sifted bit, i.e., 1 photon in 200. A transmitted stream of 1,000 bits therefore would boil down to about 5 sifted bits.
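The arithmetic above can be checked with a tiny Monte Carlo sketch, using the parameters assumed in the text (1% detection probability, 50% basis agreement, zero noise):

```python
import random

random.seed(1)
N = 1_000_000      # pulses Alice attempts to send
DETECT = 0.01      # fraction of photons that actually arrive at Bob

sifted = 0
for _ in range(N):
    arrived = random.random() < DETECT
    bases_match = random.random() < 0.5   # BB84: bases agree half the time
    if arrived and bases_match:
        sifted += 1

# Expect roughly 0.005, i.e. about 1 sifted bit per 200 transmitted pulses.
print(sifted / N)
```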
Error correction allows Alice and Bob to determine all the "error bits" among their shared, sifted bits, and correct them so that Alice and Bob share the same sequence of error-corrected bits. Error bits are ones that Alice transmitted as a 0 but Bob received as a 1, or vice versa. These bit errors can be caused by noise or by eavesdropping.
Error correction in quantum cryptography has a very unusual constraint, namely, evidence revealed in error detection and correction (e.g., parity bits) must be assumed to be known to Eve, and thus to reduce the hidden entropy available for key material. As a result, there is very strong motivation to design error detection and correction codes that reveal as little as possible in their public control traffic between Alice and Bob. Our first approach for error correction is a novel variant of the Cascade protocol and algorithms. The protocol is adaptive, in that it will not disclose too many bits if the number of errors is low, but it will accurately detect and correct a large number of errors (up to some limit) even if that number is well above the historical average.
Our version works by defining a number of subsets (currently 64) of the sifted bits and forming the parities of each subset. In the first message, the list of subsets and their parities is sent to the other side, which then replies with its version of the parities. The subsets are pseudo-random bit strings, from a Linear-Feedback Shift Register (LFSR), and are identified by a 32-bit seed for the LFSR. Once an error bit has been found and fixed, both sides inspect their records of subsets and subranges, and flip the recorded parity of those that contained that bit. This will clear up some discrepancies but may introduce new ones, and so the process continues.
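A toy illustration of the subset-parity idea (the tap polynomial and seed handling here are hypothetical; this is not the actual Cascade variant):

```python
def lfsr_subset(seed: int, n: int) -> list:
    """Expand a 32-bit LFSR seed into a pseudo-random 0/1 mask of length n,
    selecting which sifted bits belong to the subset.  The tap constant is
    chosen arbitrarily for illustration."""
    state, mask = seed & 0xFFFFFFFF, []
    for _ in range(n):
        bit = state & 1
        mask.append(bit)
        state >>= 1
        if bit:
            state ^= 0x80200003   # hypothetical tap polynomial
    return mask

def subset_parity(bits: list, seed: int) -> int:
    """Parity (mod-2 sum) of the sifted bits selected by the seed's mask."""
    mask = lfsr_subset(seed, len(bits))
    return sum(b & m for b, m in zip(bits, mask)) % 2
```

Both sides compute parities for the same seeds; any subset whose parities disagree contains an odd number of error bits, which narrows the search for them.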
Since these parity fields are revealed in the interchange of "error correction" messages between Alice and Bob, these bits must be taken as known to Eve. Therefore, the QKD protocol engine records the amount of information revealed (lost) due to parity fields, and later requires a compensating level of privacy amplification to reduce Eve's knowledge to acceptable levels.
Privacy amplification is the process whereby Alice and Bob reduce Eve’s knowledge of their shared bits to an acceptable level. This technique is also often called advantage distillation.
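As an illustration only, a universal-hash-style compression step can be sketched as a mod-2 matrix-vector product with a publicly agreed seed; this is a generic construction, not the specific one used in any particular QKD system:

```python
import random

def privacy_amplify(bits: list, out_len: int, seed: int) -> list:
    """Compress the shared bits with a pseudo-randomly chosen binary matrix
    (mod-2 matrix-vector product).  Alice and Bob use the same public seed,
    so they derive identical shorter keys about which Eve knows less."""
    rng = random.Random(seed)   # seed may be public; the input bits are secret
    out = []
    for _ in range(out_len):
        row = [rng.randrange(2) for _ in bits]
        out.append(sum(r & b for r, b in zip(row, bits)) % 2)
    return out

print(privacy_amplify([1, 0, 1, 1, 0, 0, 1, 0], out_len=4, seed=42))
```

The amount of shortening (`len(bits) - out_len`) is chosen to compensate for the parity bits disclosed during error correction.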
Authentication allows Alice and Bob to guard against "man in the middle attacks," i.e., allows Alice to ensure that she is communicating with Bob (and not Eve) and vice versa. Authentication must be performed on an ongoing basis for all key management traffic, since Eve may insert herself into the conversation between Alice and Bob at any stage in their communication. The original BB84 paper described the authentication problem and sketched a solution to it based on universal families of hash functions, introduced by Wegman and Carter. This approach requires Alice and Bob to already share a small secret key, which is used to select a hash function from the family to generate an authentication hash of the public correspondence between them. By the nature of universal hashing, any party who didn't know the secret key would have an extremely low probability of being able to forge the correspondence, even an adversary with unlimited computational power. The drawback is that the secret key bits cannot be re-used even once on different data without compromising the security. Fortunately, a complete authenticated conversation can validate a large number of new, shared secret bits from QKD, and a small number of these may be used to replenish the pool.
There are many further details in a practical system which we will only mention in passing, including symmetrically authenticating both parties, limiting the opportunities for Eve to force exhaustion of the shared secret key bits, and adapting the system to network asynchrony and retransmissions. Another important point: it is insufficient to authenticate just the QKD protocols; we must also apply these techniques to authenticate the VPN data traffic.
Hash Functions: The first strategy (Figure 1.1-a) for memory authentication consists in storing on-chip a hash value for each memory block stored off-chip (write operations). Integrity checking is done on read operations by re-computing a hash over the loaded block and then comparing the resulting hash with the on-chip hash fingerprinting the off-chip memory location. The on-chip hash is stored in the tamper-resistant area, i.e., the processor chip, and is thus inaccessible to adversaries. Therefore, spoofing, splicing, and replay are detected if a mismatch occurs in the hash comparison. However, this solution has an unaffordable on-chip memory cost: considering the common strategy of computing a fingerprint per cache line and assuming 128-bit hashes and 512-bit cache lines, the overhead is 25% of the memory space to protect.
MAC Functions: In the second approach (Figure 1.1-b), the authentication engine embedded on-chip computes a MAC (Message Authentication Code) for every data block it writes in the physical memory. The key used in the MAC computation is securely stored on the trusted processor chip such that only the on-chip authentication engine itself is able to compute valid MACs. As a result, the MACs can be stored in untrusted memory because the attacker is unable to compute a valid MAC over a corrupted data block. In addition to the data contained in the block, the pre-image of the MAC function contains a nonce. This allows protection against splicing and replay attacks. The nonce precludes an attacker from passing a data block at address A, along with the associated MAC, as a valid (data block, MAC) pair for address B, where A ≠ B. It also prevents the replay of a (data block, MAC) pair by distinguishing two pairs related to the same address, but written in memory at different points in time. On read operations, the processor loads the data to read and its corresponding MAC from physical memory. It checks the integrity of the loaded block by first re-computing a MAC over this block and a copy of the nonce used on the write operation and then comparing the result with the fetched MAC. However, to assure resistance to replay and splicing, the nonce used for MAC re-computation must be genuine. A naive solution to assure this requirement is to store the nonces in the trusted and tamper-evident area, the processor chip. The related on-chip memory overhead is 12.5% if we consider computing a MAC per 512-bit cache line and using 64-bit nonces.
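A software sketch of this scheme (assuming HMAC-SHA-256 as the MAC and the block sizes above; in a real system this is a hardware engine, and the key and nonces never leave the chip):

```python
import hashlib
import hmac

KEY = b"on-chip-secret"   # held inside the trusted processor boundary

def mac_block(address: int, nonce: int, data: bytes) -> bytes:
    """The MAC pre-image binds the data to its address (defeating splicing)
    and to a per-write nonce (defeating replay)."""
    preimage = address.to_bytes(8, "big") + nonce.to_bytes(8, "big") + data
    return hmac.new(KEY, preimage, hashlib.sha256).digest()

# Write: store (data, mac) off-chip; keep the nonce on-chip.
data, addr, nonce = b"\x00" * 64, 0x1000, 1   # one 512-bit cache line
stored_mac = mac_block(addr, nonce, data)

# Read: recompute with the on-chip nonce and compare.
assert hmac.compare_digest(stored_mac, mac_block(addr, nonce, data))
# A block spliced in from another address fails the check:
assert not hmac.compare_digest(stored_mac, mac_block(0x2000, nonce, data))
```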
H : Hash Function, D : Data, C : Ciphertext, N : Nonce
Figure 1.1: Authentication Primitives for Memory Integrity Checking
One aspect that has been overlooked in mobile research is link layer access. Most mobility solutions assume that the link layer configuration will be automatic and base trigger mechanisms on the presence of network layer connectivity. We believe that there is a need for a framework for link layer access, to standardize the operating system interface, creating a unified API to report the presence of access points in the vicinity of the mobile, and to do AAA (Authentication, Authorization and Accounting). A multiplexing transport protocol has to be aware of new link layers that become available, and of link layers that can no longer be used, to add and remove these interfaces from protocol processing. To this end, a link-layer-aware transport protocol needs the following support:
Link layer management: a management entity can use direct information (by probing or listening to the link layer for the presence of access points) or indirect information (by using an existing connection to query the infrastructure for the existence of additional access points) to find new access points. This is called link layer discovery. Management also encompasses measuring signal strength and possibly location hints to rule that a link layer is no longer usable. This is called link layer disconnection.
Network layer management: before using a link layer, the mobile has to acquire an IP address for that interface. The most common protocol for acquiring a network address in broadcast media is DHCP (Dynamic Host Configuration Protocol). For point-to-point links, such as infrared, acquiring a network address also entails creating a point-to-point link. In this case, the link will only be created on demand, as creating the link precludes other mobiles from using the same access point.
Transport layer notification: the transport layer has to be notified of new access points (in the form of a new IP address it can use) and of the loss of an active access point (an IP address that can no longer be used). The transport protocols can also notify a management entity about the available bandwidth of each link. Because this bandwidth is closely tied to the available bandwidth of the last hop, the management entity can enforce usage policies for cooperating protocols by controlling the maximum bandwidth each protocol instance can use.
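The notification path described above might be sketched as follows (all class and method names are hypothetical; a real framework would sit between the OS link layer and the transport protocol):

```python
class Transport:
    """Multiplexing transport that tracks which IP addresses it may use."""
    def __init__(self):
        self.addresses = set()

    def link_up(self, ip):       # a new access point became usable
        self.addresses.add(ip)

    def link_down(self, ip):     # an active access point was lost
        self.addresses.discard(ip)

class LinkManager:
    """Management entity: discovers and drops link layers, then notifies
    the transport with the corresponding IP addresses."""
    def __init__(self, transport):
        self.transport = transport

    def discovered(self, iface, ip):
        # in a full system, network-layer setup (e.g. DHCP) runs first
        self.transport.link_up(ip)

    def disconnected(self, iface, ip):
        self.transport.link_down(ip)

t = Transport()
m = LinkManager(t)
m.discovered("wlan0", "10.0.0.5")
print(t.addresses)
m.disconnected("wlan0", "10.0.0.5")
```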
Broadly stated, QKD (Quantum Key Distribution) offers a technique for coming to agreement upon a shared random sequence of bits within two distinct devices, with a very low probability that other devices (eavesdroppers) will be able to make successful inferences as to those bits' values. In specific practice, such sequences are then used as secret keys for encoding and decoding messages between the two devices. Viewed in this light, QKD is quite clearly a key distribution technique, and one can rate QKD's strengths against a number of important goals for key distribution, as summarized in the following paragraphs.
Confidentiality of Keys: Confidentiality is the main reason for interest in QKD. Public key systems suffer from an ongoing uncertainty as to whether decryption will remain mathematically intractable. Thus key agreement primitives widely used in today's Internet security architecture, e.g., Diffie-Hellman, may perhaps be broken at some point in the future. This would not only hinder future ability to communicate but could also reveal past traffic. Classic secret key systems have suffered from different problems, namely, insider threats and the logistical burden of distributing keying material. Assuming that QKD techniques are properly embedded into an overall secure system, they can provide automatic distribution of keys that may offer security superior to that of its competitors.
Authentication: QKD does not in itself provide authentication. Current strategies for authentication in QKD systems include prepositioning of secret keys at pairs of devices, to be used in hash-based authentication schemes, or hybrid QKD-public key techniques. Neither approach is entirely appealing. Prepositioned secret keys require some means of distributing these keys before QKD itself begins, e.g., by human courier, which may be costly and logistically challenging. Furthermore, this approach appears open to denial of service attacks in which an adversary forces a QKD system to exhaust its stockpile of key material, at which point it can no longer perform authentication. On the other hand, hybrid QKD-public key schemes inherit the possible vulnerabilities of public key systems to cracking via quantum computers or unexpected advances in mathematics.
Sufficiently Rapid Key Delivery: Key distribution systems must deliver keys fast enough so that encryption devices do not exhaust their supply of key bits. This is a race between the rate at which keying material is put into place and the rate at which it is consumed for encryption or decryption activities. Today's QKD systems achieve on the order of 1,000 bits/second throughput for keying material, in realistic settings, and often run at much lower rates. This is unacceptably low if one uses these keys in certain ways, e.g., as one-time pads for high-speed traffic flows. However, it may well be acceptable if the keying material is used as input for less secure (but often secure enough) algorithms such as the Advanced Encryption Standard. Nonetheless, it is both desirable and possible to greatly improve upon the rates provided by today's QKD technology.
Robustness: This has not traditionally been taken into account by the QKD community. However, since keying material is essential for secure communications, it is extremely important that the flow of keying material not be disrupted, whether by accident or by the deliberate acts of an adversary (i.e., by denial of service). Here QKD has provided a highly fragile service to date, since QKD techniques have implicitly been employed along a single point-to-point link. If that link were disrupted, whether by active eavesdropping or indeed by fiber cut, all flow of keying material would cease. In our view a meshed QKD network is inherently far more robust than any single point-to-point link, since it offers multiple paths for key distribution.
Distance- and Location-Independence: In the ideal world, any entity can agree upon keying material with any other (authorized) entity in the world. Rather remarkably, the Internet's security architecture does offer this feature – any computer on the Internet can form a security association with any other, agreeing upon keys through the Internet IPsec protocols. This feature is notably lacking in QKD, which requires the two entities to have a direct and unencumbered path for photons between them, and which can only operate for a few tens of kilometers through fiber.
Resistance to Traffic Analysis: Adversaries may be able to perform useful traffic analysis on a key distribution system, e.g., a heavy flow of keying material between two points might reveal that a large volume of confidential information flows, or will flow, between them. It may thus be desirable to impede such analysis. Here QKD in general has had a rather weak approach, since most setups have assumed dedicated, point-to-point QKD links between communicating entities, which clearly lays out the underlying key distribution relationships.
The DARPA Quantum Network aims to strengthen QKD's performance in these weaker areas. In some instances, this involves the introduction of newer QKD technologies; for example, we hope to achieve rapid delivery of keys by introducing a new, high-speed source of entangled photons. In other instances, we rely on an improved system architecture to achieve these goals; thus, we tackle distance- and location-independence by introducing a network of trusted relays. Whereas most work to date has focused on the physical layer of quantum cryptography – e.g., the modulation, transmission, and detection of single photons – our own research effort aims to build QKD networks. As such, it is oriented to a large extent towards novel protocols and architectures for highly secure communications across a heterogeneous variety of underlying kinds of QKD links.
Figure 1. A Virtual Private Network (VPN) based on Quantum Key Distribution
Our security model is the cryptographic Virtual Private Network (VPN). Conventional VPNs use both public-key and symmetric cryptography to achieve confidentiality and authentication/integrity. Public-key mechanisms support key exchange or agreement, and authenticate the endpoints. Symmetric mechanisms (e.g., 3DES, SHA1) provide traffic confidentiality and integrity. Thus VPN systems can provide confidentiality and authentication/integrity without trusting the public network interconnecting the VPN sites. In our work, existing VPN key agreement primitives are augmented or completely replaced by keys provided by quantum cryptography. The remainder of the VPN construct is left unchanged; see Fig. 1. Thus our QKD-secured network is fully compatible with conventional Internet hosts, routers, firewalls, and so forth.
At time of writing, we are slightly over one year into a projected five-year effort to build the full DARPA Quantum Network. In our first year, we have built a complete quantum cryptographic link and a QKD protocol engine with a working suite of QKD protocols, and have integrated this cryptographic substrate into an IPsec-based Virtual Private Network. This entire system has been continuously operational since December 2002, and we are now in the process of characterizing its behavior and tuning it. In coming years, we plan to build a second link based on two-photon entanglement, and to build various forms of end-to-end networks for QKD across a variety of kinds of links. We expect the majority of our links to be implemented in dark fiber, but some may also be implemented in free space, either in the lab or outdoors.
ColdFusion features built-in server-side file search, Adobe Flash and Adobe Flex application connectivity, web services publishing, and charting capabilities. ColdFusion is implemented on the Java platform and uses a Java 2 Enterprise Edition (J2EE) application server for many of its runtime services. ColdFusion can be configured to use an embedded J2EE server (Adobe JRun), or it can be deployed as a J2EE application on a third party J2EE application server such as Apache Tomcat, IBM WebSphere, and BEA WebLogic.
Web and application server authentication can be thought of as two different controls (see Figure 1). Web server authentication is controlled by the web server administration console or configuration files. These controls do not need to interact with the application code to function. For example, with Apache you modify the httpd.conf or .htaccess files; for IIS, you use the IIS Microsoft Management Console. Basic authentication works by sending a challenge back to the user's browser for the protected URI. The user must then respond with the user ID and password, separated by a single colon and encoded using base64 encoding. Application-level authentication occurs at a layer after the web server access controls have been processed. This section examines how to use ColdFusion to authenticate and authorize users to resources at the application level.
Figure 1: Web server and application server authentication occur in sequence before access is granted to protected resources.
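The Basic scheme just described can be illustrated in a few lines of Python (the helper name is hypothetical); note that base64 is an encoding, not encryption, which is why Basic credentials must only travel over SSL/TLS:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build HTTP Basic credentials: 'user:password', base64-encoded."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Authorization: Basic {token}"

print(basic_auth_header("alice", "s3cret"))
# prints "Authorization: Basic YWxpY2U6czNjcmV0"
```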
ColdFusion enables you to authenticate against multiple system types. These types include LDAP, text files, databases, NTLM, client-side certificates via LDAP, and others via custom modules. The section below describes using these credential stores according to best practices.
- When a user enters an invalid credential into a login page, do NOT reveal which item was incorrect; instead, show a generic message such as "Your login information was invalid!"
- Never submit login information via a GET request; always use POST.
- Use SSL to protect login page delivery and credential transmission.
- Remove dead code and client-side viewable comments from all pages.
- Set application variables in the Application.cfc file. The values you use ultimately depend on the function of your application; however, for best practices use the following:
- applicationTimeout = #CreateTimeSpan(0,8,0,0)#
- loginStorage = session
- sessionTimeout = #CreateTimeSpan(0,0,20,0)#
- sessionManagement = True
- scriptProtect = All
- setClientCookies = False (Use JSESSIONID)
- setDomainCookies = False
- name (This value is application-dependent; however, it should be set)
- Do not depend on client-side validation. Validate input parameters for type and length on the server, using regular expressions or string functions.
- Database queries must use parameterized queries (<cfqueryparam>) or properly constructed stored procedure parameters (<cfprocparam>).
- Database connections should be created using a lower-privileged account. Your application should not log into the database using sa or dbadmin.
- Hash passwords in a database or flat file using SHA-256 or greater with a random salt value for each password. For example, Hash(password + salt, "SHA-256").
- Call StructClear(Session) to completely clear a user's session. Issuing <cflogout> when using LoginStorage=Session removes the SESSION.cfauthorization variable from the Session scope, but does not clear the current user's session object.
- Prompt the user to close the browser session to ensure that header authentication information has been flushed.
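The salted-hash rule above (SHA-256 with a random salt per password) can be sketched in Python rather than CFML for clarity; the function names are hypothetical, and the salt is stored alongside the hash so the check can be repeated at login:

```python
import hashlib
import secrets

def hash_password(password: str):
    """Hash a password with SHA-256 and a fresh random salt.
    Returns (hex digest, salt) for storage."""
    salt = secrets.token_hex(16)              # unique salt per password
    digest = hashlib.sha256((password + salt).encode()).hexdigest()
    return digest, salt

def check_password(password: str, digest: str, salt: str) -> bool:
    """Recompute the salted hash and compare against the stored digest."""
    return hashlib.sha256((password + salt).encode()).hexdigest() == digest

d, s = hash_password("N1stSPsRul308")
print(check_password("N1stSPsRul308", d, s))
```

Because each password gets its own salt, identical passwords produce different stored hashes, defeating precomputed-table attacks.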