Superthreading with a multithreaded processor

One of the ways that ultra-high-performance computers eliminate the waste associated with the kind of single-threaded SMP described above is to use a technique called time-slice multithreading, or superthreading. A processor that uses this technique is called a multithreaded processor, and such processors are capable of executing more than one thread at a time. If you’ve followed the discussion so far, then this diagram should give you a quick and easy idea of how superthreading works:

You’ll notice that there are fewer wasted execution slots because the processor is executing instructions from both threads simultaneously. I’ve added in those small arrows on the left to show you that the processor is limited in how it can mix the instructions from the two threads. In a multithreaded CPU, each processor pipeline stage can contain instructions for one and only one thread, so that the instructions from each thread move in lockstep through the CPU.

To visualize how this works, take a look at the front end of the CPU in the preceding diagram. In this diagram, the front end can issue four instructions per clock to any four of the seven functional unit pipelines that make up the execution core. However, all four instructions must come from the same thread. In effect, then, each executing thread is still confined to a single “time slice,” but that time slice is now one CPU clock cycle. So instead of system memory containing multiple running threads that the OS swaps in and out of the CPU each time slice, the CPU’s front end now contains multiple executing threads and its issuing logic switches back and forth between them on each clock cycle as it sends instructions into the execution core.

Multithreaded processors can help alleviate some of the latency problems brought on by DRAM memory’s slowness relative to the CPU. For instance, consider the case of a multithreaded processor executing two threads, red and yellow. If the red thread requests data from main memory and this data isn’t present in the cache, then this thread could stall for many CPU cycles while waiting for the data to arrive. In the meantime, however, the processor could execute the yellow thread while the red one is stalled, thereby keeping the pipeline full and getting useful work out of what would otherwise be dead cycles.

While superthreading can help immensely in hiding memory access latencies, it does not address the waste associated with poor instruction-level parallelism within individual threads. If the scheduler can find only two instructions in the red thread to issue in parallel to the execution core on a given cycle, then the other two issue slots will simply go unused.
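To make the issue policy concrete, here is a toy Python model of superthreading. This is an illustrative sketch, not a real scheduler: the issue width, the thread names, and the stall pattern are all assumptions. Each cycle, every issue slot goes to a single thread; a stalled thread forfeits its turn; and slots the owning thread cannot fill are wasted, which is exactly the ILP limitation described above.

```python
ISSUE_WIDTH = 4  # instructions the front end can issue per clock (assumed)

def superthread(threads, cycles):
    """Simulate per-cycle interleaving of threads.

    threads maps a thread name to a function cycle -> number of ready
    instructions (0 models a stall, e.g. waiting on a cache miss).
    Returns (instructions issued per thread, wasted issue slots).
    """
    order = list(threads)
    issued = {name: 0 for name in order}
    wasted = 0
    turn = 0
    for cycle in range(cycles):
        # The first non-stalled thread (round-robin) owns every issue
        # slot this cycle: all issued instructions come from one thread.
        for i in range(len(order)):
            name = order[(turn + i) % len(order)]
            ready = threads[name](cycle)
            if ready > 0:
                n = min(ready, ISSUE_WIDTH)
                issued[name] += n
                wasted += ISSUE_WIDTH - n  # slots the thread couldn't fill
                turn = (turn + i + 1) % len(order)
                break
        else:
            wasted += ISSUE_WIDTH  # every thread stalled: a dead cycle
    return issued, wasted

# red stalls on a simulated cache miss during cycles 2-7; yellow always
# has only two independent instructions ready (limited ILP).
red = lambda c: 0 if 2 <= c <= 7 else 4
yellow = lambda c: 2
print(superthread({"red": red, "yellow": yellow}, 10))
# -> ({'red': 8, 'yellow': 16}, 16)
```

Note how the yellow thread keeps the pipeline busy while red is stalled, yet half of yellow's issue slots still go to waste: superthreading hides memory latency but not poor per-thread ILP.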


Honeypots Applied to Open Proxies

Honeypots apply to open mail relays in exactly the same way that they apply to open proxies. For this reason, in the following section, I will briefly describe honeypots only as they apply to (1) open proxies and (2) bot-networks.

Recall that an open proxy enables spammers to fully conceal their identities by making all email messages appear to come from the proxy. A cybersleuth could set up an open proxy honeypot and wait for spammers to start using it. This fake open proxy would record the source address of all connections to it along with all traffic routed through it. This could potentially provide significant leads for catching the spammer. The Proxypot Project is an example of an open proxy honeypot specifically designed to catch spammers. It accepts connections from any computer on the Internet, and logs all relevant information about the connection. Most importantly, it logs the address of the computer that initiates each connection. The project also provides tools to search these log files for spam activity. Note that Proxypot actually stops short of sending spam traffic to its destination. It only logs the fact that an attempt to send spam has occurred. By blocking spam routed through it, Proxypot ensures that it does not contribute to the prevalence of spam email.

Honeypot logs can reveal the network address of a spammer. Once spammers discover the open proxy honeypot, they begin to route spam email through it; unless they take extra precautions, they cannot tell that the open proxy they are using is a honeypot. After a few days, by examining the honeypot’s logs, the cybersleuth can expose the spammer’s network address. Although open proxy honeypots would seem to be a powerful technique for catching spammers, the approach has a number of significant drawbacks. First, spammers are well aware of the existence of honeypots and are implementing countermeasures to evade them. For example, the Send-Safe tool can detect honeypots by sending a test spam email to itself. Since most honeypots block spam email routed through them, the test message will not be delivered, and Send-Safe will stop routing email through the open proxy honeypot.

Second, spammers can completely fool an open proxy honeypot by using proxy chains. Suppose the spammer identifies three open proxies called A, B, and C; the odds are that at most one of them is a honeypot. The spammer then sends email along a path through all three servers: the email travels from the spammer’s machine first to server A, then to server B, then to server C, and finally to the spam recipient. Now suppose that server C is the honeypot. It only “sees” connections from server B, not from the spammer. As a result, the honeypot’s log would falsely incriminate B as the spammer. In fact, when spammers use proxy chains in this manner, a honeypot log records no useful information at all, unless the honeypot happens to be the first server in the chain. Spammers find proxy chains inconvenient to use, since they slow down email delivery and require the spammer to identify a greater number of open proxies. Nevertheless, if honeypots become prevalent, it is likely that spammers will simply switch to proxy chains to evade detection.


Virtual Property and Virtual Economies

For any object or structure found in a virtual world, one may ask the question: who owns it? This question is already ambiguous, however, because there may be both virtual and real-life owners of virtual entities. For example, a user may be considered to be the owner of an island in a virtual world by fellow users, but the whole world, including the island, may be owned by the company that has created it and permits users to act out roles in it. Users may also become creators of virtual objects, structures, and scripted events, and some put hundreds of hours of work into their creations. May they therefore also assert intellectual property rights to their creations? Or can the company that owns the world in which the objects are found and the software with which they were created assert ownership? What kind of framework of rights and duties should be applied to virtual property?

The question of property rights in virtual worlds is further complicated by the emergence of so-called virtual economies. Virtual economies are economies that exist within the context of a persistent multi-user virtual world. Such economies have emerged in virtual worlds like Second Life and The Sims Online, and in massively multiplayer online role-playing games (MMORPGs) like Entropia Universe, World of Warcraft, EverQuest, and EVE Online. Many of these worlds have millions of users. Economies can emerge in virtual worlds if there are scarce goods and services for which users are willing to spend time, effort, or money; if users can develop specialized skills to produce such goods and services; if users are able to assert property rights over goods and resources; and if they can transfer goods and services between them.

Some economies in these worlds are primitive barter economies, whereas others make use of recognized currencies. Second Life, for example, uses the Linden Dollar (L$) and Entropia Universe has the Project Entropia Dollar (PED), both of which have an exchange rate against real U.S. dollars. Users of these worlds can hence choose to acquire such virtual money by doing work in the virtual world (e.g., by selling services or opening a virtual shop) or by making money in the real world and exchanging it for virtual money. Virtual objects are now frequently traded for real money outside the virtual worlds that contain them, on online trading and auction sites like eBay. Some worlds also allow for the trade of land. In December 2006, the average price of a square meter of land in Second Life was L$9.68, or U.S. $0.014 (up from L$6.67 in November), and over 36,000,000 square meters were sold. Users have been known to pay thousands of dollars for cherished virtual objects, and over $100,000 for real estate.

The emergence of virtual economies in virtual environments raises the stakes for their users, and increases the likelihood that moral controversies ensue. People will naturally be more likely to act immorally if money is to be made or if valuable property is to be had. In one incident that took place in China, a man lent a precious sword to another man in the online game Legend of Mir 3, who then sold it to a third party. When the lender found out about this, he visited the borrower at his home and killed him. Cases have also been reported of Chinese sweatshop laborers who work day and night in conditions of practical slavery to collect resources in games like World of Warcraft and Lineage, which are then sold for real money.

There have also been reported cases of virtual prostitution, for instance on Second Life, where users are paid to (use their avatar to) perform sex acts or to serve as escorts. There have also been controversies over property rights. On Second Life, for example, controversy ensued when someone introduced a program called CopyBot that could copy any item in the world. This program wreaked havoc on the economy, undermining the livelihood of thousands of business owners in Second Life, and was eventually banned after mass protests. Clearly, then, the emergence of virtual economies and serious investments in virtual property generates many new ethical issues in virtual worlds. The more time, money, and social capital people invest in virtual worlds, the more such ethical issues will come to the fore.


HTTP Redirect/POST binding

a. Stolen Assertion

Threat: If an eavesdropper can copy the real user’s SAML (Security Assertion Markup Language) response and included assertions, then the eavesdropper could construct an appropriate POST body and be able to impersonate the user at the destination site.

Countermeasures: Confidentiality MUST be provided whenever a response is communicated between a site and the user’s browser. This provides protection against an eavesdropper obtaining a real user’s SAML response and assertions.

If an eavesdropper defeats the measures used to ensure confidentiality, additional countermeasures are available:

  1. The Identity Provider and Service Provider sites SHOULD make some reasonable effort to ensure that clock settings at both sites differ by at most a few minutes. Many forms of time synchronization service are available, both over the Internet and from proprietary sources.
  2. When a non-SSO SAML profile uses the POST binding it must ensure that the receiver can perform timely subject confirmation. To this end, a SAML authentication assertion for the principal MUST be included in the POSTed form response.
  3. Values for NotBefore and NotOnOrAfter attributes of SSO (Single Sign-On) assertions SHOULD have the shortest possible validity period consistent with successful communication of the assertion from Identity Provider to Service Provider site. This is typically on the order of a few minutes. This ensures that a stolen assertion can only be used successfully within a small time window.
  4. The Service Provider site MUST check the validity period of all assertions obtained from the Identity Provider site and reject expired assertions. A Service Provider site MAY choose to implement a stricter test of validity for SSO assertions, such as requiring the assertion’s IssueInstant or AuthnInstant attribute value to be within a few minutes of the time at which the assertion is received at the Service Provider site.
  5. If a received authentication statement includes a <saml:SubjectLocality> element with the IP address of the user, the Service Provider site MAY check the browser IP address against the IP address contained in the authentication statement.
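Countermeasures 3 through 5 above can be sketched as a Service Provider-side validity check. This is a hedged illustration, not a real SAML library API: the dict representation, field names, and the five-minute window are assumptions.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(minutes=5)  # assumed acceptable age for IssueInstant

def assertion_is_valid(assertion, browser_ip, now=None):
    """assertion: dict with NotBefore / NotOnOrAfter / IssueInstant
    datetimes and an optional SubjectLocality IP string (illustrative)."""
    now = now or datetime.now(timezone.utc)
    # Countermeasure 4: reject assertions outside their validity period.
    if now < assertion["NotBefore"] or now >= assertion["NotOnOrAfter"]:
        return False
    # Stricter optional test (countermeasure 4): IssueInstant must be
    # within a few minutes of the time the assertion is received.
    if now - assertion["IssueInstant"] > MAX_AGE:
        return False
    # Countermeasure 5: optionally check the browser IP address against
    # the IP carried in <saml:SubjectLocality>.
    locality = assertion.get("SubjectLocality")
    if locality is not None and locality != browser_ip:
        return False
    return True
```

A real deployment would apply these tests after signature verification, and would tolerate the small clock skew addressed by countermeasure 1.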

b. Man-in-the-Middle Attack

Threat: Since the Service Provider site obtains bearer SAML assertions from the user by means of an HTML form, a malicious site could impersonate the user at some new Service Provider site. The new Service Provider site would believe the malicious site to be the subject of the assertion.

Countermeasures: The Service Provider site MUST check the Recipient attribute of the SAML response to ensure that its value matches the https://<assertion consumer host name and path>. As the response is digitally signed, the Recipient value cannot be altered by the malicious site.

c. Forged Assertion

Threat: A malicious user, or the browser user, could forge or alter a SAML assertion.

Countermeasures: The browser/POST profile requires the SAML response carrying SAML assertions to be signed, thus providing both message integrity and authentication. The Service Provider site MUST verify the signature and authenticate the issuer.

d. Browser State Exposure

Threat: The browser/POST profile involves uploading of assertions from the web browser to a Service Provider site. This information is available as part of the web browser state and is usually stored in persistent storage on the user system in a completely unsecured fashion. The threat here is that the assertion may be “reused” at some later point in time.

Countermeasures: Assertions communicated using this profile must always have short lifetimes and should include a <OneTimeUse> element within the SAML assertion’s <Conditions> element. Service Provider sites are expected to ensure that assertions are not reused.

e. Replay

Threat: Replay attacks amount to re-submission of the form in order to access a protected resource fraudulently.

Countermeasures: The profile mandates that the assertions transferred have the one-use property at the Service Provider site, preventing replay attacks from succeeding.
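The one-use property amounts to the Service Provider remembering which assertion IDs it has already consumed and rejecting any repeat. The sketch below illustrates the idea; the class and method names are invented, and a production cache would also evict IDs once their NotOnOrAfter time passes.

```python
class AssertionConsumer:
    """Minimal one-time-use tracker for SAML assertion IDs (illustrative)."""

    def __init__(self):
        self._seen = set()  # IDs of assertions already consumed

    def consume(self, assertion_id):
        """Return True the first time an assertion ID is presented,
        False on any replay (e.g. a re-submitted POST form)."""
        if assertion_id in self._seen:
            return False
        self._seen.add(assertion_id)
        return True

sp = AssertionConsumer()
print(sp.consume("_a75adf55"))  # True  (first use succeeds)
print(sp.consume("_a75adf55"))  # False (replayed form is rejected)
```

Combined with the short validity periods above, this bounds how long the `_seen` set must retain any given ID.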

f. Modification or Exposure of state information

Threat: Relay state tampering or fabrication. Some of the messages may carry a <RelayState> element, which is recommended to be integrity-protected by the producer and optionally confidentiality-protected. If these practices are not followed, an adversary could trigger unwanted side effects. In addition, by not confidentiality-protecting the value of this element, a legitimate system entity could inadvertently expose information to the identity provider or a passive attacker.

Countermeasure: Follow the recommended practice of confidentiality- and integrity-protecting the RelayState data. Note: because the value of this element is both produced and consumed by the same system entity, symmetric cryptographic primitives can be utilized.


Trust challenges of Cloud computing

The security and privacy challenges discussed above are also relevant to the general requirement that Cloud suppliers provide trustworthy services. If Cloud providers find adequate solutions to address the data privacy and security specificities of their business model, they will in a certain way have met the requirement of offering trusted services. Yet there are a few other challenges which, if tackled properly, would enhance users’ confidence in the application of Cloud computing and would build market trust in Cloud service offerings.

Continuity and Provider Dependency - The increasing complexity of Cloud architectures and the resulting lack of transparency also increase the security risk. In many Cloud implementations, centralized management and control introduce several so-called single points of failure. These could indirectly threaten the availability of Cloud users’ data or computing capabilities, as a small incident in the Cloud could have a disproportionately large impact.

Compliance with applicable regulations and good practices - While privacy is one regulatory area particularly relevant to Cloud computing, it is certainly not the only one. Once the law applicable to a Cloud service is determined, the provider will need to comply with regulations beyond privacy, such as general civil and contract law, consumer protection law, e-commerce regulation, and fair trade practices law.

Change in Cloud ownership and “Force Majeure” - The Cloud market is still immature, and the state of the global economy may affect some of the Cloud industry players in the coming months or years. Accordingly, users of the Cloud must be confident that the services externalized to the Cloud provider, including any important assets (personal data, confidential information), will not be disrupted, as discussed above (“Continuity and Provider Dependency”).

Trust enhancement through assurance mechanisms – By definition, the Cloud computing concept cannot guarantee Cloud users full and continuous control over their assets. For this reason, establishing appropriate “checks and controls” to ascertain that Cloud providers meet their obligations (for example, through adherence to generally accepted standards) becomes very relevant for Cloud users.

Despite security, privacy and trust concerns, the benefits offered by Cloud computing are too significant to ignore. Thus, rather than discarding cloud computing because of the risks involved, the Cloud participants should work to overcome them so that they can maximize the benefits (e.g. reduced cost, increased storage, flexibility, mobility, etc.). Cloud users should become Risk Intelligent by taking a proactive approach to managing risks and challenges in Privacy, Security and Trust. Risk will become an even more important part of doing business when adopting Cloud concepts.

Risk thus presents both opportunity and peril: poorly managed, it allows a security breach by a hacker or a disgruntled employee, exposing an organisation to potential loss and liability. Effectively addressed, it enables management to exploit e-channels and mobile offices and to achieve process efficiency gains and positive results. The Risk Intelligent C-suite should manage information security from the perspective of making money by taking intelligent risks, while avoiding the losses that come from failing to manage risk intelligently.



Nano strengthens barriers to counterfeiting

By providing non-reproducible technological features, nanotechnology-based developments are expected to offer a significant step forward in preventing the illicit copying of intellectual property and products. Ultimately, the implementation of these novel techniques should considerably reduce the tax revenue losses caused by counterfeiting and improve citizens’ safety and quality of life.

Holograms, tamper-evident closures, tags and markings, and RFID labels are the most widely known anti-counterfeiting technologies. The key limitation of these methods is that they can be copied. Innovations exploiting the intrinsic nature of nanomaterials to give items complex and unique ‘fingerprints’ result both in new approaches and in improvements to existing techniques.

Holography - Easily identifiable holograms, for example those showing the manufacturer’s logo, are primarily used as first-level identification devices. Two-dimensional nanoscale gratings, photopolymers, and luminescent nanoparticles can be utilized to provide an additional level of security for holograms.

Laser surface authentication - A laser is used to examine the surface roughness of an object. The complexity and uniqueness of the resulting surface roughness code are comparable to iris scans and fingerprints. The advantage of the technique is that surface roughness at the nanoscale cannot be replicated, so it offers products a much higher level of security than holograms and watermarks.

Radio frequency identification (RFID) - A form of automatic identification and data capture technology in which data stored on a tag is transferred via a radio frequency link; an RFID reader is used to extract this data from tags. New developments exploit nanoscale variations, naturally produced during the manufacturing process of RFIDs, that are unique to individual integrated circuits and can be verified during data transfer. This is known as a Physical Unclonable Function (PUF).

Nano barcodes - Three-dimensional polymer patterns on the order of tens of nanometres can be made on silicon substrates to provide a 3D nanoscale data encryption key, similar to a barcode. The advantages over conventional barcodes and markings are the difficulty of detecting their presence (covert marking) and of duplicating them. These can be applied to banknotes, security papers, art, jewellery, and gemstones.

SERS and quantum dot tags – Metal nanoparticles produce unique electromagnetic spectra (known as surface-enhanced Raman scattering, SERS), while certain semiconductor nanoparticles (known as quantum dots) fluoresce differently depending on their size and chemical composition. Both can be exploited as identification tools. They are difficult to reproduce owing to the vast number of possible combinations, and they offer a covert security feature, non-toxicity, and multi-functionality. These nanoscale tags can be applied in inks, adhesives, laminates, paper, packaging, textiles, glass, and other materials.

Nano composite tags – These consist of a materials-based pattern (with magnetic and/or optical features) that forms part of a label, tag, or embedded portion of an item. The nanometre-sized magnetic and optical features are generated randomly during manufacturing, constituting a unique ‘fingerprint’ that is read and stored in a central database. The result is a secure identity for an individual item that is prohibitively expensive and difficult to copy. This technology can be applied in the pharmaceutical, spare parts, fashion, and food and beverage industries. Incorporating encapsulated and functionalized (e.g., thermochromic) nanoparticles in labels is another promising solution based on the use of nanocomposites.


Extensions of Relational and Object oriented Database Systems

In this approach a relational or object-oriented database system is extended to support SGML/XML data management. The proposed SGML extensions included, for example, a system where SGML files were mapped to the O2 database management system, and the extension of operators of SQL to accommodate structured text. All current commercial database systems provide some XML support. Examples of commercial systems are Oracle’s XML SQL Utility and IBM’s DB2 XML Extender. For the sake of discussion, we consider IBM’s DB2 XML Extender as representative of the many systems following this approach.

Data model: When conventional database systems are used for XML, data structuring is systematic and explicitly defined by a database schema. The data model of the original system is typically extended to encompass XML data, but the extensions define simplified tree models rather than rich XML documents. The XML extensions are intended primarily to support the management of enterprise data, wrapped as elements and attributes in an XML document. A problem in using these systems is the need for a parallel understanding of two different kinds of data models.

Data definition: The extended systems require explicit definition of transformation of a DTD to the internal structures. XML elements are typically mapped to objects in object-oriented systems, but relational systems require more elaborate transformations to represent hierarchic and ordered structures in unordered tables. In the DB2 XML Extender the whole document can be stored either externally as a file or as a whole in a column of a table. Elements and attributes can also be stored separately inside tables, which can be accessed independently or used for selecting whole documents (as if the side tables were indexes). DTDs, which are stored in a special table, can be associated with XML documents and used to validate them.

Data manipulation: In relational extensions, whole documents and DTDs that are stored in tables can be accessed and manipulated through the SQL database language. As explained above, specific elements of XML data can be extracted when documents are loaded, maintained separately, and accessed directly through SQL. Support for accessing elements that have not been extracted as part of document loading is provided through limited XPath queries, and the DB2 XML Extender can be used together with DB2 UDB Text for full-text search. DB2 also provides document assembly via a function call that can be embedded in an SQL query.
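The storage scheme described above — whole documents in a column, with selected elements extracted into "side tables" that act like indexes — can be illustrated with a generic relational database. This sketch uses SQLite rather than DB2, and every table and column name is invented for the example:

```python
import sqlite3
import xml.etree.ElementTree as ET

doc = "<book><title>SGML Handbook</title><year>1990</year></book>"

con = sqlite3.connect(":memory:")
# The whole document is stored in a column of a table...
con.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, xml TEXT)")
# ...and one element is extracted into a separate "side table".
con.execute("CREATE TABLE side_title (doc_id INTEGER, title TEXT)")

cur = con.execute("INSERT INTO docs (xml) VALUES (?)", (doc,))
doc_id = cur.lastrowid
# Extraction happens at load time, as described for the DB2 approach.
title = ET.fromstring(doc).findtext("title")
con.execute("INSERT INTO side_title VALUES (?, ?)", (doc_id, title))

# The side table can now be queried directly through SQL, or used as an
# index for selecting whole documents.
row = con.execute(
    "SELECT d.xml FROM docs d JOIN side_title s ON s.doc_id = d.id "
    "WHERE s.title = ?", ("SGML Handbook",)).fetchone()
print(row[0])
```

Elements not extracted at load time would have to be reached by parsing the stored document (the role played by the limited XPath support mentioned above).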


Use of SOAP over HTTP

The SOAP binding requires that conformant applications support HTTP over TLS/SSL with a number of different bilateral authentication methods, such as Basic authentication over server-side SSL and certificate-backed authentication over server-side SSL. These methods are therefore always available to mitigate threats in cases where other lower-level protections are not available and the attacks listed above are considered significant threats.

This does not mean that use of HTTP over TLS with some form of bilateral authentication is mandatory. If an acceptable level of protection from the various risks can be arrived at through other means (for example, by an IPsec tunnel), full TLS with certificates is not required. However, in the majority of cases for SOAP over HTTP, using HTTP over TLS with bilateral authentication will be the appropriate choice.

The HTTP Authentication RFC describes possible attacks in the HTTP environment when basic or message-digest authentication schemes are used. Note, however, that the use of transport-level security (such as the SSL or TLS protocols beneath HTTP) only provides confidentiality and/or integrity and/or authentication for “one hop”. For models in which there may be intermediaries, or in which the assertions in question need to live over more than one hop, HTTP with TLS/SSL does not provide adequate security.


User login protocol

Initialization: Once the user has successfully logged into an account, the server places in the user’s computer a cookie that contains an authenticated record of the username, and possibly an expiration date. (“Authenticated” means that no party except the server is able to change the cookie data without being detected by the server. This can be ensured, for example, by adding a MAC that is computed using a key known only to the server.) Cookies of this type can be stored on several computers, as long as each of them was used by the user.
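The MAC-based scheme mentioned above might look like the following sketch. This is an illustration, not a specified format: the `username|expires|mac` layout and the hard-coded key are assumptions (a real server would load the key from secure storage).

```python
import hashlib
import hmac

SERVER_KEY = b"server-secret-key"  # known only to the server (placeholder)

def make_cookie(username, expires):
    """Build a cookie whose payload is bound to a MAC over that payload."""
    payload = f"{username}|{expires}"
    mac = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{mac}"

def verify_cookie(cookie):
    """Return the authenticated username, or None if the cookie was altered."""
    payload, _, mac = cookie.rpartition("|")
    good = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(mac, good):
        return payload.split("|")[0]
    return None

c = make_cookie("alice", "2026-01-01")
print(verify_cookie(c))                               # alice
print(verify_cookie(c.replace("alice", "mallory")))   # None
```

Because only the server holds `SERVER_KEY`, any change to the username or expiration date invalidates the MAC, which is exactly the "detected by the server" property the protocol requires.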

Login:

1. The user enters a username and a password. If the user’s computer contains a cookie stored by the login server, then the cookie is retrieved by the server.

2. The server checks whether the username is valid and whether the password is correct for this username.

3. If the username/password pair is correct, then

4. If the username/password pair is incorrect, then


TCP/Process Communication

In order to send a message, a process sets up its text in a buffer region in its own address space, inserts the requisite control information (described in the following list) in a transmit control block (TCB), and passes control to the TCP. The exact form of a TCB is not specified here, but it might take the form of a passed pointer, a pseudo-interrupt, or various other forms. To receive a message in its address space, a process sets up a receive buffer, inserts the requisite control information in a receive control block (RCB), and again passes control to the TCP.

Fig. 1. Conceptual TCB format.

In some simple systems, the buffer space may in fact be provided by the TCP. For simplicity we assume that a ring buffer is used by each process, but other structures (e.g., buffer chaining) are not ruled out. A possible format for the TCB is shown in Fig. 1. The TCB contains the information necessary to allow the TCP to extract and send the process data. Some of the information might be implicitly known, but we are not concerned with that level of detail. The various fields in the TCB are described as follows.

  1. Source Address: This is the full net/HOST/TCP/port address of the transmitter.
  2. Destination Address: This is the full net/HOST/TCP/port address of the receiver.
  3. Next Packet Sequence Number: This is the sequence number to be used for the next packet the TCP will transmit from this port.
  4. Current Buffer Size: This is the present size of the process transmit buffer.
  5. Next Write Position: This is the address of the next position in the buffer at which the process can place new data for transmission.
  6. Next Read Position: This is the address at which the TCP should begin reading to build the next segment for output.
  7. End Read Position: This is the address at which the TCP should halt transmission. Initially, fields 6 and 7 bound the message which the process wishes to transmit.
  8. Number of Re-transmissions/Maximum Re-transmissions: These fields enable the TCP to keep track of the number of times it has re-transmitted the data and could be omitted if the TCP is not to give up.
  9. Timeout/Flags: The timeout field specifies the delay after which unacknowledged data should be re-transmitted. The flag field is used for semaphores and other TCP/process synchronization status reporting, etc.
  10. Current Acknowledgment/Window: The current acknowledgment field identifies the first byte of data still unacknowledged by the destination TCP; the window field indicates how much data beyond that byte the destination TCP is prepared to accept.

The read and write positions move circularly around the transmit buffer, with the write position always to the left (modulo the buffer size) of the read position. The next packet sequence number should be constrained to be less than or equal to the sum of the current acknowledgment and window fields. In any event, the next sequence number should not exceed the sum of the current acknowledgment and half of the maximum possible sequence number (to avoid confusing the receiver’s duplicate detection algorithm). A possible buffer layout is shown in Fig. 2.


Fig. 2. Transmit buffer layout.
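The pointer movement and the sequence-number constraint can be sketched in Python. This is an illustrative model using the TCB field names from the list above, not the historical implementation; the byte-at-a-time copying is chosen for clarity.

```python
class TransmitBuffer:
    """Toy ring buffer with the Next Write / Next Read positions of the TCB."""

    def __init__(self, size):
        self.size = size
        self.buf = bytearray(size)
        self.next_write = 0  # where the process places new data
        self.next_read = 0   # where the TCP starts building the next segment
        self.used = 0

    def write(self, data):
        """Process side: place new data for transmission, wrapping circularly."""
        assert self.used + len(data) <= self.size, "buffer full"
        for b in data:
            self.buf[self.next_write] = b
            self.next_write = (self.next_write + 1) % self.size
        self.used += len(data)

    def read(self, n):
        """TCP side: extract up to n bytes for the next output segment."""
        n = min(n, self.used)
        out = bytearray()
        for _ in range(n):
            out.append(self.buf[self.next_read])
            self.next_read = (self.next_read + 1) % self.size
        self.used -= n
        return bytes(out)

def may_send(next_seq, current_ack, window):
    """Sequence-number constraint: next_seq <= current ack + window."""
    return next_seq <= current_ack + window

tb = TransmitBuffer(8)
tb.write(b"hello!")
print(tb.read(4))   # b'hell'
tb.write(b"xyz")    # wraps around the end of the ring
print(tb.read(5))   # b'o!xyz'
```

Note how the second `write` wraps past the end of the buffer while the read still returns the bytes in order, which is the circular behaviour described above.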

The RCB is substantially the same, except that the end read field is replaced by a partial segment checksum register, which permits the receiving TCP to compute and remember partial checksums in the event that a segment arrives in several packets. When the final packet of the segment arrives, the TCP can verify the checksum and, if successful, acknowledge the segment.
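The partial-checksum register might work as in the sketch below, which uses the classic 16-bit one's-complement Internet checksum as the running sum; the text does not fix a particular algorithm, so that choice is an assumption.

```python
def fold(partial, data):
    """Fold one packet's bytes into a running 16-bit one's-complement sum."""
    if len(data) % 2:
        data += b"\x00"  # pad an odd-length packet with a zero byte
    for i in range(0, len(data), 2):
        partial += (data[i] << 8) | data[i + 1]
        partial = (partial & 0xFFFF) + (partial >> 16)  # end-around carry
    return partial

# A segment arriving in two packets: the register persists between arrivals.
# (Both packets here end on 16-bit boundaries, so per-packet padding is safe.)
register = 0
register = fold(register, b"segment da")
register = fold(register, b"ta payload")

# Folding the whole segment at once yields the same sum, so the receiver
# can verify the segment checksum when the final packet arrives.
assert register == fold(0, b"segment data payload")
```

The end-around carry keeps the register within 16 bits after every packet, which is why only one small register per connection needs to be remembered between packet arrivals.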
