Posts Tagged ‘IT Solutions’

Profiling the TCP switch implementation in userspace

We measured the performance of the user-space TCP switch and observed that, on average, forwarding is fast, but the forwarding time fluctuates widely. Connection setup and teardown are also slow. The fluctuation occurs when the garbage collector runs to reclaim unused resources. We profiled each part of the system to figure out the reasons for this behavior. First, we measured the time taken by the functions that insert or delete an entry in the iptables classification table and that insert or delete a queue and a filter for a particular connection in the output scheduler. Each function call takes about 300 us, partly due to the communication overhead between kernel and user space. To set up and tear down a connection, we need to perform this transaction twice for iptables and twice for traffic control. This motivates implementing the TCP switch inside the Linux kernel.

The more important issue is to make forwarding performance more predictable (less variable). To find the cause of the fluctuation, we measured the thread context-switching time and the insertion, deletion, and lookup times of iptables in the kernel. One possible factor is the conflict between lookups of forwarding data and the operation of the admission controller and the garbage collector. The context-switching measurement program creates a given number of threads, each of which yields continuously; we then count the number of yields within a fixed interval. The test machine is a 1 GHz Pentium III PC. The process context-switching time is almost the same as the thread context-switching time.
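The measurement program itself is simple. The following is a minimal sketch of the same idea in Java (the original tool was presumably a small C/pthreads program); each thread yields continuously and the yield counts are tallied after a fixed interval. The thread count and measurement window are arbitrary illustrative values:

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the yield-counting measurement described above: N threads yield
// continuously, and we count how many yields occur within a fixed interval.
public class YieldBench {
    public static void main(String[] args) throws InterruptedException {
        final int numThreads = 8;        // illustrative value
        final long intervalMs = 1000;    // measurement window
        final AtomicBoolean stop = new AtomicBoolean(false);
        final AtomicLong yields = new AtomicLong();

        Thread[] workers = new Thread[numThreads];
        for (int i = 0; i < numThreads; i++) {
            workers[i] = new Thread(() -> {
                while (!stop.get()) {
                    Thread.yield();              // force a scheduling decision
                    yields.incrementAndGet();
                }
            });
            workers[i].start();
        }

        Thread.sleep(intervalMs);                // let the threads run
        stop.set(true);
        for (Thread t : workers) t.join();

        // Each yield corresponds to roughly one scheduling decision, so the
        // average switch cost is approximately interval / total yields.
        long total = yields.get();
        System.out.printf("%d yields in %d ms (~%.2f us per yield)%n",
                total, intervalMs, intervalMs * 1000.0 / total);
    }
}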

On average, the measured context-switch time is less than 1 us, although the number of context switches varies somewhat across threads. From this result, we conclude that thread context-switching time is not the major factor in the fluctuation of forwarding performance. Next, we examined the locking conflict between code that reads iptables and code that writes it. iptables uses a read-write lock, a spin lock that supports multiple readers and a single writer. A writer can acquire the lock only when no reader holds it, and while the writer holds it, no reader can acquire it. The measured lookup (read) time of iptables is 1-2 us, but inserting or deleting an entry takes about 100 us. When an entry is inserted or deleted, a new table is created with vmalloc, the modification is applied, the old table is copied into the new one with memcpy, and the old table is freed. This looks like inefficient table management, but it is a reasonable design given the usual usage of iptables: entries are mostly static (they do not change often), so copying the whole table is not a problem, and creating a new table while deleting the old one keeps the allocated memory compact. However, this is not suitable for the TCP switch, which inserts and deletes entries dynamically, and the latency of modifying iptables delays packet forwarding. When the garbage collector runs, it tries to delete inactive entries; because each deletion is slow, packet forwarding is delayed during this period. This slow modification of iptables explains the fluctuation in forwarding time, and it motivates replacing iptables with a data structure better suited to the TCP switch.
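To make the contention pattern concrete, here is a small Java sketch (not the kernel code) of the same copy-the-whole-table-on-update strategy behind a read-write lock: lookups take the read lock briefly, while every insert holds the write lock for the duration of a full table copy, stalling all readers, which is essentially the behavior described above for iptables:

import java.util.Arrays;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative analogue of the iptables behavior described above: reads are
// cheap, but every modification rebuilds the whole table while holding the
// write lock, so concurrent lookups stall for the duration of the copy.
public class CopyOnUpdateTable {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int[] entries = new int[0];

    public boolean contains(int key) {
        lock.readLock().lock();              // many readers may hold this at once
        try {
            for (int e : entries) if (e == key) return true;
            return false;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void insert(int key) {
        lock.writeLock().lock();             // blocks all readers until done
        try {
            int[] bigger = Arrays.copyOf(entries, entries.length + 1); // copy whole table
            bigger[entries.length] = key;
            entries = bigger;                // swap in the new table; old one is discarded
        } finally {
            lock.writeLock().unlock();
        }
    }
}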

 


Applying the Lessons of eXtreme Programming

 

This post looks at the benefits available from adopting XP-style unit testing on a project, and then moves on to identify useful lessons that can be learned from other XP (eXtreme Programming) practices. It concludes by asking questions about the nature of process improvement in software development and how we can make our software serve society.

Applying JUnit

JUnit is a deceptively simple testing framework that can create a major shift in your personal development process and in the enjoyment of programming. Prior to using JUnit, developers typically resist making changes late in a project “because something might break.” With JUnit, the worry associated with breaking something goes away. Yes, it might still break, but the tests will detect it. Because every method has a set of tests, it is easy to see which methods have been broken by the changes, and hence to make the necessary fix. So changes can be made late in a project with confidence, since the tests provide a safety net.

Lesson: Automated tests pay off by improving developer confidence in their code.

In order to create XP-style unit tests, however, developers need to make sure that their classes and methods are well designed. This means there is minimal coupling between the class being tested and the rest of the classes in the system. Since the test case subclass needs to be able to create an object in order to test it, the discipline of testing all methods forces developers to create classes with well-defined responsibilities.

Lesson: Requiring all methods to have unit tests forces developers to create better designs.

It is interesting to compare classes that have unit tests with those that do not. By following the discipline of unit tests, methods tend to be smaller but more numerous. The implementations tend to be simpler, especially when the XP practice of “write the unit tests first” is followed. The big difference is that the really long methods full of nested and twisted conditional code do not exist in the unit-tested code. That kind of code is impossible to write unit tests for, so it ends up being refactored into a better design.

Lesson: Design for testability is easier if you design and implement the tests first.

Adopting XP-style unit testing also drastically alters the minute-by-minute development process. We write a test, then compile and run it (after adding just enough implementation to make it run) so that the test will fail. This may seem odd: don’t we want the tests to pass? Yes, we do, but by seeing them fail first we get some assurance that the test is valid. Now that we have a failing test, we can implement the body of the method and run the test again. This time it will probably pass, so it is time to run all the other unit tests to see if the latest changes broke anything else. Once we know that it works, we can tidy up the code, refactor as needed, and possibly optimize this correct implementation. Having done that, we are ready to restart the cycle by writing the next unit test.

Lesson: Make it Run, Make it Right, Make it Fast (but only if you need to).
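As a concrete illustration of the cycle just described, here is a small JUnit 3.x-style example in which the test is written before the code it exercises; the Money class and its add method are hypothetical. With only the stub in place the test fails, and once the real body is filled in it passes and joins the safety net:

import junit.framework.TestCase;

// Written first, so it fails until Money.add is really implemented.
public class MoneyTest extends TestCase {

    public void testAddTwoAmounts() {
        Money twelve = new Money(12);
        Money fourteen = new Money(14);
        Money sum = twelve.add(fourteen);
        assertEquals(26, sum.amount());   // fails while add() is just a stub
    }
}

// Minimal class under test, showing the two steps of the cycle.
class Money {
    private final int amount;

    Money(int amount) { this.amount = amount; }

    int amount() { return amount; }

    Money add(Money other) {
        // Step 1: return new Money(0);   // stub: compile, run, watch the test fail
        // Step 2: real body that makes this test (and the rest) pass
        return new Money(amount + other.amount);
    }
}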

Early successes with JUnit encouraged me to experiment with other XP practices. Just having the unit tests in place made programming fun again, and if that practice was valuable, maybe the others were as well.


The Open Digital Library

Technology Platform Requirements

By its nature, digital collection development requires extensive use of technological resources. In the early days of digital library development, when collections were typically small and experimental, a wide variety of hardware and software was used. Today, the leading digital library developers are putting substantial collections online. Some of these collections include millions of digital objects, and collections are being planned that will require storage measured in petabytes—the equivalent of more than 50,000 desktop computers with 20-gigabyte hard drives. As digital libraries scale in size and functionality, it is critical for the underlying technology platform to deliver the required performance and reliability. Many digital libraries are considered “mission critical” to their institutions, and patrons expect high service levels, which means that downtime and poor response times are not tolerable. Moreover, because cost is a foremost concern, scalability and efficiency with a low total cost of ownership are also key requirements. This type of digital library implementation requires a scalable, enterprise-level technology solution with built-in reliability, availability, and serviceability (RAS) features.

Storage capacity also must be scalable to adapt to rapid growth in demand, and must be adapted to the mix of media types that may be stored in a digital library, such as:

• Text, which is relatively compact.

• Graphics, which can be data-intensive.

• Audio, which is highly dynamic.

• Video, which is highly dynamic and data intensive.

Storage capacity should be expandable in economical increments and should not require redesign or re-engineering of the system as requirements grow. An open systems architecture provides both a robust platform and the best selection of digital media management solutions and development tools. The inherent reliability and scalability of open platforms have made them the most popular choice of IT professionals for Internet computing. This computing model features an architecture that is oriented entirely around Internet protocols and stresses the role of websites for a vast and diverse array of services that follow a utility model.

Evolution to Web Services

The digital library of the future will deliver “smart media services”; that is, Web services that can match “media content” to user “context” in a way that provides a customized, personalized experience. Media content is digital content that includes elements of interactivity. Context includes such information as the identity and location of the user.

Several key technologies must interact to allow Web services to work in this way. Extensible Markup Language (XML) and Standard Generalized Markup Language (SGML) are important standards influencing our ability to create broadly interoperable Web-based applications. SGML is an international standard for text markup systems; it is very large and complex, and has been used to describe thousands of different document types in many fields of human activity. XML is a standard for describing other languages, itself defined as a simplified subset of SGML. XML allows the design of customized markup languages for limitless types of documents, providing a very flexible and simple way to write Web-based applications (a small example of a custom vocabulary appears after the list below). This differs from HTML, which is a single, predefined markup language, itself an application of SGML. The primary standards powering Web services today are XML-based. These include:

• Simple Object Access Protocol (SOAP).

• Universal Description, Discovery and Integration (UDDI).

• Web Services Description Language (WSDL).

• Electronic Business XML (ebXML).

These standards are emerging as the basis for the new Web services model. While not all are fully defined standards, they are maturing quickly with broad industry support.
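To make the contrast with HTML concrete, the short sketch below invents a tiny custom vocabulary (a hypothetical book record, not one of the standards above) and reads it with the standard Java XML parser; the point is simply that XML lets an application define whatever element names it needs:

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// A made-up markup vocabulary for a catalogue record, parsed with the
// standard DOM API: XML imposes no fixed tag set, unlike HTML.
public class CustomMarkupDemo {
    public static void main(String[] args) throws Exception {
        String record =
            "<book id=\"b1\">" +
            "  <title>The Open Digital Library</title>" +
            "  <format>text/xml</format>" +
            "</book>";

        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(
            new ByteArrayInputStream(record.getBytes(StandardCharsets.UTF_8)));

        String title = doc.getElementsByTagName("title").item(0).getTextContent();
        System.out.println("Parsed custom element <title>: " + title);
    }
}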

 


Stronger Password Authentication Using Browser Extensions

Keystream monitor

A natural idea for anyone who is trying to implement web password hashing is a keystream monitor that detects unsafe user behavior. This defense would consist of a recording component and a monitor component. The recording component records all passwords that the user types while the extension is in password mode and stores a one-way hash of these passwords on disk. The monitor component monitors the entire keyboard key stream for a consecutive sequence of keystrokes that matches one of the user’s passwords. If such a sequence is keyed while the extension is not in password mode, the user is alerted.
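A minimal sketch of the monitor component in Java is shown below (the real extension logic would be browser-side code; this only illustrates the matching idea). Recorded passwords are kept only as SHA-256 hashes, and each keystroke triggers a check of the recent keystream suffixes against that set; the window size and the alert handling are illustrative assumptions:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Sketch of the keystream monitor described above: store one-way hashes of
// recorded passwords and alert if a recently typed suffix matches one of them
// while the extension is not in password mode.
public class KeystreamMonitor {
    private static final int MAX_PASSWORD_LEN = 32;   // illustrative bound
    private final Set<String> passwordHashes = new HashSet<>();
    private final Deque<Character> recent = new ArrayDeque<>();

    public void recordPassword(String password) {     // recording component
        passwordHashes.add(sha256(password));
    }

    public void onKeystroke(char c, boolean inPasswordMode) {
        recent.addLast(c);
        if (recent.size() > MAX_PASSWORD_LEN) recent.removeFirst();
        if (inPasswordMode) return;                    // only monitor outside password mode

        // Check every suffix of the recent keystream against the stored hashes.
        StringBuilder window = new StringBuilder();
        for (char ch : recent) window.append(ch);
        for (int start = 0; start < window.length(); start++) {
            if (passwordHashes.contains(sha256(window.substring(start)))) {
                System.err.println("Warning: a protected password was typed outside password mode");
                return;
            }
        }
    }

    private static String sha256(String s) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : d) hex.append(String.format("%02x", b & 0xff));
            return hex.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}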

We do not use a keystream monitor in PwdHash, but this feature might be useful for an extension that automatically enables password mode when a password field is focused, rather than relying on the user to press the password key or type the password prefix. However, this approach suffers from several limitations. The most severe is that the keystream monitor does not defend against an online mock password field: by the time the monitor detects that a password has been entered, it is too late, since the phisher has already obtained all but the last character of the user’s password. Another problem is that storing hashes of user passwords on disk facilitates an offline dictionary attack if the user’s machine is infiltrated; however, the same is true of the browser’s auto-complete password database. Finally, novice users tend to choose poor passwords that might occur naturally in the keystream when the extension is not in password mode. Although the threat of constant warnings might encourage users to choose unique and unusual passwords, excessive false alarms could also cause them to disregard monitor warnings.


Privacy and accountability in database systems

Databases that preserve a historical record of activities and data offer the important benefit of system accountability: past events can be analyzed to detect breaches and maintain data quality. But the retention of history can also pose a threat to privacy. System designers need to balance privacy and accountability carefully by controlling how and when data is retained by the system and who is able to recover and analyze it. This post describes the technical challenges in enhancing database systems so that they can securely manage history: first, assessing the unintended retention of data in existing database systems that can threaten privacy; second, redesigning system components to avoid this unintended retention; and third, developing new system features to support accountability when it is desired.

Computer forensics is an emerging field that has studied the recovery of data from file systems and the unintended retention of data by applications such as web browsers and document files. Forensic tools like The Sleuth Kit and EnCase Forensic are commonly used by investigators to recover data from computer systems. These tools can sometimes interpret common file types but, to our knowledge, none provides support for the analysis of database files. Forensic analysts typically have unrestricted access to storage on disk. We take as our threat model adversaries with such access, since this models the capabilities of system administrators, a hacker who has gained privileges on the system, or an adversary who has breached physical security. We also note that database storage is increasingly embedded into a wide range of common applications for persistence. Embedded database libraries like BerkeleyDB and SQLite are used as the underlying storage mechanism for email clients, web browsers, LDAP implementations, and Google Desktop. For example, Apple Mail.app uses an SQLite database file to support searches on subject, recipient, and sender (stored as ~/Library/Mail/Envelope Index). Recently, Mozilla has adopted SQLite as a unified storage model for all its applications: in Firefox 2.0, remote sites can store data that persists across sessions in an SQLite database as a more sophisticated replacement for cookies. The forensic analysis of such embedded storage is particularly interesting because it affects everyday users of desktop applications and because embedded database storage is harder to protect from such investigation.

Forensic analysis can be applied to various components of a database system, and it reveals not only data currently active in the database, but also previously deleted data and historical information about operations performed on the system. Record storage in databases contains data that has been logically deleted but not destroyed. Indexes also contain such deleted values and, in addition, may reveal through their structure clues about the history of operations that led to their current state. Naturally, the transaction log contains a wealth of forensic information, since it often includes before- and after-images of each database update. Other sources of forensic information include temporary relations (often written to disk for large sort operations), the database catalog, and even hidden tuple identifiers that may reveal the order in which tuples were created. The goal here is to understand the magnitude of data retention and to measure (or bound) the expected lifetime of data.
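A simple way to see this retention for yourself is to scan a raw database file for printable text, much like the Unix strings utility; logically deleted records often still appear in such a scan even though the database no longer reports them. The sketch below does exactly that; the file path and minimum run length are illustrative:

import java.nio.file.Files;
import java.nio.file.Paths;

// Crude forensic scan: print printable-ASCII runs found in a raw database
// file. Deleted-but-not-destroyed records frequently remain visible this way.
public class RawFileStrings {
    public static void main(String[] args) throws Exception {
        String path = args.length > 0 ? args[0] : "example.db";  // illustrative path
        int minRun = 6;                                          // illustrative threshold
        byte[] data = Files.readAllBytes(Paths.get(path));

        StringBuilder run = new StringBuilder();
        for (byte b : data) {
            if (b >= 0x20 && b <= 0x7e) {            // printable ASCII
                run.append((char) b);
            } else {
                if (run.length() >= minRun) System.out.println(run);
                run.setLength(0);
            }
        }
        if (run.length() >= minRun) System.out.println(run);
    }
}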

As a practical matter, encrypted storage is not widely used. In databases, encryption often introduces unacceptable performance costs. In addition, forensic investigators or adversaries may recover cryptographic keys because the keys are shared by many employees, easily subpoenaed, or stored on disk where they can be recovered.


Metadata for digital libraries: state of the art and future directions

At a time when digitization technology has become well established in library operations, the need for a degree of standardization in metadata practices has become more acute, in order to give digital libraries the degree of interoperability long established in traditional libraries. The complex metadata requirements of digital objects, which include descriptive, administrative and structural metadata, have so far militated against the emergence of a single standard. However, a set of existing standards, all based on XML architectures, can be combined to produce a coherent, integrated metadata strategy.

An overall framework for a digital object’s metadata can be provided by either METS or DIDL, although the wider acceptance of the former within the library community makes it the preferred choice. Descriptive metadata can be handled by either Dublin Core or the more sophisticated MODS standard. Technical metadata, which depends on the types of files that make up a digital object, is covered by such standards as MIX (still images), AUDIOMD (audio files), VIDEOMD or PBCORE (video) and TEI Headers (texts). Rights management may be handled by the METS Rights schema or by more complex schemes such as XrML or ODRL. Preservation metadata is best handled by the four schemas that make up the PREMIS standard. Integrating these standards using the XML namespace mechanism is technically straightforward, although some problems can arise with namespaces that are defined with different URIs, or as a result of duplication and consequent redundancy between schemas; these are best resolved by best-practice guidelines, several of which are currently under construction.

The next ten years are likely to see further metadata integration, probably with the consolidation of these multiple standards into a single schema. The digital library community will also work towards firmer standards for metadata content (analogous to AACR2), and software developers will increasingly adopt these standards. The digital library user will benefit from developments in enhanced federated searching and consolidated digital collections. The same developments are likely to take place in the archives and museums sectors, although the different metadata traditions that apply there are likely to make the form they take somewhat different.

The combined benefits of the shared XML platform, and the fact that these standards have already proved themselves in major projects, make them the best strategic choices for digital libraries. Although their adoption in integrated environments is still at a relatively early stage, particularly among software developers, increasing community-wide use will make the production of digital collections easier by shifting resources from metadata creation to object creation, and will facilitate the adoption of service-oriented approaches to core infrastructures. The adoption of integrated metadata strategies should be pressed for at the highest managerial levels.


Fuzzy Logic Based Intelligent Negotiation Agent (FINA) in E-Commerce

The fuzzy logic based intelligent negotiation agent is able to interact autonomously and consequently save human labor in negotiations. The aim of modeling a negotiation agent is to reach mutual agreement efficiently and intelligently. The negotiation agent is able to negotiate with other such agents, over various sets of issues, on behalf of the real-world parties it represents, i.e. it can handle multi-issue negotiation.

The reasoning model of the negotiation agent has been partially implemented in C# on Microsoft .NET. The reliability and flexibility of the reasoning model are then evaluated. The results show that the performance of the proposed agent model is acceptable for negotiation parties to achieve mutual benefits.

Software agent technology is widely used in agent-based e-Commerce. These software agents have a certain degree of intelligence, i.e. they can make their own decisions. The agents interact with other agents to achieve certain goals. However, software agents cannot directly control other agents, because every agent is an independent decision maker, so negotiation becomes the necessary method for reaching mutual agreement between agents. This post focuses on modeling multi-issue, one-to-one negotiation agents for a third-party-driven virtual marketplace. We consider one-to-one negotiation because it is characteristic of individual negotiations and because it allows cooperative negotiation, which is not suitable for many-to-many, auction-based negotiations.

When building autonomous negotiation agents capable of flexible and sophisticated negotiation, three broad areas need to be considered:

Negotiation protocols – the set of rules which govern the interaction

Negotiation issues – the range of issues over which agreement must be reached

Agent reasoning models – the models the agents employ to act in line with the negotiation protocol in order to achieve their negotiation objectives

This reasoning model targets the negotiation process itself. The matching and handshaking of the pre-negotiation process has been addressed in several papers. We assume that the buyer agent and vendor agent have already roughly matched their similarity and start negotiating over the issues on which they have not yet reached agreement. In a given round of negotiation, the negotiation agent can pre-prepare a counter offer for the next round; the counter offer is generated by the new offer generation engine. Both the incoming offer from the opponent negotiation agent and the counter offer are sent to the offer evaluation block, which analyzes the offers and calculates the degree of satisfaction (the agent’s acceptance) for the incoming offer and the counter offer. The result is scaled to the range 0 to 100. Finally, the decision making block makes the decision: accept the current incoming offer, reject it, or send the counter offer.
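The actual FINA reasoning model uses fuzzy inference and is implemented in C#; the Java sketch below is only a simplified stand-in for the offer evaluation and decision-making blocks just described. It scores each issue with a triangular membership function around the agent's preferred value, combines the scores into a 0-100 degree of satisfaction, and then accepts, counters, or rejects. All issue values, weights, and thresholds are illustrative assumptions:

// Simplified stand-in for the offer evaluation and decision blocks described
// above (the real FINA model uses fuzzy inference rules, not this formula).
public class OfferEvaluator {

    enum Decision { ACCEPT, COUNTER_OFFER, REJECT }

    // Triangular membership: 1.0 at the preferred value, falling to 0.0
    // once the offer is `tolerance` away from it.
    static double satisfaction(double offered, double preferred, double tolerance) {
        double distance = Math.abs(offered - preferred);
        return Math.max(0.0, 1.0 - distance / tolerance);
    }

    // Weighted degree of satisfaction for a multi-issue offer, scaled to 0..100.
    static double degreeOfSatisfaction(double[] offer, double[] preferred,
                                       double[] tolerance, double[] weight) {
        double total = 0, weightSum = 0;
        for (int i = 0; i < offer.length; i++) {
            total += weight[i] * satisfaction(offer[i], preferred[i], tolerance[i]);
            weightSum += weight[i];
        }
        return 100.0 * total / weightSum;
    }

    static Decision decide(double incomingScore, double counterScore,
                           double acceptThreshold, double walkAwayThreshold) {
        if (incomingScore >= acceptThreshold || incomingScore >= counterScore) {
            return Decision.ACCEPT;               // incoming offer is good enough
        } else if (incomingScore >= walkAwayThreshold) {
            return Decision.COUNTER_OFFER;        // keep negotiating
        }
        return Decision.REJECT;
    }

    public static void main(String[] args) {
        // Two illustrative issues: price and delivery time.
        double[] incoming  = {105.0, 10.0};
        double[] counter   = { 95.0,  7.0};
        double[] preferred = { 90.0,  5.0};
        double[] tolerance = { 30.0, 10.0};
        double[] weight    = { 0.7,  0.3};

        double in  = degreeOfSatisfaction(incoming, preferred, tolerance, weight);
        double out = degreeOfSatisfaction(counter,  preferred, tolerance, weight);
        System.out.printf("incoming=%.1f counter=%.1f -> %s%n",
                in, out, decide(in, out, 80.0, 30.0));
    }
}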


Sedna: A Native XML DBMS

Sedna is an XML database system. It implements XQuery and its data model, exploiting techniques developed specifically for this language.

Sedna is designed with two main goals in mind. First, it should be a full-featured database system, which requires support for all traditional database services such as external memory management, query and update facilities, concurrency control, and query optimization. Second, it should provide a run-time environment for XML-data-intensive applications, which involves tight integration of database management functionality with that of a programming language. In developing Sedna, we decided not to adopt any existing database system: instead of building a superstructure upon an existing system, we built a native system from scratch. This took more time and effort, but it gave us more freedom in making design decisions and allowed us to avoid the undesirable run-time overheads that result from interfacing with the data model of an underlying database system.

We take the XQuery 1.0 language and its data model as the basis for our implementation. In order to support updates, we extend XQuery with an update language named XUpdate. Sedna is written in Scheme and C++: static query analysis and optimization are written in Scheme, while the parser, executor, and memory and transaction management are written in C++. The implementation platform is Windows.


Middleware Layers and R&D Efforts

Just as networking protocol stacks can be decomposed into multiple layers, middleware can also be decomposed into multiple layers:

1. Host Infrastructure Middleware

2. Distribution Middleware

3. Common Middleware Services

4. Domain-Specific Middleware Services

Each of these middleware layers is described below, along with a summary of key R&D efforts at each layer that are helping to evolve the ability of middleware to meet the stringent QoS demands of DRE (distributed real-time and embedded) systems.

Host infrastructure middleware encapsulates and enhances native OS communication and concurrency mechanisms to create portable and reusable network programming components, such as reactors, acceptor-connectors, monitor objects, active objects, and component configurators. These components abstract away the accidental incompatibilities of individual operating systems and help eliminate many tedious, error-prone, and non-portable aspects of developing and maintaining networked applications via low-level OS programming APIs, such as Sockets or POSIX threads. An example of R&D at this layer is the OVM virtual machine, which is written entirely in Java and whose architecture emphasizes customizability and pluggable components. Its implementation strives to maintain a balance between performance and flexibility, allowing users to customize the implementation of operations such as message dispatch, synchronization, and field access.
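For instance, a reactor in its simplest form can be sketched with Java NIO as below: a Selector demultiplexes readiness events and dispatches them to handlers, which is the kind of portable wrapper that host infrastructure middleware such as ACE provides in a far more complete form. The port number and echo behavior are illustrative:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;

// Minimal reactor-style event loop: one Selector demultiplexes accept/read
// readiness events and dispatches them, hiding the raw socket details.
public class TinyReactor {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090));       // illustrative port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();                          // wait for events
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {               // connection handler
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {          // data handler: echo back
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);
                    if (n < 0) { client.close(); continue; }
                    buf.flip();
                    client.write(buf);
                }
            }
            selector.selectedKeys().clear();
        }
    }
}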

Distribution middleware defines higher-level distributed programming models whose reusable APIs and mechanisms automate and extend the native OS network programming capabilities encapsulated by host infrastructure middleware. Distribution middleware enables developers to program distributed applications much like stand-alone applications, i.e., by invoking operations on target objects without hard-coding dependencies on their location, programming language, OS platform, communication protocols and interconnects, or hardware characteristics. At the heart of distribution middleware are QoS-enabled object request brokers (ORBs), such as CORBA, COM+, and Java RMI. These ORBs allow objects to interoperate across networks regardless of the language in which they were written or the OS platform on which they are deployed; a brief Java RMI sketch follows the list below.

Such QoS-enabled ORBs enable DRE applications to reserve and manage:

• Processor resources via thread pools, priority mechanisms, intra-process mutexes, and a global scheduling service for real-time systems with fixed priorities

• Communication resources via protocol properties and explicit bindings to server objects using priority bands and private connections

• Memory resources via buffering requests in queues and bounding the size of thread pools.
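As a small illustration of the location transparency mentioned above, the Java RMI fragment below defines a remote interface and a client that invokes it with ordinary call syntax; the service name, host, and the server that would export the implementation are assumed to exist and are purely illustrative:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// The remote interface: callers code against this, not against sockets.
interface QuoteService extends Remote {
    String quoteOfTheDay() throws RemoteException;
}

// Client side: look up a stub and invoke it like an ordinary object.
// (A server elsewhere must export an implementation under the same name,
// e.g. via UnicastRemoteObject.exportObject(...) and registry.rebind(...).)
public class QuoteClient {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.getRegistry("server.example.org", 1099);
        QuoteService quotes = (QuoteService) registry.lookup("QuoteService");
        System.out.println(quotes.quoteOfTheDay());   // remote call, local syntax
    }
}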

Common middleware services augment distribution middleware by defining higher-level, domain-independent components that allow application developers to concentrate on programming application logic, without having to write the “plumbing” code required to develop distributed applications using lower-level middleware features directly. Whereas distribution middleware focuses largely on managing end-system resources in support of an object-oriented distributed programming model, common middleware services focus on allocating, scheduling, and coordinating various end-to-end resources throughout a distributed system using a component programming and scripting model. Developers can reuse these services to manage global resources and perform recurring distribution tasks, such as event notification, logging, persistence, real-time scheduling, fault tolerance, and transactions, that would otherwise be implemented in an ad hoc manner by each application or integrator.

The QuO architecture decouples DRE middleware and applications along the following two dimensions:

a) Functional paths, which are flows of information between client and remote server applications. In distributed systems, middleware ensures that this information is exchanged efficiently, predictably, scalably, dependably, and securely between remote peers. The information itself is largely application-specific and determined by the functionality being provided (hence the term “functional path”).

b) QoS paths, which are responsible for determining how well the functional interactions behave end-to-end with respect to key DRE system QoS properties, such as:

1. How and when resources are committed to client/server interactions at multiple levels of DRE systems

2. The proper application and system behavior if available resources do not satisfy the expected resources

3. The failure detection and recovery strategies necessary to meet end-to-end dependability requirements.

The QuO middleware is responsible for collecting, organizing, and disseminating QoS-related meta-information needed to monitor and manage how well the functional interactions occur at multiple levels of DRE systems. It also enables the adaptive and reflective decision-making needed to support non-functional QoS properties robustly in the face of rapidly changing application requirements and environmental conditions, such as local failures, transient overloads, and dynamic functional or QoS reconfigurations.

Domain-specific middleware services are tailored to the requirements of particular DRE system domains, such as avionics mission computing, radar processing, online financial trading, or distributed process control. Unlike the previous three middleware layers—which provide broadly reusable “horizontal” mechanisms and services—domain-specific middleware services are targeted at vertical markets. From both a COTS and an R&D perspective, domain-specific services are the least mature of the middleware layers, due in part to the historical lack of the distribution middleware and common middleware service standards needed to provide a stable base upon which to create domain-specific middleware services. Since they embody knowledge of a domain, however, domain-specific middleware services have the most potential to increase quality and to decrease the cycle time and effort that integrators require to develop particular classes of DRE systems.

The domain-specific middleware services in Bold Stroke are layered upon COTS processors (PowerPC), network interconnects (VME), operating systems (VxWorks), infrastructure middleware (ACE), distribution middleware (TAO), and common middleware services (QuO and the CORBA Event Service).


Transactional Memory Coherence and Consistency

Transactional memory Coherence and Consistency (TCC) provides a model in which atomic transactions are always the basic unit of parallel work, communication, memory coherence, and memory reference consistency. TCC greatly simplifies parallel software by eliminating the need for synchronization using conventional locks and semaphores, along with their complexities.

TCC hardware must combine all writes from each transaction region in a program into a single packet and broadcast this packet to the permanent shared memory state atomically as a large block. This simplifies the coherence hardware because it reduces the need for small, low-latency messages and completely eliminates the need for conventional snoopy cache coherence protocols, since multiple speculatively written versions of a cache line may safely coexist within the system. Meanwhile, automatic, hardware-controlled rollback of speculative transactions resolves any correctness violations that occur when several processors attempt to read and write the same data simultaneously. The cost of this simplified scheme is higher interprocessor bandwidth.

Processors in the TCC model continually execute speculative transactions. A transaction is a sequence of instructions that is guaranteed to execute and complete only as an atomic unit. Each transaction produces a block of writes, called the write state, which is committed to shared memory only as an atomic unit after the transaction completes execution. Once the transaction is complete, the hardware must arbitrate system-wide for permission to commit its writes. After this permission is granted, the processor can take advantage of high system interconnect bandwidth to simply broadcast all writes for the entire transaction out as one large packet to the rest of the system. The broadcast can be over an unordered interconnect, with individual stores separated and reordered, as long as stores from different commits are not reordered or overlapped. Snooping by other processors on these store packets maintains coherence in the system and allows them to detect when they have used data that has subsequently been modified by another transaction and must roll back — a dependence violation. Combining all writes from the entire transaction together minimizes the latency sensitivity of this scheme, because fewer interprocessor messages and arbitrations are required, and because flushing out the write state is a one-way operation. At the same time, since we only need to control the sequencing between entire transactions, instead of between individual loads and stores, we can leverage the commit operation to provide inherent synchronization and a greatly simplified consistency protocol. This continual cycle of speculative buffering, broadcast, and potential violation allows us to replace conventional coherence and consistency protocols simultaneously:

Consistency:

Instead of imposing ordering rules between individual memory reference instructions, as most consistency models do, TCC just imposes a sequential ordering between transaction commits. This can drastically reduce the number of latency-sensitive arbitration and synchronization events required by low-level protocols in a typical multiprocessor system. As far as the global memory state and software are concerned, all memory references from a processor that commits earlier happened “before” all memory references from a processor that commits afterwards, even if the references actually executed in an interleaved fashion. A processor that reads data that is subsequently updated by another processor’s commit, before it can commit itself, is forced to violate and roll back in order to enforce this model. Interleaving between processors’ memory references is allowed only at transaction boundaries, greatly simplifying the process of writing programs that make fine-grained accesses to shared variables. In fact, by imposing the original sequential program’s transaction order on the transaction commits, the TCC system can effectively provide an illusion of uniprocessor execution to the sequence of memory references generated by parallel software.
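A rough software analogy of this commit ordering is sketched below: each worker buffers its writes locally during a transaction and publishes them to shared memory only while holding a global commit lock, so other threads observe whole transactions in commit order and never a partial interleaving. This is only an illustration of the ordering rule, not of the TCC hardware; all names are invented for the example:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Software analogy of TCC commit ordering: writes stay in a local buffer
// until commit, and commits are serialized by a single global arbiter lock,
// so shared memory only ever reflects whole transactions, in commit order.
public class TccOrderingSketch {
    static final Map<String, Integer> sharedMemory = new ConcurrentHashMap<>();
    static final ReentrantLock commitArbiter = new ReentrantLock();

    static class Transaction {
        private final Map<String, Integer> writeState = new HashMap<>();

        void write(String addr, int value) {        // buffered, not yet visible
            writeState.put(addr, value);
        }

        int read(String addr) {                     // read own writes first
            Integer local = writeState.get(addr);
            return local != null ? local : sharedMemory.getOrDefault(addr, 0);
        }

        void commit() {
            commitArbiter.lock();                   // system-wide arbitration
            try {
                sharedMemory.putAll(writeState);    // "broadcast" as one block
            } finally {
                commitArbiter.unlock();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            Transaction tx = new Transaction();
            tx.write("a", 1);
            tx.write("b", 1);
            tx.commit();                            // a and b become visible together
        });
        Thread t2 = new Thread(() -> {
            Transaction tx = new Transaction();
            tx.write("a", 2);
            tx.commit();
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(sharedMemory);           // reflects whole transactions only
    }
}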

Coherence:

Stores are buffered and kept within the processor node for the duration of the transaction in order to maintain the atomicity of the transaction. No conventional, MESI-style cache protocols are used to maintain lines in “shared” or “exclusive” states at any point in the system, so it is legal for many processor nodes to hold the same line simultaneously in either an unmodified or a speculatively modified form. At the end of each transaction, the broadcast notifies all other processors about what state has changed during the completing transaction. During this process, they perform conventional invalidation (if the commit packet contains only addresses) or update (if it contains addresses and data) to keep their cache state coherent. Simultaneously, they must determine whether they may have used shared data too early. If they have read any data modified by the committing transaction during their currently executing transaction, they are forced to restart and reload the correct data. This hardware mechanism protects against true data dependencies automatically, without requiring programmers to insert locks or related constructs. At the same time, data antidependencies are handled simply by the fact that later processors will eventually get their own turn to flush out data to memory. Until that point, their “later” results are not seen by transactions that commit earlier (avoiding WAR dependencies), and they are able to freely overwrite previously modified data in a clearly sequenced manner (handling WAW dependencies in a legal way). Effectively, the simple, sequentialized consistency model allows the coherence model to be greatly simplified as well.
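The dependence-violation check can be sketched in the same style: each transaction tracks the addresses it has read, and when another transaction's commit packet arrives, a non-empty intersection with that read set marks the transaction for rollback. Again, this is a software illustration of the rule the snooping hardware enforces, with all names invented for the example:

import java.util.HashSet;
import java.util.Set;

// Receiver-side sketch of TCC snooping: compare the addresses in a committed
// write packet against the local transaction's read set; any overlap means
// the transaction read data too early and must be rolled back and restarted.
public class TccSnoopSketch {
    private final Set<String> readSet = new HashSet<>();
    private boolean violated = false;

    void trackRead(String addr) {
        readSet.add(addr);                      // remember what this transaction consumed
    }

    // Called when another processor's commit packet is observed.
    void onCommitPacket(Set<String> committedAddresses) {
        for (String addr : committedAddresses) {
            if (readSet.contains(addr)) {       // true dependence violation
                violated = true;
                break;
            }
        }
    }

    boolean mustRollback() {
        return violated;                        // if true: discard write buffer, restart
    }

    public static void main(String[] args) {
        TccSnoopSketch tx = new TccSnoopSketch();
        tx.trackRead("a");
        tx.onCommitPacket(Set.of("a", "c"));    // another transaction committed "a"
        System.out.println("rollback needed: " + tx.mustRollback());
    }
}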
