Secure Sockets Layer Protocol

SSL is the secure communications protocol of choice for a large part of the Internet community. There are many applications of SSL in existence, since it is capable of securing any transmission over TCP. HTTP over SSL, or HTTPS, is a familiar application of SSL in e-commerce and password transactions.

According to the Internet Draft of the SSL Protocol, the point of the protocol “is to provide privacy and reliability between two communicating applications”. The draft further explains that three points combine to provide connection security. These points are:

a. Privacy – connection through encryption,

b. Identity authentication – identification through certificates, and

c. Reliability – dependable maintenance of a secure connection through message integrity checking.

The Internet Engineering Task Force (IETF) has created a similar protocol in an attempt to standardize SSL within the Internet community. Using a series of nine messages, the server authenticates itself to a client that is transmitting information. Though it is a good idea for the user to hold a digital certificate, it is not required for the SSL connection to be established. Keep the following scenario in mind, as it shows a common application of SSL: a user without a certificate wishes to check her e-mail on a web-based e-mail system. Since she has requested a secure connection from the e-mail web page, she expects to send her username and password to the e-mail site. The identification of the e-mail server to her current workstation is critical. To the e-mail server, though, it is not critical that the user has an identifying certificate on her machine, because she can check her e-mail from any computer. For this reason, SSL does not require a client certificate.

The need to send sensitive information over the Internet is increasing, and so is the necessity to secure information in transit through the Internet. A common application of SSL with a web system is an online store where a client machine is sending a request to a merchant’s server. In order to apply the SSL protocol to a web system, some requirements must be met. Since the SSL protocol is integrated into most web browsers, and those browsers are normally used to access web applications, no further configuration is required from the client’s side of the SSL connection. Configuration is relatively simple from the server side of the communication equation. First, the web server administrator must acquire a digital certificate. This can be obtained from a Certification Authority (CA) such as VeriSign or RSA Data Security. CAs require that certificates be renewed after a set length of time, as a mechanism for ensuring the identity of the owner of the application’s server.

The second requirement is the proper configuration of the web server to allow SSL connections. For example, the iPlanet Web Server can store multiple certificates for multiple sites on one web server. This capability allows administrators to prove the identity of each application hosted by the server, and allows application users to correctly identify each application separately. The third piece of the puzzle is not necessarily a requirement but a strong suggestion: add an SSL accelerator to the web server. SSL accelerators are PCI cards sold by several companies (Cisco, Broadcom, etc.) to speed up the processing required to encrypt information for secure communications. A balance is frequently struck between security and functionality, and this balance changes on a case-by-case basis. SSL connections do slow communications, mostly because of the exchange of keys and other information during the startup phase of the session. The use of public-key cryptography requires a “sizeable amount of information” to be passed between the client and server machines. Though there are several ways to mitigate this issue, the most commonly accepted strategy is to use an SSL accelerator.
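To make the server-side configuration described above concrete, here is a minimal sketch of an SSL-enabled service using Python's standard library. It is only an illustration (not an iPlanet or production web-server setup), and the certificate and key file names are placeholders for the files obtained from a CA.

```python
# Minimal sketch of an SSL-enabled service using Python's standard library.
# The certificate and key file names are placeholders for the files issued
# by a CA (or self-signed copies used for testing).
import http.server
import ssl

server = http.server.HTTPServer(("0.0.0.0", 443), http.server.SimpleHTTPRequestHandler)

# Load the server certificate chain and private key: the same material a
# production web server would be configured with.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server-cert.pem", keyfile="server-key.pem")

# Every accepted connection now performs the SSL handshake before any
# application data is exchanged.
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```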

How SSL Works

The four protocol layers of the SSL protocol (Record Layer, ChangeCipherSpec Protocol, Alert Protocol, and Handshake Protocol) encapsulate all communication between the client machine and the server.
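From the application's point of view, all four layers are exercised simply by wrapping a TCP socket. The sketch below opens a TLS-protected connection from the client side using Python's standard ssl module; the host name is a placeholder, not a real service, and the handshake, server authentication, and key establishment described above all happen inside wrap_socket.

```python
# Client-side sketch: open a TLS-protected connection with Python's ssl
# module.  "mail.example.com" is a placeholder host, not a real service.
import socket
import ssl

HOST, PORT = "mail.example.com", 443

# The default context verifies the server certificate against the system's
# trusted CAs and checks that the certificate matches the host name.
context = ssl.create_default_context()

with socket.create_connection((HOST, PORT)) as raw_sock:
    # wrap_socket runs the handshake: cipher negotiation, server
    # authentication via its certificate, and session key establishment.
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("protocol:", tls_sock.version())
        print("cipher:  ", tls_sock.cipher())
        # From here on, everything written to tls_sock is encrypted.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: mail.example.com\r\n\r\n")
        print(tls_sock.recv(200))
```

Note that, as described above, no client certificate is loaded: the server proves its identity, while the client remains anonymous at the SSL level.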


Secure Statistical Analysis of Distributed Databases

In addition to significant technical obstacles, not the least of which is poor data quality, proposals for large-scale integration of multiple databases have engendered significant public opposition. Indeed, the outcry has been so strong that some plans have been modified or even abandoned. The political opposition to “mining” distributed databases centers on deep, if not entirely precise, concerns about the privacy of database subjects and, to a lesser extent, database owners. The latter is an issue, for example, for databases of credit card transactions or airline ticket purchases. Integrating the data without protecting ownership could be problematic for all parties: the companies would be revealing who their customers are, and where a person is a customer would also be revealed.

For many analyses, however, it is not necessary actually to integrate the data. Instead, as we show using techniques from computer science known generically as secure multi-party computation, the database holders can share analysis-specific sufficient statistics anonymously, but in a way that the desired analysis can be performed in a principled manner. If the sole concern is protecting the source rather than the content of data elements, it is even possible to share the data themselves, in which case any analysis can be performed. The same need arises in non-security settings as well, especially scientific and policy investigations. For example, a regression analysis on integrated state databases about factors influencing student performance would be more insightful than individual analyses, or complementary to them. Yet another setting is proprietary data: pharmaceutical companies might all benefit, for example, from a statistical analysis of their combined chemical libraries, but do not wish to reveal which chemicals are in the libraries.

The barriers to integrating databases are numerous. One is confidentiality: the database holders (we term them “agencies”) almost always wish to protect the identities of their data subjects. Another is regulation: agencies such as the Census Bureau (Census) and the Bureau of Labor Statistics (BLS) are largely forbidden by law to share their data, even with each other, let alone with a trusted third party. A third is scale: despite advances in networking technology, there are few ways to move a terabyte of data from point A today to point B tomorrow.

The regression setting is important because of its prediction aspect; for example, vulnerable critical infrastructure components might be identified using a regression model. Here, linear regression is treated for “horizontally partitioned data”, together with two methods for secure data integration and an application to secure contingency tables. Various assumptions are possible about the participating parties, for example, whether they use “correct” values in the computations, follow computational protocols, or collude against one another. We consider the setting of agencies wishing to cooperate but to preserve the privacy of their individual databases. While each agency can “subtract” its own contribution from integrated computations, it should not be able to identify the other agencies’ contributions. Thus, for example, if data are pooled, an agency can of course recognize data elements that are not its own, but should not be able to determine which other agency owns them. In addition, we assume that the agencies are “semi-honest”: each follows the agreed-on computational protocols, but may retain the results of intermediate computations.
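As a concrete illustration of secure multi-party computation in the semi-honest setting, the toy sketch below implements the classic secure-summation idea: each agency only ever sees a masked running total, yet the final sum (for example, an entry of X'X or X'y needed for a pooled linear regression) is exact. It is a simplified illustration, not the exact protocol of this work, and the per-agency totals are invented.

```python
# Toy sketch of secure summation among semi-honest "agencies".  Agency 0
# adds a large random mask to its private value; the masked running total
# is passed from agency to agency, and agency 0 removes its mask at the
# end.  Summing analysis-specific sufficient statistics this way (e.g. the
# entries of X'X and X'y) is enough to fit a pooled linear regression
# without any agency revealing its own data.
import random

FIELD = 2 ** 64          # work modulo a large constant so partial sums look random

def secure_sum(private_values):
    """Sum the values without any agency seeing another agency's share."""
    mask = random.randrange(FIELD)                   # known only to agency 0
    running = (private_values[0] + mask) % FIELD
    for v in private_values[1:]:                     # each agency adds its value;
        running = (running + v) % FIELD              # it only sees a masked total
    return (running - mask) % FIELD                  # agency 0 removes the mask

# Hypothetical per-agency totals for one entry of X'y.
print(secure_sum([1520, 980, 2210]))                 # prints 4710
```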


Performance Evaluation of Web Proxy Cache Replacement Policies

The continued growth of the World-Wide Web and the emergence of new end-user technologies such as cable modems necessitate the use of proxy caches to reduce latency, network traffic and Web server loads. Here we analyze the importance of different Web proxy workload characteristics in making good cache replacement decisions. We evaluate workload characteristics such as object size, frequency of reference, and turnover in the active set of objects. Trace-driven simulation is used to evaluate the effectiveness of various replacement policies for Web proxy caches. The extended duration of the trace (117 million requests collected over five months) allows long-term side effects of replacement policies to be identified and quantified.

The World-Wide Web is based on the client-server model. Web browsers are used by people to access information that is available on the Web. Web servers provide the objects that are requested by the clients. Information on the format of Web requests and responses is available in the HTTP specification. All of the clients communicate with the proxy using the HTTP protocol. The proxy then communicates with the appropriate origin server using the protocol specified in the URL of the requested object (e.g., http, ftp, gopher).

When the proxy receives a request from a client, the proxy attempts to fulfill the request from among the objects stored in the proxy’s cache. If the requested object is found (a cache hit), the proxy can immediately respond to the client’s request. If the requested object is not found (a cache miss), the proxy then attempts to retrieve the object from another location, such as a peer or parent proxy cache or the origin server. Once a copy of the object has been retrieved, the proxy can complete its response to the client. If the requested object is cacheable (based on information provided by the origin server or determined from the URL), the proxy may decide to add a copy of the object to its cache. If the object is uncacheable (again determined from the URL or information from the origin server), the proxy should not store a copy in its cache.

Two common metrics for evaluating the performance of a Web proxy cache are hit rate and byte hit rate. The hit rate is the percentage of all requests that can be satisfied by searching the cache for a copy of the requested object. The byte hit rate represents the percentage of all data that is transferred directly from the cache rather than from the origin server. The results of our workload characterization study indicate that a trade-off exists between these two metrics. Our workload results show that most requests are for small objects, which suggests that the probability of achieving a high hit rate would be increased if the cache were used to store a large number of small objects. However, our workload results also revealed that a significant portion of the network traffic is caused by the transfer of very large objects. Thus, to achieve higher byte hit rates, a few larger objects must be cached at the expense of many smaller ones. Our characterization study also suggested that a wide-scale deployment of cable modems (or other high-bandwidth access technologies) may increase the number of large object transfers. One of the goals of this study is to determine how existing replacement policies perform under these changing workloads.

A proxy cache that is primarily intended to reduce response times for users should utilize a replacement policy that achieves high hit rates. In an environment where saving bandwidth on the shared external network is of utmost importance, the proxy cache should use a replacement policy that achieves high byte hit rates. A proxy cache could also utilize multiple replacement policies. For example, a replacement policy that achieves high hit rates could be used to manage the proxy’s memory cache in order to serve as many requests as quickly as possible and to avoid a disk I/O bottleneck. The proxy’s much larger disk cache could be managed with a policy that achieves higher byte hit rates, in order to reduce external network traffic.
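The sketch below is a toy trace-driven simulation that contrasts LRU with a simple size-aware policy (evict the largest cached object first) on a synthetic trace. It is meant only to illustrate the hit-rate versus byte-hit-rate trade-off discussed above; the request mix, object sizes, and cache capacity are invented and have no relation to the trace analyzed in this study.

```python
# Toy trace-driven simulation: LRU versus a size-aware policy that evicts
# the largest cached object first.  The synthetic trace mixes many small
# objects with a few very large ones; all sizes and counts are invented.
import random
from collections import OrderedDict

def simulate(trace, capacity, policy):
    """trace: list of (url, size) requests.  Returns (hit rate, byte hit rate)."""
    cache, used = OrderedDict(), 0          # url -> size, kept in recency order
    hits = hit_bytes = total = total_bytes = 0
    for url, size in trace:
        total += 1
        total_bytes += size
        if url in cache:
            hits += 1
            hit_bytes += size
            cache.move_to_end(url)          # refresh recency on a hit
            continue
        if size > capacity:
            continue                        # object larger than the whole cache
        while used + size > capacity:       # evict until the new object fits
            if policy == "lru":
                victim, vsize = cache.popitem(last=False)     # least recently used
            else:                                             # "size" policy
                victim = max(cache, key=cache.get)            # largest object
                vsize = cache.pop(victim)
            used -= vsize
        cache[url] = size
        used += size
    return hits / total, hit_bytes / total_bytes

random.seed(1)
trace = []
for _ in range(5000):                       # 90% small objects, 10% large ones
    if random.random() < 0.9:
        trace.append(("small-%d" % random.randrange(200), 20_000))      # ~20 KB
    else:
        trace.append(("big-%d" % random.randrange(10), 4_000_000))      # ~4 MB

for policy in ("lru", "size"):
    hr, bhr = simulate(trace, capacity=10_000_000, policy=policy)
    print("%-4s  hit rate %.2f   byte hit rate %.2f" % (policy, hr, bhr))
```

On a trace like this, the size-aware policy keeps many small objects and tends to win on hit rate, while LRU retains more of the large transfers and tends to do better on byte hit rate, mirroring the trade-off described above.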


Mapping between Relational Databases and OWL Ontologies

The R2O approach defines a declarative and extensible language (in XML) for describing a mapping between a given RDB and an OWL ontology or RDFS schema, so that tools can process this mapping and generate triples that correspond to the source RDB data. D2RQ is another bridging technology, in which SQL is used to describe the mapping information; this language is closer to the SQL level and is not as declarative as R2O. Both D2RQ and Virtuoso RDF Views allow instance data to be retrieved from the RDB on the fly during the execution of SPARQL queries over the RDF data store. The aim here is to demonstrate a very simple, standard SQL-based RDB-to-RDF/OWL mapping approach that is based on defining correspondences between the tables of the database and the classes of the ontology, as well as between table fields/links in the database and datatype/object properties in the ontology (with possible addition of filters and linked tables in the mapping definition), and then automatically generating SQL statements that produce the RDF triples corresponding to the source database data.

There is at least a conceptual possibility to create the mapping between the source RDB schema and the target OWL ontology by means of model transformations described in some transformation language (e.g. MOF QVT, ATL, or MOLA); however, typically these transformations are not supported on data in RDB or RDF formats and require the use of an intermediate format (a so-called model repository, such as EMF), which may not be feasible for large data sets. Our approach is therefore to go for a direct translation of RDB-stored data into the (conceptual) OWL ontology format, a translation that can be executed at the level of the DBMS.

We propose a bridging mechanism between relational databases and OWL ontologies. We assume that the ontology and the database have been developed separately. Most often the database is of legacy type, while the ontology reflects the semantic concerns regarding the data contents. Our approach is to make a mapping between these structures and store the mapping in meta-level relational tables. This allows us to use the relational database engine to process the mapping information and generate SQL statements that, when executed, create RDF/OWL-formatted data (RDF triples) describing instances of OWL classes, OWL datatype properties, and OWL object properties that correspond to the source RDB data. In the simplest case, an OWL class corresponds to an RDB table, an OWL datatype property corresponds to a table field, and an OWL object property corresponds to a foreign key. In real-life examples the mappings are not so straightforward.
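As a rough illustration of the idea (not the actual implementation), the Python sketch below takes a small class- and property-level mapping and builds, for each mapped table, a single SQL statement whose result rows are N-Triples. The table, column, and ontology names are hypothetical, and string concatenation is written with the standard || operator.

```python
# Rough illustration (not the actual implementation): from a small mapping
# definition, build one SQL statement per mapped table whose result rows
# are RDF triples in N-Triples form.  Table, column and ontology names are
# hypothetical; || is the standard SQL string-concatenation operator.
BASE = "http://example.org/data/"
ONT = "http://example.org/ontology#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

mapping = {
    "Person": {                                   # OWL class  <->  RDB table
        "table": "person",
        "key": "person_id",
        "datatype_props": {"hasName": "full_name", "hasEmail": "email"},
        "object_props": {"worksFor": ("company_id", "Company")},
    },
}

def triple_sql(cls, m):
    """Return a UNION ALL query yielding (subject, predicate, object) rows."""
    subj = f"'<{BASE}{cls}/' || {m['key']} || '>'"
    parts = [f"SELECT {subj} AS s, '<{RDF_TYPE}>' AS p, '<{ONT}{cls}>' AS o "
             f"FROM {m['table']}"]
    for prop, col in m["datatype_props"].items():          # table field -> datatype property
        parts.append(f"SELECT {subj}, '<{ONT}{prop}>', '\"' || {col} || '\"' "
                     f"FROM {m['table']} WHERE {col} IS NOT NULL")
    for prop, (fk, target) in m["object_props"].items():   # foreign key -> object property
        parts.append(f"SELECT {subj}, '<{ONT}{prop}>', '<{BASE}{target}/' || {fk} || '>' "
                     f"FROM {m['table']} WHERE {fk} IS NOT NULL")
    return "\nUNION ALL\n".join(parts)

for cls, m in mapping.items():
    print(triple_sql(cls, m))
```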

 


Heating Up with Cloud Computing

Servers can be sent to homes and office buildings and used as a primary heat source. We call this approach the Data Furnace, or DF. Data Furnaces have three advantages over traditional data centers: 1) a smaller carbon footprint, 2) reduced total cost of ownership per server, and 3) closer proximity to the users. From the home owner’s perspective, a DF is equivalent to a typical heating system: a metal cabinet is shipped to the home and added to the ductwork or hot water pipes. From a technical perspective, DFs create new opportunities for both lower cost and improved quality of service, if cloud computing applications can exploit the differences in cost structure and resource profile between Data Furnaces and conventional data centers.

Cloud computing is hot, literally. Electricity consumed by computers and other IT equipment has been skyrocketing in recent years and has become a substantial part of the global energy market. The emergence of cloud computing, online services, and digital media distribution has led to more computing tasks being offloaded to service providers and increasing demand on data center infrastructure. For this reason, it is not surprising that data center efficiency has been one of the focuses of cloud computing and of data center design and operation.

Physically, a computer server is a metal box that converts electricity into heat. The temperature of the exhaust air (usually around 40-50°C) is too low to regenerate electricity efficiently, but is perfect for heating purposes, including home/building space heating, clothes dryers, water heaters, and agriculture. We propose to replace electric resistive heating elements with silicon heating elements, thereby reducing the societal energy footprint by using electricity intended for heating to also perform computation. The energy budget allocated for heating would provide an ample energy supply for computing. For example, home heating alone constitutes about 6% of U.S. energy usage. By piggy-backing on only half of this energy, the IT industry could double in size without increasing its carbon footprint or its load on the power grid and generation systems. After years of development of cloud computing infrastructure, system management capabilities are maturing. Servers can be remotely re-imaged, re-purposed, and rebooted. Virtual machine encapsulation ensures a certain degree of isolation. Secure execution on untrusted devices is feasible. Sensor networks have put high physical security within reach. At the same time, computers are getting cheaper and network connectivity is getting faster, yet energy is becoming a scarce resource and its price is rising rapidly.

From a manageability and physical security point of view, the easiest adopters of this idea are office buildings and apartment complexes. A mid-sized data center (e.g., hundreds of kilowatts) can be hosted inside the building and the heat it generates can be circulated to heat the building. Dedicated networking and physical security infrastructure can be built around it, and a dedicated operator can be hired to manage one or more of them. Their operating cost will be similar to that of other urban data centers, and they can leverage the current trend toward sealed server containers that are replaced as a unit to save repair/replacement costs. As a thought-provoking exercise, we push this vision to the extreme in this paper. We investigate the feasibility of Data Furnaces, or DFs, which are micro data centers, on the order of 40 to 400 CPUs, that serve as the primary heat source for a single-family home. These micro data centers use the home broadband network to connect to the cloud, and can be used to host customer virtual machines or dedicated Internet services. They are integrated into the home heating system the same way as a conventional electrical furnace, using the same power system, ductwork, and circulation fan. Thus, DFs reduce the cost per server in comparison to conventional data centers by leveraging the home’s existing infrastructure and avoiding the cost of real estate and construction of new brick-and-mortar structures. Furthermore, they naturally co-locate computational power and storage close to the user population.
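A back-of-envelope calculation makes this scale plausible. The furnace output and per-server power figures below are our own rough assumptions, not numbers from the text, but they land in the same range as the micro data centers described above.

```python
# Back-of-envelope check on the scale: how many servers does it take to
# replace a home furnace?  Both power figures are rough assumptions of
# ours, not numbers from the text.
furnace_heat_kw = (10, 20)        # assumed heat demand of a single-family home
server_power_w = (250, 500)       # assumed dissipation of one loaded server

fewest = furnace_heat_kw[0] * 1000 // server_power_w[1]   # small home, hot servers
most = furnace_heat_kw[1] * 1000 // server_power_w[0]     # large home, modest servers
print(f"roughly {fewest} to {most} servers per home")     # ~20 to 80 servers
```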

DFs are managed remotely and do not require more physical maintenance than conventional furnaces. The cloud service operators may further incentivize the host families by providing free heat in exchange for occasional physical touches such as replacing air filters and, in extreme cases, turning servers on and off. The idea of bringing micro data centers close to the users is not new; earlier proposals include renting condos to host servers and using home routers as nano-datacenters for content caching. A quantum leap is achieved when this idea is scaled to the size of a furnace: at this scale, the micro data center can not only leverage existing residential infrastructure for power, networking, and air circulation, but it can also reuse the energy that would otherwise be consumed for home heating.


Performance Analysis of a Client Side Caching for Web Traffic

For Web users, congestion manifests itself in unacceptably long response times. One possible remedy to the latency problem is to use caching at the client, at the proxy server, or within the Internet. However, Web documents are becoming increasingly dynamic (i.e., have short lifetimes), which limits the potential benefit of caching. The performance of a Web caching system can be dramatically increased by integrating document prefetching (a.k.a. “proactive caching”) into its design. Although prefetching reduces the response time of a requested document, it also increases the network load, as some documents will be unnecessarily prefetched (due to the imprecision in the prediction algorithm). In this study, we analyze the confluence of the two effects through a tractable mathematical model that enables us to establish the conditions under which prefetching reduces the average response time of a requested document. The model accommodates both passive client and proxy caching along with prefetching. Our analysis is used to dynamically compute the “optimal” number of documents to prefetch in the subsequent client’s idle (think) period. In general, this optimal number is determined through a simple numerical procedure. Closed-form expressions for this optimal number are obtained for special yet important cases. Simulations are used to validate our analysis and study the interactions among various system parameters.

Caching is considered an effective approach for reducing the response time by storing copies of popular Web documents in a local cache, a proxy server cache close to the end user, or even within the Internet. However, the benefit of caching diminishes as Web documents become more dynamic. A cached document may be stale at the time of its request, given that most Web caching systems in use today are passive (i.e., documents are fetched or validated only when requested).

Prefetching addresses the limitations of passive caching. Prefetched documents may include hyperlinked documents that have not been requested yet as well as dynamic objects. Stale cached documents may also be updated through prefetching. In principle, a prefetching scheme requires predicting the documents that are most likely to be accessed in the near future and determining how many documents to prefetch. Most research on Web prefetching has focused on the prediction aspect. In many of these studies, a fixed-threshold-based approach is used, whereby a set of candidate files and their access probabilities are first determined; among these candidate files, those whose access probabilities exceed a certain prefetching threshold are prefetched. Other prefetching schemes involve prefetching a fixed number of popular documents. Another proposal is the Integration of Web Caching and Prefetching (IWCP) cache replacement policy, which considers both demand requests and prefetched documents for caching based on a normalized profit function. Other work focuses on prefetching pages of query results of search engines; the authors proposed three prefetching algorithms to be implemented at the proxy server: (1) the hit-rate-greedy algorithm, which greedily prefetches files so as to optimize the hit rate; (2) the bandwidth-greedy algorithm, which optimizes bandwidth consumption; and (3) the H/B-greedy algorithm, which optimizes the ratio between the hit rate and bandwidth consumption. The negative impact of prefetching on the average access time was not considered.

Numerous tools and products that support Web prefetching have been developed. Wcol prefetches embedded hyperlinks and images, with a configurable maximum number of prefetched objects. PeakJet2000 is similar to Wcol, with the difference that it prefetches objects only if the client has accessed the object before. NetAccelerator works like PeakJet2000, but does not use a separate cache for prefetching as PeakJet2000 does. Google’s Web accelerator collects user statistics, and based on these statistics it decides what links to prefetch. It can also take a prefetching action based on the user’s mouse movements. Web browsers based on Mozilla Version 1.2 and higher also support link prefetching. These include Firefox, FasterFox, and Netscape 7.01+. In these browsers, Web developers need to include html link tags or html meta-tags that give hints on what to prefetch. Most previous prefetching designs relied on a static approach for determining the documents to prefetch. More specifically, such designs do not consider the state of the network (e.g., traffic load) in deciding how many documents to prefetch. For example, in threshold-based schemes, all documents whose access probabilities are greater than the prefetching threshold are prefetched; such a strategy may actually increase the average latency of a document.
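To make the threshold-based decision concrete, the sketch below selects candidates whose access probability exceeds a threshold and caps how many are fetched in one idle (think) period using a crude bandwidth budget. The candidate list, probabilities, and cost test are invented for illustration and are not the model derived in this study.

```python
# Illustrative threshold-based prefetch decision with a cap on how many
# documents are fetched in one idle (think) period.  The candidate list,
# probabilities and the crude bandwidth budget are invented examples, not
# the model derived in this study.

def max_docs_for_idle(idle_seconds, bandwidth_bps, avg_size_bytes):
    """Prefetch only what the link can deliver during the expected idle
    period, so prefetch traffic does not delay demand requests."""
    return max(0, int(idle_seconds * bandwidth_bps / 8 // avg_size_bytes))

def select_prefetch(candidates, threshold, max_docs):
    """candidates: list of (url, access_probability, size_bytes).
    Keep documents above the threshold, most likely first, up to max_docs."""
    likely = [c for c in candidates if c[1] >= threshold]
    likely.sort(key=lambda c: c[1], reverse=True)
    return [url for url, _, _ in likely[:max_docs]]

candidates = [
    ("/news/today.html", 0.62, 30_000),
    ("/sports/scores.html", 0.41, 25_000),
    ("/weather.html", 0.35, 18_000),
    ("/archive/2001.html", 0.05, 400_000),
]
cap = max_docs_for_idle(idle_seconds=8, bandwidth_bps=1_000_000, avg_size_bytes=120_000)
print(select_prefetch(candidates, threshold=0.30, max_docs=cap))
# -> ['/news/today.html', '/sports/scores.html', '/weather.html']
```

Unlike a purely static threshold, the cap shrinks when the link is slow or the idle period is short, which is the kind of network-aware adjustment argued for above.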

 


XMM versus floating point registers

Processors with the SSE (Streaming SIMD Extensions) instruction set can do single precision floating point calculations in XMM registers. Processors with the SSE2 instruction set can also do double precision calculations in XMM registers. Floating point calculations are approximately equally fast in XMM registers and the old floating point stack registers. The decision of whether to use the floating point stack registers ST(0) – ST(7) or the XMM registers depends on the following factors.

Advantages of using ST() registers:

• Compatible with old processors without SSE or SSE2.

• Compatible with old operating systems without XMM support.

• Supports long double precision.

• Intermediate results are calculated with long double precision.

• Precision conversions are free in the sense that they require no extra instructions and take no extra time. You may use ST() registers for expressions where operands have mixed precision.

• Mathematical functions such as logarithms and trigonometric functions are supported by hardware instructions. These functions are useful when optimizing for size, but not necessarily faster than library functions using XMM registers.

• Conversions to and from decimal numbers can use the FBLD and FBSTP instructions when optimizing for size.

• Floating point instructions using ST() registers are smaller than the corresponding instructions using XMM registers. For example, FADD ST(0),  ST(1) is 2 bytes, while ADDSD XMM0, XMM1 is 4 bytes.

Advantages of using XMM or YMM registers:

• Can do multiple operations with a single vector instruction.

• Avoids the need to use FXCH for getting the desired register to the top of the stack.

• No need to clean up the register stack after use.

• Can be used together with MMX instructions.

• No need for memory intermediates when converting between integers and floating point numbers.

• 64-bit systems have 16 XMM/YMM registers, but only 8 ST() registers.

• ST() registers cannot be used in device drivers in 64-bit Windows.

• The instruction set for ST() registers is no longer developed. The instructions will probably still be supported for many years for the sake of backwards compatibility, but the instructions may work less efficiently in future processors.



Semantic Web as Relational Data and RDF

The Semantic Web is about creating a web of data by integrating data that comes from diverse sources: HTML pages, XML documents, spreadsheets, relational databases, etc. In order for software applications to be able to make use of these diverse data sources, a primary objective is to make the Internet appear as one unified, virtual database. This is possible through RDF (Resource Description Framework), which is a standardized format for representing data in a subject-predicate-object format. This facilitates the interchange and combination of data that is served all over the web, which we now consider Linked Data.

Much of the data that we need or want to share is located in relational databases. Furthermore, Internet-accessible databases contain up to 500 times more data than the static web, and three-quarters of these databases are managed by relational database systems. Therefore, if the Linked Data cloud is to keep growing, it is imperative to have a way to convert relational data into RDF.

To understand the relationship between relational databases and the Semantic Web, consider the following analogy: relational data is to RDF as a relational schema is to an ontology (RDFS or OWL). Data has to be stored in a relational database that is built from a schema. (As Wikipedia explains, the schema is the structure of a database system, described in a formal language supported by the database management system (DBMS); in a relational database, the schema defines the tables, the fields in each table, and the relationships between fields and tables.) Likewise, RDF data is an instance of a specific triple schema that is part of an ontology. Therefore, to create RDF content from a relational database, it is necessary to use an ontology.

Ontology and Database mapping

One way is to reuse existing domain ontologies, such as FOAF or SIOC (Semantically-Interlinked Online Communities), since ontologies are designed for reuse. If an ontology already exists, then a mapping between the old relational schema and the ontology has to be established. Imagine, for example, a database of contacts. The FOAF (Friend of a Friend) ontology represents the relationships between people. Because of this, the relational schema and relational data can be easily mapped to the FOAF ontology and RDF. This allows for the creation of a contact database that brings all the benefits of RDF relationships, with the added bonus of always having up-to-date contact details.

Several tools have been created that successfully tackle this problem, including D2RQ, R2O and others. Unfortunately, these current solutions are somewhat complex: one has to learn a mapping language, and an existing domain ontology must be used to map the database. This makes it harder for a database administrator to convert relational database content into RDF, and it hinders the task from becoming a priority.

Direct Mapping

Another way to approach this problem is through direct mapping methods, which do not consider existing domain ontologies. Instead, these methods (semi-)automatically generate the ontology based on the domain semantics encoded in the relational schema. Once the ontology has been generated, the relational content can be mapped to RDF. A system like this could also facilitate translating SPARQL to SQL queries, so that up-to-date relational data can be retrieved in RDF, instead of relying on an RDF dump of the relational data, which is the output of ontology-and-database mapping systems. Much work is still required in this area, however, to create more expressive automatic mappings.
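A minimal sketch of such a direct mapping is shown below: class and property URIs are derived automatically from table and column names, with no pre-existing domain ontology. The base URI, sample schema, and rows are hypothetical.

```python
# Minimal direct-mapping sketch: class and property URIs are derived from
# table and column names, with no pre-existing domain ontology.  The base
# URI, sample schema and rows are hypothetical.
BASE = "http://example.org/db/"

def direct_map(table, columns, primary_key, rows):
    """Yield (subject, predicate, object) triples for one table."""
    cls = f"<{BASE}{table}>"
    for row in rows:
        subject = f"<{BASE}{table}/{row[primary_key]}>"
        yield subject, "a", cls                       # rdf:type comes from the table name
        for col in columns:
            if col == primary_key or row.get(col) is None:
                continue
            predicate = f"<{BASE}{table}#{col}>"      # property comes from the column name
            yield subject, predicate, f'"{row[col]}"'

rows = [
    {"id": 1, "title": "Intro to RDF", "year": 2008},
    {"id": 2, "title": "Linked Data", "year": 2009},
]
for s, p, o in direct_map("book", ["id", "title", "year"], "id", rows):
    print(s, p, o, ".")
```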


Rule Based Automated Price Negotiation

The idea of automating e-commerce transactions has attracted a lot of interest. Multi-agent systems are one of the promising software technologies for achieving this goal. Here we discuss rule-based approaches to automated negotiation and present some experimental results based on our own implementation of a rule-based price negotiation mechanism in a model e-commerce multi-agent system. The experimental scenario considers multiple buyer agents involved in multiple English auctions that are performed in parallel.

Rules have been indicated as a promising technique for formalizing multi-agent negotiations. Note that when designing systems for automated negotiations one should distinguish between negotiation protocols (or mechanisms), which define the "rules of encounter" between participants, and negotiation strategies, which define behaviors aimed at achieving a desired outcome. Thus far, multiple rule representations have been proposed for both negotiation mechanisms and strategies. One proposal is a complete framework for implementing portable agent negotiations that comprises: (1) a negotiation infrastructure, (2) a generic negotiation protocol and (3) a taxonomy of declarative rules. The negotiation infrastructure defines the roles of negotiation participants and of a host. Participants exchange proposals within a negotiation locale managed by the host. The generic negotiation protocol defines three phases of a negotiation, namely (1) admission, (2) exchange of proposals and (3) formation of an agreement, in terms of how and when messages should be exchanged between the host and participants. Negotiation rules are used for enforcing the negotiation mechanism. Rules are organized into a taxonomy: rules for admitting participants to negotiations, rules for checking the validity of proposals, rules for protocol enforcement, rules for updating negotiation status and informing participants, rules for agreement formation, and rules for controlling negotiation termination.

Another approach suggests the use of an ontology for expressing negotiation protocols. Whenever an agent is admitted to a negotiation, it also obtains a specification of the negotiation rules in terms of the shared ontology. In some sense, the negotiation template used by our implementation is a "simplified" negotiation ontology, and the participants must be able to "understand" the parameters defined in the template. That approach is exemplified with a sample scenario, but provides neither implementation details nor experimental results. A mathematical characterization of auction rules for parameterizing the auction design space has also been introduced. The proposed parametrization is organized along three axes: i) bidding rules, which state when bids may be posted, updated or withdrawn; ii) the clearing policy, which states how the auction commands resource allocation (including auctioned items and money) between auction participants (this corresponds roughly to agreement making in our approach); and iii) the information revelation policy, which states how and what intermediate auction information is supplied to participating agents. An implementation of a new rule-based language for expressing auction mechanisms, the AB3D scripting language, has been reported. The design and implementation of AB3D were primarily influenced by this parametrization of the auction design space and by previous experience with the Michigan Internet AuctionBot. AB3D allows the initialization of auction parameters, the definition of rules for triggering auction events, the declaration of user variables and the definition of rules for controlling bid admissibility.
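To make the rule-based view concrete, the following toy sketch expresses a single English auction as a handful of small rule functions (admission, bid validity, status update, agreement formation) plus a trivial buyer strategy rule. The rule names, parameters, and values are illustrative only and do not reproduce the AB3D language or our actual implementation.

```python
# Toy rule-based English auction with simple buyer agents.  The rule names,
# parameters and values are illustrative only; they do not reproduce the
# AB3D language or our actual implementation.
from dataclasses import dataclass

@dataclass
class Buyer:
    name: str
    limit: float                 # private valuation; strategy rule: bid while price < limit

    def next_bid(self, current, increment):
        proposal = current + increment
        return proposal if proposal <= self.limit else None

def admission_rule(buyer, registered):
    return buyer.name in registered                  # only admitted participants may bid

def validity_rule(bid, current, increment):
    return bid is not None and bid >= current + increment

def english_auction(buyers, registered, start_price, increment):
    price, winner, active = start_price, None, True
    while active:                                    # proposal-exchange phase
        active = False
        for b in buyers:
            if not admission_rule(b, registered) or b is winner:
                continue
            bid = b.next_bid(price, increment)
            if validity_rule(bid, price, increment):
                price, winner = bid, b               # status update revealed to all bidders
                active = True
    return winner, price                             # agreement formation

buyers = [Buyer("A", 105.0), Buyer("B", 120.0), Buyer("C", 95.0)]
winner, price = english_auction(buyers, {"A", "B", "C"}, start_price=50.0, increment=5.0)
print(winner.name, "wins at", price)                 # B wins near the second-highest limit
```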

A formal executable approach for defining the strategy of agents participating in negotiations using defeasible logic programs has also been reported. This approach was demonstrated using English auctions and bargaining with multiple parties, by indicating sets of rules that describe the strategies of participating agents. A preliminary implementation of a system of agents that negotiate using strategies expressed in defeasible logic has been described as well. The implementation is demonstrated with a bargaining scenario involving one buyer and one seller agent, where the buyer strategy is defined by a defeasible logic program.

The CONSENSUS system allows agents to negotiate different complementary items on separate servers on behalf of human users. Each CONSENSUS agent uses rules partitioned into: i) basic rules, which determine the negotiation protocol; ii) strategy rules, which determine the negotiation strategy; and iii) coordination rules, which provide the knowledge for ensuring that either all of the complementary items or none are purchased. Note that in CONSENSUS the rule-based approach is taken beyond mechanism and strategy representation to capture coordination knowledge as well.


Stakeholders and the Value Chain of the Semantic Web

Stakeholders are people and organizations who are affected by and can influence the Semantic Web. Several main actors, which can be considered both consumers and producers (“prosumers”) of the Semantic Web, have been identified. The Semantic Web value chain represents groups of interests which co-create value through knowledge and experience exchanges. It connects researchers, who mainly produce Semantic Web theories and methods; computer science firms, which mainly produce solutions; and firms and end users, which adopt and use Semantic Web technologies.

 

 

Fig: Stakeholders and the Value Chain of the Semantic Web

According to this value chain, the following actors have been identified:

1.  Semantic Web researchers, who are directly involved in European and international projects, are developing and innovating theories and methods of the Semantic Web (i.e. Semantic Web languages, Semantic Web services or algorithms to deal with reasoning, scalability, heterogeneity, and dynamics). In order to validate the resulting theories, researchers often develop application prototypes which are tested within innovative firms.

2.  Standardization consortia are interested in working out new recommendations and standards for Semantic Web technologies, providing the basis for innovative tools.

3.  Software developers are interested in developing Semantic Web solutions and applications. They can directly sell the latter to firms and, at the same time, test innovative theories and methods.

4.  Intermediaries are interested in transferring technology and knowledge from researchers and developers to practitioners. Intermediaries can take the form of Semantic Web designers, consultants, venture capital firms, spin-offs, etc.

5.  Innovative enterprises are interested in catching new opportunities from the Semantic Web and also in developing new business models.

6. End-users are interested in obtaining useful and effective solutions. One of the most important requisites is to use a transparent technology – “No matter what is behind, if it works”. Although end users are positioned at the end of the value chain, their needs are very relevant for all Semantic Web stakeholders.

7.  De-facto standardization. End users should be considered a central element in the production of social and semantic-based technology, and thus should be strongly connected with all of the other value chain actors. Also, the consumerization phenomenon is unveiling new standards, which are de facto substituting for the standardization consortia.

Thanks to several European projects developed in different fields, a lot of companies consider Semantic Web technologies very challenging and are starting to test these technologies in real applications. In many sectors, such as the ones that the Knowledge Web NoE has explored, beneficial results are widespread:

a. aerospace

b. automobile industries

c. banking and finance

d. consumer goods

e. distribution, energy and public utilities

f. environment

g. government and public services

h. food industries

i. industry and construction

j. pharmaceuticals and health

k. service industries

l. sports

m. technology and solution providers

n. telecommunication

o. transport and logistics

The Semantic Web will spread across industry sectors as an important building block of all sorts of applications. In other words, the Semantic Web is affecting most of the industry sectors in which technology and knowledge are relevant assets to be managed.
