We evaluate the financial viability of Data Furnaces (DFs) from the perspective of cloud service providers. Because DFs serve as a primary heat source in homes, we first perform a simulation study to understand the heating demands of a single-family house across the climate zones in the U.S. Based on the results, we discuss the expected savings if DFs were used in each zone. We use ballpark figures and back-of-the-envelope calculations; the exact numbers depend on the specific households and data centers under consideration.
DFs reduce the total cost of conventional datacenters in three main ways. First, much of the initial capital investment to build datacenter infrastructure is avoided, including real estate, construction costs, and the cost of new power distribution, networking equipment, and other facilities. A second and related benefit is that operating costs are reduced. For example, cooling costs are significant in centralized data centers due to their power density, but DFs incur essentially no additional cooling or air circulation costs since the heat distribution system in the house already circulates air. Thus, DFs achieve better power usage effectiveness (PUE) than conventional datacenters. Finally, the money to buy and operate a furnace for home heating is avoided and can instead be used to offset the cost of servers: the cloud service provider can sell DFs at the price of a furnace and charge household owners for home heating. The heating cost thus remains the same for the host family, while costs are reduced for the cloud service provider.
One disadvantage of DFs is that the retail price of electricity is usually 10% to 50% higher in residential areas than in industrial areas. Another potential disadvantage is that network bandwidth can cost more in homes: if the home broadband link cannot support the service, a higher-bandwidth link must be purchased. Finally, maintenance costs will increase because the machines are geographically distributed.
To weigh these advantages and disadvantages, we perform a total cost of ownership (TCO) analysis for both DFs and conventional data centers. The initial and operating costs vary by climate zone, so we first measure the actual heating demand of homes using the U.S. Department of Energy's EnergyPlus simulator. This simulator calculates the heating load (in BTU) required each minute to keep a home warm, using actual weather traces recorded at airports. We simulate a 1,700-square-foot residential house that is moderately insulated and sealed, with a heating setpoint of 21°C (70°F). We use Typical Meteorological Year 3 (TMY3) weather data and replay the entire year for cities in each of the five climate zones in the U.S., as listed in Table 1. The last two columns show the percentage of time (at minute granularity) that the outside temperature is below 21°C (so heating is useful) and that the outside temperature is above 35°C (so the server may have to be shut down for thermal protection, since we do not expect to cool the furnace). The percentage of time in between is when the servers can run but the heat must be vented outside.
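The three operating regimes above can be computed from any minute-granularity temperature trace. A minimal sketch, using the two thresholds from the text (the toy trace below is a made-up example, not TMY3 data):

```python
# Classify a minute-granularity outdoor temperature trace into the three
# DF operating regimes summarized in Table 1.
HEAT_USEFUL_C = 21.0   # below this, server heat offsets the furnace
SHUTDOWN_C = 35.0      # above this, servers shut down (no furnace cooling)

def regime_fractions(temps_c):
    """Return (heating_useful, vent_outside, shutdown) as fractions of time."""
    n = len(temps_c)
    heating = sum(1 for t in temps_c if t < HEAT_USEFUL_C)
    shutdown = sum(1 for t in temps_c if t > SHUTDOWN_C)
    vent = n - heating - shutdown
    return heating / n, vent / n, shutdown / n

# Example: a toy one-hour trace (degrees Celsius, one sample per minute)
trace = [18.0] * 30 + [25.0] * 20 + [36.0] * 10
print(regime_fractions(trace))  # approximately (0.5, 0.333, 0.167)
```

Running this over a full TMY3 year (525,600 samples) yields the per-city percentages reported in the last two columns of Table 1.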
Table 1: Representative locations used in simulations
Most data sets do not make use of the full floating-point precision, so the lowest-order bits are noise rather than actual data. For effective compression, floating-point positions are usually quantized onto a uniform grid. To support quantization for streaming meshes whose bounding box is not known in advance, we use a scheme that quantizes conservatively using a bounding box that is learned as the mesh streams by. The first two vertex positions are compressed without quantization, and their distance gives the initial guess for the number of mantissa bits that must be preserved to guarantee the user-requested precision. This maximum distance is updated with every compressed vertex position and will eventually match the extent of the actual bounding box. How long quantization remains overly conservative depends on the order in which the vertex positions are compressed.
This scheme is part of our current API and works reasonably well, but we still need to analyze and optimize compression speeds and bit-rates. Since conservative quantization encodes many positions with more precision than needed, thereby inflating bit-rates, we want to use bounding box information whenever it is available. For the results reported in this paper we assume that advance knowledge of the bounding box is available. Our streaming mesh writer also supports lossless floating-point compression. This is less efficient, since the low-order bits of the mantissa typically contain incompressible noise, but providing this functionality makes it possible to use compression when quantization, for whatever reason, is not an option.
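The learned-bounding-box idea can be sketched as follows. This is a simplified illustration, not the actual API: the running bounding box grows as vertices stream by, and the (conservative) bit budget is re-derived from the current extent and the user-requested precision.

```python
import math

class StreamingQuantizer:
    """Sketch of conservative quantization with a learned bounding box."""

    def __init__(self, precision):
        self.precision = precision  # user-requested absolute precision
        self.lo = None              # learned bounding box, per axis
        self.hi = None

    def add(self, pos):
        """Update the learned box with a vertex; return (grid indices, bits)."""
        if self.lo is None:
            self.lo, self.hi = list(pos), list(pos)
        else:
            self.lo = [min(a, b) for a, b in zip(self.lo, pos)]
            self.hi = [max(a, b) for a, b in zip(self.hi, pos)]
        # Bits needed to resolve the current (still growing) extent to the
        # requested precision; overly conservative until the box converges.
        extent = max(h - l for l, h in zip(self.lo, self.hi)) or self.precision
        bits = max(1, math.ceil(math.log2(extent / self.precision)))
        step = extent / (1 << bits)
        return tuple(round((p - l) / step) for p, l in zip(pos, self.lo)), bits

q = StreamingQuantizer(precision=0.01)
for v in [(0.0, 0.0, 0.0), (1.0, 0.5, 0.25), (4.0, 2.0, 1.0)]:
    print(q.add(v))
```

Note how the bit count grows with the learned extent: the same requested precision costs more bits once a farther-away vertex has been seen, which is exactly why early positions are encoded conservatively.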
The EX8208 supports Juniper Networks’ unique Virtual Chassis technology, which enables two interconnected EX8200 chassis— any combination of EX8208s or EX8216s—to operate as a single, logical device with a single IP address. Deployed as a collapsed aggregation or core layer solution, an EX8200 Virtual Chassis configuration creates a network fabric for interconnecting access switches, routers, and service-layer devices such as firewalls and load balancers using standards-based Ethernet LAGs.
In a Virtual Chassis configuration, EX8200 switches can be interconnected using either single line-rate 10GbE links or a LAG of up to 12 line-rate 10GbE links. Since the Virtual Chassis interconnections use small form-factor pluggable plus (SFP+) interfaces, Virtual Chassis member switches can be separated by distances of up to 40 km. If the EX8200 Virtual Chassis member switches are located in the same or adjacent racks, low-cost direct attach cables (DACs) can be used as the interconnect mechanism.
Since the network fabric created by an EX8200 Virtual Chassis configuration prevents loops, it eliminates the need for protocols such as Spanning Tree. The fabric also simplifies the network by eliminating the need for Virtual Router Redundancy Protocol (VRRP), increasing the scalability of the network design. In addition, since the Virtual Chassis Control Protocol (VCCP) used to form the EX8200 Virtual Chassis configuration does not affect the function of the control plane, Junos OS control plane protocols such as 802.3ad, OSPF, Internet Group Management Protocol (IGMP), Protocol Independent Multicast (PIM), BGP, and others running on an EX8200 Virtual Chassis system behave exactly as they do on a standalone chassis.
EX8200 Virtual Chassis configurations are highly resilient, with no single point of failure: no single element, whether a chassis, a line card, a Routing Engine, or an interconnection, can render the entire fabric inoperable following a failure. Virtual Chassis technology also makes server virtualization at scale feasible by providing simple L2 connectivity over a very large pool of compute resources located anywhere within a data center.
Virtual Chassis technology can also be used to extend EX8200-based VLANs between data centers by placing an equal number of switches in both data centers, or by interconnecting two separate Virtual Chassis configurations using a simple L2 trunk.
In some Canadian jurisdictions, personal information is defined as recorded information about an identifiable individual, other than contact information. Under that broad definition, any biometric information is personal information. However, in this document, we will adopt a narrower concept of “personally identifiable information” (PII). Information is considered personally identifiable if an individual may be uniquely identified either from this information alone or in combination with other information. If it is determined that the information is PII (and not just contact information), it will also be considered “personal information” in other Canadian jurisdictions (including under the federal Personal Information Protection and Electronic Documents Act).
Organizations may encounter the following claims: (i) the stored biometric information is just a meaningless number, and therefore is not personally identifiable information; (ii) biometric templates stored in a database cannot be linked to other databases because a sophisticated proprietary algorithm is used; or (iii) a biometric image cannot be reconstructed from the stored biometric template. In most cases, none of these statements is true. If organizations do not have sufficient, state-of-the-art expertise in biometrics, they can easily fall victim to misleading information.
As such, great caution must be taken when stored biometric information is referred to as a “meaningless number.” It will be shown below that this is not necessarily true; in fact, a skilled (but not necessarily malicious) individual, with the proper knowledge, may be able to not only derive personally identifiable information from the stored “number,” but also to reconstruct a replica fingerprint from template data. What follows in this section is a discussion of the validity, or lack thereof, of the notion that the stored biometric information is a “meaningless number.” In particular, the following questions will be addressed:
- Does calling a biometric template a “number” reduce its sensitivity as personal information?
- Which biometric information is, in fact, collected?
- Is it possible to identify an individual based on the collected information?
- Is it possible to link the collected information with other fingerprint databases?
- Can a fingerprint image be reconstructed from the collected information?
The Safe Recipients list allows you to specify email addresses or domain names so that messages you send to them will not be filtered as junk. You can add, edit, and remove entries in the Safe Recipients list.
1. Be sure the junk email filter is on.
2. On the toolbar, click OPTIONS. The Options screen appears.
3. From the Options list, select Junk E-Mail. The Junk E-Mail options appear.
4. To add an entry, in the Safe Recipients List section:
- Click the Add text box.
- Type the email address (e.g., email@example.com) or the domain name (e.g., cvtc.edu) of the safe recipient. NOTE: For domains, you do not need to type the @. HINT: Adding a domain name to your Safe Recipients list ensures that messages sent to any address ending in that domain will not be moved to your Junk E-mail folder unless the address is blocked.
- Click ADD. The recipient is added.
5. To edit an entry:
- From the Safe Recipients list, select the entry you want to edit.
- Click EDIT. The recipient appears in the Add text box.
- Make your desired changes.
- Click ADD. The entry is edited.
6. To remove an entry:
- From the Safe Recipients list, select the entry you want to remove.
- Click REMOVE. The entry is removed.
7. Click SAVE. The changes are saved.
Email systems support a service called Delivery Status Notification or DSN for short. This feature allows end users to be notified of successful or failed delivery of email messages. Examples include sending a report when email delivery has been delayed or when an email message has been successfully delivered.
A non-delivery report or NDR is a DSN message sent by the email server (mail transfer agent or MTA for short) that informs the sender that the delivery of the email message failed. While there are various events that can trigger an NDR, the most common cases are when the recipient of the message does not exist or when the destination mailbox is full.
A simple email message is typically made up of a set of headers and at least one body; an example can be seen in Figure 1. In this example, the email is sent from sender@domain1.com to recipient@domain2.com. If the domain name domain2.com does not exist or does not have an email server, then the MTA at domain1.com will send an NDR to sender@domain1.com. When the domain name exists and the MTA at domain2.com is accepting email, the behavior is different. In this case, the domain2.com email server should check whether the destination mailbox exists and is accepting email. If it is not, the MTA should reject the message. However, many mail servers will accept any email and then bounce it later if the destination address does not exist.
Figure 2 describes a scenario where recipient@domain2.com does not exist, but the mail server at domain2.com still accepts the email because it cannot verify whether the mailbox exists. The server then sends an NDR message to sender@domain1.com with the original message attached.
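The header/body structure and the shape of an NDR can be illustrated with Python's standard email library. This is a sketch of the message layout only, not of any particular MTA's behavior; the addresses and subject lines are illustrative placeholders:

```python
from email.message import EmailMessage

# A minimal message with headers and one body, mirroring Figure 1.
msg = EmailMessage()
msg["From"] = "sender@domain1.com"
msg["To"] = "recipient@domain2.com"
msg["Subject"] = "Hello"
msg.set_content("This is the message body.")

# Sketch of the NDR the domain1.com MTA would generate on failure:
# a report addressed back to the sender, with the original message attached.
ndr = EmailMessage()
ndr["From"] = "MAILER-DAEMON@domain1.com"
ndr["To"] = msg["From"]
ndr["Subject"] = "Undelivered Mail Returned to Sender"
ndr.set_content("The destination mailbox does not exist.")
ndr.add_attachment(msg)  # attached as message/rfc822

print(ndr["To"])  # sender@domain1.com
```

Attaching the original message as `message/rfc822` is what lets the sender's client display the bounced email inline, as in the Figure 2 scenario.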
A quantum public key is a quantum state drawn from a set of non-orthogonal states. Multiple copies of the same key can be issued and distributed to different participants in a system. Such states can be used to encode classical information privately, because by the principles of quantum theory the states cannot be fully distinguished. The natural way to encode classical information on quantum states is to apply some quantum operation which represents the information on the quantum state.
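The indistinguishability of non-orthogonal states can be made quantitative. As a small illustration (the particular states and angle below are arbitrary choices, not from this thesis): for two equiprobable pure states with inner product s, the optimal single-shot probability of guessing which state was issued is the Helstrom bound (1 + sqrt(1 - s^2))/2, which is strictly below 1 whenever the states are non-orthogonal.

```python
import math

def helstrom_success(overlap):
    """Optimal probability of distinguishing two equiprobable pure states
    with real inner product `overlap` (Helstrom bound)."""
    return 0.5 * (1.0 + math.sqrt(1.0 - overlap ** 2))

# Two real qubit states |psi_b> = cos(theta)|0> + (-1)^b sin(theta)|1>.
theta = math.pi / 8
psi0 = (math.cos(theta),  math.sin(theta))
psi1 = (math.cos(theta), -math.sin(theta))
overlap = psi0[0] * psi1[0] + psi0[1] * psi1[1]  # equals cos(2*theta)

print(helstrom_success(overlap))  # below 1: the key leaks only partial info
print(helstrom_success(0.0))      # orthogonal states are fully distinguishable
```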
We consider using quantum public-key encryption for its direct and natural purpose: secure communication. In this part the emphasis is on the advantages of this method with respect to private-key cryptography. Other uses of public keys in quantum cryptography include quantum fingerprinting, quantum digital signatures, and quantum string commitment. In each of these cases the set of non-orthogonal states is chosen to suit the particular application. In the case of secure communication, discussed in this thesis, the quantum states must have the property that they can easily be used to encode classical information by a person who does not know which of the states from the set was chosen.
The contribution of this definition is a rigorous description of the parameters of a quantum public key. We give simple and efficient protocols for the distribution of public keys and for the encoding and decoding of classical information using these keys. The protocols are divided into two types: those where the key distribution phase is quantum but the encodings and decodings of messages are classical, and those where the encoding and decoding procedures also involve quantum communication. Each protocol comes with a thorough analysis of its security.
The good properties of this method of encryption allow us to build a network in which the content of the exchanged messages, as well as the identities of senders and receivers, is kept secret from any unauthorized entity. This unauthorized entity (the adversary) is assumed to control an arbitrary fraction (smaller than 1) of the users/players in the network. The network provides unconditional security in both of these aspects. In terms of communication complexity, the main parameter is the number of users of the network. With respect to this parameter, our protocols deliver a single message in a polylogarithmic number of communication rounds, and the total amount of communication per message delivery is also polylogarithmic.
Classically, according to what is currently known, tolerating an arbitrary fraction of adversary-controlled users can be achieved efficiently only with computational security (to be considered efficient, a protocol has to operate with both a polylogarithmic number of rounds and polylogarithmic total communication per message); proving this fact is an open problem. If unconditional security is required, the only known classical solutions either limit the adversary to controlling at most half of the players in the network, or are highly inefficient in terms of communication cost: they require at least a linear number of communication rounds per message delivery (which is far from acceptable), and the total amount of communication per message is polynomial.
ColdFusion 8 enables Event Gateways, instant messaging (IM), and SMS (short message service) for interacting with external systems. Event Gateways are ColdFusion components that respond asynchronously to non-HTTP requests: instant messages, SMS text from wireless devices, and so on. ColdFusion provides Lotus Sametime and XMPP (Extensible Messaging and Presence Protocol) gateways for instant messaging. It also provides an event gateway for interacting with SMS text messages.
Injection through these gateways can occur when users (or external systems) send malicious code to be executed on the server. These gateways all use ColdFusion Components (CFCs) for processing. Use standard ColdFusion functions, tags, and validation techniques to protect against malicious code injection. Sanitize all input strings and do not allow unvalidated code to access backend systems.
- Use the XML functions to validate XML input.
- When performing XPath searches and transformations in ColdFusion, validate the source before executing.
- Use ColdFusion validation techniques to sanitize strings passed to xmlSearch for performing XPath queries.
- When performing XML transformations use only a trusted source for the XSL stylesheet.
- Ensure that the memory size of the Java Sandbox containing ColdFusion can handle large XML documents without adversely affecting server resources.
- Set the maximum memory (heap) value to less than the amount of RAM on the server (-Xmx)
- Remove DOCTYPE elements from the XML string before converting it to an XML object.
- Use scriptProtect to thwart most attempts of cross-site scripting. Set scriptProtect to All in the Application.cfc file.
- Use <cfparam> or <cfargument> to instantiate variables in ColdFusion. Use these tags with the name and type attributes; if the value is not of the specified type, ColdFusion returns an error.
- To handle untyped variables, use IsValid() to validate their values against any legal object type that ColdFusion supports.
- Use <cfqueryparam> and <cfprocparam> to validate dynamic SQL variables against database datatypes.
- Use CFLDAP for accessing LDAP servers. Avoid allowing native JNDI calls to connect to LDAP.
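Several of the recommendations above can be combined in a few lines of CFML. The following is a hypothetical sketch (the datasource name, table, and parameter are invented for illustration) showing <cfparam> type checking together with <cfqueryparam> binding:

```cfml
<!--- Hypothetical example: reject non-integer input up front, then bind
      the value with cfqueryparam so it is validated against the database
      datatype instead of being concatenated into the SQL string. --->
<cfparam name="url.userID" type="integer">
<cfquery name="getUser" datasource="myDSN">
    SELECT name
    FROM   users
    WHERE  id = <cfqueryparam value="#url.userID#" cfsqltype="cf_sql_integer">
</cfquery>
```

If url.userID is missing or not an integer, <cfparam> throws an error before the query runs; the cfqueryparam binding then prevents SQL injection even for values that pass the type check.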
With the rapid expansion of cloud computing and server farms, data centers are struggling to keep up with the amount of cabling, and the power, required to support their networks. Fiber-optic data links are used to transfer data between servers and to other areas, but the signals still require copper cabling in and around the server. On-chip optical interlinks embed a small silicon laser directly on the chip so that a high-bandwidth optical signal can be sent directly. This greatly reduces the amount of cabling required for server processors to make the necessary connections. Silicon photodetectors embedded on the chip provide the optical sensing needed to receive optical transmissions.
The concept of replacing traditional copper cable within computers with its optical equivalent will soon become commonplace. Intel’s “Light Peak,” seen in Figure 1.1, mounts fiber-optic cable directly to the chip. The transmitter is fabricated in a similar manner as the chip itself, allowing data transmission between devices at 10 Gb/s, 20 times faster than USB. The device shown in Figure 3.1 is capable of transmitting 50 Gb/s down a single channel. With higher-powered servers in mind, Hewlett-Packard has begun to use optical waveguides carved into plastic with metal reflectors to route optical data within servers, instead of using fiber-optic interconnects. These waveguides can be manufactured in the same fashion as a compact disc. IBM has begun mounting optical transmitters on chips to speed data flow between the cores of multicore processors. All of these ideas have been around for some time, but the technology to produce these chips is only now coming into play.
Figure 1.1 Intel’s Hybrid Integrated Silicon Laser
While these technologies are effective in increasing data flow in and around servers, photons must still be converted to electrons before processing. The only thing standing in the way of an all-optical digital processor is an optical transistor that can manipulate the flow of photons within the chip.
PDNS is based on the ns2 simulation tool. Each PDNS federate differs from ns2 in two important respects. First, modifications to ns2 were required to support distributed execution. A central problem that must be addressed when federating sequential network simulation software in this way is the global state problem: each PDNS federate no longer has global knowledge of the state of the system. In particular, one ns2 federate cannot directly reference state information for network nodes that are instantiated in a different federate. In general, some provision must be made to deal with both static state information that does not change during the execution (e.g., topology information) and dynamic information that does change (e.g., queue lengths). Fortunately, due to the modular design of the ns2 software, one need only address this problem for static information concerning network topology, greatly simplifying the global state problem.
To address this problem, a naming convention is required for an ns2 federate to refer to global state information. Two types of remote information must be accessed. The first corresponds to link end points. Consider the case of a link that spans federate boundaries, i.e., whose two endpoints are mapped to different federates. Such links are referred to in PDNS as rlinks (remote links). Some provision is necessary to refer to end points that reside in a different federate; this is handled in PDNS by using an IP address and network mask to refer to any link endpoint. When configuring a simulation in PDNS, one creates a Tcl script for each federate that instantiates the portion of the network mapped to that federate, and instantiates rlinks to represent the “edge connections” that span federates. The second situation where the global state issue arises concerns references to endpoints of logical connections; specifically, the final destination of a TCP flow may reside in a different federate. Here, PDNS again borrows well-known networking concepts and uses a port number and IP address to identify a logical end point.
The second way that PDNS differs from ns2 is that it uses a technique called NIx-Vector routing. An initial study of ns2 indicated that routing table size placed severe limitations on the size of networks that could be simulated, because the amount of memory required to store routing table information grows as O(N²), where N is the number of network nodes. To address this problem, message routes are computed dynamically, as needed. The path from source to destination is encoded as a sequence of (in effect) output port numbers called the NIx-Vector, leading to a compact representation. Previously computed routes are cached to avoid repeated re-computation of the same path. This greatly reduces the amount of memory required and greatly increases the size of network that can be simulated on a single node. The NIx-Vector technique is also applicable to the sequential version of ns2 and is used in PDNS.
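The idea behind NIx-Vector routing can be sketched in a few lines. This is an illustration of the technique, not PDNS code: a route is computed on demand with breadth-first search, encoded as the sequence of neighbor indices (in effect, output port numbers), and cached so the O(N²) all-pairs routing table is never built.

```python
from collections import deque

# graph: node -> ordered list of neighbors; a neighbor's position in this
# list is its "neighbor index" (the output port, in effect).
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
route_cache = {}  # (src, dst) -> NIx-Vector

def nix_vector(src, dst):
    """Return the route src->dst as a list of neighbor indices,
    computing it on demand with BFS and caching the result."""
    if (src, dst) in route_cache:
        return route_cache[(src, dst)]
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in graph[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    # Walk back from dst, recording each hop's index in its parent's list.
    vec = []
    v = dst
    while parent[v] is not None:
        u = parent[v]
        vec.append(graph[u].index(v))
        v = u
    vec.reverse()
    route_cache[(src, dst)] = vec
    return vec

print(nix_vector(0, 3))  # [0, 1]: take port 0 at node 0, then port 1 at node 1
```

Each cached vector costs one small integer per hop rather than one table entry per destination at every node, which is where the memory savings over precomputed routing tables comes from.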