We evaluate the financial viability of Data Furnaces (DFs) from the perspective of cloud service providers. Because DFs serve as a primary heat source in homes, we first perform a simulation study to understand the heating demand of a single-family house across the climate zones of the U.S. Based on the results, we discuss the expected savings if DFs were used in each zone. We use ballpark figures and back-of-the-envelope calculations; the exact numbers depend on the specific households and data centers under consideration.
DFs reduce the total cost of conventional data centers in three main ways. First, much of the initial capital investment needed to build datacenter infrastructure is avoided, including real estate, construction costs, and the cost of new power distribution, networking equipment, and other facilities. A second and related benefit is reduced operating cost. For example, cooling is a significant expense in centralized data centers because of their power density, but DFs incur essentially no additional cooling or air circulation cost, since the heat distribution system of the house already circulates air. Thus, DFs improve power usage effectiveness (PUE) relative to conventional data centers. Finally, the money that would otherwise buy and operate a furnace for home heating is avoided and can instead offset the cost of servers: the cloud service provider can sell DFs at the price of a furnace and charge household owners for home heating. The heating cost then remains the same for the host family, while costs are reduced for the cloud service provider.
One disadvantage of DFs is that the retail price of electricity is usually 10% to 50% higher in residential areas than in industrial areas. Another potential disadvantage is that network bandwidth can cost more in homes: if the home broadband link cannot support the service, a higher-bandwidth link must be purchased. Finally, maintenance costs will increase because the machines are geographically distributed.
To weigh these advantages and disadvantages, we perform a total cost of ownership (TCO) analysis for both DFs and conventional data centers. Because the initial and operating costs vary with climate zone, we first measure the actual heating demand for homes using the U.S. Department of Energy's EnergyPlus simulator. This simulator calculates the heating load (in BTU) required each minute to keep a home warm, using actual weather traces recorded at airports. We simulate a 1,700-square-foot residential house that is moderately insulated and sealed, with a heating setpoint of 21°C (70°F). We use Typical Meteorological Year 3 (TMY3) weather data and replay the entire year for cities in each of the five U.S. climate zones, as listed in Table 1. The last two columns show the percentage of time (at minute granularity) that the outside temperature is below 21°C (so heating is useful) and above 35°C (so the server may have to be shut down for thermal protection, since we do not expect to cool the furnace). The remaining time is when the servers can run but the heat must be pumped outside.
Table 1: Representative locations used in simulations
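To make the tradeoff concrete before the full TCO analysis, the sketch below compares yearly electricity cost per server. Every constant is an illustrative assumption of ours (rates, server power, PUE values), not a measured figure; it simply shows how a higher residential electricity rate can be offset by eliminating cooling overhead.

# Back-of-the-envelope comparison of per-server energy costs for a
# Data Furnace vs. a conventional datacenter. All constants are
# illustrative assumptions, not measurements.
RESIDENTIAL_RATE = 0.12  # $/kWh, assumed residential retail price
INDUSTRIAL_RATE = 0.09   # $/kWh, assumed industrial rate (~25% lower)
SERVER_POWER_KW = 0.3    # assumed average draw of one server
HOURS_PER_YEAR = 8760

CONVENTIONAL_PUE = 1.5   # assumed cooling/distribution overhead
DF_PUE = 1.0             # DF: the house ductwork circulates the heat

def yearly_energy_cost(rate, pue):
    """Electricity cost to run one server for a year at a given PUE."""
    return SERVER_POWER_KW * HOURS_PER_YEAR * pue * rate

print(f"conventional: ${yearly_energy_cost(INDUSTRIAL_RATE, CONVENTIONAL_PUE):.0f}/server/year")
print(f"data furnace: ${yearly_energy_cost(RESIDENTIAL_RATE, DF_PUE):.0f}/server/year")

Under these assumptions the furnace server is cheaper to power despite the higher residential rate, because the PUE overhead disappears; the real numbers, as noted above, depend on the household and data center in question.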
The EX8208 supports Juniper Networks' unique Virtual Chassis technology, which enables two interconnected EX8200 chassis (any combination of EX8208s or EX8216s) to operate as a single, logical device with a single IP address. Deployed as a collapsed aggregation or core layer solution, an EX8200 Virtual Chassis configuration creates a network fabric for interconnecting access switches, routers, and service-layer devices such as firewalls and load balancers using standards-based Ethernet LAGs.
In a Virtual Chassis configuration, EX8200 switches can be interconnected using either single line-rate 10GbE links or a LAG with up to 12 line-rate 10GbE links. Since the Virtual Chassis interconnections use small form-factor pluggable (SFP+) interfaces, Virtual Chassis member switches can be separated by distances of up to 40 km. If the EX8200 Virtual Chassis switch members are located in the same or adjacent racks, low-cost direct attach cables (DACs) can be used as the interconnect mechanism.
Since the network fabric created by an EX8200 Virtual Chassis configuration prevents loops, it eliminates the need for protocols such as Spanning Tree. The fabric also simplifies the network by eliminating the need for the Virtual Router Redundancy Protocol (VRRP), increasing the scalability of the network design. In addition, since the Virtual Chassis Control Protocol (VCCP) used to form the EX8200 Virtual Chassis configuration does not affect the function of the control plane, Junos OS control plane protocols such as 802.3ad, OSPF, Internet Group Management Protocol (IGMP), Protocol Independent Multicast (PIM), BGP, and others running on an EX8200 Virtual Chassis system behave exactly as they do on a standalone chassis.
EX8200 Virtual Chassis configurations are highly resilient, with no single point of failure, ensuring that no single element (a chassis, a line card, a Routing Engine, or an interconnection) can render the entire fabric inoperable following a failure. Virtual Chassis technology also makes server virtualization at scale feasible by providing simple L2 connectivity over a very large pool of compute resources located anywhere within a data center.

Virtual Chassis technology can also be used to extend EX8200-based VLANs between data centers by placing an equal number of switches in both data centers, or by interconnecting two separate Virtual Chassis configurations using a simple L2 trunk.
In some Canadian jurisdictions, personal information is defined as recorded information about an identifiable individual, other than contact information. Under that broad definition, any biometric information is personal information. In this document, however, we adopt a narrower concept of "personally identifiable information" (PII): information is considered personally identifiable if an individual may be uniquely identified from it, either alone or in combination with other information. If the information is determined to be PII (and not just contact information), it will also be considered "personal information" by other Canadian jurisdictions (including under the federal Personal Information Protection and Electronic Documents Act).
Some organizations may encounter the following claims: (i) the stored biometric information is just a meaningless number and therefore is not personally identifiable information; (ii) biometric templates stored in a database cannot be linked to other databases because a sophisticated proprietary algorithm is used; or (iii) a biometric image cannot be reconstructed from the stored biometric template. In most cases, these statements are not true. Organizations that lack sufficient, state-of-the-art expertise in biometrics can easily fall victim to such misleading claims.
As such, great caution must be taken when stored biometric information is referred to as a "meaningless number." It will be shown below that this is not necessarily true; in fact, a skilled (but not necessarily malicious) individual with the proper knowledge may be able not only to derive personally identifiable information from the stored "number," but also to reconstruct a replica fingerprint from the template data. What follows in this section is a discussion of the validity, or lack thereof, of the notion that stored biometric information is a "meaningless number." In particular, the following questions will be addressed:
- Does calling a biometric template a “number” reduce its sensitivity as personal information?
- Which biometric information is, in fact, collected?
- Is it possible to identify an individual based on the collected information?
- Is it possible to link the collected information with other fingerprint databases?
- Can a fingerprint image be reconstructed from the collected information?
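To see why the linkage question matters, consider the toy sketch below. The data and the matcher are entirely hypothetical (three-component vectors and plain cosine similarity, not any vendor's proprietary algorithm); it only illustrates that templates stored as opaque numbers can still link records across databases when both sets were produced by the same feature extractor.

import math

def similarity(a, b):
    """Cosine similarity between two template vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Database A holds named enrollment templates; database B holds
# "anonymous" templates from another system. All values are made up.
db_a = {"alice": [0.9, 0.1, 0.4], "bob": [0.2, 0.8, 0.5]}
db_b = {"record_17": [0.88, 0.12, 0.41]}  # a fresh scan of the same finger

for rec, tmpl in db_b.items():
    best = max(db_a, key=lambda name: similarity(db_a[name], tmpl))
    if similarity(db_a[best], tmpl) > 0.99:
        # The "meaningless number" just identified a person.
        print(f"{rec} links to {best}")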
The Safe Recipients list lets you specify email addresses or domain names, such as mailing lists you belong to, so that messages sent to those addresses are not filtered as junk. You can add, edit, and remove entries in the Safe Recipients list.
1. Be sure the junk email filter is on.
2. On the toolbar, click OPTIONS. The Options screen appears.
3. From the Options list, select Junk E-Mail. The Junk E-Mail options appear.
4. To add an entry, in the Safe Recipients List section:
- Click the Add text box
- Type the email address (e.g., firstname.lastname@example.org) or the domain name (e.g., cvtc.edu) of the safe recipient.
NOTE: For domains, you do not need to type the @.
HINT: Adding a domain name to your Safe Recipients list ensures that messages sent to any address ending in that domain will not be moved to your Junk E-mail folder unless that address is blocked.
- Click ADD. The recipient is added.
5. To edit an entry:
- From the Safe Recipients list, select the entry you want to edit
- Click EDIT. The recipient appears in the Add text box.
- Make your desired changes
- Click ADD. The entry is updated.
6. To remove an entry:
- From the Safe Recipients list, select the entry you want to remove
- Click REMOVE. The entry is removed.
7. Click SAVE. The changes are saved.
Email systems support a service called Delivery Status Notification, or DSN for short. This feature allows end users to be notified of the successful or failed delivery of email messages. Examples include sending a report when delivery of a message has been delayed or when a message has been successfully delivered.
A non-delivery report, or NDR, is a DSN message sent by the email server (mail transfer agent, or MTA for short) that informs the sender that delivery of the email message failed. While various events can trigger an NDR, the most common cases are when the recipient of the message does not exist or when the destination mailbox is full.
A simple email message is typically made up of a set of headers and at least one body; an example can be seen in Figure 1. In this example, the email is sent from user1@domain1.com to user2@domain2.com. If the domain name domain2.com does not exist or does not have an email server, then the MTA at domain1.com will send an NDR to user1@domain1.com. When the domain name exists and the MTA at domain2.com is accepting email, the behavior is different. In this case, the domain2.com email server should check whether the destination mailbox exists and is accepting email. If it is not, the MTA should reject the message. However, many mail servers will accept any email and bounce it later if the destination address does not exist.

Figure 2 describes a scenario where user2@domain2.com does not exist, but the mail server at domain2.com still accepts the email because it cannot verify whether the mailbox exists. The server then sends an NDR message to user1@domain1.com with the original message attached.
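To make the message structure concrete, the sketch below builds such an NDR with Python's standard email library. The addresses and the 550 status line are illustrative only; real MTAs also include a machine-readable multipart/report part (RFC 3464) alongside the human-readable text.

from email.message import EmailMessage

# The original message that could not be delivered.
original = EmailMessage()
original["From"] = "user1@domain1.com"
original["To"] = "user2@domain2.com"
original["Subject"] = "Hello"
original.set_content("Hi, just checking in.")

# The NDR that the MTA at domain2.com would send back to the sender.
ndr = EmailMessage()
ndr["From"] = "MAILER-DAEMON@domain2.com"
ndr["To"] = "user1@domain1.com"
ndr["Subject"] = "Undelivered Mail Returned to Sender"
ndr.set_content(
    "The following message could not be delivered:\n"
    "  <user2@domain2.com>: mailbox does not exist (550 5.1.1)"
)
ndr.add_attachment(original)  # attach the original as message/rfc822

print(ndr)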
A quantum public key is a quantum state drawn from a set of non-orthogonal states. Multiple copies of the same key can be issued and distributed to different participants in a system. Such states can be used to encode classical information privately because, by the principles of quantum theory, the states cannot be fully distinguished. The natural way to encode classical information on quantum states is to apply a quantum operation that represents the information on the quantum state.
We consider using quantum public key encryption for its direct and natural purpose: secure communication. In this part, the emphasis is on the advantages of this method with respect to private key cryptography. Other uses of public keys in quantum cryptography include quantum fingerprinting, quantum digital signatures, and quantum string commitment. In each of these cases, the set of non-orthogonal states is chosen to suit the particular application. In the case of secure communication, discussed in this thesis, the quantum states must have the property that a person can easily use them to encode classical information without knowing which state from the set was chosen.
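As a toy illustration of this property (our own example, not one of the protocols analyzed in this thesis), consider single-qubit key states parameterized by a secret angle:

\[ |\psi_\theta\rangle = \cos\theta\,|0\rangle + \sin\theta\,|1\rangle . \]

Anyone holding a copy of the key can encode a bit $b$ without knowing $\theta$ by applying the fixed rotation $R(b\pi/2)$, where

\[ R(\alpha) = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix}, \qquad R\!\left(b\tfrac{\pi}{2}\right)|\psi_\theta\rangle = |\psi_{\theta + b\pi/2}\rangle . \]

The key owner, who knows $\theta$, decodes $b$ perfectly by measuring in the orthonormal basis $\{|\psi_\theta\rangle, |\psi_{\theta+\pi/2}\rangle\}$, while an eavesdropper who does not know $\theta$ faces non-orthogonal possibilities across different values of $\theta$ and cannot reliably distinguish them.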
The contribution of this definition is a rigorous description of the parameters of a quantum public key. We give simple and efficient protocols for the distribution of public keys, and for the encoding and decoding of classical information using these keys. The protocols are divided into two types: those where the key distribution phase is quantum but the encoding and decoding of messages are classical, and those where the encoding and decoding procedures also involve quantum communication. Each protocol comes with a thorough analysis of its security.
The good properties of this method of encryption allow us to build a network in which the content of the exchanged messages, as well as the identities of the senders and receivers of messages, is kept secret from any unauthorized entity. This unauthorized entity (the adversary) is assumed to control an arbitrary fraction (smaller than 1) of the users/players in the network. The network provides unconditional security in both of these aspects. In terms of communication complexity, the main parameter is the number of users of the network. With respect to this parameter, we have protocols that require a polylogarithmic number of communication rounds to deliver a single message. The total amount of communication per message delivery is also polylogarithmic in this parameter.
Classically, according to what is currently known, tolerating an arbitrary fraction of adversary-controlled users can be achieved efficiently only with computational security (to be considered efficient, a protocol has to operate with both a polylogarithmic number of rounds and polylogarithmic total communication per message). Proving this fact is an open problem. If unconditional security is required, then the only known classical solutions either limit the adversary to controlling at most half of the players in the network, or are highly inefficient in terms of communication cost: they require at least a linear number of communication rounds per single message delivery (far from acceptable), and the total amount of communication per message is polynomial.
ColdFusion 8 supports Event Gateways, instant messaging (IM), and SMS (Short Message Service) for interacting with external systems. Event Gateways are ColdFusion components that respond asynchronously to non-HTTP requests: instant messages, SMS text from wireless devices, and so on. ColdFusion provides Lotus Sametime and XMPP (Extensible Messaging and Presence Protocol) gateways for instant messaging. It also provides an event gateway for interacting with SMS text messages.
Injection along these gateways can happen when users (or systems) send malicious code to be executed on the server. These gateways all use ColdFusion Components (CFCs) for processing. Use standard ColdFusion functions, tags, and validation techniques to protect against malicious code injection. Sanitize all input strings and do not allow unvalidated input to reach backend systems; a language-neutral sketch of these two defenses follows the list below.
- Use the XML functions to validate XML input.
- When performing XPath searches and transformations in ColdFusion, validate the source before executing.
- Use ColdFusion validation techniques to sanitize strings passed to xmlSearch for performing XPath queries.
- When performing XML transformations use only a trusted source for the XSL stylesheet.
- Ensure that the memory size of the Java Sandbox containing ColdFusion can handle large XML documents without adversely affecting server resources.
- Set the maximum JVM heap size (-Xmx) to less than the amount of RAM on the server.
- Remove DOCTYPE elements from the XML string before converting it to an XML object.
- Use scriptProtect to thwart most attempts of cross-site scripting. Set scriptProtect to All in the Application.cfc file.
- Use <cfparam> or <cfargument> to instantiate variables in ColdFusion. Use these tags with the name and type attributes. If the value is not of the specified type, ColdFusion returns an error.
- To handle untyped variables, use IsValid() to validate their values against any legal object type that ColdFusion supports.
- Use <cfqueryparam> and <cfprocparam> to validate dynamic SQL variables against database datatypes.
- Use CFLDAP for accessing LDAP servers. Avoid allowing native JNDI calls to connect to LDAP.
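The two core defenses named above are language-neutral: validate input against an expected type, and pass values to the database as bound parameters, never by string concatenation. The sketch below is illustrative only, written in Python with the standard sqlite3 module rather than CFML; IsValid() and <cfqueryparam> play the roles of the type check and the ? placeholder, respectively.

import sqlite3

def find_user(conn, user_id_raw):
    # Typed validation, as IsValid("integer", ...) would do in ColdFusion:
    # reject anything that is not a positive integer.
    try:
        user_id = int(user_id_raw)
    except (TypeError, ValueError):
        raise ValueError("user id must be an integer")
    if user_id <= 0:
        raise ValueError("user id must be positive")
    # Parameterized query, as <cfqueryparam> would do: the driver binds
    # the value, so input such as "1; DROP TABLE users" cannot change
    # the structure of the SQL statement.
    cur = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(find_user(conn, "1"))  # ('alice',)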
With the rapid expansion of cloud computing and server farms, data centers are struggling to keep up with the amount of cabling, and power, required to support their networks. Fiber-optic data links are used to transfer data between servers and to other areas, but the signals still require copper cabling in and around the server. On-chip optical interconnects embed a small silicon laser directly on the chip so that a high-bandwidth optical signal can be sent directly. This greatly reduces the amount of cabling required for server processors to make the necessary connections. Silicon photodetectors embedded on the chip provide the optical sensing needed to receive optical transmissions.
The concept of replacing traditional copper cable within computers with the optical equivalent will soon become commonplace. Intel's "Light Peak," seen in Figure 1.1, mounts fiber-optic cable directly to the chip. The transmitter is fabricated in a similar manner as the chip itself, allowing data transmission between devices at 10 Gbit/s, 20 times faster than USB 2.0. The device shown in Figure 3.1 is capable of transmitting 50 Gbit/s down a single channel. With higher-powered servers in mind, Hewlett-Packard has begun to use optical waveguides carved into plastic with metal reflectors to route optical data within servers, instead of discrete optical interconnects. These waveguides can be manufactured in the same fashion as a compact disc. IBM has begun mounting optical transmitters on chips to speed data flow between the cores of multicore processors. All of these ideas have been around for some time, but the technology to produce these chips is only now coming into play.
Figure 1.1 Intel’s Hybrid Integrated Silicon Laser
While these technologies are effective in increasing data flow in and around servers, photons must still be converted to electrons before processing. The only thing standing in the way of an all-optical digital processor is an optical transistor that can manipulate the flow of photons within the chip.
PDNS is based on the ns2 simulation tool. Each PDNS federate differs from ns2 in two important respects. First, modifications to ns2 were required to support distributed execution. A central problem that must be addressed when federating sequential network simulation software in this way is the global state problem: each PDNS federate no longer has global knowledge of the state of the system. Specifically, one ns2 federate cannot directly reference state information for network nodes that are instantiated in a different federate. In general, some provision must be made to deal with both static state information that does not change during the execution (e.g., topology information) and dynamic information that does change (e.g., queue lengths). Fortunately, due to the modular design of the ns2 software, one need only address this problem for static information concerning network topology, greatly simplifying the global state problem.
To address this problem, a naming convention is required for an ns2 federate to refer to global state information. Two types of remote information must be accessed. The first corresponds to link end points. Consider the case of a link that spans federate boundaries, i.e., the two link endpoints are mapped to different federates. Such links are referred to in PDNS as rlinks (remote links). Some provision is necessary to refer to end points that reside in a different federate; this is handled in PDNS by using an IP address and network mask to refer to any link endpoint. When configuring a simulation in PDNS, one creates a TCL script for each federate that instantiates the portion of the network mapped to that federate, and instantiates rlinks to represent the "edge" connections that span federates. The second situation where the global state issue arises concerns references to endpoints of logical connections; specifically, the final destination of a TCP flow may reside in a different federate. Here, PDNS again borrows well-known networking concepts and uses a port number and IP address to identify a logical end point.
The second way that PDNS differs from ns2 is its use of a technique called NIx-Vector routing. An initial study of ns2 indicated that routing table size placed severe limitations on the size of networks that could be simulated, because the amount of memory required to store routing tables grows as O(N^2), where N is the number of network nodes. To address this problem, message routes are computed dynamically, as needed. The path from source to destination is encoded as a sequence of (in effect) output port numbers called the NIx-Vector, leading to a compact representation. Previously computed routes are cached to avoid repeated re-computation of the same path. This greatly reduces the amount of memory required and greatly increases the size of network that can be simulated on a single node. The NIx-Vector technique is also applicable to the sequential version of ns2 and is used in PDNS.
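To make the idea concrete, here is a minimal sketch of NIx-Vector routing (our own illustration in Python, not PDNS source code): a route is stored as the sequence of neighbor indices ("output ports") to take at each hop, so no per-node table over all N destinations is needed.

from collections import deque

# Toy topology as adjacency lists; a node's i-th neighbor is "port i".
topology = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c", "e"],
    "e": ["d"],
}

def compute_nix_vector(src, dst):
    """BFS from src to dst; return the per-hop neighbor (port) indices."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            break
        for port, nbr in enumerate(topology[node]):
            if nbr not in parent:
                parent[nbr] = (node, port)
                queue.append(nbr)
    ports = []
    node = dst
    while parent[node] is not None:
        prev, port = parent[node]
        ports.append(port)
        node = prev
    return list(reversed(ports))

def follow(src, nix_vector):
    """Forward a packet by consuming one port index per hop."""
    node = src
    for port in nix_vector:
        node = topology[node][port]
    return node

nix = compute_nix_vector("a", "e")  # [0, 1, 2]: a -> b -> d -> e
assert follow("a", nix) == "e"

Caching previously computed NIx-Vectors, as PDNS does, replaces O(N^2) table storage with memory proportional to the paths actually used.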
Currently, optics is used mostly to link portions of computers, or more intrinsically in devices that have some optical application or component. For example, much progress has been achieved, and optical signal processors have been successfully used, in applications such as synthetic aperture radar, optical pattern recognition, optical image processing, fingerprint enhancement, and optical spectrum analysis. The early work in optical signal processing and computing was basically analog in nature. In the past two decades, however, a great deal of effort has been expended on the development of digital optical processors. Much work remains before digital optical computers are widely available commercially, but the pace of research and development increased through the 1990s. During the last decade, there has been continuing emphasis on the following aspects of optical computing:
- Optical tunnel devices are under continuous development varying from small caliber endoscopes to character recognition systems with multiple type capability.
- Development of optical processors for asynchronous transfer mode.
- Development of architectures for optical neural networks.
- Development of high accuracy analog optical processors, capable of processing large amounts of data in parallel.
Since photons are uncharged and do not interact with one another as readily as electrons, light beams may pass through one another (for example, in full-duplex operation) without distorting the information carried. In electronics, loops usually generate noise voltage spikes whenever the electromagnetic field through the loop changes. Furthermore, high-frequency or fast-switching pulses cause interference in neighboring wires. Signals in adjacent optical fibers or optical integrated channels, on the other hand, do not affect one another, nor do they pick up noise from loops. Finally, optical materials offer superior storage density and accessibility compared with magnetic materials. The field of optical computing is progressing rapidly and presents many dramatic opportunities for overcoming the limitations described earlier for current electronic computers. The process is already underway whereby optical devices are incorporated into many computing systems: laser diodes as sources of coherent light have dropped rapidly in price due to mass production, and optical CD-ROM discs are now very common in home and office computers.
Current trends in optical computing emphasize communications, for example the use of free-space optical interconnects as a potential solution to the bottlenecks experienced in electronic architectures, including loss of communication efficiency in multiprocessors and the difficulty of scaling IC technology down to sub-micron levels. Light beams can travel very close to each other, and even intersect, without observable or measurable generation of unwanted signals; therefore, dense arrays of interconnects can be built using optical systems. In addition, the risk of noise is further reduced, as light is immune to electromagnetic interference. Finally, because light travels fast and has extremely large spatial bandwidth and physical channel density, it is an excellent medium for information transport and hence can be harnessed for data processing. This high bandwidth capability offers a great deal of architectural advantage and flexibility. Based on the technology now available, future systems could have 1024 smart pixels per chip, with each channel clocked at 200 MHz (a chip I/O of 200 Gbit/s), giving an aggregate data capacity in the parallel optical highway of more than 200 Tbit/s; this could be further increased to 1000 Tbit/s. Free-space optical techniques are also used in scalable crossbar systems, which allow arbitrary interconnections between a set of inputs and a set of outputs. Optical sorting and optical crossbar interconnects are used in asynchronous transfer mode (ATM) switching, packet routing, and shared-memory multiprocessor systems.
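The quoted figures follow from simple multiplication, assuming each smart pixel carries one bit per clock cycle:

\[ 1024 \times 200\,\mathrm{Mb/s} = 204.8\,\mathrm{Gb/s} \approx 200\,\mathrm{Gb/s} \ \text{per chip}, \]

so on the order of a thousand such chips on the optical highway yields the quoted aggregate of roughly 200 Tbit/s.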
In optical computing, two types of memory are discussed. One consists of arrays of one-bit store elements; the other is mass storage, implemented by optical disks or by holographic storage systems. The latter type promises very high capacity and storage density. The primary benefits offered by holographic optical data storage over current storage technologies are significantly higher storage capacities and faster read-out rates. This research is expected to lead to the compact, high-capacity, rapid- and random-access, radiation-resistant, low-power, and low-cost data storage devices necessary for future intelligent spacecraft, as well as to massive-capacity, fast-access terrestrial data archives. As multimedia applications and services become more prevalent, entertainment and data storage companies are looking for ways to increase the amount of stored data and reduce the time it takes to retrieve it. Spatial light modulators (SLMs) and linear array beam steerers are used in optical data storage applications to write data into the optical storage medium at high speed. The analog nature of these devices means that data can be stored at much higher density than data written by conventional devices.

Researchers around the world are evaluating a number of inventive ways to store optical data while improving the performance and capacity of existing optical disk technology. While these approaches vary in materials and methods, they share a common objective: expanded capacity through stacking layers of optical material. For audio recordings, a 150-MB minidisk with a 2.5-in. diameter has been developed that uses special compression to shrink a standard CD's 640 MB of storage onto the smaller polymer substrate. It is rewritable and uses magnetic field modulation on optical material: a magnetic field placed behind the optical disk is modulated while the intensity of the writing laser is held constant. By switching the polarity of the magnetic field while the laser creates a state of flux in the optical material, digital data can be recorded on a single layer. As with all optical storage media, a read laser retrieves the data. Along with minidisk developments, standard magneto-optical CD technology has expanded the capacity of the 3.5-in. diameter disk from 640 MB to commercially available 1-GB storage media. These conventional storage media modulate the laser instead of the magnetic field during the writing process. Fourth-generation 5.25-in. diameter disks that use the same technology have reached capacities of 4 GB per disk. These disks are used mainly in "jukebox" devices: not to be confused with the musical jukebox, these machines contain multiple disks for storage and backup of large amounts of data that must be accessed quickly.
Beyond these existing systems are several laboratory systems that use multiple layers of optical material on a single disk. The one with the largest capacity, magnetic super-resolution (MSR), uses two layers of optical material. The data is written onto the bottom layer through a writing laser and magnetic field modulation (MFM). When reading the disk in MSR mode, the data is copied from the lower layer to the upper layer with greater spacing between bits. In this way, data can be stored much closer together on the bottom layer (at distances smaller than the read-beam wavelength) without losing data due to averaging across bits. This method is close to commercial production, offering capacities of up to 20 GB on a 5.25-in. disk without the need to alter conventional read-laser technology.

Advanced storage magneto-optics (ASMO) builds on MSR, with one key difference. Standard optical disks, including those used in MSR, have grooves and lands just like a phonograph record; the grooves serve as guideposts for the writing and reading lasers. However, standard systems record data only in the grooves, not on the lands, wasting part of the optical material's capacity. ASMO records data on both lands and grooves and, by choosing groove depths approximately 1/6 the wavelength of the reading laser light, the system can eliminate the cross-track crosstalk that would normally result from recording on both. Even conventional CD recordings pick up data from neighboring tracks, but this information is filtered out, reducing the signal-to-noise ratio. By closely controlling the groove depth, ASMO eliminates this problem while maximizing the signal-to-noise ratio. MSR and ASMO technologies are expected to produce removable optical disk drives with capacities between 6 and 20 GB on a 12-cm optical disk, the same size as a standard 640-MB CD.

Magnetic amplifying magneto-optical systems (MAMMOS) use a standard polymer disk with two or three magnetic layers. In general terms, MAMMOS is similar to MSR, except that when the data is copied from the bottom to the upper layer, it is expanded in size, amplifying the signal.