08/10/19

Review of Computer Energy Consumption and Potential Savings - P2P Computing


Name          : Rachma Oktari
NIM           : 001201907023
Subject       : Distributed Systems
Lecturer      : Tjong Wan Sen
Faculty/Major : Computing/MSIT
Task 1        : Research the percentage or ratio of used vs. idle computing power on smartphones.


Computers and monitors account for 40%-60% of the energy used by office equipment. Their energy consumption is second only to office lighting (Picklum et al. 1999, Roth et al. 2002).
Reducing the energy consumption of computers and monitors is simple. A power-managed computer consumes less than half the energy of a computer without power management, and depending on how your computers are used, power management can reduce the annual energy consumption of your computers and monitors by 80%.
The average computer and monitor use 30% of their energy while idle and 40% of their energy outside business hours (Kawamoto et al. 2004). Power management reduces the energy consumed by computers and monitors while they are not in use. This represents a clear opportunity to cut energy costs.
The energy consumption of computers and monitors is influenced by two factors:
1. The energy required to run the device, or the power draw;
2. How and when the device is used, that is, its usage pattern.
The difference in the energy requirements of newer and older computers is highlighted by two studies: one by Roberson et al. (2002) and the other by Kawamoto et al. (2001).
The study by Roberson et al. (2002) looked at computers manufactured between July 2000 and October 2001. The study by Kawamoto et al. (2001), carried out around the same time, looked at the computers already in use at office sites; as such, these were a mix of older models manufactured before 2001.
These studies show that, on average, newer computers use 70 W when active and 9 W in low power mode (Roberson et al. 2002), whereas older computers use 55 W when active and 25 W in low power mode (Kawamoto et al. 2001). Table 1 summarises these figures.

Table 1: Energy requirements of computers

                                          Active mode   Low power mode
Newer computers (Roberson et al. 2002)       70 W             9 W
Older computers (Kawamoto et al. 2001)       55 W            25 W
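
As a rough illustration of how power draw and usage pattern combine into annual consumption, here is a minimal Python sketch. Only the 70 W and 9 W power draws come from the figures above; the 2 W "off" draw and the per-state hours are assumptions for illustration, not data from the studies.

# Energy (kWh) = power draw in each state (W) x hours in that state / 1000.
# "active" and "low_power" draws are the newer-computer averages above
# (Roberson et al. 2002); the "off" draw and all hours are assumed.
POWER_W = {"active": 70, "low_power": 9, "off": 2}
HOURS_PER_DAY = {"active": 3.0, "low_power": 3.9, "off": 17.1}

daily_kwh = sum(POWER_W[s] * HOURS_PER_DAY[s] for s in POWER_W) / 1000
print(f"Daily:  {daily_kwh:.3f} kWh")        # ~0.279 kWh
print(f"Annual: {daily_kwh * 365:.0f} kWh")  # ~102 kWh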


A study by Kawamoto et al. (2004) found that in the average office, a computer is on for 6.9 hours a day. Of those 6.9 hours, it is in active use for 3 hours and idle for the remaining 3.9 hours.
A computer which is actively used 3 hours a day, 5 days a week, is in use for only 9% of the week.
The studies by Mungwititkul and Mohanty (1997) and Nordman (1999) both found that, on average, computers are active for 9% of the year.
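
A quick sanity check of the 9% figure, as a trivial Python snippet:

active_hours_per_week = 3 * 5   # 3 hours/day, 5 business days
hours_in_a_week = 24 * 7        # 168
print(f"{active_hours_per_week / hours_in_a_week:.1%}")  # 8.9%, i.e. about 9%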

It is unrealistic to assume that a computer is put into low power mode as soon as it becomes idle. Power-managed computers usually enter low power mode after a specified delay, and the length of that delay determines how much of the idle time is spent in active mode versus low power mode.

The study by Kawamoto et al. (2004) found that the average computer is idle for 3.9 hours a day. If a computer goes into low power mode after 5 minutes of idle time, it will spend 76% of those 3.9 hours in low power mode. If a computer goes into low power mode after 30 minutes of idle time, it will spend just 34% of those 3.9 hours in low power mode.
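
The effect of the delay is easy to model: each idle period contributes low-power time only beyond the delay. The sketch below illustrates this; the idle-period durations are hypothetical values chosen to sum to 3.9 hours, not measurements from the study.

def low_power_fraction(idle_periods_min, delay_min):
    """Fraction of total idle time spent in low power mode, assuming the
    machine enters low power after delay_min minutes of inactivity
    (a simplification of real power management policies)."""
    total_idle = sum(idle_periods_min)
    low_power = sum(max(0, p - delay_min) for p in idle_periods_min)
    return low_power / total_idle

# Hypothetical idle periods over one day, in minutes (sum = 234 min = 3.9 h).
idle_periods = [3, 5, 8, 10, 15, 20, 25, 30, 40, 78]
for delay in (5, 30):
    share = low_power_fraction(idle_periods, delay)
    print(f"{delay:>2}-minute delay: {share:.0%} of idle time in low power")

With this toy distribution, a 5-minute delay puts about 79% of the idle time into low power mode and a 30-minute delay only about 25%, broadly in line with the 76% and 34% reported by Kawamoto et al. (2004).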

The best length for the delay period will be determined by how the computer is used. Someone who spends a lot of time reading on screen will need a longer delay period than someone who spends most of their time typing.

Table 3: Effect of idle time delay on power state (source: Kawamoto et al. 2004)

Delay before entering low power mode   Share of idle time spent in low power mode
5 minutes                              76%
30 minutes                             34%

The idea of using spare computing resources has been addressed for some time by traditional distributed computing systems. The Beowulf project from NASA [Becker et al. 1995] was a major milestone that showed that high performance can be obtained by using a number of standard machines. Other efforts, such as MOSIX [Barak and Litman 1985, Barak and Wheeler 1989] and Condor [Litzkow et al. 1988, Litzkow and Solomon 1992], also addressed distributed computing in a community of machines, focusing on the delegation or migration of computing tasks from machine to machine.

Derivatives of grid computing based on standard Internet-connected PCs began to appear in the late 90s. They achieve processing scalability by aggregating the resources of a large number of individual computers. Typically, distributed computing requires applications that are run in a proprietary way by a central controller. Such applications usually target massive multi-parameter systems, with long running jobs (months or years) using P2P foundations. One of the first widely visible distributed computing events occurred in January 1999, when distributed.net, with the help of several tens of thousands of Internet computers, broke the RSA challenge [Cavallar et al. 2000] in less than 24 hours using a distributed computing approach. This made people realize how much power can be available from idle Internet PCs.

In the biotechnology sector, the need for advanced computing techniques is being driven by the availability of colossal amounts of data. For instance, genomic research has close to three billion sequences in the human genome database. Applying statistical inference techniques to data of this magnitude requires unprecedented computational power. Traditionally, scientists have used high-performance clustering (HPC) and supercomputing solutions, and have been forced to employ approximating techniques in order to complete studies in an acceptable amount of time.

By harnessing idle computing cycles (95%-98% unused) from general purpose machines on the network, and by grouping multi-site resources, grid computing makes more computing power available to researchers. Grid solutions partition the problem space among the aggregated resources to speed up completion times. Companies such as Platform Computing (LSF) [Platform Computing 2001, Zhou et al. 1994], Entropia [2001], Avaki [2001] and Grid Computing Bioinformatics [2001] offer complete HPC and grid solutions to biological research organizations and pharmaceutical research and development. Genomics and proteomics projects such as Genome@home [2001] and Folding@home [2001], managed by groups at Stanford, make use of the idle cycles of registered clients to compute parts of the complex genome sequencing and protein folding problems [Natarajan 2001].
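
To make the partitioning idea concrete in miniature, here is a sketch in Python that splits an embarrassingly parallel parameter sweep across local worker processes. Real grid middleware farms such chunks out across many machines, but the decomposition principle is the same; the score function and parameter space here are invented placeholders, not any vendor's API.

from concurrent.futures import ProcessPoolExecutor

def score(params):
    """Placeholder for an expensive per-parameter computation,
    e.g. one statistical test over a slice of sequence data."""
    x, y = params
    return x * x + y * y

# Partition the parameter space into chunks and hand them to workers.
parameter_space = [(i, i + 1) for i in range(1000)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(score, parameter_space, chunksize=100))
    print(len(results), "parameter sets evaluated")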

Sources:
https://www.dssw.co.uk/research/computer_energy_consumption.html
https://www.hpl.hp.com/techreports/2002/HPL-2002-57R1.pdf

