Search Results

Search found 3392 results on 136 pages for 'average joe'.

Page 8/136 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Which type of memory is faster than average for server use?

    - by Tony_Henrich
    I am building a server which will be used for SQL Server, and I am planning to put 32 GB+ of RAM in it and keep the databases in memory. (I know all about the data-loss issues when power is gone.) I haven't kept up to date with the newer types of memory sticks out there. What kind of memory should I get that is faster than average but not very expensive? Since I am buying a lot of RAM, I am looking for something above average but below high end, if high end turns out to be very expensive. (I will be using Windows Server 2008 R2 Standard or Windows HPC Server 2008 R2.)

    Read the article

  • Using BPEL Performance Statistics to Diagnose Performance Bottlenecks

    - by fip
    Tuning performance of Oracle SOA 11G applications could be challenging. Because SOA is a platform for you to build composite applications that connect many applications and "services", when the overall performance is slow, the bottlenecks could be anywhere in the system: the applications/services that SOA connects to, the infrastructure database, or the SOA server itself.How to quickly identify the bottleneck becomes crucial in tuning the overall performance. Fortunately, the BPEL engine in Oracle SOA 11G (and 10G, for that matter) collects BPEL Engine Performance Statistics, which show the latencies of low level BPEL engine activities. The BPEL engine performance statistics can make it a bit easier for you to identify the performance bottleneck. Although the BPEL engine performance statistics are always available, the access to and interpretation of them are somewhat obscure in the early and current (PS5) 11G versions. This blog attempts to offer instructions that help you to enable, retrieve and interpret the performance statistics, before the future versions provides a more pleasant user experience. Overview of BPEL Engine Performance Statistics  SOA BPEL has a feature of collecting some performance statistics and store them in memory. One MBean attribute, StatLastN, configures the size of the memory buffer to store the statistics. This memory buffer is a "moving window", in a way that old statistics will be flushed out by the new if the amount of data exceeds the buffer size. Since the buffer size is limited by StatLastN, impacts of statistics collection on performance is minimal. By default StatLastN=-1, which means no collection of performance data. Once the statistics are collected in the memory buffer, they can be retrieved via another MBean oracle.as.soainfra.bpel:Location=[Server Name],name=BPELEngine,type=BPELEngine.> My friend in Oracle SOA development wrote this simple 'bpelstat' web app that looks up and retrieves the performance data from the MBean and displays it in a human readable form. It does not have beautiful UI but it is fairly useful. Although in Oracle SOA 11.1.1.5 onwards the same statistics can be viewed via a more elegant UI under "request break down" at EM -> SOA Infrastructure -> Service Engines -> BPEL -> Statistics, some unsophisticated minds like mine may still prefer the simplicity of the 'bpelstat' JSP. One thing that simple JSP does do well is that you can save the page and send it to someone to further analyze Follows are the instructions of how to install and invoke the BPEL statistic JSP. My friend in SOA Development will soon blog about interpreting the statistics. Stay tuned. Step1: Enable BPEL Engine Statistics for Each SOA Servers via Enterprise Manager First st you need to set the StatLastN to some number as a way to enable the collection of BPEL Engine Performance Statistics EM Console -> soa-infra(Server Name) -> SOA Infrastructure -> SOA Administration -> BPEL Properties Click on "More BPEL Configuration Properties" Click on attribute "StatLastN", set its value to some integer number. Typically you want to set it 1000 or more. Step 2: Download and Deploy bpelstat.war File to Admin Server, Note: the WAR file contains a JSP that does NOT have any security restriction. You do NOT want to keep in your production server for a long time as it is a security hazard. Deactivate the war once you are done. 
Download the bpelstat.war to your local PC At WebLogic Console, Go to Deployments -> Install Click on the "upload your file(s)" Click the "Browse" button to upload the deployment to Admin Server Accept the uploaded file as the path, click next Check the default option "Install this deployment as an application" Check "AdminServer" as the target server Finish the rest of the deployment with default settings Console -> Deployments Check the box next to "bpelstat" application Click on the "Start" button. It will change the state of the app from "prepared" to "active" Step 3: Invoke the BPEL Statistic Tool The BPELStat tool merely call the MBean of BPEL server and collects and display the in-memory performance statics. You usually want to do that after some peak loads. Go to http://<admin-server-host>:<admin-server-port>/bpelstat Enter the correct admin hostname, port, username and password Enter the SOA Server Name from which you want to collect the performance statistics. For example, SOA_MS1, etc. Click Submit Keep doing the same for all SOA servers. Step 3: Interpret the BPEL Engine Statistics You will see a few categories of BPEL Statistics from the JSP Page. First it starts with the overall latency of BPEL processes, grouped by synchronous and asynchronous processes. Then it provides the further break down of the measurements through the life time of a BPEL request, which is called the "request break down". 1. Overall latency of BPEL processes The top of the page shows that the elapse time of executing the synchronous process TestSyncBPELProcess from the composite TestComposite averages at about 1543.21ms, while the elapse time of executing the asynchronous process TestAsyncBPELProcess from the composite TestComposite2 averages at about 1765.43ms. The maximum and minimum latency were also shown. Synchronous process statistics <statistics>     <stats key="default/TestComposite!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestSyncBPELProcess" min="1234" max="4567" average="1543.21" count="1000">     </stats> </statistics> Asynchronous process statistics <statistics>     <stats key="default/TestComposite2!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestAsyncBPELProcess" min="2234" max="3234" average="1765.43" count="1000">     </stats> </statistics> 2. Request break down Under the overall latency categorized by synchronous and asynchronous processes is the "Request breakdown". Organized by statistic keys, the Request breakdown gives finer grain performance statistics through the life time of the BPEL requests.It uses indention to show the hierarchy of the statistics. Request breakdown <statistics>     <stats key="eng-composite-request" min="0" max="0" average="0.0" count="0">         <stats key="eng-single-request" min="22" max="606" average="258.43" count="277">             <stats key="populate-context" min="0" max="0" average="0.0" count="248"> Please note that in SOA 11.1.1.6, the statistics under Request breakdown is aggregated together cross all the BPEL processes based on statistic keys. It does not differentiate between BPEL processes. If two BPEL processes happen to have the statistic that share same statistic key, the statistics from two BPEL processes will be aggregated together. Keep this in mind when we go through more details below. 2.1 BPEL process activity latencies A very useful measurement in the Request Breakdown is the performance statistics of the BPEL activities you put in your BPEL processes: Assign, Invoke, Receive, etc. 
The names of the measurement in the JSP page directly come from the names to assign to each BPEL activity. These measurements are under the statistic key "actual-perform" Example 1:  Follows is the measurement for BPEL activity "AssignInvokeCreditProvider_Input", which looks like the Assign activity in a BPEL process that assign an input variable before passing it to the invocation:                                <stats key="AssignInvokeCreditProvider_Input" min="1" max="8" average="1.9" count="153">                                     <stats key="sensor-send-activity-data" min="0" max="1" average="0.0" count="306">                                     </stats>                                     <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">                                     </stats>                                     <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">                                     </stats>                                 </stats> Note: because as previously mentioned that the statistics cross all BPEL processes are aggregated together based on statistic keys, if two BPEL processes happen to name their Invoke activity the same name, they will show up at one measurement (i.e. statistic key). Example 2: Follows is the measurement of BPEL activity called "InvokeCreditProvider". You can not only see that by average it takes 3.31ms to finish this call (pretty fast) but also you can see from the further break down that most of this 3.31 ms was spent on the "invoke-service".                                  <stats key="InvokeCreditProvider" min="1" max="13" average="3.31" count="153">                                     <stats key="initiate-correlation-set-again" min="0" max="0" average="0.0" count="153">                                     </stats>                                     <stats key="invoke-service" min="1" max="13" average="3.08" count="153">                                         <stats key="prep-call" min="0" max="1" average="0.04" count="153">                                         </stats>                                     </stats>                                     <stats key="initiate-correlation-set" min="0" max="0" average="0.0" count="153">                                     </stats>                                     <stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="306">                                     </stats>                                     <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">                                     </stats>                                     <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">                                     </stats>                                     <stats key="update-audit-trail" min="0" max="2" average="0.03" count="153">                                     </stats>                                 </stats> 2.2 BPEL engine activity latency Another type of measurements under Request breakdown are the latencies of underlying system level engine activities. These activities are not directly tied to a particular BPEL process or process activity, but they are critical factors in the overall engine performance. These activities include the latency of saving asynchronous requests to database, and latency of process dehydration. 
My friend Malkit Bhasin is working on providing more information on interpreting the statistics on engine activities on his blog (https://blogs.oracle.com/malkit/). I will update this blog once the information becomes available. Update on 2012-10-02: My friend Malkit Bhasin has published the detail interpretation of the BPEL service engine statistics at his blog http://malkit.blogspot.com/2012/09/oracle-bpel-engine-soa-suite.html.
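
    As a side note (not part of the original post), the <stats> XML shown above lends itself to quick offline analysis once you have saved the bpelstat page. The sketch below is a small Python script that parses a saved <statistics> block and ranks statistic keys by average latency; the element and attribute names (stats, key, average, count) are taken from the samples above, while the file name is a made-up placeholder.

        import xml.etree.ElementTree as ET

        # Hypothetical file containing a single <statistics> block copied from the bpelstat page.
        tree = ET.parse("bpel_statistics.xml")

        rows = []
        for stat in tree.iter("stats"):  # includes nested <stats> elements
            # Attribute names follow the samples shown above: key, min, max, average, count.
            rows.append((stat.get("key"),
                         float(stat.get("average", "0")),
                         int(stat.get("count", "0"))))

        # Rank statistic keys by average latency (milliseconds) to spot likely bottlenecks.
        for key, average, count in sorted(rows, key=lambda r: r[1], reverse=True)[:20]:
            print(f"{average:10.2f} ms  x{count:<6d}  {key}")

    This is only a convenience for eyeballing the slowest keys; the interpretation guidance referenced above still applies.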

    Read the article

  • Advice on database design / SQL for retrieving data with chronological order

    - by Remnant
    I am creating a database that will help keep track of which employees have been on a certain training course. I would like to get some guidance on the best way to design the database. Specifically, each employee must attend the training course each year, and my database needs to keep a history of all the dates on which they have attended the course in the past. The end user will use the software as a planning tool to help them book future course dates for employees. When they select a given employee they will see: (a) last attendance date, (b) projected future attendance date (i.e. last attendance date + 1 calendar year). In terms of my database, any given employee may have multiple past course attendance dates:

        EmpName     AttandanceDate
        Joe Bloggs  1st Jan 2007
        Joe Bloggs  4th Jan 2008
        Joe Bloggs  3rd Jan 2009
        Joe Bloggs  8th Jan 2010

    My question is: what is the best way to set up the database to make it easy to retrieve the most recent course attendance date? In the example above, the most recent would be 8th Jan 2010. Is there a good way to use SQL to sort by date and pick the MAX date? My other idea was to add a column called ‘MostRecent’ and just set this to TRUE:

        EmpName     AttandanceDate  MostRecent
        Joe Bloggs  1st Jan 2007    False
        Joe Bloggs  4th Jan 2008    False
        Joe Bloggs  3rd Jan 2009    False
        Joe Bloggs  8th Jan 2010    True

    I wondered if this would simplify the SQL, i.e. SELECT Joe Bloggs WHERE MostRecent = ‘TRUE’. Also, when the user updates a given employee’s attendance record (i.e. with the latest attendance date) I could use SQL to: search for the employee and set the MostRecent value to FALSE, then add a new record with MostRecent set to TRUE? Would anybody recommend either method over the other? Or do you have a completely different way of solving this problem?
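
    For what it's worth, the MAX-per-group approach usually avoids the need for a MostRecent flag altogether. Below is a minimal sketch (Python with sqlite3 purely so it is self-contained and runnable); the table name is made up, the misspelled column name is kept as it appears in the post, and the dates are stored as ISO strings so that MAX() orders them correctly.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE Attendance (EmpName TEXT, AttandanceDate TEXT)")
        conn.executemany(
            "INSERT INTO Attendance VALUES (?, ?)",
            [("Joe Bloggs", "2007-01-01"), ("Joe Bloggs", "2008-01-04"),
             ("Joe Bloggs", "2009-01-03"), ("Joe Bloggs", "2010-01-08")])

        # Latest attendance per employee: no flag column needed.
        print(conn.execute(
            "SELECT EmpName, MAX(AttandanceDate) AS LastAttendance "
            "FROM Attendance GROUP BY EmpName").fetchone())
        # ('Joe Bloggs', '2010-01-08')

        # Projected next attendance = last attendance + 1 calendar year.
        print(conn.execute(
            "SELECT EmpName, DATE(MAX(AttandanceDate), '+1 year') "
            "FROM Attendance GROUP BY EmpName").fetchone())

    The trade-off: the flag column makes the lookup trivial but forces two writes per update and can drift out of sync; MAX() keeps the data normalized at the cost of a slightly more involved query.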

    Read the article

  • How do I average the difference between specific values in TSQL?

    - by jvenema
    Hey folks, sorry this is a bit of a longer question... I have a table with the following columns: [ChatID] [User] [LogID] [CreatedOn] [Text] What I need to find is the average response time for a given user id, to another specific user id. So, if my data looks like: [1] [john] [20] [1/1/11 3:00:00] [Hello] [1] [john] [21] [1/1/11 3:00:23] [Anyone there?] [1] [susan] [22] [1/1/11 3:00:43] [Hello!] [1] [susan] [23] [1/1/11 3:00:53] [What's up?] [1] [john] [24] [1/1/11 3:01:02] [Not much] [1] [susan] [25] [1/1/11 3:01:08] [Cool] ...then I need to see that Susan has an average response time of (20 + 6) / 2 = 13 seconds to John, and John has an average of (9 / 1) = 9 seconds to Susan. I'm not even sure this can be done in set-based logic, but if anyone has any ideas, they'd be much appreciated!
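
    Not the set-based T-SQL the question asks for, but the pairing rule implied by the worked numbers above is easy to pin down: a "response" is the gap between the last message of a consecutive run by one user and the first following message by the other. A plain-Python sketch of that calculation (timestamps and names taken from the sample rows, ChatID ignored for brevity):

        from datetime import datetime

        log = [  # (user, created_on) in chat order, as in the sample data
            ("john",  "2011-01-01 03:00:00"), ("john",  "2011-01-01 03:00:23"),
            ("susan", "2011-01-01 03:00:43"), ("susan", "2011-01-01 03:00:53"),
            ("john",  "2011-01-01 03:01:02"), ("susan", "2011-01-01 03:01:08"),
        ]
        msgs = [(u, datetime.strptime(t, "%Y-%m-%d %H:%M:%S")) for u, t in log]

        def avg_response(responder, other):
            # Gap between the last message of a run by `other` and the next message by `responder`.
            waits = []
            for (u1, t1), (u2, t2) in zip(msgs, msgs[1:]):
                if u1 == other and u2 == responder:
                    waits.append((t2 - t1).total_seconds())
            return sum(waits) / len(waits)

        print(avg_response("susan", "john"))   # (20 + 6) / 2 = 13.0
        print(avg_response("john", "susan"))   # 9 / 1 = 9.0

    In SQL the same pairing could be expressed with a self-join or window functions on adjacent rows, but the Python above is only meant to make the definition of "response time" explicit.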

    Read the article

  • How to calculate Centered Moving Average of a set of data in Hadoop Map-Reduce?

    - by 100gods
    I want to calculate the centered moving average of a set of data.

    Example input format:

        quarter | sales
        Q1'11   | 9
        Q2'11   | 8
        Q3'11   | 9
        Q4'11   | 12
        Q1'12   | 9
        Q2'12   | 12
        Q3'12   | 9
        Q4'12   | 10

    Mathematical representation of the data and calculation of the moving average, and then the centered moving average:

        Period  Value   MA      Centered
        1       9
        1.5
        2       8
        2.5             9.5
        3       9               9.5
        3.5             9.5
        4       12              10.0
        4.5             10.5
        5       9               10.750
        5.5             11.0
        6       12
        6.5
        7       9

    I am stuck with the implementation of a RecordReader which will provide the mapper with the sales values of one year, i.e. of four quarters. (See the RecordReader Problem Question Thread.) Thanks
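
    Leaving the MapReduce plumbing and the RecordReader question aside, the centered moving average itself is just two passes of averaging. A small sketch of the arithmetic in Python, computed from the quarterly sales column above (the worked table in the question appears to use a slightly different sample, so the later values differ); a mapper/reducer would apply the same steps per key:

        sales = [9, 8, 9, 12, 9, 12, 9, 10]   # Q1'11 .. Q4'12 from the input above

        # 4-quarter moving averages sit "between" periods (at 2.5, 3.5, ...).
        ma4 = [sum(sales[i:i + 4]) / 4.0 for i in range(len(sales) - 3)]

        # Centering averages each adjacent pair of 4-quarter MAs back onto whole periods 3, 4, ...
        centered = [(a + b) / 2.0 for a, b in zip(ma4, ma4[1:])]

        for period, value in zip(range(3, 3 + len(centered)), centered):
            print(period, value)   # 3 9.5, 4 10.0, 5 10.5, 6 10.25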

    Read the article

  • How much is a subscriber worth?

    - by Tom Lewin
    This year at Red Gate, we’ve started providing a way to back up SQL Azure databases and Azure storage. We decided to sell this as a service, instead of a product, which means customers only pay for what they use. Unfortunately for us, it makes figuring out revenue much trickier. With a product like SQL Compare, a customer pays for it, and it’s theirs for good. Sure, we offer support and upgrades, but, fundamentally, the sale is a simple, upfront transaction: we’ve made this product, you need this product, we swap product for money and everyone is happy. With software as a service, it isn’t that easy. The money and product don’t change hands up front. Instead, we provide a service in exchange for a recurring fee. We know someone buying SQL Compare will pay us $X, but we don’t know how long service customers will stay with us, or how much they will spend. How do we find this out? We use lifetime value analysis. What is lifetime value? Lifetime value, or LTV, is how much a customer is worth to the business. For Entrepreneurs has a brilliant write up that we followed to conduct our analysis. Basically, it all boils down to this equation: LTV = ARPU x ALC To make it a bit less of an alphabet-soup and a bit more understandable, we can write it out in full: The lifetime value of a customer equals the average revenue per customer per month, times the average time a customer spends with the service Simple, right? A customer is worth the average spend times the average stay. If customers pay on average $50/month, and stay on average for ten months, then a new customer will, on average, bring in $500 over the time they are a customer! Average spend is easy to work out; it’s revenue divided by customers. The problem comes when we realise that we don’t know exactly how long a customer will stay with us. How can we figure out the average lifetime of a customer, if we only have six months’ worth of data? The answer lies in the fact that: Average Lifetime of a Customer = 1 / Churn Rate The churn rate is the percentage of customers that cancel in a month. If half of your customers cancel each month, then your average customer lifetime is two months. The problem we faced was that we didn’t have enough data to make an estimate of one month’s cancellations reliable (because barely anybody cancels)! To deal with this data problem, we can take data from the last three months instead. This means we have more data to play with. We can still use the equation above, we just need to multiply the final result by three (as we worked out how many three month periods customers stay for, and we want our answer to be in months). Now these estimates are likely to be fairly unreliable; when there’s not a lot of data it pays to be cautious with inference. That said, the numbers we have look fairly consistent, and it’s super easy to revise our estimates when new data comes in. At the very least, these numbers give us a vague idea of whether a subscription business is viable. As far as Cloud Services goes, the business looks very viable indeed, and the low cancellation rates are much more than just data points in LTV equations; they show that the product is working out great for our customers, which is exactly what we’re looking for!
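
    To make the equations concrete, the whole calculation fits in a few lines; the figures below are obviously made up, since the real ARPU and churn numbers are not in the post:

        arpu = 50.0            # average revenue per customer per month (made-up)
        cancellations = 4      # cancellations over the last three months (made-up)
        customers = 200        # average customer count over the same three months (made-up)

        churn_3m = cancellations / customers        # churn per 3-month period
        avg_lifetime_months = (1 / churn_3m) * 3    # lifetimes counted in 3-month periods, converted to months

        ltv = arpu * avg_lifetime_months            # LTV = ARPU x ALC
        print(avg_lifetime_months, ltv)             # 150.0 months, 7500.0

    The "multiply by three" step is exactly the adjustment described above: measuring churn over a three-month window gives lifetimes in three-month periods, so they have to be converted back to months before multiplying by the monthly ARPU.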

    Read the article

  • Trying to output a list using class

    - by captain morgan
    Am trying to get the moving average of a price..but i keep getting an attribute error in my Moving_Average class. ('Moving_Average' object has no attribute 'days'). Here is what I have: class Moving_Average: def calculation(self, alist:list,days:int): m = self.days prices = alist[1::2] average = [0]* len(prices) signal = ['']* len(prices) for m in range(0,len(prices)-days+1): average[m+2] = sum(prices[m:m+days])/days if prices[m+2] < average[m+2]: signal[m+2]='SELL' elif prices[m+2] > average[m+2] and prices[m+1] < average[m+1]: signal[m+2]='BUY' else: signal[m+2] ='' return average,signal def print_report(symbol:str,strategy:str): print('SYMBOL: ', symbol) print('STRATEGY: ', strategy) print('Date Closing Strategy Signal') def user(): strategy = ''' Which of the following strategy would you like to use? * Simple Moving Average [S] * Directional Indicator[D] Please enter your choice: ''' if signal_strategy in 'Ss': days = input('Please enter the number of days for the average') days = int(days) strategy = 'Simple Moving Average {}-days'.format(str(days)) m = Moving_Average() ma = m.calculation(gg, days) print(ma) gg is an list that contains date and prices. [2013-10-01,60,2013-10-02,60] The output is supposed to look like: Date Price Average Signal 2013-10-01 60.0 2013-10-02 60.0 60.00 BUY
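
    The AttributeError almost certainly comes from the line m = self.days: days only ever arrives as a parameter to calculation(), it is never assigned to the instance, so self.days does not exist (and that assignment is immediately overwritten by the for loop anyway). Below is a stripped-down sketch of the same idea with the parameter used directly; the prices and the simplified BUY/SELL rule are illustrative only (the original crossover condition against the previous bar can be added back in the same place), and the date/price report formatting is left out.

        class MovingAverage:
            def calculation(self, prices, days):
                """Return (averages, signals) aligned with prices; entries before days-1 stay empty."""
                averages = [None] * len(prices)
                signals = [''] * len(prices)
                for i in range(days - 1, len(prices)):
                    averages[i] = sum(prices[i - days + 1:i + 1]) / days
                    if prices[i] > averages[i]:
                        signals[i] = 'BUY'
                    elif prices[i] < averages[i]:
                        signals[i] = 'SELL'
                return averages, signals

        prices = [60.0, 60.0, 61.5, 59.0]          # made-up closing prices
        avgs, sigs = MovingAverage().calculation(prices, days=2)
        print(avgs)   # [None, 60.0, 60.75, 60.25]
        print(sigs)   # ['', '', 'BUY', 'SELL']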

    Read the article

  • How to find the average color of an image.

    - by Edward Boyle
    Years ago I was the lead developer of a large Scrapbook Web Site. One of the things I implemented was to allow shoppers to find Scrapbook papers and embellishments of like colors (“more like this color”). Below is the base algorithm I wrote to extract the color from an image. It worked out pretty well. I took the returned values and stored them in an associated table for the products. Yet another algorithm was used to SELECT near matches. This algorithm has turned out to be very handy for me. I have used it for borders and subtle outlined text overlays. I am sure you will find more creative uses for it. Enjoy…

        private Color GetColor(Bitmap bmp)
        {
            int r = 0;
            int g = 0;
            int b = 0;
            Color mColor = System.Drawing.Color.White;
            for (int i = 1; i < bmp.Width; i++)
            {
                for (int x = 1; x < bmp.Height; x++)
                {
                    mColor = bmp.GetPixel(i, x);
                    r += mColor.R;
                    g += mColor.G;
                    b += mColor.B;
                }
            }
            r = (r / (bmp.Height * bmp.Width));
            g = (g / (bmp.Height * bmp.Width));
            b = (b / (bmp.Height * bmp.Width));
            return System.Drawing.Color.FromArgb(r, g, b);
        }

    You could also get the RGB values by passing the RGB components in by ref:

        private Color GetColor(ref int r, ref int g, ref int b, Bitmap bmp)

    but that is a bit much, as you can simply get them from the return value:

        mReturnedColor.R;
        mReturnedColor.G;
        mReturnedColor.B;

    Read the article

  • Cutting Edge versus Just Average? Your SOA, Got BPM? by Mala Ramakrishnan

    - by JuergenKress
    Service Oriented Architecture (SOA) has completely transformed IT from the time it was introduced well over a decade ago. Organizations have been re-plumbing their infrastructure for reusability, efficiency and gain and succeeding with it. Best practices have emerged and people and technology have matured. We have got better at delivering on a stable platform on mission critical applications and services. Yet, there is this one secret that sets some SOA customers apart from the others. These companies grow and revolutionize their business and not just transform their IT infrastructure. The differences seem subtle for an untrained eye examining these organizations externally. And from within the company, it’s a bit like an ant sitting on an elephant, hard to differentiate between the IT trunk and business tail. What is it that some organizations do differently that makes them succeed beyond SOA? These organizations pull in business people more and more to weigh into their IT decisions. They wrench understanding process over services. They don’t settle easily when bridging business metrics and IT performance. They anguish over business requirements not translating seamlessly and quickly into IT. IT is not just an enabler but a pillar that revolutionizes their business. Okay, I’ll give it to you. These organizations layer Business Process Management (BPM) on top of their SOA. Think about lifeblood business processes in your own organizations. If you are Fedex, this would be shipping and handling. If you are Stanford Hospital, this would be patient case-management: from on-boarding through discharge and follow-up care. If you are Wells Fargo, this would be loan origination. Now think about how your SOA ties into your business process. Can you decouple your business processes from your SOA so that the two can transform and change independent of each other? Can you forecast success metrics for your business process, make the changes across the board and then look back over different periods of time to see if you are on track? Are your critical business processes entrenched in the minds of few experts in your organization or does everyone from the receptionist to your enterprise architect to your CEO understand what they can do to revolutionize it? Business Process Management is a superset of SOA. It is the process of getting your business to articulate business value and metrics and have it implemented in IT without any loss in translation. It is the act of extracting the business process from the minds of experts and IT applications in your organization and valuing them as assets for performance and gain. BPM is stepping outside your SOA and moving your organization to the next level of innovation. Oracle is accelerating BPM across industries with the latest launch. Join us to understand how BPM can give your organization a cutting edge over your SOA. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: SOA,BPM,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • How much server bandwidth does an average RTS game require per month?

    - by Nat Weiss
    My friend and I are going to write a multiplayer, multiplatform RTS game and are currently analyzing the costs of going with a client-server architecture. The game will have a small map with mostly characters, not buildings (think of DotA or League of Legends). The authoritative game logic will run on the server and message packet sizes will be highly optimized. We'd like to know approximately how much server bandwidth our proposed RTS game would use on a monthly basis, considering these theoretical constants: 100 concurrent users maximum 8 players maximum per game 10 ticks per second Bonus: If you can tell us approximately how much server RAM this kind of game would use that would also help a great deal. Thanks in advance.
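
    There is no way to answer this precisely without knowing the per-tick payload, but the back-of-envelope arithmetic is simple enough to sketch. Every number below other than the ones stated in the question (100 concurrent users, 8 players per game, 10 ticks per second) is an assumption:

        concurrent_users = 100          # from the question
        ticks_per_second = 10           # from the question
        bytes_per_packet = 100          # assumption: optimized state delta plus UDP/IP overhead
        seconds_per_month = 30 * 24 * 3600

        # Worst case: the server sends one full update per tick to every connected client.
        bytes_per_month = concurrent_users * ticks_per_second * bytes_per_packet * seconds_per_month
        print(bytes_per_month / 1e12, "TB/month")   # ~0.26 TB/month under these assumptions

    Doubling or halving the assumed packet size scales the result linearly, and delta compression or a lower send rate to idle clients can cut it much further, so treat this strictly as an order-of-magnitude estimate.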

    Read the article

  • How many simultaneous requests can be handled by a medium class server on the average?

    - by Motivated Student
    I have bought a PRIMERGY TX100 S1 Server with a trial version of Windows Server 2008 R2 Web Edition. My internet connection with a static IP is very very fast (about 50 mega bit per second) for both downloading and uploading. My site serves text based contents only, no streaming. How many simultaneous requests can be handled by a medium class server on the average? Can it handle at least 1000 simultaneous requests?

    Read the article

  • How do you get average of sums in SQL (multi-level aggregation)?

    - by paxdiablo
    I have a simplified table xx as follows:

        rdate   date
        rtime   time
        rid     integer
        rsub    integer
        rval    integer
        primary key on (rdate,rtime,rid,rsub)

    and I want to get the average (across all times) of the sums (across all ids) of the values. By way of a sample table, I have (with consecutive identical values blanked out for readability):

        rdate       rtime     rid  rsub  rval
        -------------------------------------
        2010-01-01  00.00.00    1     1    10
                                      2    20
                                2     1    30
                                      2    40
                    01.00.00    1     1    50
                                      2    60
                                2     1    70
                                      2    80
                    02.00.00    1     1    90
                                      2   100
        2010-01-02  00.00.00    1     1   999

    I can get the sums I want with:

        select rdate,rtime,rid, sum(rval) as rsum
        from xx
        where rdate = '2010-01-01'
        group by rdate,rtime,rid

    which gives me:

        rdate       rtime     rid  rsum
        -------------------------------
        2010-01-01  00.00.00    1    30   (10+20)
                                2    70   (30+40)
                    01.00.00    1   110   (50+60)
                                2   150   (70+80)
                    02.00.00    1   190   (90+100)

    as expected. Now what I want is the query that will also average those values across the time dimension, giving me:

        rdate       rtime     ravgsum
        -----------------------------
        2010-01-01  00.00.00    50    ((30+70)/2)
                    01.00.00   130    ((110+150)/2)
                    02.00.00   190    ((190)/1)

    I'm using DB2 for z/OS, but I'd prefer standard SQL if possible.
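
    The standard-SQL route is to nest the aggregations: SUM in a derived table, AVG over it. A runnable sketch follows (SQLite via Python purely so the example is self-contained; the query itself is plain SQL and should carry over to DB2, though that is untested here):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE xx (rdate TEXT, rtime TEXT, rid INT, rsub INT, rval INT)")
        conn.executemany("INSERT INTO xx VALUES (?,?,?,?,?)", [
            ("2010-01-01", "00.00.00", 1, 1, 10), ("2010-01-01", "00.00.00", 1, 2, 20),
            ("2010-01-01", "00.00.00", 2, 1, 30), ("2010-01-01", "00.00.00", 2, 2, 40),
            ("2010-01-01", "01.00.00", 1, 1, 50), ("2010-01-01", "01.00.00", 1, 2, 60),
            ("2010-01-01", "01.00.00", 2, 1, 70), ("2010-01-01", "01.00.00", 2, 2, 80),
            ("2010-01-01", "02.00.00", 1, 1, 90), ("2010-01-01", "02.00.00", 1, 2, 100),
        ])

        # Inner query: sum per (rdate, rtime, rid); outer query: average those sums per (rdate, rtime).
        rows = conn.execute("""
            SELECT rdate, rtime, AVG(rsum) AS ravgsum
            FROM (SELECT rdate, rtime, rid, SUM(rval) AS rsum
                  FROM xx
                  WHERE rdate = '2010-01-01'
                  GROUP BY rdate, rtime, rid) AS sums
            GROUP BY rdate, rtime
        """).fetchall()
        print(rows)
        # [('2010-01-01', '00.00.00', 50.0), ('2010-01-01', '01.00.00', 130.0), ('2010-01-01', '02.00.00', 190.0)]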

    Read the article

  • pylab.savefig() and pylab.show() image difference

    - by Jack1990
    I'm making an script to automatically create plots from .xvg files, but there's a problem when I'm trying to use pylab's savefig() method. Using pylab.show() and saving from there, everything's fine. Using pylab.show() Using pylab.savefig() def producePlot(timestep, energy_values,type_line = 'r', jump = 1,finish = 100): fc = sp.interp1d(timestep[::jump], energy_values[::jump],kind='cubic') xnew = numpy.linspace(0, finish, finish*2) pylab.plot(xnew, fc(xnew),type_line) pylab.xlabel('Time in ps ') pylab.ylabel('kJ/mol') pylab.xlim(xmin=0, xmax=finish) def produceSimplePlot(timestep, energy_values,type_line = 'r', jump = 1,finish = 100): pylab.plot(timestep, energy_values,type_line) pylab.xlabel('Time in ps ') pylab.ylabel('kJ/mol') pylab.xlim(xmin=0, xmax=finish) def linearRegression(timestep, energy_values, type_line = 'g'): #, jump = 1,finish = 100): from scipy import stats import numpy #print 'fuck' timestep = numpy.asarray(timestep) slope, intercept, r_value, p_value, std_err = stats.linregress(timestep,energy_values) line = slope*timestep+intercept pylab.plot(timestep, line, type_line) def plottingTime(Title,file_name, timestep, energy_values ,loc, jump , finish): pylab.title(Title) producePlot(timestep,energy_values, 'b',jump, finish) linearRegression(timestep,energy_values) import numpy Average = numpy.average(energy_values) #print Average pylab.legend(("Average = %.2f" %(Average),'Linear Reg'),loc) #pylab.show() pylab.savefig('%s.jpg' %file_name[:-4], bbox_inches= None, pad_inches=0) #if __name__ == '__main__': #plottingTime(Title,timestep1, energy_values, jump =10, finish = 4800) def specialCase(Title,file_name, timestep, energy_values,loc, jump, finish): #print 'Working here ...?' pylab.title(Title) producePlot(timestep,energy_values, 'b',jump, finish) import numpy from pylab import * Average = numpy.average(energy_values) #print Average pylab.legend(("Average = %.2g" %(Average), Title),loc) locs,labels = yticks() yticks(locs, map(lambda x: "%.3g" % x, locs)) #pylab.show() pylab.savefig('%s.jpg' %file_name[:-4] , bbox_inches= None, pad_inches=0) Thanks in advance, John
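
    Without the two images it is hard to say what differs, but two things commonly make a saved figure look different from the interactive window: savefig renders at its own DPI and bounding box rather than the screen's, and calling it after show() can write out an already-cleared figure. A hedged sketch of the usual workaround (save first, with an explicit dpi and a tight bounding box; the data here is made up):

        import numpy
        import pylab

        x = numpy.linspace(0, 10, 200)
        pylab.plot(x, numpy.sin(x), 'b')
        pylab.legend(("Average = %.2f" % numpy.sin(x).mean(),), loc='best')

        # Save before show(), and pin down the settings that differ between screen and file.
        pylab.savefig("plot.png", dpi=100, bbox_inches="tight")
        pylab.show()

    If the saved output is still off, comparing the figure size and DPI used on screen (the figure's get_size_inches() and dpi) against what savefig is given is usually the next thing to check.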

    Read the article

  • Calculating a 12-month moving average of weekly series, but anchored at the last day of the month

    - by richardh
    I have a zoo object oi.zoo with weekly data. I would like to smooth this with a 12-month moving average (easy enough), but I can't figure out how to anchor the right edge of the moving average window at the end of the month (to correspond with the factors on which I am regressing). For example:

        > head(oi.zoo)
        1986-01-15 1986-01-31 1986-02-14 1986-02-28 1986-03-14 1986-03-31
           2966182    2986748    2948045    2990979    2993453    2936038
        > head(mkt)
        1926-07-31 1926-08-31 1926-09-30 1926-10-31 1926-11-30 1926-12-31
              2.62       2.56       0.36      -3.43       2.44       2.77

    I have some other factors and plan on using dynlm to regress. Thanks!
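
    Not an R/zoo answer, but the shape of the computation is easy to show; here is the equivalent logic sketched in pandas, for illustration only: take a trailing 12-month, time-based window over the weekly series, then keep the value in effect at each month end. The series below is synthetic and merely stands in for oi.zoo.

        import numpy as np
        import pandas as pd

        # Hypothetical weekly series standing in for oi.zoo
        idx = pd.date_range("1986-01-03", periods=260, freq="W-FRI")
        oi = pd.Series(np.random.default_rng(0).normal(2.95e6, 2e4, len(idx)), index=idx)

        trailing_12m = oi.rolling("365D").mean()        # right edge of the window is each observation
        month_end = trailing_12m.resample("M").last()   # anchor the value at the last observation of each month
        print(month_end.head())

    The key idea, whichever library is used, is to compute the rolling mean on the native weekly index first and only then pick out the month-end values, rather than trying to make the rolling window itself month-aware.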

    Read the article

  • Measuring debug vs release of ASP.NET applications

    - by Alex Angas
    A question at work came up about building ASP.NET applications in release vs debug mode. When researching further (particularly on SO), general advice is that setting <compilation debug="true"> in web.config has a much bigger impact. Has anyone done any testing to get some actual numbers about this? Here's the sort of information I'm looking for (which may give away my experience with testing such things):

        Execution time     | Debug build   | Release build
        -------------------+---------------+---------------
        Debug web.config   | average 1     | average 2
        Retail web.config  | average 3     | average 4

        Max memory usage   | Debug build   | Release build
        -------------------+---------------+---------------
        Debug web.config   | average 1     | average 2
        Retail web.config  | average 3     | average 4

        Output file size   | Debug build   | Release build
        -------------------+---------------+---------------
                           | size 1        | size 2

    Read the article

  • Quotas - Using quotas on ZFSSA shares and projects and users

    - by Steve Tunstall
    So you don't want your users to fill up your entire storage pool with their MP3 files, right? Good idea to make some quotas. There's some good tips and tricks here, including a helpful workflow (a script) that will allow you to set a default quota on all of the users of a share at once. Let's start with some basics. I mad a project called "small" and inside it I made a share called "Share1". You can set quotas on the project level, which will affect all of the shares in it, or you can do it on the share level like I am here. Go the the share's General property page. First, I'm using a Windows client, so I need to make sure I have my SMB mountpoint. Do you know this trick yet? Go to the Protocol page of the share. See the SMB section? It needs a resource name to make the UNC path for the SMB (Windows) users. You do NOT have to type this name in for every share you make! Do this at the Project level. Before you make any shares, go to the Protocol properties of the Project, and set the SMB Resource name to "On". This special code will automatically make the SMB resource name of every share in the project the same as the share name. Note the UNC path name I got below. Since I did this at the Project level, I didn't have to lift a finger for it to work on every share I make in this project. Simple. So I have now mapped my Windows "Z:" drive to this Share1. I logged in as the user "Joe". Note that my computer shows my Z: drive as 34GB, which is the entire size of my Pool that this share is in. Right now, Joe could fill this drive up and it would fill up my pool.  Now, go back to the General properties of Share1. In the "Space Usage" area, over on the right, click on the "Show All" text under the Users & Groups section. Sure enough, Joe and some other users are in here and have some data. Note this is also a handy window to use just to see how much space your users are using in any given share.  Ok, Joe owes us money from lunch last week, so we want to give him a quota of 100MB. Type his name in the Users box. Notice how it now shows you how much data he's currently using. Go ahead and give him a 100M quota and hit the Apply button. If I go back to "Show All", I can see that Joe now has a quota, and no one else does. Sure enough, as soon as I refresh my screen back on Joe's client, he sees that his Z: drive is now only 100MB, and he's more than half way full.  That was easy enough, but what if you wanted to make the whole share have a quota, so that the share itself, no matter who uses it, can only grow to a certain size? That's even easier. Just use the Quota box on the left hand side. Here, I use a Quota on the share of 300MB.  So now I log off as Joe, and log in as Steve. Even though Steve does NOT have a quota, it is showing my Z: drive as 300MB. This would effect anyone, INCLUDING the ROOT user, becuase you specified the Quota to be on the SHARE, not on a person.  Note that back in the Share, if you click the "Show All" text, the window does NOT show Steve, or anyone else, to have a quota of 300MB. Yet we do, because it's on the share itself, not on any user, so this panel does not see that. Ok, here is where it gets FUN.... Let's say you do NOT want a quota on the SHARE, because you want SOME people, like Root and yourself, to have FULL access to it and you want the ability to fill the whole thing up if you darn well feel like it. HOWEVER, you want to give the other users a quota. 
HOWEVER you have, say, 200 users, and you do NOT feel like typing in each of their names and giving them each a quota, and they are not all members of a AD global group you could use or anything like that.  Hmmmmmm.... No worries, mate. We have a handy-dandy script that can do this for us. Now, this script was written a few years back by Tim Graves, one of our ZFSSA engineers out of the UK. This is not my script. It is NOT supported by Oracle support in any way. It does work fine with the 2011.1.4 code as best as I can tell, but Oracle, and I, are NOT responsible for ANYTHING that you do with this script. Furthermore, I will NOT give you this script, so do not ask me for it. You need to get this from your local Oracle storage SC. I will give it to them. I want this only going to my fellow SCs, who can then work with you to have it and show you how it works.  Here's what it does...Once you add this workflow to the Maintenance-->Workflows section, you click it once to run it. Nothing seems to happen at this point, but something did.   Go back to any share or project. You will see that you now have four new, custom properties on the bottom.  Do NOT touch the bottom two properties, EVER. Only touch the top two. Here, I'm going to give my users a default quota of about 40MB each. The beauty of this script is that it will only effect users that do NOT already have any kind of personal quota. It will only change people who have no quota at all. It does not effect the Root user.  After I hit Apply on the Share screen. Nothing will happen until I go back and run the script again. The first time you run it, it creates the custom properties. The second and all subsequent times you run it, it checks the shares for any users, and applies your quota number to each one of them, UNLESS they already have one set. Notice in the readout below how it did NOT apply to my Joe user, since Joe had a quota set.  Sure enough, when I go back to the "Show All" in the share properties, all of the users who did not have a quota, now have one for 39.1MB. Hmmm... I did my math wrong, didn't I?    That's OK, I'll just change the number of the Custom Default quota again. Here, I am adding a zero on the end.  After I click Apply, and then run the script again, all of my users, except Joe, now have a quota of 391MB  You can customize a person at any time. Here, I took the Steve user, and specifically gave him a Quota of zero. Now when I run the script again, he is different from the rest, so he is no longer effected by the script. Under Show All, I see that Joe is at 100, and Steve has no Quota at all. I can do this all day long. es, you will have to re-run the script every time new users get added. The script only applies the default quota to users that are present at the time the script is ran. However, it would be a simple thing to schedule the script to run each night, or to make an alert to run the script when certain events occur.  For you power users, if you ever want to delete these custom properties and remove the script completely, you will find these properties under the "Schema" section under the Shares section. You can remove them here. There's no need to, however, they don't hurt a thing if you just don't use them.  I hope these tips have helped you out there. Quotas can be fun. 

    Read the article

  • Why can I resolve this hostname but not a cname to this hostname?

    - by Joe Hopfgartner
    If I run dig against a hostname, I get the corresponding CNAME, but the query status is an NXDOMAIN error (non-existent domain). If I run dig against the CNAME it returned, I can resolve it to an IP address successfully. It is reproducible: on the system I am currently on it is always the case, on other systems it sometimes works and sometimes not, and on other systems it seems to work all the time. If I run dig using a nameserver I specify (for example Google's public nameserver), I can successfully resolve the hostname. I would just blame the local system, but it seems I am not the only one having problems. The 2nd domain (unrestrict.me) is hosted on Amazon Route 53 nameservers; the 1st one is on another DNS server which has proven to be fully functional and reliable over the years. I once switched the other domain to Amazon DNS as well: everything seemed to work, and various DNS health check tests reported fine, however I received a lot of support tickets that DNS resolution would not work. Is Amazon just "bad", or am I doing something wrong? I did not tamper with the domain in any way on the local system (in case of caching or making a custom DNS view or whatever...)

        joe@joe:~$ dig scorpion.premiumize.me

        ; <<>> DiG 9.8.1-P1 <<>> scorpion.premiumize.me
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 10222
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;scorpion.premiumize.me.        IN      A

        ;; ANSWER SECTION:
        scorpion.premiumize.me. 180     IN      CNAME   alpha.nue.scorpion.unrestrict.me.

        ;; Query time: 28 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Mon Jun 18 10:28:39 2012
        ;; MSG SIZE  rcvd: 84

        joe@joe:~$ dig alpha.nue.scorpion.unrestrict.me

        ; <<>> DiG 9.8.1-P1 <<>> alpha.nue.scorpion.unrestrict.me
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25381
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;alpha.nue.scorpion.unrestrict.me. IN   A

        ;; ANSWER SECTION:
        alpha.nue.scorpion.unrestrict.me. 300 IN A      78.46.25.130

        ;; Query time: 48 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Mon Jun 18 10:28:47 2012
        ;; MSG SIZE  rcvd: 66

        joe@joe:~$

    Read the article

  • Does a high run queue length average result in poor performance for a web server?

    - by Domino
    I'm trying to narrow down the list of suspects for web servers that perform moderately well most of the time, with occasional bouts of poor performance. I'm analyzing the data collected and summarized by sar. I've noticed a few things, one of which is a high number of tasks in the run queue.

        10:15:01 AM  runq-sz  plist-sz  ldavg-1  ldavg-5  ldavg-15  blocked
        10:25:01 AM        2       150     0.05     0.05      0.06        0
        10:35:01 AM        4       149     0.08     0.12      0.09        0
        10:45:01 AM        6       150     0.13     0.19      0.15        0
        10:55:01 AM        1       150     0.08     0.10      0.13        0
        11:05:01 AM        4       150     0.20     0.35      0.23        0
        11:15:01 AM        3       149     0.02     0.09      0.15        0
        11:25:01 AM        7       149     0.04     0.05      0.11        0
        11:35:01 AM        4       150     0.14     0.15      0.13        0
        11:45:01 AM        6       150     0.27     0.18      0.16        0
        11:55:01 AM        5       150     0.08     0.10      0.13        0
        12:05:01 PM        3       149     0.35     0.40      0.26        0
        12:15:01 PM       19       155     0.02     0.10      0.16        1
        12:25:01 PM        2       150     0.00     0.07      0.12        0
        12:35:02 PM        3       151     0.58     0.24      0.17        0
        12:45:01 PM        8       150     0.02     0.13      0.15        0
        12:55:01 PM        6       149     0.81     0.29      0.18        0
        01:05:01 PM        3       148     0.00     0.09      0.13        0
        01:15:01 PM        7       149     0.00     0.04      0.11        0

    I believe these are 10-minute averages. Is this an indicator that the web server is not performing as fast as it could if the average run queue length was lower?

    Read the article

  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases I have a database set up with MS Access 2007 front ends and an SQL Server 2005 back end. At the moment, all the queries are saved in the front end as I've only recently moved to an SQL Server backend. I'm wondering how much of those queries I should save as stored procedures/views on SQL Server. About the system The number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. Always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon is Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easy. I'm inclined to move the queries to the back end, as they are beginning to take noticeable time and I figure that is a better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade. It makes more sense in my mind to be upgrading one server rather than 30 PCs. Or am I being overzealous? Why my question isn't a duplicate It seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try explain how my question is very different from the linked question. The crux of my question is something like "Even though my server is technically slower, is it better to have it doing more of the queries?" There's two ways that people could have answered this: I agree the server is going to be slower, but the extra benefits of such and such (like the less Access the better) means you should move most to the server anyway. (OR no it doesn't outweigh the benefit, keep them in Access) Actually the server will be faster because of such and such. I'm hoping that people out there could provide some answers like this, and the question in the dupe link doesn't really provide either of these answers. Ok sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine to SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of access) compared to someone giving some quick general guidance, and again, my question is looking for a lot more than immediate performance benefit.

    Read the article

  • The code works but when using printf it gives me a weird answer. Help please [closed]

    - by user71458
    //Programmer-William Chen //Seventh Period Computer Science II //Problem Statement - First get the elapsed times and the program will find the //split times for the user to see. // //Algorithm- First the programmer makes the prototype and calls them in the //main function. The programmer then asks the user to input lap time data. //Secondly, you convert the splits into seconds and subtract them so you can //find the splits. Then the average is all the lap time's in seconds. Finally, //the programmer printf all the results for the user to see. #include <iostream> #include <stdlib.h> #include <math.h> #include <conio.h> #include <stdio.h> using namespace std; void thisgetsElapsedTimes( int &m1, int &m2, int &m3, int &m4, int &m5, int &s1, int &s2, int &s3, int &s4, int &s5); //this is prototype void thisconvertstoseconds ( int &m1, int &m2, int &m3, int &m4, int &m5, int &s1, int &s2, int &s3, int &s4, int &s5, int &split1, int &split2, int &split3, int &split4, int &split5);//this too void thisfindsSplits(int &m1, int &m2, int &m3, int &m4, int &m5, int &split1, int &split2, int &split3, int &split4, int &split5, int &split6, int &split7, int &split8, int &split9, int &split10);// this is part of prototype void thisisthesecondconversation (int &split1M, int &split2M, int &split3M, int &split4M, int &split5M, int &split1S,int &split2S, int &split3S, int &split4S, int &split5S, int &split1, int &split2, int &split3, int &split4, int &split5);//this gets a value void thisfindstheaverage(double &average, int &split1, int &split2, int &split3, int &split4, int &split5);//and this void thisprintsstuff( int &split1M, int &split2M, int &split3M, int &split4M, int &split5M, int &split1S, int &split2S, int &split3S, int &split4S, int &split5S, double &average); //this prints int main(int argc, char *argv[]) { int m1, m2, m3, m4, m5, s1, s2, s3, s4, s5, split1, split2, split3, split4, split5, split1M, split2M, split3M, split4M, split5M, split1S, split2S, split3S, split4S, split5S; int split6, split7, split8, split9, split10; double average; char thistakescolon; thisgetsElapsedTimes ( m1, m2, m3, m4, m5, s1, s2, s3, s4, s5); thisconvertstoseconds ( m1, m2, m3, m4, m5, s1, s2, s3, s4, s5, split1, split2, split3, split4, split5); thisfindsSplits ( m1, m2, m3, m4, m5, split1, split2, split3, split4, split5, split6, split7, split8, split9, split10); thisisthesecondconversation ( split1M, split2M, split3M, split4M, split5M, split1S, split2S, split3S, split4S, split5S, split1, split2, split3, split4, split5); thisfindstheaverage ( average, split1, split2, split3, split4, split5); thisprintsstuff ( split1M, split2M, split3M, split4M, split5M, split1S, split2S, split3S, split4S, split5S, average); // these are calling statements and they call from the main function to the other functions. system("PAUSE"); return 0; } void thisgetsElapsedTimes(int &m1, int &m2, int &m3, int &m4, int &m5, int &s1, int &s2, int &s3, int &s4, int &s5) { char thistakescolon; cout << "Enter the elapsed time:" << endl; cout << " Kilometer 1 "; cin m1 thistakescolon s1; cout << " Kilometer 2 "; cin m2 thistakescolon s2; cout << " Kilometer 3 " ; cin m3 thistakescolon s3; cout << " Kilometer 4 "; cin m4 thistakescolon s4; cout << " Kilometer 5 "; cin m5 thistakescolon s5; // this gets the data required to get the results needed for the user to see // . 
} void thisconvertstoseconds (int &m1, int &m2, int &m3, int &m4, int &m5, int &s1, int &s2, int &s3, int &s4, int &s5, int &split1, int &split2, int &split3, int &split4, int &split5) { split1 = (m1 * 60) + s1;//this converts for minutes to seconds for m1 split2 = (m2 * 60) + s2;//this converts for minutes to seconds for m2 split3 = (m3 * 60) + s3;//this converts for minutes to seconds for m3 split4 = (m4 * 60) + s4;//this converts for minutes to seconds for m4 split5 = (m5 * 60) + s5;//this converts for minutes to seconds for m5 } void thisfindsSplits (int &m1, int &m2, int &m3, int &m4, int &m5,int &split1, int &split2, int &split3, int &split4, int &split5, int &split6, int &split7, int &split8, int &split9, int &split10)//this is function heading { split6 = split1; //this is split for the first lap. split7 = split2 - split1;//this is split for the second lap. split8 = split3 - split2;//this is split for the third lap. split9 = split4 - split3;//this is split for the fourth lap. split10 = split5 - split4;//this is split for the fifth lap. } void thisfindstheaverage(double &average, int &split1, int &split2, int &split3, int &split4, int &split5) { average = (split1 + split2 + split3 + split4 + split5)/5; // this finds the average from all the splits in seconds } void thisisthesecondconversation (int &split1M, int &split2M, int &split3M, int &split4M, int &split5M, int &split1S,int &split2S, int &split3S, int &split4S, int &split5S, int &split1, int &split2, int &split3, int &split4, int &split5) { split1M = split1 * 60; //this finds the split times split1S = split1M - split1 * 60; //then this finds split2M = split2 * 60; //and all of this split2S = split2M - split2 * 60; //does basically split3M = split3 * 60; //the same thing split3S = split3M - split3 * 60; //all of it split4M = split4 * 60; //it's also a split4S = split4M - split4 * 60; //function split5M = split5 * 60; //and it finds the splits split5S = split5M - split5 * 60; //for each lap. } void thisprintsstuff (int &split1M, int &split2M, int &split3M, int &split4M, int &split5M, int &split1S, int &split2S, int &split3S, int &split4S, int &split5S, double &average)// this is function heading { printf("\n kilometer 1 %d" , ":02%d",'split1M','split1S'); printf("\n kilometer 2 %d" , ":02%d",'split2M','split2S'); printf("\n kilometer 3 %d" , ":02%d",'split3M','split3S'); printf("\n kilometer 4 %d" , ":02%d",'split4M','split4S'); printf("\n kilometer 5 %d" , ":02%d",'split5M','split5S'); printf("\n your average pace is ",'average',"per kilometer \n", "William Chen\n"); // this printf so the programmer // can allow the user to see // the results from the data gathered. }

    Read the article

  • Attempting to extract a pattern within a string

    - by Brian
    I'm attempting to extract a given pattern within a text file, however, the results are not 100% what I want. Here's my code: import java.util.regex.Matcher; import java.util.regex.Pattern; public class ParseText1 { public static void main(String[] args) { String content = "<p>Yada yada yada <code> foo ddd</code>yada yada ...\n" + "more here <2004-08-24> bar<Bob Joe> etc etc\n" + "more here again <2004-09-24> bar<Bob Joe> <Fred Kej> etc etc\n" + "more here again <2004-08-24> bar<Bob Joe><Fred Kej> etc etc\n" + "and still more <2004-08-21><2004-08-21> baz <John Doe> and now <code>the end</code> </p>\n"; Pattern p = Pattern .compile("<[1234567890]{4}-[1234567890]{2}-[1234567890]{2}>.*?<[^%0-9/]*>", Pattern.MULTILINE); Matcher m = p.matcher(content); // print all the matches that we find while (m.find()) { System.out.println(m.group()); } } } The output I'm getting is: <2004-08-24> bar<Bob Joe> <2004-09-24> bar<Bob Joe> <Fred Kej> <2004-08-24> bar<Bob Joe><Fred Kej> <2004-08-21><2004-08-21> baz <John Doe> and now <code> The output I want is: <2004-08-24> bar<Bob Joe> <2004-08-24> bar<Bob Joe> <2004-08-24> bar<Bob Joe> <2004-08-21> baz <John Doe> In short, the sequence of "date", "text (or blank)", and "name" must be extracted. Everything else should be avoided. For example the tag "Fred Kej" did not have any "date" tag before it, therefore, it should be flagged as invalid. Also, as a side question, is there a way to store or track the text snippets that were skipped/rejected as were the valid texts. Thanks, Brian
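
    One way to look at why the extra matches appear: the `.*?` in the middle is free to run across `>` characters, and the trailing `<[^%0-9/]*>` happily matches `<code>`. A tighter pattern that only pairs a date tag with the first following name tag is sketched below; Python's re is used here purely for brevity, but the pattern string itself should drop straight into java.util.regex.Pattern, and the name class `[A-Za-z]+ [A-Za-z]+` is an assumption about what counts as a name.

        import re

        content = (
            "<p>Yada yada yada <code> foo ddd</code>yada yada ...\n"
            "more here <2004-08-24> bar<Bob Joe> etc etc\n"
            "more here again <2004-09-24> bar<Bob Joe> <Fred Kej> etc etc\n"
            "more here again <2004-08-24> bar<Bob Joe><Fred Kej> etc etc\n"
            "and still more <2004-08-21><2004-08-21> baz <John Doe> and now <code>the end</code> </p>\n"
        )

        # A date tag, then anything that is not another tag, then a "Firstname Lastname" style tag.
        pattern = re.compile(r"<\d{4}-\d{2}-\d{2}>[^<>]*<[A-Za-z]+ [A-Za-z]+>")

        for m in pattern.finditer(content):
            print(m.group())
        # <2004-08-24> bar<Bob Joe>
        # <2004-09-24> bar<Bob Joe>
        # <2004-08-24> bar<Bob Joe>
        # <2004-08-21> baz <John Doe>

    Name tags with no preceding date (like <Fred Kej>) simply never match. For the side question about tracking the skipped text, slicing content between one match's end() and the next match's start() gives exactly the rejected snippets.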

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Essbase BSO Data Fragmentation Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end user data updates, incremental data loads, many lock and send, and/or many calculations executed.  If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation.  See Chapter 54 Improving Essbase Performance in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html Fragmentation is likely to occur in the following situations: Read/write databases that users are constantly updating data Databases that execute calculations around the clock Databases that frequently update and recalculate dense members Data loads that are poorly designed Databases that contain a significant number of Dynamic Calc and Store members Databases that use an isolation level of uncommitted access with commit block set to zero There are two types of data block fragmentation Free space tracking, which is measured using the Average Fragmentation Quotient statistic. Block order on disk, which is measured using the Average Cluster Ratio statistic. Average Fragmentation Quotient The Average Fragmentation Quotient ratio measures free space in a given database.  As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and will either append at the end of the file or fit in another empty space that is large enough.  These empty spaces take up space in the .PAG files.  The higher the number the more empty spaces you have, therefore, the bigger the .PAG file and the longer it takes to traverse through the .PAG file to get to a particular record.  An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space. Average Cluster Ratio Average Cluster Ratio describes the order the blocks actually exist in the database. An Average Cluster Ratio number of 1 means all the blocks are ordered in the correct sequence in the order of the Outline.  As you load data and calculate data blocks, the sequence can start to be out of order.  This is because when you write to a block it may not be able to place back in the exact same spot in the database that it existed before.  The lower this number the more out of order it becomes and the more it affects performance.  An Average Cluster Ratio value of 1 means no fragmentation.  Any value lower than 1 i.e. 0.01032828 means the data blocks are getting further out of order from the outline order. Eliminating Data Block Fragmentation Both types of data block fragmentation can be removed by doing a dense restructure or export/clear/import of the data.  There are two types of dense restructure: 1. Implicit Restructures Implicit dense restructure happens when outline changes are done using EAS Outline Editor or Dimension Build. Essbase restructures create new .PAG files restructuring the data blocks in the .PAG files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed with implicit restructures. 2. Explicit Restructures Explicit dense restructure happens when a manual initiation of the database restructure is executed. An explicit dense restructure is a full restructure which comprises of a dense restructure as outlined above plus the removal of empty blocks Empty Blocks vs. 
Fragmentation The existence of empty blocks is not considered fragmentation.  Empty blocks can be created through calc scripts or formulas.  An empty block will add to an existing database block count and will be included in the block counts of the database properties.  There are no statistics for empty blocks.  The only way to determine if empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database then import the exported data.  If the block count decreased, the difference is the number of empty blocks that had existed in the database.

    Read the article
