Search Results

Search found 19483 results on 780 pages for 'load average'.


  • Domain DNS Lookup time

    - by Maxim Dsouza
    I have a website hosted at www.doondoo.com. When the site is loaded in a browser for the first time, it takes a while to come up, and it looks like the DNS lookup is what takes most of that time. Once the site has loaded once, other pages load very quickly. The application is hosted on Linode, and I have pointed my domain to Linode's nameservers, i.e. ns1.linode.com and ns2.linode.com. I wanted to know what the reason behind this delay in loading is, and what the possible means to improve it are. Thanks in advance.
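    One quick way to confirm that DNS really is the slow step is to time the lookup by itself, separately from the whole page load. Below is a minimal sketch, not a definitive diagnostic: the hostname is hard-coded for illustration and it assumes a Linux box with g++ available.

        // dns_timer.cpp - time only the name resolution, nothing else.
        // Build (assumed toolchain): g++ -std=c++11 dns_timer.cpp -o dns_timer
        #include <chrono>
        #include <cstdio>
        #include <netdb.h>
        #include <sys/socket.h>
        #include <sys/types.h>

        int main() {
            addrinfo hints{};
            hints.ai_family = AF_UNSPEC;       // accept IPv4 or IPv6
            hints.ai_socktype = SOCK_STREAM;

            addrinfo* result = nullptr;
            auto start = std::chrono::steady_clock::now();
            int rc = getaddrinfo("www.doondoo.com", "80", &hints, &result);   // the DNS lookup
            auto end = std::chrono::steady_clock::now();

            if (rc != 0) {
                std::fprintf(stderr, "lookup failed: %s\n", gai_strerror(rc));
                return 1;
            }
            freeaddrinfo(result);

            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
            std::printf("DNS lookup took %lld ms\n", static_cast<long long>(ms));
            return 0;
        }

    If the first run is slow and immediate re-runs are fast, the delay is the resolver cache being cold, which matches the "only the first page load is slow" symptom described above.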

    Read the article

  • Whole Lotta Virtualization Goin' On

    - by rickramsey
    Lately we've published a lot of content about virtualization. Here's a sampling. Podcast: Technology Preview of Transcendent Memory. Turns out that in a virtual environment, RAM is the bottleneck. Not because it's slow (it's not), but because each CPU still had to use its own RAM, which gets expensive. In this podcast, Dan Magenheimer describes how Oracle and the open source community taught the guest kernel in Oracle Linux to share its memory with other CPUs. Transcendent memory will wind up saving large data centers a lot of money. Find out how. Tech Article: How to Use Oracle VM Templates. This article describes how to prepare an Oracle VM environment to use Oracle VM Templates, how to obtain a template, and how to deploy the template to your Oracle VM environment. It also describes how to create a virtual machine based on that template and how you can clone the template and change the clone's configuration. Tech Article: How to Set Up a Load Balanced Application Across Two Oracle Solaris Zones. Install Apache Tomcat on two Oracle Solaris zones. Connect them across a VPN. And let the Integrated Load Balancer in Oracle Solaris 11 manage traffic. Presto: high(er) availability in a single server. Tech Article: How to Install Oracle RAC on Oracle Solaris Zone Clusters. Learn how to implement a multi-tiered database environment that isolates database tiers and administrative domains, while taking advantage of centralized (and simpler) cluster admin. For fans of Jerry Lee Lewis: if you're a fan of Jerry Lee Lewis, you might enjoy this video. - Rick

    Read the article

  • Node.js MMO - process and/or map division

    - by Gipsy King
    I am in the phase of designing an MMO browser-based game (certainly not massive, but all connected players are in the same universe), and I am struggling with finding a good solution to the problem of distributing players across processes. I'm using node.js with socket.io. I have read this helpful article, but I would like some advice since I am also concerned with different processes.

    Solution 1: Tie a process to a map location (like a map cell), and connect players to the process corresponding to their location. When a player performs an action, transmit it to all other players in this process. When a player moves away, he will eventually have to connect to another process (automatically).
    Pros: easier to implement.
    Cons: must divide the map into zones; player reconnection when moving into a different zone is probably annoying; if one zone/process is always busy (has players in it), it doesn't really load-balance, unless I split the zone, which may not always be viable; there shouldn't be any visible borders.

    Solution 1b: Same as 1, but connect processes of bordering cells, so that players on the other side of the border are visible and such. Maybe even let them interact.

    Solution 2: Spawn processes on demand, unrelated to a location. Have one special process keep track of all connected player handles, their locations, and the processes they're connected to. Then when a player performs an action, the process finds all other nearby players (from the special player-process-location tracking node), and instructs their matching processes to relay the action (see the sketch below).
    Pros: easy load balancing (spawn more processes); avoids player reconnection and borders between zones.
    Cons: harder to implement and test; additional steps of finding players and relaying the event/action to another process; if the player-location-process tracking process fails, all the others fail too.

    I would like to hear if I'm missing something, or if I'm completely off track.
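    To make the bookkeeping in Solution 2 concrete, here is a rough sketch of what the tracking process has to hold: each player mapped to a map cell and to the worker process that owns its socket, plus a query that groups nearby players by process so an action can be relayed once per process. The game itself is node.js; this is written in C++ only as an illustration of the data structure, and every name in it (Location, PlayerEntry, PlayerRegistry) is invented for the sketch.

        #include <cstdlib>
        #include <string>
        #include <unordered_map>
        #include <vector>

        struct Location { int cellX, cellY; };

        struct PlayerEntry {
            Location loc;
            int processId;   // which worker process owns this player's socket
        };

        class PlayerRegistry {
        public:
            void update(const std::string& playerId, Location loc, int processId) {
                players_[playerId] = PlayerEntry{loc, processId};
            }

            void remove(const std::string& playerId) { players_.erase(playerId); }

            // Everyone within `radius` cells of `center`, grouped by owning process,
            // so the caller can send one relay message per process rather than per player.
            // Linear scan for clarity; a real version would index players by cell.
            std::unordered_map<int, std::vector<std::string>>
            nearbyByProcess(Location center, int radius) const {
                std::unordered_map<int, std::vector<std::string>> out;
                for (const auto& kv : players_) {
                    const PlayerEntry& e = kv.second;
                    if (std::abs(e.loc.cellX - center.cellX) <= radius &&
                        std::abs(e.loc.cellY - center.cellY) <= radius) {
                        out[e.processId].push_back(kv.first);
                    }
                }
                return out;
            }

        private:
            std::unordered_map<std::string, PlayerEntry> players_;
        };

    The "single point of failure" item in the cons list maps directly onto this object: if the process holding it dies, nothing can route actions any more, so it would either need to be rebuilt from the workers' own state or replicated.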

    Read the article

  • Libraries for eclipse with CCSv5 from Texas Instruments

    - by Alex
    My system is Ubuntu 11.10 64-bit and I have to run a 32-bit version of Eclipse to use the TI plugins for CCSv5, but it doesn't work. I tried to run Eclipse in a 64-bit Java environment, but it doesn't even start. Now I have "java version "1.6.0_30"" from Sun in 32-bit; Eclipse starts but can't use the TI plug-ins, and I get the following errors in bash ("falsche ELF-Klasse" is German for "wrong ELF class"):

        /usr/lib/gtk-2.0/2.10.0/menuproxies/libappmenu.so: falsche ELF-Klasse: ELFCLASS64
        /usr/lib/gio/modules/libgvfsdbus.so: falsche ELF-Klasse: ELFCLASS64
        Failed to load module: /usr/lib/gio/modules/libgvfsdbus.so

    and this in a popup window when I try to use the plugin:

        The selected wizard could not be started. Plug-in com.ti.ccstudio.project.ui was unable to load class com.ti.ccstudio.project.ui.internal.wizards.importexport.temp.ExternalProjectImportWizard. An error occurred while automatically activating bundle com.ti.ccstudio.project.ui (352).

    I tested the libraries with file:

        /usr/lib/gio/modules/libgvfsdbus.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, stripped
        /usr/lib/gtk-2.0/2.10.0/menuproxies/libappmenu.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, stripped

    The ia32-libs are installed.

    Read the article

  • Autoscaling in a modern world… Part 2

    - by Steve Loethen
    When we last left off, we had a web application spinning away in the cloud, and a local console application watching it and reacting to changes in demand; reactions that were specified by a set of rules. Let's talk about those rules.

    Constraints. The first set of rules this application answers to are the constraints. Here is what they look like:

        <constraintRules>
          <rule name="default" enabled="true" rank="1" description="The default constraint rule">
            <actions>
              <range min="1" max="4" target="AutoscalingApplicationRole"/>
            </actions>
          </rule>
        </constraintRules>

    Pretty basic. We have one role, the "AutoscalingApplicationRole", and we have decided to have it live within a range of 1 to 4. This rule does not adjust anything, but instead sets limits on what other rules can do. It has a rank, so you can specify other sets of constraints, perhaps based on time or date, to allow for deviations from this set. But for now, let's keep it simple. In the real world, you would probably use the minimum to set a lower-end SLA. A common value might be 2, to prevent the reactive rules from ever taking you down to 1 role. The maximum is often used to keep a rule from driving the cost up, setting an upper limit to prevent you waking up one morning and finding a bill for hundreds of instances you didn't expect. So, here we have the range we want our application to live inside. This is good for our investigation and testing.

    Next, let's take a look at the reactive rules. These rules are what you use to react (hence reactive rules) to changing demands on your application. The HOL has two simple rules: one that looks at a queue depth, and one that looks at a performance counter that reports CPU utilization. The XML in the rules file looks like this:

        <reactiveRules>
          <rule name="ScaleUp" rank="10" description="Scale Up the web role" enabled="true">
            <when>
              <any>
                <greaterOrEqual operand="Length_05_holqueue" than="10"/>
                <greaterOrEqual operand="CPU_05_holwebrole" than="65"/>
              </any>
            </when>
            <actions>
              <scale target="AutoscalingApplicationRole" by="1"/>
            </actions>
          </rule>
          <rule name="ScaleDown" rank="10" description="Scale down the web role" enabled="true">
            <when>
              <all>
                <less operand="Length_05_holqueue" than="5"/>
                <less operand="CPU_05_holwebrole" than="40"/>
              </all>
            </when>
            <actions>
              <scale target="AutoscalingApplicationRole" by="-1"/>
            </actions>
          </rule>
        </reactiveRules>
        <operands>
          <performanceCounter alias="CPU_05_holwebrole" performanceCounterName="\Processor(_Total)\% Processor Time" source="AutoscalingApplicationRole" timespan="00:05:00" aggregate="Average"/>
          <queueLength alias="Length_05_holqueue" queue="hol-queue" timespan="00:05:00" aggregate="Average"/>
        </operands>

    These rules are currently contained in a file called rules.xml, which is in the root of the console application. The console app starts up, grabs the rules, and starts watching the two operands. When it detects that a rule has been satisfied, it performs the desired action (here, scale up or down by 1). But I want to host the autoscaler in the cloud. For my first trick, I will move the rules (and another file called services.xml) to Azure blob storage. Look for part 3.

    Read the article

  • Dynamic Memory Allocation and Memory Management

    - by Bunkai.Satori
    In an average game, there are hundreds or maybe thousands of objects in the scene. Is it completely correct to allocate memory for all objects, including gun shots (bullets), dynamically via the default new()? Should I create a memory pool for dynamic allocation, or is there no need to bother with this? What if the target platform is mobile devices? Is there a need for a memory manager in a mobile game? Thank you. Language used: C++; currently developed under Windows, but planned to be ported later.
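    For what it's worth, the usual middle ground between "new everything" and a full memory manager is a small fixed-capacity pool for the high-churn objects (bullets, particles). Below is a minimal sketch of the idea, assuming C++11; it is not production code (no thread safety, no growth policy, no tracking of live objects), just the shape of a free-list pool built on placement new.

        #include <cstddef>
        #include <new>
        #include <utility>
        #include <vector>

        // Fixed-capacity pool: raw storage is allocated once, then slots are recycled
        // with placement new instead of hitting the heap for every object.
        template <typename T, std::size_t Capacity>
        class ObjectPool {
        public:
            ObjectPool() {
                free_.reserve(Capacity);
                T* base = reinterpret_cast<T*>(storage_);
                for (std::size_t i = 0; i < Capacity; ++i)
                    free_.push_back(base + i);
            }

            template <typename... Args>
            T* acquire(Args&&... args) {
                if (free_.empty()) return nullptr;                  // pool exhausted; caller decides
                T* slot = free_.back();
                free_.pop_back();
                return new (slot) T(std::forward<Args>(args)...);   // construct in place
            }

            void release(T* obj) {
                obj->~T();                                          // destroy, but keep the memory
                free_.push_back(obj);
            }

        private:
            alignas(T) unsigned char storage_[Capacity * sizeof(T)];
            std::vector<T*> free_;
        };

        // Example: bullets come and go every frame, so they are pooled.
        struct Bullet {
            float x, y, vx, vy;
            Bullet(float px, float py) : x(px), y(py), vx(0.0f), vy(0.0f) {}
        };

        int main() {
            ObjectPool<Bullet, 1024> bullets;
            Bullet* b = bullets.acquire(1.0f, 2.0f);   // no heap allocation here
            if (b) bullets.release(b);
            return 0;
        }

    On mobile the same idea tends to matter more, mainly because allocation spikes and heap fragmentation are harder to absorb there; long-lived objects (level geometry, actors) can still come from plain new.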

    Read the article

  • Xeon X5550 vs Six-core Opteron for database server

    - by gregmac
    I'm spec'ing out a database server, and the price works out to be reasonably close between an Intel X5550 quad-core and an AMD 2425HE (2.1GHz) six-core Opteron. I've been looking for some comparisons between the two, but the only thing useful I've found is an AnandTech review of the 2435, which compares it to Intel Xeons but concludes they both have their place. My load is MS SQL Server 2008, with an OLTP database that has about an equal amount of reading/writing (and it's a reasonably heavy load). So my question is, what is going to work better in this situation, assuming the drives are the same: a Xeon X5550 with 16GB of 1333MHz RAM (Dell R510), or an Opteron 2425HE (2.1GHz) or 2439SE (2.8GHz) with 16GB of 800MHz RAM (Dell 2970)? Note: the 2439 adds $500, but the overall pricing works out so that it's not that much more than the R510 (using the 2425HE, the Dell 2970 server is slightly less than the similarly-equipped R510). If it adds a decent amount of performance, it's worthwhile to go faster. (Single CPU, in both cases.)

    Read the article

  • nginx + php-fpm “504 Gateway Time-out” after compiling with curl support

    - by Brian
    We recently switched to managing PHP with php-fpm. It was working great, but is now giving me issues. The most recent change was to install libcurl-devel and re-compile PHP (5.3.3) using --with-curl. Now I'm getting 504 timeouts with nginx and the pages won't load. HTML pages load fine, and phpinfo() loads as well. I tried backing out the changes and re-compiling without curl support, but I'm still not having any luck. I also tried adjusting request_terminate_timeout per some of the other posts here on SF, without change. This is on a test machine that has no other clients hitting it. I also tried switching to a Unix socket instead of TCP -- same result. What am I missing here? Am I barking up the wrong tree with curl?

    Read the article

  • Windows Server Configuration with Exchange, SQL Express and IIS

    - by Reafidy
    In our small office we are currently running a standalone tower server with WS 2008 R2, SQL Express and IIS. This server is going to be decommissioned and scrapped, as it's old and very noisy. We are going to purchase a new server with WS 2012 Standard and a heap of RAM. It will still be a standalone server, so it will be a domain controller and have SQL Express and IIS installed. We intend to install the Hyper-V role and host a second virtual server to distribute the load. We are a small company and have only 15 staff members, so it's not a huge load on the server. Can a single server handle this type of installation? We don't want to purchase two servers. If so, how should it be configured with regard to which software packages should be virtualized (if any)? Redundancy is not a huge issue for us.

    Read the article

  • What response should be made to a continued web-app crack attempt?

    - by Tchalvak
    I'm having issues with a continuous, concerted cracking attempt on a website (coded in PHP). The main problem is SQL-injection attempts; the site runs on a Debian server. A secondary effect of the problem is being spidered or repeatedly spammed with URLs that, though a security hole has been closed, are still obviously related attempts to crack the site, continue to add load to the site, and thus should be blocked. So what measures can I take to: A: block known intruders/known attack machines (notably ones making themselves anonymous via botnets or relaying servers) to prevent their repeated, continuous, timed access from affecting the load of the site, and B: report and respond to the attack (I'm aware that reporting to law enforcement is almost certainly futile, as may be reporting to the IP/machine where the attacks are originating, but other responses to take would be welcome).

    Read the article

  • IIS won't start

    - by doekman
    When I start up my computer, pages from my local webserver (IIS) won't load. In the Event Viewer (System log) I get the error below. The only way I get IIS to work again is to issue an iisreset.exe. I have Windows XP. Any ideas how to fix this?

        Event Type: Warning
        Event Source: W3SVC
        Event Category: None
        Event ID: 36
        Date: 27-4-2010
        Time: 10:30:47
        User: N/A
        Computer: NLD4P4Z2J
        Description: The server failed to load application '/LM/W3SVC'. The error was '8007045b'.
        For additional information specific to this message please visit the Microsoft Online Support site located at: http://www.microsoft.com/contentredirect.asp.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • Recommended method for XML level loading in XNA

    - by David Saltares Márquez
    I want to use Blender as my level design tool for an XNA game. Using an existing plugin, I can export my levels to the DotScene format, which is basically an XML file like this one:

        <scene formatVersion="1.0.0">
          <nodes>
            <node name="scene-staircase.001">
              <position x="10.500000" y="1.400000" z="-9.600000"/>
              <quaternion x="0.000000" y="0.000000" z="-0.000000" w="1.000000"/>
              <scale x="1.000000" y="1.000000" z="1.000000"/>
              <entity name="scene-staircase.001" meshFile="staircase.mesh"/>
            </node>
            <node name="Lamp.003">
              <position x="11.024290" y="5.903862" z="9.658987"/>
              <quaternion x="-0.284166" y="0.726942" z="0.342034" w="0.523275"/>
              <scale x="1.000000" y="1.000000" z="1.000000"/>
              <light name="Spot.003" type="point">
                <colourDiffuse r="0.400000" g="0.154618" b="0.145180"/>
                <colourSpecular r="0.400000" g="0.154618" b="0.145180"/>
                <lightAttenuation range="5000.0" constant="1.000000" linear="0.033333" quadratic="0.000000"/>
              </light>
            </node>
            ...
          </nodes>
        </scene>

    Using naming conventions I could easily parse the file and load the corresponding in-game content. I am new to XNA and I have seen that there are several methods for loading XML files into a game, like serializing and deserializing. Which one would you recommend?

    Read the article

  • Composite Moon Map Offers Stunning Views of the Lunar Surface [Astronomy]

    - by Jason Fitzpatrick
    Researchers at Arizona State University have stitched together a massive high-resolution map of the moon; see the moon in astounding detail. Using images from the Lunar Reconnaissance Orbiter (LRO), they carefully stitched together a massive map of the moon with a higher resolution than the public has ever seen before: The WAC has a pixel scale of about 75 meters, and with an average altitude of 50 km, a WAC image swath is 70 km wide across the ground-track. Because the equatorial distance between orbits is about 30 km, there is nearly complete orbit-to-orbit stereo overlap all the way around the Moon, every month. Using digital photogrammetric techniques, a terrain model was computed from this stereo overlap. Hit up the link below to check out the images and the process they used. Lunar Topography as Never Seen Before [via NASA]

    Read the article

  • Stack-based keyboard delay using Logitech MX3100 keyboard

    - by Mark S. Rasmussen
    I've been using a Logitech Cordless Desktop MX3100 keyboard for quite a while. I've never really had any problems, except for the occasional typo. I noticed, however, that I tended to make the typo "Laod" instead of "Load" quite a bit more often than any other typo. As it started to get on my nerves, I decided to do some testing. What I found out was that when I write lowercase "load", I'd never make the typo. All uppercase, or just an uppercase L, and I'd make the typo quite often. My actual (very scientific) testing is probably best described by showing the output:

        moatmoatmoat MoatMoatMoat
        loatloatloat LaotLaotLaot
        loafloafloaf LaofLaofLaof
        hoathoathoat HoatHoatHoat
        hoadhoadhoad HoadHoadHoad
        lortlortlort LrotLrotLrot

    What I found out was that whenever shift was depressed, typing an uppercase "L" would induce a significant lag if the next character was an "o", compared to the lag of any other key:

        High "o" lag: LoLoLoLoLoLo
        No "a" lag: LaLaLaLaLaLa
        No lag for either "o" or "a": lolololololo lalalalalala

    By realizing this I regained a slight bit of sanity, as I knew I wasn't coming down with a case of Parkinson's. I was actually typing correctly; the lag just caused it to be interpreted wrongly. Now, what really bugs me is that I can't fathom how this is occurring. What I'm actually typing, in physical order, is this: L - o - a - d, and yet the "a" is output before the "o", even though "o" was pressed before "a". So while the keyboard is processing the "Lo" combo, the "a" gets prioritized and is inserted before the "o" is done processing, resulting in "Laod" instead of "Load". And this only happens when typing "Lo", not when typing lowercase "lo". This problem could stem from the keyboard hardware, the receiver hardware or the keyboard software driver. No matter where the fault lies, however, I can't imagine how this could be implemented as anything but a FIFO queue. A general delay, sure, I could live with that, albeit I'd be irritated. But a lag affecting different keys differently, and even resulting in unpredictable outcomes - that just doesn't make any sense. I've solved the problem by just switching to a wired keyboard. I just can't shake it off me though; what kind of bug/error/scenario would result in a case like this? Edit: It's been suggested that I stop drinking Red Bull and stick to water instead. While that may actually help solve the issue, I'm really not looking for a solution as such. I'm more interested in an explanation of how this could happen, as I can't imagine any viable technical solution that could result in this behavior.

    Read the article

  • Linux Firefox Flash debug: I don't get alerts when errors occur

    - by ufk
    Hello. I use Firefox 3.6.13 on Linux (Gentoo, amd64) and the Flash 10.2 debug version; I also have Firebug, Flashbug and FlashFirebug installed. My problem is that whenever I run a Flash application that encounters an error, the browser does not open an alert window. It makes it more difficult to debug applications because I need to make sure that the Firebug window is always open. The only way I can see the error is if I open the Flash tab in Firebug. Is there a way to change this behavior? I checked .xsession-errors and tried running Firefox from the console; these are the only errors that I see in the log, and they do not appear when the Flash app encounters an error, which means they are not related:

        Gtk-Message: Failed to load module "canberra-gtk-module": libcanberra-gtk-module.so: cannot open shared object file: No such file or directory
        Gtk-Message: Failed to load module "gnomebreakpad": libgnomebreakpad.so: cannot open shared object file: No such file or directory

    Thanks.

    Read the article

  • CentOS Failover Cluster - SIOCADDRT: No such process (when adding a loopback)

    - by Steve Rolfe
    I'm trying to configure two web servers behind a load balancing server. The load balancing aspect works fine (it sees both servers, kills them if it needs to, and seems to direct traffic fine). The only issue is with the servers' loopback configuration, /etc/sysconfig/network-scripts/ifcfg-lo:0:

        DEVICE=lo:0
        IPADDR=<Virtual IP>
        NETMASK=255.255.255.255
        ONBOOT=yes
        NAME=loopback

    Every time I try a "service network restart" I get a SIOCADDRT: No such process when loading the loopback interface. Anyone have an idea what's causing this?

    Read the article

  • Web Application Tasks Estimation

    - by Ali
    I know the answer depends on the exact project and its requirements, but I am asking about the average percentage of the total time that goes into each task of web application development, statistically, from your own experience. While developing a (database-driven) web application, how much of the time does each of the following activities usually take:

    -- database creation and all related stored procedures
    -- server-side development
    -- client-side development
    -- layout settings and design

    I know there are lots of great web application developers around here, and each one of you has done a fair amount of web development; as a result there could be an almost fixed percentage of time going to each of the above activities for standard projects.

    Update: I am not expecting someone to tell me a number of hours; I am asking about the average percentage of time that goes into each of the activities, as per your experience, i.e. server-side development 50%, client-side development 20%, and so on. I repeat: there will be lots of cases that differ from the standard depending on the exact requirements of each web application project, but here I am asking about the average for a standard (no special requirements) web project.

    Read the article

  • Trouble with cross-network permissions for an image through IIS7 in an ASP.NET virtual directory

    - by EdenMachine
    I have load-balanced web servers. My web application has a function that allows the user to upload their company logo to display in the application header. Obviously, when they upload the logo image file, it needs to be in a central location; otherwise, the file will not be accessible to the other server on the load balancer. In order to be able to upload the image through the application on either of the servers and then display it on both servers, I need a virtual directory in IIS on both servers that points to a third "file server" (this is the "AcctData" directory shown below, with a sub-folder "images"). The problem is that no matter what I do, I run into a permissioning issue. If I use pass-through authentication I get a 401 error. If I use a specific user that's set up on both boxes, I get a 500 error. The "Default" application under "CP" (shown in the image below) uses an AppPool that runs as a domain account with admin permissions on all the servers. I've also tried sticking a Web.config file in the "AcctData" directory allowing anonymous access. Nothing is working though.

    Read the article

  • How to test web application performance from other continent?

    - by Thomas Einwaller
    We are hosting our web application http://timr.com on a server located in Germany. The server handles a high load of traffic very well and everything works as desired in terms of performance and load times. However, we sometimes get complaints from our overseas users (US, South America) that they experience slow page loading times. What would be the best way to test the performance of a web application "as if you were on another continent"? I want to make sure that the distance between the server and the user is not a problem.

    Read the article

  • The Unspoken - The Why of GC Ergonomics

    - by jonthecollector
    Do you use GC ergonomics, -XX:+UseAdaptiveSizePolicy, with the UseParallelGC collector? The gist of GC ergonomics for that collector is that it tries to grow or shrink the heap to meet a specified goal. The goals that you can choose are maximum pause time and/or throughput. Don't get too excited there. I'm speaking about UseParallelGC (the throughput collector), so there are definite limits to what pause goals can be achieved. When you say out loud "I don't care about pause times, give me the best throughput I can get" and then say to yourself "Well, maybe 10 seconds really is too long", then think about a pause time goal. By default there is no pause time goal and the throughput goal is high (98% of the time doing application work and 2% of the time doing GC work). You can get more details on this in my very first blog, GC ergonomics. UseG1GC has its own version of GC ergonomics, but I'll be talking only about the UseParallelGC version.

    If you use this option and want to know what it (GC ergonomics) is thinking, try

        -XX:AdaptiveSizePolicyOutputInterval=1

    This will print out information every i-th GC (above, i is 1) about what GC ergonomics is trying to do. For example:

        UseAdaptiveSizePolicy actions to meet *** throughput goal ***
        GC overhead (%)
        Young generation: 16.10 (attempted to grow)
        Tenured generation: 4.67 (attempted to grow)
        Tenuring threshold: (attempted to decrease to balance GC costs) = 1

    GC ergonomics tries to meet (in order) the pause time goal, the throughput goal, and minimum footprint. The first line says that it's trying to meet the throughput goal:

        UseAdaptiveSizePolicy actions to meet *** throughput goal ***

    This run has the default pause time goal (i.e., no pause time goal), so it is trying to reach 98% throughput. The lines

        Young generation: 16.10 (attempted to grow)
        Tenured generation: 4.67 (attempted to grow)

    say that we're currently spending about 16% of the time doing young GC's and about 5% of the time doing full GC's. These percentages are a decaying, weighted average (earlier contributions to the average are given less weight). The source code is available as part of the OpenJDK, so you can take a look at it if you want the exact definition. GC ergonomics is trying to increase the throughput by growing the heap (so says the "attempted to grow"). The last line

        Tenuring threshold: (attempted to decrease to balance GC costs) = 1

    says that the ergonomics is trying to balance the GC times between young GC's and full GC's by decreasing the tenuring threshold. During a young collection the younger objects are copied to the survivor spaces while the older objects are copied to the tenured generation. Younger and older are defined by the tenuring threshold. If the tenuring threshold is 4, an object that has survived fewer than 4 young collections (and has remained in the young generation by being copied to the part of the young generation called a survivor space) is younger and is copied again to a survivor space. If it has survived 4 or more young collections, it is older and gets copied to the tenured generation. A lower tenuring threshold moves objects more eagerly to the tenured generation; conversely, a higher tenuring threshold keeps copying objects between survivor spaces longer. The tenuring threshold varies dynamically with the UseParallelGC collector. That is different from our other collectors, which have a static tenuring threshold. GC ergonomics tries to balance the amount of work done by the young GC's and the full GC's by varying the tenuring threshold. Want more work done in the young GC's? Keep objects longer in the survivor spaces by increasing the tenuring threshold.

    This is an example of the output when GC ergonomics is trying to achieve a pause time goal:

        UseAdaptiveSizePolicy actions to meet *** pause time goal ***
        GC overhead (%)
        Young generation: 20.74 (no change)
        Tenured generation: 31.70 (attempted to shrink)

    The pause goal was set at 50 millisecs and the last GC was

        0.415: [Full GC (Ergonomics) [PSYoungGen: 2048K->0K(26624K)] [ParOldGen: 26095K->9711K(28992K)] 28143K->9711K(55616K), [Metaspace: 1719K->1719K(2473K/6528K)], 0.0758940 secs] [Times: user=0.28 sys=0.00, real=0.08 secs]

    The full collection took about 76 millisecs, so GC ergonomics wants to shrink the tenured generation to reduce that pause time. The previous young GC was

        0.346: [GC (Allocation Failure) [PSYoungGen: 26624K->2048K(26624K)] 40547K->22223K(56768K), 0.0136501 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]

    so the pause time there was about 14 millisecs and no changes are needed. If trying to meet a pause time goal, the generations are typically shrunk. With a pause time goal in play, watch the GC overhead numbers and you will usually see the cost of setting a pause time goal (i.e., throughput goes down). If the pause goal is too low, you won't achieve your pause time goal and you will spend all your time doing GC.

    GC ergonomics is meant to be simple because it is meant to be used by anyone. It was not meant to be mysterious, and so this output was added. If you don't like what GC ergonomics is doing, you can turn it off with -XX:-UseAdaptiveSizePolicy, but be pre-warned that you then have to manage the size of the generations explicitly. If UseAdaptiveSizePolicy is turned off, the heap does not grow. The size of the heap (and the generations) at the start of execution is always the size of the heap. I don't like that and tried to fix it once (with some help from an OpenJDK contributor), but it unfortunately never made it out the door. I still have hope, though.

    Just a side note: with the default throughput goal of 98%, the heap often grows to its maximum value and stays there. Definitely reduce the throughput goal if footprint is important. Start with -XX:GCTimeRatio=4 for a more modest throughput goal (20% of the time spent in GC). A higher value means a smaller amount of time in GC (as the throughput goal).
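    For the GCTimeRatio arithmetic (my own reading of the flag, but it is consistent with the 20% figure above): the flag expresses the desired ratio of application time to GC time, so the fraction of total time allowed in GC works out to

        GC time fraction = 1 / (1 + GCTimeRatio)
        GCTimeRatio = 4  =>  1 / (1 + 4) = 1/5 = 20% of total time in GC

    which is why a higher GCTimeRatio means less time spent in GC.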

    Read the article

  • Wow Twitter!!! Ten billion and counting

    - by samsudeen
    Twitter, the micro-blogging site, crossed the ten billion tweet milestone on the 4th of this month, as per the report by GigaTweet (a site which tracks the number of tweets posted on Twitter). The person who sent the 10 billionth tweet is still unknown, as his profile is protected, but the 9,999,999,999th tweet was sent by one Rafaela Marques from Brazil. As you can see, GigaTweet expects just another 196 days to reach the 20 billion mark if tweeting continues at the current pace. Some interesting statistics about the rate at which people tweeted each year:

    2007 – 5,000 tweets per day
    2008 – 300,000 tweets per day
    2009 – 2.5 million tweets per day

    It reached an average of 35 million tweets per day by the end of 2009. Today, believe it or not, the tweet rate is 50 million tweets per day, and that's why we say Wow Twitter!!!

    Read the article

  • Web Fonts in Firefox behind Squid

    - by sinping
    It appears as though Firefox requests web fonts (specifically Google's) in a different manner than other browsers. They fail to load when behind Squid as a result, and I'm trying to isolate why. Using this page as a reference in various browsers, Firefox noticeably displays it differently. In the Squid logs, you see hits like this:

        TCP_DENIED/407 3292 GET http://themes.googleusercontent.com/font? - NONE/- text/html

    If you load the same page with IE, you see hits like this:

        TCP_DENIED/407 4389 GET http://themes.googleusercontent.com/static/fonts/federant/v1/C109bUmZeyhh-vIXq9lNfvesZW2xOQ-xsNqO47m55DA.woff - NONE/- text/html

    So why the difference, and how can we make it play nice with Firefox? This is running Squid 3.1.15, but it was also tested with a few other 3.1.x versions.

    Read the article

  • Where is the bottleneck?

    - by jsymon
    There is a limit on connections somewhere along the line here... On a Windows Server 2008 machine, each request to a URL running on localhost takes ~3 seconds to complete. This is fine and normal for the URL. However, if I open the same localhost URL in about 10 tabs and set them to reload all at the same time, they finish sequentially, 3 seconds after each other, meaning the last tab has taken 30 seconds to load (3s x 10). What is especially odd is that Firebug reports each page as taking 3 seconds to load. Another point to add is that the status bar just sits at 'done' for the last tab until 3 seconds before completing, where it then changes to 'waiting for localhost'. I am praying there is some connection limit somewhere, otherwise this would be a disaster if more than one user ever visited the site at a time! Maybe there is a limit where one PC can't make more than 2 simultaneous requests to a URL at a given time?

    Read the article

  • Developing a long pannable, sprite-animated Windows Store app

    - by Groo
    I am creating my first Windows Store app in XAML, and I cannot seem to find a proper example for the requirements I have (I have spent a couple of days fiddling around, so I apologize if I missed something obvious). The basic idea of the app is to have a large scrollable canvas which would lazily start animating the visible parts of the view as soon as the user stops panning over certain content (with some audio played as well). My original idea was to use a StackPanel to add a bunch of custom controls, each of which would then animate itself once visible (with a short delay), but I have a couple of concerns:

    1. If the entire canvas is ~50 screen widths wide, is it feasible to load all content at the beginning, or do I need to plan on doing some lazy loading during scrolling? For example, when I select a certain region in the Bing Travel app, it seems to lazily load tiles as I scroll it towards the end.
    2. Since the content is stretched 100% vertically, and these animations are vectorized to be resolution independent, I am not sure whether XAML (CompositionTarget) will be able to handle this, or whether I have to go for DirectX (MonoGame or C++) to get rid of flicker.

    Even better, is there an example for Windows 8 which uses a 100% vertically sized GridView with custom animated controls inside?

    Read the article

  • Troubleshooting VC++ DLL in VB.Net

    - by Jolyon
    I'm trying to make a solution in Visual Studio that consists of a VC++ DLL and a VB.Net application. To figure this out, I created a VC++ Class Library project, with the following code (I removed all the junk the wizard creates):

    mathfuncs.cpp:

        #include "MathFuncs.h"

        namespace MathFuncs
        {
            double MyMathFuncs::Add(double a, double b)
            {
                return a + b;
            }
        }

    mathfuncs.h:

        using namespace System;

        namespace MathFuncs
        {
            public ref class MyMathFuncs
            {
            public:
                static double Add(double a, double b);
            };
        }

    This compiles quite happily. I can then add a VC++ console project to the solution, add a reference to the original project for this new project, and call it as follows:

    test.cpp:

        using namespace System;

        int main(array<System::String ^> ^args)
        {
            double a = 7.4;
            int b = 99;
            Console::WriteLine("a + b = {0}", MathFuncs::MyMathFuncs::Add(a, b));
            return 0;
        }

    This works just fine, and will build to test.exe and mathsfuncs.dll. However, I want to use a VB.Net project to call the DLL. To do this, I add a VB.Net project to the solution, make it the startup project, and add a reference to the original project. Then, I attempt to use it as follows:

        MsgBox(MathFuncs.MyMathFuncs.Add(1, 2))

    However, when I run this code, it gives me an error: "Could not load file or assembly 'MathFuncsAssembly, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format." Do I need to expose the method somehow? I'm using Visual Studio 2008 Professional.

    Read the article
