Search Results

Search found 18148 results on 726 pages for 'performance monitor'.

Page 234 of 726.

  • Ubuntu Lucid: Erratic screen behaviour after boot

    - by fgysin
    In short: about 50% of the time I have a screwed-up monitor setup after reboot; the other 50% it is totally correct.

    Now the longer version: I updated my machine from 9.04 to 10.04 (via 9.10). At first I ran into some monitor problems (I have a 3-monitor setup) because of the known bug in the new X server driver for Xinerama. This messes up behaviour if the mouse goes either left of or above screen number 0, i.e. I had to make my left-most monitor screen 0. Everything worked out fine in the end: I got my 3-monitor setup back with Xinerama enabled, giving one big desktop stretched over 3 screens.

    Now the fun part: every time I start up my machine, only one of the 3 monitors gets a signal and is woken up: it only recognizes the left-most monitor (screen 0) and crams all the desktop stuff into that one screen. If I go into the NVIDIA settings I only see one physical device, although all 3 are connected and have power. When I look into xorg.conf I can still see my old setup with 3 devices, 3 screens, Xinerama active, etc., but I was totally unable to get 3 monitors to work. (I tried unplugging monitors, reconfiguring the whole NVIDIA setup, ...)

    But it gets even better: when I restart my machine (i.e. choose the restart option from the Ubuntu menu) it shuts down and tries to restart. The restart then gets stuck after showing the Ubuntu splash screen with the 'loading bar' (the moving-dots thingy) and I am forced to kill the machine by cutting power. But after the power cut the machine boots up normally and suddenly I get my 3-monitor setup back, working. That is until the next time I shut down and start up, when it all starts over again and I only have one monitor (see above).

    I really have a hard time seeing where the error is. It must be that the restart boot somehow differs from the 'normal' boot. But the fact that it gets stuck and I need to cut power, which then basically triggers a 'normal' boot, does not really support this theory...

    My setup (please tell me if you need further info):
        - 3 monitors as 3 screens as one desktop (with Xinerama)
        - 2 NVIDIA cards, where screens 0 and 1 are on card 0 and screen 2 is on card 1
        - Ubuntu 10.04 Lucid Lynx (updated from 9.10, 9.04, ...)

    I would appreciate every idea on the subject; at the moment I really don't have any clue what to do...
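
    (The xorg.conf layout in question would be of this general shape; this is my reconstruction for illustration, with hypothetical identifiers, not configuration taken from the post:

        Section "ServerLayout"
            Identifier "TripleHead"
            Screen 0 "Screen0" 0 0
            Screen 1 "Screen1" RightOf "Screen0"
            Screen 2 "Screen2" RightOf "Screen1"
            Option "Xinerama" "on"
        EndSection
    )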

    Read the article

  • Corosync :: Restarting some resources after Lan connectivity issue

    - by moebius_eye
    I am currently looking into Corosync to build a two-node cluster. I've got it working fine, and it does what I want it to do, which is:

        - Lost connectivity between the two nodes gives the first node, '10node', both failover WAN IPs (i.e. resources WanCluster100 and WanCluster101).
        - '11node' does nothing. It "thinks" it still has its failover WAN IP (i.e. WanCluster101).

    But it doesn't do this:

        - '11node' should restart the WanCluster101 resource when the connectivity with the other node is back.

    This is to prevent a condition where '10node' simply dies (and thus does not get '11node's failover WAN IP), resulting in a situation where neither of the nodes has '10node's failover IP, because '10node' is down and '11node' has "given back" its failover WAN IP.

    Here's the current configuration I'm working on:

        node 10sch \
                attributes standby="off"
        node 11sch \
                attributes standby="off"
        primitive LanCluster100 ocf:heartbeat:IPaddr2 \
                params ip="172.25.0.100" cidr_netmask="32" nic="eth3" \
                op monitor interval="10s" \
                meta is-managed="true" target-role="Started"
        primitive LanCluster101 ocf:heartbeat:IPaddr2 \
                params ip="172.25.0.101" cidr_netmask="32" nic="eth3" \
                op monitor interval="10s" \
                meta is-managed="true" target-role="Started"
        primitive Ping100 ocf:pacemaker:ping \
                params host_list="192.0.2.1" multiplier="500" dampen="15s" \
                op monitor interval="5s" \
                meta target-role="Started"
        primitive Ping101 ocf:pacemaker:ping \
                params host_list="192.0.2.1" multiplier="500" dampen="15s" \
                op monitor interval="5s" \
                meta target-role="Started"
        primitive WanCluster100 ocf:heartbeat:IPaddr2 \
                params ip="192.0.2.100" cidr_netmask="32" nic="eth2" \
                op monitor interval="10s" \
                meta target-role="Started"
        primitive WanCluster101 ocf:heartbeat:IPaddr2 \
                params ip="192.0.2.101" cidr_netmask="32" nic="eth2" \
                op monitor interval="10s" \
                meta target-role="Started"
        primitive Website0 ocf:heartbeat:apache \
                params configfile="/etc/apache2/apache2.conf" options="-DSSL" \
                operations $id="Website-one" \
                op start interval="0" timeout="40" \
                op stop interval="0" timeout="60" \
                op monitor interval="10" timeout="120" start-delay="0" statusurl="http://127.0.0.1/server-status/" \
                meta target-role="Started"
        primitive Website1 ocf:heartbeat:apache \
                params configfile="/etc/apache2/apache2.conf.1" options="-DSSL" \
                operations $id="Website-two" \
                op start interval="0" timeout="40" \
                op stop interval="0" timeout="60" \
                op monitor interval="10" timeout="120" start-delay="0" statusurl="http://127.0.0.1/server-status/" \
                meta target-role="Started"
        group All100 WanCluster100 LanCluster100
        group All101 WanCluster101 LanCluster101
        location AlwaysPing100WithNode10 Ping100 \
                rule $id="AlWaysPing100WithNode10-rule" inf: #uname eq 10sch
        location AlwaysPing101WithNode11 Ping101 \
                rule $id="AlWaysPing101WithNode11-rule" inf: #uname eq 11sch
        location NeverLan100WithNode11 LanCluster100 \
                rule $id="RAND1083308" -inf: #uname eq 11sch
        location NeverPing100WithNode11 Ping100 \
                rule $id="NeverPing100WithNode11-rule" -inf: #uname eq 11sch
        location NeverPing101WithNode10 Ping101 \
                rule $id="NeverPing101WithNode10-rule" -inf: #uname eq 10sch
        location Website0NeedsConnectivity Website0 \
                rule $id="Website0NeedsConnectivity-rule" -inf: not_defined pingd or pingd lte 0
        location Website1NeedsConnectivity Website1 \
                rule $id="Website1NeedsConnectivity-rule" -inf: not_defined pingd or pingd lte 0
        colocation Never -inf: LanCluster101 LanCluster100
        colocation Never2 -inf: WanCluster100 LanCluster101
        colocation NeverBothWebsitesTogether -inf: Website0 Website1
        property $id="cib-bootstrap-options" \
                dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
                cluster-infrastructure="openais" \
                expected-quorum-votes="2" \
                no-quorum-policy="ignore" \
                stonith-enabled="false" \
                last-lrm-refresh="1408954702" \
                maintenance-mode="false"
        rsc_defaults $id="rsc-options" \
                resource-stickiness="100" \
                migration-threshold="3"

    I also have a less important question concerning this line:

        colocation NeverBothLans -inf: LanCluster101 LanCluster100

    How do I tell it that this colocation only applies to '11node'?

    Read the article

  • Dual Monitors in Ubuntu

    - by Luminose
    The only issue stopping me from moving to Linux is the dual-monitor support. If I use TwinView, maximizing an application causes it to take over both monitors, rather than maximizing within the current monitor the way it works in Windows. If I use two separate X screens, certain programs default to a specific monitor with no way of moving them to the other desktop. Has anyone else had these issues? Are there any detailed dual-monitor resources for Linux/Ubuntu I can read?

    Read the article

  • Dual Monitors Switching While Running Full-Screen Game?

    - by Phil Sandler
    I have a dual-monitor setup, and currently I can run a full-screen game (Warcraft 3 and StarCraft 2 at the moment) and still see anything I have open on my second monitor. I can't move my mouse from one monitor to the other, which is a good thing. However, I would like to know if it's possible to press some key combination to "release" my mouse from the full-screen game to my second monitor without the game minimizing (which is what happens when I Alt-Tab).

    Read the article

  • How to figure out how much RAM each prefork thread requires for maximum WordPress performance on an EC2 small instance

    - by two7s_clash
    Just read Making WordPress Stable on EC2-Micro. In the "Tuning Apache" section, I can't quite figure out how he comes up with the numbers for his prefork config. He explains how to get the numbers for an average process, which I get. But then:

        Or roughly 53MB per process... In this case, ten threads should be safe. This means that if we receive more than ten simultaneous requests, the other requests will be queued until a worker thread is available. In order to maximize performance, we will also configure the system to have this number of threads available all of the time.

    From 53MB per process, with 613MB of RAM, he somehow gets this config, which I don't get:

        <IfModule prefork.c>
            StartServers        10
            MinSpareServers     10
            MaxSpareServers     10
            MaxClients          10
            MaxRequestsPerChild 4000
        </IfModule>

    How exactly does he get this from 53MB per process with a 613MB limit?

    Bonus question: from the output below, on a small instance (1.7GB memory), what would good settings be?

        bitnami@ip-10-203-39-166:~$ ps xav | grep httpd
        1411 ?  Ss  0:00   2  0  114928  15436  0.8  /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1415 ?  S   0:06  10  0  125860  55900  3.1  /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1426 ?  S   0:08  19  0  127000  62996  3.5  /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1446 ?  S   0:05  48  0  131932  72792  4.1  /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1513 ?  S   0:05   7  0  125672  54840  3.1  /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1516 ?  S   0:02   2  0  125228  48680  2.7  /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1517 ?  S   0:06   2  0  127004  55796  3.1  /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1518 ?  S   0:03   1  0  127196  54208  3.0  /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1531 ?  R   0:04   0  0  127500  54236  3.0  /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
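
    (A sanity check of the arithmetic as I read it; the size of the non-Apache reserve is my assumption, not stated in the article:

        MaxClients ≈ (total RAM − RAM reserved for MySQL/OS) / RAM per Apache process
                   ≈ (613 MB − ~80 MB) / 53 MB
                   ≈ 10

    By the same rough formula for the bonus question, the RSS field of the ps output above shows busy children at roughly 48-73 MB each, so a 1.7 GB instance reserving, say, ~400 MB for everything else would land near (1700 − 400) / 70 ≈ 18 workers.)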

    Read the article

  • Increasing SQL Server / Sage performance with SSD? (Dell PE T410)

    - by Anthony
    I have a client wanting better performance from their Sage (Accpac & CRM) server (v5.5, soon to be v7). It's running on 1 of 2 Hyper-V VMs (Svr2008) on a Dell PE T410 server with 24GB of RAM (1333MHz) and dual quad-core CPUs, and both VMs (only their C: drives) are on a single RAID5 array. All clients connect via 1Gb Ethernet. The 2nd VM is SBS2008 with 9GB RAM (all SBS databases & company data are on a separate RAID5 array), with 3GB RAM left for the Svr2008 hypervisor.

    I've given the Sage/SQL Server VM all the RAM I can (12GB) and SQL Server RAM caching (~8GB, never exceeds ~7.5GB, i.e. the entire DB can now be cached in RAM), and that's helped significantly. Upgrading the hypervisor to Svr2012 is an obvious step, but probably not a dramatic improvement?

    What about an SSD for this Sage/SQL Server VM (VM = 100GB, <10GB for the actual live DB)? Can SSDs be put into the SAS hot-swap bays? Or will I have to use the motherboard SATA (3Gbps?) ports, or a PCI-E SSD card? Should SSDs be RAIDed for this situation? Or does SSDs' higher reliability offset the need for RAID1/5/10? (I have nightly full-disk backups.)

    New territory for me; I would appreciate some feedback. Thanks, Anthony.

    Read the article

  • Will adding an SSD cache device to my ZFS storage improve performance?

    - by Sysadminicus
    The server has 4GB of RAM and my zpool is made up of 15.5k SAS drives arranged like this:

        NAME          STATE     READ WRITE CKSUM
        tank          ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0
            c0t4d0    ONLINE       0     0     0
            c0t5d0    ONLINE       0     0     0
            c0t6d0    ONLINE       0     0     0
            c0t7d0    ONLINE       0     0     0
            c0t8d0    ONLINE       0     0     0
          raidz1-1    ONLINE       0     0     0
            c0t10d0   ONLINE       0     0     0
            c0t11d0   ONLINE       0     0     0
            c0t12d0   ONLINE       0     0     0
            c0t13d0   ONLINE       0     0     0
            c0t14d0   ONLINE       0     0     0
        spares
          c0t9d0      AVAIL
          c0t1d0      AVAIL

    The primary use is as an NFS store for a couple of VMware ESXi servers. I can't do any "true" benchmarks because this is a production system (no budget for test systems), but using dd and bonnie++ I can't get more than ~40-50MB/s writes and ~70-90MB/s reads. It seems I should be able to do much better, but I'm not sure where to optimize.

    Based on what I've read, I think dropping in an OCZ Vertex 2 Pro SSD as my L2ARC is going to be the best bang for the buck to improve throughput. Is there something else I should be looking into to help performance? If not: how do I know how big a cache device I need? Am I safe with only a single SSD as my cache device?
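
    (For reference, attaching an SSD as an L2ARC device is a single command; the device name below is hypothetical:

        # add an SSD as a cache (L2ARC) device to the existing pool
        zpool add tank cache c0t15d0

    A cache device can also be detached again with zpool remove, so trying one is low-risk.)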

    Read the article

  • Dependency Injection and Unit of Work pattern

    - by sunwukung
    I have a dilemma. I've used DI (read: factory) to provide core components for a homebrew ORM. The container provides database connections, DAOs, mappers and their resultant domain objects on request. Here's a basic outline of the Mapper and Domain Object classes:

        class Mapper {
            public function __construct($DAO) {
                $this->DAO = $DAO;
            }
            public function load($id) {
                if (isset(Monitor::$members[$id])) {
                    return Monitor::$members[$id];
                }
                $values = $this->DAO->selectStmt($id);
                // field mapping process omitted for brevity
                $Object = new Object($values);
                return $Object;
            }
        }

        class User {
            public function setName($string) {
                $this->name = $string;
                // mark modified by means fair or foul
            }
        }

    The ORM also contains a class (Monitor) based on the Unit of Work pattern, i.e.:

        class Monitor {
            private static $modified;
            private static $dirty;
            public function markClean($class);
            public function markModified($class);
        }

    The ORM class itself simply coordinates resources extracted from the DI container. So, to instantiate a new User object:

        $Container = new DI_Container;
        $ORM = new ORM($Container);
        $User = $ORM->load('user', 1);
        // at this point the container instantiates a mapper class
        // and passes a database connection to it via the constructor;
        // the mapper then takes the second argument and loads the user with that id
        $User->setName('Rumpelstiltskin');
        // at this point, User must mark itself as "modified"

    My question is this: at the point when a user sets values on a Domain Object class, I need to mark the class as "dirty" in the Monitor class. As I see it, I have one of three options:

        1. Pass an instance of the Monitor class to the Domain Object. I noticed this gets marked as recursive in FirePHP, i.e. $this->Monitor->markModified($this)
        2. Instantiate the Monitor directly in the Domain Object - does this break DI?
        3. Make the Monitor methods static, and call them from inside the Domain Object - this breaks DI too, doesn't it?

    What would be your recommended course of action (other than using an existing ORM; I'm doing this for fun...)?
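
    (A minimal sketch of option 1, with constructor wiring that is my assumption rather than code from the post:

        class User {
            private $monitor;

            // the container (or the mapper) hands every domain object the shared Monitor
            public function __construct(Monitor $monitor) {
                $this->monitor = $monitor;
            }

            public function setName($string) {
                $this->name = $string;
                $this->monitor->markModified($this); // no statics, no hidden construction
            }
        }

    The "recursive" flag in FirePHP is presumably just its dumper noticing the object cycle - User holds the Monitor, the Monitor holds User in its modified list - which is not an error in itself.)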

    Read the article

  • Is ReaderWriterLockSlim.EnterUpgradeableReadLock() essentially the same as Monitor.Enter()?

    - by Neil Barnwell
    So I have a situation where I may have many, many reads and only the occasional write to a resource shared between multiple threads. A long time ago I read about ReaderWriterLock, and I have read about ReaderWriterGate, which attempts to mitigate the issue where many incoming writes trump reads and hurt performance. However, now I've become aware of ReaderWriterLockSlim...

    From the docs, I believe that there can only be one thread in "upgradeable mode" at any one time. In a situation where the only access I'm using is EnterUpgradeableReadLock() (which is appropriate for my scenario), is there much difference from just sticking with lock(){}? Here's the excerpt:

        A thread that tries to enter upgradeable mode blocks if there is already a thread in upgradeable mode, if there are threads waiting to enter write mode, or if there is a single thread in write mode.

    Or does the recursion policy make any difference to this?
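
    (One detail the lock(){} comparison misses, shown in a minimal sketch of my own - not code from the post: while one thread holds the upgradeable lock, any number of other threads can still take plain read locks, which lock(){} would serialize:

        private static readonly ReaderWriterLockSlim Rw = new ReaderWriterLockSlim();
        private static readonly Dictionary<string, string> Cache = new Dictionary<string, string>();

        static string GetOrLoad(string key)
        {
            Rw.EnterUpgradeableReadLock();          // at most one thread in here at a time...
            try
            {
                string value;
                if (!Cache.TryGetValue(key, out value))
                {
                    Rw.EnterWriteLock();            // ...and it escalates only on a cache miss
                    try { Cache[key] = value = LoadFromSource(key); } // LoadFromSource: placeholder
                    finally { Rw.ExitWriteLock(); }
                }
                return value;
            }
            finally { Rw.ExitUpgradeableReadLock(); }
        }

    Readers elsewhere call EnterReadLock()/ExitReadLock() and are blocked only for the duration of the write itself, not for the whole upgradeable section.)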

    Read the article

  • From a performance standpoint, is there a preferred way to set up Thunderbird message filters?

    - by Steve V.
    When I used Kontact, I used filters heavily to sort my incoming email into various folders. Since switching to Thunderbird, I've been slowly recreating these filters as new mail arrives. This seems like the perfect time to rethink how I use message filters.

    The way I see it, there are two basic ways to filter messages: either I can have lots of filters, or I can have lots of criteria. An example is in order. Assume that I get emails from four people (Boss, CEO, Intern, and StackOverflow) that I want to sort into two folders, "Stack" and "Work".

    Option A:
        Filter 1: if FROM contains "Boss" -> move to "Work"
        Filter 2: if FROM contains "CEO" -> move to "Work"
        Filter 3: if FROM contains "Intern" -> move to "Work"
        Filter 4: if FROM contains "StackOverflow" -> move to "Stack"

    Option B:
        Filter 1: if FROM contains "Boss" OR FROM contains "CEO" OR FROM contains "Intern" -> move to "Work"
        Filter 2: if FROM contains "StackOverflow" -> move to "Stack"

    Assuming that when I'm done I'll have about a hundred different criteria to filter on, is one of these methods better than the other from a performance standpoint?

    Read the article

  • Does a high run queue length average result in poor performance for a web server?

    - by Domino
    I'm trying to narrow down the list of suspects for web servers that perform moderately well most of the time, with occasional bouts of poor performance. I'm analyzing the data collected and summarized by sar. I've noticed a few things, one of which is a high number of tasks in the run queue.

        10:15:01 AM  runq-sz  plist-sz  ldavg-1  ldavg-5  ldavg-15  blocked
        10:25:01 AM        2       150     0.05     0.05      0.06        0
        10:35:01 AM        4       149     0.08     0.12      0.09        0
        10:45:01 AM        6       150     0.13     0.19      0.15        0
        10:55:01 AM        1       150     0.08     0.10      0.13        0
        11:05:01 AM        4       150     0.20     0.35      0.23        0
        11:15:01 AM        3       149     0.02     0.09      0.15        0
        11:25:01 AM        7       149     0.04     0.05      0.11        0
        11:35:01 AM        4       150     0.14     0.15      0.13        0
        11:45:01 AM        6       150     0.27     0.18      0.16        0
        11:55:01 AM        5       150     0.08     0.10      0.13        0
        12:05:01 PM        3       149     0.35     0.40      0.26        0
        12:15:01 PM       19       155     0.02     0.10      0.16        1
        12:25:01 PM        2       150     0.00     0.07      0.12        0
        12:35:02 PM        3       151     0.58     0.24      0.17        0
        12:45:01 PM        8       150     0.02     0.13      0.15        0
        12:55:01 PM        6       149     0.81     0.29      0.18        0
        01:05:01 PM        3       148     0.00     0.09      0.13        0
        01:15:01 PM        7       149     0.00     0.04      0.11        0

    I believe these are 10-minute averages. Is this an indicator that the web server is not performing as fast as it could if the average run queue length were lower?

    Read the article

  • Performance-optimizing Oracle 10g on a server that is also a Tomcat JSP app server?

    - by PKHunter
    I have inherited a simple RedHat 5 64-bit platform. It has SCSI disks in RAID1, 16GB of RAM, and a dual-core CPU, running Oracle 10g Release 2. This would be a decent platform for running the DB only, perhaps, but the same server in a very simple "A-A mode" cluster also runs Tomcat, and there are several Java servlets running on it. Sadly there is no caching platform of any kind; we only use an external CDN for some HTML caching. I am personally more familiar with web environments on the LAMPP platform (Apache, PHP, MySQL, PostgreSQL).

    PROBLEM: Because the server runs both Tomcat JSP/Java and Oracle 10g, with no caching, I have the server going down. Often, sadly.

    QUESTION: What are my options for improving the performance of all these different apps?

        - Connection pooling? For example, in the PostgreSQL world we have PgBouncer, which really helps things. Does Oracle have something similar? Or is there a well-known Java-based external pooler that people use in production environments? (I'm not familiar with Java.)
        - Any "SQL cache", as in the MySQL and PostgreSQL world?
        - Any other kind of application cache, like "APC" or "eAccelerator" in the PHP world? The "OSCache" stuff from the Java world (a JSP thingie I found on Google: http://onjava.com/pub/a/onjava/2005/01/05/jspcache.html?page=2)?
        - What else?

    Sorry if this is a noob question. I have googled and googled, but the problem is I don't know what to google for, other than the broad general concepts above. So if not full answers, I would even appreciate basic pointers, and I am happy to JFGI myself. Thanks!

    Read the article

  • How to improve network performance between two Win 2008 KVM guests that already have the virtio driver?

    - by taazaa
    I have two physical servers with Ubuntu 10.04 Server on them. They are connected with a 1Gbps card over a gigabit switch. Each of these host servers has one Win 2008 guest VM. Both VMs are well provisioned (4 cores, 12GB RAM) and use RAW disks. My ASP.NET/SQL Server applications are running much slower compared to very similar physical setups.

    Both machines are set up to use virtio for disk and network. I used iperf to check network performance and I get:

        Physical host 1  -->  Physical host 2:  957 Mbit/s
        Physical host 1  -->  Win 08 guest 1:   557 Mbit/s
        Win 08 guest 1   -->  Physical host 1:  182 Mbit/s
        Win 08 guest 1   -->  Win 08 guest 2:   111 Mbit/s

    My app is running on Win 08 guest 1 and guest 2 (web and DB). There is a huge drop in network throughput (almost 90%) between the two guests. Furthermore, the throughput does not seem to be symmetric between host and guest either. The CPU utilization on the guests and hosts is less than 2% right now (we are just testing). Apart from this, there have been random slowdowns in the network to as low as 1 Mbit/s, making the whole application unusable. Any help troubleshooting this would be appreciated.
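
    (For anyone reproducing the measurement: the numbers above are what iperf's stock client/server mode reports; the address below is a placeholder:

        receiver$ iperf -s
        sender$   iperf -c 192.0.2.10
    )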

    Read the article

  • Best Practices - which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains).

    One question that frequently comes up is "which types of domain should I use to run applications?" There used to be a simple answer in most cases: "only run applications in guest domains", but enhancements to T-series servers, Oracle VM Server for SPARC, and the advent of SPARC SuperCluster have made this question more interesting and worth qualifying differently. This article reviews the relevant concepts and provides suggestions on where to deploy applications in a logical domains environment.

    Review: division of labor and types of domain

    Oracle VM Server for SPARC offloads many functions from the hypervisor to domains (also called virtual machines). This is a modern alternative to using a "thick" hypervisor that provides all virtualization functions, as in traditional VM designs. It permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, which further improves reliability and security. In this architecture, management and I/O functionality are provided within domains. Oracle VM Server for SPARC does this by defining the following types of domain, each with their own roles:

        - Control domain: the management control point for the server, used to configure domains and manage resources. It is the first domain to boot on a power-up, is an I/O domain, and is usually a service domain as well.
        - I/O domain: has been assigned physical I/O devices: a PCIe root complex, a PCI device, or an SR-IOV (Single Root I/O Virtualization) function. It has native performance and functionality for the devices it owns, unmediated by any virtualization layer.
        - Service domain: provides virtual network and disk devices to guest domains.
        - Guest domain: a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.

    Typical deployment

    A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain, in order to leverage the available PCI busses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain, so it doesn't "waste" the I/O resources it uses.

    A simple configuration consists of a control domain that is also the one I/O and service domain, plus some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure: guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain, I/O path, or device doesn't result in an application outage. This is also used for "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multipath I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains.

    Changing application deployment patterns

    The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O busses, so there is more I/O capacity that can be used for applications. Increased T-series server capacity made it attractive to run more vertical applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the SPARC SuperCluster engineered system, announced a year ago at Oracle OpenWorld. In SPARC SuperCluster, I/O domains are used for high-performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric.

    Another technical enhancement is the introduction of Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV), which make it possible to give domains direct connections and native I/O performance for selected I/O devices. A domain with either a DIO or SR-IOV device is an I/O domain. In summary: not all I/O domains own PCI complexes, and there are increasingly more I/O domains that are not service domains. They use their I/O connectivity for the performance of their own applications.

    However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command has to be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. To recap:

        - Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration.
        - I/O domains can be used for applications with high performance requirements. This is used to great effect in SPARC SuperCluster and in general T4 deployments. Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV) make this more attractive by giving direct I/O access to more domains.
        - Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect other domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so an interruption of service in one service domain does not cause an application outage.
        - The control domain should in general not be used to run applications, for the same reason. SPARC SuperCluster uses the control domain for applications, but it is an exception: it's not a general-purpose environment; it's an engineered system with specifically configured applications optimized for performance.

    These are recommended "best practices" based on conversations with a number of Oracle architects. Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.

    Summary

    Higher-capacity T-series servers have made it more attractive to use them for applications with high resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide higher performance for critical applications.
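
    (For readers new to the tooling, provisioning the "guest domain with virtual I/O" case the article recommends looks roughly like this; the domain, switch, and volume names are illustrative, not from the article:

        # create a guest domain and give it CPU and memory
        ldm add-domain ldg1
        ldm add-vcpu 8 ldg1
        ldm add-memory 8G ldg1
        # virtual network and disk, backed by services in the control/service domain
        ldm add-vnet vnet1 primary-vsw0 ldg1
        ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1
        ldm bind ldg1
        ldm start ldg1
    )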

    Read the article

  • How to monitor program code execution? (file creation and modification by code lines, etc.)

    - by infant programmer
    My program is about triggering XSL transformations. It is a fact that the code carrying out the transformation creates some DLL and TMP files and deletes them pretty soon after the transformation is completed. It is almost impossible for me to trace the creation and deletion of these files manually, so I want to include some chunk of code lines to display "which code line has created/modified which TMP and DLL files" in the console window. This is the relevant part of the code:

        string strXmlQueryTransformPath = @"input.xsl";
        string strXmlOutput = string.Empty;
        StringReader srXmlInput = null;
        StringWriter swXmlOutput = null;
        XslCompiledTransform xslTransform = null;
        XPathDocument xpathXmlOrig = null;
        XsltSettings xslSettings = null;
        MemoryStream objMemoryStream = null;

        objMemoryStream = new MemoryStream();
        xslTransform = new XslCompiledTransform(false);
        xpathXmlOrig = new XPathDocument("input.xml");
        xslSettings = new XsltSettings();
        xslSettings.EnableScript = true;
        xslTransform.Load(strXmlQueryTransformPath, xslSettings, new XmlUrlResolver());
        xslTransform.Transform(xpathXmlOrig, null, objMemoryStream);
        objMemoryStream.Position = 0;
        StreamReader objStreamReader = new StreamReader(objMemoryStream);
        strXmlOutput = objStreamReader.ReadToEnd();
        // make use of Data in string "strXmlOutput"

    Google and MSDN searches couldn't help me much...
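
    (A sketch of one way to log the temp-file churn while the transform runs, using FileSystemWatcher; watching the process temp directory is my assumption about where the engine writes:

        using System;
        using System.IO;

        // log created/deleted files in the temp directory during the transform
        FileSystemWatcher watcher = new FileSystemWatcher(Path.GetTempPath());
        watcher.IncludeSubdirectories = false;
        watcher.Created += delegate(object s, FileSystemEventArgs e) { Console.WriteLine("created: " + e.FullPath); };
        watcher.Deleted += delegate(object s, FileSystemEventArgs e) { Console.WriteLine("deleted: " + e.FullPath); };
        watcher.EnableRaisingEvents = true;
        // ... run xslTransform.Load(...) / Transform(...) here ...
        watcher.EnableRaisingEvents = false;

    This won't attribute files to individual code lines by itself, but toggling EnableRaisingEvents (or writing distinct log markers) around each suspect call narrows it down call by call.)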

    Read the article

  • Best way to run multiple queries per second on a database, performance-wise?

    - by Michael Joell
    I am currently using Java to insert and update data multiple times per second. Never having used databases with Java, I am not sure what is required, or how to get the best performance. I currently have a method for each type of query I need to do (for example, updating a row in a database). I also have a method to create the database connection. Below is my simplified code.

        public static void addOneForUserInChannel(String channel, String username) throws SQLException {
            Connection dbConnection = null;
            PreparedStatement ps = null;
            String updateSQL = "UPDATE " + channel + "_count SET messages = messages + 1 WHERE username = ?";
            try {
                dbConnection = getDBConnection();
                ps = dbConnection.prepareStatement(updateSQL);
                ps.setString(1, username);
                ps.executeUpdate();
            } catch(SQLException e) {
                System.out.println(e.getMessage());
            } finally {
                if(ps != null) {
                    ps.close();
                }
                if(dbConnection != null) {
                    dbConnection.close();
                }
            }
        }

    And my DB connection:

        private static Connection getDBConnection() {
            Connection dbConnection = null;
            try {
                Class.forName(DB_DRIVER);
            } catch (ClassNotFoundException e) {
                System.out.println(e.getMessage());
            }
            try {
                dbConnection = DriverManager.getConnection(DB_CONNECTION, DB_USER, DB_PASSWORD);
                return dbConnection;
            } catch (SQLException e) {
                System.out.println(e.getMessage());
            }
            return dbConnection;
        }

    This seems to be working fine for now, with about 1-2 queries per second, but I am worried that once I expand and it is running many more, I might have some issues. My questions:

        1. Is there a way to have a persistent database connection throughout the entire runtime of the process? If so, should I do this?
        2. Are there any other optimizations I should make to help with performance?

    Thanks
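
    (On question 1: a connection pool is the usual answer. A minimal sketch with Apache Commons DBCP - my suggested library, not something the post uses; the constants are placeholders standing in for the question's own values:

        import java.sql.Connection;
        import java.sql.SQLException;
        import org.apache.commons.dbcp.BasicDataSource;

        public class Database {
            private static final String DB_DRIVER     = "com.mysql.jdbc.Driver";        // placeholder
            private static final String DB_CONNECTION = "jdbc:mysql://localhost/mydb";  // placeholder
            private static final String DB_USER       = "user";                         // placeholder
            private static final String DB_PASSWORD   = "password";                     // placeholder

            private static final BasicDataSource POOL = new BasicDataSource();
            static {
                POOL.setDriverClassName(DB_DRIVER);
                POOL.setUrl(DB_CONNECTION);
                POOL.setUsername(DB_USER);
                POOL.setPassword(DB_PASSWORD);
                POOL.setMaxActive(10); // cap concurrent connections; tune to load
            }

            // borrow a pooled connection; close() returns it to the pool instead of
            // tearing it down, so the per-query connect cost disappears
            public static Connection getDBConnection() throws SQLException {
                return POOL.getConnection();
            }
        }

    The calling code can stay unchanged, since closing a pooled connection just returns it to the pool.)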

    Read the article

  • What does 'Highest active time' for disk activity in Windows resource monitor mean?

    - by Nick R
    I know what disk I/O, disk queue length, and the other measures are, but what does 'Highest active time' mean? Is it the amount of time the disk is busy handling requests, or something else? When it is high, does it mean the CPU is busy doing some I/O work, or is it just indicating that the disk is busy handling requests? I'm trying to work out whether 50% active time means that the disk spends 50% of the time seeking, reading, or writing, or rather that the kernel spends 50% of its time servicing I/O requests.

    Edit: another quick data point here. If you look at the difference between an SSD and a physical disk, the SSD shows significantly less active time, so I guess this really means the amount of time the operating system is waiting for the disk to respond and return data.

    Read the article

  • Zabbix server with multiple NICs (one on a different VLAN) - monitor a host from both NICs?

    - by Joshua Enfield
    Basically, we have many servers configured for internal use only. I want to check each host from two directions: from the local subnet (allowed - this checks that the services are up and working), and from a different subnet (VLAN), to make sure the internal services are indeed internal (i.e. the services should appear "down" when checked from there). Is there an easy way to do this in Zabbix?

    Read the article

  • Why does this monitor go to sleep but won't wake up, even though all related power options are turned off?

    - by VoidKing
    We have a machine, Windows 7 (64-bit) with an ASUS motherboard, in our building that only sometimes goes to sleep but then won't wake up. The machine is still running, but shaking the mouse or hitting keys on the keyboard won't wake it up. I have tried for days to isolate the problem, but every time we get to a "let's try this" scenario, it happens again later. All power options related to the display being off are set to Never, except the simple "turn display off after..." setting. That is, the hard drive is set to never turn off, and the computer is set to never sleep nor hibernate. All drivers seem to be up to date, but I am afraid I will hose the machine if I do a BIOS update (plus I figure that will probably have nothing to do with the issue and only make something else break). Is there something obvious I am missing?

    Read the article

  • Dual-monitor web browsing with Firefox - addon that opens links in the other window?

    - by raegadfsgfsdg
    I often browse news aggregators like Reddit.com, and I want to be able to have one Firefox window open on each of my two screens, so that when I middle-click a link on my left screen, it opens the tab in my right Firefox window. I know that there are some plugins that can open new windows, but if I do that I'll have too many windows open, and I'd rather manage my open news stories via tab bars than with windows (clearer, more efficient to browse, and easier to bookmark). Is there a Firefox addon with which I can middle-click a link and have it open in a new tab in my other window?

    Read the article

  • Change Windows 7 start menu height depending on monitor size?

    - by hippietrail
    I know I can change the height of the Windows 7 Start menu so that it includes more or fewer recently used apps, etc. But I have a netbook with a tiny screen that I plug into a decent-sized LCD most of the time at home. Is there a way to get Win7 to use a taller Start menu with more items when I'm using the LCD, and a shorter Start menu with fewer items when using the netbook's built-in screen? (I'm a programmer, so I'm capable of technical solutions if there's no ready-made one.)

    Read the article

  • How to monitor changes in the frequency of network latency spikes over time?

    - by dequis
    I'm currently trying to troubleshoot an issue with my network in which I get latency spikes of up to 200 seconds (normally around 50 secs) in an apparently random way at apparently random moments of the day. While trying to find which part of my messy network needs to be blamed (outside the scope of this question - discussed a bit in chat), I realized I have no reliable way to confirm that a change actually improved anything.

    So far, the main way I notice the problem is that irssi shows [Lag: 15 (??)] in the status bar, increasing every 5 seconds, and all other connections seem to be affected too. Since this depends on my own observations, it's not a very reliable method of knowing how often it really happens.

    Note that just sending ICMP pings is probably not enough, but that's just my guess. It might be a "bufferbloat" issue, it might be packet loss, it might apply only to persistent connections. I suspect this because a few months ago, when the issue started, I had a "ping" command running in the background and it didn't show anything weird at all during the latency spikes. This seems to have changed now (pings don't go through), but still, I'd prefer something more robust.
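
    (A minimal sketch of the kind of logger that would answer this - my suggestion, with a placeholder target URL and log path; curl's %{time_connect} measures the TCP handshake, so it exercises more than ICMP does:

        #!/bin/sh
        # append "epoch-seconds connect-time" once per second for later analysis
        while true; do
            printf '%s %s\n' "$(date +%s)" \
                "$(curl -s -o /dev/null --max-time 300 -w '%{time_connect}' http://192.0.2.1/)" \
                >> "$HOME/latency.log"
            sleep 1
        done

    Counting lines with a large second field, per hour, before and after each network change gives the spike-frequency comparison being asked for.)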

    Read the article

  • How should I monitor memory usage/performance in SunOS/Solaris?

    - by exhuma
    Last week we decided to add some SunOS (uname -a = SunOS bbs-sam-belair 5.10 Generic_127128-11 i86pc i386 i86pc) machines to our running Munin instance. First off, the machines are pre-configured appliances, so I want to avoid touching the system too much without supervision from the service provider. But adding them to Munin was fairly easy, by writing a small socket service (if anyone is interested, I put it up on GitHub: https://github.com/munin-monitoring/contrib/tree/master/tools/pypmmn).

    Yesterday I implemented/adapted the required plugins for our machines. And here the questions start:

    First, I have not found a way to determine detailed memory usage values. I get the total memory by running prtconf | grep Memory, and the free memory using vmstat. Fiddling together a Munin plugin gives me the following graph:

        [graph: total vs. free memory only]

    This is pretty much uninformative. Compare this to the default plugin for Linux nodes, which has a lot more detail:

        [graph: Linux memory plugin with per-category breakdown]

    Most importantly, this shows me how much memory is actually used by applications. So, first question: is it possible to get detailed memory information on SunOS with the default system tools (i.e. not using top)?

    On to the next puzzle: seeing the graphs, I noticed activity in the "paging in/out" graphs, even though the memory graph still shows unused memory. Upon further investigation, I found out that df reports that /tmp is mounted on swap. Drilling around on the web, I understood that df will display swap, but in fact it's mounted as a tmpfs. Now I don't know if this explains the swap activity. The default Munin plugin for Solaris uses kstat -p -c misc -m cpu_stat to get these values. I find it already strange that this uses the cpu_stat module. So maybe I simply misinterpret the "paging" graphs?

    Second question: do the paging graphs indicate that parts of the memory are paged to disk? Or is the activity caused by file operations in /tmp?
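
    (On the first question: one widely cited Solaris 10 approach - my note, not from the post; it needs root - is the kernel debugger's memstat dcmd, which breaks physical memory down into kernel, anon, exec/libs, page cache, and free:

        # echo "::memstat" | mdb -k
    )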

    Read the article
