Search Results

Search found 16987 results on 680 pages for 'second'.


  • VMware Workstation reboot 32-bit host when starting 32/64-bit guest using VT-x

    - by Powerman
    I'm trying to start 64-bit guests (Mac OS X and Windows 7) on a 32-bit host (Hardened Gentoo Linux, kernel 2.6.28-hardened-r9) using VMware Workstation (6.5.3.185404 and 7.0.1.227600). If VT-x is disabled in the BIOS, VMware refuses to start a 64-bit guest (as expected). If VT-x is enabled in the BIOS, VMware starts the guest without complaining, but then, after about a second (I suppose as soon as the guest tries to switch to 64-bit mode), my host reboots - actually it's more like a reset: the normal reboot procedure is skipped and the BIOS POST starts immediately. My hardware is a Core 2 Duo 6600 on an ASUS P5B-Deluxe with the latest stable BIOS, 1101. I power-cycled the system, then enabled Vanderpool in the BIOS. My CPU doesn't support Trusted Execution Technology, and there is no way to disable it in the BIOS. I've rebooted several times since then, sometimes with a power cycle, and made sure Vanderpool is still enabled in the BIOS. I've also run the VMware-guest64check-5.5.0-18463 tool, and it reports "This host is capable of running a 64-bit guest operating system under this VMware product." About a year ago I tried disabling the hardened options in the kernel to rule out PaX/GrSecurity, but that didn't help. I haven't checked 32-bit guests with VT-x enabled yet, but without VT-x they work fine. ASUS provides "beta" BIOS updates, but according to their descriptions these updates don't fix this issue, so I'm not sure it's a good idea to try them. My best guess now is a motherboard/BIOS bug. Any ideas?

    Update 1: I tried booting the vt.iso provided at http://communities.vmware.com/docs/DOC-8978 and here is its report:

        CPU 0: VT is enabled on this core
        CPU 1: VT is enabled on this core

    Update 2: I've just tried booting 32-bit guests (Windows 7, Ubuntu 9.04 and Gentoo) using every possible virtualization mode. In Automatic, Automatic with Replay, and Binary translation everything works; in Intel VT-x/EPT or AMD-V/RVI I get the message "This host does not support EPT. Using software virtualization with a software MMU." and everything works. BUT in Intel VT-x or AMD-V mode all 32-bit guests reset the host just like the 64-bit guests! So this issue is not specific to 64-bit guests. One more thing: using Intel VT-x or AMD-V mode for both 32- and 64-bit guests, the host resets right after starting the VM, i.e. before the VM BIOS POST and before the guest even starts booting. But using Intel VT-x/EPT or AMD-V/RVI the VM BIOS runs fine, the 64-bit guests start booting (Windows 7 completes the "Loading files" progress bar), and only after that does the host reset.
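
    As a quick host-side sanity check (not part of the original post), a couple of shell commands can confirm what the CPU and BIOS are actually exposing; the cpuinfo flags are standard, but treat the MSR step as an assumption since it needs the msr kernel module and the msr-tools package:

        # vmx = Intel VT-x, svm = AMD-V; if nothing prints, the BIOS is hiding the feature
        grep -E -c 'vmx|svm' /proc/cpuinfo

        # Optional: inspect IA32_FEATURE_CONTROL (MSR 0x3a) to see whether the BIOS
        # enabled and locked VT-x; requires the msr module and msr-tools
        sudo modprobe msr
        sudo rdmsr -a 0x3a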

    Read the article

  • Windows explorer locks files

    - by John Prince
    I'm using Office 2010 & Windows 7 Home Premium 64-bit. My problem starts when I attempt to save e-mail messages to my PC that I have received via Outlook (my ISP is Comcast). I'm using the default .msg file extension option when I attempt to save these e-mails. The resultant files are locked and do not show the normal "envelope" icon. Instead, it’s a “blank page” icon with the right upper corner folded in. These files refuse to open either by double clicking on them or right clicking and trying to open them with Outlook. And when I return to Outlook, I discover that Outlook is now hung up and I have to close it via the Task Manager. To make matters worse, I’ve also discovered that every e-mail message that I've saved on my PC over the years has also somehow become locked and their original "envelope" icon has been replaced with the "blank page" icon. I found and installed an application called LockHunter. As a result, when I right click on a saved and locked e-mail message, I’ve given an option to find out what's locking it. Each time I'm told that the culprit is Windows explorer.exe. When I unlock the file the normal envelope icon is sometimes displayed (but not always) but at least the file can then be opened. But the file is still “squirrely” as it can’t be moved or saved to a folder until it’s unlocked again. On this second attempt, LockHunter says it’s now locked by Outlook.exe. By the way, I don't have this issue when I save Word, Excel & PowerPoint files; only with Outlook. I've exhausted every remedy that I can think of including: making sure that the file and folder options are checked to always show icons and not thumbnails; running the Windows 7 & Office 2010 repair options which find nothing amiss; running a complete system scan with Windows Malicious Software Removal Tool with negative results; verifying that Outlook is the default for opening e-mails; updating all of my applications via Secunia Personal Software Inspector; uninstalling every application that I felt was unnecessary; doing a registry cleanup via CC Cleaner; having Windows Security Essentials always on (it did find one Java Trojan recently which was quarantined and then deleted); uninstalling a bunch of non-Microsoft shell extensions; and deactivating all of the Outlook Add-ins and then re-activating each one. None of this solves the problem. I’d welcome any advice on how to resolve this.

    Read the article

  • How to manipulate default document with rewrite module on IIS7?

    - by eugeneK
    Until a few months ago I was using IIS 6, where I could assign a different default document to each website even though they shared the same physical directory. Since IIS 7 stores the default document setting in web.config, I can't use that technique any more, because the web.config change applies to the whole directory. So I found a simple solution using the rewrite module to change the default document per domain:

        <defaultDocument enabled="false" />
        <rewrite>
          <rewriteMaps>
            <rewriteMap name="ResolveDefaultDocForHost">
              <add key="site1.com" value="Default1.aspx" />
              <add key="site2.com" value="Default2.aspx" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="DefaultDoc Redirect If No Trailing Slash" stopProcessing="true">
              <match url=".*[^/]$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" />
              </conditions>
              <action type="Redirect" url="{R:0}/" />
            </rule>
            <rule name="PerHostDefaultDocSlash" stopProcessing="true">
              <match url="$|.*/$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" />
                <add input="{ResolveDefaultDocForHost:{HTTP_HOST}}" pattern="(.+)" />
              </conditions>
              <action type="Rewrite" url="{R:0}{C:1}" appendQueryString="true" />
            </rule>
          </rules>
        </rewrite>

    Now I have two other issues. First, I can't use a canonical URL rewrite: if I add one, both site1.com and site2.com get redirected to www.site1.com instead of each getting its own www. prefix. Second, there is a directory called "members" inside the physical directory shared by site1 and site2, in which Default.aspx is always the default document no matter which domain name was used; that doesn't work either. Please help me with this issue, because I never thought I would run into a problem like this with the supposedly better IIS 7...

    Read the article

  • Blank Screen After Login on Windows 7

    - by Leigh Riffel
    When unlocking a Windows 7 computer, the screen briefly (for less than a second) goes blank before showing the desktop. About once a month (but sometimes within a few days of the last occurrence), when I unlock my computer the screen doesn't come back from this brief blackout and stays black. Sometimes after five minutes or so the display will come back. Other times it has stayed blank for over 20 minutes, so I give up and restart the computer. It seems to happen more often when the computer has been locked for a longer period of time - I lock my computer several times a day, but the problem happens most often when I come in at the start of the day.

    I have updated my video card and monitor drivers. I have two monitors driven by the same graphics card. The computer is a Dell Optiplex 740. When the problem occurs, both monitors have a green light on to indicate that they are receiving a signal. I've tried unplugging the monitors from the video card and turning the monitors on and off. The screen saver is set to one that is not a blank screen. The Windows power settings are set to never turn off the display. When the problem occurs there is no significant disk activity occurring. When the problem occurs I can connect over the network to the hard drive on my computer, but I can't connect over the network with a VNC connection - the VNC client doesn't give an error, but also won't show the screen; the task actually seems to hang, as I can see the task but there is no window for it.

    This problem also occurred when I was running a pre-release version of Windows 7. The hard drive was formatted for the release version and the problem still occurs. I've been stopping some of my always-running programs to see if one of them may be the culprit, but given the span between failures that will take some time, if it even helps at all. Some programs I have always running include IE, Firefox, Outlook, Evernote, Kana Reminder, Ditto, Macro Express, Timesnapper, and DisplayFusion. Any ideas appreciated.

    Read the article

  • Why is user asked to choose their workgroup?

    - by Clinton Blackmore
    We're running Mac OS X Server 10.5.8 with Mac OS X 10.5.8 clients. Students use network logins to, well, log in. I've been asked to deny internet access to a specific user. I was told that a good way to do it is to create a user workgroup called "No Internet Access" and manage settings there. (Specifically, I told parental controls to allow access to no sites, and blacklisted all the installed web browsers.) Now, when the user authenticates to log in, they are greeted with this dialog:

        Workgroups for <username>
            Grade 7 Students
            No Internet Access

    It is unlikely that the student would willingly choose "No Internet Access" as their base group. Looking in Workgroup Manager at the student's record, it shows their primary group ID is the grade 7 group, and "No Internet Access" is listed as another group they belong to. I looked at the managed preferences for all the computers pertaining to logins. They are set to their defaults. Specifically, the computer groups' preference for Logins - Access has the defaults:

        [unchecked] Ignore workgroup nesting
        [checked]   Combine available workgroup settings

    Based on my reading of Tips and Tricks for Mac Administrators, this should be correct: the user should not be asked which group they belong to, and settings from all applicable groups should be applied. How can I achieve that result?

    Edit: I've decided to add some additional information from the Tips and Tricks for Mac Management white paper (via Apple in Education, via the author's site). On page 21, it says:

        With Leopard MCX, workgroup preference settings are combined by default into a single set of
        values. This means that instead of having to choose between the Math, Science, or Language Arts
        workgroups when logging in, a user can just authenticate and be taken directly to the desktop.
        All the settings for each of those workgroups are composited together, providing you with all
        the Dock items and a composite of all the other settings.

    On page 40, an example is given in which settings are combined from different 'domains': one computer group, two (user) workgroups, and one individual user's settings. [When johnd logs into a Leopard client,] "the items staged in the Dock from left to right are: computer group, first workgroup alphabetically, second workgroup, user. Items within the workgroup are staged alphabetically." Nowhere is there an indication that groups are nested; indeed, I can see no sensible (non-flat) hierarchy for groups like Math, Science, and Language Arts. I strongly believe that there is a way to apply settings from two unrelated user workgroups such that a user of OS X 10.5.x or newer does not need to choose their workgroup. This is what I seek to achieve.

    Read the article

  • How to transition to Comcast with static IP address [migrated]

    - by steveha
    I have my own email server in my house, on a static IP address. I have had business DSL for over a decade, but I also now have Comcast business Internet. I want to transition from the DSL to the Comcast, and I have some questions. I have a domain name, my own mail server, and a firewall (a PC with two network interfaces, running Devil-Linux). I need to make sure I understand how to set up the Comcast cable box, and how to set up my firewall. First, do I need to change any settings in the cable box? Currently I have only used the cable box by plugging in a laptop, with the laptop doing DHCP. I think I can leave the box alone but I would like to make sure. Second, I'm not sure I understand the instructions Comcast gave me for setting up the firewall. My DSL provider gave me the following information: static IP address, net mask, gateway, and two DNS servers. Comcast gave me: static IP address, routable static IP address, net mask, and two DNS servers, and told me to put the "static IP address" as the "gateway" on the firewall. Is this just Comcast-speak here? Does "routable static IP address" mean the same thing as "static IP address" in my DSL setup, the end-point address that I should publish in the DNS MX records for my email server? Or should I publish the "static IP address", and Comcast will then route all its traffic over the cable box? My plan is: first, I'm going to configure another firewall, so I have one firewall for the DSL and one for the Comcast (rather than madly editing settings to switch back and forth). Then I will publish the new Comcast static IP address as a backup email server address in the DNS MX records, wait a while to let it propagate, and then switch my home over from the DSL to the Comcast. Then I'll change DNS to make that the primary mail address and the DSL the secondary, let that go a while and make sure it seems reliable. Then I'll remove the DSL from the DNS MX records completely, and finally shut down the DSL service. (I thought about keeping the DSL as a backup, but the reason I'm leaving DSL is that it has become unreliable; and I have heard that Comcast business Internet is reliable.) Final question, any advice for me? Anything you think might be useful, helpful, or educational. Thanks.
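
    One way to sanity-check each stage of the MX cutover described above (this is an illustration rather than part of the original question; the domain and address below are placeholders) is to watch the records from a public resolver and to confirm the new address accepts SMTP before it becomes the primary:

        # What the rest of the world currently sees for the MX records
        dig +short MX example.com @8.8.8.8

        # Before raising the Comcast address in priority, confirm it answers on port 25
        nc -vz -w 5 203.0.113.10 25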

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this:

        Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
        Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
        Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
        Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory

    Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters):

        current values are: 45984   61312   91968
        new values are:     183936  245248  367872

    After that, we started seeing a bizarre error message:

        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval!

    Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache:

        The secret_interval instructs the kernel how often to blow away ALL route hash entries
        regardless of how new/old they are.

    In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check:

        gc_elasticity can best be described as the average bucket depth the kernel will accept
        before it starts expiring route hash entries. This will help maintain the upper limit
        of active routes.

    We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here.

        /proc/sys/net/ipv4/route/gc_elasticity    (8)
        /proc/sys/net/ipv4/route/gc_interval      (60)
        /proc/sys/net/ipv4/route/gc_min_interval  (0)
        /proc/sys/net/ipv4/route/gc_timeout       (300)
        /proc/sys/net/ipv4/route/secret_interval  (600)
        /proc/sys/net/ipv4/route/gc_thresh        (?)
        rhash_entries (kernel parameter, default unknown?)

    We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high traffic HAProxy instance?
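
    Not from the original post, but for anyone trialling these knobs, a small shell sketch like the following records the current values before changing anything and applies one change at a time (the value 4 is simply the one discussed above, not a recommendation, and this only applies to older kernels that still have the IPv4 route cache):

        # Snapshot the current route-cache tunables
        for f in gc_elasticity gc_interval gc_min_interval gc_timeout secret_interval gc_thresh; do
            printf '%-16s ' "$f"; cat /proc/sys/net/ipv4/route/$f 2>/dev/null || echo 'n/a'
        done

        # Trial one change at a time and watch the kernel log for route-cache warnings
        sudo sysctl -w net.ipv4.route.gc_elasticity=4

        # Persist only once the behaviour looks right
        echo 'net.ipv4.route.gc_elasticity = 4' | sudo tee /etc/sysctl.d/99-route-cache.conf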

    Read the article

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error, it just fails to acknowledge the shutdown request). Recovery is a case of power cycling the server(s) from the Hyper-V console.

    These two servers don't serve a large number of users (one serves no more than 6 users, and the other serves around 20 users). They are in the same domain, but on different physical hardware (and at different sites), and they don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections; this is working fine, and we have been using DFSR to do this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003 - both of which were physical installs, however).

    Today, when one of the servers crashed, I noticed an entry in the event log which looked similar to the following:

        Log Name:      Application
        Source:        ESENT
        Date:          27/11/2012 10:25:55
        Event ID:      533
        Task Category: General
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      HAL-FS-01.example.com
        Description:
        DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\dfsr.db: A request
        to write to the file "\\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\fsr.log"
        at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for
        36 second(s). This problem is likely due to faulty hardware. Please contact your hardware
        vendor for further assistance diagnosing the problem.

    When the server started up again, I went to find the event log entry to investigate further and found that it was no longer there (I assume it was in memory but failed to be written to disk before the server was powered off, for the reason mentioned in the message). I found the above message by searching back further in the event log.

    Both of these virtual servers have their E: volumes fully allocated as opposed to dynamically expanding, and there are no other issues on any of the other virtual servers (which include Server 2012, Server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non-paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises. I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this and managed to resolve the problem?

    Read the article

  • How to deny the web access to some files?

    - by Strae
    I need to do a slightly strange operation. First, some context: I run Debian with apache2 (which runs as the user www-data). I have simple text files with a .txt or .ini extension - the exact extension doesn't matter. These files are located in subfolders with a structure like this:

        www.example.com/folder1/car/foobar.txt
        www.example.com/folder1/cycle/foobar.txt
        www.example.com/folder1/fish/foobar.txt
        www.example.com/folder1/fruit/foobar.txt

    So the file name is always the same, and so is the hierarchy; only the name of the middle folder changes:

        /folder-name-static/folder-name-dynamic/file-name-static.txt

    What I need to do is (I think) relatively simple: programs on the server (Python or PHP, for example) must be able to read the file, but if I try to retrieve the file contents through a browser (by typing the URL www.example.com/folder1/car/foobar.txt, via cURL, etc.) I must get a forbidden error or similar - anything but access to the file. It would also be nice if those files were hidden, or at least not downloadable, when accessing the site via FTP (at least with the FTP root and user credentials I normally use). How can I do this? I found this online, to be put in an .htaccess file:

        <Files File.txt>
          Order allow,deny
          Deny from all
        </Files>

    It seems to work, but only if the file is in the web root (www.example.com/myfile.txt), not in subfolders. Moreover, the second-level folders (the "car", "cycle", etc. part of www.example.com/folder1/fruit/foobar.txt) will be created dynamically, and I would like to avoid having to change the .htaccess file every time. Is it possible to create a single rule that applies to all files with a given name at www.example.com/folder-name-static/folder-name-dinamyc/file-name-static.txt, where those parts are always the same and only the dynamic part changes?

    EDIT: As Dave Drager said, I could simplify this by keeping those files outside the web-accessible directory. But those directories will contain other files too - images and other content used by my users - so I am simply trying to avoid duplicating the folder structure, like:

        /var/www/vhosts/example.com/httpdocs/folder1/car/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/cycle/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/fish/[other folders and files here]

        # and then, for the 'secret' files:
        /folder1/data/car/foobar.txt
        /folder1/data/cycle/foobar.txt
        /folder1/data/fish/foobar.txt
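
    Whatever rule ends up doing the blocking, a quick way to verify both halves of the requirement is a check like this sketch (the URL and path are the example ones from the question, and www-data is the Apache user mentioned above):

        URL="http://www.example.com/folder1/car/foobar.txt"
        FILE="/var/www/vhosts/example.com/httpdocs/folder1/car/foobar.txt"

        # The browser/cURL path should now be refused (expect 403, not 200)
        curl -s -o /dev/null -w '%{http_code}\n' "$URL"

        # A server-side process running as the Apache user should still read it fine
        sudo -u www-data head -n 1 "$FILE"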

    Read the article

  • Can you set up a gaming LAN using OpenVPN installed in a VMware guest OS and be playing the game on the host OS?

    - by Coder
    I would like to setup a gaming VPN. Ie. I have some games that work over LAN and would like to play them with people that are not on my LAN. I know I can do this with OpenVPN. My ultimate goal would be to run OpenVPN portably on my host OS and not even need any virtualization. As such i don't want to install it on my host, but i'm fine with running it portably. I'm even fine with temporarily adding registry keys, and then running a .reg file to remove these entries once i'm done. To this effect i have installed OpenVPN on a virtual machine and diffed the registry. I then manually (using a .reg file) added all the keys that seem important on my host OS and copied the installation folder of OpenVPN onto my host machine. Then i try to run openVPN GUI 1.0.3 as a test and it says "Error opening registy for reading (HKLM\SOFTWARE\OpenVPN). OpenVPN is probably not installed". I verified that that key is indeed in the registry with all subkeys and it looks correct. I have tried running the GUI as an administrator and in compatibility mode with no success. I am running Windows 7. If this fails then i would be happy with installing OpenVPN on a virtual machine in VMWare but they key is that i will be running the game installed on my host machine. The first question for this option is if this is even possible. The second is, that I can't get the VM to have internet access if I use bridging but i can if i use NAT. Is it possible to do this game VPN setup with VMWare guest OS running using NAT? Summary of questions: -Is it possible to run openVPN portably and if so what did i miss above? -If it's not possible to run it portably, then can setup a gaming LAN by installing OpenVPN in a guest OS with NAT and how can i do this? -If the above is not possible then can i install OpenVPN in a guest using bridging and if so how can i set this up with a Windows 7 host and Windows XP guest as currently i can't get the guest to be able to access the internet in bridging mode, but it working in NAT mode. -In general is there any good documentation on setting up a gaming LAN with OpenVPN (i am using 2.1.4) as i have never set up a VPN of any sort before so any help would be much appreciated. Thanks!

    Read the article

  • How to correctly partition usb flash drive and which filesystem to choose considering wear leveling?

    - by random1
    Two problems. First one: how to partition the flash drive? I shouldn't need to do this, but I'm no longer sure my partition is properly aligned, since I was forced to delete and create a new partition table after gparted complained when I tried to format the drive from FAT to ext4. The naive answer would be "just use the defaults and everything will be alright". However, if you read the following links you'll know things are not that simple: https://lwn.net/Articles/428584/ and http://linux-howto-guide.blogspot.com/2009/10/increase-usb-flash-drive-write-speed.html

    Then there is also the issue of cylinders, heads and sectors. Currently I get this:

        $ sfdisk -l -uM /dev/sdd

        Disk /dev/sdd: 30147 cylinders, 64 heads, 32 sectors/track
        Warning: The partition table looks like it was made
          for C/H/S=*/255/63 (instead of 30147/64/32).
        For this listing I'll assume that geometry.
        Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

           Device Boot Start    End    MiB   #blocks   Id  System
        /dev/sdd1          1  30146  30146  30869504   83  Linux

        $ fdisk -l /dev/sdd

        Disk /dev/sdd: 31.6 GB, 31611420672 bytes
        255 heads, 63 sectors/track, 3843 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00010c28

    So from my current understanding I should align partitions at 4 MiB (currently it's at 1 MiB). But I still don't know how to set the heads and sectors properly for my device.

    Second problem: the file system. From the benchmarks I've seen, ext4 provides the best performance; however, there is the issue of wear leveling. How can I know whether my Transcend JetFlash 700's microcontroller provides wear leveling? Or will I just be killing my drive faster? I've seen a lot of posts on the web saying "don't worry, the newer drives already take care of that", but I've never seen a single piece of evidence backing that up, and at some point people start mixing up SSD and USB flash drive technology. The safe option would be to go with ext2, however a series of tests that I performed showed horrible performance! These values are from a real scenario, not some synthetic test:

        42 files: 3,429,415,284 bytes copied to flash drive
        original FAT32:                  15.1 MiB/s
        ext4 after new partition table:  10.2 MiB/s
        ext2 after new partition table:   1.9 MiB/s

    Please read the links that I posted above before answering. I would also be interested in answers backed up with some references, because a lot is said and re-said but then it lacks facts. Thank you for the help.
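
    For the alignment half of the question, a minimal sketch along these lines creates a single partition starting at 4 MiB and formats it. /dev/sdX is a placeholder, the commands are destructive, and whether dropping the ext4 journal is a sensible trade-off is exactly the wear-levelling judgement call discussed above:

        DEV=/dev/sdX   # placeholder - double-check with lsblk before running anything

        # New partition table with the first partition aligned at 4 MiB
        sudo parted -s "$DEV" mklabel msdos
        sudo parted -s "$DEV" mkpart primary ext4 4MiB 100%
        sudo parted "$DEV" align-check optimal 1

        # ext4 without a journal writes less per file operation (at the cost of crash safety)
        sudo mkfs.ext4 -O ^has_journal "${DEV}1"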

    Read the article

  • Low framerate on background apps

    - by user1698923
    My problem is that when a game is running in the foreground, in Full Screen mode, any applications on my second monitor (such as youtube videos, videos, not app specific) drop their frame-rate to about 2-3 FPS. It seems like some sort of power management option that I can't track down. As far as I can tell, it's not due to the GPU not being able to keep up. For instance, my PC can play League of Legends at about 280FPS when the framerate is uncapped. If i cap it at 60FPS using the in-game option, it has no affect on the performance of the background app. Summary Operating System Windows 8 Pro 64-bit CPU Intel Core i7 3820 @ 3.60GHz 42 °C Sandy Bridge-E 32nm Technology RAM 12.0GB Triple-Channel DDR3 @ 533MHz (7-7-7-20) Motherboard Gigabyte Technology Co., Ltd. X79-UD3 (SOCKET 0) 37 °C Graphics DELL U2713HM (2560x1440@59Hz) DELL U2713HM (2560x1440@59Hz) 1280MB NVIDIA GeForce GTX 570 (Gigabyte) 58 °C Hard Drives 212GB Volume0 (RAID) 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA) 36 °C 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA) 34 °C Optical Drives No optical disk drives detected Audio ASUS Xonar Essence STX Audio Device Operating System Windows 8 Pro 64-bit Computer type: Desktop Graphics Monitor 1 Name DELL U2713HM on NVIDIA GeForce GTX 570 Current Resolution 2560x1440 pixels Work Resolution 2560x1400 pixels State Enabled, Output devices support Multiple displays Extended, Secondary, Enabled Monitor Width 2560 Monitor Height 1440 Monitor BPP 32 bits per pixel Monitor Frequency 59 Hz Device \\.\DISPLAY4\Monitor0 Monitor 2 Name DELL U2713HM on NVIDIA GeForce GTX 570 Current Resolution 2560x1440 pixels Work Resolution 2560x1400 pixels State Enabled, Output devices support Multiple displays Extended, Primary, Enabled Monitor Width 2560 Monitor Height 1440 Monitor BPP 32 bits per pixel Monitor Frequency 59 Hz Device \\.\DISPLAY5\Monitor0 NVIDIA GeForce GTX 570 Manufacturer NVIDIA Model GeForce GTX 570 GPU GF110 Device ID 10DE-1086 Revision A2 Subvendor Gigabyte (1458) Series GeForce GTX 500 Current Performance Level Level 3 Current GPU Clock 845 MHz Current Memory Clock 1900 MHz Current Shader Clock 1690 MHz Voltage 0.988 V Technology 40 nm Die Size 520 mm² Release Date Dec 07, 2010 DirectX Support 11.0 OpenGL Support 5.0 Bus Interface PCI Express x16 Temperature 57 °C Driver version 9.18.13.2018 BIOS Version 70.10.55.00.01 ROPs 40 Shaders 512 unified Memory Type GDDR5 Memory 1280 MB Bus Width 64x5 (320 bit) Filtering Modes 16x Anisotropic Noise Level Moderate Max Power Draw 219 Watts Count of performance levels : 3 Level 1 - "Default" GPU Clock 50 MHz Memory Clock 135 MHz Shader Clock 101 MHz Level 2 - "2D Desktop" GPU Clock 405 MHz Memory Clock 324 MHz Shader Clock 810 MHz Level 3 - "3D Applications" GPU Clock 845 MHz Memory Clock 1900 MHz Shader Clock 1690 MHz Things I've tried: 1) Updating the graphics driver 2) Setting windows power mode to High Performance 3) Reset Nvidia Global Performance settings to default

    Read the article

  • SQL Server High Availability - Mirroring with MSCS?

    - by David
    I'm looking at options for high-availability for my SQL Server-powered application. The requirements are: HA protection from storage failure. Data accessibility when one of the DB servers is undergoing software updates (e.g. planned outage for Windows Update / SQL Server service-packs). Must not involve much in the way of hardware procurement. The application is an ASP.NET web application. The web application's users have their own database instances. I've seen two main options: SQL Server failover clustering, and SQL Server mirroring. I understand that SQL Server Failover Clustering requires the purchasing of a shared disk array and doesn't offer any protection if the shared storage goes down (so the documentation recommends to set up a Mirroring between two clusters). Database Mirroring seems the cheaper option (as it only requires two database servers and a simple witness box) - but I've heard it doesn't work well when you have a large number of databases. The application I'm developing involves giving each client their own database for their application - there could be hundreds of databases. Setting up the mirroring is no problem thanks to the automation systems we have in place. My final point concerns how failover works with respect to client connections - SQL Server Failover Clustering uses MSCS which means that the cluster is invisible to clients - a connection attempt might fail during the failover, but a simple reconnect will have it working again. However mirroring, as far as I know, requires that the client be aware of the mirrored partners: if the client cannot connect to the primary server then it tries the secondary server. I'm wondering how this work with respect to Connection Pooling in ASP.NET applications - does the client connection failovering mean that there's a potential 2-second (assuming 2000ms TCP timeout policy) pause when the connection pool tries the primary server on every connection attempt? I read somewhere that Mirroring can be used on top of MSCS which means that the client does not need to be aware of mirroring (so there wouldn't be any potential delays during connection, and also that no changes would need to be made to the client, not even the connection string) - however I'm finding it hard to get documentation or white papers on this approach. But if true, then it means the best method is then Mirroring (for HA) with MSCS (for client ignorance and connection performance). ...but how does this scale to a server instance that might contain hundreds of mirrored databases?

    Read the article

  • SSL_CLIENT_CERT_CHAIN not being passed to backend server

    - by nidkil
    I have client certificates configured and working in Apache. I want to pass the PEM-encoded X.509 certificates of the client to the backend server. I tried with SSLOptions +ExportCertData. This does nothing at all, while the documentation states it should add SSL_SERVER_CERT, SSL_CLIENT_CERT and SSL_CLIENT_CERT_CHAINn (with n = 0,1,2,..) as headers. Any ideas why this option is not working? I then tried setting the headers myself using RequestHeader. This works fine for all variables except SSL_CLIENT_CERT_CHAIN, which shows null in the header. Any ideas why the certificate chain is not being filled? This is my first Apache configuration:

        <VirtualHost 192.168.56.100:443>
            ServerName www.test.org
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
            SSLEngine on
            SSLProxyEngine on
            SSLCertificateFile /etc/apache2/ssl/certs/www.test.org.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.test.org.key
            SSLCACertificateFile /etc/apache2/ssl/ca/ca.crt
            <Proxy *>
                AddDefaultCharset Off
                Order deny,allow
                Allow from all
            </Proxy>
            <Location /carbon>
                ProxyPass http://www.test.org:9763/carbon
                ProxyPassReverse http://www.test.org:9763/carbon
            </Location>
            <Location /services/GbTestProxy>
                SSLVerifyClient require
                SSLVerifyDepth 5
                SSLOptions +ExportCertData
                ProxyPass http://www.test.org:8888/services/GbTestProxy
                ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
            </Location>
        </VirtualHost>

    This is my second Apache configuration:

        <VirtualHost 192.168.56.100:443>
            ServerName www.test.org
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
            SSLEngine on
            SSLProxyEngine on
            SSLCertificateFile /etc/apache2/ssl/certs/www.test.org.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.test.org.key
            SSLCACertificateFile /etc/apache2/ssl/ca/ca.crt
            <Proxy *>
                AddDefaultCharset Off
                Order deny,allow
                Allow from all
            </Proxy>
            <Location /carbon>
                ProxyPass http://www.test.org:9763/carbon
                ProxyPassReverse http://www.test.org:9763/carbon
            </Location>
            <Location /services/GbTestProxy>
                SSLVerifyClient require
                SSLVerifyDepth 5
                RequestHeader set SSL_CLIENT_S_DN "%{SSL_CLIENT_S_DN}s"
                RequestHeader set SSL_CLIENT_I_DN "%{SSL_CLIENT_I_DN}s"
                RequestHeader set SSL_CLIENT_S_DN_CN "%{SSL_SERVER_S_DN_CN}s"
                RequestHeader set SSL_SERVER_S_DN_OU "%{SSL_SERVER_S_DN_OU}s"
                RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s"
                RequestHeader set SSL_CLIENT_CERT_CHAIN0 "%{SSL_CLIENT_CERT_CHAIN0}s"
                RequestHeader set SSL_CLIENT_CERT_CHAIN1 "%{SSL_CLIENT_CERT_CHAIN1}s"
                RequestHeader set SSL_CLIENT_VERIFY "%{SSL_CLIENT_VERIFY}s"
                ProxyPass http://www.test.org:8888/services/GbTestProxy
                ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
            </Location>
        </VirtualHost>

    Hope someone can help. Regards, nidkil
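
    Not part of the original configs, but two quick checks can help separate "Apache never fills the variable" from "the header never reaches the backend": make a request with a client certificate and watch what actually arrives on port 8888. The file names below are placeholders, and note that SSL_CLIENT_CERT_CHAINn can only be populated if the client actually sends intermediate certificates rather than just its leaf certificate:

        # Call the proxied location with a client certificate signed by the configured CA
        curl -v https://www.test.org/services/GbTestProxy \
             --cert client.crt --key client.key --cacert ca.crt

        # On the backend host, dump whatever headers arrive (netcat syntax varies by variant)
        nc -l 8888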

    Read the article

  • OCZ Vertex 2 not recognized by Ubuntu installer

    - by Zsub
    As I boot into the Ubuntu 10.10 (or 11.04, doesn't matter) live environment or installer, it just refuses to recognise my Vertex 2. It reports the disk as ATA and not supporting SMART, shows no serial number, and doesn't list the size correctly. All fdisk tells me is "Unable to read /dev/sda" (it's the only storage in the PC). I'm now running a temporary install of Windows 7 off of it, which worked like a charm, so where am I going wrong with Ubuntu...

    Specs:

        Asus M4N68T-M LE V2 (BIOS 0702, most recent)
        OCZ Vertex 2 SSD 60 GB
        AMD Athlon II X4 640
        Patriot PSD34G13332 4GB DDR3 RAM (two banks)

    EDIT: I installed a second drive, installed Ubuntu on that and booted; it recognised the SSD just fine. I'm now trying to apt-get upgrade the live environment. I wonder if there is any way to sort of install Ubuntu from Ubuntu (I boot into the working install on the other drive, install it on the SSD and then boot from the SSD).

    EDIT2: Ok, so that doesn't work. The installer detects the SSD; however, it cannot format it.

    EDIT3: After a fresh boot I can read out SMART data and even perform a read benchmark, but if I try to format it, or do a write benchmark, it'll crap out, and after that it says SMART is not supported. So basically it seems I can't write to the disk, as it will stop working when I do. I will try to run repeated read benchmarks to see if that has any effect.

    EDIT4: I'm running several read benchmarks on the drive right now; they give results that are to be expected from an SSD. If the read benches don't fail, I can use fdisk on the disk, but it is now stuck trying to re-read the partition table after issuing the 'w' command.

    EDIT5: Parted Magic did recognize the drive, and with hdparm -I it could even tell me the drive was in a frozen state. I power-cycled it (just pulled the plug from the SSD and plugged it back in) and it wasn't frozen any more. After that I could upgrade the firmware on the drive (still using Parted Magic) and format it to ext4. After I rebooted into the Ubuntu installer, it wouldn't get recognized, and hdparm didn't want to talk to it, saying HDIO_DRIVE_CMD(identify) failed: Invalid exchange.

    EDIT6: For some reason, if I enable one of the RAID controllers (the one the SSD is connected to, obviously) Ubuntu will let me format it, mount it and write to it. The installer also recognizes it. However, if the RAID controller is enabled but no array is defined, the motherboard can't boot from it :(
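
    A few read-only commands from the live environment (a sketch, assuming the SSD really is /dev/sda as above) can show whether the kernel identified the drive at all and whether it is back in the frozen security state mentioned in EDIT5:

        # Identify data and security state as seen by the kernel
        sudo hdparm -I /dev/sda | grep -E -i 'Model|Firmware|frozen'

        # Any ATA errors logged while the installer was poking at the disk?
        dmesg | grep -E -i 'sda|ata[0-9]' | tail -n 20

        # Read-only SMART summary (needs smartmontools; only works if identify succeeds)
        sudo smartctl -a /dev/sda | head -n 40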

    Read the article

  • fink hangs while compiling Octave on OS X

    - by Mark Bennett
    Disclaimer: I'm totally new to Fink. I'm trying to install Octave (the open-source Matlab clone) on Mountain Lion using Fink, following the instructions at http://wiki.octave.org/Octave_for_MacOS_X It's a new installation of Fink, and I've also installed X11 per the instructions. I'm using this command (which I believe is correct, since everything's 64-bit now):

        sudo fink install octave-atlas

    It hangs after a while, showing this as its last output:

        ...
        Setting up xft2-dev (2.2.0-2) ...
        Clearing dependency_libs of .la files being installed
        Reading buildlock packages...
        All buildlocks accounted for.
        /sw/bin/dpkg-lockwait -i /sw/fink/dists/stable/main/binary-darwin-x86_64/x11/xinitrc_1.5-1_darwin-x86_64.deb
        (Reading database ... 14871 files and directories currently installed.)
        Preparing to replace xinitrc 1.5-1 (using .../xinitrc_1.5-1_darwin-x86_64.deb) ...
        Unpacking replacement xinitrc ...
        Setting up xinitrc (1.5-1) ...

    I did notice the process name on the terminal's tab was "sort", so the second time I hit this I tried Control-D (end-of-file), and this did seem to unstick it. I'm wondering if there's some malformed command and sort was trying to read from stdin?

    Questions:

    1: Has anybody else seen this? Google wasn't helpful.
    2: Fink outputs a LOT of warnings and errors... is that normal?
    3: I'm wondering if anybody has got Fink to compile Octave on Mountain Lion specifically, and whether they used just "octave" or "octave-atlas". Or whether you got it working with MacPorts or Homebrew?
    4: Later, Fink failed with "Failed: phase compiling: gnuplot-minimal-4.6.1-1 failed". I haven't started googling that yet... but wondering if anybody's seen that? I also tried MacPorts but got other errors with Octave, and reading online it looks like Homebrew also has issues with Octave on Mountain Lion.
    5: Generally, I'm looking for anybody who's got Octave running on Mountain Lion with any of the package managers. I've been at this for a couple of days ;-)
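
    Not from the original post, but when a Fink build hangs it helps to capture the full output and then look at what the stuck process is actually waiting on; a sketch:

        # Keep a complete log of the build so the last command before the hang is visible
        sudo fink install octave-atlas 2>&1 | tee ~/fink-octave.log

        # In another terminal: is the hung process (e.g. "sort") sitting there reading stdin?
        ps axo pid,ppid,stat,command | grep -E 'fink|sort|make' | grep -v grep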

    Read the article

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this; I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and onto port 7000 on x.x.x.218. Here is some ASCII art:

        |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>-----|Squid on x.x.x.218|-----|www|
             3128                          6999              7000                                               80

    When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth), because it's a satellite link. Here's the ssh tunnel I've been using:

        ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N

    The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel. Am I going about this the wrong way? Should I create the ssh tunnel on x.x.x.218 instead? I use iptables to stop Squid on x.x.x.224 from reading port 80, and to feed it from port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance.

    Regarding Eduardo Ivanec's question, here is a tcpdump -i any port 7000 -nn dump from x.x.x.218:

        14:42:15.386462 IP 172.16.1.224.40006 > 172.16.1.218.7000: Flags [S], seq 2804513708, win 14600, options [mss 1460,sackOK,TS val 86702647 ecr 0,nop,wscale 4], length 0
        14:42:15.386690 IP 172.16.1.218.7000 > 172.16.1.224.40006: Flags [R.], seq 0, ack 2804513709, win 0, length 0

    Update 2: When I run the second command, I get the following error in my browser:

        ERROR
        The requested URL could not be retrieved
        The following error was encountered while trying to retrieve the URL:
        http://109.123.109.205/index.php
        Zero Sized Reply
        Squid did not receive any data for this request.
        Your cache administrator is webmaster.
        Generated Fri, 01 Jul 2011 16:06:06 GMT by remote-site (squid/2.7.STABLE9)

    remote-site is 172.16.1.224. When I do a tcpdump -i any port 7000 -nn I get the following:

        root@remote-site:~# tcpdump -i any port 7000 -nn
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
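
    For what it's worth, repeated "open failed: connect failed: Connection refused" messages from ssh usually mean the far end of the tunnel could not connect to the forward target (here 172.16.1.224:6999), i.e. nothing is listening on that address and port. A quick way to check both ends - a sketch, with the first and last commands run on the machine where the -L forward was created:

        # Is the local end of the tunnel actually listening on 7000?
        ss -tlnp | grep ':7000'        # or: netstat -tlnp | grep ':7000'

        # Is anything listening on the forward target? (run this one on 172.16.1.224)
        ss -tlnp | grep ':6999'

        # End-to-end test through the tunnel; a "Connection refused" here reproduces the problem
        curl -v http://127.0.0.1:7000/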

    Read the article

  • Using nginx's proxy_redirect when the response location's domain varies

    - by Chalky
    I am making a web app using SoundCloud's API. Requesting an MP3 to stream involves two requests. I'll give an example. Firstly:

        http://api.soundcloud.com/tracks/59815100/stream

    This returns a 302 with a temporary link to the actual MP3 (which varies each time), for example:

        http://ec-media.soundcloud.com/xYZk0lr2TeQf.128.mp3?ff61182e3c2ecefa438cd02102d0e385713f0c1faf3b0339595667fd0907ea1074840971e6330e82d1d6e15dd660317b237a59b15dd687c7c4215ca64124f80381e8bb3cb5&AWSAccessKeyId=AKIAJ4IAZE5EOI7PA7VQ&Expires=1347621419&Signature=Usd%2BqsuO9wGyn5%2BrFjIQDSrZVRY%3D

    The issue I had was that I am attempting to load the MP3 via JavaScript's XMLHttpRequest, and for security reasons the browser can't follow the 302, as ec-media.soundcloud.com does not set a header saying it is safe for the browser to access via XMLHttpRequest. So instead of using the SoundCloud URL, I set up two locations in nginx, so the browser only interacts with the server my app is hosted on and no security errors come up:

        location /soundcloud/tracks/ {
            # rewrite URL to match api.soundcloud.com's URL structure
            rewrite \/soundcloud\/tracks\/(\d*) /tracks/$1/stream break;
            proxy_set_header Host api.soundcloud.com;
            proxy_pass http://api.soundcloud.com;
            # the 302 will redirect to /soundcloud/media instead of the original domain
            proxy_redirect http://ec-media.soundcloud.com /soundcloud/media;
        }

        location /soundcloud/media/ {
            rewrite \/soundcloud\/media\/(.*) /$1 break;
            proxy_set_header Host ec-media.soundcloud.com;
            proxy_pass http://ec-media.soundcloud.com;
        }

    So myserver/soundcloud/tracks/59815100 returns a 302 to myserver/soundcloud/media/xYZk0lr2TeQf.128.mp3...etc, which then forwards the MP3 on. This works! However, I have hit a snag. Sometimes the 302 location is not ec-media.soundcloud.com, it's ak-media.soundcloud.com. There are possibly even more servers out there, and presumably more could appear at any time. Is there any way I can handle an arbitrary 302 location without having to manually enter each possible variation? Or is it possible for nginx to handle the redirect and return the response of the second step, so that myserver/soundcloud/tracks/59815100 follows the 302 behind the scenes and returns the MP3? The browser automatically follows the redirect, so I can't do anything with the initial response on the client side. I am new to nginx and in a bit over my head, so apologies if I've missed something obvious or it's beyond the scope of nginx. Thanks a lot for reading.
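
    A quick way to see which media host the API is handing out at any given moment (and to confirm what proxy_redirect is or isn't rewriting) is to look at the Location headers directly. This is only an illustration - the real API call may need extra parameters such as a client_id, and "myserver" is the placeholder host used above:

        # Where does the upstream 302 point right now? (ec-media, ak-media, ...)
        curl -sI "http://api.soundcloud.com/tracks/59815100/stream" | grep -i '^location'

        # Same request through nginx: the Location should be /soundcloud/media/... only
        # when the upstream host happened to match the single proxy_redirect rule
        curl -sI "http://myserver/soundcloud/tracks/59815100" | grep -i '^location'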

    Read the article

  • Windows 7 playback of dvr-Microsoft files stutters

    - by Jim Lynn
    I've just had to install Windows 7 on my Media Center machine because my Vista installation had a faulty drive. I've got the latest drivers that I can find - Intel 945GM integrated Graphics, Realtek audio drivers. Things are working OK with one exception. Playback of old recordings, from dvr-Microsoft format files, is choppy. The picture freezes for a fraction of a second, then quickly catches up. The sound is uninterrupted and doesn't pause. These freezes happen once every 5 seconds or so. It's very regular. Playback of Live TV from the digital tuner is perfectly smooth. DVD playback is perfectly smooth. As an experiment, I used the MPEG editing package VideoReDo to create a small test file in three different formats. This program takes the raw MPEG streams and repackages them into the desired container. I took the same clip and created three files in three formats: dvr-Microsoft (Microsoft's old recorded TV format); mpg (standard MPEG); and ts (raw MPEG transport stream of the kind often produced by PVRs). When these three files are played back under Windows 7, the mpg and ts files play smoothly, but the dvr-Microsoft file stutters. The last piece of data I have is that two other Windows 7 machines can play back dvr-Microsoft files smoothly with no stuttering. One is a netbook, with less grunt than the media centre. So there must be something specific about my Media Center machine that's causing the problem. Does anyone have any idea where I can look now? I don't know much about AV software, codecs, filter graphs etc. but I suspect that's where the problem lies. Rendering the video isn't the problem, but extracting the streams is. How would I go about diagnosing the problem? Edited to add: I just used the GraphStudio tool to look at the filter graph on the offending PC. The filter graph it uses by default for dvr-Microsoft looks identical to the other machines, and, interestingly, when I play the files using GraphStudio they run smoothly. Under Windows Media Player and Windows Media Center they stutter. I'd like to see the filter graph for Windows Media Player but GraphStudio won't show it. It looks like Windows Media Player and WMC are using a different decoding path to GraphStudio. Edited again to add: Today I purchased a new HDTV. The same Media Center driving the TV at 1080p is now playing back the old Recorded TV files smoothly, without stuttering. So whatever the cause of the original problem, using a different resolution seems to have removed the problem. It might also explain why nobody else has had this problem. I doubt many people use Media Centre with a 14in portable TV.

    Read the article

  • Linux buffer cache effect on IO writes?

    - by Patrick LeBoutillier
    I'm copying large files (3 x 30G) between 2 filesystems on a Linux server (kernel 2.6.37, 16 cores, 32G RAM) and I'm getting poor performance. I suspect that the usage of the buffer cache is killing the I/O performance. To try and narrow down the problem I used fio directly on the SAS disk to monitor the performance. Here is the output of 2 fio runs (the first with direct=1, the second one direct=0):

    Config:

        [test]
        rw=write
        blocksize=32k
        size=20G
        filename=/dev/sda
        # direct=1

    Run 1:

        test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1
        Starting 1 process
        Jobs: 1 (f=1): [W] [100.0% done] [0K/205M /s] [0/6K iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=4667
          write: io=20,480MB, bw=199MB/s, iops=6,381, runt=102698msec
            clat (usec): min=104, max=13,388, avg=152.06, stdev=72.43
            bw (KB/s) : min=192448, max=213824, per=100.01%, avg=204232.82, stdev=4084.67
          cpu          : usr=3.37%, sys=16.55%, ctx=655410, majf=0, minf=29
          IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
             submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             issued r/w: total=0/655360, short=0/0
             lat (usec): 250=99.50%, 500=0.45%, 750=0.01%, 1000=0.01%
             lat (msec): 2=0.01%, 4=0.02%, 10=0.01%, 20=0.01%
        Run status group 0 (all jobs):
          WRITE: io=20,480MB, aggrb=199MB/s, minb=204MB/s, maxb=204MB/s, mint=102698msec, maxt=102698msec
        Disk stats (read/write):
          sda: ios=0/655238, merge=0/0, ticks=0/79552, in_queue=78640, util=76.55%

    Run 2:

        test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1
        Starting 1 process
        Jobs: 1 (f=1): [W] [100.0% done] [0K/0K /s] [0/0 iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=4733
          write: io=20,480MB, bw=91,265KB/s, iops=2,852, runt=229786msec
            clat (usec): min=16, max=127K, avg=349.53, stdev=4694.98
            bw (KB/s) : min=56013, max=1390016, per=101.47%, avg=92607.31, stdev=167453.17
          cpu          : usr=0.41%, sys=6.93%, ctx=21128, majf=0, minf=33
          IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
             submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             issued r/w: total=0/655360, short=0/0
             lat (usec): 20=5.53%, 50=93.89%, 100=0.02%, 250=0.01%, 500=0.01%
             lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.12%
             lat (msec): 100=0.38%, 250=0.04%
        Run status group 0 (all jobs):
          WRITE: io=20,480MB, aggrb=91,265KB/s, minb=93,455KB/s, maxb=93,455KB/s, mint=229786msec, maxt=229786msec
        Disk stats (read/write):
          sda: ios=8/79811, merge=7/7721388, ticks=9/32418456, in_queue=32471983, util=98.98%

    I'm not knowledgeable enough with fio to interpret the results, but I don't expect the overall performance using the buffer cache to be 50% less than with O_DIRECT. Can someone help me interpret the fio output? Are there any kernel tunings that could fix/minimize the problem? Thanks a lot,
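
    As a side note (not from the original post), the same job can be driven entirely from the command line, which makes it easy to toggle O_DIRECT and to start each buffered run from an empty page cache; as in the original test this writes to the raw device, so it is destructive:

        # Start the buffered run from a known state
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

        # Same workload, O_DIRECT on and off (overwrites /dev/sda, exactly like the job file above)
        sudo fio --name=test --rw=write --bs=32k --size=20G --filename=/dev/sda --direct=1
        sudo fio --name=test --rw=write --bs=32k --size=20G --filename=/dev/sda --direct=0

        # Watch dirty pages and writeback while the buffered run is going
        watch -n1 'grep -E "Dirty|Writeback" /proc/meminfo'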

    Read the article

  • The server rejected the session-establishment request: WCF hosted on IIS

    - by Dave Hanna
    Background: I'm working on a project where we have about a dozen distinct WCF services implemented in an IIS application, communicating over net.tcp on the default port (808), using the Microsoft Net.Tcp Port Sharing Service. I recently added a self-test method to the base class of each of these services so that I could remotely hit the service and get back a status string verifying that it was in operation. We implement this app in a ladder of environments - Development, QA, UAT, and finally production. My problem: My test program, which instantiates a connection to each service in turn and invokes the self-test method, works fine on all the environments below production. We recently moved the app to production, and I'm getting a weird error that I can't explain: On the first of the services that I hit, I get back an exception: "The server at [URL] rejected the session-establishment request". All the other services respond fine. I initially thought there was something wrong with the particular service that was failing, but I tried rearranging the list of services into a different order, and it SEEMS to always be the first service that I hit that fails. (I say SEEMS because it think once in the early iterations of testing, I saw it happen on the second service that it hit. But I haven't been able to reproduce that.) I've looked at application startup delays, and that doesn't seem to be the problem, because I can come back and run the test again as soon as it finishes - a delay of only a minute or two - and get the same error. Also, in the lower level environments, there is a start up delay of probably 30 seconds to a minute, but the result still comes back as expected. I've tried accessing the services over http from INetManager, and I get intermittent failures on all the services - a particular service will return a yellow screen of death on on invocation, then come up with the expected link to the WSDL on the next one seconds later. I'm completely at a loss to explain this behavior, or how to resolve it. I've googled the error message, and not found anything helpful. It may be a configuration issue - the production servers are newly provisioned VM's, and we may not have the config exactly right (whereas all the lower level environments have been running this and other similar apps for some time), but I have not idea what to look for. I've looked at the properties of the app pool that the app is running on and compared it to the lower level environments without finding any differences. If somebody can point me in the right direction, you would have my undying gratitude.

    Read the article

  • Is dual-booting an OS more or less secure than running a virtual machine?

    - by Mark
    I run two operating systems on two separate disk partitions on the same physical machine (a modern MacBook Pro). In order to isolate them from each other, I've taken the following steps: Configured /etc/fstab with ro,noauto (read-only, no auto-mount) Fully encrypted each partition with a separate encryption key (committed to memory) Let's assume that a virus infects my first partition unbeknownst to me. I log out of the first partition (which encrypts the volume), and then turn off the machine to clear the RAM. I then un-encrypt and boot into the second partition. Can I be reasonably confident that the virus has not / cannot infect both partitions, or am I playing with fire here? I realize that MBPs don't ship with a TPM, so a boot-loader infection going unnoticed is still a theoretical possibility. However, this risk seems about equal to the risk of the VMWare/VirtualBox Hypervisor being exploited when running a guest OS, especially since the MBP line uses UEFI instead of BIOS. This leads to my question: is the dual-partitioning approach outlined above more or less secure than using a Virtual Machine for isolation of services? Would that change if my computer had a TPM installed? Background: Note that I am of course taking all the usual additional precautions, such as checking for OS software updates daily, not logging in as an Admin user unless absolutely necessary, running real-time antivirus programs on both partitions, running a host-based firewall, monitoring outgoing network connections, etc. My question is really a public check to see if I'm overlooking anything here and try to figure out if my dual-boot scheme actually is more secure than the Virtual Machine route. Most importantly, I'm just looking to learn more about security issues. EDIT #1: As pointed out in the comments, the scenario is a bit on the paranoid side for my particular use-case. But think about people who may be in corporate or government settings and are considering using a Virtual Machine to run services or applications that are considered "high risk". Are they better off using a VM or a dual-boot scenario as I outlined? An answer that effectively weighs any pros/cons to that trade-off is what I'm really looking for in an answer to this post. EDIT #2: This question was partially fueled by debate about whether a Virtual Machine actually protects a host OS at all. Personally, I think it does, but consider this quote from Theo de Raadt on the OpenBSD mailing list: x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit. You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes. -http://kerneltrap.org/OpenBSD/Virtualization_Security By quoting Theo's argument, I'm not endorsing it. I'm simply pointing out that there are multiple perspectives here, so I'm trying to find out more about the issue.

    Read the article

  • SMPS stops when I plug in a SATA drive?

    - by claws
    Hello. Part 1: My first question: all the 4-wire power connectors (the ones intended for hard disks/DVD drives, not the motherboard) are the same, right? I've been using all of them interchangeably and had no problem for years. Yesterday I borrowed a SATA disk from my friend and connected it to my computer using a SATA power adaptor (4-wire), and when I switched on the computer there were fumes coming out of the connector. I immediately turned it off (within just one second). I tested the voltages on the 4-wire power connector of my SMPS: they were 5.3 V & 12.2 V. I couldn't measure the current. But my SMPS label reads: DC Output: 3.3 V (25 A), +5 V (32 A), -5 V (0.3 A), +12 V (17 A), -12 V (0.8 A). And the SATA hard disk label reads: Input: +5 V (0.72 A), +12 V (0.52 A). I'm shocked! I never noticed this. Does the SATA power adaptor scale the current down to what's required? If it doesn't, I've been connecting things the same way for years and never had any problem. This is the first time I'm encountering it.

    Part 2: I wanted to return the drive to my friend. He has two hard disks, SATA & PATA. It's the SATA one that I borrowed. When he switches on, the CPU fan usually starts, then stops for a second, then starts again and continues working. That was the earlier situation. I don't know why it stops & starts. Well, now when I connect this SATA disk and switch ON the computer, the CPU fan starts (just for an instant, not even 0.5 sec) and stops. It doesn't start again; I mean the power from the SMPS has stopped. But if I disconnect this SATA disk, it works fine. What seems to be the problem? I've no idea why there were fumes or why his SMPS starts & stops giving power. What is its relation with the SATA disk connection?
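
    As a rough sanity check on the numbers quoted (assuming both labels are accurate): the amperage figures on a PSU label are maximums it can supply, not what it forces out, and a drive only draws what it needs - roughly 5 V × 0.72 A + 12 V × 0.52 A = 3.6 W + 6.24 W ≈ 9.8 W, far below the 5 V/32 A and 12 V/17 A limits. So, barring a short or a miswired adaptor, the ratings themselves are not a mismatch.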

    Read the article

  • So I want to separate my Program Files from the hard disk with the other system files. What is the b

    - by grg-n-sox
    So I am running Windows 7 as my only OS. I have two hard drives in my computer. The first is a 74 GB Western Digital 10K RPM Raptor. The second is a 1 TB Seagate Barracuda (couldn't remember if it was a 7200.12 or some other decimal after the 7200). The OS is installed on the Raptor and I am just using the Barracuda for storage. With this setup, in case you couldn't guess already, the Raptor fills up quickly and I am constantly having to maintain file locations. And although it is nice to have that quicker boot time and program loading, the time spent maintaining the drive makes me waste more time overall. So I am looking for a way to keep it clear while still keeping up system loading speeds. A performance hit on games and such is easily acceptable, and as long as I can guarantee 5 GB of free space on the Raptor, I can always just temporarily move a disc image there.

    Since I have games like Borderlands and Mass Effect installed, as well as large files such as Linux distro DVD disc images in My Documents, I figure I should probably be moving my personal files and Program Files directories to the Barracuda. I currently have folders on the Barracuda for this, but that means routinely copying files over, and I can't really do anything with the Program Files folder that already exists. The best I can do is remember to point the install directory of any program installation at the alternative location, which I can never seem to get to work right with Steam.

    With that in mind, is there a way, not too drastic, to change some folders and system settings once and have everything work fine afterwards for my setup? I have considered just reinstalling Windows 7 to the Barracuda, but that would defeat the purpose of the Raptor except for running disc images off of it. I have also heard a bit about using symlinks to fix this, but I have also heard that symlinks in Windows are not quite the same thing and are not as well supported. An example a friend mentioned was something about how, if you have a symlink in Windows from a small hard drive to a large hard drive and the contents the symlink points to are larger than the small hard drive's capacity, then Windows will think the smaller hard drive is full. So is there a fix/workaround that will let me use symlinks across hard drives without those issues, or is there a better solution I am not being told about, not mentioning, or not thinking of?
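
    One commonly suggested approach for this situation is to move a folder to the large drive and leave an NTFS junction behind on the small one, so programs keep using the original path. A hedged sketch - the paths are illustrative assumptions, and it needs an elevated command prompt:

        rem Move the folder to the Barracuda, then leave a junction on the Raptor:
        robocopy "C:\Games\Borderlands" "D:\Games\Borderlands" /E /MOVE
        mklink /J "C:\Games\Borderlands" "D:\Games\Borderlands"

    Junctions are resolved by the filesystem, so most programs follow them transparently; whether the free-space quirk the friend described shows up depends on which tool is doing the measuring, so it is worth testing with a single folder before moving everything.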

    Read the article

  • How to troubleshoot performance issues of PHP, MySQL and generic I/O

    - by jbx
    I have a WordPress-based website running on shared hosting. Its response time is very decent (around 2 s to retrieve the HTML page and 5 s to load all the resources). I was planning to move it to a dedicated virtual server (Ubuntu 12.04 LTS), which should theoretically improve things and make them more consistent given it's not shared. However, I observed severe performance degradation, with the page taking 10 seconds to be generated.

    I ruled out network issues by editing /etc/hosts on the server and mapping the domain to 127.0.0.1. I used the Apache load tester ab to get the HTML, so JS, CSS and images are all excluded. It still took 10 seconds. I have Zpanel installed on the server, which also uses MySQL, and its pages come up quite fast (1.5 s), as does phpMyAdmin. Performing some queries on the WordPress database directly through phpMyAdmin returns them quite fast too, with query times in the 10 to 30 millisecond region. Memory is also sufficient, with only 800 MB of the 1 GB physical memory in use, so it doesn't seem to be a swap issue either. I have also installed APC to try to improve the PHP performance, but it didn't have any effect.

    What else should I look for? What could be causing this degradation in performance? Could it be some kind of I/O issue, since I am running on a cloud-based virtual server? I wish to raise the issue with my provider, but without actual data from some diagnosis I am afraid he will just blame my application.

    UPDATE - sar output (every second) while I made an HTTP request:

        02:31:29  CPU  %user  %nice  %system  %iowait  %steal   %idle
        02:31:30  all   0.00   0.00     0.00     0.00    0.00  100.00
        02:31:31  all   2.22   0.00     2.22     0.00    0.00   95.56
        02:31:32  all  41.67   0.00     6.25     0.00    2.08   50.00
        02:31:33  all  86.36   0.00    13.64     0.00    0.00    0.00
        02:31:34  all  75.00   0.00    25.00     0.00    0.00    0.00
        02:31:35  all  93.18   0.00     6.82     0.00    0.00    0.00
        02:31:36  all  90.70   0.00     9.30     0.00    0.00    0.00
        02:31:37  all  71.05   0.00     0.00     0.00    0.00   28.95
        02:31:38  all  14.89   0.00    10.64     0.00    2.13   72.34
        02:31:39  all   2.56   0.00     0.00     0.00    0.00   97.44
        02:31:40  all   0.00   0.00     0.00     0.00    0.00  100.00
        02:31:41  all   0.00   0.00     0.00     0.00    0.00  100.00

    My suspicion that this is an I/O-related issue also comes from the fact that a caching plugin I use to reduce the number of database queries, by precompiling PHP pages, is actually making things worse instead of better. It seems that file access is what is slowing things down.
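
    A small sketch (in Python) of the kind of raw disk check that could give the provider concrete numbers - the file path and sizes are arbitrary assumptions; run it on the volume that serves the site and compare against the shared host:

        import os
        import time

        PATH = "/var/www/io_test.bin"      # hypothetical path on the volume under test
        BLOCK = b"\0" * (1024 * 1024)      # 1 MiB per write
        BLOCKS = 64                        # 64 MiB total

        start = time.time()
        with open(PATH, "wb") as f:
            for _ in range(BLOCKS):
                f.write(BLOCK)
                f.flush()
                os.fsync(f.fileno())       # force each block to disk to expose write latency
        elapsed = time.time() - start
        os.remove(PATH)

        print("%d MiB synced in %.2f s (%.1f MiB/s)" % (BLOCKS, elapsed, BLOCKS / elapsed))

    If the virtual server's synced write throughput comes out far below the shared host's, that is the kind of evidence that shifts the conversation from the application to the storage layer.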

    Read the article

< Previous Page | 585 586 587 588 589 590 591 592 593 594 595 596  | Next Page >