Search Results

Search found 38126 results on 1526 pages for 'running'.


  • Strange RDP / Remote Desktop problem

    - by John Landheer
    I'll try to be as specific as I can be:
      - Server is running SBS 2008 R2 (with all updates)
      - Server is connected to the internet
      - Server has 2 NICs, one is disabled
      - Server is running the RDP service (accessible directly from the internet, I know, not as secure as it should be)
      - Computers A and B are on the same local net. Computers A and B are both Windows 7.
      - Users X and Y are both admins on the server
      - Computer A can connect as user X to the server with mstsc
      - Computer A can connect as user Y to the server with mstsc
      - Computer B can connect as user X to the server with mstsc
      - Computer B CANNOT connect as user Y to the server with mstsc! Error that the username/password is incorrect.
    The last point is the problem: I get an authentication error. This used to work flawlessly for the last year. The server and desktops have been rebooted.
    EDIT: I tried:
      - prefixing the domain to the username
      - prefixing the server computer name to the username
      - changing the password
      - copy/pasting the password from Notepad to make sure it was correct
    I find it very strange....
    EDIT: The computers are not on the same subnet as the server. The server is at my hosting provider. All computers, as all users, can reach the web app that is running on the server.

    Read the article

  • Proxmox DRBD configuration split-brain [on hold]

    - by AudioDan
    I am planning a Proxmox HA configuration with two Dell R710 machines (dual 6-core processors in each) with enterprise-level RAID arrays. I would be using DRBD with a quorum disk on a third machine. I would dedicate two 1 Gb NICs on each server to the DRBD communications. We would have approximately 12 to 14 virtual machines running on this pair of servers. The Proxmox manual recommends creating two DRBD resources - one for the virtual machines that normally run on ServerA and one for the virtual machines that normally run on ServerB. This is because of the Primary/Primary state in which this configuration runs. If both servers have VMs talking to the same DRBD resource and a split-brain situation occurs, there is potential for data corruption that must be resolved. While I understand it would take more effort to create new virtual machines, can anybody foresee any potential problems with running a separate DRBD resource for each VM instead? Does anyone have experience running a setup that way, and has it worked well? It seems to me that would allow more flexibility in moving machines back and forth.
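
    For illustration only, a rough sketch of what one per-VM DRBD resource could look like in a DRBD 8.3-style config; the resource name, host names, backing devices, addresses and port are all hypothetical and would have to match the real node names and storage layout:

      resource vm-101 {
        protocol C;
        # one DRBD device per VM disk; minor number and backing LV are made up
        on proxmox-a {
          device    /dev/drbd101;
          disk      /dev/vg0/vm-101-disk-1;
          address   10.10.10.1:7101;
          meta-disk internal;
        }
        on proxmox-b {
          device    /dev/drbd101;
          disk      /dev/vg0/vm-101-disk-1;
          address   10.10.10.2:7101;
          meta-disk internal;
        }
      }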

    Read the article

  • VirtualBox management interface unreliability

    - by Arlen Cuss
    I'm using VirtualBox 3.2.8_OSE with 20 VMs running, and everything's going fine. I find that if I hammer the VBoxManage interface, all sorts of interesting things happen, usually necessitating either a restart of the VM in question, or of all VMs. For instance, if I use VBoxManage guestcontrol execute to run processes, after a few hours of using it maybe once or twice a minute on any given VM, it'll mysteriously start reporting VERR_NOT_IMPLEMENTED and refusing to do anything—sometimes trying to restart /usr/sbin/VBoxService on the VM itself will get it back in working order, but often it won't, and in the meantime, no data can be collected using VBoxManage. Such data includes the VM's IP, so if I hadn't recorded it earlier, I'm usually in trouble and have no option but to portscan the network for it, or kill the VM's process on the host manually and restart it. This one I haven't narrowed down yet, but it seems that even using VBoxManage guestproperty get (to retrieve a machine's IP) frequently and rapidly is enough to cause all VMs' management interfaces to die. The processes are still running fine, but VBoxManage reports them all as "powered off". In the meantime, another process somewhere in the system seems to have decided that their being powered off means they need to be powered on again, and suddenly I have 2x the number of VBoxHeadless processes running than I used to. Has anyone else seen behaviour like this? Is there any workaround? This is a serious impediment to my work, as I've had to resort to a lot of (hacky) caching of data and rate-limiting how often I call VBoxManage, just in case I accidentally bring 20 VMs to their knees.
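
    As an illustration of the caching/rate-limiting approach mentioned at the end, a rough shell sketch for a Linux host; the VM name is a placeholder, and the guest property path assumes the Guest Additions publish the IP under /VirtualBox/GuestInfo/Net/0/V4/IP:

      #!/bin/sh
      # refresh the cached guest IP at most once per minute instead of on every lookup
      VM="my-vm"                 # placeholder VM name
      CACHE="/tmp/${VM}.ip"
      if [ ! -f "$CACHE" ] || [ $(( $(date +%s) - $(stat -c %Y "$CACHE") )) -gt 60 ]; then
          VBoxManage guestproperty get "$VM" "/VirtualBox/GuestInfo/Net/0/V4/IP" \
              | awk '/^Value:/ {print $2}' > "$CACHE"
      fi
      cat "$CACHE"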

    Read the article

  • Sending mail results in "Sender address rejected: Domain not found"

    - by user1281413
    The setup: WHM/cPanel CentOS 5 server running Exim and Courier for mail services, and BIND for domain name services. I recently moved servers. The old server was running a HIGHLY similar configuration, and all accounts were ported via WHM. However, the server is unable to send, and sometimes receive, email. Errors I am seeing (when I do get an error mail back) state:
      450 4.1.8 : Sender address rejected: Domain not found
    Edit for clarity: this is the error response from remote mail servers. Numerous independent mail servers come back with the same error. (The email address is merely one valid example.) My first instinct of course was to check the domain records. However, k-t.org appears to have a valid record (including an MX record), even after running it through domain checks on a completely different server elsewhere and online. Note that the issue appears to happen with all the domains hosted on the server, not just k-t.org. I have also ensured that a PTR was created. My Googling has only led me to people who had fairly basic DNS mistakes, but either I'm blind/dumb (possible, DNS is not my strong suit), or it's something that is a bit more archaic. I've run out of ideas, and I can't seem to find anything that could explain why servers are unable to resolve the domains. There doesn't seem to be anything missing or incorrect.
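
    For reference, the kind of external DNS checks described above can be run against a public resolver to rule out anything local; a small sketch using dig (8.8.8.8 is just an example resolver, and the PTR lookup uses a placeholder documentation address):

      dig @8.8.8.8 k-t.org MX +short        # the MX record remote servers should be able to find
      dig @8.8.8.8 k-t.org A +short         # the A record for the domain itself
      dig @8.8.8.8 -x 203.0.113.10 +short   # PTR for the server's public IP (placeholder address)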

    Read the article

  • What characters are illegal in Cisco IOS username secret passwords?

    - by Alain O'Dea
    I am using username secret to add users with encrypted passwords to our switches and firewall. I have been battling with the same switches and firewall for a couple of hours trying to get securely generated hard passwords for all admins. Sometimes, the passwords would go into the config, but wouldn't work for login. According to the documentation for enable secret, a password must not begin with a number, and ? has to be entered as Ctrl-V then ? to escape it. I followed that and still got passwords I could not use sometimes. There was no error when I ran username, but the password would be rejected on login by some, but not all, of the switches. They are all WS-C2960-48PST-L. The passwords it didn't like contained backticks "`" (the character on the same key as tilde ~, below Esc). The "misbehaving" switches are running: Cisco IOS Software, C2960 Software (C2960-LANBASEK9-M), Version 12.2(50)SE5, RELEASE SOFTWARE (fc1). The "working" switches are running: Cisco IOS Software, C2960 Software (C2960-LANBASEK9-M), Version 12.2(46)SE, RELEASE SOFTWARE (fc2). The "misbehaving" switches are running a newer IOS, so this suggests a regression introduced somewhere between 12.2(46)SE and 12.2(50)SE5. I was unable to find any evidence of this being intentional in the release notes for 12.2(50)SE. I would like to avoid this next time the passwords are changed :) What characters are illegal in Cisco IOS username secret passwords? Thank you for your help :)
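
    For context, a hedged example of the command in question; the user names and passwords below are made up and shown for syntax only:

      ! made-up accounts, for illustration only
      username netadmin secret Xy7qLm2
      ! a password containing ? has to be typed as Ctrl-V followed by ?, per the enable secret docs
      username helpdesk secret Ab3?Cd9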

    Read the article

  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers - one is running MySQL and the other is the public facing web server, which also has Redis and ElasticSearch running on it. Both of these servers are virtualised, but they're static in terms of not being able to replicate at present (various aspects are still dependent on the local filesystem). The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? i.e. Set up individual instances for the web server(s), Redis, and ElasticSearch, and most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)? I don't foresee having to scale the ElasticSearch server anytime soon as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point - but should this be done manually? I'm not sure of how this could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know. I'd appreciate advice on this. One more quick question while I'm here - how would I be able to deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?
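
    On the multi-server deployment point only, Capistrano 2 (which Capifony builds on) lets a role list several hosts, so a fixed pool of web servers can be deployed to in one run; this is just a sketch with placeholder host names, and it does not solve the harder problem of a pool whose size changes with auto-scaling:

      # deploy.rb (Capistrano 2 / Capifony style); host names are placeholders
      role :web, "web1.example.com", "web2.example.com"   # code is deployed to every :web host
      role :app, "web1.example.com", "web2.example.com"
      role :db,  "db1.example.com", :primary => true      # migrations run only here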

    Read the article

  • How to set up a Hyper-V domain with internet access

    - by fynnbob
    First off let me say that I'm not a network admin or server guy; I know very little about that stuff. What I'm trying to do is set up a virtualized domain using Hyper-V. Here is the configuration:
      - Physical server: 4Mb RAM, Windows Server 2008 R2 running Hyper-V
      - Virtual environment: one domain controller running Windows Server 2008 R2, and one client running Windows Server 2008 R2
    I have been successful in setting up a virtual domain controller and adding a virtual client to that domain controller, but I'm stuck at trying to give the virtual environment Internet access. I can give the client VM Internet access if I remove it from the virtual domain, but once I add it back to the virtual domain, Internet access is gone. I've read articles describing many different ways this can be done (using RRAS with NAT, using a wireless connection, etc...) but all of those articles only cover a small piece of the setup and also seem to be geared towards people who know their way around networking and servers, which I don't. I'd like to know more, but my thing is software development and I have my hands full trying to keep up with everything in that realm. I simply want to set up a virtual domain with Internet access for testing. Can anyone point me to any "for Dummies" type information on how to set up this type of environment, or can anyone provide this kind of step-by-step help? Any help would be very much appreciated.

    Read the article

  • Any dangers in using DDR memory with a higher frequency than the FSB?

    - by raw_noob
    I'm looking to upgrade memory in an older motherboard. The processor is an AMD Sempron 2500+ with a maximum speed of 333/166MHz. The motherboard is an MSI MS-7061 (KV3M-V), which accepts up to 2Gb of DDR memory maximum PC2700 in 2 slots and has a maximum FSB of 333MHz. The board does not have dual-channel support. Existing memory includes a stick of 512Mb PC3200, which seems to be running OK (presumably at PC2700) but is rated 200MHz, which is below the FSB speed. The other stick is 256Mb PC2100/133MHz, again below the FSB speed. (All figures from CPU-Z.) I have a chance to acquire a single used stick of PC3200/400MHz memory very cheaply. Crucial's system scanner seems to suggest that this will be OK with my system, but other sites have suggested that running memory with a higher frequency than the FSB can cause instability. Is this true? Would I be better waiting until I can buy the correct PC2700/333MHz stick? I'm assuming that the mixed memory I have at present is running as 768Mb at 133MHz. Is this a reasonable assumption? If so, would you expect the performance differences between 768Mb/133MHz and 1Gb/333MHz to be very noticeable? If I install the new 1Gb/400 or 333MHz stick in slot 1, am I right in thinking that adding back the existing 512Mb/200MHz stick in slot 2 would pull the whole 1.5Gb system memory speed down to 200MHz? If so, which would be better - 1.5Gb/200MHz, or the single 1Gb stick at the full 333MHz that the FSB permits? Is more headroom more important than extra speed? Any help - or even opinions - gratefully received. I can't find reliable information, and I can't afford to make expensive mistakes.

    Read the article

  • Parallels 6 - Is It Just Me or Does It Run Really Slowly?

    - by 5arx
    I've been running Parallels since version 2 with great success. I use it as my .Net development environment and over the last few years have converted so many others to the Parallels/Mac way of doing Windows/.Net development that I feel I should be getting perks/gifts and/or freebies from Parallels Corporation ;-) A month or so ago I upgraded to version 6 and ... immediately wished I hadn't. I'm currently running it on a laptop - a 2009 MacBook Pro (13"/2.53GHz/4GB) while my MacPros at work and home are still running v5. I have seen nothing in v6 that makes me want to upgrade the install on those. The general problem is performance - upon starting or suspending a VM (always Windows 7 Ultimate), OS X slows down, quite often freezing for a minute or two at a time. The performance of the VMs themselves is fine, but for me the point of this set-up is to be able to do web-browsing, email checking etc. on the OS X side of things while doing the stuff that can only be done on Windows (Visual Studio, SQL Server tools) on Windows. I have been using Parallels for a while, so I at least feel like I know what I'm doing, and at the moment I am heading towards forming an opinion that it's Parallels that's to blame. I've tweaked and tweaked all the VM configuration properties but to no avail. Support emails to the company have all received replies - there is no documented case of the issue you mention. Has anyone else seen this problem and if so, have you found a fix?

    Read the article

  • Adding a subnet to vSphere with a single vCenter and ESXi host

    - by Ilya Rakhlin
    Let me start off by saying that I do not specialize in networking; I am in the process of adding additional VMs to a testing environment and wanted some recommendations. In this case I am running a single ESXi 5.1 host and a single vCenter management server. The problem is, I need another range of IP addresses added to the existing setup, hopefully without reconfiguring everything. Currently the ESXi host is configured to IP: 192.168.100.200, gateway: 192.168.100.1 and subnet: 255.255.255.0. All of the VMs are running some version of Linux with hard-coded IP addresses in that range, and using that subnet. The VMs I am about to deploy I want to be on the 192.168.101.X network. Is it possible to add an additional subnet to this existing system that will also communicate with the current subnet? The ESXi host has 6 physical NICs but only one connected, as it is only a testing system; not sure if that matters. Are there any other ways to accomplish this, hopefully without restarting or at least without reconfiguring the IP addresses for each VM? Reason: Due to the configuration of the VMs to run the applications that we need, I am using a large amount of the current IPs in the potential range (mostly VIPs). I will be setting up a new version of this "environment" while keeping the old one, thus potentially running out of IP addresses.

    Read the article

  • Why is dwm.exe using so much memory?

    - by Leonard Challis
    I've scoured the web, but I'm sick of reading "scan your computer for viruses" and "upgrade your RAM" on answers to similar questions to this. I understand that dwm.exe is for (simply put) caching bitmaps for things like Aero Peek and similar, but as far as I have read it shouldn't be using vast amounts of memory. My colleague and I both have 4GB of RAM, Core 2 Duo, blah, blah -- essentially they're pretty capable. His dwm.exe is running at around 30MB; mine is currently running at about half a gig, though it does fluctuate quite a lot. This is the same while running the exact same applications (currently Zend Studio, Firefox (with Firemin - low memory usage), Outlook). Every so often I will get a notification asking me if I want to switch to Aero Basic because it's using too much memory, and sometimes it will just switch itself to Basic and let me know why. I know it's possible to stop it switching, but I want to know why it is using too much memory, otherwise it's just papering over the cracks. One thing to add is this seems to have started after a robbery on Monday, where two of my monitors were stolen, and I had to temporarily use a couple of alternative monitors. I am now using brand new monitors but the problem is the same. All drivers installed and working seemingly fine. Any ideas why the usage is so high? We are using Windows 7 64-bit Professional.

    Read the article

  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks total behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has been mainly done by running dd commands like this, in parallel for each disk:
      dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000
    However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in JBOD configuration, so each disk can be addressed directly. I have two questions:
      - Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN-% numbers in iotop, which I find hard to explain because the target is /dev/null.
      - How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
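
    For reference, a minimal sketch of the parallel dd pattern described above; the multipath device names are assumptions and would need to match the real /dev/mapper entries:

      #!/bin/bash
      # start one sequential read per multipath device and wait for all of them to finish
      for dev in /dev/mapper/mpath{1..48}; do
          dd if="$dev" of=/dev/null bs=1M count=3000 &
      done
      wait
      # adding iflag=direct to the dd line would bypass the page cache (O_DIRECT),
      # which takes the host's memory/caching behaviour out of the measurement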

    Read the article

  • Unable to connect to second name of Windows 2008 Server R2 machine from XP

    - by Tumba
    I used the command netdom computername /add:newname.domainname.com to add a second name to a server running Windows 2008 Server R2. After restarting the server, I had DNS "A" entries for both names. In addition, the second name was added to HKLM\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters\OptionalNames, which I believe should have taken care of any NetBIOS resolution. From my Windows 7 workstation, I can ping both names and running net view on both names reveals the same list of resources. From Windows XP, I can ping both names, but net view only works on the first name. Running net view on the second name returns: System error 52 has occurred. You were not connected because a duplicate name exists on the network. Go to System in Control Panel to change the computer name and try again. What do I need to do to make the second name usable from XP clients? Update: I was able to resolve the problem by adding the REG_DWORD key DisableStrictNameChecking = 1 to HKLM\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters, then restarting the Server service. However, I do not understand why this was necessary.
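
    For completeness, a sketch of how the registry change described in the update can be applied from an elevated command prompt (back up the key first; net stop may warn about dependent services):

      rem add the value described above
      reg add "HKLM\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1 /f
      rem restart the Server service so the change takes effect
      net stop server
      net start server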

    Read the article

  • How to make an "import image" button/field in a PDF form?

    - by Joe
    I am designing a "Lost Pet" poster for a local animal shelter. The idea is to make a PDF file that users of the shelter's website can download, insert their pet's information, and then print out if their pet goes missing. I will be designing the poster in InDesign CS3, and then exporting it to a PDF file. I will then add form fields for the user to fill out. I am OK with making text entry form fields. That is a simple matter. What's not so simple is figuring out a way to allow the user to insert a photo of their lost pet into the PDF, save the file, and then print it out with the photo in it. I am running Adobe Acrobat Professional 8. I am running it on a Mac. All of the searches I've done have told me that if I was running Acrobat on Windows, I'd have access to this other program called LiveCycle Designer which has pre-built form item libraries, including an Image Field. But I cannot find any similar option in the Mac version. Has anyone had any experience doing something like this? If so, I could certainly use some tips on making this work. Just a quick clarification... This is a volunteer design project that I am doing for the shelter, so I don't want to spend money on any extra software/addons at all. I am hoping this can be done somehow with the pre-existing software I have.

    Read the article

  • CPU temperatures high on new build after gaming

    - by Reznor
    My friend had a problem with his computer a while back. His games were crashing, even within the menus. He was stumped as to what the problem was, so I posted on here requesting help. He found out the next day, when his computer would start up but wouldn't display anything on the screen. His video card must have come screwed up. So, he got a replacement. Now, there's a new problem. His temperatures, which were acceptable before, are now insanely high. His GPU temperature runs 70-80C, which is understandable considering he's running his games maxed out, but the real problem here is his processor and motherboard temperatures. All four of his cores are running at 88-90C after coming out of a game. His motherboard temperature was also 70C at one point. In terms of cooling, his case should definitely be adequate. He has an Antec Twelve Hundred. He's using stock fans. The cable management in his case is very good; better than average. He's using the stock heatsink with the processor too, but note, it was fine before the replacement, so it isn't like there's some inherent problem. He has checked the case too. Everything's fine! No cables in the way. The heatsink is seated properly. He turned his case fans up to high as well, but the temperatures are persisting. Could the processor be overheating due to running games maxed out? Any ideas?

    Read the article

  • Slow browsing through IE on Windows Server 2012

    - by Volodymyr
    We've run into a strange issue on freshly installed servers. H/W: IBM server X3550 M4 7914; OS: Windows Server 2012 Std. When we try to browse on the servers through IE, not all sites open, or it takes too long to open the page; i.e. very few of them can be opened.
      - Local firewalls are disabled.
      - The servers are in a new subnet and traffic is allowed for it.
      - The VLAN is configured properly.
      - Another Windows Server 2012 host is running OK and Internet access works fine, but it is a VM running on Hyper-V 2012.
      - No proxy is used on the network.
      - At the same time, if one tries to establish a telnet session to any site on ports 80/443, it does work. Google works as well.
      - I've tried configuring a single QLogic adapter to check if the issue remains; it does.
      - Teaming is configured by means of QLogic, not by the built-in functionality.
      - IE Enhanced Security is disabled. IE settings were reset, more than once.
    Why certain sites work while others don't, I don't know. I also tried disabling ECN capability and restarting the server, with no luck:
      netsh int tcp set global ecncapability=disabled
    Any thoughts?
    UPD1: VMQ is disabled. The servers are not running Hyper-V.
    UPD2: The servers were rebuilt from scratch (got a mail a few minutes ago). The issue still remains. Teaming is now configured by means of Windows Server 2012.
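
    For reference, the current state of the global TCP options mentioned above can be checked before and after the change; a small sketch, run from an elevated prompt:

      rem show the current global TCP settings, including ECN Capability
      netsh int tcp show global
      rem the change already tried in the question, shown for completeness
      netsh int tcp set global ecncapability=disabled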

    Read the article

  • Setting Remote Desktop to allow IPv6 connections

    - by Garrett
    Setup: Basically I have 3 machines (2 virtual and 1 physical) that I would like to be able to RDP into from outside my NAT (a router). The VMs are Windows 7 and Windows XP, both fully patched with Teredo installed and working, both running in VirtualBox (their host also has Teredo working, though I'm not sure if that matters). They both have bridged network adapters with promiscuous mode enabled. The physical machine is Windows 7, fully patched, with an HFS server running on it and a dynamic DNS set up for my public IPv4 address and port forwarded. It also has Teredo installed and working.
    Symptoms: According to http://test-ipv6.com/ all 3 have public IPv6 addresses, and they can all connect to http://ipv6.google.com/. I can ping the XP VM from the host it's running on but I cannot ping it from any other machine. Also, I cannot ping either of the other machines from anywhere. I cannot connect to any of them over RDP from IPv6, however I can connect to all of them through IPv4. Any ideas what is going wrong?
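
    A quick sketch of the sort of checks that can narrow this down, assuming the Teredo/RDP setup described above; the address in the last command is a placeholder for the target machine's Teredo address:

      rem confirm Teredo is in the "qualified" state on both ends
      netsh interface teredo show state
      rem confirm the RDP listener is bound on IPv6 as well as IPv4
      netstat -an | findstr :3389
      rem basic reachability test to the target machine (replace the placeholder address)
      ping -6 TEREDO-ADDRESS-OF-TARGET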

    Read the article

  • Why does a document in Word 2007 stop recognizing the mouse after the document loses focus?

    - by alt234
    When I open a document in Word 2007, everything works fine: I can edit, highlight text, etc. However, the instant Word loses focus, when I focus back the document doesn't recognize anything the mouse does. The tabbed menu at the top seems to recognize the mouse but the document itself does not. I can scroll through via the scroll-wheel and I can type. However, typing just shows up where the mouse cursor last was before focus was taken away. I've tried clearing some Word data registry keys. I've also found that some Word add-ins can cause problems. LaserFiche is one I see mentioned a lot. As far as I can tell I have no add-ins though. Any ideas? It's crazy-annoying.
    UPDATE:
      - Word is the only program that has this problem.
      - Typically I have Toad (Oracle DB management app), an XP virtual machine with various apps running on it, Skype, Google Talk, and maybe a handful of other programs open at any given time... Windows Media Player, Outlook.
      - Yes, this happens even if nothing else is running. From a fresh restart as well.
      - I'm running Vista 64 with SP1.
      - According to Windows Update, I have the latest of everything.
    This has been happening for a couple of months now. I just never took the time to look into it because I usually never have to use Word.

    Read the article

  • Converting a Windows 2003 server

    - by Jim Bass
    We have a legacy database system based upon MS SQL running on Windows Server 2003. The client software will only run on Windows XP. We have recently had success converting a client into a virtual machine and running it under Fusion on Mac minis. So far, it is working incredibly well. So well, in fact, that we are now considering trying to convert the server to a virtual machine. This raises several questions, though:
      1. The server uses a RAID array. Does the VM virtualize the RAID array? I only ask because in my experience Windows products don't like it when you change core hardware.
      2. Is there any reason why running SQL Server on a virtual machine won't work? It will be up 24/7.
      3. Is there a different converter for servers?
      4. Will I have to track down the licensing for MS SQL and Server 2003, or will they come across OK?
      5. The company that designed the software is no longer in business. There is some fear that the software is somehow tied to the hardware configuration. We bought the hardware, but their engineers came out and configured the system. Will the virtual machine be able to spoof particular chip sets?
    Thanks! Jim Bass

    Read the article

  • Accessing clearcase view drive from virtual machine is slow

    - by PermanentGuest
    I have a Windows XP virtual machine running under a Windows XP host.
    On the host: ClearCase 7.1.1.2 is installed. I have a dynamic view mapped onto some drive. The view has a certain VOB/directory structure where my application DLLs from the nightly build and config files are stored. I run my application on the host machine, which uses the DLLs and config files from the VOB, and everything runs smoothly. Now I want to move this set-up to a virtual machine.
    On the guest: I'm running the guest with VMware Player. I don't want to install ClearCase on it, as I don't want to expose this machine to the network. The network setting in the guest is 'host-only'. I have mapped the host's ClearCase view drive as a shared folder and I'm able to access this drive from the virtual machine. Also, the application is running. However, the problem is that access to the ClearCase drive from the virtual machine is very slow. I can experience this from Windows Explorer. Due to this, starting my application takes several seconds in the virtual machine, while on the host it comes up pretty fast.
    My question is: is there any way to speed up the performance? I have managed to copy some of the DLLs which don't change frequently to the virtual machine to improve the performance. However, there are still a lot of DLLs which have to be taken from the ClearCase drive as they change frequently.
    VMware Player version: VM Player 3.0.1 build-227600. Both guest and host: Windows XP Service Pack 3. Host ClearCase: ClearCase 7.1.1.2.
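
    As a sketch of extending the partial workaround already mentioned (copying the stable DLLs into the guest), robocopy can keep a local mirror of a view directory in sync; the drive letter and paths are hypothetical, and on XP robocopy comes from the Resource Kit rather than being built in:

      rem Z: = the shared ClearCase view drive as seen in the guest; paths are made up
      robocopy Z:\myvob\app\dlls C:\local\dlls /MIR /R:1 /W:1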

    Read the article

  • Merely installing PHP5 causes my AWS Ubuntu server to die minutes later from a massive CPU spike

    - by Mark Amery
    I have an AWS server with Ubuntu 11.04 as the OS that is running an Apache2 webserver (incidentally Python-based and using Django). We recently needed to add support for php5 to let us use a third party PHP library (incidentally for serving minified versions of js and css files). However, for no reason any of us can discern, if we simply run sudo apt-get install php5 on the server, then the install appears to finish successfully but, without us taking any further action (including not yet running sudo apt-get install libapache2-mod-php5, which I think would be the next step for us if everything worked), or actually running any PHP scripts on the server, a few minutes later the server becomes impossible to connect to, and looking at the 'Monitoring' tab for the server in the EC2 Management Console reveals that a while after the installation, CPU usage spikes to 100% and stays there permanently (until we reboot the server from the AWS Console). After rebooting, the server also reliably dies within a few (between 0 and 10) minutes. We restored the server to a pre-PHP state from an AMI Image, observed that it was stable, and then tried installing PHP5 again and observed the server die in exactly the same way, so we're pretty much certain that installing PHP5 is what causes the symptoms. What on earth could be causing this behaviour, and how can we get PHP installed on the server without it dying?

    Read the article

  • Load is 0, yet site crawls (sometimes). What gives?

    - by Yegor
    I have a ~1.5-2mil page views per day site running on 2 servers. One for MySQL, the other for everything else. The MySQL box has a load of 3, the frontend is usually 0.0-0.1. Both are dual quad core with 8GB RAM running SAS drives in RAID 5. The CPU is idle for the majority of the time, iowait is non-existent. I'm running nginx, memcache, and the site is built on PHP. Half the time everything runs perfectly, while at other times it lags something severe, when it takes 10-15 seconds for a page to load. Page execution time is always super low, but it seems to hang, waiting for something before it actually loads the page. What's even more weird is that it only happens to 1 file on the site (but it's the one that's most commonly accessed, that actually loads the content on the site). Other pages are super fast at all times, even when it takes 15 seconds to load actual content. I have the nginx_stats plugin installed, and if I monitor it, the lag spikes happen when the write column starts going above 100, and it frequently does... all the way to 500-1000. It does so at totally random times... not when traffic is heavy... it can do this in the middle of the night, and work perfectly at 5pm when traffic is at its highest. Any ideas?

    Read the article

  • LXC, Port forwarding and iptables

    - by Roberto Aloi
    I have an LXC container (10.0.3.2) running on a host. A service is running inside the container on port 7000. From the host (10.0.3.1, lxcbr0), I can reach the service:
      $ telnet 10.0.3.2 7000
      Trying 10.0.3.2...
      Connected to 10.0.3.2.
      Escape character is '^]'.
    I'd love to make the service running inside the container accessible to the outside world. Therefore, I want to forward port 7002 on the host to port 7000 on the container:
      iptables -t nat -A PREROUTING -p tcp --dport 7002 -j DNAT --to 10.0.3.2:7000
    Which results in (iptables -t nat -L):
      DNAT tcp -- anywhere anywhere tcp dpt:afs3-prserver to:10.0.3.2:7000
    Still, I cannot access the service from the host using the forwarded port:
      $ telnet 10.0.3.1 7002
      Trying 10.0.3.1...
      telnet: Unable to connect to remote host: Connection refused
    I feel like I'm missing something stupid here. What things should I check? What's a good strategy to debug these situations? For completeness, here is how iptables is set up on the host:
      iptables -F
      iptables -F -t nat
      iptables -F -t mangle
      iptables -X
      iptables -P INPUT DROP
      iptables -P FORWARD ACCEPT
      iptables -P OUTPUT ACCEPT
      iptables -A INPUT -p tcp --dport 22 -j ACCEPT
      iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
      iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      iptables -t nat -A POSTROUTING -o lxcbr0 -j MASQUERADE
      iptables -t nat -A PREROUTING -p tcp --dport 7002 -j DNAT --to 10.0.3.2:7000
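
    On the debugging-strategy question, two quick checks usually show where things break; a sketch (note that connections initiated on the host itself traverse the nat OUTPUT chain rather than PREROUTING, so testing the forwarded port from a separate machine gives a truer picture):

      # show NAT rules with packet/byte counters; a counter stuck at 0 means the rule never matched
      iptables -t nat -L -n -v
      # watch what actually reaches the container side while retrying the connection
      tcpdump -ni lxcbr0 tcp port 7000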

    Read the article

  • "Access is Denied" when executing application from Command Prompt

    - by xpda
    Today when I tried to run an old DOS utility from the XP Command prompt, I got the message "Access is Denied." Then I found that most of the DOS utilities would not run, even though I have "full control" over them. They worked just fine a few weeks ago, and I have not made any OS changes other than Windows Upgrades. Then I tried running edlin.exe and edit.com from the Windows\system32 folder. Same result - "Access is Denied." I tried running these applications from Windows Explorer and got the message "Windows cannot access the specified device, path, or file. You may not have the appropriate permissions to access the item." I am logged in as a member of Administrators and have full control over these files. I tried logging in as THE Administrator, with no change. I checked the security settings on the files, and have full control over all of them. I have tried copying the files to different drives, booting in safe mode, and running without antivirus and firewall, all with no change. Does anybody know what could cause this?

    Read the article

  • Samba share will not connect (was working yesterday)

    - by David Gard
    I have a CentOS webserver with a Samba share set up (\\webserver\websites). I was connected to this share just yesterday without issue, but today my Windows 8 PC will not connect to it. I've also tried making a connection from Windows 7 and Windows XP, all without success. I initially tried restarting my computer, but that did not work. I then tried restarting the Samba service on the webserver (service smb restart), and when that failed I restarted the webserver. All of that was to no avail, and I still cannot connect to the share. The webserver is contactable from my PC (and the others I tried), as the websites it hosts work fine and I'm able to PuTTY to the server. When connected to the webserver, I can see that Samba is running by using service smb status:
      smbd (pid 4685) is running...
      nmbd (pid 4688) is running...
    Can anyone please help me to get this share working? Here is my full Samba config (/etc/samba/smb.conf):
      [global]
      workgroup = MYGROUP
      server string = Samba Server %v
      log file = /var/log/samba/log.%m
      max log size = 50
      security = user
      encrypt passwords = yes
      socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
      local master = no

      [websites]
      comment = Websites
      browseable = yes
      writable = yes
      path=/var/www/html/
      valid users = dgard
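
    A short sketch of server-side checks that usually narrow this down, run on the webserver itself (the share and user names are taken from the config above):

      # validate smb.conf and dump the effective settings
      testparm -s
      # try the share locally, taking the Windows clients and the network path out of the equation
      smbclient //localhost/websites -U dgard
      # check that the SMB ports are listening and not filtered
      netstat -tlnp | grep -E ':139|:445'
      iptables -L -n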

    Read the article
