Search Results

Search found 1504 results on 61 pages for 'nearly lunchtime'.

Page 39/61 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • Webserver max CPU when Apache and MySQL are run together

    - by Tim
    This website has been running fine without issues. Recently it went down. After some investigation, it looks like the combo of MySQL and Apache brings the box to its knees. Apache can run fine serving static web pages, and MySQL can run fine when the website isn't working. As soon as the website is enabled with SQL running, the CPU on the box remains at 100%. Picture of the usage: http://i.stack.imgur.com/GG2NC.png I've checked the SQL database for errors and tried tuning nearly every parameter in Apache's/SQL's conf files for performance. The server is a Red Hat-based box running the latest software packages. Any help/suggestions are welcome. Doing an strace on a high-CPU Apache process, I see the following:

        read(14, "", 8192) = 0
        close(14) = 0
        socket(PF_FILE, SOCK_STREAM, 0) = 14
        fcntl64(14, F_SETFL, O_RDONLY) = 0
        fcntl64(14, F_GETFL) = 0x2 (flags O_RDWR)
        connect(14, {sa_family=AF_FILE, path="/var/lib/mysql/mysql.sock"...}, 110) = 0
        setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\2003\341\1\0\0\0\0", 8) = 0
        setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "\2003\341\1\0\0\0\0", 8) = 0
        setsockopt(14, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
        setsockopt(14, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0

    Here is what I see from a MySQL process:

        futex(0x86fc9a4, FUTEX_WAIT_PRIVATE, 39, NULL) = 0
        futex(0x86fc734, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
        futex(0x86fc734, FUTEX_WAKE_PRIVATE, 1) = 0
        gettimeofday({1301465020, 141613}, NULL) = 0
        clock_gettime(CLOCK_REALTIME, {1301465020, 141699633}) = 0
        futex(0x8707a64, FUTEX_WAIT_PRIVATE, 1, {4, 999913367}) = 0
        futex(0x8707a40, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
        futex(0x8707a40, FUTEX_WAKE_PRIVATE, 1) = 0
        exit_group(0) = ?
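
    A hedged first diagnostic step (a sketch; it assumes shell access, and the log path is an assumption - the directive is spelled slow_query_log on MySQL 5.1+): see what MySQL is actually executing while the CPU is pegged, and whether slow queries pile up only when the site is enabled.

        # Watch the live query list while the box is at 100%
        mysqladmin -u root -p processlist

        # Enable the slow query log in /etc/my.cnf under [mysqld],
        # restart mysqld, then watch what accumulates:
        #   log-slow-queries = /var/log/mysql-slow.log
        #   long_query_time  = 2
        tail -f /var/log/mysql-slow.log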

  • Software way to cool down an old MacBook Pro

    - by notMacBookProSuperUser
    Hi all. First, a little background: I've got lots of computers, including Linux PCs, two MacBook Pros and a Mac mini. My concern is with my 'old' MacBook Pro (Core Duo). It really does overheat, and the warranty is long void. Years ago (I'd say 2.5 years ago or so) it overheated so badly one day that the battery inflated from the heat. I got a new battery for free, but the machine still gets incredibly hot (much hotter than any other computer I've got: my newer Core 2 Duo MacBook Pro doesn't get nearly as hot as the old one). It's really a pain because I use my old MBP when I'm in front of the TV, with it on my lap, and it can really become unbearable. I don't want to open that old MBP. On Linux I can force a new CPU 'governor' that decides how the CPU is allowed to operate: it can be 'on demand', 'always max speed', 'always speed x', etc. Does the same exist under Mac OS X? Is there a way, say if a 1.86 GHz Core Duo can run at 1.6 GHz, to ask Mac OS X: "never run this CPU above 1.6 GHz"?

  • My client's solution of a Windows SBS 2011 VM on an Ubuntu host and VirtualBox is pinning the host CPU

    - by Scott Stamp
    Here's my situation: I've got a client hosting two servers (one a VM), with the host providing VMware Zimbra and the other running Windows Small Business Server 2011. Unfortunately, the person before me configured this setup as follows.

    Host:

    - Ubuntu Desktop Edition 10.04 (I know, again, not my choice) running VMware Zimbra
    - 8GB of RAM
    - On-board RAID1 of two 320GB Seagate Barracuda drives for the OS
    - Software RAID5 of four 500GB WD Caviar Black drives on MDADM for bulk storage (sorry, I don't know the model #)
    - A relatively competent quad-core Intel Core i7 CPU from the Nehalem architecture (not suspicious of this as the bottleneck)

    Guest:

    - Windows Small Business Server 2011
    - 4GB of RAM
    - Host-equivalent CPU allocation
    - VDI file for the OS hosted on the on-board RAID, VDI file for storage hosted on the on-board RAID

    For some reason when running, the VM locks up when sitting nearly idle, and the VirtualBox process reports values of 240%+ in top (how is that even possible?!). Anyone have any ideas or suggestions? I'm totally stumped on this one. Happy to provide whatever logs you'd like to take a look at. Ideally I'd drop VirtualBox and provision this with VMware Workstation, but the client has objected to the (very nominal) costs involved. If hardware needs to be purchased to help, it will be, but we're considering upgrades a last resort at this time. Thanks in advance! *fingers crossed*
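
    One mitigation worth trying (a sketch; "SBS2011" is a placeholder for the real VM name, and --cpuexecutioncap needs a reasonably recent VirtualBox from the 4.x series): give the guest a bounded share of the host instead of a host-equivalent allocation, so a spinning guest can't take the whole box with it.

        # Find the exact registered VM name first
        VBoxManage list vms

        # With the VM powered off: 2 vCPUs, each capped at 80% of a core
        VBoxManage modifyvm "SBS2011" --cpus 2 --cpuexecutioncap 80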

  • Multi-WAN bonding across different media

    - by Tom O'Connor
    I've recently been thinking again about a product that Viprinet provide: basically they've got a pair of routers, one that lives in a datacentre (their VPN Multichannel Hub) and the on-site hardware (their VPN multichannel routers). They've also got a bunch of interface cards (like HWICs) for 3G, UMTS, Ethernet, ADSL and ISDN adapters. Their main spiel seems to be bonding across different media. It's something that I'd really like to use for a couple of projects, but their pricing is really quite extreme: the hub is about 1-2k, the routers are 2-6k, and the interface modules are 200-600 each. So, what I'd like to know is: is it possible, with a couple of stock Cisco routers, 28xx or 18xx series, to do something similar and basically connect a bunch of different WAN ports, but have it all presented neatly as one channel back to the internet, with seamless (or nearly seamless) failover if one of the WAN interfaces should fail? Basically, if I got 3x 3G-to-Ethernet modems, each on a different network, I'd like to be able to load-balance/bond across all of them, without having to pay Viprinet for the privilege. Does anyone know how I'd go about configuring something like this for myself, based around standard protocols (or vendor-specific ones), but without actually having to buy the Viprinet hardware?
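
    For comparison, the poor man's version on a Linux box (a sketch; interface names and gateways are assumptions) is a multipath default route. Note this balances per connection rather than bonding a single flow, and failover is only as fast as the dead route gets pulled, so it is not Viprinet-equivalent:

        # Three 3G-to-Ethernet modems, each with its own gateway
        ip route replace default scope global \
            nexthop via 192.168.1.1 dev eth1 weight 1 \
            nexthop via 192.168.2.1 dev eth2 weight 1 \
            nexthop via 192.168.3.1 dev eth3 weight 1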

  • SSH sometimes screws up connection when terminal overflows?

    - by SeveQ
    I've got a problem with SSH on a Debian Lenny based server (it's a vHost within a Xen environment, booted on a Xen kernel). I hope someone can help me with this. The SSH connection seems to get screwed up frequently when the terminal overflows (new lines beyond the bottom of the terminal, usually forcing it to scroll). The connection gets lost but not regularly disconnected. It nearly always happens when I do the following:

    - an existing SSH connection gets disconnected (regularly)
    - I tell PuTTY to re-establish the connection
    - the login prompt appears at the very bottom of the PuTTY terminal window
    - I enter my login name and press the Enter key
    - I'm asked for the password, I enter it, press the Enter key and BOOM! Nothing more happens, and I have to reconnect again.

    So it is reproducible. I'm not totally sure whether the connection crashes before or after I enter the password. Furthermore, it also happens when there is a lot of text to be displayed (for example, when I compile something or do an ls -l on a directory with many entries). Using 'screen', however, helps to reduce the frequency of occurrence, but doesn't solve the problem completely. Its occurrence is independent of which terminal software I use; I mostly use PuTTY, but it also happens with other clients. I certainly hope somebody can help me solve this problem. Thanks in advance!

    //edit: I've just made a Wireshark trace of the SSH connection and there is nothing, I repeat, nothing different between the working and the failing connection (at least aside from frame numbers, ports and times that obviously can't be equal). This leads me to the assumption that the error has to happen on the server's side.
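
    One variable worth eliminating (a sketch, assuming OpenSSH on the Debian side; it only helps if an idle timeout or flaky NAT is eating the session rather than the overflow itself): keepalives on both ends.

        # Client side (~/.ssh/config); in PuTTY the equivalent setting is
        # Connection > "Seconds between keepalives"
        printf '%s\n' 'Host *' \
            '    ServerAliveInterval 30' \
            '    ServerAliveCountMax 4' >> ~/.ssh/config

        # Server side, in /etc/ssh/sshd_config (then restart sshd):
        #   ClientAliveInterval 30
        #   ClientAliveCountMax 4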

  • Why is my connection slow?

    - by Jay R.
    I have a Dell Precision T5400 with a Broadcom 1Gb onboard NIC. For some strange reason, when I access machines on our local network, the best I can get is around 125KB/s download speed. My laptop, which has a 10/100Mb NIC onboard, usually gets around 300KB/s or better from the same network resource. Both machines are plugged into the same 1Gb switch, which connects to our local network wall jack at 100Mb half duplex. There is also a printer plugged into the same switch at 100Mb full. The resource I'm using for the test is a 30MB zip file copied from a Jetty webserver that is running as part of a CruiseControl installation. The CruiseControl installation is running Windows XP with full real-time antivirus and Altiris patch management and inventory running; that stuff on its own is eating some of the download speed. I've seen the laptop reach into the multiple-MB/s download speeds before, but the desktop never seems to get past 125KB/s to 130KB/s. In Windows XP, before I upgraded the driver in the desktop, it was that slow. In Fedora, it is still slow, even though it appears to be using the same driver version as the upgraded Windows driver. The upgraded Windows driver is faster, but still not nearly as fast as the laptop. What gives? Any insight to improve the situation would be appreciated. Could it be that the Broadcom board just isn't that good, or that the driver in Linux is just not as good as the Windows one?
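
    A cheap check on the Fedora side (a sketch; eth0 is an assumption): a path hard-set to 100Mb half duplex is a classic trigger for autonegotiation trouble on gigabit NICs, and a duplex mismatch produces exactly this kind of capped, error-heavy throughput.

        # See what the Broadcom NIC actually negotiated (speed/duplex),
        # and check for mounting error/collision counters
        ethtool eth0
        ip -s link show eth0

        # If the upstream port is hard-set, hard-set the NIC to match it
        ethtool -s eth0 speed 100 duplex half autoneg off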

  • Timeout option not working on EFI Windows 7/Windows 8 dual-boot machine

    - by Guenter
    I have a Gigabyte GA-Z77M-D3H mobo and installed Windows 8 Pro and Windows 7 Ultimate on two SSDs (in that order) in EFI mode. Now when I start my computer, I get the Windows boot menu (text mode) with the two OSes to choose from, but I have to manually press RETURN to have the computer boot into the chosen OS. Even if I wait an hour, no default action takes place. Using bcdedit (from either of the OSes) I can successfully change the timeout value, and it shows up in the bcdedit (no params) output. But it doesn't fire... Here is my current bcdedit output (headers are in German, but the values should be readable):

        Windows-Start-Manager
        ---------------------
        Bezeichner               {bootmgr}
        device                   partition=O:
        path                     \EFI\Microsoft\Boot\bootmgfw.efi
        description              Windows Boot Manager
        locale                   de-DE
        inherit                  {globalsettings}
        integrityservices        Enable
        default                  {default}
        resumeobject             {5ad2802c-c60a-11e2-acdb-80331c501b11}
        displayorder             {default}
                                 {current}
                                 {5ad2802a-c60a-11e2-acdb-80331c501b11}
                                 {5ad28028-c60a-11e2-acdb-80331c501b11}
                                 {5ad28029-c60a-11e2-acdb-80331c501b11}
        toolsdisplayorder        {memdiag}
        timeout                  5
        displaybootmenu          Yes

        Windows-Startladeprogramm
        -------------------------
        Bezeichner               {default}
        device                   partition=W:
        path                     \Windows\system32\winload.efi
        description              Windows 7
        locale                   de-DE
        inherit                  {bootloadersettings}
        recoverysequence         {5ad2802e-c60a-11e2-acdb-80331c501b11}
        recoveryenabled          Yes
        osdevice                 partition=W:
        systemroot               \Windows
        resumeobject             {5ad2802c-c60a-11e2-acdb-80331c501b11}
        nx                       OptIn

        Windows-Startladeprogramm
        -------------------------
        Bezeichner               {current}
        device                   partition=C:
        path                     \Windows\system32\winload.efi
        description              Windows 8
        locale                   de-DE
        inherit                  {bootloadersettings}
        recoverysequence         {5ad28033-c60a-11e2-acdb-80331c501b11}
        integrityservices        Enable
        recoveryenabled          Yes
        isolatedcontext          Yes
        allowedinmemorysettings  0x15000075
        osdevice                 partition=C:
        systemroot               \Windows
        resumeobject             {5ad28031-c60a-11e2-acdb-80331c501b11}
        nx                       OptIn
        bootmenupolicy           Standard
        hypervisorlaunchtype     Auto

    (This output is from Win8; the Win7 output looks nearly identical.) If the problem comes from a bad EFI Windows boot manager installation, can it be fixed without losing my Windows installations?
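
    For reference, a hedged sketch of things to try from an elevated prompt (the bootmenupolicy change is a commonly suggested workaround for EFI boot-menu timeout problems, not a verified fix):

        :: Re-assert the timeout directly on the boot manager object
        bcdedit /set {bootmgr} timeout 5

        :: Windows 8 prefers its graphical menu; forcing the legacy text
        :: menu changes which code path handles the menu and its timeout
        bcdedit /set {default} bootmenupolicy legacy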

  • Copy-paste speed very slow for a large number of tiny files on Windows but not on Linux

    - by Arno2501
    I've got this folder which contains 15'000 tiny images (around 400 bytes each). If I copy-paste this folder on my laptop (Windows 7, i7 latest gen, superfast SSD) it takes about 30 seconds (yes, for 7 megs!!!); the average transfer rate is 400 KBytes/second, which is so slow. I mean, my usual transfer rate is more like hundreds of MBytes per second!!! I get the same problem on my servers (Windows 2003, 2008/R2) and on every Windows box that I could get my hands on. On the other hand, if I do the same on a Linux box (Debian-based, ext3 FS) which runs on the same SAN as all the Windows servers I've tested, it's nearly instantaneous!!! I'm pretty sure the size/number of the files may stress such a filesystem more than another, but such differences!? Why is that? Why is it so slow on the Windows boxes (more than 30 sec for 7 MB) and so fast on the Linux ones (one sec or so)? (I mean, this was not a hardlink that I created; it was a true copy.) Is this normal behaviour or something unusual?
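
    A quick experiment that separates per-file overhead from raw throughput (a sketch; the paths are placeholders, and /MT needs the Windows 7 / 2008 R2 robocopy):

        :: Multi-threaded copy with per-file console output suppressed;
        :: if this is dramatically faster, the cost is per-file overhead
        :: (antivirus hooks, metadata, small sync writes), not the disk
        robocopy C:\src\tinyimages D:\dst\tinyimages /E /MT:32 /NFL /NDL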

  • Why do you use a 3PAR SAN? [closed]

    - by Starfish
    If you use a 3PAR SAN, I’d like to hear what you think about it, particularly compared to the HP EVA. What do you see as its advantages over other SANs like the EVA? What’s so special about the ASIC? We had HP quote us an EVA P6500 and a 3PAR V400 with equivalent storage, and the 3PAR was nearly twice the cost. My site has two EVA SANs with a combined capacity of ~80 TB. We want to replace the older and larger of the two. We’ve been looking at the EVA and the 3PAR to see which would be a better fit for us. I’m struggling to understand how the 3PAR differs from the EVA from a practical technical standpoint. When I read the sales literature and speak with the HP sales engineers, they spend a lot of time talking about how the 3PAR is better because of its ASIC. It’s ASIC this and ASIC that, but when I press them on how a 3PAR with thin provisioning is better than an EVA with thin provisioning, I can’t get a straight answer. Meanwhile, one of my colleagues, who has more say regarding which SAN we get, is enamored with the 3PAR, and he can’t explain clearly to me why he wants it over the EVA. Our needs are pretty simple: we have 10 servers running VMware and ~100 VMs. We use VMware’s thin provisioning currently, but we would like to start using thin provisioning on the new SAN. We don’t have a need for SSDs or migration between storage tiers. We plan on having FC or SAS drives for our most-used data and SATA/FATA drives for the lesser-used data, which is how we have the EVAs configured. We also do not need any SAN-level snapshotting or replication.

  • Error when installing Wubi on Windows 7

    - by P'sao
    I'm installing Ubuntu on Windows 7 (Wubi 11.10). When it's nearly done, it gives me this error in the log file:

        Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size]
        Traceback (most recent call last):
          File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
          File "\lib\wubi\backends\win32\backend.py", line 461, in expand_diskimage
          File "\lib\wubi\backends\common\utils.py", line 66, in run_command
        Exception: Error executing command
        >>command=C:\Users\P'sao\AppData\Local\Temp\pyl10D2.tmp\bin\resize2fs.exe -f C:\ubuntu\disks\root.disk 17744M
        >>retval=1
        >>stderr=
        >>stdout=resize2fs 1.40.6 (09-Feb-2008)
        Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size]
        10-25 20:31 DEBUG TaskList: # Cancelling tasklist
        10-25 20:31 DEBUG TaskList: # Finished tasklist
        10-25 20:31 ERROR root: Error executing command
        >>command=C:\Users\P'sao\AppData\Local\Temp\pyl10D2.tmp\bin\resize2fs.exe -f C:\ubuntu\disks\root.disk 17744M
        >>retval=1
        >>stderr=
        >>stdout=resize2fs 1.40.6 (09-Feb-2008)
        Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size]
        Traceback (most recent call last):
          File "\lib\wubi\application.py", line 58, in run
          File "\lib\wubi\application.py", line 132, in select_task
          File "\lib\wubi\application.py", line 158, in run_installer
          File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
          File "\lib\wubi\backends\win32\backend.py", line 461, in expand_diskimage
          File "\lib\wubi\backends\common\utils.py", line 66, in run_command
        Exception: Error executing command
        >>command=C:\Users\P'sao\AppData\Local\Temp\pyl10D2.tmp\bin\resize2fs.exe -f C:\ubuntu\disks\root.disk 17744M
        >>retval=1
        >>stderr=
        >>stdout=resize2fs 1.40.6 (09-Feb-2008)
        Usage: /cygdrive/c/Users/Psao/AppData/Local/Temp/pyl10D2.tmp/bin/resize2fs.exe -f C:/ubuntu/disks/root.disk 17744M [-d debug_flags] [-f] [-F] [-p] device [new_size]

    Can someone help me?
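
    One detail that stands out (an observation, not a confirmed fix): the command Wubi runs lives under C:\Users\P'sao\AppData\Local\Temp, and the apostrophe in that user name is a plausible reason the command line gets mangled (note that the Usage echo shows the path without it). A hedged workaround is to point the temp directory at an apostrophe-free path before launching the installer:

        :: From the same cmd.exe session you start wubi.exe from
        mkdir C:\wubitmp
        set TEMP=C:\wubitmp
        set TMP=C:\wubitmp
        wubi.exe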

  • How to use a home network patch panel?

    - by Torben Gundtofte-Bruun
    I'm planning a home network in a house that's not built yet. One recommendation is to add network sockets in various rooms and have them all end in a central place, where it all connects using a network switch. So far so good. Another recommendation says not to connect everything directly to the switch, but to a patch panel which in turn is connected to the switch. I'm unsure why this is good. Is there any practical advantage to using a patch panel if you're not planning to re-wire things very often? How does a patch panel actually work? Let's say it has 24 ports. Does it have another 24 ports on the backside that go to the switch, or what? Wikipedia isn't helpful on this. Clarification: I am planning to run network cables through conduits inside the walls, terminated with network sockets in the wall (as opposed to having just conduits and long regular network cables that have a normal plug on each end). Going by RedGrittyBrick's answer, a patch panel is nearly unavoidable in that case.

  • Setting the default permissions for files uploaded via FTP to a directory

    - by Kerri
    Disclaimer: I'm just a web designer/coder, and server admin stuff is my weakest point of them all. So be easy on me (and very specific). I'm using a simple CMS (Unify) on a site, where part of the functionality is that the client can upload files to a specified directory (using FTP). The permissions for the upload directory are set to 755. But when files are uploaded through the interface, they are uploaded with permissions set to 640 (instead of 644), so site visitors cannot access the files. When I emailed the CMS's support about this, they told me that it was a server setting, and that I need to make sure files uploaded through FTP are set to 644. Makes perfect sense, but I have no idea how to do this. Any help would be greatly appreciated. This site is a shared site hosted by Network Solutions (Unix), so my access options are limited. I can edit .htaccess files and php.ini, but that's about all I have access to. It appears I can't even log on via shell. ETA: 11/11/2010 Thanks all. I was able to work around this problem by setting up the CMS's settings in a different way. I'd be interested in following up on Nick O'Niel's suggestions, because I think he's on the right track, but unfortunately I can't access the necessary files on this particular server. So, anyway, I'm leaving this open, since the original question isn't exactly resolved. Unfortunately, I probably can't put a correct answer to the test, since the shared server in question has nearly all of its config files tightly locked down.
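
    For background, a minimal sketch of the mechanism (runnable on any Unix shell): 640 vs. 644 is exactly one umask bit. The FTP daemon applies its configured umask when it creates each file, which is why support calls it a server setting.

        # New files start from 666; a umask of 022 yields 644 (world-readable)
        umask 022; touch readable; ls -l readable

        # A umask of 026 yields 640 - the broken permission described above
        umask 026; touch unreadable; ls -l unreadable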

  • How to export SQL Server data from a corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005 box, and that this was causing the backups to fail. By the time they had realised this, the nightly backup was already old, so we were not able to just restore the backup on another server. The database is still running and being used constantly. However, DBCC CHECKDB fails. The SQL Server backup task also fails, Copy Database fails, and the Export Data Wizard fails. However, it seems all the data can be read from the tables (i.e. using bcp, etc.). Another observation I have made is that the transaction log is nearly double the size of the database. (Does that mean all the changes aren't being written to the MDF?) What would be the best plan of attack to get the database to a state where backups are working and the data is safe?

    - Take the database offline and use the MDF/LDF to somehow create the database on another SQL Server?
    - Export the data from the database using bcp. Create the database on another SQL Server (use the Generate Scripts function on the corrupt DB to create the schema on the new DB) and use bcp again to import the data.
    - Some other option that is the right course of action in this situation?

    The IT manager says the data is safe because, if the server fails, the data can be restored from the MDF/LDF. I'm not sure, so I insisted that we start exporting the data each night as a failsafe (using bcp, for example). IT are also having issues on the hardware side of things, as supposedly the disk error is on a virtualized disk and can't be rebuilt like a normal RAID array (or something like that). Please excuse my use of incorrect terminology and incorrect assumptions about how SQL Server operates. I'm the application developer and have been called in to help (as it seems IT know less about SQL Server than I do). Many thanks, Amit
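
    A hedged sketch of that nightly bcp failsafe (server, database and table names are placeholders; -T uses Windows authentication and -n keeps SQL Server's native types):

        :: Export one table in native format
        bcp ProdDB.dbo.Orders out D:\export\Orders.dat -n -S PRODSQL01 -T

        :: Re-import into a schema scripted out via Generate Scripts
        bcp ProdDB.dbo.Orders in D:\export\Orders.dat -n -S RECOVERY01 -T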

  • Nginx 502 Bad Gateway: It just won't stop

    - by David
    I have the same problem that most people seem to have with nginx: 502 Bad Gateway errors. They are intermittent, but typically happen more than once per session, which means my users are probably running into them nearly every time they use the app. I've tried adjusting fastcgi_buffers and fastcgi_buffer_size (in both directions) to no avail. I've tried various other things with the configuration file, but nothing seems to work. Here's my config (note that I've stripped away most of the things I've tried, since they didn't work and I didn't want to bloat the file with a bunch of unrelated directives):

        server {
            root /usr/share/nginx/www/;
            index index.php;

            # Make site accessible from http://localhost/
            server_name localhost;

            # Pass PHP scripts to PHP-FPM
            location ~ \.php {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
            }

            # Lock the site
            location / {
                auth_basic "Administrator Login";
                auth_basic_user_file /usr/share/nginx/.htpasswd;
            }

            # Hide the password file
            location ~ /\. {
                deny all;
            }

            client_max_body_size 8M;
        }

    I'm running a small Rackspace cloud server, which should be plenty for handling an app with a small user base...
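
    Since a 502 means PHP-FPM refused or dropped the connection, the other half of the picture is the FPM side (a sketch; the log and pool-config paths vary by distro, and the numbers are illustrative):

        # Watch for "max children reached" or worker crashes while reproducing
        tail -f /var/log/php5-fpm.log

        # If children are exhausted, raise the pool limits
        # (e.g. in /etc/php5/fpm/pool.d/www.conf):
        #   pm = dynamic
        #   pm.max_children = 20
        #   pm.min_spare_servers = 2
        #   pm.max_spare_servers = 6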

  • What steps should I take to debug this non-starting hvm virtual machine?

    - by Ophidian
    I have a dom0 machine running CentOS 5.4 with all the latest updates, using Xen as my hypervisor. I am using Xen in part because this machine was set up prior to KVM being included in RHEL, and in part because KVM's network bridging configuration is not nearly as simple as Xen's. The dom0 machine is headless and I do all of my VM management via virsh from the command line. I have two HVM domUs:

    - A web server running CentOS 5.4
    - A mail server running Gentoo

    Both VMs are backed by LVs on the dom0 but do not use LVM in the domU. Both have virtually identical libvirt configurations (differing by expected things like name, UUID, NIC MAC, VNC port, etc). The web server domU (WSdomU hereafter) does not start since applying the most recent kernel update (kernel-xen-2.6.18-164.15.1.el5.x86_64 and kernel-2.6.18-164.15.1.el5.x86_64 for the dom0 and WSdomU respectively). By 'not start' I mean it appears to be running, but it does not use any CPU cycles, does not bring up a graphical console, and does not respond on the network. The WSdomU is listed as "no state" rather than the normal "running" or "blocked" in xentop. The mail server domU starts fine and functions normally. Here are the steps I have taken so far that did not solve the problem:

    - Reboot the dom0 to see if things come up on their own
    - Check xen dmesg on dom0
    - Check xend logs (a cursory viewing did not show anything blatant; specific suggestions of things to look for would be appreciated)
    - Attempted to connect to the WSdomU's graphical (VNC) console from the dom0
    - Shut down the mail server domU and attempt to start the WSdomU
    - Check the SELinux labels on the backing LVs (they're the same)
    - Set SELinux to permissive and attempt to start the WSdomU
    - Use virsh edit to try tweaking the WSdomU config
    - virsh undefine, reboot, virsh define the WSdomU config
    - dd the WSdomU LV to an .img file, copy it to my Fedora desktop and run it under KVM (works fine)

    What steps should I take next to debug this? I will edit in any additional configuration requested in the comments.
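
    A few more data points worth collecting before guessing further (a sketch; WSdomU stands in for the real domain name, and the qemu-dm log path is the usual RHEL5 location but may vary):

        # The hypervisor's view of the stuck domain
        xm list WSdomU
        xm dmesg | tail -50

        # libvirt's view, plus the HVM device-model log for that domain
        virsh dominfo WSdomU
        tail -50 /var/log/xen/qemu-dm-WSdomU.log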

  • IIS serving pages extremely slowly

    - by mos
    TL;DR: IIS 7 on WS2008R2 serves pages really slowly; everyone assumes it's because it's IIS and we should have gone with an Apache solution on Linux. I have no idea where to start debugging the problem. I work in a nearly all-MS shop with a bunch of fellow programmers who think Linux is the One True Way. Management recently added a Windows machine with IIS to serve Target Process (third-party agile system), but the site runs extremely slowly. Everyone, to a man, assumes it's because it's on IIS, and if only management would grow a brain and get some Linux servers in here, we could really start cleaning things up! ...Right. Everyone "knows" IIS isn't fit to serve .txt files. ...Well, as the only non-Microsoft hater in the bunch, I am apparently the only one who thinks maybe the Linux guy who hated being told to set up the IIS server may have screwed things up. I'd like to go fix it, but I don't have any clue as to where to start as I am not a sys admin. Help?

  • Site hanging in IIS 7 - how do I troubleshoot?

    - by Chris Foot
    I am currently having a problem with a Windows 2008 server running IIS 7. The server runs several sites, but only seems to have the issue with one particular site. Every so often, the whole server slows to a crawl, with nearly all requests timing out! Invariably, when we log in to take a look, there is always an IIS process using up around 90% CPU. Looking into the worker processes in IIS, there are usually one or two requests that have been running for a long time. They are always in the ExecuteRequestHandler state with ManagedPipeline as the module name, and the current ones I'm looking at have been running for 7686248 (what units is this in? it doesn't say). It is also not always the same page; in fact, we have seen at least 3 different pages listed under "url" when this has happened. It seems that the only way to bring the server back to life is to kill the 90% process! The site is running under .NET 4.0 and the code on it is very similar to other sites on the server which do not have the problem! How do I start troubleshooting this?

  • Why does Facebook Chat through XMPP protocol on Pidgin Portable not authorize?

    - by Sara Neff
    I heard you can use Facebook Chat on desktops now. That's awesome! What I didn't hear is that it is a pain in the butt! Not awesome! I've followed six nearly identical sets of instructions from six different websites, including the one that Facebook generates for you, to get Facebook Chat connected through Pidgin. It's the latest portable version, so from what I hear the plugin is out of the question. Whenever I try to connect, I get a message saying "Not Authorized" and buttons to either modify the account info or retry. NOTHING I have done has fixed this, and I can't find anything remotely useful anywhere. I am running Windows XP, and running Pidgin (portable) off of a flash drive. Someone please tell me what I have to do. I read about authorizing the chat on my actual Facebook page. I'd have tried that if I could find out how to do it, but if it's there, they hid it well. HELP?!

  • Mount a VHD on Mac OS X

    - by janm
    Is it possible (and how) to mount a VHD file created by Windows 7 in OS X? I found some information about how to do this on Linux. There is a FUSE fs, "vdfuse", which uses VirtualBox libs to mount filesystems supported by VirtualBox. However, I was unable to compile the package on OS X because nearly all the headers are missing, and I doubt that it would work anyway...

    EDIT #2: Okay, I got my hands dirty and finally compiled vdfuse (http://forums.virtualbox.org/viewtopic.php?f=26&t=33355&start=0) on OS X. As a starting point I used MacFUSE (http://code.google.com/p/macfuse/) and looked at the example file systems. This led me to the following build script:

        infile=vdfuse.c
        outfile=vdfuse
        incdir="your/path/to/vbox/headers"
        INSTALL_DIR="/Applications/VirtualBox.app/Contents/MacOS"
        CFLAGS="-pipe"

        gcc -arch i386 "${infile}" \
            "${INSTALL_DIR}"/VBoxDD.dylib \
            "${INSTALL_DIR}"/VBoxDDU.dylib \
            "${INSTALL_DIR}"/VBoxVMM.dylib \
            "${INSTALL_DIR}"/VBoxRT.dylib \
            "${INSTALL_DIR}"/VBoxDD2.dylib \
            "${INSTALL_DIR}"/VBoxREM.dylib \
            -o "${outfile}" \
            -I"${incdir}" -I"/usr/local/include/fuse" \
            -Wl,-rpath,"${INSTALL_DIR}" \
            -lfuse_ino64 \
            -Wall ${CFLAGS}

    You actually don't need to compile VirtualBox on your machine, just install a recent version of VirtualBox. So now I can partially mount VHDs: the separate partitions appear as block files Partition1, Partition2, ... on my mount point. However, Mac OS X does not include a loopback file system, and MacFUSE's loopback fs does not work with block files, so we need a loopback fs to mount the block files as actual partitions.

  • Windows 7 access denied to executables... by what?

    - by stijn
    Ever since I started using Windows 7 this problem has been bothering me. From time to time I see similar questions popping up on misc forums, but never did I see an answer. Here are two scenarios that nearly always reproduce it.

    The Explorer way:

    - with Explorer, navigate to a directory containing at least one exe file
    - go one directory up
    - immediately delete the directory just navigated to

    This yields a "Folder Access Denied" dialog stating "You need permission to perform this action / You require permission from Administrators to make changes to this folder", with the buttons Try Again and Cancel. Hitting Try Again never works immediately; waiting a minute or so and then clicking it again does work. (Note: if, in step 2, I wait a minute or more before going up one directory, the problem does not occur and the folder can be deleted.)

    The Visual Studio way:

    - build a project producing an exe file
    - run the executable, then close it
    - immediately build the project again (by changing a single character in a source file, for example)

    This yields: fatal error LNK1168: cannot open /path/to/the.exe for writing. (Note: if, in step 2, I wait a minute or more before building again, the problem does not occur.)

    Some specs:

    - happens on both Windows 7 32-bit and 64-bit, with VS2008/2010/2011
    - happens on 3 different machines
    - I do not have a virus scanner of any kind
    - I do have a bunch of services disabled, but nothing that prevents Windows from running normally; UAC is disabled as well
    - happens on any type of disk
    - I always use a user account that is in the Administrators group

    Obviously both scenarios are very similar and extremely reproducible. So I figured some process must have the file open for some reason and release it again later. However, using Sysinternals' handle -a, the exe file in question never shows up. (That is the correct way to use handle, right?) So while Explorer/VS are reporting they cannot access the file, handle.exe says it's not in use anywhere. This leaves me rather clueless, so I'm wondering if someone can come up with a solution: why does this happen, and how do I solve it?

  • Daisychain external USB drive to WD My Book World Edition (Blue rings)

    - by d03boy
    I recently bought a new My Book Essential 1TB (WDBACW0010HBK-NESN) to daisychain to my older My Book World Edition 500GB (blue rings) with one of the version 01.xx.xx firmwares. At first, when I connected the USB drive to the MBWE, it showed up in the System Summary section of the administration page without any issue, and I was able to set up a new share on the new drive. The administration website moved very, very slowly, though; the administration pages became nearly unresponsive during the setup process. Once the share was set up, I could access it, but again it was very slow, now through Windows Explorer. I looked around the internet and it seems this is caused by the USB drive being formatted with NTFS. I tried reformatting it (again as NTFS) just to double-check, and the same problem occurred. I then tried FAT32, but realized it would only support files of approximately 2GB, and that is not acceptable for me. I decided to try a firmware upgrade on the MBWE to version 02.00.19. The firmware upgrade completed successfully, but now the MBWE does not display the USB drive in the System Summary like it did with the earlier firmware version. The USB drive works perfectly fine when connected directly to my computer. Is there a way to solve this issue?

  • How can I set up nginx to serve virtual hosts with Rails (Unicorn/Passenger) and PHP-FPM

    - by NewAlexandria
    I would like to serve multiple sites on one instance. I install nginx, php-fpm, and a rails app. I use sites like this to guide me. I configure php-fpm to listen to a local socket listen = /var/run/php-fpm/php-fpm.sock I configure ngnix with multiple hosts: include /etc/nginx/conf.d/*.conf I have several site php conf files like /etc/nginx/conf.d/site1.conf server { listen 80; server_name site1.com www.site1.com; root /var/www/site1; location / { index index.html index.php; } location ~ \.php$ { fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name; } } and rails site conf files like upstream rails { server 127.0.0.1:3000; } server { listen 80; server_name site2.com www.site2.com; root /var/www/site2; location / { proxy_pass http://rails; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; proxy_set_header X-Url-Scheme $scheme; } } I have a unicorn rails server running via rails s -p 3000 Yet, no sites come up for either site1.com or site2.com. I can get to the rails site at www.site2.com:3000 What is wrong? I've spent 2 days (nearly 30hr) trying many different blogs, SO / SF questions, etc. Please share your insight or answer. edit 1: No log entries are created when I try to visit either site. It's like the requests never come in.

  • How do you gracefully upgrade mission critical systems to wildly disparate systems?

    - by Ernie
    In the span of the 12+ years of my career, I have yet to overcome this hurdle, and I suspect the answer simply isn't easy or even possible, so I ask everyone here for their experience. Say that you're running into egregious problems that can only be fixed by moving from one platform to another - either because the platform chosen years ago turned out to be a mistake, or because you've simply grown beyond what the system was originally designed for. You know for certain that the cruft that has built up over time will invariably mean it is nearly impossible to test for all the things that will certainly lead to tech support hell - which we all know leads to the loss of customers. Not that customers aren't already complaining about the egregious problems that exist! The best possible way that I've discovered so far is to devise a plan for the changeover, test it on a few clients, test it on a dozen clients, test it on a hundred clients, then finally finish the changeover for everyone and pray that you've worked out all the bugs with those first hundred and twenty, and that the animal by-products will not hit the ventilation system in the most spectacular fashion possible. However, that doesn't mean that they won't anyway. So say that you're moving from Exchange to Exim (or even just from Sendmail to Exim). How do you handle it?

  • Linux will not activate wireless after device has been re-enabled

    - by XHR
    Using an Eee 900A netbook by Asus. By pressing Fn + F2, I can disable or enable the wireless chip on the netbook; a blue LED indicates the status. I've been able to connect to wireless networks just fine with this netbook. However, if the wireless chip ever becomes disabled, I have to reboot to get my network connection back. This generally happens when suspending: for some reason the LED will be off, and I have to hit Fn + F2 for it to light up again. However, after doing so, Linux will not reconnect to the network. It simply changes the wireless status from "wireless is disabled" to "device not ready". Even worse, I've recently had issues with the chip being enabled at boot, thus making it nearly impossible to get connected. I've searched around online but haven't found much of anything useful on this. This happens on all kinds of different distros, including Ubuntu 9.10 Netbook, EeeBuntu 4 beta, Jolicloud and Ubuntu 10.04 Netbook.

    Edit: I noticed this question is getting a lot of views. To give a quick update: I never did resolve this issue with the given distros. However, I'm currently running Ubuntu 10.10 Netbook Edition and this issue has gone away.
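
    On the distros in question, the usual first check (a sketch; it requires the rfkill utility from the distro repositories) is whether the kernel still considers the radio blocked after the hotkey toggle:

        # Show hard/soft block state for every radio switch
        rfkill list

        # Clear a soft block left lingering after suspend or the Fn+F2 toggle
        sudo rfkill unblock wifi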

  • Windows XP Disappearing Folders

    - by XenoFoxx
    I am researching a problem for a friend and unfortunately do not have direct access to his computer. I've tried to gather as much information as possible, and I have researched it on various websites. I've not found anyone having the same problem my friend is having. So here goes: he has a media server in his home running Microsoft Windows XP. It has 3 drives: 1 for the OS and 2 for mass storage. Not long ago he went to access one of the mass storage drives and it was empty, except for a single folder. His first assumption was that his roommate had deleted everything on the drive (excluding the remaining folder). He then checked the properties of the drive and it was still saying that the hard drive was nearly full. I told him to check the recycle bin, thinking that whoever deleted the files didn't clear them from recycling and that they were still taking up space on the drive. My friend said the recycle bin was empty. So we have a drive that the Windows file management system says is empty (again, except for the remaining folder), but the properties of the drive say it's mostly full. Now it gets weirder: my friend tried to create a new folder on this drive and it auto-named itself "New Folder(1)", which means that it recognizes there is already a "New Folder" in that directory. He tried to rename it to a name that he KNEW was there previously, and Windows wouldn't allow it because it was a duplicate folder name. So now it seems the folders are there, but not displaying in Windows Explorer. Both of us have no idea why this is occurring, why the folders vanished, why the one remaining folder didn't vanish, or how to make them visible again. Anyone else ever experience this? I can get more details if needed.
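
    One quick thing to have him try (a sketch; the drive letter is an assumption): folders that exist but don't display are very often just carrying hidden/system attributes, which some malware of that era set on entire drives.

        :: List everything, including hidden and system entries
        dir E:\ /a

        :: Strip the hidden and system attributes from all folders and files
        attrib -h -s E:\*.* /s /d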
