Search Results

Search found 68825 results on 2753 pages for 'problem'.


  • How to restore qmail backup files

    - by Maysam
    We are using qmail as our mail application on a Linux server. A few weeks ago our server crashed, we reinstalled everything from scratch, and our users started sending and receiving email again. The problem is that they have lost their old emails. We have a backup of the whole qmail directory, but I don't know how to restore the old emails without losing the new ones. It's worth mentioning that I have no problem restoring old sent mail: when I copy message files into the .sent-mail/cur directory, they show up again in the users' Sent folders. Copying files into the inbox's cur directory, however, does not bring the inbox messages back.
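
    One approach worth trying (a sketch only, untested, and the paths and the user "alice" below are examples rather than real ones): if the backup preserves the standard Maildir layout (tmp/new/cur), the old inbox messages can be merged into each user's live Maildir without touching newer mail by copying only files that do not already exist, e.g. with rsync:

        # Merge backed-up inbox mail into the live Maildir, skipping any file
        # that already exists so newer messages are never overwritten.
        rsync -av --ignore-existing \
            /backup/qmail/users/alice/Maildir/cur/ \
            /var/qmail/users/alice/Maildir/cur/

        # Give the restored files the same ownership the live Maildir uses
        chown -R alice: /var/qmail/users/alice/Maildir/cur/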

    Read the article

  • 32-bit programs can't access the Internet in 64-bit Windows 7

    - by korona
    I recently got a new ASUS laptop with Windows 7 Home Premium pre-installed. It worked OK for a while, but a couple of days ago I suddenly couldn't access the Internet any more. After narrowing down the problem, I've reached the conclusion that 32-bit programs are suddenly unable to use the Internet, while 64-bit applications work just fine. Examples of programs that DON'T work any more: Google Chrome, Firefox, Internet Explorer 8, World of Warcraft. Examples of programs that DO work: Internet Explorer 8 (64 bit), and ping, nslookup and ftp on the command line. I'm pretty sure those command-line tools are 64-bit native. A re-install of Windows using the recovery partition on the laptop fixed the problem temporarily, but now it's back again. And I seem to be stuck between a rock and a hard place getting someone to take responsibility for this: the vendor says to talk to ASUS, ASUS says it's a software issue, and Microsoft doesn't give support on OEM licenses. Does anyone know how to solve this issue?
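
    A guess, not a confirmed fix: breakage limited to 32-bit programs is often blamed on a corrupted Winsock/LSP chain (for example after a misbehaving antivirus or proxy tool). Before resorting to another reinstall, it can be reset from an elevated Command Prompt:

        rem Reset the Winsock catalog and the TCP/IP stack, then reboot
        netsh winsock reset
        netsh int ip reset
        ipconfig /flushdns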

    Read the article

  • Organize code in Chef: libraries, classes and resources

    - by ColOfAbRiX
    I am new to both Chef and Ruby and I am implementing some scripts to learn them. Now I am facing the problem of how to organize my code: I have created a class in the library directory and I have used a custom namespace to maintain order. This is a simplified example of my file:

        # ~/chef-repo/cookbooks/mytest/libraries/MyTools.rb
        module Chef::Recipe::EP
          class MyTools
            def self.print_something( text )
              puts "This is my text: #{text}"
            end
            def self.copy_file( dir, file )
              cookbook_file "#{dir}/#{file}" do
                source "#{dir}/#{file}"
              end
            end
          end
        end

    From my recipe I call both methods:

        # ~/chef-repo/cookbooks/mytest/recipes/default.rb
        EP::MyTools.print_something "Hello World!"
        EP::MyTools.copy_file "/etc", "passwd"

    print_something works fine, but with copy_file I get this error:

        undefined method `cookbook_file' for Chef::Recipe::EP::FileTools:Class

    It's clear to me that I don't know how to create libraries in Chef, or that I'm missing some basic assumptions. Can anyone help me, please? I am looking for a solution to this problem (how to organize my code and libraries, and how to use resources in classes) or, better, a pointer to good Chef documentation, as I find the official documentation so unclear and disorganized that searching through it is a pain.

    Read the article

  • Reverse Proxy (mod_rewrite) and Rails (absolute paths)

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of backend Rails apps depending on the URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com, so that a URL like http://www.my_host.com/app_one/users displays the contents of http://www.remote_host_running_app_one.com/users. I have a large and ever-expanding number of backends, so they cannot be explicitly listed anywhere other than a database. This is no problem for mod_rewrite using a prg:/ rewrite map reverse proxy. The question is: the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served by mod_rewrite, because links on the proxied page appear absolute to the domain. That is, http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action when they need to look like /app_one/controller/action. mod_proxy_html seems like the right idea, but it doesn't seem to be as dynamic as I would need, since its rules have to be hard-coded into the config files. Is there a way to fix this server-side, so that the links will be routed correctly?

    Read the article

  • Why does my general frame rate slow down to 40fps randomly?

    - by Joshua
    This has been bugging me for a while. Every once in a while, I find my computer to be sort of laggy and I thought it was because it was busy or something. However, I recently noticed that it wasn't any performance issue...I thought my computer was laggy because the frame rate slowed from 75fps right down to ~40 fps and caused very visible tearing. This is not rare. It happens many, many times a day. I have no idea what is happening...I have an AMD 5670 on Windows 7 32-bit by the way, and I've heard bad things about AMD's driver support. Could this be the problem? P.S. The frame rate slowdown is not just for games (I rarely play games, and have not played games in the time since I noticed this problem), it seems it's an issue for the entirety of Windows. I first noticed the tearing when I was moving around tabs in Google Chrome.

    Read the article

  • Manually forcing TCP connection to retry

    - by Vi.
    The situation is this:

    1. I have a TCP connection (an SSH session to some computer, for example).
    2. The network suddenly goes down and drops all packets (disconnected cable, out of range).
    3. TCP resends the packets again and again, retrying with increasing delays.
    4. I see the problem and plug the cable back in (or restore the network somehow).
    5. The TCP connection finally manages to resend a packet and continues.

    The problem is that I need to wait out a timeout at point 5. I want to use my open SSH session now, not wait 5-10 seconds until it finds out that the connection is working again. How can I force all TCP connections to resend data without delays in GNU/Linux?
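
    Not a way to force an immediate retransmit, but a sketch of the related knobs: the current backoff can be watched per connection, and the kernel can be told to give up (and surface the error) after fewer retries than the default:

        # Show the retransmission timer and backoff of established SSH connections
        ss -ti state established '( dport = :22 or sport = :22 )'

        # Reduce how many times TCP retransmits before giving up (default is 15)
        sysctl -w net.ipv4.tcp_retries2=8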

    Read the article

  • Can't select text with mouse in Word / Office 2007

    - by asc99c
    I've been having a very weird problem for the last few months. In Word, and in fact in all programs from the Office 2007 suite, I can't drag the mouse pointer to select text. I can click at a point in the text and the cursor moves correctly to that point. If I double-click, the word under the cursor is selected, and triple-clicking selects the whole line. However, if I hold the mouse button down and drag the mouse, no text is selected. Occasionally the problem disappears and everything works fine, but it then reappears a few minutes later. Text selection with the mouse works everywhere else (Firefox, PuTTY, OpenOffice), just not in Office. The only add-ins are Google Desktop Office Addin, and Person Name (). For info, it is Office 2007 SP3, running on Windows 7 64-bit.

    Read the article

  • Hyper-V Ubuntu Networking Problems Copying Large Amounts of Data

    - by Anonymous
    I am trying to copy a large amount (about 50 GB) of data over my network from a Hyper-V-hosted virtual machine running Ubuntu 11.04 (Natty Narwhal) to another (non-virtual) Ubuntu host that I plan to use for testing upgrades to one of our web applications. The problem I am having is with the virtual machine, which I shall refer to in what follows as "source.host". This machine is running 64-bit Ubuntu Server with the 2.6.38-8-server kernel and the Microsoft Linux Integration Components for Hyper-V kernel modules (hv_utils, hv_timesource, hv_netvsc, hv_blkvsc, hv_storvsc, and hv_vmbus) loaded. It uses a Hyper-V "synthetic network adapter" for its networking interface. To do the copy, I log on to the machine with the data and run the following commands (call the remote machine "destination.host"):

        $ cd /path/to/data
        $ tar -cvf - datafolder/ | ssh [email protected] "cat > ~/data.tar"

    This runs for a while and then suddenly stops after transferring somewhere between 2 and 6 GB. The terminal on the source.host machine displays a Write failed: broken pipe error. The odd part is this: after this occurs, the "source.host" machine is no longer able to talk to the rest of the network. I cannot ping any other hosts on the network from the "source.host" machine, and I cannot ping the "source.host" machine from any other host on the network. I am equally unable to access any of the web services hosted on "source.host". Running ifconfig on "source.host" shows the network adapter to be up and running as usual with the correct IP address and everything. I tried restarting the networking service with

        $ /etc/init.d/networking restart

    but the problem does not go away. Restarting the machine makes it capable of talking to the network again -- it can ping and be pinged by other hosts, and the web services are also accessible and usable as normal -- but attempting the copy operation again results in the same failure, requiring another restart. As an experiment, I tried replacing the tar | ssh pipeline above with a straight scp:

        $ scp -r datafolder/ [email protected]:~

    but to no avail. Thinking that the issue might have to do with the kernel packet-send buffers filling up, I tried increasing the buffer size to 12 MB (up from the 128 KB default) with

        # echo 12582911 > /proc/sys/net/core/wmem_max

    but this also had no effect. I'm guessing at this point that it might be a problem with the Microsoft synthetic network driver, but I don't really know. Does anyone have any suggestions? Thank you very much in advance!
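
    One workaround sometimes suggested for flaky synthetic NICs under sustained load (an assumption on my part, not a confirmed hv_netvsc fix, and eth0 is a placeholder for the guest's actual interface name) is to turn off the offload features on the guest's adapter and retry the copy:

        # Disable checksum and segmentation offloads on the synthetic adapter
        sudo ethtool -K eth0 tx off rx off tso off gso off

        # Confirm which offload settings are now in effect
        ethtool -k eth0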

    Read the article

  • Laptop white screen on power-up. Still displays via HDMI output

    - by Inno
    My wife's laptop recently started displaying a white screen. It doesn't show the POST screen or anything, just a white screen when it's powered on. However, it works normally with HDMI output to our television. I took it apart and fiddled with both ends of the display cable, but either I didn't fiddle correctly or that's just not the problem. I also noticed that the screen no longer turns off when the laptop lid is closed. Is there a name for the mechanism that controls this function, so I can try to locate it? My guess is that the problem lies with the screen itself or the display cable, but I'm curious whether there's anything else I might be overlooking. Also of note is that the left hinge is partially broken: the corner of the plastic case broke off, so the hinge is exposed and doesn't stay in place. I've tried holding it in place, wiggling it around, and tapping various parts of the computer, but the white screen remains.

    Read the article

  • Having internet on a VirtualBox

    - by S4M
    I am running a Linux laptop and I set up a VirtualBox VM running Windows XP. My only problem is that the VM doesn't seem to be connected to the internet: when I run the connection diagnosis inside the guest, it tells me there is no connection. I am using a NAT adapter, and I bound port 80 of my computer to port 80 of the VM, and did the same for port 8080, but still no result. I would be grateful if someone could help me sort this out. EDIT: thinking about it, what makes the problem hard - and painful - is the absence of error messages. If I try to use a given adapter type to share my computer's connection with the VM and it doesn't work, I have no way to know why. So it would also be really helpful if someone could share a way to get at this kind of information (is there a log file somewhere, or a way to run VirtualBox from the command line in a verbose mode?).
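
    A few command-line starting points (a sketch; "WinXP" stands in for the actual VM name, and the log path can vary between VirtualBox versions). Note that with a NAT adapter, outbound browsing from the guest should work without any port forwarding; forwards are only needed for connections coming into the guest:

        # Check which adapter type the VM has and whether the virtual cable is connected
        VBoxManage showvminfo "WinXP" | grep -i nic

        # Example of an inbound port forward on the NAT adapter (host 8080 -> guest 80);
        # the VM must be powered off when running modifyvm
        VBoxManage modifyvm "WinXP" --natpf1 "web,tcp,,8080,,80"

        # Per-VM log written by VirtualBox
        less ~/"VirtualBox VMs"/WinXP/Logs/VBox.log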

    Read the article

  • What happens to missed writes after a zpool clear?

    - by Kevin
    I am trying to understand ZFS' behaviour under a specific condition, but the documentation is not very explicit about this, so I'm left guessing. Suppose we have a zpool with redundancy. Take the following sequence of events:

    1. A problem arises in the connection between device D and the server. This causes a large number of failures and ZFS therefore faults the device, putting the pool in a degraded state.
    2. While the pool is in the degraded state, the pool is mutated (data is written and/or changed).
    3. The connectivity issue is physically repaired, such that device D is reliable again.
    4. Knowing that most data on D is valid, and not wanting to stress the pool with a needless resilver, the admin instead runs zpool clear pool D. This is indicated by Oracle's documentation as the appropriate action where the fault was due to a transient problem that has since been corrected.

    I've read that zpool clear only clears the error counter and restores the device to online status. However, this is a bit troubling, because if that's all it does, it will leave the pool in an inconsistent state! This is because mutations in step 2 will not have been successfully written to D. Instead, D will reflect the state of the pool prior to the connectivity failure. This is of course not the normative state for a zpool and could lead to hard data loss upon failure of another device - yet the pool status will not reflect this issue! I would at least assume, based on ZFS' robust integrity mechanisms, that an attempt to read the mutated data from D would catch the mistakes and repair them. However, this raises two problems:

    1. Reads are not guaranteed to hit all mutations unless a scrub is done; and
    2. Once ZFS does hit the mutated data, it (I'm guessing) might fault the drive again, because it would appear to ZFS to be corrupting data, since it doesn't remember the previous write failures.

    Theoretically, ZFS could circumvent this problem by keeping track of mutations that occur during a degraded state and writing them back to D when it's cleared. For some reason I suspect that's not what happens, though. I'm hoping someone with intimate knowledge of ZFS can shed some light on this aspect.
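
    A conservative way to address that worry, sketched with the pool and device names used in the question: after the clear, run a scrub, which reads and checksums every block in the pool and rewrites stale or damaged copies from the remaining redundancy:

        zpool clear pool D      # clear the fault on the repaired device
        zpool scrub pool        # verify every block; stale copies on D get rewritten
        zpool status -v pool    # watch scrub progress and any repaired errors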

    Read the article

  • Vim: I don't want to insert!!!

    - by bhh1988
    I could not find anything online addressing this problem, which is surprising. The problem is that I find it very easy to accidentally insert text in Vim. I know I can undo with 'u', but it is still quite annoying and happens frequently. Often I type a command like 'sp file.txt' without realizing that I haven't typed the ':' character yet (so I'm not yet on the command line). Unfortunately, there are several characters that take you into insert mode, including 's', 'a', 'i' and 'o'. I'd rather have insert mode mapped to just one keybinding that is very deliberate, like Shift-Space. Can anyone point me to something that might do what I'm looking for? Thanks.
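
    A minimal sketch of what such a remap could look like, appended to ~/.vimrc from the shell (the keys chosen are only an example, and many terminals do not send Shift-Space as a distinct key, so the last mapping may only work in a GUI Vim):

        cat >> ~/.vimrc <<'EOF'
        " make entering insert mode deliberate: disable the usual single-key entries
        nnoremap i <Nop>
        nnoremap a <Nop>
        nnoremap o <Nop>
        nnoremap s <Nop>
        " and map Shift-Space to insert instead
        nnoremap <S-Space> i
        EOF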

    Read the article

  • Debug unstable Apache server under Debian

    - by almo
    Since yesterday, the Apache server that runs on one of my Debian machines has been very unstable: sometimes my websites load and sometimes they don't. I think it has to do with memory, since my Apache log is full of Out of memory (allocated 262144) (tried to allocate 4480 bytes) errors. I also attached a screenshot of the memory graph. A server restart resolves the problem temporarily. I looked at the processes that are using memory, but the biggest one is MySQL with 6.5%. Where else can I look for the problem? Edit: I ran free -m right after rebooting and again about 2 hours later. I think the trend is visible:

        root@xxx:~# free -m
                     total       used       free     shared    buffers     cached
        Mem:          4016        731       3284          0         80        200
        -/+ buffers/cache:        449       3566
        Swap:          459          0        459

        root@xxx:~# free -m
                     total       used       free     shared    buffers     cached
        Mem:          4016       2466       1550          0         92        473
        -/+ buffers/cache:       1900       2115
        Swap:          459          0        459
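
    Besides the MySQL figure, it may be worth checking how much memory each Apache child uses and whether MaxClients allows more children than the box can hold. A rough sketch, assuming Debian's apache2 naming:

        # Average resident memory per Apache process (RSS is the 8th column of ps -ly output)
        ps -ylC apache2 --sort=rss | awk 'NR>1 {sum+=$8; n++} END {printf "%d processes, %.1f MB each on average\n", n, sum/n/1024}'

        # Which MPM is loaded, and what MaxClients is set to
        apache2ctl -V | grep -i mpm
        grep -ri maxclients /etc/apache2/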

    Read the article

  • Long lag and errors clicking Gmail links in Google Chrome

    - by Doug T.
    I recently started using Google Chrome on Ubuntu 9.04 and love it. Ironically, one site I consistently have issues with is Gmail. In Gmail I will click a link, say an email or the inbox link, and after a very long wait (on the order of 30 seconds to a couple of minutes) the page loads with an error such as: Some Gmail features have failed to load due to an Internet connectivity problem. If this problem persists, try reloading the page, using the older version, or using basic HTML mode. Learn More. Googling the symptoms has not helped. Has anyone else had similar issues? Has anything helped?

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100 GB, "-t news") on iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p/htcache -l15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache, only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space; in those 10 seconds the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any ideas? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different filesystem for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
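
    One thing that may help, sketched from the htcacheclean manpage (untested here): running htcacheclean as a daemon so pruning happens continuously in small increments instead of in one massive, IO-bound pass:

        # Check the cache every 30 minutes, be nice about IO, delete empty
        # directories, and only do work when the cache has actually changed
        htcacheclean -d30 -n -t -i -p/htcache -l15G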

    Read the article

  • How can I run my program on a large number of computers? [closed]

    - by zenpoy
    I'm looking for a (preferably free) service for running an executable I wrote. It's not malicious, it's not a virus, it's not a scam, and if it really matters I can upload the Python source code instead. I wrote a small crawler to gather information regarding the style of web pages for my MA project, and I need a lot more data. EDIT: Here is more information on my problem, how I'm approaching it, and where I'm stuck. As part of my research I'm trying to classify text based on its style (font-family, for now). My data comes from web pages, so I wrote a client/server application: the client is a crawler that gathers this data and sends it to the server. The problem is that something like 99% of the internet is Arial, Verdana and Helvetica - other fonts are far rarer - so I need to spend a very long time gathering enough data about those fonts. Hope this explains it.

    Read the article

  • Rsync fails for files that start with underscore when destination is zfs

    - by Eric
    Hi everyone. I'm using rsync 3.1.0pre1 on Mac OS X 10.8.5, and am trying to rsync one folder to another. The destination is a ZFS volume mounted via SMB. The problem I'm having is that files whose names start with an underscore (e.g. '_filename.jpg') are not being successfully synced to the destination. I get the following error message:

        rsync: mkstemp "/path/to/destination/._filename.jpg.NUgYJw" failed: Permission denied (13)

    In this case, '_filename.jpg' does not make it to the destination. I understand that rsync creates hidden temporary files at the destination which are prefixed with '.' and have a random suffix appended to the end. But the original filename starts with '_', not '.', and I haven't asked rsync to copy extended attributes / resource forks over (unless it always does). The rsync command I'm using is:

        rsync -avE --exclude='.DS_Store' --exclude '.Trash' --exclude 'Thumbs.db' --exclude '._*' --delete /source/ /destination/

    Has anyone found a way around this problem? Thank you!
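
    Two rsync options that sidestep the "._<name>.XXXXXX" temporary files might be worth trying (a sketch, under the assumption that the SMB share is rejecting the dot-underscore temp names rather than the files themselves):

        # Write updated files directly instead of via a temporary file
        rsync -avE --inplace --exclude='.DS_Store' --exclude '.Trash' \
              --exclude 'Thumbs.db' --exclude '._*' --delete /source/ /destination/

        # Another option to experiment with: keep the temporary files on a local path
        rsync -avE --temp-dir=/tmp/rsync-tmp --exclude='.DS_Store' --exclude '.Trash' \
              --exclude 'Thumbs.db' --exclude '._*' --delete /source/ /destination/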

    Read the article

  • Synckolab will not start automatically

    - by EBV2010
    We have a Kolab server for e-mail, calendar and contacts, with Thunderbird as the client. The add-ons are Lightning and Synckolab. The workstations run Kubuntu, most of them 10.04, some 11.10. It basically works, except for one nagging problem: the automatic sync (that is, the setting that starts Synckolab when Thunderbird starts and again every x minutes) never fires. We went through the whole routine: setting it, setting it to zero and back to some number of minutes, restarting Thunderbird or the entire computer to make sure it picks the setting up. The configuration console reflects the changes, but Synckolab still will not fire automatically. Manual syncs work without any problem (none that we've seen - they pick up all the added, changed, etc. calendar events). In short: Synckolab does not fire automatically with any setting we have thought of.

    Read the article

  • Can't connect to Internet through WiFi, but can with cable

    - by aldy505
    I'm using Windows 7 32-bit on a Toshiba Portege laptop. I want to connect to my WiFi, which usually works without any trouble. The problems may have started when I tried to install Microsoft Research's Mesh Virtual WiFi, which can connect to more than 2 wireless networks; I wanted to try connecting to both the wireless router and my personal ad-hoc network at the same time. Now my laptop won't really connect to my WiFi any more: I can join the network, but it says "Limited access" and never actually reaches the internet. When I plug in the LAN cable, everything works, so I know the problem is in the laptop's wireless connection or its settings. Any help with this? UPDATE: The IP and DNS settings are set to automatic. When I ran the diagnostics, Windows told me that the wireless network adapter is the problem, but its suggestion was just to plug in the LAN cable - it didn't tell me how to fix the wireless itself.
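
    If a leftover virtual adapter from that experiment is interfering, a few things to try from an elevated Command Prompt (a guess, not a verified fix; the hosted-network commands only matter if Windows' own virtual WiFi feature got enabled along the way):

        rem turn off Windows' hosted network (virtual WiFi) if it was enabled
        netsh wlan stop hostednetwork
        netsh wlan set hostednetwork mode=disallow
        rem then ask for a fresh DHCP lease on the wireless adapter
        ipconfig /release
        ipconfig /renew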

    Read the article

  • How do I fix Nginx config to work with multiple hosts of Unicorn?

    - by fred deAlmeida
    I have no problem instantiating multiple instances of Unicorn on different Unix sockets and ports; that works fine if I go to url:port directly. My problem comes in correctly formatting nginx.conf to allow multiple upstream definitions. Whatever I do does not seem to work: one instance is fine, but with multiple instances I get an '"upstream" directive is not allowed here' error. I am using the base nginx sample from the Unicorn site and doubling up the upstream area with differing names, each as part of the http section. Any help would be amazing!
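
    For reference, a sketch of the structure nginx expects (the socket paths and server names below are made up): every upstream block must sit directly inside http { }, as a sibling of the server blocks, and each server block then points its proxy_pass at one of the named upstreams. The '"upstream" directive is not allowed here' error usually means an upstream block ended up inside a server or location block instead:

        http {
            upstream app_one { server unix:/tmp/app_one.sock fail_timeout=0; }
            upstream app_two { server unix:/tmp/app_two.sock fail_timeout=0; }

            server {
                listen 80;
                server_name app-one.example.com;
                location / { proxy_pass http://app_one; }
            }
            server {
                listen 80;
                server_name app-two.example.com;
                location / { proxy_pass http://app_two; }
            }
        }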

    Read the article

  • Dlink search is hijacking my browser

    - by James
    For months now "DLink search" has been hijacking my searches. I use Google Chrome, and I have reorganized my search engines in the handy "manage search engines" tool about a TRILLION times, but D-Link never even shows up there as one of the search engines, so there is nothing to remove. I have read many posts on this forum and others saying the fix is in Internet Explorer: setup, Internet Options, yada yada, magical fairies, and you are solved - but my browser is Google Chrome! How am I supposed to do this from there? I also do not know how to re-setup my D-Link router, which is apparently the cause of the problem. Hundreds of people responded to those posts saying the fix worked for them; it does not help me, because there are no options for anything like this in Google Chrome - I see no "Setup" option, no "Internet Options" button, no anything. PLEASE EXPLAIN and HELP. The exact suggestions from those posts were: "Uncheck Advanced DNS in the router internet setup. This will take care of it. I had this problem with my DLink router before." "I had this issue with my DIR-655 and unchecking the Advanced DNS setting in Setup - Internet - Manual Internet Connection Setup fixed it." "If this is just internet explorer, you can go to Tools Internet Options or Internet Options in Control Panel. From here, go to the advanced tab and click the Reset button." "I would set the router's DNS to a site like OpenDNS, and I would ensure the machines are set to get their DNS settings via DHCP or set the machine's DNS setting to OpenDNS. If the router's DNS looks like it was messed with, some bad software know the default passwords for routers and could have changed it. If you don't already I would make sure the password to the router is not default or easy to guess. I've had spyware change a machine's DNS, but the fact it is happening on all machines makes me wonder if it is the router." "Something got into your router and changed the dns server most likely, do a hard reset of the router and then change the password to something strong. Also check for a firmware update for the router and apply it as soon as possible."

    Read the article

  • Uncorrectable machine check

    - by GregC
    I am experiencing rare but real unrecoverable machine checks on an HP DL370 G6 dual-core Xeon server. I ran memtest86+ beforehand, and I have run CPU-intensive workloads without any problems. In your opinion, does this indicate a real problem, or is it normal and expected behavior? How would you approach this problem? EDIT: after some troubleshooting, it seems that these machine checks, as well as problems when opening Device Manager, can be traced back to the NC375i NICs. All is well when the NICs are not in the server. Further stability improvements for HP Gen6 servers with Intel Xeon CPUs arrived with the BIOS update on the September 2013 HP Update DVD; Intel's newer microcode makes these CPUs much more stable. We haven't seen hardware-related BSODs since that update in September.

    Read the article

  • Internet Connection Dying after some time

    - by Rahul
    I'm using a BSNL 3G USB data card (BSNL is an ISP in India). After connecting to the internet, the connection dies while I'm browsing. After disconnecting and reconnecting, it works properly for a while and then the same problem repeats. But if I'm downloading a movie or some other file through uTorrent, the connection does not die. I'm experiencing this problem in Vista and also in Ubuntu. Please help me with this. Thank you.

    Read the article

  • localhost works, 127.0.0.1 does not (IIS)

    - by NickatUship
    Very weird problem on IIS, one I've never had before: localhost works, but 127.0.0.1 does not. localhost pings to 127.0.0.1. www.mydomain.com also pings to that IP, which is set up in the hosts file, but that also doesn't work locally. I've run ipconfig /flushdns without success. I've even restarted the server. Another server set up the exact same way works fine. Any ideas? To be clear, I'm accessing the URLs in IE like this: http://localhost, http://127.0.0.1 and http://www.mydomain.com. I can telnet to port 80 without a problem for all three.
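
    One thing worth checking (assuming IIS 7 or later, where requests go through http.sys): whether the HTTP listener has been restricted to specific IP addresses, which can make name-based and address-based requests behave differently:

        rem An empty list means "listen on all addresses"; if some addresses are
        rem listed, 127.0.0.1 may simply be missing from it
        netsh http show iplisten

        rem Add the loopback address to the listen list if it is absent
        netsh http add iplisten ipaddress=127.0.0.1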

    Read the article

  • Sony Vaio laptop constant "bi" noise when on battery

    - by Dominick1978
    I have a Sony Vaio VGN-FS215S laptop. When I use it on battery alone, it makes a constant "bi" noise that gets louder during more power-hungry tasks. Sometimes on startup the screen goes black before the XP logo; I then hear the Windows startup sound, but the screen stays black. I bypassed the XP logo (via msconfig) and it boots all the way, but the noise is still there. There is no problem when the laptop is plugged into a wall socket. What do you think the problem is, and how do I fix it (and roughly what would it cost)? Thanks a lot.

    Read the article
