Search Results

Search found 68825 results on 2753 pages for 'problem'.


  • GMail IMAP + Apple Mail / iPhone - "Account exceeded bandwidth limits. (Failure)"

    - by bpapa
    Started seeing this this morning in Apple Mail. I have one of those exclamation point error indicators next to "Inbox", with this error message when I click on it: There may be a problem with the mail server or network. Verify the settings for account “IMAP Account” or try again. The server returned the error: Account exceeded bandwidth limits. (Failure). This is on Snow Leopard. I'm using GMail IMAP, and I am way below the size quota - I've never heard of there even being a bandwidth quota. I'm also not getting mail from the same account to the Mail app on my iPhone. EDIT - a month later I'm still seeing this, and I'm thinking of just switching to MobileMe. EDIT AGAIN - Making this community wiki. I stopped seeing the problem once I updated Snow Leopard to the latest version, but since others continue to see it...

    Read the article

  • Kernel upgrade CentOS 5.3 mount: could not find filesystem '/dev/root'

    - by matt
    We have a CentOS 5.3 x64 server that by default runs kernel version 2.6.18-164.11.1, and we are attempting to upgrade the box to 2.6.31.12. The drive is LVM + ext3, and the problem I'm having is that when I upgrade the kernel and attempt to boot from it, no matter what version of the kernel I use, I get /dev/root not found towards the end of the boot process, and the kernel panics, then reboots. I'm installing the kernel exactly as it says in this doc. I've tried it "the CentOS way", using make rpm and then installing that. I've updated my mkinitrd. The most interesting part of this problem is that it has been so frustrating that I decided to try a clean install of CentOS on an identical machine without LVM, and the result is EXACTLY the same: after upgrading the kernel, I get /dev/root not found. Does anyone know how to fix this, or what information would be relevant to remedy it? I'm open to trying anything at this point. One more interesting thing about this problem: in the new version of the kernel, during boot it complains that dm-mapper is started twice, then panics right after that. I've tried this with other kernel versions, and the result is the same. What am I missing here? If you need any more files, please just ask.

    uname -a:

        Linux cg 2.6.18-164.11.1.el5 #1 SMP Wed Jan 20 07:32:21 EST 2010 x86_64 x86_64 x86_64 GNU/Linux

    /etc/fstab:

        /dev/VolGroup00/LogVol00 /        ext3   defaults       1 1
        LABEL=/boot              /boot    ext3   defaults       1 2
        tmpfs                    /dev/shm tmpfs  defaults       0 0
        devpts                   /dev/pts devpts gid=5,mode=620 0 0
        sysfs                    /sys     sysfs  defaults       0 0
        proc                     /proc    proc   defaults       0 0
        /dev/VolGroup00/LogVol01 swap     swap   defaults       0 0

    grub.conf:

        default=1
        timeout=5
        splashimage=(hd0,0)/grub/splash.xpm.gz
        hiddenmenu
        title CentOS (2.6.31.12-rt20) //NOT WORKING!!!!
            root (hd0,0)
            kernel /vmlinuz-2.6.31.12-rt20 ro root=/dev/VolGroup00/LogVol00 isolcpus=8,9,10,11,12,13,14,15 panic=10
            initrd /initrd-2.6.31.12-rt20.img
        title CentOS (2.6.18-164.11.1.el5) //WORKING!!
            root (hd0,0)
            kernel /vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/VolGroup00/LogVol00 isolcpus=8,9,10,11,12,13,14,15 panic=10
            initrd /initrd-2.6.18-164.11.1.el5.img
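    A "/dev/root not found" panic on an LVM root usually means the initrd for the new kernel lacks the device-mapper/LVM pieces. A minimal sketch of rebuilding it by hand, assuming the version string from the grub entry above (the module list is illustrative, adjust to match):

        # rebuild the initrd for the custom kernel with device-mapper support
        mkinitrd -f --with=dm-mod --with=dm-snapshot --with=dm-mirror \
            /boot/initrd-2.6.31.12-rt20.img 2.6.31.12-rt20

        # confirm the dm modules actually made it into the image
        zcat /boot/initrd-2.6.31.12-rt20.img | cpio -t | grep dm-

    This does not explain why the non-LVM machine fails the same way, so treat it as one test among several rather than a definitive fix.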

    Read the article

  • Determining currently-serving files in IIS 7

    - by Nat Papovich
    serverfault showed me this topic, and I think I want to do the same thing, but in IIS, not Apache. I have a "dashboard" application I'm building and I want it to show what files are currently being served by IIS. They'll mostly all be large files. I believe the ILogScripting COM interface would have been one good place to start, but it's not available in IIS 7, and it relies on the underlying IIS logs for its data. And therein, I believe, lies my problem. How do I make IIS write, essentially, two log entries: one as the request begins, and one when the connection is closed? Also, it looks like IIS doesn't "commit" log entries as they're occurring, in "real time"; there's some kind of delay/batch job. That will cause a problem for me too. Or do I need to do something in ISAPI instead?
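    For what it's worth, IIS 7 can report in-flight requests directly, without going through the logs. A sketch using the built-in appcmd tool (path is the IIS default):

        REM list requests currently executing in all worker processes
        %windir%\system32\inetsrv\appcmd list requests

        REM or only long-running ones, e.g. older than 5 seconds
        %windir%\system32\inetsrv\appcmd list requests /elapsed:5000

    A dashboard could poll this (or the equivalent Microsoft.Web.Administration API) instead of waiting for log flushes.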

    Read the article

  • Mac OS X Lion 10.7.2 update breaks SSL

    - by mcandre
    Summary: After updating from 10.7.1 to 10.7.2, neither Safari nor Google Chrome can load GMail. Spinning beachballs all around. The problem isn't GMail; Firefox loads GMail just fine. The problem isn't limited to Safari or Google Chrome; other applications also have trouble with SSL: Gilgamesh and Safari. Any program that uses WebKit (Google Chrome, Safari) or a Cocoa library (Gilgamesh) to access the Internet has trouble loading secure sites. The various forums online suggest a handful of fixes, none of which work.
    Analysis:
    Fix #1: Open Keychain Access.app and delete the Unknown certificate. The 10.7.2 update also prevents Keychain Access from loading; the Keychain program itself just spins the beachball.
    Fix #2: Delete ~/Library/Keychains/login.keychain and /Library/Keychains/System.keychain. This temporarily resolves the issue and lets you load secure sites, but rebooting or hibernating somehow magically undoes the fix a minute or two later, so you have to delete these files over and over.
    Fix #3: Delete ~/Library/Application\ Support/Mob* and /Library/Application\ Support/Mob*. There is a rumor that ubd, the new MobileMe/iCloud service, is causing the issue. This fix does not resolve the issue.
    Fix #4: Open Keychain Access, open the Preferences, and disable OCSP and CRL. This fix does not resolve the issue.
    Fix #5: Use the 10.7.0 - 10.7.2 combo installer, rather than the 10.7.1 - 10.7.2 installer. When I run the combo installer, it stays forever at the "Validating Packages..." screen. The combo installer itself is bugged to He||. I force-quit the installer, ran "sudo killall installd" to force-quit the background installer process, and reran the combo installer. Same problem: it stalls at "Validating Packages..."
    Recap: The only fix that works is deleting the keychains, but you have to do this every time you reboot or wake from hibernate. There is some evidence that ubd continually corrupts the keychain files, but the suggested ubd fix of deleting ~/Library/Application\ Support/Mob* and /Library/Application\ Support/Mob* does not resolve this issue. Evidently, something is corrupting the keychain over and over and over. Also posted on the Apple Support Communities.
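    For reference, the temporary workaround (Fix #2) as shell commands - just a sketch of what is described above; the keychains are recreated at the next login, and any saved passwords in them are lost:

        # remove the (possibly corrupted) keychains, then reboot
        rm ~/Library/Keychains/login.keychain
        sudo rm /Library/Keychains/System.keychain
        sudo reboot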

    Read the article

  • What equipment do real ISPs use?

    - by Allanrbo
    In a dormitory of 550 residents, people often mistakenly set up DHCP servers for the whole network by plugging in their private Wi-Fi routers the wrong way around (LAN port to the building network). Also, recently someone mistakenly configured their PC with a static IP that was the same as that of the default gateway. We use cheap 3Com switches at the moment. I know that Cisco switches support DHCP snooping to solve the DHCP problem, but that still does not solve the default-gateway IP takeover problem. What sort of switch equipment do real ISPs use so their customers cannot break the network for the other customers?
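    As a sketch of the Cisco side mentioned above: DHCP snooping handles the rogue DHCP servers, and the companion features Dynamic ARP Inspection and IP Source Guard use the snooping binding table to also block address takeovers. VLAN number and interface names here are placeholders:

        ! global: enable snooping and ARP inspection for the resident VLAN
        ip dhcp snooping
        ip dhcp snooping vlan 10
        ip arp inspection vlan 10
        !
        interface GigabitEthernet0/1
         description uplink towards the real DHCP server / gateway
         ip dhcp snooping trust
         ip arp inspection trust
        !
        interface range FastEthernet0/1 - 48
         description resident ports
         ip verify source
         ! drops traffic whose source IP doesn't match the DHCP lease on that port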

    Read the article

  • XP Pro product keys

    - by Bill
    I have a very serious problem. After my XP Home OS was trashed by rogue software - a trial of a thing named TuneUp - I did a clean install, including an HDD reformat, of Windows XP Pro from a purchased OEM disk. This was Service Pack 2, subsequently upgraded to SP3. I had conclusively mislaid the product key. I had to access data on the machine VERY urgently, and I did not then know that under some circumstances Microsoft might agree to provide a replacement. I found what I now know must have been a pirate key on the Net, which enabled installation but NOT activation. This of course left me functional but 30 days before meltdown - about 20 days left as I write this. Various retailers want around £100 for a retail copy with matching product key - this would be paying twice over just to continue use on the same computer. I have neither need nor intention of installing XP Pro on any other computer. I have tried a number of applications claiming to deal with this problem, but none of them work. A Belarc profile shows that the pirate key has replaced the original one on the system. I have now found two keys, one of which might be the original, but neither works. I am about to upgrade the HDD, and it looks like I will just be passing the problem on when I install XP. I have retrieved a key from the disk, but it is seemingly one Microsoft uses in production and does not work. It is 76487-OEM-0015242-71798. The keys I have, one of which might or might not be the original, are CD87T-HFP4G-V7X7H-8VY68-W7D7M and FC8GV-8Y7G7-XKD7P-Y47XF-P829W (or P819W - I believe it to be the latter, but the box will not accept it). The pirate key which has enabled this install, and which is now stored on the system but will not activate, is QQHHK-T4DKG-74KG7-BQB9G-W47KG. In these circumstances is it likely that Microsoft would issue a replacement? Is there any other solution? I am not trying to defraud anyone, just to keep on using the product I legitimately bought. Bill

    Read the article

  • Mac has become insanely slow: Processes SystemUIServer, UserEventAgent and loginwindow using a lot of memory

    - by SatheeshJM
    I have been using my Mac for many months without any problem. But recently, all of a sudden, the Mac became insanely slow. I opened Activity Monitor to see what was happening. For three processes - SystemUIServer, UserEventAgent and loginwindow - the memory gradually increases and reaches up to 2 GB for each process. This completely hangs up my Mac. I tried the following:
    1. Restart the Mac
    2. Restart the Mac in safe mode
    3. Manually kill the processes
    4. Remove Date and Time from the menu bar (this was supposed to be the cause of the SystemUIServer process's memory growth, according to many users)
    5. Remove the externally connected keyboard and mouse (some had suggested this for UserEventAgent's memory)
    No luck with any of those. The moment I log in, the memory spikes up. Any idea what the hell is happening? Please help.

    Read the article

  • Can't launch Oneiric x64 instance on Eucalyptus

    - by Bruno Reis
    EDIT: After many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. More details at the end. I didn't manage to fix it, and I will file a bug. EDIT 2: I managed to fix it; it apparently works.
    I have a 4-machine cluster running Ubuntu Server Natty (11.04) x64. I've installed "Ubuntu Enterprise Cloud" from the installation CD (then updated it) on each of these machines. The cloud seems to work fine; I have lots of virtual machines running Natty servers on them. Now I'd like to run Oneiric in a virtual machine, but somehow I can't. I downloaded Oneiric's (x64) image from http://cloud-images.ubuntu.com/oneiric/current/, published it (uec-publish-tarball oneiric-server-cloudimg-amd64.tar.gz oneiric-server-cloudimg-amd64) exactly as I did with Natty, then tried to launch an instance (euca-run-instances -n 1 -k my-key -t m1.small -z my-cloud emi-XXXXXXXX) using Oneiric's image, but the instance is not able to boot. With euca-get-console-output I get the following:

        [ 0.461269] VFS: Cannot open root device "sda1" or unknown-block(0,0)
        [ 0.462388] Please append a correct "root=" boot option; here are the available partitions:
        [ 0.463855] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
        [ 0.465331] Pid: 1, comm: swapper Not tainted 3.0.0-13-generic #22-Ubuntu
        [ 0.466526] Call Trace:
        [ 0.466989] [<ffffffff815d3ee5>] panic+0x91/0x194
        [ 0.467860] [<ffffffff81ad1031>] mount_block_root+0xdc/0x18e
        [ 0.468891] [<ffffffff81ad126a>] mount_root+0x54/0x59
        [ 0.469829] [<ffffffff81ad13dc>] prepare_namespace+0x16d/0x1a7
        [ 0.470883] [<ffffffff81ad0d76>] kernel_init+0x140/0x145
        [ 0.471837] [<ffffffff815f38e4>] kernel_thread_helper+0x4/0x10
        [ 0.472889] [<ffffffff81ad0c36>] ? start_kernel+0x3df/0x3df
        [ 0.473884] [<ffffffff815f38e0>] ? gs_change+0x13/0x13

    The filesystem is labeled "cloudimg-rootfs"; inside the image, both /etc/fstab and /boot/grub/grub.cfg always refer to the image by the label. Everything seems to be correct, yet the kernel says it can't find the root file system. I've spent many hours googling, but nothing came of it. I've asked on #ubuntu-server, but nobody knew what to do. I've asked on #eucalyptus, but got no answer at all. Any ideas on why this is happening and how to solve it? Thanks
    EDIT: After many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. The first problem is that the kernel in the image is a -generic kernel, while I suppose it should be a -virtual one. I chrooted into the image, removed the -generic packages, and replaced them with the -virtual ones. Then I extracted the new kernel (and replaced the original -generic one that came with the tarball), because I need it when I publish and launch an image with Eucalyptus. The problem described above was solved. But then the console started showing this:

        mount: mount point ext4 does not exist

    If you check the /etc/fstab file in the image, it says:

        LABEL=cloudimg-rootfs ext4 defaults 0 1

    Damn, where's my mount point? Note that it is missing /proc as well. Well, when you think it is over, you will notice that your instance has no network connectivity. Let's check /etc/network/interfaces:

        # interfaces(5) file used by ifup(8) and ifdown(8)
        auto lo
        iface lo inet loopback

    Oh my! It is missing eth0... here I stopped. I can't take any more. I give up. Looks like Canonical has just forgotten to properly set up this image.
    At first, I thought: "have I downloaded a server image by mistake?", but no, I double-checked. It is really the cloud image; it even has "cloud-init" installed (which is not on server images by default). They just forgot to prepare it. I will file a bug (and reference it here once this is done), and I hope they fix it soon! EDIT 2: It looks like the network configuration was the last thing missing. I decided to test it with the fixes above, and it booted properly! However, I haven't got the slightest idea whether the image is now good to go...
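    A rough consolidation of the fixes described above, as a sketch only (the mount-point repair, package names and the eth0 stanza follow from the symptoms; the exact image layout may differ):

        # with the image's root filesystem mounted at /mnt/img:
        chroot /mnt/img /bin/bash
        apt-get install linux-image-virtual      # swap the -generic kernel for -virtual
        apt-get purge linux-image-generic
        # repair the fstab line that is missing its mount point:
        echo 'LABEL=cloudimg-rootfs / ext4 defaults 0 1' > /etc/fstab
        # add the missing primary interface:
        printf 'auto eth0\niface eth0 inet dhcp\n' >> /etc/network/interfaces
        exit
        # then copy the new kernel/initrd out of the image for uec-publish-tarball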

    Read the article

  • SquirrelMail gives an error after WHM installation

    - by Pleerock
    Just now I installed WHM/cPanel, created some accounts, created a mail account in one of them, and entered SquirrelMail to check the mail. Unfortunately it gives me an error:

        Warning: fsockopen() [function.fsockopen]: php_network_getaddresses: getaddrinfo failed: Name or service not known in /usr/local/cpanel/base/3rdparty/squirrelmail/plugins/login_auth/functions.php on line 129
        Warning: fsockopen() [function.fsockopen]: unable to connect to localhost:143 (php_network_getaddresses: getaddrinfo failed: Name or service not known) in /usr/local/cpanel/base/3rdparty/squirrelmail/plugins/login_auth/functions.php on line 129

    How do I fix it? I don't know exactly, but I read some sites and maybe the problem is in DNS? I changed ns1.com to ns1.myhost.com in Basic cPanel & WHM Setup. P.S. I'm sure that it's a server configuration problem, not SquirrelMail - other mail clients are not working either...
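    The getaddrinfo failure on localhost:143 suggests the resolver can't even resolve "localhost", so /etc/hosts is worth checking before DNS. A quick diagnostic sketch (the expected hosts line is shown as a comment for comparison):

        # does "localhost" resolve at all?
        getent hosts localhost
        # /etc/hosts should contain a line like:
        #   127.0.0.1   localhost localhost.localdomain
        # and the IMAP service should actually be listening on 143:
        netstat -lnp | grep :143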

    Read the article

  • ThinkPad T400 backlight doesn't always turn on when booting from battery

    - by srunni
    I have a Lenovo ThinkPad T400, which I've owned for about a year. A while back, I noticed a problem in which the LED backlight would not always turn on when booting from battery. Usually, if I hold the power button down until the BIOS screen shows up, the backlight turns on, but sometimes even that doesn't work. If I just press the power button and let it go right away, the backlight usually doesn't turn on. This happens before the OS (I dual-boot Linux and Windows) gets a chance to boot - the BIOS screen itself is displayed without the backlight if it fails to turn on, so the problem isn't at the OS level. Any ideas? Thanks!

    Read the article

  • Delay at Windows Log On Screen

    - by Ryan Elkins
    I'm getting a delay when entering my password at the log on screen in Windows 7. What I mean is that I can select the text box but pressing keys does nothing. I had this problem initially on this computer with Vista installed - when I reformatted and installed Windows 7 the problem went away - but over the weeks it is slowly coming back (the delay seems to be getting longer). Any clues as to why this is happening and how I can fix it? It currently takes about 4-8 seconds before it will accept keyboard input.

    Read the article

  • LAN, VPN on Amazon EC2, how to?

    The problem is as follows: I have 2 Windows 2003 Server instances running in the cloud.
    1) How can I create a local area network from these 2 instances?
    2) Assuming that I want to create a VPN network from these 2 instances, how do I do that?
    (I'm not very good at networking, therefore the above problem description might be incomplete or not very clear.) A detailed answer or clarification would be praised and appreciated! What I tried:
    1) Setting up OpenVPN, but I got lost in the process.
    2) Creating a VPN from Windows 2003 Server in the following manner: on instance A, set up a DHCP server and an "accept incoming VPN" connection, with the following TCP/IP settings: obtain an IP from the DHCP server; on instance B, created a new VPN connection and tried to connect to instance A using instance A's static IP, but error 806 was thrown, something related to the GRE protocol.
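    Error 806 usually indicates that GRE (IP protocol 47, which the Windows PPTP VPN needs) is blocked somewhere on the path, which is common on EC2. Since OpenVPN was one of the attempts, a minimal static-key point-to-point tunnel over plain UDP may be the simpler route - a sketch (tunnel IPs are placeholders; UDP 1194 must be opened in the security group):

        # on instance A: generate the shared key, then copy it to instance B
        openvpn --genkey --secret static.key

        # A's config file:
        #   dev tun
        #   ifconfig 10.8.0.1 10.8.0.2
        #   secret static.key

        # B's config file:
        #   remote <instance-A-public-ip>
        #   dev tun
        #   ifconfig 10.8.0.2 10.8.0.1
        #   secret static.key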

    Read the article

  • Adding new users

    - by user36651
    I have an FTP server that is running Fedora Core release 6 (Zod). The problem is I need to create new users, and I have root access saved in WinSCP, so I can run useradd or adduser via the fake terminal, but every time I try to use passwd <username> it crashes on me and won't allow me to change or add a password. My questions are:
    -- Is there a place the adduser script stores the default passwords? Or what is the default?
    -- Is there another way I can set passwords for new users?
    I don't want to change the root password, because EVERYONE has root access and it's saved in WinSCP (I'm sure you see the problem here...). I want to create user accounts for each user instead of giving them all blatant root access. The goal here is to gradually migrate everyone over to their new accounts and then change the root password. Any suggestions would be greatly appreciated.
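    The passwd crash is likely because passwd is interactive and needs a real terminal, which WinSCP's command shell does not provide. chpasswd takes the password non-interactively and should work there - a sketch (username and password are placeholders; run as root):

        # set a password without a TTY
        echo 'newuser:TheNewPassword' | chpasswd

        # or create the account and set its password in one go
        useradd -m newuser && echo 'newuser:TheNewPassword' | chpasswd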

    Read the article

  • Upgrade the Graphics Card for a Dell Dimension 3100

    - by Pat Foran
    Hi, I have a Dell Dimension 3100 desktop with a 128MB graphics chip integrated into the motherboard. I need to upgrade this 128MB to at least 256MB, or 512MB if the system will support it. I am told by Dell that all I have is a PCIe x1 slot and that they do not stock a graphics card for it. I was told to shop around at Amazon and eBay etc. and I would find one there. I have shopped around for some time now and do not know exactly what I am looking for. There are several PCI graphics cards out there, but which one would be the correct one for a Dell Dimension 3100? Can you help me resolve this problem? If you know of a PCIe x1 card that will sort out my problem, please let me have all the details needed to purchase it. Regards, Pat

    Read the article

  • How to bind an old user's SID to a new user to retain NTFS file ownership and permissions after a fresh reinstall of Windows?

    - by LiuYan
    Each time we reinstall Windows, it creates a new SID for the user, even if the username is the same as before.

        // example (not real SID format, just to show the problem)
        user     SID
        --------------------
        liuyan   S-old-501    // old SID before reinstall
        liuyan   S-new-501    // new SID after reinstall

    The annoying problem after a reinstall is that NTFS file ownership and permissions on the hard drive are still associated with the old user's SID. I want to keep the ownership and permission settings of the NTFS files, and let the new user take over the old user's SID, so that I can access files as before without permission problems. The cacls command-line tool can't be used in this situation: because the files don't belong to the new user, it fails with an Access is denied error, and it can't change ownership anyway. Even if I change the ownership via the SubInACL tool, cacls can't remove the old user's permissions, because the old user does not exist on the new installation, and it can't copy the old user's permissions to the new user. So, can we simply bind the old user's SID to the new user on the freshly installed Windows? Sample test batch:

        @echo off
        REM Additional tools used in this script
        REM PsGetSid http://technet.microsoft.com/en-us/sysinternals/bb897417
        REM SubInACL http://www.microsoft.com/en-us/download/details.aspx?id=23510
        REM
        REM make sure these tools are added into PATH

        set account=MyUserAccount
        set password=long-password
        set dir=test
        set file=test.txt

        echo Creating user [%account%] with password [%password%]...
        pause
        net user %account% %password% /add
        psgetsid %account%
        echo Done !

        echo Making directory [%dir%] ...
        pause
        mkdir %dir%
        dir %dir%* /q
        echo Done !

        echo Changing permissions of directory [%dir%]: only [%account%] and [%UserDomain%\%UserName%] has full access permission...
        pause
        cacls %dir% /G %account%:F
        cacls %dir% /E /G %UserDomain%\%UserName%:F
        dir %dir%* /q
        cacls %dir%
        echo Done !

        echo Changing ownership of directory [%dir%] to [%account%]...
        pause
        subinacl /file %dir% /setowner=%account%
        dir %dir%* /q
        echo Done !

        echo RunAs [%account%] user to write a file [%file%] in directory [%dir%]...
        pause
        runas /noprofile /env /user:%account% "cmd /k echo some text %DATE% %TIME% > %dir%\%file%"
        dir %dir% /q
        echo Done !

        echo Deleting and Recreating user [%account%] (reinstall simulation) ...
        pause
        net user %account% /delete
        net user %account% %password% /add
        psgetsid %account%
        echo Done ! %account% is recreated, it has a new SID now

        echo Now, use this "same" account [%account%] to access [%dir%], it will fail with "Access is denied"
        pause
        runas /noprofile /env /user:%account% "cmd /k cacls %dir%"
        REM runas /noprofile /env /user:%account% "cmd /k type %dir%\%file%"
        echo Done !

        echo Changing ownership of directory [%dir%] to NEW [%account%]...
        pause
        subinacl /file %dir% /setowner=%account%
        dir %dir%* /q
        cacls %dir%
        echo Done ! As you can see, "Account Domain not found" is actually the OLD [%account%] user

        echo Deleting user [%account%] ...
        pause
        net user %account% /delete
        echo Done !

        echo Deleting directory [%dir%]...
        pause
        rmdir %dir% /s /q
        echo Done !

    Read the article

  • "No route to host" with ssl but not with telnet

    - by Clemens Bergmann
    I have a strange problem with connecting to an https site from one of my servers. When I type:

        telnet puppet 8140

    I am presented with a standard telnet console and can talk to the server as always:

        Connected to athena.hidden.tld.
        Escape character is '^]'.
        GET / HTTP/1.1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://athena.hidden.tld:8140/"><b>https://athena.hidden.tld:8140/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at athena.hidden.tld Port 8140</address>
        </body></html>
        Connection closed by foreign host.

    But when I try to connect to the same host and port with SSL:

        openssl s_client -connect puppet:8140

    it is not working:

        connect: No route to host
        connect:errno=113

    I am confused. At first it sounded like a firewall problem, but it can't be, can it? Because that would also block the telnet connection. As firewall I am using ferm on both servers. The systems are Debian Squeeze VM boxes.
    [edit 1] Even when I try to connect directly with the IP address:

        openssl s_client -connect 198.51.100.1:8140   # address exchanged
        connect: No route to host
        connect:errno=113

    Bringing down the firewalls on both hosts with service ferm stop does not help either. But when I run openssl s_client -connect localhost:8140 on the server machine, it connects fine.
    [edit 2] If I connect to the IP with telnet, it also fails:

        telnet 198.51.100.1 8140
        Trying 198.51.100.1...
        telnet: Unable to connect to remote host: No route to host

    The confusion might come from IPv6. I have IPv6 on all my hosts. It seems that telnet uses IPv6 by default, and that works. For example: telnet -6 puppet 8140 works, but telnet -4 puppet 8140 does not. So there seems to be a problem with the IPv4 route. openssl seems to only (or by default) use IPv4 and therefore fails, while telnet uses IPv6 and succeeds.
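    A quick way to confirm the IPv4-route theory is to ask the kernel which route it would actually pick - a sketch (the documentation address stands in for the real one, as above):

        # which route would IPv4 traffic to the host take?
        ip -4 route get 198.51.100.1

        # is there a rejecting or missing IPv4 route?
        ip -4 route show

        # compare with the IPv6 table that telnet is using
        ip -6 route show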

    Read the article

  • Set up ad hoc wireless connection between Windows Vista and Mac OS X

    - by Skarab
    I have the following problem: Windows Vista does not connect to an ad hoc wireless network created on my MacBook. I have tried to create a secured (with a 40-bit key) and an unsecured network, but Windows Vista still has problems connecting. Windows Vista informs me - after 5 minutes of attempts - that setting up the connection with my ad hoc network took too much time. My question: do I need to configure some settings on Vista to connect it to my MacBook? Maybe it is a problem with DHCP? Edited: I have tried the other way around: http://superuser.com/questions/202890/set-up-an-adhoc-network-in-windows-vista-to-connect-to-and-share-the-internet-con

    Read the article

  • Wireless is not currently enabled

    - by ikartik90
    I have an HP Pavilion TX2000 tablet PC running Windows 7. I used to access the Internet through my D-Link DIR 615 wireless router on the tablet, and it worked quite fine until one day, when I hibernated Windows 7, the wireless went off, and the problem persists despite my best efforts to clear it. I checked whether the router works fine, and it does, as my iPod still catches the wireless signal. On the other hand, when I checked my device manager, I realized that I now had no wireless driver. I checked HP's website for one, but ironically even they didn't have Windows 7 wireless drivers for my tablet. Please help me find a solution to this problem. Further queries will be entertained as frequently as possible. Thanks.

    Read the article

  • Programs minimized for a long time take a long time to "wake up"

    - by bart
    I'm working in Photoshop CS6 and multiple browsers a lot. I'm not using them all at once, so some applications sit minimized to the taskbar for hours or days. The problem is, when I try to restore them from the taskbar, it sometimes takes longer than starting them fresh! Photoshop especially feels really weird for many seconds after finally showing up: slow, unresponsive, and sometimes it even freezes totally for a minute or two. It's not a hardware problem, as it's been like that for as long as I can remember, on all my PCs. Would I also notice it after upgrading my HDD to an SSD and adding RAM (my main PC currently holds 4 GB)? Could you guys with powerful PCs / Macs tell me - does it also happen to you? I guess OSes somehow "focus" on active software and move resources away from programs that run but are not used. Is it possible to somehow set RAM / CPU / HDD priorities for, let's say, Photoshop, so it won't slow down after a long period of inactivity?

    Read the article

  • this operation has been canceled due to restrictions in effect on this computer

    - by Dan
    I have this HUGELY irritating problem on Windows 7 (x64). Whenever I click on ANY link (in a Word document, Excel or Outlook), I get an alert box with the message: "This operation has been canceled due to restrictions in effect on this computer." I have been scouring my settings and the internet for a solution, but to no avail. Has anybody else encountered this problem? It even happens when I click anchors in Word documents, i.e. I can't even click on an entry in a Table of Contents to go to the appropriate page - I get the same error then. Is this a Windows 7 thing? Any way to turn this off?

    Read the article

  • Munin does not show Apache/mySQL stats in web view

    - by Chris
    I'm facing a very strange problem. I just set up Munin on a fresh Ubuntu slice with a common LAMP stack. Everything works great, except that Munin just does not show the Apache/MySQL stats in the web view. Everything else in the web view works great; Apache works, MySQL works. I even tried calling the plugins via the console: sudo munin-run apache_accesses - and it works fine. AFAIK the Munin log files are not reporting any problems. My only hint: when I run munin-run without sudo, it gives me a "Permission denied" - could this be the problem?

    Read the article

  • POST data not being received

    - by Alexander
    I've got an iPhone app that is supposed to send POST data to my server to register the device in a MySQL database, so we can send notifications etc. to it. It sends its unique identifier, device name, token, and a few other small things like passwords and usernames as a POST request to our server. The problem is that sometimes the server doesn't receive the data. And by this I mean it's not just receiving blank values for the POST inputs; it's not receiving ANY POST data at all. I am logging all POST inputs to my server into some log files, and when the script that relies on the POST data from the device fails (detects no data), I notice that it's because NO POST data was sent. Is this a problem on the server, like refusing data or something, or does this have to be on the client's side? What could be causing this?

    Read the article

  • Detecting/Reactivating serial port that becomes inactive on Ubuntu Linux 10.10

    - by Tom
    I am using a USB-to-serial port to communicate with some old equipment (using my code built upon the Boost Asio library - I think my code is fine, because it works almost all of the time). Every so often (maybe once every few days) the communication with my device stops, with no error at all - the device just does not respond. I then restart my computer and everything is fine again. Does anyone know where I can start to analyse this problem? My serial port loads up fine (as /dev/ttyUSB0) and the Boost library does not throw an error. The device just does not respond. If I restart the device, no change - only when I restart my PC does it make a difference. I have also tried unplugging and replugging the USB connector. Does anyone know what gets cleared in the reboot (w.r.t. the serial device), or what I can probe when the problem happens again (rather than just restarting and hoping)?
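    When it happens again, it may be worth resetting just the USB device instead of rebooting the whole machine - a sketch of the usual checks (the "1-1" bus/port ID is a placeholder; the real ones are listed in the driver directory):

        # look for USB/serial errors around the time of the hang
        dmesg | grep -i -e ttyUSB -e usb

        # force the converter to re-enumerate without a reboot (as root)
        ls /sys/bus/usb/drivers/usb/
        echo -n '1-1' > /sys/bus/usb/drivers/usb/unbind
        echo -n '1-1' > /sys/bus/usb/drivers/usb/bind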

    Read the article

  • Remote desktop session ends abruptly with a "protocol error"

    - by Jon
    Intermittently we get a problem where a Remote Desktop session is disconnected with the error message “Because of a protocol error, this session will be disconnected. Please try connecting to the remote computer again.” We are getting this with one server only, which is running Windows Server 2008; we are connecting from Windows 7 clients. The session itself stays running, you just get disconnected, and you can try to reconnect. Sometimes you get in for a while, then it will kick you out. We have tried connecting using CoRD on a Mac and this works fine, so it's not as if the session itself is corrupted. One problem is that there are some critical applications running under the session (I know, let's not discuss the idiocy of that), so we cannot reset the session in any way during the working day - so any diagnostics must have minimum impact. Thanks, Jon

    Read the article
