Search Results

Search found 27958 results on 1119 pages for 'failed to load viewstate'.

  • Where should CentOS users get /usr/share/virtio-win/drivers for virt-v2v?

    - by Philip Durbin
    I need to migrate a number of virtual machines from VMware ESX to CentOS 6 KVM hypervisors. Ultimately, I wrote an RPM spec file that solved my problem at https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec but I'm not sure if there's another RPM in base CentOS or EPEL (something standard) I should be using instead. Originally, I was getting this "No root device found in this operating system image" error when attempting to migrate a Windows 2008 VM:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
        -os transferimages --network default my-vm
        virt-v2v: No root device found in this operating system image.

    I solved this with a simple yum install libguestfs-winsupport, since the docs say: "If you attempt to convert a virtual machine using NTFS without the libguestfs-winsupport package installed, the conversion will fail." Next I got an error about missing drivers for Windows 2008:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
        -os transferimages --network default my-vm
        my-vm_my-vm: 100% [====================================]D
        virt-v2v: Installation failed because the following files referenced in the configuration file are required, but missing: /usr/share/virtio-win/drivers/amd64/Win2008

    I resolved this by grabbing an ISO from Fedora at http://alt.fedoraproject.org/pub/alt/virtio-win/latest/ (as recommended by http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers) and building an RPM from it with this spec file: https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec Now virt-v2v exits without error:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
        -os transferimages --network default my-vm
        my-vm_my-vm: 100% [====================================]D
        virt-v2v: my-vm configured with virtio drivers.
        [root@kvm01b ~]#

    Now, my question is: rather than the virtio-win RPM from the spec file I wrote, is there some other, more standard RPM in base CentOS or EPEL that will resolve the error above? Here's a bit more detail about my setup:

        [root@kvm01b ~]# cat /etc/redhat-release
        CentOS release 6.2 (Final)
        [root@kvm01b ~]# rpm -q virt-v2v
        virt-v2v-0.8.3-5.el6.x86_64

    See also Bug 605334 – VirtIO driver for windows does not show specific OS: Windows 7, Windows 2003
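
    A minimal sketch of the working sequence described above (the rpmbuild paths assume a standard ~/rpmbuild tree with the ISO already placed in SOURCES, and the noarch package name is an assumption based on the driver-only payload):

        # NTFS support, so virt-v2v can inspect the Windows guest
        yum install -y libguestfs-winsupport
        # build and install a virtio-win RPM from the Fedora ISO using the linked spec
        rpmbuild -ba virtio-win.spec
        yum localinstall ~/rpmbuild/RPMS/noarch/virtio-win-*.rpm
        # then re-run the conversion
        virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
            -os transferimages --network default my-vm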

  • Wicked VNC Viewer acting out on Windows desktop and CentOS 6.3 server

    - by Johnny Lee
    What we have here is that the only way to open the TightVNC viewer on this Windows XP desktop is to have a TigerVNC viewer open on the CentOS 6.3 server desktop. I know it sounds really weird and we're looking for hints to make it go away. Any ideas? Here is the recipe: We are using PuTTY on the Windows desktop as SSH (Secure Shell) and a terminal emulator. We open and log in to PuTTY, then open and log in to the TightVNC viewer. After many failed attempts, much Googling, and lots of reading to no avail, I decided to open the TigerVNC viewer on the CentOS 6.3 server by way of the GNOME desktop Application menu -- Internet tab. After opening and logging into the TigerVNC viewer on the CentOS 6.3 server, voila!! We have a remote desktop opened on the server. But an interesting discovery was that the TigerVNC viewer on the server showed a request that was not on the server desktop itself. This turned out to be a login request, and once the password was entered, it opened the TightVNC viewer on the Windows desktop. Weird, huh? Why is that password request showing up on the CentOS 6.3 server in the TigerVNC viewer, as opposed to showing up on the Windows desktop, when logging in to the server using the TightVNC viewer?
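
    For what it's worth, the usual way to run VNC alongside PuTTY is an SSH tunnel; a sketch in OpenSSH syntax (display :1 / port 5901 is an assumption -- use whatever display the server's VNC service exports; in PuTTY the same mapping goes under Connection > SSH > Tunnels):

        # forward local port 5901 to the VNC display on the server
        ssh -L 5901:localhost:5901 user@centos-server
        # then point the TightVNC viewer at localhost:5901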

  • Apache, Tomcat 5 and problem with HTTP basic auth

    - by Juha Syrjälä
    I have set up Tomcat with a webapp that uses HTTP basic auth in some of its URLs. There is an Apache server in front of the Tomcat. I have set up Apache as a proxy like this (all traffic should go directly to Tomcat):

        /etc/httpd/conf.d/proxy_ajp.conf:
        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        ProxyPass / ajp://localhost:8009/
        ProxyPassReverse / ajp://localhost:8009/

    There is a webapp installed to the root of Tomcat (ROOT.war), so I should be able to use http://localhost/ to access my webapp. But it is not working with HTTP basic auth. The problem is that everything works until I try to access a URL that is protected by HTTP basic auth; URLs without authentication work just fine. When accessing a protected URL via Apache I get an error message from Apache. If I access the same URL directly from Tomcat, everything works just fine. I am getting this in the Apache error log:

        [Wed Sep 01 21:34:01 2010] [error] proxy: dialog to [::1]:8009 (localhost) failed

    The access log looks like this:

        ::1 - - [01/Sep/2010:21:34:01 +0300] "GET /protected_path/ HTTP/1.0" 503 360 "-" "w3m/0.5.2"

    I am using: Fedora release 13 (Goddard), httpd-2.2.16-1.fc13.x86_64, tomcat5-5.5.27-7.4.fc12.noarch. The basic auth is implemented in the webapp (not in Apache or Tomcat). The webapp is actually implemented in Scala/Lift, but that shouldn't matter. The auth works if I access Tomcat directly. This is the error message I am getting from Apache -- it is curious that the title is "Unauthorized" and not "Internal error":

        Unauthorized
        The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
        Apache/2.2.16 (Fedora) Server at my.server.name.com Port 80

    It could be that Apache is seeing something other than a 200 OK response and thinks that it is an error, when it actually should pass the received 401 Unauthorized response directly to the browser. If this is the problem, how do I fix it?
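
    One hedged thing to try, given that the failing dialog in the error log goes to [::1]:8009 (IPv6) while Tomcat's AJP connector may only be listening on IPv4: pin the proxy to 127.0.0.1 rather than localhost. A sketch:

        # /etc/httpd/conf.d/proxy_ajp.conf
        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        ProxyPass / ajp://127.0.0.1:8009/
        ProxyPassReverse / ajp://127.0.0.1:8009/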

  • Task Scheduler not able to execute .vbs successfully

    - by Django Reinhardt
    Hi there, got this weird problem, which will hopefully have an obvious solution for some enlightened soul: We have several daily tasks we run via a .vbs script on our server (through the Task Scheduler), and for months it has been fine, but recently we've hit a problem. The .vbs script stopped executing successfully... but oddly it worked fine when run manually! The error given in these circumstances was always "Timeout". We thought we'd try a little creative thinking and run the .vbs another way: via a .bat file. Again we hit weird issues, but with a little more debugging information this time around. The .bat file is nothing more than:

        CScript "C:\location\script.vbs" > Log.txt

    But the Task Scheduler fails with the following error:

        0x1: An incorrect function was called or an unknown function was called.

    The log.txt file says:

        CScript Error: Initialization of the Windows Script Host failed.
        (Not enough storage is available to process this command. )

    But get this: the .bat file executes perfectly (vbs script and all) if it's executed with a double click! There's only a problem when it's run by Task Scheduler. What the hell? We're running Windows Server 2008 R2 (x64) and yes, the Task Scheduler's results are the same whether the user is logged in or not. Also, the user that can run the scripts successfully manually is the same user that runs the scripts in Task Scheduler. Thanks for any help with this weird problem!
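
    A sketch of one thing worth trying: scripts that work on double-click but fail under the scheduler often lack a working directory, so recreate the task with an explicit one (the task name "DailyVbs" and the 06:00 schedule are placeholders):

        schtasks /Create /TN "DailyVbs" /SC DAILY /ST 06:00 ^
          /TR "cmd /c cd /d C:\location && cscript //Nologo script.vbs > Log.txt"

    In the Task Scheduler UI the equivalent is filling in the "Start in (optional)" field of the action.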

  • Outlook users connected to exchange can email from other email accounts

    - by Sherriffwoody
    We have found an issue on our systems whereby an Outlook user (both 2007 and 2010) connected to our Exchange server (2007) can send emails as other users using the following steps: Within Outlook, click <New Email>, then select the <From> button to show a list of accounts Outlook contains -- but it also shows the option <Other Email Address>. This brings up a small dialog box with another button which, when selected, allows the user to select an email address from their contacts or from Active Directory. The user in most cases can select any email address within Active Directory and send an email as if it were coming from that selected address. It seems not everyone has this ability, and I'm guessing it is something to do with settings in Exchange or AD (version 6) -- or is there a group policy that can be implemented to stop users being able to do this? We have no idea what allows this and I have failed to find anything using Dr Google. No one has set up delegates within Outlook, but it does seem to be something similar. Does anyone know how to lock this down? Thanks in advance
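
    A sketch for auditing this from the Exchange 2007 Management Shell -- "jsmith" is a hypothetical mailbox; behaviour like this usually comes down to who holds the Send-As extended right:

        # list who has been granted Send-As on a mailbox
        Get-ADPermission -Identity "jsmith" | Where-Object { $_.ExtendedRights -like "*Send-As*" }
        # remove an unwanted grant
        Remove-ADPermission -Identity "jsmith" -User "DOMAIN\someuser" -ExtendedRights "Send-As"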

  • Problems building nodejs on MacOS Snow Leopard

    - by mrwooster
    I am having trouble building nodejs on MacOS Snow Leopard. I think it might have something to do with my PATH variable not being set correctly for the developer tools location. For some reason, the developer tools (gcc, g++, make etc.) are all stored in /Developer/usr/bin. I added it to my PATH variable as follows:

        $ export PATH=$PATH:/Developer/usr/bin
        $ echo $PATH
        /opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/X11/bin:/Developer/usr/bin

    When I try to configure, it complains about not finding OpenSSL -- OK, not a big problem. So I try with --without-ssl:

        $ ./configure --without-ssl
        Checking for program g++ or c++       : /Developer/usr/bin/g++
        Checking for program cpp              : /Developer/usr/bin/cpp
        Checking for program ar               : /usr/bin/ar
        Checking for program ranlib           : /Developer/usr/bin/ranlib
        Checking for g++                      : ok
        Checking for program gcc or cc        : /Developer/usr/bin/gcc
        Checking for gcc                      : ok
        Checking for library dl               : yes
        Checking for library util             : yes
        Checking for library rt               : not found
        --- libeio ---
        Checking for library pthread          : yes
        Checking for function pthread_create  : not found
        /Users/Guy/git_src/node/node/deps/libeio/wscript:13: error: the configuration failed (see '/Users/Guy/git_src/node/node/build/config.log')

    Anyone know how I can get round this? I am suspicious that it might be something to do with the PATH or another ENV variable, but not sure. Thanks G
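
    A hedged sketch of one way to rule the toolchain in or out: point the build at the Developer tools explicitly, so nothing from /opt/local (MacPorts) shadows them, and check config.log for the test that actually failed:

        export CC=/Developer/usr/bin/gcc
        export CXX=/Developer/usr/bin/g++
        ./configure --without-ssl
        less build/config.log   # the real error from the pthread_create probe is recorded here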

  • Compiling Mono on Fedora 7

    - by Gary
    Trying to install Mono from source on Fedora 7. Running

        # ./configure --prefix=/opt/mono

    works fine, but doing the make

        # make ; make install

    ends up with the following:

        Makefile:93: warning: overriding commands for target `csproj-local'
        ../build/executable.make:131: warning: ignoring old commands for target `csproj-local'
        make install-local
        make[6]: Entering directory `/opt/mono-2.6.4/mcs/mcs'
        Makefile:93: warning: overriding commands for target `csproj-local'
        ../build/executable.make:131: warning: ignoring old commands for target `csproj-local'
        MCS [basic] mcs.exe
        typemanager.cs(2047,40): error CS0103: The name `CultureInfo' does not exist in the context of `Mono.CSharp.TypeManager'
        Compilation failed: 1 error(s), 0 warnings
        make[6]: *** [../class/lib/basic/mcs.exe] Error 1
        make[6]: Leaving directory `/opt/mono-2.6.4/mcs/mcs'
        make[5]: *** [do-install] Error 2
        make[5]: Leaving directory `/opt/mono-2.6.4/mcs/mcs'
        make[4]: *** [install-recursive] Error 1
        make[4]: Leaving directory `/opt/mono-2.6.4/mcs'
        make[3]: *** [profile-do--basic--install] Error 2
        make[3]: Leaving directory `/opt/mono-2.6.4/mcs'
        make[2]: *** [profiles-do--install] Error 2
        make[2]: Leaving directory `/opt/mono-2.6.4/mcs'
        make[1]: *** [install-exec] Error 2
        make[1]: Leaving directory `/opt/mono-2.6.4/runtime'
        make: *** [install-recursive] Error 1

    I've been following the instructions at http://ruakuu.blogspot.com/2008/06/installing-and-configuring-opensim-on.html. This is all in an effort to get OpenSimulator running.
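
    The CS0103 error comes from the bootstrap C# compiler failing while building itself. A heavily hedged sketch of a workaround often suggested for mcs bootstrap failures in this era of Mono -- refreshing the "monolite" bootstrap compiler (assuming the target exists in the 2.6.4 tree):

        cd /opt/mono-2.6.4
        make get-monolite-latest
        make && make install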

  • Lion server profile manager, device enrollment doesn't work

    - by user964406
    I am in the process of setting up Lion Server's Profile Manager to manage iPads on our local school network. I don't need to manage them while they are outside the network. I have successfully had it working on my personal network. The school network is behind a proxy which we have no control over. I can get the iPads to view the mydevices page and install a trust cert, and I have managed to get an iPad to successfully install the remote management profile. After this the Profile Manager bugs out. It will list the active task of 'new device (sending)' but it's unable to complete the task. If I click on the device in Profile Manager and try any of the actions, they all fail to complete. I am using the auto-generated certificates, and this works if I bring the server and iPad outside of the school network. Shortly after device enrollment, the system log on the Lion server reports the following (I have replaced the actual IP address with INTERNALIP):

        Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
        Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
        Jun 4 08:40:53 mini applepushserviced[778]: Got connection error Error Domain=NSPOSIXErrorDomain Code=1 "The operation couldn\u2019t be completed. Operation not permitted" UserInfo=0x7fa483b1a340 {NSErrorFailingURLStringKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS, NSErrorFailingURLKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS}
        Jun 4 08:40:53 mini applepushserviced[778]: Failed to get client cert on attempt 2, will retry in 15 seconds

    Does anyone have any ideas on how to get past this stage? Thanks in advance.
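
    The denials suggest the push traffic is being forced through the proxy and blocked. A sketch for checking, from the server, whether the standard Apple push endpoints are reachable at all (hostnames and ports are the commonly documented APNs ones, listed here as assumptions to verify with the network admin):

        nc -zv gateway.push.apple.com 2195      # APNs gateway (server side)
        nc -zv feedback.push.apple.com 2196     # APNs feedback service
        nc -zv 1-courier.push.apple.com 5223    # device-side push connections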

  • Solaris 10 zlogin logs in, logs out immediately

    - by Spelevink
    On a SPARC v445 running Solaris 10 9/10, I had to rebuild rpool and reattach the three existing mirrored zpools on the other existing disks, with their ZFS filesystems and non-global zones intact. The zones had been configured with zonecfg -z ZONENAME create etc., and are now online after running zoneadm -z ZONENAME attach -U and then simply booting from the installed state -- but I cannot zlogin to any of the zones except one. It shows that I am logged in, then a blank line, then immediately logged out again. When I try to log in using zlogin -C ZONENAME I cannot; the error message is:

        May 15 15:43:46 <hostname> login: open_module: stat(/usr/lib/security/pam_mkhomedir.so.1) failed: no such file or directory.
        May 15 15:43:46 <hostname> login: load_modules: cannot open module /usr/lib/security/pam_mkhomedir.so.1

    But /usr/lib/security/pam_mkhomedir.so.1 does not exist, and it does not exist on my other servers either, yet those zones are accessible using zlogin. I can only zlogin to the zones with zlogin -S ZONENAME. What to do next? Thank you.
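
    A sketch of where to look next: use the safe-mode login that still works and find the PAM line that references the missing module (on Solaris 10 the zone's PAM stack normally lives in /etc/pam.conf):

        zlogin -S ZONENAME
        grep pam_mkhomedir /etc/pam.conf
        # commenting out (or removing) that line -- or installing the module the
        # config expects -- should let normal zlogin/login sessions start again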

  • Need help with some IIS7 web.config compression settings.

    - by Pure.Krome
    Hi folks, I'm trying to configure my IIS7 compression settings in my web.config file. I'm trying to enable HTTP 1.0 requests to be gzipped. MSDN has all the info about it here. Is it possible to have this config info in my own website's web.config file? Or do I need to set it at an application level? Currently, I have this code in my web.config:

        <system.webServer>
          <urlCompression doDynamicCompression="true" dynamicCompressionBeforeCache="true" />
          <httpCompression cacheControlHeader="max-age=86400" noCompressionForHttp10="False" noCompressionForProxies="False" sendCacheHeaders="true" />
          ... other stuff snipped ...
        </system.webServer>

    It's not working :( HTTP 1.1 requests are getting compressed, just not 1.0. That MSDN page above says that it can be used in:

        Machine.config
        ApplicationHost.config
        Root application Web.config
        Application Web.config
        Directory Web.config

    So, can we set these settings on a per-website basis, programmatically in a web.config file? (This is an Application Web.config file...) What have I done wrong? Cheers :)

    EDIT: I was asked how I know HTTP 1.0 is not getting compressed. I'm using the Failed Request Tracing rules, which report back:

        DYNAMIC_COMPRESSION_START
        DYNAMIC_COMPRESSION_NOT_SUCCESS
          Reason: 3
          Reason: NO_COMPRESSION_10
        DYNAMIC_COMPRESSION_END
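
    A hedged sketch of the usual explanation: <httpCompression> is locked to ApplicationHost.config by default, so a site-level web.config copy of it is silently ineffective (while <urlCompression> is delegatable, which would explain the 1.1/1.0 split). Unlocking the section on the server -- run as administrator, and not available on shared hosting -- lets the site override it:

        %windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/httpCompression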

  • Mounting NAS drive with cifs using credentials file through fstab does not work

    - by mahatmanich
    I can mount the drive in the following way, no problem there:

        mount -t cifs //nas/home /mnt/nas -o username=username,password=pass\!word,uid=1000,gid=100,rw,suid

    However, if I try to mount it via fstab I get an error. The fstab entry is:

        //nas/home /mnt/nas cifs iocharset=utf8,credentials=/home/username/.smbcredentials,uid=1000,gid=100 0 0

    The .smbcredentials file looks like this:

        username=username
        password=pass\!word

    Note the ! in my password, which I am escaping in both instances. I also made sure there are no EOL characters in the file, using :set noeol binary from "Mount CIFS Credentials File has Special Character". chmod on the .smbcredentials file is 0600, chown is root:root, and the file is under ~/. Why can I get in the one way but not with fstab? I am running Ubuntu 12 LTS, and mount.cifs -V gives me mount.cifs version: 5.1. Any help and suggestions would be appreciated.

    UPDATE: /var/log/syslog shows the following:

        [26630.509396] Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE
        [26630.509407] CIFS VFS: Send error in SessSetup = -13
        [26630.509528] CIFS VFS: cifs_mount failed w/return code = -13

    UPDATE 2: Debugging with strace. Mounting through fstab:

        strace -f -e trace=mount mount -a
        Process 4984 attached
        Process 4983 suspended
        Process 4985 attached
        Process 4984 suspended
        Process 4984 resumed
        Process 4985 detached
        [pid 4984] --- SIGCHLD (Child exited) @ 0 (0) ---
        [pid 4984] mount("//nas/home", ".", "cifs", 0, "ip=<internal ip>,unc=\\\\nas\\home"...) = -1 EACCES (Permission denied)
        mount error(13): Permission denied
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        Process 4983 resumed
        Process 4984 detached

    Mounting through the terminal:

        strace -f -e trace=mount mount -t cifs //nas/home /mnt/nas -o username=user,password=pass\!wd,uid=1000,gid=100,rw,suid
        Process 4990 attached
        Process 4989 suspended
        Process 4991 attached
        Process 4990 suspended
        Process 4990 resumed
        Process 4991 detached
        [pid 4990] --- SIGCHLD (Child exited) @ 0 (0) ---
        [pid 4990] mount("//nas/home", ".", "cifs", 0, "ip=<internal ip>,unc=\\\\nas\\home"...) = 0
        Process 4989 resumed
        Process 4990 detached
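
    A likely culprit, offered as a hedged guess that fits the NT_STATUS_LOGON_FAILURE: the backslash is needed on the command line only because the shell would otherwise interpret the !. The credentials file is read by mount.cifs directly -- no shell involved -- so the backslash there becomes a literal character in the password. A sketch of the corrected .smbcredentials:

        username=username
        password=pass!word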

  • Experience with AMCC 3ware 9650se raid cards? Ours seems dead

    - by antiduh
    We have an 8-port 3ware 9650se raid card for our main disk array. We had to bring the server down for a pending power outage, and when we turned the machine back on, the raid card never started. This card has been in service for a couple of years without problems, and was working up until the shutdown. Now, when we turn the machine on, the BIOS option ROM that normally kicks in before the bootloader doesn't show up, none of the drives start, and when the OS tries to access the device, it just times out. The firmware on it has been upgraded in the past, so it's possible we've hit some sort of firmware bug. We're using it in a Silicon Mechanics R272 machine with Gentoo for the OS. The OS eventually boots, but alas, without the card. We've ordered a new one, but I'm worried that if we replace the card it won't recognize the existing array. Has anybody performed a card swap before? Any help would be greatly appreciated.

    Edit: These are the kernel errors we see:

        3ware 9000 Storage Controller device driver for Linux v2.26.02.012.
        3w-9xxx 0000:09:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
        3w-9xxx 0000:09:00.0: setting latency timer to 64
        3w-9xxx: scsi0: ERROR: (0x06:0x000D): PCI Abort: clearing.
        3w-9xxx: scsi0: ERROR: (0x06:0x001F): Microcontroller not ready during reset sequence.
        3w-9xxx: scsi0: ERROR: (0x06:0x0036): Response queue (large) empty failed during reset sequence.
        3w-9xxx 0000:09:00.0: PCI INT A disabled
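
    For what it's worth, 3ware controllers keep the array configuration on the disks themselves (as I understand it), so a same-model replacement should re-detect the unit. A sketch for verifying after the swap, assuming 3ware's tw_cli utility is installed:

        tw_cli show        # list detected controllers
        tw_cli /c0 show    # units and drives on controller 0; the existing
                           # array should appear here with its unit intact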

  • Network corruption - corrupt downloads, corrupt streams, etc.

    - by rfrankel
    I've been having some problems with my home LAN. Downloaded executables won't run, my remote desktop sessions keep getting interrupted due to encryption errors, Flash video streams show visible corruption (both Hulu and YouTube), and I've had a couple of downloads for which the MD5 hashes don't match. The problem has even occurred with a couple of images embedded in webpages, though that's rare enough (presumably because images are relatively small files). I've had this problem across two Windows machines and a Mac, so it's neither machine-specific nor at the app or OS level. Comcast claims it's nothing to do with them, and my Linksys/Cisco RV016 router is out of warranty, so I have no access to official support. When I log into my router, it shows no error packets or dropped packets received. I plugged a laptop directly into the router and was able to download a 5.5 MB file and verify its MD5 hash, which is not proof that the problem is downstream of the router, but makes it seem quite likely, since I failed to download the same file several times from two desktops (one Mac, one Windows). Could this be a wiring problem? If so, is there any clever/elegant way to determine which wiring is faulty with just software? If I can avoid tracing all the wires throughout my entire house it would make my life quite a bit easier.
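
    A sketch of a software-only way to exercise each link, assuming iperf can be installed on two machines at a time: run it across one cable segment at a time and watch for errors or wildly inconsistent throughput. (Corruption that gets past TCP's checksums can also point at a flaky NIC or driver rather than cabling, so it's worth rotating ports and NICs through the test too.)

        iperf -s                             # on machine A
        iperf -c <machine-A-ip> -d -t 60     # on machine B: bidirectional, 60 seconds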

  • saslauthd authentication error

    - by James
    My server has developed an unexpected problem where I am unable to connect from a mail client. I've looked at the server logs and the only thing that looks to identify a problem are events like the following:

        Nov 23 18:32:43 hig3 dovecot: imap-login: Login: user=, method=PLAIN, rip=xxxxxxxx, lip=xxxxxxx, TLS
        Nov 23 18:32:55 hig3 postfix/smtpd[11653]: connect from xxxxxxx.co.uk[xxxxxxx]
        Nov 23 18:32:55 hig3 postfix/smtpd[11653]: warning: SASL authentication failure: cannot connect to saslauthd server: No such file or directory
        Nov 23 18:32:55 hig3 postfix/smtpd[11653]: warning: xxxxxxx.co.uk[xxxxxxxx]: SASL LOGIN authentication failed: generic failure
        Nov 23 18:32:56 hig3 postfix/smtpd[11653]: lost connection after AUTH from xxxxxxx.co.uk[xxxxxxx]
        Nov 23 18:32:56 hig3 postfix/smtpd[11653]: disconnect from xxxxxxx.co.uk[xxxxxxx]

    The problem is unusual, because just half an hour previously at my office, I was not being prompted for a correct username and password in my mail client. I haven't made any changes to the server, so I can't understand what would have happened to make this error occur. Searches for the error messages yield various results, with 'fixes' that I'm uncertain of (obviously I don't want to make it worse or fix something that isn't broken). When I run

        testsaslauthd -u xxxxx -p xxxxxx

    I get the following result:

        connect() : No such file or directory

    But when I run

        testsaslauthd -u xxxxx -p xxxxxx -f /var/spool/postfix/var/run/saslauthd/mux -s smtp

    I get:

        0: OK "Success."

    I found those commands on another forum and am not entirely sure what they mean, but I'm hoping they might give an indication of where the problem might lie. If it makes any difference, I'm running Ubuntu 10.04.1, Postfix 2.7.0 and Webmin/Virtualmin.
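
    The two testsaslauthd results are actually a strong hint: saslauthd is answering on the socket under Postfix's chroot, but something is looking for it at the default path. A hedged sketch of the canonical Ubuntu arrangement, which keeps the socket where chrooted Postfix can reach it -- in /etc/default/saslauthd:

        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

    followed by:

        dpkg-statoverride --update --add root sasl 710 /var/spool/postfix/var/run/saslauthd
        adduser postfix sasl
        service saslauthd restart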

  • How do I install the pdo_mysql driver on Red Hat Enterprise Linux 6.1?

    - by Will Martin
    I have a RHEL box running PHP 5.3.3, which was installed using the binary packages provided by yum. I have installed the php-pdo package:

        # yum info php-pdo
        Loaded plugins: product-id, rhnplugin, subscription-manager
        Updating Red Hat repositories.
        Installed Packages
        Name        : php-pdo
        Arch        : x86_64
        Version     : 5.3.3
        Release     : 3.el6_1.3
        Size        : 168 k
        Repo        : installed
        From repo   : rhel-x86_64-server-6
        Summary     : A database access abstraction module for PHP applications
        URL         : http://www.php.net/
        License     : PHP
        Description : The php-pdo package contains a dynamic shared object that will add
                    : a database access abstraction layer to PHP. This module provides
                    : a common interface for accessing MySQL, PostgreSQL or other
                    : databases.

    It appears to be working correctly for SQLite databases, but not MySQL. There's no file including pdo_mysql.so in /etc/php.d, and there is no copy of pdo_mysql.so in /usr/lib64/php/modules. I'm pretty sure I just need the driver file and a line in the PHP configuration. A yum search pdo mysql didn't turn up any useful packages, and Google has failed me. If I were on Ubuntu or Debian, I'd apt-get install php5-mysql and be done with it. So... where in Red Hat land do I get a copy of pdo_mysql.so, and how do I install it properly?
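
    A sketch of what should work here: on RHEL 6 the PDO MySQL driver ships inside the php-mysql package rather than a separate pdo_mysql package, which is why the yum search came up empty:

        yum install php-mysql       # installs pdo_mysql.so and its /etc/php.d stub
        service httpd restart
        php -m | grep -i pdo        # expect pdo_mysql in the module list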

  • "Cannot perform a differential backup for database "myDb", because a current database backup does not exist"

    - by krimerd
    Hi there, I have what seems to be a pretty common problem when trying to take a differential backup. We have a SQL Server 2008 Standard (64-bit) and we use LiteSpeed v5.0.2.0 to take our backups. We take full backups once a week and a differential on a daily basis. The problem is, every time I try to take a diff backup I get the following error:

        VDI open failed due to requested abort. BACKUP DATABASE is terminating abnormally.
        Cannot perform a differential backup for database "myDb", because a current database backup does not exist.
        Perform a full database backup by reissuing BACKUP DATABASE, omitting the WITH DIFFERENTIAL option.

    The problem is that I know 100% I have a full backup, because I just double-checked. Only once was I able to take a diff backup, and that was when I took it immediately after a full backup. I have searched around and noticed that this is pretty common (although mostly with SQL 2005), and a solution a lot of people suggest, which I haven't tried yet, is to disable the SQL Server VSS Writer service. The problem with this is: #1, I think I might need this service since I am using third-party backup software, and #2, I am not sure exactly what the service does and don't want to disable it just like that. Has any of you ever experienced this problem, and how did you go about fixing it? Thank you,
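
    A sketch for checking what SQL Server itself considers the differential base, since that is tracked in msdb regardless of the backup tool (the type codes and copy-only caveat are their standard meanings, stated here as assumptions to verify in your environment):

        -- most recent backups recorded for the database
        SELECT TOP 5 database_name, type, backup_start_date, is_copy_only
        FROM msdb.dbo.backupset
        WHERE database_name = 'myDb'
        ORDER BY backup_start_date DESC;
        -- type 'D' = full, 'I' = differential; a copy-only full (or a VSS snapshot
        -- taken by another tool) does not establish a differential base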

  • Application losing Printer within Terminal Services for remote users

    - by Richard
    Question: What I need to do is have a permanent link to a printer, normally only accessible through Terminal Services (printer redirection), so that Sage Line 50 layouts can see that printer persistently, even after users have disconnected and reconnected to the Terminal Services session. Although the printer is accessible each time a user connects to the Sage server via Terminal Services, it is given a different session number each time, and the Sage layout therefore sees it as a different printer.

    History behind the question: Users connect via Terminal Services to a Sage server on a different site, using Sage Line 50 v15 on that server, and want to print invoices (Sage layouts) locally. The Sage server cannot see the users' local printers; to get around this, users rely on the printer redirection feature of Terminal Services. The individual reports can be edited to point to a specific printer by default, which means the user just has to select an invoice, click print, select the layout/report wanted, and it automatically prints that invoice to the default printer specified. The problem occurs because the layouts are edited to point to the user's local printer "Ricoh 1018d (session#)" -- note the "(session#)", as this is the user's local printer being redirected through the Terminal Services session. Users are able to print using the Sage layouts once the default printer is set up within the layout and saved, but as soon as a user disconnects from the Terminal Services session, reconnects in the morning and goes to print, the layout has lost the connection to that printer. I understand why it fails: the printer exists on a per-session basis, and the layout cannot hold on to a connection from a previous session. Thanks in advance for any assistance...

  • Smart Array P400 battery failure

    - by RobIII
    The P400 Smart Array controller in my HP DL380 G5 is indicating Battery Status: Failed, Replace Battery 1 in the System Management Homepage. The IML also indicates: POST Error: 1794 - Drive Array - Array Accelerator Battery Charge Low. I do have several replacement batteries (actually, several controllers including batteries) lying around at work, but I have never had to actually replace one. I am wondering if the battery replacement (swap) can be done hot, or if I need to take down my server to do it. I have replaced other spare parts like fans and drives before, and all of those can be replaced hot; I just don't know about the battery of the P400 Smart Array controller. Any help would be appreciated.

    EDIT 1: For those interested: straight from the horse's mouth; thanks to Frands Hansen for pointing it out here.

    EDIT 2: In the end I did just power down, to be on the safe side and because the manual says so, and replaced the battery. Couldn't be easier. I unplugged the old one from the end of the cable (not the end connected to the controller) and reconnected a spare one. The replaced battery has now been in the server for about an hour and is currently (still) recharging. I'm assuming all will end well.

  • How do I fix libdispatch problem crashing Mac OS X apps?

    - by david-ocallaghan
    In the last day I have started having a lot of brokenness on my Mac (MacBook Air running Mac OS X 10.6.2 with all software updates). Most noticeably, iTunes no longer syncs with my iPhone. It fails with a crash dialog reporting "AppleMobileDeviceHelper quit unexpectedly" and an error dialog "iTunes was unable to load dataclass information from SyncServices. Reconnect or try again later." I've attempted the fix at support.apple.com/kb/HT1747, but it failed. I've also been having problems (at first seemingly unrelated) with the horrible Cisco VPN client, which started giving me this error:

        Error 51: Unable to communicate with the VPN subsystem

    I followed the steps at www.anders.com/cms/192/CiscoVPN/Error.51:.Unable.to.communicate.with.the.VPN.subsystem which don't seem to work for me, although I can connect if I use the command line with sudo:

        sudo vpnclient connect MyProfile

    I had a look in the Console app at the diagnostic messages and I noticed a pattern: a number of apps were reporting "BUG IN CLIENT OF LIBDISPATCH". The affected programs are:

        AppleMobileBackup
        AppleMobileDeviceHelper
        Safari Webpage Preview Fetcher
        cvpnd (the Cisco VPN daemon)

    Of these, only the last is non-Apple software! The common text in the diagnostic messages is:

        Exception Type: EXC_BAD_INSTRUCTION (SIGILL)
        Exception Codes: 0x0000000000000001, 0x0000000000000000
        Crashed Thread: 1 Dispatch queue: com.apple.libdispatch-manager
        Application Specific Information:
        BUG IN CLIENT OF LIBDISPATCH: Do not close random Unix descriptors

    I'm beginning to wonder if there's a permissions problem, or corruption of an important library... I should note that I've rebooted several times and verified the disk permissions and the disk. Any help would be great!

  • Unable to boot with Windows 2008 DVD / USB Key

    - by r0ca
    Hi everyone, I am trying to install Windows 2008 Server on an HP ProLiant DL180 G5. There is no built-in DVD reader, so I need to use my LaCie USB one. When I put the disc in and boot from the USB DVD drive on the server, I get the error message:

        Boot Failed! Please insert boot media in selected boot device.

    I tried another bootable Windows CD and still no luck. So I then copied the installation DVD onto my 16GB USB key; again, it was impossible to boot from the USB key. I have two 147GB SAS 15k HDDs in the server. They are not showing in the BIOS, and I was wondering if this is a reason why nothing will boot. I am trying to find a way to deploy Windows 2008 Server on my HP server as soon as possible. If you guys have ideas, feel free to let me know :) Best regards, David.

    System information: HP ProLiant DL180 G5, quad-core 2.5, 4GB RAM, 2x 147GB SAS 15k. P.S. This is my first installation ever on SAS/SCSI HDDs. Thanks a bunch!

    Edit: Well, my bad! I purchased a new USB DVD drive and now I can install Windows 2008 Server. Thanks a bunch for your help!

  • D-LINK 2540U DSL router: Port forwarding forwards to the modem itself, not the specified IP

    - by axk
    I found a similar question, but it has no satisfactory answers. I have a D-LINK 2540U DSL router. It has a basic port forwarding configuration in the administration panel (under DNS - Virtual Servers) where you specify the external port range, protocol, internal port range and server IP address, and it is supposed to forward that port to that IP address. When I first set it up for a RealVNC connection it worked fine, just as I expected. Then I added a DynDNS entry in the router's 'Dynamic DNS' section and added an additional SSH (22) forwarding rule. The SSH forwarding also worked fine (now with the dynamic hostname, though I suppose that makes no difference as far as SSH is concerned). Then I removed the SSH rule, and after that the VNC forwarding stopped working, with the VNC client failing to connect (I tried to connect with telnet and it also failed, so it wasn't a VNC problem). After adding a rule for port 80, it turned out the router would forward on port 80 -- though not to the specified server IP but to the modem itself. At least that is what it looks like, because it gives me the administration panel when I connect to my external IP (both using a browser and plain telnet, in which case I can see that it is mini_httpd sitting on the port, which is obviously the modem's administration panel). Has anybody encountered a similar problem with port forwarding? I have tried a reset through the administration panel and restoring a backup of the settings made before I started playing with port forwarding, but neither helped. Should I do a 'hard' reset with the button on the modem? Is it any different from the administration panel's reset (Restore Default)?

  • VPN Error 619: Behind Cisco Router WRT310N

    - by ty91011
    I've researched a lot on all the forums, and this error is too generic for any of the proposed solutions to work. I'll try to give as much detail as I can, along with the solutions I've tried. I'm running a CentOS PPTP server behind a Linksys/Cisco WRT310N router. Multiple clients from outside, on different OSes, have failed with the same error 619, even with Windows Firewall turned off and antivirus disabled. I believe this is a router and IP routing issue, not a client issue. When I connect from a client on the same router as the VPN server, it works when I use the 192. network address, but doesn't work with the public IP address. I've tried telnet to port 1723 from an outside server and I get in. I've opened up the VPN port (1723) on the router, the VPN UDP port (500), and GRE (IP protocol 47), routing them to the VPN server's IP. Also, the server's router is behind a DSL modem. I had a glimmer of hope when http://www.chicagotech.net/casestudy/vpnerror619.htm suggested that the PPPoE authentication should reside on the router and not the modem, but I still came up empty. So does anybody know what the problem is?
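
    A sketch of the server-side checklist that usually matters for PPTP (CentOS end; the module names vary by kernel generation, so treat them as assumptions to adapt):

        # allow the control channel and the GRE protocol -- GRE is IP protocol 47,
        # not a TCP/UDP port, so it can't be "forwarded" as a port
        iptables -A INPUT -p tcp --dport 1723 -j ACCEPT
        iptables -A INPUT -p gre -j ACCEPT
        # PPTP conntrack/NAT helpers (older kernels: ip_conntrack_pptp / ip_nat_pptp)
        modprobe nf_conntrack_pptp
        modprobe nf_nat_pptp

    On the WRT310N side the equivalent requirement is usually a "PPTP passthrough" setting rather than a port forward for 47.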

  • "A disk read error occurred" when booting XP disk image in VirtualBox

    - by intuited
    I'm trying to boot an XP installation cloned into VirtualBox from a real drive. I'm getting the message

        A disk read error occurred
        Press Ctrl+Alt+Del to restart

    whenever* I try to boot the machine. I created the VirtualBox image from the original drive using the following method:

        $ sudo ddrescue -n /dev/sdd sdd.img logfile   # completed without errors
        $ VBoxManage convertfromraw sdd.img disk.vdi

    The original disk (and the image) contain a single NTFS partition with XP installed on it. The owner of the drive indicates that it did boot okay the last time the system made it that far. The (Pentium 4) system has a broken (enormous) heat sink, so at some point it failed to boot because it would quickly overheat and shut down. If I boot the VM from a live CD, I am able to mount its /dev/sda1 without any problems. I ran ntfsfix and didn't have any luck. I've read through the instructions on doing this. I didn't really follow them; for example, I didn't run MergeIDE before imaging, because the machine was not bootable. However, the symptom of that problem seems to be quite different. The emitted message is contained in the volume boot record of the XP partition, which leads me to suspect that this is a problem with the core operating system bootstrap procedure, and not related to anything in the registry. I don't have an XP boot CD.

    * This is not strictly true: with AMD-V enabled, the boot process appears not to make it this far and instead hangs at a black screen with a cursor.
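
    A hedged sketch of one XP-CD-free avenue: from a live CD booted inside the VM, testdisk can inspect and rebuild the NTFS volume boot record, which is where this particular message lives (testdisk must be available in the live environment; the menu names below are from memory):

        testdisk /dev/sda
        # [Analyse] to sanity-check the partition table, then
        # [Advanced] -> select the NTFS partition -> [Boot] -> compare the boot
        # sector with its backup ([Backup BS] / [Rebuild BS] as appropriate)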

  • Firefox isn't using my download manager (flash videos)

    - by John22
    I installed "Free Download Manager". I see the plugin in Tools > Add-ons (it doesn't have any options). I use several different flash video downloaders, because I haven't found one that works on every site. When I save a video with the two I tried, it is downloaded by Firefox's default download manager -- which downloads simultaneously, and which is why I installed the download manager in the first place: I need videos to download one at a time, in a prioritized queue. [I used to use FlashGot (long ago), and it worked with some download manager I had installed -- but over time it failed to see most videos. I installed FlashGot again, and it still fails to see anything but images and video ads.] Currently, I have to manually start Free Download Manager (from outside of Firefox), start the download in Firefox, stop it, copy the link location from Firefox's download menu, and then add it manually in Free Download Manager. Yuck. Do I need a different download manager that takes over (recommendations?), or did I somehow install this one wrong or miss a setting somewhere in Firefox? Thanks for any help.

  • Nginx with PAM authentication through pam_script

    - by Envek
    Has anyone set up such a configuration? It's not working for me. I've installed nginx-extras on Ubuntu 12.04 (it's built with the PAM module), and written this in the site config:

        location ^~ /restricted_place/ {
            auth_pam "Please specify login and password from main_site";
            auth_pam_service_name "nginx";
        }

    Afterwards, in /etc/pam.d/nginx:

        auth required pam_script.so dir=/path/to/my/auth_scripts

    And I wrote the simplest possible /path/to/my/auth_scripts/pam_script_auth (I've also tried complicated scripts):

        #!/bin/sh
        exit 0 # should allow anyone

    It doesn't work. The script is launched -- I've written a fully functional script that executes successfully, checks credentials, writes to its own log, returns the correct exit code, and takes noticeably long to execute. But no access is granted, only rejected. In /var/log/nginx/error.log the following record appears:

        2012/09/13 10:44:42 [alert] 1666#0: waitpid() failed (10: No child processes)

    If I instead specify in /etc/pam.d/nginx:

        auth required pam_unix.so

    and grant the www-data user the right to read /etc/shadow, Unix authorization works fine. But script auth doesn't work. I can't work out where the trouble is: in the nginx module, or in the pam_script module.
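
    A hedged sketch of an alternative worth trying before digging further: pam_exec ships with stock Linux-PAM and covers the same "run my own script" use case, which at least isolates whether pam_script is the problem. In /etc/pam.d/nginx:

        auth     required pam_exec.so expose_authtok /path/to/my/auth_scripts/pam_script_auth
        account  required pam_permit.so

    With expose_authtok the password arrives on the script's stdin and the username in the PAM_USER environment variable, so the script needs adjusting accordingly.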
