Search Results

Search found 59084 results on 2364 pages for 'local system'.

  • win7 amd64 guest in kvm does not have sound

    - by davidshen84
    Hi, my host system is Gentoo amd64 and the guest is Windows 7 amd64. The guest works, except that it has no sound. I start KVM with -soundhw ac97 and QEMU_AUDIO_DRV='alsa', and after I get into the guest I can see a 'Multimedia Audio Controller' in Device Manager, but Windows 7 cannot find a driver for it. I searched for a long time and cannot find an Intel AC'97 driver for Windows 7 amd64. I also tried -soundhw sb16 and es1370; neither of them works. Please help me fix this.
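
    For reference, such an invocation would look roughly like this; the binary name, memory size, and image file below are placeholders, since the post does not show the full command line:

        QEMU_AUDIO_DRV=alsa qemu-system-x86_64 -enable-kvm -m 2048 \
            -soundhw ac97 -hda win7-amd64.img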

  • Exchange 2010 OWA with Client Certificates

    - by Christian
    I have enabled Client Certificate Authentication for Exchange 2010 through IIS7, and the users are prompted to choose their user certificate when they log in, but they are all then presented with the following error message:

        Request Url: https://<domain_name>:443/owa/
        User host address: <server_ip_address>
        OWA version: 14.1.355.2

        Exception
        Exception type: System.NullReferenceException
        Exception message: Object reference not set to an instance of an object.

        Call stack
        Microsoft.Exchange.Clients.Owa.Core.RequestDispatcher.GetUserIdentities(OwaContext owaContext, OwaIdentity& logonIdentity, OwaIdentity& mailboxIdentity, Boolean& isExplicitLogon, Boolean& isAlternateMailbox, ExchangePrincipal& logonExchangePrincipal)
        Microsoft.Exchange.Clients.Owa.Core.RequestDispatcher.InternalDispatchRequest(OwaContext owaContext)
        Microsoft.Exchange.Clients.Owa.Core.RequestDispatcher.DispatchRequest(OwaContext owaContext)
        Microsoft.Exchange.Clients.Owa.Core.OwaRequestEventInspector.OnPostAuthorizeRequest(Object sender, EventArgs e)
        System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    The method I followed to enable certificate authentication was from this post: http://www.miru.ch/2011/04/how-to-enable-certificate-based-authentication-on-exchange-2010/ Any ideas? Google isn't being very helpful.

  • Cannot write log file 'ffmpeg2pass-0.log' for pass-1 encoding: Permission denied

    - by matt_tm
    Our PHP application is installed as 'root' on a RedHat 5/CentOS system at /var/www/html/beta/. After disabling SELinux in order to allow these scripts to execute other programs on the system (see http://serverfault.com/questions/192951/what-permissions-are-needed-to-run-a-system-command-within-a-php-script-that-wr), I now face this error in the Apache error_log:

        Cannot write log file 'ffmpeg2pass-0.log' for pass-1 encoding: Permission denied
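
    One common workaround (not mentioned in the question) is to point the pass log at a location the web server user can definitely write, using ffmpeg's -passlogfile option; a sketch with placeholder filenames:

        # pass 1: write the two-pass statistics to /tmp, which the apache user can write to
        ffmpeg -y -i input.mp4 -pass 1 -passlogfile /tmp/ffmpeg2pass -an -f null /dev/null
        # pass 2: read the same statistics back and produce the real output
        ffmpeg -i input.mp4 -pass 2 -passlogfile /tmp/ffmpeg2pass output.mp4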

  • qemu command not running directly

    - by Dr. Death
    Can I use "qemu://localhost/system" directly in place of "virsh -c qemu://localhost/system" if both of my machines are physically connected on a network, since virsh is simply creating the virtual shell between the two systems? Can I use ssh in place of the virtual shell here? I tried this, but the system reports no such file or directory for qemu, even though qemu is installed properly on my system; when I use virsh I do not get such errors. Do I need to open a Unix socket for doing this?
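
    For reference, libvirt can tunnel the connection over SSH itself, which may be what is being asked; a sketch, with the user and host names as placeholders:

        # connect to the remote libvirtd over SSH and list all domains
        virsh -c qemu+ssh://root@remote-host/system list --all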

  • PHP extension causes symbol lookup error

    - by Christian
    I installed, or rather tried to install, the NMCryptGate Extension for PHP on my Debian 5.0.8 server. I did this by compiling the sources, which came up with no error message. Calling phpinfo() I can see the extension as enabled. BUT, whenever I try calling a method from this extension I get an error logged to the Apache error log:

        /usr/sbin/apache2: symbol lookup error: /usr/lib/php5/20060613+lfs/nmcryptgate.so: undefined symbol: nmlistalloc

    What is missing? I got two packages from the software company: the PHP module sources, and some files which should, according to their paths inside the tar, go to /usr/local/bin|doc|include|lib. I moved them there without any effect. Each of these two packages has its own config file, both looking almost the same:

        # libnmcryptgate.la - a libtool library file
        # Generated by ltmain.sh - GNU libtool 1.3.4 (1.385.2.196 1999/12/07 21:47:57)
        #
        # Please DO NOT delete this file
        # It is necessary for linking the library

        # The name that we can dlopen(3)
        dlname=''

        # Names of this library
        library_names='libnmcryptgate.so.1 libnmcryptgate.so libnmcryptgate.so'

        # The name of the static archive
        old_library=''

        # Libraries that this one depends upon
        dependency_libs=' -L. -L/usr/ssl/lib -L/usr/local/ssl/lib -L/usr/local/lib -lssl -lcrypto'

        # Version information for libnmcryptgate
        current=1
        age=0
        revision=29

        # Is this an already installed library
        installed=yes

        # Directory that this library needs to be installed in
        libdir='/usr/local/lib'

    I tried several ways to get it right: moving files, symlinking, changing configurations, always followed by restarting Apache, and always without success. I guess I just have to move the files to the correct location or change the libdir inside the config files, but meanwhile I'm totally confused by the two packages: do I need both? Which config rules what? Do I have to use the libdir variable, and for what? Is anybody out there able to point me to my source of failure? Thank you in advance, regards, Christian
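
    Two standard checks usually narrow down an undefined-symbol error like this; a sketch reusing the paths above:

        # does any installed nmcryptgate library actually export nmlistalloc?
        nm -D /usr/local/lib/libnmcryptgate.so.1 | grep nmlistalloc

        # which shared libraries does the PHP module expect, and do any fail to resolve?
        ldd /usr/lib/php5/20060613+lfs/nmcryptgate.so

    If the symbol lives in a second vendor library, that library has to sit in a directory the dynamic linker searches (see ldconfig) so the PHP module can resolve it at load time.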

  • NFS automounts hang

    - by Yang
    Hi, I have been mounting NFS shares on my x86 Ubuntu with NIS/am-utils fine for a long time, but today my system got into a state where it could no longer access automounted directories and instead frequently got hung up trying to access them, returning either "Input/output error" or "Permission denied" (almost randomly), as well as "stale file handle." I can, however, manually mount that share fine. Restarting am-utils doesn't help get my system out of its funk; is there any other way of getting my system un-stuck?
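
    One common way to clear a single wedged automount point without rebooting is a lazy unmount, which detaches the path immediately and lets the kernel clean up once nothing holds it open; a sketch, with the path as a placeholder:

        # detach the stale automounted path; amd will remount it on next access
        sudo umount -l /net/fileserver/export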

  • Windows 2008 R2 Scheduled Task Not Running With Admin Privileges even if granted?

    - by j.rightly
    I have a scheduled task that is running as USER. I have checked the box "Run with highest privileges" in the scheduled task properties. The task is a PowerShell script that, among other things, reboots the system. The script executes and runs normally, but as a scheduled task, it fails to reboot the system. Here is the kicker: when I manually run the script as USER, using the exact same command line as what's in the scheduled task, the script still runs, but this time it actually reboots the system. I have UAC disabled and USER is a member of the local Admins group. The local Admins group has the right to shut down the system. Nothing in the event logs offers any clues. Why would the same script running under the same credentials work interactively but not as a scheduled task?

    UPDATE: This is too weird. When the task ran on schedule, everything worked normally.

  • Error during configuring kerberos5 using macports

    - by ario
    While trying to install libmemcached via MacPorts, I hit the following issue:

        libmemcached @0.40 +universal
        --->  Computing dependencies for libmemcached
        --->  Dependencies to be installed: cyrus-sasl2 kerberos5
        --->  Configuring kerberos5
        Error: org.macports.configure for port kerberos5 returned: configure failure: command execution failed
        Error: Failed to install kerberos5

    It tells me to look in the log for details. Here's the last bit of the log file:

        :info:configure checking for setupterm in -lcurses... no
        :info:configure checking for setupterm in -lncurses... no
        :info:configure checking for tgetent... no
        :info:configure configure: error: Could not find tgetent; are you missing a curses/ncurses library?
        :info:configure configure: error: /bin/sh './configure' failed for appl/telnet
        :info:configure Command failed: cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/work/krb5-1.7.2/src" && ./configure --prefix=/opt/local --disable-dependency-tracking --mandir=/opt/local/share/man
        :info:configure Exit code: 1
        :error:configure org.macports.configure for port kerberos5 returned: configure failure: command execution failed
        :debug:configure Error code: NONE
        :debug:configure Backtrace: configure failure: command execution failed
            while executing
        "$procedure $targetname"
        :info:configure Warning: targets not executed for kerberos5: org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
        :error:configure Failed to install kerberos5
        :debug:configure Registry error: kerberos5 not registered as installed & active.
            invoked from within
        "registry_active ${subport}"
            invoked from within
        "$workername eval registry_active \${subport}"
        :notice:configure Please see the log file for port kerberos5 for details: /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/main.log

    It seems to say it's missing ncurses. Looks like it's there though, since if I run port installed I see these:

        ncurses @5.7_0
        ncurses @5.9_1 (active)
        ncursesw @5.7_0

    Any ideas on how to get around this error?
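
    A quick check worth trying is whether the active MacPorts ncurses actually exports tgetent, followed by a verbose retry of the build; a sketch:

        # is tgetent present in the MacPorts ncurses?
        nm /opt/local/lib/libncurses.dylib | grep -i tgetent

        # clean the failed attempt and retry with verbose output to see the failing test
        sudo port clean kerberos5
        sudo port -v install kerberos5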

  • Ubuntu 12.04 glusterfs volume failed to mount at boot time

    - by user183394
    I have just set up 7 KVM guests, all running Ubuntu 12.04 LTS 64-bit Minimal server, to test out glusterfs 3.2.5 from the official Ubuntu repo. Two of them form a mirrored pair (i.e. replica 2), and five of them are clients. I am still new to this file system and would like to gain some "hands-on" experience. The setup was mostly uneventful, until I put the following into each glusterfs client's /etc/fstab:

        192.168.122.120:/testvol /var/local/testvol glusterfs defaults,_netdev 0 0

    where 192.168.122.120 is the IP address of the first "glusterfs server". If I issue either a manual mountall or a

        mount.glusterfs 192.168.122.120:/testvol /var/local/testvol

    on the CLI, mount shows that the volume is successfully imported. But once a client is rebooted, after it comes back up the volume is not mounted! I searched the Internet and found this article, but since I am not running both client and server on the same node, IMHO it's not strictly applicable. So, as a kludgy get-around, I put

        sleep 3 && mount.glusterfs 192.168.122.120:/testvol /var/local/testvol

    into each client node's /etc/rc.local. It seems to get the volume mounted on each node, as far as I can tell, but this is quite ugly, and I would appreciate a hint as to how to resolve this glusterfs-non-boot-time-mounting issue correctly. Note that I used the IP address of the first "glusterfs server" although the /etc/hosts of all nodes has been populated with their hostnames; I figured that using the IP address is more robust. --Zack
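
    If the rc.local route is kept, a retry loop that waits for the gluster server to answer is less timing-dependent than a fixed sleep; a sketch reusing the addresses above:

        # wait up to ~30s for the gluster server, then mount the volume
        for i in $(seq 1 30); do
            ping -c 1 -W 1 192.168.122.120 >/dev/null 2>&1 && break
            sleep 1
        done
        mount.glusterfs 192.168.122.120:/testvol /var/local/testvol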

  • SocketException (Timeout) only when running as scheduled task

    - by BVartin
    I'm running a C# web-scraper application (that I wrote) on a Windows Server 2003 instance, under a user belonging to the local Administrator group. When I run it within a desktop/remote-desktop session the application runs successfully, but when I schedule it to run under the same user/security context outside of the desktop session, all socket connections time out. The scheduled task calls a batch file which in turn calls the application. The Windows Server 2003 instance has a very basic configuration and isn't even connected to a domain. I cannot find anything in any firewall or security configuration which is preventing this, but maybe I have overlooked something. Can anyone be of any assistance?

        System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond X.X.X.X:443
           at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
           at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
           --- End of inner exception stack trace ---
           at System.Net.HttpWebRequest.GetResponse()

  • getting warning messages "avahi-daemon[3201]: Invalid query packet." on suse 10.1 server with raid 1

    - by jarus
    I recently replaced a failing hard drive on my software RAID 1 system running SUSE 10.1, and I am checking for warning messages in /var/log/warn. I found this message more than 12 times: "avahi-daemon[3201]: Invalid query packet." I am new to this stuff. Should I be concerned? Is there something wrong with the system? Can I check anywhere else to find out whether there is a problem with the system? Any help will be greatly appreciated. Thanks in advance.

  • Recommendation for Document Management Solution

    - by BillN
    We've just been informed by our software vendor that the custom document management system they'd written is no longer in development and will not be supported in the future, so we are looking at new document management systems. Requirements:

    - Multiple input vectors: we receive documents via e-mail, fax, scanning, and from the originating application.
    - Ability to redact or obscure data. Customers may fax an order with credit card data; we want to attach the image of the order form to the order record, but the CC data needs to be protected. Same with tax IDs. Certain users should be able to see the redacted data, but access should be logged.
    - Version control on documents. We'd like Product Development and Marketing to be able to track various versions of documents like packaging designs, but ensure that users have the latest approved version.
    - AD integration; my users don't need another password.
    - Ability to integrate with other apps. Our current system offers function keys in the order-entry system that will spawn the viewer application and open the correct document.
    - Mass import facility: we have half a terabyte of existing documents in the old system that we would like to import.
    - Retention policy: I'd like a way to have the system comply with the corporate retention policy, so that when a document of a certain type reaches a certain age, it gets deleted, or at least marked for manual deletion.

    We are a Windows Server and HP-UX shop. Does anybody have any experience with document management systems that they would like to share? Thanks.

  • How do I get a Mac to request a new IP address from another DHCP server running in parallel while Ne

    - by huyqt
    Hello, I have an interesting situation. I'm trying to use a Linux-based machine to allow Macs to NetBoot (similar to PXE boot) by running a DHCP service in parallel with the "global" DHCP server. The local DHCP server hands out IPs in a private subnet, e.g. 10.168.0.10-10.168.254.254, while the "global" DHCP server hands out IPs from the range 10.0.0.1-10.0.1.254. The local DHCP range is only supposed to be used for Preboot Execution Environment and NetBoot. The local DHCP server is something I have control over, but I do not have access to the global DHCP server. I have a filter to only allow members with the vendor strings "AAPLBSDPC/i386" and "PXEClient". PXE works fine, but NetBoot has a quirk: Apple systems that haven't been connected to the network yet can NetBoot fine, but once a machine grabs a "real" IP address from the global DHCP server, it will "save" it and request it the next time we want it to NetBoot, and the local DHCP server won't hand that address out. This is what I want:

        Mar 30 10:52:28 dev01 dhcpd: DHCPDISCOVER from 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:29 dev01 dhcpd: DHCPOFFER on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:31 dev01 dhcpd: DHCPREQUEST for 10.168.222.46 (10.168.0.1) from 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:31 dev01 dhcpd: DHCPACK on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:32 dev01 in.tftpd[5890]: tftp: client does not accept options
        Mar 30 10:52:53 dev01 in.tftpd[5891]: tftp: client does not accept options
        Mar 30 10:52:53 dev01 in.tftpd[5893]: tftp: client does not accept options
        Mar 30 10:52:54 dev01 in.tftpd[5895]: tftp: client does not accept options

    This is what I get when it already has a "stored" IP:

        Mar 30 10:51:29 dev01 dhcpd: DHCPDISCOVER from 00:25:xx:xx:xx:xx via eth1
        Mar 30 10:51:30 dev01 dhcpd: DHCPOFFER on 10.168.222.45 to 00:25:xx:xx:xx:xx via eth1
        Mar 30 10:51:31 dev01 dhcpd: DHCPREQUEST for 10.0.0.61 (10.0.0.1) from 00:25:xx:xx:xx:xx via eth1: ignored (not authoritative).

    Do you have any suggestions? It would be much appreciated.
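
    The vendor-string filter is not shown in the post; in ISC dhcpd it would presumably look something like the following sketch (subnet numbers reused from the description, everything else illustrative):

        # hand out NetBoot/PXE addresses only to matching vendor classes
        class "netboot-clients" {
            match if substring (option vendor-class-identifier, 0, 9) = "AAPLBSDPC"
                  or substring (option vendor-class-identifier, 0, 9) = "PXEClient";
        }

        subnet 10.168.0.0 netmask 255.255.0.0 {
            pool {
                allow members of "netboot-clients";
                range 10.168.0.10 10.168.254.254;
            }
        }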

  • Configure Domino to use SMTP routing and hMailServer

    - by Sébastien Lachance
    I have been trying for a couple of days to set up a Domino 8.5 server. Basically, I want everything to run inside a local network. Right now I can send email to other users in the Domino directory without any mail address. I am pretty new to all this stuff, so maybe the answer will be really obvious. What I need is to be able to send mail from somewhere else to a Domino user and have it redirected to his account. On the Domino server I also have hMailServer installed on port 25, so I configured Domino to use port 26. I followed these steps to get where I am now:

    - I set the fully qualified Internet host name to "preview.notes".
    - I changed the SMTP Listener task to Enabled, to turn on the listener so that the server can receive messages routed via SMTP routing.
    - I set up SMTP routing within the local Internet domain (http://www.h2l.com/help/help85%5Fadmin.nsf/f4b82fbb75e942a6852566ac0037f284/7f9738a49efc4f58852574d500097b01?OpenDocument).
    - I modified the person document to use the [email protected] address.
    - I'm using hMailServer (which has the local "preview.local" domain name) to send mail to [email protected].

    When sending mail I get an error telling me that the DNS is not set up correctly. Would using the Domino SMTP server instead of hMailServer solve the problem? I can telnet to the Domino SMTP server.

  • Switch Windows 8 from a hybrid MBR/GPT => GPT only on Macbook Pro Retina

    - by Sid
    I used Disk Utility + the Boot Camp wizard to set up my hard drive for Windows 8 (final MSDN). Somewhere in that process, the Apple tools turned my GPT disk into a hybrid MBR/GPT. All my 4 primary MBR partitions are used up, so when I try turning on BitLocker in Windows 8, it complains about not finding a System drive. I know that on Windows 8 the BitLocker setup tries to create the 200(?)MB system partition if it's missing. However, with all 4 partitions filled I suspect it can't create the system drive = it can't find it = it throws back an error like "BitLocker Setup could not find a target system drive. You may need to manually prepare your drive for BitLocker". I've already tried disabling hibernation, the swap file, etc. Now I'm thinking that if I were to get rid of the MBR scheme altogether, perhaps I could be alright within the GPT world without MBR's 4-primary-partitions limit. So, how can I get rid of the MBR tables on the hybrid scheme in a manner that still leaves Mac OS and Windows 8 in working condition?

    Details: the hardware is a MacBook Pro Retina. The primary MBR partitions are consumed as follows:

    1. EFI partition
    2. HFS+ partition (encrypted, therefore "Apple_CoreStorage")
    3. HFS+ partition (recovery partition, contains the unencrypted Mac bootloader)
    4. NTFS partition (Windows 8 all-in-one partition)

    diskutil list output:

        sid-mbpr:~ sid$ diskutil list
        /dev/disk0
           #:                     TYPE NAME          SIZE       IDENTIFIER
           0:    GUID_partition_scheme              *251.0 GB   disk0
           1:                      EFI               209.7 MB   disk0s1
           2:        Apple_CoreStorage               160.0 GB   disk0s2
           3:               Apple_Boot Recovery HD   650.0 MB   disk0s3
           4:     Microsoft Basic Data Win8          90.1 GB    disk0s4

    GPT vs MBR addresses:

        sid-mbpr:~ sid$ sudo gptsync /dev/rdisk0
        Password:
        Current GPT partition table:
         #      Start LBA      End LBA  Type
         1             40       409639  EFI System (FAT)
         2         409640    312909639  Unknown
         3      312909640    314179175  Mac OS X Boot
         4      314179584    490233855  Basic Data

        Current MBR partition table:
         # A    Start LBA      End LBA  Type
         1              1       409639  ee  EFI Protective
         2         409640    312909639  ac  Apple RAID
         3      312909640    314179175  ab  Mac OS X Boot
         4 *    314179584    490233855  07  NTFS/HPFS

        Status: GPT partition of type 'Unknown' found, will not touch this disk.**

    **: Ignore this message; the gptsync tool is old and doesn't understand the UUID for "Apple_CoreStorage"/FileVault2 partitions. Since the LBA addresses are alright, it's safe to ignore this message.
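
    The tool usually cited for converting a hybrid MBR back to a pure GPT is GPT fdisk (gdisk), whose experts' menu can overwrite the hybrid MBR with a fresh protective MBR; a sketch (same disk identifier as above, and back up first):

        sudo gdisk /dev/disk0
        # at the gdisk prompt:
        #   x    enter the experts' menu
        #   n    create a new protective MBR, replacing the hybrid entries
        #   w    write the table and exit

    Whether the Boot Camp side still boots afterwards depends on the machine booting Windows via UEFI rather than BIOS emulation, so that needs verifying before relying on it.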

  • Disabling kextcache on 10.5.8 and 10.6.3

    - by Jeff Kelley
    We use Radmind to manage our Mac OS X loadsets and, as such, often run into difficulty when new OS releases come out due to, among other things, updated kernel extensions. The workflow in the past (OS revisions <= 10.4) was to delete the kernel extension cache, update the extensions, and then reboot. That worked just fine, as the system would re-create missing caches on boot. In Leopard, you need to delete the caches after replacing the kernel extensions with their new versions, as the system will automatically start creating them when you replace them; the only way to ensure that you don't have invalid extensions cached is to delete the cache before rebooting. I'm looking for a way to prevent the kernel extensions cache from being re-created until the next reboot. If you modify the contents of /System/Library/Extensions/, kextcache will start up automatically. I've looked through /System/Library/LaunchDaemons/ and other places, but I can't find whatever it is that's starting kextcache. Any ideas?
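
    On 10.5/10.6 the caches can also be rebuilt on demand rather than deleted, which might slot into a radmind post-apply script; a sketch, noting that flag support differs between releases and should be verified:

        # nudge the system to notice changed extensions...
        sudo touch /System/Library/Extensions
        # ...or rebuild the caches for the boot volume explicitly (10.6)
        sudo kextcache -update-volume /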

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

        D0: 500Gb (System, Boot)
        D1: 1Tb
        D2: 500Gb
        D3: 250Gb

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows software RAID to mirror the OS. Partitions will be created as follows:

        D0 P1: 100Mb: System-Reserved Boot
        D0 P2: 50Gb:  Virtual Machine VMDKs for OS
        D0 P3: 350Gb: Data
        D1 P1: 100Mb: System-Reserved Boot
        D1 P2: 50Gb:  Virtual Machine VMDKs for OS
        D1 P3: 800Gb: Data
        D2 P1: 450Gb: Data
        D3 P1: 200Gb: Data

    This will result in:

    - a mirrored boot partition
    - a mirrored operating system
    - mirrored virtual machine OS disks
    - four partitions for data

    In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

        SYSTEM  100Mb  the small boot partition created by the Windows 7 installer (RAID-1)
        HOST    50Gb   the Windows 7 partition (RAID-1)
        GUESTS  50Gb   virtual machine guest VMDKs (RAID-1)
        VG1     900Gb  volume group consisting of a RAID-5 and two RAID-1s
        VG2     300Gb  volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for my data that is not critical, and easily recoverable, as it is not safe. Are there any particular gotchas when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

  • Desktop.ini Issues/Confusion

    - by EpicDavi
    BACKSTORY: I was out of town for a while and forgot to turn my computer off. When I came back, I saw that a desktop.ini file was on my desktop (Windows 7). I thought that was odd, because I knew it was a system file and it usually doesn't show up, since I had disabled the feature to show system files. Also, it wasn't translucent like the other system files. I went to my Control Panel and saw that "Hide protected operating system files" was indeed still enabled. This puzzled me, so I disabled the setting, and another desktop.ini appeared on my desktop, hidden as usual. So now I have two desktop.ini files on my desktop: one hidden and one not hidden. I am running an antivirus check to see if anything is going on and will give an update soon. I am pretty sure these files are harmless and could be deleted, but I would rather get another person's opinion on the subject. Thanks!

    UPDATE: I did an antivirus scan and it seems I have no problems. It is odd, because the file seems to maintain system-file properties, such as not being editable, and I have tried restarting my computer and it is still not hidden. So the question remains: what should I do with the file, and what caused it?

  • How do *you* track and document routine maintenance?

    - by Zak
    What software or system do you guys out on server fault use to remind you to do routine maintenance? How do you checklist and log the various items you are supposed to check? Do you have an internal process document? Do you have cron mail you every week with reminders to check system logs? Also, do you work on a team to do system maintenance, and if so, how do you coordinate who will do what maintenance? If you use a bug/issue tracking system to enter tasks, do you have a cron job enter recurring tasks?
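
    As a trivial illustration of the cron-mail idea mentioned above (the address, schedule, and wording are placeholders, not from the original post):

        # mail a maintenance reminder every Monday at 08:00
        0 8 * * 1 echo "Weekly maintenance: review system logs, check backups" | mail -s "Maintenance reminder" sysadmin@example.com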

  • Setting up vncserver on OpenSolaris zone

    - by k.park
    I am running OpenSolaris 5.10 and set up a sparse zone (which inherits most of the bin directories from the global zone). I ended up copying many etc and var files from the global zone, and eventually most of the stuff (firefox, gvim, etc.) works through ssh via X11. However, I am having problems setting up vncserver on the zone. This is what I get if I try to start the vncserver:

        vncext: VNC extension running!
        vncext: Listening for VNC connections on port 5911
        vncext: created VNC server for screen 0

        Fatal server error:
        could not open default font 'fixed'

        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        xsetroot: unable to open display '%zone%:11'
        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        vncconfig: unable to open display "%zone%:11"
        twm: unable to open display "%zone%:11"
        xterm Xt error: Can't open display: %zone%:11

    I already chmoded /tmp/.X11-pipe to 777, and there is no pipe in the /tmp/.X11-pipe or /tmp/.X11-unix directory. Here is my cat /etc/release:

        OpenSolaris 2009.06 snv_111b X86
        Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
        Use is subject to license terms.
        Assembled 07 May 2009
        BRAND: ipkg
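
    Since the fatal error is the missing 'fixed' font, and the vncserver script passes X server options such as -fp through to Xvnc, pointing it at the zone's font directories is a common first step; a sketch, where the font paths are guesses that depend on what the sparse zone inherits:

        # start the VNC server with an explicit font path (adjust to what exists in the zone)
        vncserver :11 -fp /usr/X11/lib/X11/fonts/misc/,/usr/X11/lib/X11/fonts/75dpi/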

  • How can I tell if a KB or newer has been installed for Windows?

    - by IguyKing
    I have a Windows system that I need to audit. The requirement is that (for example) KB2160329 has been installed on the system. I know from lots of digging that KB2731847, which we have installed in the environment, superseded the earlier KB. MSkbfiles.com works if you know the file name, such as TCPIP.SYS, but it does nothing if you are just looking for KB hotfixes. How can I feed a script the fact that I'm looking for KB2160329 and have it check for superseding patches? Or is there a website somewhere that I'm missing?

    [Edited 7 May 2014 8:54am CST] I'm looking for a way to determine that KB2731847, which is on the system, fixes the same issue (plus more) as KB2160329, which is not in the list of patches installed on the system.

  • Changing dual-monitor settings without closing the laptop lid on OS X

    - by hekevintran
    I have a Unibody MacBook hooked up to an external display. By default when I boot up, the system goes into dual-monitor mode. I want to use only the external display. The Apple-supplied solution to this problem is to close the lid of the laptop, which puts the machine into sleep mode, and then move the mouse around to wake it up again. Because the machine is woken up with the lid closed, when the displays are detected the system finds only the external one. After the system is functional again, you can open the lid if you want, and the laptop screen will be non-functional until you either tell the system to detect displays from System Preferences or you turn off the external display. Every time I want to use only the external display, I must reach my hand over to close the lid, wait for the machine to sleep, jiggle the mouse, wait for the machine to wake up, and finally open the lid again because I don't want the machine to overheat. I feel that this is very stupid to have to do. Why is there no button or menu option that says "don't use this screen"? Is there any third-party software way to change the screen setup that does not involve physically closing the lid and playing a game of "are you sleeping" in order to switch such a simple software setting? We are in the 21st century and honestly this is childish.

  • WebDAV "PROPFIND" exception in IIS due to network share?

    - by jacko
    Hi all, we're finding continuous exceptions in the event viewer on our live box, with the following exception (snippet):

        Process information:
            Process ID: 3916
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE

        Exception information:
            Exception type: HttpException
            Exception message: Path 'PROPFIND' is forbidden.

        Thread information:
            Thread ID: 14
            Thread account name: OURDOMAIN\Account
            Is impersonating: True
            Stack trace:
                at System.Web.HttpMethodNotAllowedHandler.ProcessRequest(HttpContext context)
                at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
                at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Other specs: Windows Server 2003 R2 and IIS 6.0. We've narrowed it down to occurring when people try to access shares on the box from within the network, and have discovered (we think) that it's due to the WebDAV web service extension having been disabled by past staff. The exceptions are thrown when trying to access directories that are virtual dirs in IIS, and plain old UNC network shares. What are the implications of enabling the WebDAV extension on our live web server? And will this solve our problems with the exceptions in our event log?

  • Windows 7 Slowness following Virtual PC and Visual Studio Install

    - by Elliot Hughes
    I'm running Windows 7 32-bit on a 3.2GHz Pentium D with 2GB RAM and a 1TB SATA hard drive. My system was running as fast as it ever has until I installed Visual Studio and Virtual PC a few days ago. Ever since, regardless of whether either application is running, the system has been incredibly slow. For example, Flash video plays jumpily, 3D games that used to run fine are now unplayable, and even the smallest amount of multitasking makes the system unusable. I'm confident there is no virus or anything of that sort, following scans in safe mode, and I'm fairly confident I've made no other changes to my system. Any ideas? I've run out of things to try!
