Search Results

Search found 26810 results on 1073 pages for 'fixed point'.

Page 800 of 1073

  • MacBook Pro - Aquamacs - spell check

    - by peggy Li
    I have tried to use spell check in Aquamacs. I highlighted a region of text, then clicked Edit, then Spell Check Region. I got the error message: Error: No word lists can be found for the language "en_US". I then went to the website to download the following dictionaries: CocoAspell: I just clicked the download button, and it reported that the download was successful. However, when I tested it by highlighting a text region and clicking spell check, the same error message came out. Do I need to move the downloaded .pkg to a certain place, such as the Applications folder, before I open it? Or what else do I need to do to make it work? I also downloaded the base package Aspell (for Intel) and the pre-built dictionaries (as instructed on the website), the same way as in point 1. I still got the same error message. Again, do I need to move the downloaded .pkg to a certain place, such as the Applications folder, before I open it? Or what else do I need to do to make it work? I would greatly appreciate it if someone could give me some help. Peggy Li
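
    A minimal sketch of one way to check the Aspell side from a terminal before going back to Aquamacs; the paths and the test string below are assumptions, not taken from the question:

        # Is an aspell binary on the PATH, and does it actually have an English word list?
        which aspell
        aspell dump dicts | grep -i '^en'

        # If en_US is listed, a quick end-to-end test of the dictionary itself:
        echo "helllo wrold" | aspell -d en_US list

    If the terminal test works but Aquamacs still reports no word lists, the editor is most likely not finding that aspell binary (or is using a different spell-check program), so pointing Aquamacs' ispell/aspell program setting at the binary above would be the next thing to try.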

    Read the article

  • Windows product key is valid but won't activate

    - by pnongrata
    Last month, I needed to install Windows XP (Pro Version 2002 SP3) from a reinstallation CD a co-worker gave me, with a product key the IT team told me to use. Everything installed successfully and I have been using the XP machine for the last 30 days without any problems; however, it kept reminding me to activate Windows, and of course, I never did (laziness). It now has me locked out of my machine and won't let me log in until I activate it. So I proceed to the activation screen, which asks me: Do you want to activate Windows now? I choose "Yes, let's activate Windows over the Internet now.", and click the Next button. It now asks me: Do you want to register while you are activating Windows? I choose "No, I don't want to register now; let's just activate Windows.", and click the Next button. I now see the following screen: notice how the title reads "Unauthorized product key", and how there are only 3 buttons: Telephone, Remind me later, Retry. Please note that the Retry button is disabled until I enter the full product key that IT gave me; then it enables. However, at no point do I see a Next button indicating that the product key was valid/successful. So instead, I just click the Retry button, and the screen refreshes, this time with a different title: Incorrect product key. Could something be wrong with the Windows XP reinstallation CD (do they "expire" after a certain amount of time, etc.)? Or is this the normal/typical workflow for what happens when you just have a bad product key? I ask because, after this happened, I emailed IT and they supplied me with several other product keys to try. But every time it's the same result; the same thing happens over and over again. So I guess it's possible that IT has given me several bad keys, but it's more likely something else is going on here. Any thoughts or ways to troubleshoot? Thanks in advance!

    Read the article

  • Current wisdom on SQL Server and Hyperthreading?

    - by BradC
    Lots of articles out there (see Slava Oks's original SQL 2000 article and Kevin Kline's SQL 2005 update) recommend disabling hyperthreading on SQL servers, or at least testing your specific workload before enabling it on your servers. This issue is gradually becoming less relevant as true multi-core processors replace hyperthreaded ones, but what's the current wisdom on this issue? Does this advice change at all with SQL 2005 64-bit, or SQL 2008, or Windows Server 2008? Ideally, this should be tested in advance in a staging environment, but what about servers that have already made it into production with HT enabled? How can I tell if performance issues we're experiencing might be related to HT? Is there some specific combination of perfmon counters that might point me in that direction, as opposed to all the other things I normally pursue when working on improving SQL performance? Edit: This is especially attractive because of the potential for an across-the-board improvement for some of my high-CPU servers, but the client is going to want to see something concrete that helps me identify which servers could really benefit from disabling hyperthreading. Of course, conventional performance troubleshooting is ongoing, but sometimes any little bit helps.
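
    Not an answer to the HT question itself, but a sketch of the usual first check for whether a server is CPU/scheduler-bound at all (the server name is a placeholder): the share of total waits that are signal waits, i.e. time spent runnable but waiting for a CPU.

        # Signal-wait ratio across the instance since the last service restart
        sqlcmd -S MYSQLSERVER -E -Q "SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS DECIMAL(5,2)) AS pct_signal_waits FROM sys.dm_os_wait_stats;"

    A consistently high ratio (figures around 20-25% are often quoted as the threshold) suggests scheduler/CPU pressure, which is the kind of workload where an HT-on vs. HT-off comparison is actually worth running; alongside it, the plain Processor\% Processor Time and SQLServer:SQL Statistics\Batch Requests/sec perfmon counters give a before/after baseline.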

    Read the article

  • Tomcat deployment overwrites context.xml

    - by Kristoffer
    Hi, I'm pretty new to Tomcat in general, so please point out if I've got anything wrong. My question is about updating already deployed apps using the Tomcat manager. But first things first: I'm using META-INF/Context.xml to store connection info for the database connections, so this file is unique to every server the application is deployed to. I'm not sure if this is optimal, but it's the only way I know. So when updating the application, it's important that this file doesn't get modified, because I don't want to have to go in and remake all the changes every time I update my app. For updating I'm using the Tomcat Manager, and I've tried different approaches, but everything seems to build on the process of undeploying and then deploying the new version. This way, the Context.xml gets removed/replaced by an empty Context.xml file. So my question is basically: how do I update a running webapp while leaving the Context.xml untouched? By the way, I'm running Tomcat 6.0.24.
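
    One commonly suggested direction, sketched below under the assumption of a default Tomcat 6 layout (the file paths and the jdbc/mydb resource name are placeholders): keep the per-server database definition outside the WAR, in $CATALINA_BASE/conf/context.xml, and have the application only reference it by JNDI name, so undeploy/redeploy of the WAR never touches the credentials.

        # Sketch only - paths assume a default Tomcat 6 layout. The per-server definition
        # goes inside the existing <Context> element of conf/context.xml, e.g.:
        #
        #   <Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
        #             url="jdbc:mysql://dbhost/mydb" username="app" password="secret"
        #             maxActive="20"/>
        #
        # The WAR's META-INF/Context.xml then no longer needs to carry credentials, and
        # the app looks the resource up as java:comp/env/jdbc/mydb.
        cp $CATALINA_BASE/conf/context.xml $CATALINA_BASE/conf/context.xml.bak
        vi $CATALINA_BASE/conf/context.xml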

    Read the article

  • How expensive is it to run a PC 24/7, and how do you figure that out?

    - by jasondavis
    I realize this question is difficult to answer, as it would differ based on the user's location, what their PC is doing, and what hardware it consists of, along with other factors, but I am hoping someone could give me a very rough estimate. I have always run many PCs in my home 24/7, and I am just now looking at it from a money/cost-of-electricity point of view. 1) I live in Central Florida. Can anyone guesstimate/estimate the average monthly or daily cost of running your average PC? Intel quad-core processor, 1 SSD drive for OS and programs, and 4-5 1-2 TB hard drives in a RAID setup for data. 750-watt PSU. What would your guess be? 2) Also, is there an accurate way to figure this out (non-super-technical and not confusing to a non-math person, please)? Also, I have seen those Kill-A-Watt devices; do they figure this kind of stuff out for you? 3) Does a larger PSU make your PC consume more power? Thanks for any help; you can most likely tell I am somewhat lost about this!
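
    A worked back-of-the-envelope calculation; the wattage and electricity rate below are assumptions (a Kill-A-Watt reading and the actual utility bill would replace them), since a 750 W PSU rating only caps what the machine can draw, not what it actually draws:

        # kWh per month = average_watts * 24 hours * 30 days / 1000; cost = kWh * rate
        watts=250    # assumed average draw for a quad-core box with 5-6 drives
        rate=0.12    # assumed $/kWh residential rate
        echo "$watts * 24 * 30 / 1000" | bc                    # ~180 kWh per month
        echo "scale=2; $watts * 24 * 30 / 1000 * $rate" | bc   # ~21.6 dollars per month at these assumptions

    On the third question: a larger PSU does not by itself consume more power; the draw is set by the components, apart from small efficiency differences when a big supply runs at a low load.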

    Read the article

  • Computer causing WiFi interference?

    - by Mannimarco
    I came back from college and brought my desktop computer. My family recently switched to Verizon FiOS and got a new router because of it. Unfortunately, my connection to the new WiFi network is awful, with the download speeds (tested through speedtest.net) fluctuating wildly and often dropping below 1.5 Mbps. A laptop in the same room gets 20 Mbps. I've tried a new wireless card, thinking that mine got damaged in the move home, but no luck. Here's where it gets weird: if I place the laptop near the computer, the laptop's download speeds often suffer greatly. Pulling the laptop away always fixes this. So now I'm under the impression that something in the computer (which I built a year ago and which has had zero issues up to this point) is causing an insane amount of wireless interference. Also bizarre: the upload speeds seem unaffected by this problem. On both the laptop and the desktop, upload speeds are generally around 5 Mbps. Any ideas as to what could be causing this, and how to test said theories, would be fantastic.

    Read the article

  • Debian network bridge configuration - /etc/network/interfaces

    - by Mathias
    I'm running a Lenny Xen dom0 hosting multiple virtual machines in a routed IP setup. To get an additional private subnet, I created the bridge xenbr0 in the dom0 with the following commands: brctl addbr xenbr0 ifconfig xenbr0 10.0.0.1 netmask 255.255.255.0 ifconfig xenbr0 up This works as expected, and domU interfaces are added to the bridge by Xen on VM start. My only problem is: how the heck do I specify this configuration in /etc/network/interfaces so that it remains permanent and the bridge is available after a reboot? I tried the following config, as found in a lot of tutorials: auto xenbr0 iface xenbr0 inet static address 10.0.0.1 netmask 255.255.255.0 network 10.0.0.0 broadcast 10.0.0.255 bridge_stp no I get 2 different errors, depending on whether the bridge already exists or not. If it doesn't exist: root@dom0:~# brctl show bridge name bridge id STP enabled interfaces root@dom0:~# /etc/init.d/networking restart Reconfiguring network interfaces...if-up.d/mountnfs[eth0]: waiting for interface xenbr0 before doing NFS mounts (warning). SIOCSIFADDR: No such device xenbr0: ERROR while getting interface flags: No such device SIOCSIFNETMASK: No such device SIOCSIFBRDADDR: No such device xenbr0: ERROR while getting interface flags: No such device xenbr0: ERROR while getting interface flags: No such device Failed to bring up xenbr0. done. And if it exists: root@dom0:~# brctl show bridge name bridge id STP enabled interfaces xenbr0 8000.000000000000 no root@dom0:~# /etc/init.d/networking restart Reconfiguring network interfaces...if-up.d/mountnfs[eth0]: waiting for interface xenbr0 before doing NFS mounts (warning). RTNETLINK answers: File exists Failed to bring up xenbr0. done. Could anyone point me in the right direction, please? The bridge works fine when created manually; I just need the right config file entries. Most tutorials I found add some devices to the bridge in the config; could that maybe be the reason why it is not working? I don't have any interfaces I want to add to the bridge on creation, as they get added later on VM start... Thanks, Mathias
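
    For what it's worth, a sketch of the stanza that is usually suggested for exactly this case. It assumes the bridge-utils package is installed so that ifupdown understands the bridge_* options; without them, ifup tries to configure a device that does not exist yet, which matches the SIOCSIFADDR errors above.

        # /etc/network/interfaces (sketch) - an empty bridge brought up at boot;
        # Xen attaches the domU vif interfaces to it later, as before.
        auto xenbr0
        iface xenbr0 inet static
            address 10.0.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp  off
            bridge_fd   0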

    Read the article

  • Prevent Roaming profiles from syncing certain elements

    - by user29919
    Hello everyone, I'm somewhat new to the Server 2008 front, and I'm afraid I've hit my first snag: I've set up roaming profiles, and they appear to be working too well. Is there a way to limit, ideally on a folder/object basis, what gets synced with a roaming profile? What I'm trying to do is: 1) stop my roaming profile from syncing the desktop layout - I run a dual-screen desktop and a laptop, and it's really annoying to have to reposition everything after logging onto the laptop, because it forces everything onto one screen. 2) stop it from syncing registry variables - specifically, I want Visual Studio to load different settings files on each computer. Currently, the variable that contains that path gets synced whenever I log in, so I get the settings from whatever box I last logged out from. 3) stop it from syncing the Start menu - this one's not as big a deal, but I'm noticing "program not found" icons even for programs that are installed. They work when I click them - they just look ugly. I'm running Windows SBS 2008 x64 with two Win7 clients (x86 Pro and x64 Ultimate). Is there a simple way to do that? Or am I trying to work too much against what roaming profiles are designed for? I could, of course, set up different profiles for the desktop and laptop, but that seems to defeat the point of roaming profiles entirely... Thanks in advance! Any help will be much appreciated =)
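
    On the folder side, a possible starting point (sketch only; the GPO path is from memory and the folder list is just an example) is the "Exclude directories in roaming profile" policy, which writes the ExcludeProfileDirs value under the user's Winlogon key. Note that registry settings themselves (HKCU, e.g. the Visual Studio path) always roam inside NTUSER.DAT, so item 2 generally needs a per-machine setting or environment variable on the Visual Studio side rather than a profile exclusion.

        :: Sketch - semicolon-separated folders, relative to the profile root; adjust to taste.
        :: Equivalent GPO: User Configuration > Administrative Templates > System >
        ::                 User Profiles > "Exclude directories in roaming profile"
        reg add "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" ^
            /v ExcludeProfileDirs /t REG_SZ /d "Desktop;AppData\Roaming\Microsoft\Windows\Start Menu" /f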

    Read the article

  • Kubuntu: apt-get install of php5-dev: libtool version mismatch?

    - by pinkgothic
    (Warning, clueless-newbism ahead.) Background info: I'm actually trying to install/upgrade xdebug. sudo pecl install xdebug yields: downloading xdebug-2.0.5.tgz ... Starting to download xdebug-2.0.5.tgz (289,234 bytes) ............................................................done: 289,234 bytes 67 source files, building running: phpize sh: phpize: not found ERROR: `phpize' failed A quick Google search tells me that phpize is part of a package called php5-dev, so off I ran to install that. My problem is that using sudo apt-get install php5-dev fails with this output: sudo apt-get install php5-dev Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: php5-dev: Conflicts: libtool (>= 2.2) but 2.2.6a-4 is to be installed E: Broken packages 2.2.6a-4 is greater than 2.2, so I'm not sure why it's hanging itself up at that point. I'm guessing the fact that it's not entirely numeric is throwing apt-get off? I can probably install xdebug manually (though I've never done this before, so picture me with a clueless-newb deer-in-headlights look here, violently shaking my head and begging for a simpler solution) rather than via pecl / aptitude, but is there a way I can make aptitude install php5-dev despite the bogus 'broken package' claim? Is it even bogus, or am I misreading the error message? Alternatively: Could I install phpize in some other way (e.g. via pear or pecl)?
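
    Reading the message closely, that line is a Conflicts:, not a dependency -- this build of php5-dev declares a conflict with any libtool >= 2.2, and 2.2.6a-4 satisfies that condition, which is exactly why apt refuses to install the pair. A sketch of how one might narrow it down (aptitude usually proposes a concrete resolution, such as downgrading or removing libtool, where apt-get just gives up):

        # Which repositories do the two packages come from, and what versions are candidates?
        apt-cache policy libtool php5-dev

        # Let aptitude propose a resolution (it may offer to downgrade or remove libtool)
        sudo aptitude install php5-dev

    As for the last question: phpize and php-config only ship in php5-dev, so there is no pear/pecl-only way to get them.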

    Read the article

  • How to achieve the following RTO & RPO with logshipping only using SQL Server?

    - by Jimmy Chandra
    Trying to come up with a viable backup/restore and log-shipping solution for achieving the following: 15-minute Recovery Point Objective (no more than 15 minutes of data loss at any time); 5-minute Recovery Time Objective (must be able to get the db back up and running within 5 minutes). I'm considering using log shipping only (which I think is kind of pushing it, but I want to know if anyone else knows how to achieve this). Some other info for consideration: Using 40 Gbit/sec fiber channel between the primary and disaster recovery (DRC) sites. The sites are about 600 km apart. At close of business, the amount of data generated is predicted to be about 150 MB/sec. Log backup is planned for every 5 min. Doing some rough calculation I came up with the following numbers: 40 Gbit/sec = 5 MB/sec @ 100% network efficiency. 5 MB/sec = 300 MB/min. @ 300 MB/min, the total amount of data that can be transferred within the 5-min RTO is about 1.5 GB, but that would leave no time for the actual backup and restore, so if we cut it down to 3 min of log-shipping time, which equals ~900 MB over 3 minutes at 100% network efficiency, that leaves about 1 min of backup time and 1 minute of restore time. I currently don't have any information on whether the system being used is capable of restoring 900 MB in 1 min, but assume it can. For the COB scenario... 150 MB/sec, and considering the 3-min log-shipping time, that should equal about 27 GB of data over 3 mins...??? I think this is where the SLA will break, since there is no way to transfer 27 GB of data over a 40 Gbit/sec line in 3 min. Can I get someone else's opinion? I am thinking database mirroring might be a better answer for this...
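
    A quick sanity check of the unit conversion the whole estimate hinges on, assuming the link really is 40 Gbit/s as stated and ignoring protocol overhead (at 8 bits per byte, 40 Gbit/s works out to roughly 5 GB/s, not 5 MB/s):

        # 40 Gbit/s in bytes per second
        echo "40 * 1000 * 1000 * 1000 / 8" | bc                        # 5000000000 (~5 GB/s)
        # time to push the worst-case 27 GB of log at that line rate
        echo "scale=2; 27 * 1024 * 1024 * 1024 / 5000000000" | bc      # ~5.8 seconds

    So if the link is genuinely 40 Gbit/s, the wire is not the constraint and the SLA math comes down to backup and restore throughput; if the effective rate really is closer to 5 MB/s (i.e. a 40 Mbit/s link), the original concern about the COB window stands.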

    Read the article

  • Help setting up a secondary authoritative DNS server.

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus. Authoritative servers: DNS1 - Windows 2003; DNS2 - old Red Hat ----- replacing w/ newer version; DNS3 - Windows 2008 (I installed). Caching and recursive resolver servers: Server1 - Windows 2003; Server2 - CentOS 5.2 (I installed); Server3 - CentOS 5.3 (I installed). I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching servers and Windows authoritative servers, but not a Linux secondary authoritative server. I have a Perl script from the original server that pulls data from our DNS1 server. We use djbdns and tinydns on our Linux servers. Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to do caching, but the only instructions I can find are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track using these instructions, but when I query my new DNS server I get "No response from server". I have temporarily disabled iptables to rule it out as an issue. ps -aux | grep dns avahi 3493 0.0 0.2 2600 1272 ? Ss Apr24 0:05 avahi-daemon: running [newdns2.local] root 5254 0.0 0.1 3920 680 pts/0 R+ 09:56 0:00 grep dns root 6451 0.0 0.0 1528 308 ? S Apr29 0:00 supervise tinydns dnslog 6454 0.0 0.0 1540 308 ? S Apr29 0:00 multilog t ./main tinydns 9269 0.0 0.0 1652 308 ? S Apr29 0:00 /usr/local/bin/tinydns
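
    For the "No response from server" part, a sketch of the usual first checks on a djbdns/tinydns secondary; the service path follows the conventional daemontools layout, and the zone name and addresses below are placeholders:

        # tinydns only answers on the address in env/IP - make sure that's the one being queried
        cat /service/tinydns/env/IP
        dnsq soa example.edu 192.0.2.53        # query it directly with the djbdns client tool

        # acting as a secondary without caching: pull the zone from the master (DNS1)
        # over AXFR and rebuild data.cdb (typically run from cron)
        cd /service/tinydns/root
        tcpclient -v 192.0.2.10 53 axfr-get example.edu data.example data.example.tmp \
          && cat data.example > data && make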

    Read the article

  • Portable USB drives' hidden partition - New request

    - by ZXC
    This question was asked by Francesco on Jul 29 '11 at 17:14, and the replies were not satisfactory because they did not address an important problem, namely: why would anyone want to make certain data accessible only to a program, but not to the users? For example: if I want to do a safe distribution of original music for demonstration purposes, I have several requirements: 1) The music should be playable using a simple procedure, like selecting the name of each song in a playlist of a media player. 2) The portable medium, usually a portable USB drive, must completely hide the files that contain the audio data and make them inaccessible to anything but the media player, which must be in the first partition, the one that is visible. 3) Considering that it's impossible to really hide files in a non-hidden partition, a second, hidden partition should be created on the USB drive, and the audio data will be stored there. 4) The trick is to read the audio data files stored in the hidden partition with a media player stored in the visible partition; the media player should also be a completely standalone program, independent of any library of the operating system except the OS audio system. 5) The hidden partition should have a copy-protection scheme that prevents copying the data or creating working ISO images of it. I know that this description may not be technically accurate, but it follows logically from the needs of a music producer facing the problem of piracy. The philosophy behind the concept is to turn a virtual object, like a digital string of audio, into a solid object, the way analog vinyl discs are.

    Read the article

  • DNS server not working?

    - by Behrooz A
    I just set up a DNS server, SimpleDNS, on my Windows 7 machine. I added a zone, for example sag.com, and pointed www.sag.com and sag.com to 192.168.1.2 (my network IP address). The problem is that when I try to ping sag.com, the SimpleDNS log says it answered the request with 192.168.1.2, but the ping doesn't resolve anything. SimpleDNS logs: > 14:00:43 Request from 192.168.1.2 for A-record for www.sag.com > 14:00:43 Sending reply to 192.168.1.2 about A-record for > www.sag.com: 14:00:43 -> Answer: A-record for www.sag.com = > 192.168.1.2 14:00:43 -> Authority: NS-record for www.sag.com = mehr-pc nslookup: > C:\Users\Mehr\Desktop>nslookup www.sag.com DNS request timed out. > timeout was 2 seconds. Server: UnKnown Address: 192.168.1.1 > > DNS request timed out. > timeout was 2 seconds. DNS request timed out. > timeout was 2 seconds. DNS request timed out. > timeout was 2 seconds. DNS request timed out. > timeout was 2 seconds. > *** Request to UnKnown timed-out The DNS server IP is 192.168.1.2, and the access point address is 192.168.1.1. What should I do?
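
    The nslookup output is the clue: it is querying 192.168.1.1 (the access point), not the SimpleDNS box at 192.168.1.2, so the zone never gets consulted. A sketch of how to test against the right server and, if that works, point the client at it (the interface name is a placeholder):

        :: Query the SimpleDNS server explicitly instead of the default resolver
        nslookup www.sag.com 192.168.1.2

        :: If that resolves, set the adapter's DNS server to 192.168.1.2
        netsh interface ip set dns name="Local Area Connection" static 192.168.1.2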

    Read the article

  • What does a status of "Backup" mean for Windows 7 local user profiles?

    - by Howiecamp
    Summary: Upon logging on to Windows 7 RTM I get a message that my profile can't be loaded and a temporary user profile is created. I logged off and back on as Administrator. The user profiles dialog shows my user profile with a Type of "Local" and a Status of "Backup" rather than "Local" which it should be. How can I change this to make my user profile accessible? The long story: My PC has a single hard drive partitioned into a C: and a D:. I'd moved my user profile directory (c:\Users) to d:\Users, removed c:\Users and then used mklink.exe to create a directory symbolic link c:\Users -- d:\Users. Worked like a charm since I did it. Today, I make a System Restore Point for drives C: and D:. Next, I dismounted D: and used the Disk Management tool to remove the "D:" drive letter from the D volume. (My plan was to reboot and then redirect the symbolic link.) Upon reboot, I got the user profile error described above. Finally, I restored the System Restore Points that I'd created for both drives and then rebooted again. Same issue.
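
    "Backup" in that dialog usually corresponds to the profile's entry under the ProfileList registry key having been renamed with a .bak suffix after Windows failed to load it, which fits the temporary-profile symptom. A sketch of where to look, as Administrator; the SID shown is a placeholder:

        :: List profile entries; the broken one typically appears both as SID and as SID.bak
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"

        :: Inspect the .bak entry - its ProfileImagePath should point at d:\Users\<name>;
        :: the usual repair is to remove the stale non-.bak key, drop the .bak suffix,
        :: and reset RefCount and State to 0 before logging the user back on.
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxx.bak" /v ProfileImagePath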

    Read the article

  • How to re-join an AD2003 domain with Samba after deleting the machine account?

    - by Guss
    During some troubleshooting I deleted the machine account for a Linux server running samba from our AD 2003 domain. We are using Kerberos for authentication, and after I deleted the machine account I tried to join the domain again using net ads join -U Administrator, but I keep getting Kerberos errors like these: [2009/08/18 16:14:36, 0] libads/kerberos.c:ads_kinit_password(228) kerberos_kinit_password [email protected] failed: Client not found in Kerberos database Failed to join domain: Improperly formed account name It appears as if samba remembers that it once had an account with the AD and keeps trying to reconnect to it, but I want to create a new account from scratch. I tried to delete all the .tdb files I could find as well as everything under /var/cache/samba, but to no avail - it still behaves the same. I also tried to create the machine account on the AD side, but then I get a similar error when I try to join, about failure to authenticate with the machine account - it looks like samba tries the previous machine account password and I don't know how to reset it, or even, if I could figure out what samba uses, how to set it in the AD. Any help would be greatly appreciated, as at this point the only thing I can think of is to reformat and reinstall the machine, and I would really REALLY love to not do that. Thanks in advance.
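
    The machine account password samba remembers lives in secrets.tdb, which is usually under the "private" directory (often /var/lib/samba/private or /etc/samba) rather than /var/cache/samba, which would explain why clearing the cache did not help. A rough sketch of a clean re-join, assuming the machine account has also been deleted on the AD side and that paths follow common defaults:

        # Find the private dir and move the old machine secrets aside (path varies by distro/build)
        smbd -b | grep -i private
        mv /var/lib/samba/private/secrets.tdb /root/secrets.tdb.bak

        # Fresh Kerberos ticket for a domain admin, then join and verify
        kinit Administrator
        net ads join -U Administrator
        net ads testjoin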

    Read the article

  • Deleting files in the windows installer folder

    - by qw3n
    How do you clean up the windows/installer folder on an XP machine? I looked on different forums, but the tool many mention is no longer officially supported and, from what I understand, is not specifically for this task. Also, I was confused about which tool to use and how to use it. The reason I ask is that I have an older computer with an ~86 GB drive, and ~80 GB of it is being used by windows/installer. I'm assuming that at least some of these files are glitches and shouldn't be in there. Note that the person who uses the computer mentioned trying to interrupt an install at some point, and I don't know if this has anything to do with it. Also, there are not that many programs installed on this computer, ~25. I know that similar questions have been asked several times already, but the accepted answer to "Is it safe to delete from C:\Windows\Installer?" is mainly about whether it is safe to delete (along with most of the duplicates). I'm asking how to find and delete the files that shouldn't be there, especially since we're not talking 5-10 GB but something that practically fills the entire hard drive. And for those who are wondering, I ran CCleaner, but it doesn't seem to check this folder.
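
    For reference, the unsupported tool people usually mean is msizap.exe from the old Windows Installer support tools; its documented "orphaned cached files" mode is sketched below. It is risky on a drive this full (removing an .msi/.msp that is still referenced breaks future repairs and uninstalls), so treat this as something to research and back up before, not a recommendation:

        :: First just look: list the contents sorted largest-first to see what is actually in there
        dir /a /o-s C:\Windows\Installer

        :: msizap G = remove orphaned cached installer files, ! = no confirmation prompt
        msizap.exe G!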

    Read the article

  • BGP multipath & return routes

    - by Dennis van der Stelt
    I'm probably a complete n00b concerning serverfault-related questions, but our IT department makes a bold statement I wish to verify. I've searched the internet but can find nothing related to my question, so I come here. We have Threat Management Gateway 2010, and we used to just route the request to IIS; it contained the client IP address, so we could see where it was coming from. But now they have turned on "Requests appear to come from the TMG server", so IP addresses aren't forwarded anymore; every request has the IP of the TMG server. Now, the idea behind this is that because of multipath BGP routes, the incoming request goes over RouteA, but the acknowledgement messages could return over RouteB. The claim is that because the response doesn't come from the first known source, our proxy, but instead from IIS, some smart routers on the visitors' side don't recognize the acknowledgement message and filter it out. In other words, the response never arrives. Again, this is the claim. But I cannot find ANY resources on the internet that support this claim. I do read about BGP multipath, but more in the case where there are alternative routes when the fastest route fails for some reason. So is the claim completely bogus, or is there (some) truth to it? Can someone explain or point me to resources? Thanks in advance!

    Read the article

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
    I have a server with CentOS 4.8 installed. The provider is rubbish, but there are only a few months left, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty. I even had to install yum myself. I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x and ran it. That produced a .rpm file, which I tried installing: [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm Preparing... ########################################### [100%] 1:jre ########################################### [100%] Unpacking JAR files... rt.jar... jsse.jar... charsets.jar... localedata.jar... plugin.jar... javaws.jar... deploy.jar... error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5 [root@host java]# rpm -qi jre Name : jre Relocations: /usr/java Version : 1.6.0_20 Vendor: Sun Microsystems, Inc. Release : fcs Build Date: Mon Apr 12 19:34:13 2010 Install Date: Thu May 6 06:36:17 2010 Build Host: jdk-lin-1586 Group : Development/Tools Source RPM: jre-1.6.0_20-fcs.src.rpm Size : 50708634 License: Sun Microsystems Binary Code License (BCL) Signature : (none) Packager : Java Software <[email protected]> URL : http://java.sun.com/ Summary : Java(TM) Platform Standard Edition Runtime Environment Description : The Java Platform Standard Edition Runtime Environment (JRE) contains everything necessary to run applets and applications designed for the Java platform. This includes the Java virtual machine, plus the Java platform classes and supporting files. The JRE is freely redistributable, per the terms of the included license. [root@host java]# I couldn't find any results on Google for any part of that error message, and I have very little experience with rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?
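
    A sketch of how one might see what the failing %post actually does before deciding how to work around it; the --noscripts reinstall at the end is only reasonable if the scriptlet turns out to be doing optional setup (shortcuts, alternatives links) rather than something the JRE needs in order to run, and the install path is assumed:

        # Show the embedded pre/post scriptlets of the package
        rpm -qp --scripts jre-6u20-linux-i586.rpm

        # If the script only wires up desktop/alternatives integration, a reinstall
        # that skips scriptlets may be enough to get java on disk for Tomcat:
        rpm -e jre
        rpm -ivh --noscripts jre-6u20-linux-i586.rpm
        /usr/java/jre1.6.0_20/bin/java -version    # path may differ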

    Read the article

  • How to stop/kill a virtual machine that hangs in "Stopping" state?

    - by SvetP
    Hi, I have a virtual machine that constantly hangs in the "Stopping" state. I've read several posts suggesting killing the machine's vmwp.exe process, but I've never been able to kill this process, either from the Windows Task Manager or from an administrative command prompt using taskkill /PID xxxx /F, where xxxx was the process ID. The only result I get is that my machine enters the "Stopping-Critical" state. Even worse, from that point on (having a virtual machine hung while stopping) I am unable to manage (stop or start) any other virtual machine on the same host. The only "solution" in that case for me is to stop the Virtual Machine Management Service (vmms.exe) and restart the physical host. Without first stopping the vmms.exe service, my physical host also hangs during the restart. Moreover, there is no error logged in the Event Viewer. I've found some other posts complaining about the same problem. In all of them the only suggestion was to kill the vmwp.exe process, which obviously doesn't work for them either. Can somebody help us with this, please? Thanks
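
    One detail that sometimes matters here: each VM gets its own vmwp.exe, identified by the VM's GUID on its command line, and on recent Hyper-V versions the worker runs under a per-VM virtual account rather than the logged-on admin, which is often why Task Manager cannot kill it. A sketch for finding the right PID (run elevated; the GUID is whatever matches the hung VM):

        :: Map each vmwp.exe to the VM GUID it is hosting
        wmic process where "name='vmwp.exe'" get ProcessId,CommandLine

        :: Kill only the worker for the hung VM
        taskkill /F /PID <pid-from-above>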

    Read the article

  • When pointing to new DNS servers is there any chance of E-mails being lost if the old E-mail hosting service is still up?

    - by LaserBeak
    I am changing web hosts and will be using the new host's mail servers instead of the old ones. I have created all the correctly named mailboxes on the new service, but I have also not yet cut ties with the old web host. I am expecting that even if the new DNS values, which point to the new host's DNS servers and their respective SOA/zone file with the new MX values, have not yet propagated, and an e-mail is therefore directed at the old host's mail servers as per the MX records in the zone the old hosting provider holds, the e-mail would still come through to the mailbox on the old provider's mail servers. So I am just trying to confirm that I have this right, and that it's essentially impossible for me to lose an e-mail, since it will hit either the old host's mail servers or the new ones. Also, is it possible to configure the same e-mail account to check and collect mail from different mail servers by entering multiple POP3 addresses? And if I choose to keep the old web host's mail hosting service as a backup, by specifying its MX records with a lower priority in the zone hosted by the new web host, is it possible to have incoming e-mails sent to both servers by the mail daemon so I have two copies? Or is my only option having the primary mail server somehow forward the e-mail to the old mail server?
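
    A sketch of the checks usually done around a cutover like this; dig comes with the BIND utilities (nslookup works too) and example.com stands in for the real domain:

        # What resolvers currently see: MX hosts, their priorities, and the record TTL
        dig MX example.com +noall +answer

        # Ask the old and new DNS servers directly to compare what each will hand out
        dig MX example.com @ns1.oldhost.example +short
        dig MX example.com @ns1.newhost.example +short

    Note that listing both old and new mail servers as MX records (the old one at lower priority) only gives fallback delivery, not duplication: a sending server delivers each message to one MX, so getting two copies would require forwarding on one side. And yes, most mail clients can poll several POP3 accounts into the same local mailbox.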

    Read the article

  • Config Apache HTTP Server for Eclipse

    - by hqt
    Maybe this question is silly, but I really don't know how to solve it. First, as with any other server, I want to define a new server. So, in Eclipse, I go to Window > Preferences > Server. 1) When I add a new server, there is no category for Apache HTTP Server in the list; it just has Apache Tomcat. So I clicked "Download additional server adapters" -- it's still not in the list. 2) So I searched and pointed to the location where I have Apache installed. Good: Eclipse sees that it is an HTTP server, and Eclipse finds the folder to put the project into for me (because I use LAMPP, that folder isn't in the Apache folder). But here is my problem: when I want to run a new PHP project, I right-click and choose Run on Server. A new dialog appears, asking me to choose which server to run on. And in the list of servers there is no HTTP server, so I don't know how to choose the Apache HTTP Server! (Eclipse doesn't see the server I have defined; it only finds the adapters.) So if I want to run this project, I have to copy everything and paste it into the Apache folder. Not very handy! Please help me. Thanks :)

    Read the article

  • Windows, never "lose focus" of the current window

    - by Mazura
    I want the taskbar button to light up orange and the current window to never lose focus to anything. If I'm installing something and then go play a game, at some point it will drop out to the finished installation. Also, if I'm installing multiple programs at once, my "Next" button can all of a sudden become the "click here to install this crappy toolbar" button of another program's installer. Of course there are settings for some programs to not "lose focus" or to "stay on top", but I really want Windows to handle it. If it's somehow an exe called, say, Taskswitch.exe, I could possibly use Process Blocker; however, I'm assuming it's part of a function call or some such. For XP I found this: How to disable auto focus of opened Windows applications? But what about Windows 7? And this old post, Preventing applications from stealing focus, with a bunch of long answers that say "no". I'd appreciate this not being merged with a 4-year-old question. I'd like to avoid 3rd-party software. This is 2014; don't we know how to hack Windows yet?
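
    For reference, the built-in knob in this area is the ForegroundLockTimeout value (the one TweakUI used to expose): it tells Windows how long after user input to refuse foreground changes and flash the taskbar button orange instead. A sketch below; note that on Windows 7 the default is usually already 200000 ms, and installers that go through the permitted SetForegroundWindow paths can still steal focus, so this is a partial measure at best.

        :: applies per user; requires logging off and back on to take effect
        reg add "HKCU\Control Panel\Desktop" /v ForegroundLockTimeout /t REG_DWORD /d 200000 /f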

    Read the article

  • Win8/7/XP print spooler not getting along with Zebra ZT230 via WIFI

    - by Jonathan M
    I have a graphics-intensive 4"x6" label I'm printing to the ZT230. I'm printing multiple (10) copies. When connected via USB, all goes well. However, when connected via WiFi, I only get 2 of the labels. A Wireshark capture shows that at some point in the process my computer (presumably my Windows spooler) is sending a reset packet, which, I believe, would pretty much kill the print job. I'm getting the same results on Win8, Win7 and WinXP. The print job was originally generated on Zebra's ZebraDesigner2 software. For easier diagnosis, I captured it to a .prn file. The .prn file can be found here: https://drive.google.com/file/d/0BwxF_9SAkKzLLTF5bUJVT0lESUU/edit?usp=sharing And the Wireshark capture file can be found here: https://drive.google.com/file/d/0BwxF_9SAkKzLTGpSS0ktZW1xV28/edit?usp=sharing And the printer configuration listing: https://docs.google.com/document/d/1zh1Tw4D4yNa2uljOIL1kO2z8se9HK859irpUEwyxlyY/edit?usp=sharing I've started a discussion with Zebra Tech Support, and they're working on it, but I thought I'd toss it out here for more ideas since we're getting kind of stumped. Any ideas why this may be happening?
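
    One way to take the Windows spooler out of the equation while Zebra investigates: push the captured .prn straight at the printer's raw TCP port 9100 over the wifi link and see whether all 10 labels come out. If they do, the spooler/port-monitor side is implicated; if the transfer still resets, it points more at the printer's wifi stack. A sketch, with the printer IP as a placeholder (on Windows, ncat from the Nmap package can stand in for nc):

        # Send the captured job directly to the ZT230's raw printing port
        nc -w 10 192.168.1.50 9100 < label.prn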

    Read the article

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5, starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case, since it allows both of the changes I want on a live array, and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so it looked like RAID-Z1 would be a good fit, but evidently I would only be able to add whole additional RAID5 (RAID-Z1) sets rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: will ZFS be able to correct, or at least detect, corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a setup are welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
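
    On the core question: ZFS checksums every block, so on top of a single md device it will reliably detect corruption (including write-hole damage surfacing as bad parity reconstructions), but with one vdev and the default single copy it can generally only report which files are damaged, not repair them. Setting copies=2 on the datasets that matter gives ZFS a second copy to heal from, at the cost of half the space for that data. A sketch, assuming the md array appears as /dev/md0 (pool and dataset names are placeholders):

        # Single-vdev pool on the md RAID5 device
        zpool create tank /dev/md0

        # Keep two copies of the archive data so checksum errors can be self-healed
        zfs create tank/archive
        zfs set copies=2 tank/archive

        # Periodic verification of every block
        zpool scrub tank
        zpool status -v tank    # lists any files with unrecoverable errors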

    Read the article
