Search Results

Search found 39047 results on 1562 pages for 'process control'.


  • What could cause Windows 7 to hang whenever I install something?

    - by Larsenal
    I've had this problem when installing several different programs (iTunes, Adobe Acrobat Reader just to name two). Regardless of what the program is, the install usually gets at least 90% through the process and then just hangs. I don't see anything bad in the event log besides the following (and this didn't occur exactly at the time of install): wuaueng.dll (964) SUS20ClientDataStore: A request to write to the file "C:\Windows\SoftwareDistribution\DataStore\DataStore.edb" at offset 16252928 (0x0000000000f80000) for 32768 (0x00008000) bytes succeeded, but took an abnormally long time (185 seconds) to be serviced by the OS. This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem. I've run check disk and it passed. I've had some problems with BIOS settings in the past with Windows 7, but I'm not sure whether that could be related. Update... I also see this error in the event log: Volume Shadow Copy Service error: Unexpected error querying for the IVssWriterCallback interface. hr = 0x80070005, Access is denied. . This is often caused by incorrect security settings in either the writer or requestor process. Operation: Gathering Writer Data Context: Writer Class Id: {e8132975-6f93-4464-a53e-1050253ae220} Writer Name: System Writer Writer Instance ID: {33493f01-ac1b-4efb-a378-3053ab03100d} One last wrinkle.... I see "Previous versions" of c:\ which look like they correspond to the time of attempted installation.
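
    Given the "abnormally long time to be serviced by the OS" warning, a reasonable first pass before blaming the installers is to confirm that the disk and the VSS writers are healthy; the commands below are standard Windows tools run from an elevated prompt, offered as a starting point rather than a guaranteed fix:

        vssadmin list writers             (look for writers stuck in a failed state, e.g. System Writer)
        chkdsk C: /r                      (full surface scan; it will schedule itself for the next reboot)
        wmic diskdrive get model,status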

    Read the article

  • Connecting SVN from Remote Server

    - by Ashish
    I have hosted my repository at Assembla and it works fine. Now I want to write a script that can automate the build process: 1) take the code from the Assembla repository, 2) make a dump and copy it onto my web server. What I have researched so far suggests using commands like svn co svn+ssh://[email protected]/home/svn/test. I believe I need to open a shell on my server and type these commands, but shell access has been disabled by my server admin. I tried to run the same thing from PHP using exec(), and the admin has disabled that too. (I am on shared hosting and want to do an automated deployment using these simple steps; I don't want to bring my local system into this process.) I'm also not sure that, even if I do get shell access to my server, commands like svn will work there, since SVN is not installed on my server (it's installed on Assembla's side). Let me know if any more explanation is needed, or whether I'm on the wrong track. I'm a newbie, so please be descriptive in your answers. Thanks in advance, Ace
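
    If shell access (or at least exec()) ever gets re-enabled and an svn client is installed on the web server, a minimal deployment script could look like the sketch below. The repository URL and target path are placeholders, and svn export is used instead of checkout so no .svn metadata lands in the web root:

        #!/bin/sh
        # minimal pull-based deploy; the URL and paths are examples, adjust to your setup
        REPO="https://subversion.assembla.com/svn/myproject/trunk"
        TARGET="/var/www/html"
        svn export --force "$REPO" "$TARGET"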

    Read the article

  • Quickly close all Word and Excel instances?

    - by dyenatha
    Suppose I open 10 Word files and 10 Excel files and make no changes; how do I quickly taskkill them all at once? Because I must repeat several attempts to replicate the race, I'm hoping for a command-line solution. I'm willing to try PowerShell and Cygwin (1.5) if necessary. The OS is Windows XP SP3 with current patches (still IE7). I tried "taskkill /pid 1 /pid 2 /t", where 1 is the PID of EXCEL.EXE and 2 is the PID of WINWORD.EXE, but it closed only one window of each program. I'm trying to replicate a race where an add-in for Microsoft Office 2007 fails to exclusive-lock one of its own files, which causes the second Office program to fail on exit with this warning:

        System.IO.IOException: The process cannot access the file 'C:\Documents and Settings\me\Application Data\ExpensiveProduct\Add-InForMicrosoftOffice\4.2\egcred' because it is being used by another process.
           at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
           at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy)
           at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
           at System.IO.StreamWriter.CreateFile(String path, Boolean append)
           at System.IO.StreamWriter..ctor(String path, Boolean append, Encoding encoding, Int32 bufferSize)
           at System.IO.StreamWriter..ctor(String path, Boolean append, Encoding encoding)
           at System.IO.File.WriteAllText(String path, String contents, Encoding encoding)
           at ExpensiveProduct.EG.DataAccess.Credentials.CredentialManager.SaveUserTable()
           at ExpensiveProduct.OfficeAddin.OfficeAddinBase.Dispose(Boolean disposing)
           at ExpensiveProduct.OfficeAddin.WordAddin.Dispose(Boolean disposing)
           at ExpensiveProduct.OfficeAddin.OfficeAddinBase.OnHostShutdown()
           at ExpensiveProduct.OfficeAddin.OfficeAddinBase.Unload(ext_DisconnectMode mode)
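
    A common way to kill every instance at once is to target the image name instead of individual PIDs; a minimal sketch (the /f switch forces termination, so any unsaved changes are lost):

        taskkill /f /im winword.exe
        taskkill /f /im excel.exe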

    Read the article

  • How can I automate or script daily downloads for any new anti-virus databases, and then have the program scan my drive?

    - by Macgrimm
    Howdy all, Super Users. I humbly ask if any Super User can direct this long-time, gray-haired Apple tech in the right direction on this issue. I believe there are probably many ways to skin this cat, but I am looking for the best, most unattended way to get it done. Any help will be greatly appreciated. (I know there is much better software out there for the Mac, so please don't go there; the politics of this company dictate which anti-virus we have to use.) Anyway, without any further wait: basically I am trying to automate 2 very important functions of McAfee Anti-Virus for Mac. First I want to automate the process of retrieving new virus definition files, and second I want to automate the process of scanning for viruses. It turns out that in McAfee Anti-Virus for the Mac both of these are manual functions, left up to the user (per user account) to perform. Depending on all of about 150 Mac users to perform these 2 tasks themselves gets us around 65% compliance. My question then is: I can use the command line (open /Applications/McAfee\ Security.app) to open the Security Console, but how can I make McAfee go out and grab the definition files and scan the computer from the command line? I have to admit I am at a crossroads and "Mac-altimers" has set in. I would really appreciate it if any of you Super Users can help me out with this loss of what to do. Thanks to all up front, Macgrimm
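
    One pattern worth testing is to wrap whatever command-line update/scan tools the McAfee package installs in a root cron job (or a launchd daemon), so the work happens per machine instead of per user. The binary paths and flags below are purely illustrative assumptions, not documented McAfee options; check what the product actually installs (often somewhere under /usr/local/McAfee) and substitute the real commands:

        # root's crontab (sudo crontab -e); paths and flags are hypothetical placeholders
        30 2 * * * /usr/local/McAfee/update-definitions        # assumed DAT update command
        0  3 * * * /usr/local/McAfee/scan --all /              # assumed scan command and flags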

    Read the article

  • Application that will identify percentage of your system disk bandwidth used on a user-application by user-application basis?

    - by Warren P
    I always (subjectively) feel my computer is far too slow (however fast it is), and so I'm always looking for ways to measure and understand what my computer is actually doing that makes it seem "slow" to me. It has been my observation that my software-developer workload is most often disk-bound (I am waiting for disk I/O) more than CPU-bound. What has made it worse is that I am using a corporate PC that has in-memory active-scanning anti-virus software that I do not have control over, and also some IT-department-mandated services that seem to suck up a lot of the available hard-disk bandwidth. The best tool I have seen (in Windows 7) is the Resource Monitor, which I usually access from the button in the Task Manager. The disk I/O page, however, seems to label disk activity at a very low level (for example, showing the Volume Shadow Storage, which is flushing information obviously written by something ELSE other than VSS itself, and writes to pagefile.sys, which are obviously due to virtual-memory faults in some application). What I would like to know is whether a utility exists that can add up all direct disk input and output by user-level process, or find the process or service that caused VM or VSS activity. In that way, I hope, you could establish a real idea of how much of your computer's precious disk subsystem bandwidth is attributable to a particular application. Here's a scenario: MyApp.exe writes 100k/s and reads 100k/s directly. VSS ends up writing another 100k/s. Page faults caused inside MyApp.exe cause another 100k/s of writes. So the total "cost" of MyApp.exe running, during a period of time (let's say 1 second), is 400k/s, whereas you can only directly observe half of that in Resource Monitor. Is there a smarter disk-IO-watching piece of software I can use?
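
    As a rough stopgap, the built-in performance counters can at least total the direct I/O per process over a sampling window (they will not attribute the VSS or pagefile traffic back to the process that caused it, which is the hard part of the question); a sketch:

        typeperf "\Process(*)\IO Read Bytes/sec" "\Process(*)\IO Write Bytes/sec" -si 1 -sc 60 -o diskio.csv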

    Read the article

  • Relinking a deleted file

    - by mbac32768
    Sometimes people delete files they shouldn't, a long-running process still has the file open, and recovering the data by catting /proc/<pid>/fd/N just isn't awesome enough. Awesome enough would be if you could "undo" the delete by running some magic option to ln that would let you re-link to the inode number (recovered through lsof). I can't find any Linux tools to do this, at least with cursory Googling. What have you got, Server Fault? EDIT1: The reason catting the file from /proc/<pid>/fd/N isn't awesome enough is that the process which still has the file open is still writing to it. A delete removes the reference to the inode from the filesystem namespace. What I want is a way of re-creating the reference. EDIT2: 'debugfs ln' works, but the risk is too high since it frobs raw filesystem data. The recovered file is also crazily inconsistent: the link count is zero and I can't add links to it. I'm worse off this way, since I can just use /proc/<pid>/fd/N to access the data without corrupting my fs.
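
    For reference, the debugfs route mentioned in EDIT2 looks roughly like the session below. It rewrites raw filesystem metadata, so it should only be attempted on an unmounted (or read-only) filesystem and followed by a forced fsck to repair the link count; the inode number and device are placeholders:

        # DANGEROUS: unmount the filesystem first; 123456 and /dev/sdXN are placeholders
        debugfs -w /dev/sdXN
        debugfs:  link <123456> recovered_name     # recreate a directory entry for the inode
        debugfs:  quit
        fsck -f /dev/sdXN                          # let fsck fix up the link count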

    Read the article

  • How to write rules for persistent net names?

    - by ndemou
    I know that a process generates persistent network card names based on rules found in /lib/udev/rules.d/75-persistent-net-generator.rules. I also know how to completely disable this process with a simple echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules, but I've read that I "could also write my own rules file to give the interface a name; the persistent rules generator ignores the interface if a name has already been set" (/etc/udev/rules.d/README confirms that this is possible). Do you have any pointers to documentation about how to write such rules? (I mostly care about Debian/Ubuntu and a bit less about CentOS.) As a specific example of why I want to write custom rules: I have two identical servers with one onboard LAN and one PCI LAN. In case of HW failure I want to be able to move disks from HW#1 to HW#2, and it's important for eth0 to continue pointing to the onboard card and eth1 to the PCI card (no one wants to mess with cabling in the middle of a HW-failure panic). My current workaround works but is a lot of work [1], so I wonder if writing custom rules would allow me to express something simple like this:

      - cards with MAC A or B should be named eth0
      - cards with MAC C or D should be named eth1
      - follow the default naming scheme for anything else

    [1] Install the OS on HW#1 and keep a copy of /etc/udev/rules.d/70-persistent-net.rules. Move the disks to HW#2 and keep a second copy of the same file. Concatenate the two copies and manually edit the NAME="ethX" part. Replace /etc/udev/rules.d/70-persistent-net.rules with my version. Finally, disable auto-creation of a new 70-persistent-net.rules using echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules
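
    Custom rules go in /etc/udev/rules.d/70-persistent-net.rules and use the same match keys the generator itself writes, so the generator leaves any interface alone that already has a NAME assigned. A sketch covering the two-servers case (the MAC addresses are placeholders; list the cards of both machines):

        # /etc/udev/rules.d/70-persistent-net.rules
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:aa:aa:01", KERNEL=="eth*", NAME="eth0"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:aa:aa:02", KERNEL=="eth*", NAME="eth0"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:bb:bb:01", KERNEL=="eth*", NAME="eth1"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:bb:bb:02", KERNEL=="eth*", NAME="eth1"

    Interfaces whose MAC is not listed simply fall through to the generator's default behaviour.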

    Read the article

  • Is domain-transfer inherently safe for downtime when the name servers remain the same?

    - by jlmt
    I've been reading around this topic to understand whether there is any chance of downtime during an upcoming domain transfer for 15 live and very critical domains. In our case there are three companies involved: CompanyA is the original registrar and DNS host, CompanyB is the new DNS host, and CompanyC is the new registrar. I've already changed the nameservers for all domains to those of CompanyB. We suffered some downtime because CompanyA deleted their hosted DNS for our domains directly after the change, but the changes propagated and we're now able to configure our DNS with CompanyB. From what I understand (please correct where wrong!): There exists an SOA record that points oneofourdomains.com to ns.companyb.com. That record is maintained and authoritatively hosted by the ccTLD registry for the domain (e.g. Verisign for .com). CompanyA currently has the ability to change the SOA record because they're the registrar. There exist NS records for oneofourdomains.com, which are also related to the link from domain name to nameserver, are similarly hosted by the ccTLD, and which CompanyA is also able to change while acting as registrar. Neither CompanyB nor CompanyC currently has any control over the SOA or NS records. CompanyA is unable to cause us (DNS) problems during the transfer by dropping service early, because they are not the authoritative source for the SOA and NS records. When we transfer the domains, it's administrative control of the SOA and NS records that will be transferred to CompanyC. As long as we advise CompanyC that the SOA and NS records must not change (as regards pointing to CompanyB's nameservers), there's no need for any kind of DNS change, and therefore no possibility of downtime. Is my understanding of this correct? My fear is that CompanyA will somehow cut us off again, and their support department hasn't given me much confidence in their understanding of the topic.
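
    One concrete way to watch for the feared failure mode is to query the registry's own servers for the delegation before, during and after the transfer; as long as the NS set they return keeps pointing at CompanyB, resolution keeps working regardless of what CompanyA does with their hosted zones. A sketch for a .com domain (a.gtld-servers.net is just one of the registry servers; dig NS com. lists them all):

        # ask a .com registry server directly for the delegation, bypassing caches
        dig +norecurse NS oneofourdomains.com @a.gtld-servers.net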

    Read the article

  • Problems with USB-Devices using VDR

    - by emmsinator
    Hey guys, I'm using VDR on vSphere 4 and it works successfully; I've already backed up several VMs with VDR and I like it very much. But now we have a problem. We have 2 VMs using a USB device server with a stick plugged in, which is definitely needed by these 2 VMs for licensing. Every time I start the backup process, the VMs lose the connection to the USB server and its stick after the snapshot is built, even while staying online. Because of that, the software on these VMs can't work correctly, and I have to restart both machines to solve the problem. That is bad for an automatic backup. Does VDR have a special function for cases like this, or is something like this already a known issue? It would be no problem to shut down the servers for building snapshots on Saturday or Sunday. Can VDR initiate a shutdown before starting the backup process? Otherwise I must try to use scripts, but that wouldn't be so nice. Thanks a lot for your help.

    Read the article

  • Random “Lost connection to MySQL server at 'reading initial communication packet', system error: 0”

    - by user1606545
    Sometimes I get this error from the MySQL server: "Lost connection to MySQL server at 'reading initial communication packet', system error: 0". I cannot find the cause, since most of the time it works, but every week, for some hours, I get this error. I googled, but there seem to be only users who have this error permanently; in my case it only occurs sometimes. I checked hosts.allow and hosts.deny, but the host is allowed and not denied. Sometimes I also get the error: File './database/table.MYD' not found (Errcode: 24). It occurs very rarely, but when it does it lasts for some hours once a week, sometimes on multiple days, and then the problem suddenly disappears again. I have checked the open-files limit. It's 2048 and should be absolutely enough. I nevertheless tried to increase the number of open files, but it had no effect. I thought that perhaps the process does not close some tables, but this seems impossible, because after a while everything is OK again and the process opens at most 100 tables at once. I also checked the MySQL runtime environment, and there were 930 open files; I cannot explain that, and after a while it was 129. I am running a MySQL server on a SUSE Linux machine. I connect to the MySQL server from another host with the command-line tool "mysql" and with the MySQL C connector. The MySQL server is version 5.0.67.
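
    Errcode 24 is "too many open files", so it is worth comparing the limit the running mysqld actually received with what the server believes it may open and with how many files it really holds open when the problem hits; a sketch (the /proc limits file needs a reasonably recent kernel):

        # the limit the live mysqld process is actually running with
        grep 'open files' /proc/$(pidof mysqld)/limits

        # what MySQL was granted and how many files/tables it currently holds open (table_cache is the 5.0 name)
        mysql -e "SHOW VARIABLES LIKE 'open_files_limit'; SHOW GLOBAL STATUS LIKE 'Open_files'; SHOW VARIABLES LIKE 'table_cache';"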

    Read the article

  • Reverse SSH tunnel: how can I send my port number to the server?

    - by Tom
    I have two machines, Client and Server. Client (who is behind a corporate firewall) opens a reverse SSH tunnel to Server, which has a publicly-accessible IP address, using this command: ssh -nNT -R0:localhost:2222 [email protected] In OpenSSH 5.3+, the 0 occurring just after the -R means "pick an available port" rather than explicitly calling for one. The reason I'm doing this is because I don't want to pick a port that's already in use. In truth, there are actually many Clients out there that need to set up similar tunnels. The problem at this point is that the server does not know which Client is which. If we want to connect back to one of these Clients (via localhost) then how do we know which port refers to which client? I'm aware that ssh reports the port number to the command line when used in the above manner. However, I'd also like to use autossh to keep the sessions alive. autossh runs its child process via fork/exec, presumably, so that the output of the actual ssh command is lost in the ether. Furthermore, I can't think of any other way to get the remote port from Client. Thus, I'm wondering if there is a way to determine this port on Server. One idea I have is to somehow use /etc/sshrc, which is supposedly a script that runs for every connection. However, I don't know how one would get the pertinent information here (perhaps the PID of the particular sshd process handling that connection?) I'd love some pointers. Thanks!
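
    On the Server side, each reverse forward is a listening socket owned by the sshd child that is handling that client's connection, so if every Client authenticates as its own account you can map port to sshd PID to user with standard tools; a sketch (root is needed to see the PID column):

        # list the tunnel listeners together with the owning sshd PID
        sudo netstat -tlnp | grep sshd
        # map a PID to the account that particular client logged in as
        ps -o user= -p <PID>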

    Read the article

  • Nexenta, NFS and LOCK_EX

    - by Givre
    I'm currently using a LAMP architecture and I'm facing a big problem. I have several HTTP web servers using PHP5, and all of them mount the directory holding the hosted websites via NFS (v3). The file server is a Nexenta Storage Appliance using ZFS. The problem: every NFS client that tries to take a lock on a file over NFS hangs. This is inside the apache2 process:

        open("/nfs/website1/file.txt", O_RDWR|O_CREAT, 0600) = 11647
        fstat(11647, {st_mode=S_IFREG|0600, st_size=23754, ...}) = 0
        flock(11647, LOCK_EX

    The process never gets the lock and keeps waiting forever. The effect: all the apache2 processes end up stuck waiting, and my servers can't handle further requests because no free process is available. I don't know where to look for a solution; to me it looks like a problem on the NFS server side, but which configuration is wrong or missing? How can I find out what is wrong? If you need more information about the configuration, just ask me what would help you most.
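
    On Linux clients with reasonably recent kernels, flock() over NFSv3 is implemented via the NLM (lockd/statd) side-band protocol, so a first check is whether those services are registered and reachable on both the Nexenta head and the clients; a sketch (the nolock mount is only an isolation test, since it silently turns the locks into local no-ops):

        # on a client: are nlockmgr and status registered on the file server?
        rpcinfo -p nexenta-hostname | egrep 'nlockmgr|status'

        # isolation test only: a scratch mount with local locking, to confirm NLM is the blocker
        mount -t nfs -o vers=3,nolock nexenta-hostname:/export/websites /mnt/test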

    Read the article

  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a Firewall/Router server in front of a Web Application server and a Database server. Here's a simple (and technically incorrect) diagram that I used in a previous question to illustrate what I mean. We're now wondering about the specs of our two new machines (the Web App and Firewall servers) and whether we can get away with buying a couple of old servers. (Note: both machines will be running Windows Server 2008 R2.) We're not too concerned about our Firewall/Router server as we're pretty sure it won't be taxed too heavily, but we are interested in our Web App server. I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom on buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focussed on processor speed and memory:

      - Quad-Core Intel Xeon X3323 2.5GHz (2x3MB cache), 1333MHz FSB
      - 16GB DDR2 667MHz

    But when I was looking for a cheap second-hand machine for our Firewall/Router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boatload of RAM in this thing, wouldn't it do for the Web App server and save us a ton of money in the process? For example, what about a second-hand machine with the following specs:

      - 2x Dual-Core AMD Opteron 2218 2.6GHz (2MB cache), 1000MHz HT
      - 16GB DDR2 667MHz

    Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often that they want to reduce their power costs, and that a 2.6GHz processor is still a 2.6GHz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admins thought. Thanks for any advice.

    Read the article

  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, but I would like to see how to make this more flexible and stable. In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, so about once a month it doesn't happen, for reasons out of our control. This heavily impacts our business, since we then run with stale data. In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill ( http://www.github.com/arya/bluepill ), but obviously restarts happen, both automatically and manually, and people forget things or systems mess up. What I would like to track are events that should occur or artifacts that should exist (the existence of a process, the execution of a program, the creation or age of a file) and to be alerted when they don't happen or aren't there. We develop most things in Ruby on Rails, use New Relic, bluepill and Munin, and run on Ubuntu. I've been toying around with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts over 24-26 hours, stuff like that. Is there better tooling to track things that should happen, and to raise alerts if they don't? P.S. I know some things are suboptimal, like manually having to define bluepill for applications and then forgetting to do so. The same goes for the push-based approach of the first application; a dedicated daemon on the client side that we control, managing the push and letting us track its connection to us, might be a much better solution.
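
    For the "should have happened by now" cases, a small Nagios/NRPE-style freshness check is often enough: the nightly push and each background job touch a well-known marker file when they finish, and the check goes critical when that file is missing or older than the allowed window. A sketch (paths and thresholds are placeholders):

        #!/bin/sh
        # check_freshness.sh <file> <max-age-minutes>: exit 2 (CRITICAL) if the marker is missing or stale
        FILE="$1"; MAX_MIN="$2"
        if [ ! -e "$FILE" ]; then
            echo "CRITICAL: $FILE missing"; exit 2
        fi
        if [ -n "$(find "$FILE" -mmin +"$MAX_MIN")" ]; then
            echo "CRITICAL: $FILE older than ${MAX_MIN} minutes"; exit 2
        fi
        echo "OK: $FILE is fresh"; exit 0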

    Read the article

  • Empty $upstream_http_location variable if response was cached

    - by Ivaldi
    I would like to cache the response of a redirect: cache the request to a site that returns a redirect, and also cache the second request that returns the actual content. So far my config looks like this:

        location = /proxy {
            error_page 301 302 307 = @redir;
            resolver 8.8.8.8;
            proxy_pass $arg_url;
            proxy_intercept_errors on;
            proxy_cache pcache;
            proxy_cache_key $arg_url;
            proxy_cache_valid 200 301 302 307 1d;
            proxy_cache_min_uses 1;
            proxy_ignore_client_abort on;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }

        location @redir {
            resolver 8.8.8.8;
            # we need to assign $upstream_http_location to another var in order to use it with proxy_pass
            set $target $upstream_http_location;
            proxy_pass $target;
            proxy_cache predirects;
            proxy_cache_key $upstream_http_location;
            proxy_cache_valid 200 301 302 307 1d;
            proxy_cache_min_uses 1;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }

    It works for the first request, or if I leave the 30x codes out of proxy_cache_valid in the /proxy block, but $target and $upstream_http_location are empty if the response was served from the cache. Is there a nice solution to cache both requests? Thanks!

    Read the article

  • Finding bluetooth link key in Win7, to double pair a device on dualboot computer

    - by Ilari Kajaste
    How can I dig up the bluetooth link key for a paired device in Win7? Is this something that is dependent on the bluetooth stack I'm using (Toshiba), or is there a generic place to store these in Win7? Note: I'm not talking about the six-digit code usually typed by the user during pairing - that is worthless since it's discarded after pairing process. What I mean is the 128-bit link key that the devices exchange during pairing, and use thereafter to encrypt all their bluetooth traffic. Background: I dualboot Win7 / Ubuntu on my laptop, and I would like to have my phone paired to both OS's. Since the dualbooting computer has only one bluetooth adapter and thus only one bluetooth address, I cannot do two pairings to the phone, since on the second pairing (windows) the phone just replaces the previous pairing (linux) to the same bluetooth address. A thread on Ubuntu forums pointed me to what I have to do - pair first on linux, then on windows, and then replace the link key on linux side with the one windows negotiated. I can find the linux side pairing key from /var/lib/bluetooth/[BD_ADDR]/linkkeys - no problems there. However, on windows side I can't find the key. According to the forum post, on windows side the key should be in SYSTEM\ControlSet002\services\BTHPORT\Parameters\Keys\[BD_ADDR] but while that registry key does exist, it has no subkeys. (And a similar registry path in ControlSet001 didn't have any subkeys either.) One thing I've been instructed to do is to capture all events during pairing with Sysinternals Process Monitor. I did this, but I haven't been able to find any useful information from the captured events, not even by exporting the data to a huge XML and grepping that with the BD_ADDRs (with or without colons). So how could I find the link key for a paired device in Win7? Some reference information: Wikipedia: Bluetooth, Security Now: Bluetooth security
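
    The Keys subtree is typically ACL'd so that only SYSTEM can read it, which would explain why it looks empty from an administrator account; a common workaround is to open Registry Editor as SYSTEM with Sysinternals PsExec and look under the same BTHPORT\Parameters\Keys path again. A sketch (this exposes protected key material, so handle it with care):

        psexec -s -i regedit.exe
        rem then browse to HKLM\SYSTEM\CurrentControlSet\services\BTHPORT\Parameters\Keys\<adapter BD_ADDR>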

    Read the article

  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking it. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system. We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:

      1. basic OS installation
      2. basic network configuration, enough to reach the internet and resolve DNS
      3. install puppet and set up certname
      4. start puppet and let it manage the whole configuration
      5. test, fix problems in config (via puppet), re-test, and so on...
      6. manually stop puppet
      7. set up the new network configuration for the datacenter network
      8. move the machine to the DC
      9. turn it on; puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in Puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying the configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle. Is there any problem with this approach? Could it be improved?
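
    For reference, pinning the certname is a one-line setting in puppet.conf, as long as it is in place before the first agent run so the certificate request is generated with that name; the FQDN below is a placeholder:

        # /etc/puppet/puppet.conf
        [agent]
        certname = web01.dc1.example.com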

    Read the article

  • Building a Debian base image

    - by Michael
    Is there a preferred way to create base images for Debian-based customized installations? We are currently going with multistrap, but although it's better than hand-crafted chroot stuff, it still has a lot of edges and corners. Is there a more reliable and less error-prone way to produce a root filesystem of a Debian installation with some additional .debs installed? (I don't want to send out a Debian installer with a preseed file, though.) Addendum 1: To clarify things a bit: we are delivering a kind of software appliance to our customers, that is, a Debian operating system with some additional software packages (both our own and third-party ones) and some configuration changes. To ease the installation process, we have an installer that does nothing more than partitioning, copying files to the partitions and setting up GRUB, so it's basically an image-based installer. In other words, we run the Debian installation ourselves and just distribute the already-installed operating system. The question is about the installation part: I want to have that as easy and robust as possible, and of course it should be an automated process.
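
    If multistrap keeps causing trouble, plain debootstrap plus a chrooted install of the extra packages is a more battle-tested path to the same root filesystem; a minimal sketch (suite, mirror and package names are examples):

        #!/bin/sh
        # build a Debian root filesystem in ./rootfs and add our own .deb inside it
        debootstrap --arch=amd64 stable ./rootfs http://deb.debian.org/debian
        cp our-package_1.0_amd64.deb ./rootfs/tmp/               # example package name
        chroot ./rootfs dpkg -i /tmp/our-package_1.0_amd64.deb
        chroot ./rootfs apt-get update
        chroot ./rootfs apt-get -f install -y                    # pull in its dependencies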

    Read the article

  • Ubuntu 12.04 can't boot after installing with software RAID 1

    - by Bill
    I've been trying to install Ubuntu with software RAID on my server and there is obviously something that I don't understand about the process. This is the guide that I followed: https://help.ubuntu.com/11.04/serverguide/advanced-installation.html I have two identical 1 TB disks in my server. I went through the initial install process and manually set up my partitions. On each disk I set up: (1) 100 MB partition for EFI boot (I didn't originally have this but added it based on a forum post I found after my original install failed to boot, I ended up with EFIboot since that was what the 'guided partitioning' decided to do) (1) 970 MB partition for / (1) 30 MB partition for swap I then created new RAID 1 disks combining the two partitions, one from each disk, such that each partition is mirrored. I then configured their usage as stated above. After saving the configuration I said yes to boot in a degraded state. The rest of the setup went normally, no errors of any kind. I saw GRUB being installed and again no errors. However, after rebooting the server I get the dreaded 'Insert boot media' and nothing happens. I loaded up the recovery disk and the mdadm configuration looks correct. md0 is my EFIBoot partition md1 is my \ partition using ext4 md2 is my swap partition Running file -s /dev/md0 doesn't indicate that GRUB is there and so I attempted to reinstall GRUB using the recovery disk. I selected the md0 disk and it appeared to install just fine. Running file -s /dev/md1 shows the error needs journal recovery, I'm not sure if that's related or not or how to fix that. Rebooting gives me the same problem, no boot media found. I've searched around the internet but can't figure out what to do next or more importantly how to troubleshoot what exactly is going wrong. Thanks!

    Read the article

  • Is there a Mac utility that does low level drive integrity check and repair?

    - by Puzzled Late at Night
    The PGP Whole Disk Encryption for Mac OS X Quick Start User Guide version 10.0 contains the following remarks: PGP Corporation deliberately takes a conservative stance when encrypting drives, to prevent loss of data. It is not uncommon to encounter Cyclic Redundancy Check (CRC) errors while encrypting a hard disk. If PGP WDE encounters a hard drive with bad sectors, PGP WDE will, by default, pause the encryption process. This pause allows you to remedy the problem before continuing with the encryption process, thus avoiding potential disk corruption and lost data. To avoid disruption during encryption, PGP Corporation recommends that you start with a healthy disk by correcting any disk errors prior to encrypting. and As a best practice, before you attempt to use PGP WDE, use a third-party scan disk utility that has the ability to perform a low-level integrity check and repair any inconsistencies with the drive that could lead to CRC errors. These software applications can correct errors that would otherwise disrupt encryption. The PGP WDE Windows user guide suggests SpinRite or Norton Disk Doctor. What recourse do I have on the Mac?
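
    There is no Mac port of SpinRite, but the built-in tools plus smartmontools cover a fair part of the "check the disk before encrypting" advice: diskutil can verify and repair the filesystem and partition structures, and smartctl can read the drive's own reallocated/pending-sector counters, which is where CRC-style trouble tends to show up first. A sketch (volume and device names are examples; smartctl comes from the smartmontools package, e.g. via Homebrew or MacPorts):

        diskutil verifyVolume /                 # read-only check of the boot volume
        diskutil repairVolume /Volumes/Data     # repair a non-boot volume
        diskutil verifyDisk disk0               # partition map check of the whole disk
        smartctl -a /dev/disk0                  # SMART health and error counters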

    Read the article

  • connections in FIN_WAIT and CLOSE_WAIT state

    - by Raj
    Let me describe the setup so you can understand the question and answer more accurately. I have HAProxy as load balancer, 4 web servers (Apache 2.2.3) and one database server (MySQL 5), and I monitor these servers with Nagios. I have disabled keepalive on Apache, as we have only 8 GB of memory. Now, whenever I receive alerts for high memory and CPU utilization, I observe that the connections from Apache to the database server hang in ESTABLISHED state (keepalive with a timeout value of 7200), while on the other side the connections between HAProxy and Apache show FIN_WAIT on the HAProxy server and CLOSE_WAIT on the Apache side. I also see heavy memory swapping, with Apache taking most of the memory. I ran strace on an Apache process and did not get any information; strace attaches to the process but produces no output. The process list on the MySQL server shows those connections in Sleep state. The application on the web servers is Magento, a PHP application. If you need further information, please let me know. Thanks.
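
    When the alerts fire, a quick per-state connection count on each tier (the HAProxy box, the web servers and the DB box) usually shows where the sockets are piling up and whether the CLOSE_WAIT side keeps growing; a sketch:

        # count TCP connections by state
        netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn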

    Read the article

  • Permanent solution to Win XP SP3 window animation removal

    - by epale85
    Hello everyone. How can I get rid of the window animation (seen when you minimise or maximise a window) in Windows XP Service Pack 3 permanently? I have tried the following solutions:

      1. I went to the Control Panel, adjusted the visual effects, and unchecked the "Animate windows when maximising and minimising" option.
      2. I tried using the Windows PowerToys TweakUI to disable the animation.
      3. I even tried the registry approach ("Turn Off Window Animation"): open RegEdit, go to HKEY_CURRENT_USER\Control Panel\Desktop\WindowMetrics, create a new string value "MinAnimate", and set its value data to 0 for off or 1 for on.

    Still no help. The big problem is that the window animation disappears for a while but returns again some time later. When I navigate back to the visual-effects window, the checkbox for "Animate windows when maximising and minimising" is checked again. Thank you very much.
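
    For scripting the registry half of this (for example, re-applying the setting at every logon in case something else keeps resetting it), the equivalent one-liner is below; MinAnimate is a string value and 0 means off:

        reg add "HKCU\Control Panel\Desktop\WindowMetrics" /v MinAnimate /t REG_SZ /d 0 /f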

    Read the article

  • Can't login after upgrading to Windows 8.1

    - by flatline
    This afternoon I upgraded my work laptop from Windows 8 to Windows 8.1. I previously had a local account, but after the upgrade it prompted me to enter my Windows account credentials, which I had set up beforehand at some point. I entered my password and clicked Next, went through another screen or two, grew tired of the process, and clicked whatever the equivalent of a "skip this step" button I was presented with. Now I can't log in: not with my (previous) local account password, and not with my Windows account password. It's a Dell with biometric identification, which I had set up previously, so I put my finger on the reader and it complained that I couldn't use that fingerprint because I had changed my password. But I hadn't wittingly changed my password at all. I assume that what happened is that, by entering my credentials, my local account was tied to the Windows account, but because I cancelled the process partway through, something went wrong and I cannot log in. A few questions: 1) How do I log in with my Windows account credentials? Should LOCALMACHINENAME\username, which was my previous login method, still work for the Windows account? When I booted to safe mode it prompted me with WindowsAccount\myemailaddress, which allowed me to log in there, but the regular login doesn't accept the '@' symbol. 2) Is there any way to make that account local-only again? I can't find any way of doing it. 3) I managed to enable the local administrator account and get back into the box; failing all else, is there a quick way to migrate my old profile over to a new user?

    Read the article

  • Processing of Group Policy fails only on 2008 servers, with name resolution failure on the current domain controller

    - by Ken Wolfrom
    We spent the last 3 months doing an upgrade from a 2003 domain to a 2008 R2 domain. Our last DC was rebuilt (5 total) and brought online. After it was put online, some of our 2008 and 2008 R2 servers (10 now) started getting these errors in the event logs:

        The processing of Group Policy failed. Windows could not resolve the user name. This could be caused by one or more of the following: a) Name Resolution failure on the current domain controller. b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).

    We can duplicate this if we drop to a command prompt and run GPUPDATE manually. When our users attempt to access the shared drive \\directory\shared on an affected server, they get the error "There are currently no logon servers available to service the logon request." This only affects the 2008 OS, and it is a random set of about 10 servers out of some 30 running this OS. The services on the machines are running OK, and we are able to log in with domain\user at the console and via RDP. We can log onto an affected machine, get to \\domainname\sysvol and see the GPOs. We have checked the replication topology of the domain and it states that all servers can replicate with no errors. We went back to the last DC, demoted it, removed DNS, then removed it from the domain and waited 24 hours, and the issue still persists. We picked one server, removed it from the domain, rebooted, and added it back to the domain with no problems, but it still shows this behavior. Bottom line: we have some servers where UDP-based client/server operations and GPO processing fail, but TCP-related items seem to work fine; HTTP, SQL and Oracle database connections all connect and process. Any input on possible reasons for this issue, and fixes? It is only affecting the 2008 servers in a 2008 R2 domain.
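
    Since the symptom pattern (GPO processing and logon-server lookups failing while TCP services work) points at DC location and UDP traffic such as DNS and Kerberos, a few stock Windows checks on an affected server can help narrow it down; the domain and DC names below are placeholders:

        nltest /dsgetdc:yourdomain.local                          (which DC and site the box locates)
        nslookup -type=SRV _ldap._tcp.dc._msdcs.yourdomain.local  (are the DC SRV records resolvable?)
        dcdiag /s:YOURDCNAME /test:dns                            (DNS health as seen against a DC)
        gpupdate /force                                           (reproduce while a network capture runs)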

    Read the article

  • Giving the root user priority to maintain Debian (while server collapsing under heavy load)

    - by Saix
    Is there any way to set up Debian to prioritize root's activity (or specific root tasks) over everything else? For instance, several times per year something goes wrong (usually human error overstressing apache/mysql) and the system becomes unresponsive under a heavy load of around 200 (8-core CPU). I know there are limits to make PHP scripts run for a while and then be killed, but that's not the answer here, because this limit has to be at least 45 minutes long. The problem is that, until I'm able to log in via SSH and get apache/mysql restarted under this server stress, it nearly hits those 45 minutes anyway. Also, a hardware restart usually triggers fsck at boot time on all hard drives, since it's usually been a long time since the box was last restarted. I was told it's really not a good idea to disable fsck, but then again, it takes more than an hour to complete. What is the fastest way to restart apache/mysql? Is there any way to give SSH users or the root user higher priority, so that logging in and completing these restart (or rather stop) commands wouldn't take so long? One idea comes to mind: use nice for apache/mysql, but I can't risk limiting those two vital apps 24/7, or can I? I'm a little bit afraid that some other system process (any backup process, swap if any, etc.) would then slow the pages down too much. There is a pretty heavy PHP framework with 20k visits a day, so it needs every hardware/software resource available. I can't throttle it the whole time, just at certain points when the system gets unresponsive, so that I can maintain it.
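
    One low-risk piece of this is to leave Apache/MySQL alone and instead make the login path itself more resilient: give sshd (and the shells it spawns, which inherit the priorities) a better CPU and I/O priority so you can still get in and issue the restart when load explodes. A sketch, e.g. appended to /etc/rc.local or the ssh init script; the values are examples:

        # boost sshd so interactive logins stay usable under heavy load
        renice -n -10 -p $(pgrep -x sshd)
        ionice -c2 -n0 -p $(pgrep -x sshd)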

    Read the article
