Search Results

Search found 13697 results on 548 pages for 'linking errors'.

  • EventID 1058 Code 5, Sysvol is subdir of Sysvol - how to fix?

    - by nulliusinverba
    I have been trying to resolve this error, like many others:

        The processing of Group Policy failed. Windows attempted to read the file
        \\domain.local\SysVol\domain.local\Policies\{3EF90CE1-6908-44EC-A750-F0BA70548600}\gpt.ini
        from a domain controller and was not successful. Group Policy settings may not be applied
        until this event is resolved. This issue may be transient and could be caused by one or
        more of the following:
        a) Name Resolution/Network Connectivity to the current domain controller.
        b) File Replication Service Latency (a file created on another domain controller has
           not replicated to the current domain controller).
        c) The Distributed File System (DFS) client has been disabled.
        Error code: 5 = Access Denied.

    An incredibly helpful post is this one (http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/2003_Server/A_1073-Diagnosing-and-repairing-Events-1030-and-1058.html). Quoting its list of potential problems that can lead to 1030 and 1058 event errors:

        -- Sometimes the permissions of the file folders that contain Group Policies
           (the Sysvol folder) can be corrupted.
        -- Sometimes you have problems with NetBIOS.
        -- Sometimes the GPO itself is corrupt, or you have a partial set of data for that GPO.
        -- Sometimes you may have problems with File Replication Services, which almost always
           indicates a problem with DNS.
        -- Sysvol may be a subfolder of itself: Sysvol/Sysvol

    I have the last problem: Sysvol is a subfolder of Sysvol. The directory structure is:

        sysvol
            domain
            staging
            staging areas
            sysvol              <- shared as "\\server\sysvol"
                domain.local
                    ClientAgent
                    Policies
                    scripts

    Interestingly, the second sysvol folder is the one that is shared as "\\server\sysvol". This makes me confident that the issue is with the permissions, hence error code 5. Also interestingly, my Server 2008 R2 machines can see it fine; my Server 2008 machines cannot, and get the error. This is consistent across all my servers, which leaves me uncertain what I need to do to fix this. Do I, e.g., simply move the shared sysvol folder up a level to replace the non-shared one? Any help greatly appreciated. Cheers, Tim.
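
    Before moving any folders, a couple of read-only checks can confirm what the share actually points at and whether Group Policy processing can reach it (a sketch; "server" and "domain.local" are the names quoted above):

        C:\> net share SYSVOL
        C:\> dir \\server\SYSVOL\domain.local\Policies
        C:\> dcdiag /test:netlogons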

  • Corrupted NTFS Drive showing multiple unallocated partitions

    - by volting
    My external HDD with a single NTFS partition was accidentally plugged out (kids!)... and is now corrupted. I've tried running ntfsfix, with no luck; output below. When I look at the disk under Disk Management in Windows 7, it shows up as having 5 partitions, 2 of which are unallocated. None have drive letters and it is not possible to set any (that option and most others are greyed out), so I can't run chkdsk /f. I've tried using MiniTool Partition Wizard, which was mentioned as a solution to another similar question here. It showed the whole drive as one partition, but as unallocated, and the option "Check File System" was greyed out. Is there anything else I could try?

    Output of fdisk -l:

        Disk /dev/sdb: 1500.3 GB, 1500299395072 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930272256 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x69205244

        This doesn't look like a partition table
        Probably you selected the wrong device.

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   ?   218129509  1920119918   850995205   72  Unknown
        /dev/sdb2   ?   729050177  1273024900   271987362   74  Unknown
        /dev/sdb3   ?   168653938   168653938           0   65  Novell Netware 386
        /dev/sdb4      2692939776  2692991410       25817+   0  Empty

        Partition table entries are not in disk order

    Output of ntfsfix:

        me@vaio:/dev$ sudo ntfsfix /dev/sdb
        Mounting volume... ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        FAILED
        Attempting to correct errors... ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        FAILED
        Failed to startup volume: Input/output error
        Checking for self-located MFT segment... ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        OK
        ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        Volume is corrupt. You should run chkdsk.
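
    With the $MFT unreadable, a cautious next step is to image the disk first and run recovery tools against the copy, so nothing else ever writes to the original (a sketch; assumes a second disk with roughly 1.5 TB free mounted at /mnt/rescue and the gddrescue/testdisk packages):

        sudo apt-get install gddrescue testdisk
        # clone the failing disk, skipping the slow scraping pass for now
        sudo ddrescue -f -n /dev/sdb /mnt/rescue/sdb.img /mnt/rescue/sdb.log
        # search the image for the lost NTFS partition and its backup boot sector
        sudo testdisk /mnt/rescue/sdb.img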

  • Outlook / Gmail 'too many simultaneous connections' error

    - by sam
    I'm just setting up Outlook for Mac, and I'm trying to add a Google Apps for Business email account (Gmail). I've set it up correctly (the same details worked in Mac Mail), but I keep getting two errors: either a "too many simultaneous connections" message, or just an error asking for the username and password again. Just to confirm, the username and password are correct, although when I go into Tools > Accounts and look in the password field for that account, it's blank. If I just click cancel on the popup asking for my username and password, it continues to get mail in the background for about 30 seconds before asking for the password again, or showing the above error, which I can click "yes" to, and again it will get the mail. But after 30 seconds it does the same thing.

    I've got two other accounts set up fine: one a Horde account (hosted webmail using POP3) and the other an iCloud .me account running on IMAP. What might be causing this, and how can I remedy it? A bit more background: the machine is a MacBook Pro running Mac OS X v10.7 (Lion).

    Update 2013-11-02: I've updated Outlook to SP3, but I still get the same error.

  • vmware player won't run on CentOS due to missing /dev/vmmon, what could be the problem?

    - by Graphics Noob
    So I've tried installing VMware Player 3.1.4 and 3.1.3, and both times had the same problem: when I try to load a VM I get the error "Could not open /dev/vmmon". When I ls /dev/ I can see there is no "vmmon" device present. When I try running:

        sudo /etc/init.d/vmware start

    I get the output:

        Starting VMware services:
           VMware USB Arbitrator                                   [  OK  ]
           Virtual machine monitor                                 [FAILED]
           Virtual machine communication interface                 [  OK  ]
           VM communication interface socket family                [  OK  ]
           Blocking file system                                    [  OK  ]
           Virtual ethernet                                        [FAILED]

    which shows that the virtual machine monitor fails to load. I tried following the advice on this site and ran:

        vmware-modconfig --console --install-all

    I noticed there are no errors during the compilation, but at the end I get the message:

        Starting VMware services:
           VMware USB Arbitrator                                   [  OK  ]
           Virtual machine monitor                                 [FAILED]
           Virtual machine communication interface                 [  OK  ]
           VM communication interface socket family                [  OK  ]
           Blocking file system                                    [  OK  ]
           Virtual ethernet                                        [  OK  ]
        Unable to start services

    Out of curiosity I tried:

        sudo /sbin/insmod /lib/modules/2.6.18-238.9.1.el5xen/misc/vmmon.ko

    but got the error message:

        insmod: error inserting 'vmmon.ko': -1 Invalid module format

    I have a feeling this may be the root of the problem, but I don't know what could be causing it or how to fix it.
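
    "Invalid module format" usually means the module was built against a different kernel than the one running. A quick way to check, and a hedged fix (assumes the matching devel package is available for the running Xen kernel):

        uname -r                 # the kernel actually running
        modinfo /lib/modules/2.6.18-238.9.1.el5xen/misc/vmmon.ko | grep vermagic
        # if the two disagree, install headers for the running kernel and rebuild:
        sudo yum install kernel-xen-devel
        sudo vmware-modconfig --console --install-all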

  • How to migrate Fedora DS (389 DS) to a new machine?

    - by zengr
    Hello, I am trying to migrate a Fedora DS (1.2.2) installation to a new server (1.2.7.5). The process has been painful, to say the least. The old server (1.2.2) was itself an upgrade from an older Fedora DS setup, so it does not contain migrate-ds-admin.pl. I found this question, but the URL does not open. I am aware that I need to use migrate-ds-admin.pl, but I am clueless. How do I use it? I assume it works like this:

        1. Copy migrate-ds-admin.pl from the server which has 1.2.7 to the 1.2.2 server.
        2. Run migrate-ds-admin.pl to export the schema + LDIF from 1.2.2.
        3. Import the schema + LDIF into 1.2.7 using migrate-ds-admin.pl.

    If the above is true, what parameters are needed for the export and import? Note:

        ./ldif2db -n NetscapeRoot -i /root/NetscapeRoot.ldif
        ./ldif2db -n userRoot -i /root/userRoot.ldif

    The above two commands work like a charm, but since the (custom) schema is not migrated, I see a lot of errors during import.
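
    Since ldif2db already works, one hedged shortcut is to export the data with db2ldif and carry the custom schema across by hand; custom definitions normally live in 99user.ldif under the instance's schema directory (instance name below is a placeholder):

        # on the old 1.2.2 server: export each backend
        ./db2ldif -n NetscapeRoot -a /root/NetscapeRoot.ldif
        ./db2ldif -n userRoot -a /root/userRoot.ldif
        # copy the custom schema to the new server's instance
        scp /etc/dirsrv/slapd-INSTANCE/schema/99user.ldif \
            newserver:/etc/dirsrv/slapd-INSTANCE/schema/
        # on the new server: restart so the schema loads, then import
        service dirsrv restart
        ./ldif2db -n userRoot -i /root/userRoot.ldif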

  • Firefox window disappears

    - by Lord Torgamus
    Now this is odd. At some point in the last half hour, my Firefox window disappeared. I didn't notice, as I was working in another program at the time. No Firefox icon shows up with Alt-Tab, and no Firefox listing shows up under the Applications tab in the Task Manager. There is a Firefox entry under the Processes tab.

    Normally, I probably wouldn't have noticed, just opened Firefox up again, but I'm listening to an Internet radio station and the stream never stopped. When I did open a new Firefox window, it showed up in the Task Manager's Applications tab.

    I'm running Windows XP, and my Firefox has the add-ons Adblock Plus, BetterPrivacy, Cert Viewer Plus, DOM Inspector, Firebug, Greasemonkey, Java Quick Starter, Live HTTP headers, Microsoft .NET Framework Assistant, NoScript, WebDeveloper and XPather. The radio station is Slacker; it's never given me any trouble before, and I've been using it for months. I don't think there was anything unusual in my open tabs; just a few static pages at non-sketchy sites like Java APIs, plus GMail and the aforementioned Slacker.

    Googling brought up a handful of similar-but-not-quite-the-same errors, none of which had useful resolutions. Does anyone know how to bring that window back and/or prevent this from happening again?

  • Exchange 2010 Hub Transport Role Fails - Registry Keys Missing?

    - by DKNUCKLES
    I've inherited an attempted Exchange 2010 implementation from a colleague that apparently failed. I've almost managed to bring it back from the dead, but the Hub Transport role fails to install with the following error:

        [10/06/2012 02:30:44.0119] [2] Beginning processing Set-LocalPermissions -Feature:'Bridgehead'
        [10/06/2012 02:30:44.0166] [2] [ERROR] Unexpected Error
        [10/06/2012 02:30:44.0166] [2] [ERROR] The registry key "SOFTWARE\Microsoft\ExchangeServer\v14\Transport" does not exist under "HKEY_LOCAL_MACHINE".
        [10/06/2012 02:30:44.0182] [2] Ending processing Set-LocalPermissions
        [10/06/2012 02:30:44.0182] [1] The following 1 error(s) occurred during task execution:
        [10/06/2012 02:30:44.0182] [1] 0.  ErrorRecord: The registry key "SOFTWARE\Microsoft\ExchangeServer\v14\Transport" does not exist under "HKEY_LOCAL_MACHINE".
        [10/06/2012 02:30:44.0182] [1] 0.  ErrorRecord: System.ArgumentException: The registry key "SOFTWARE\Microsoft\ExchangeServer\v14\Transport" does not exist under "HKEY_LOCAL_MACHINE".
           at Microsoft.Exchange.Management.Deployment.SetLocalPermissions.GetTargetRegistryKey(XmlNode targetNode)
           at Microsoft.Exchange.Management.Deployment.SetLocalPermissions.ChangePermissions[TTarget,TSecurity,TAccessRule,TRights](XmlNode targetNode, Dictionary`2 rightsDictionary, GetTarget`1 getTarget, GetOrginalPermissionsOnTarget`2 getOrginalPermissionsOnTarget, SetPermissionsOnTarget`2 setPermissionsOnTarget, CreateAccessRule`2 createAccessRule, AddAccessRule`2 addAccessRule, RemoveAccessRuleAll`1 removeAccessRuleAll)
           at Microsoft.Exchange.Management.Deployment.SetLocalPermissions.SetPermissionsOnCurrentLevel[TTarget,TSecurity,TAccessRule,TRights](XmlNode permissionSetNode, String targetType, Dictionary`2 rightsDictionary, GetTarget`1 getTarget, GetOrginalPermissionsOnTarget`2 getOrginalPermissionsOnTarget, SetPermissionsOnTarget`2 setPermissionsOnTarget, CreateAccessRule`2 createAccessRule, AddAccessRule`2 addAccessRule, RemoveAccessRuleAll`1 removeAccessRuleAll)
           at Microsoft.Exchange.Management.Deployment.SetLocalPermissions.SetPermissionsOnCurrentLevel(XmlNode permissionSetNode)
           at Microsoft.Exchange.Management.Deployment.SetLocalPermissions.SetFeaturePermissions(String feature)
           at Microsoft.Exchange.Management.Deployment.SetLocalPermissions.InternalProcessRecord()
        [10/06/2012 02:30:44.0197] [1] [ERROR] The following error was generated when "$error.Clear(); Set-LocalPermissions -Feature:"Bridgehead" " was run: "The registry key "SOFTWARE\Microsoft\ExchangeServer\v14\Transport" does not exist under "HKEY_LOCAL_MACHINE".".
        [10/06/2012 02:30:44.0197] [1] [ERROR] The registry key "SOFTWARE\Microsoft\ExchangeServer\v14\Transport" does not exist under "HKEY_LOCAL_MACHINE".
        [10/06/2012 02:30:44.0197] [1] [ERROR-REFERENCE] Id=BridgeheadLocalPermissionsComponent___2e2dbc2a97cb4429bc2074edc50bedbd Component=EXCHANGE14:\Current\Release\Shared\Datacenter\Setup
        [10/06/2012 02:30:44.0197] [1] Setup is stopping now because of one or more critical errors.
        [10/06/2012 02:30:44.0197] [1] Finished executing component tasks.
        [10/06/2012 02:30:44.0244] [1] Ending processing Install-BridgeheadRole

    I've been unable to find any documentation on how to resolve this issue. Any help would be appreciated.
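
    One workaround people have tried for this class of setup failure is to create the missing key by hand and rerun setup for the role; a hedged sketch, with a registry backup first (the key path is taken from the error text above):

        reg add "HKLM\SOFTWARE\Microsoft\ExchangeServer\v14\Transport" /f
        setup.com /mode:install /roles:HubTransport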

  • Old scheduled task still being started, but can't find it.

    - by JvO
    System: Windows XP Home
    Summary: Some scheduled task is still being started by Windows, but I can't find it, nor determine where its configuration has been stored.

    This is turning into a mystery for me... I set up a Windows XP Home machine to run a task at 7:00 AM, using the task scheduler. This was a clean install with no users defined, so you got straight to the desktop after starting the machine. The filesystem uses NTFS. Later on, I needed to introduce users, so I created one (named Sam) with administrator privileges. After this I noticed that the scheduled task failed, most likely due to privilege errors (i.e. it can't write to a network drive). So I want to delete the old task and add it again with the correct user credentials.

    However.... I can't find the old task! I know it is still being executed at 7:00 AM, but there's no mention of it anywhere on the system. I've looked in C:\WINDOWS\Tasks for .job files, but there's only the "MP Scheduled Scan.job" from Security Essentials. I've searched the whole disk for mention of the batch file that is being run, but can't find it. So why is this old task still running, and more importantly, why can't I find it? Would it have something to do with introducing users on XP?
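
    A few places XP can hide scheduled jobs besides the Explorer view of C:\WINDOWS\Tasks, worth checking from a command prompt (AT-style jobs and the scheduler's registry state don't always show up there):

        schtasks /query /v /fo LIST
        at
        dir /a C:\WINDOWS\Tasks
        reg query "HKLM\SOFTWARE\Microsoft\SchedulingAgent"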

  • Problems Installing slapd On Ubuntu Server 11.10

    - by Zach Dziura
    I know that there's an Ubuntu-specific Stack Exchange website, but I thought that I'd ask here because it's a server-specific question. If I'm wrong in my logic... well, you people are better at this than I am! O=) On with the show!

    I'm in the process of installing Oracle Database 11g R2 Standard Edition onto Ubuntu Server 11.10. I found a guide on the Oracle Support Forums that walks you through the process fairly easily. Unfortunately, I'm running into issues installing one particular dependency: slapd. When I go to install it, I get this error message:

        (Reading database ... 64726 files and directories currently installed.)
        Unpacking slapd (from .../slapd_2.4.25-1.1ubuntu4.1_amd64.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ufw ...
        Processing triggers for ureadahead ...
        Setting up slapd (2.4.25-1.1ubuntu4.1) ...
        Usage: slappasswd [options]
          -c format      crypt(3) salt format
          -g             generate random password
          -h hash        password scheme
          -n             omit trailing newline
          -s secret      new password
          -u             generate RFC2307 values (default)
          -v             increase verbosity
          -T file        read file for new password
        Creating initial configuration...
        Loading the initial configuration from the ldif file () failed with
        the following error while running slapadd:
        str2entry: invalid value for attributeType olcRootPW #0 (syntax 1.3.6.1.4.1.1466.115.121.1.15)
        slapadd: could not parse entry (line=1051)
        dpkg: error processing slapd (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         slapd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    After much Google searching and forum trawling, I have yet to find a definitive answer as to what's going wrong. The error messages seem straightforward enough, but I have no idea how to debug this. Can anyone offer some assistance? Again, if I'm asking in the wrong place, I apologize. If I'm indeed asking properly, then thank you for any and all help!
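
    Reading the output above, slappasswd printed its usage text instead of a password hash, so the generated config appears to have ended up with an empty olcRootPW value, which slapadd then rejected. A hedged recovery sketch (assumes there is no existing LDAP data worth keeping):

        sudo apt-get purge slapd
        sudo rm -rf /var/lib/ldap /etc/ldap/slapd.d
        sudo apt-get install slapd
        sudo dpkg-reconfigure slapd   # supply a non-empty admin password when prompted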

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting a current pending sector count of 45. I have used badblocks to identify the sectors, and I have been trying to write zeros to them with dd. From what I understand, when I write data directly to a bad sector, it should trigger a reallocation, reducing the current pending sector count by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector.

        # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
        dd: error writing ‘/dev/sdb’: Input/output error

    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine; I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:

        Model Family:     Western Digital Caviar Green (AF)
        Device Model:     WDC WD15EARS-00Z5B1
        Serial Number:    WD-WMAVU3027748
        LU WWN Device Id: 5 0014ee 25998d213
        Firmware Version: 80.00A80
        User Capacity:    1,500,301,910,016 bytes [1.50 TB]
        Sector Size:      512 bytes logical/physical
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS (minor revision not indicated)
        SATA Version is:  SATA 2.6, 3.0 Gb/s
        Local Time is:    Fri Oct 18 17:47:29 2013 CDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions: Why aren't the reallocations being recorded by the disk? (I'm assuming reallocation took place, as I can now write data directly to the sectors and couldn't before.) Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?
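
    One observation from the "Model Family" line above: this is an Advanced Format drive, so a lone 512-byte write can force an internal read-modify-write of neighbouring sectors, which might be why single-sector dd writes failed where shred's long sequential writes succeeded; that is an inference, not a certainty. To watch whether a reallocation actually happens on one known-bad LBA, a sketch (destructive for that sector; the LBA is the one from the dd command above):

        sudo smartctl -A /dev/sdb | egrep 'Reallocated|Current_Pending'
        # hdparm can rewrite a single sector below the block layer:
        sudo hdparm --write-sector 217152 --yes-i-know-what-i-am-doing /dev/sdb
        sudo smartctl -A /dev/sdb | egrep 'Reallocated|Current_Pending'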

  • csvde doesn't import users

    - by The Eighth Ero
    I have a small problem. As I'm a beginner at server management, I installed a domain controller on my Windows Server 2008 machine and created three OUs. Now I'm trying to add users to each OU via the csvde command, but I get the following as the result of the operation, without any errors being mentioned:

        C:\csvde>csvde -i -f List.csv
        Connecting to "(null)"
        Logging in as current user using SSPI
        Importing directory from file "List.csv"
        Loading entries.
        0 entries modified successfully.

    Below is the CSV file I'm using to add 2 users to the "Offshoring1" OU; the domain name is "iado.lan".

        DN                                      objectClass  sAMAccountName  sn  givenName  userPrincipalNAme
        cn=BB NN,ou=Offshoring1,dc=iado,dc=lan  user         BB              NN  BB         [email protected]
        cn=II YY,ou=Offshoring1,dc=iado,dc=lan  user         II              YY  II         [email protected]

    And this is the CSV data as generated by Word 2011 on my Mac:

        DN;objectClass;sAMAccountName;sn;givenName;userPrincipalNAme
        cn=BB NN,ou=Offshoring1,dc=iado,dc=lan;user;BB;NN;BB;[email protected]
        cn=II YY,ou=Offshoring1,dc=iado,dc=lan;user;II;YY;II;[email protected]

    I do use the -k option to force the import, but still no success.
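
    csvde parses comma-separated files, so a semicolon-delimited export is likely read as one giant column and imports nothing, which would match the "0 entries modified" output above. A hand-fixed List.csv would look like this (a sketch of the format, not a tested file; note the DNs must be quoted because they contain commas):

        DN,objectClass,sAMAccountName,sn,givenName,userPrincipalName
        "cn=BB NN,ou=Offshoring1,dc=iado,dc=lan",user,BB,NN,BB,[email protected]
        "cn=II YY,ou=Offshoring1,dc=iado,dc=lan",user,II,YY,II,[email protected]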

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web Services. Our database is a Multi-AZ RDS MySQL server running 5.1.57, and 3-4 app servers talk to it. Today, we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction"; almost 1% of POST requests are seeing this.

    There have been no modifications to the code running on the site. There have been no schema changes. We haven't had a big spike in traffic. I've been looking at the processes running, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect.

    Two days ago, Amazon had some outages. As part of the recovery from that, our RDS server and our app servers ended up in different availability zones, but all within the same region. But yesterday everything was fine, so I'm not convinced that's related.

    The lock timeouts are in different types of requests and occur in different InnoDB tables. I have noticed the number of open connections jumped when we started seeing problems, but they may be a symptom and not a cause. What are my next steps in debugging this?
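
    A snapshot of InnoDB's lock state taken while the timeouts are happening usually points at the blocking transaction (endpoint and credentials below are placeholders):

        mysql -h rds-endpoint -u admin -p -e "SHOW ENGINE INNODB STATUS\G" > innodb-status.txt
        mysql -h rds-endpoint -u admin -p -e "SHOW FULL PROCESSLIST"      > processlist.txt
        # the TRANSACTIONS section of innodb-status.txt lists waiting transactions
        # and the locks they wait on; on 5.1 with the InnoDB plugin the
        # information_schema.INNODB_TRX / INNODB_LOCK_WAITS views exist as well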

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04.

    When I have a working configuration and the server starts successfully, everything seems fine. However, if I stop and then restart the varnish daemon for whatever reason, it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted, then 7 times out of ten it will run, but occasionally it will just shut down "silently".

    My startup command is (the $1 allows me to pass -d to the script this lives in):

        varnishd -a :80 $1 \
            -T 127.0.0.1:6082 \
            -s malloc,1GB \
            -f /home/deploy/mysite.vcl \
            -u deploy \
            -g deploy \
            -p obj_workspace=4096 \
            -p sess_workspace=262144 \
            -p listen_depth=2048 \
            -p overflow_max=2000 \
            -p ping_interval=2 \
            -p log_hashstring=off \
            -h classic,5000009 \
            -p thread_pool_max=1000 \
            -p lru_interval=60 \
            -p esi_syntax=0x00000003 \
            -p sess_timeout=10 \
            -p thread_pools=1 \
            -p thread_pool_min=100 \
            -p shm_workspace=32768 \
            -p thread_pool_add_delay=1

    and the VCL looks like this:

        # nginx/passenger server, HTTP:81
        backend default {
            .host = "127.0.0.1";
            .port = "81";
        }

        sub vcl_recv {
            # Don't cache the /useradmin or /admin path
            if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
                pipe;
            }
            # If cache is 'regenerating' then allow for old cache to be served
            set req.grace = 2m;
            # Forward to cache lookup
            lookup;
        }

        # This should be obvious
        sub vcl_hit {
            deliver;
        }

        sub vcl_fetch {
            # See link #16, allow for old cache serving
            set obj.grace = 2m;
            if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
                deliver;
            }
            remove obj.http.Set-Cookie;
            remove obj.http.Etag;
            set obj.http.Cache-Control = "no-cache";
            set obj.ttl = 7d;
            deliver;
        }

    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such an inconsistent behaviour.
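
    Two cheap checks after a silent exit: whether the kernel's OOM killer took the child (malloc,1GB plus a 5,000,009-bucket classic hash is a lot for a small VM), and whether a reduced-memory run is stable. A sketch, reusing the addresses above:

        # did the kernel kill it?
        dmesg | grep -i -e oom -e 'out of memory'
        # minimal run with a smaller cache, to see if the flakiness disappears:
        varnishd -d -a :80 -T 127.0.0.1:6082 -s malloc,256MB -f /home/deploy/mysite.vcl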

  • then an error occurred during the login process - Connection Error 233

    - by scott brunner
    We have SQL Server 2008 installed on 64-bit Windows Server 2003. When we try to connect to the local SQL Server using SQL Server Management Studio at the console, we get the error:

        A connection was successfully established with the server, but then an error
        occurred during the login process. (provider: Shared Memory Provider, error: 0 -
        No process is on the other end of the pipe.)

    When we try TCP from the same local SSMS to the local server, we get the same error, but instead of the pipe message it's something like "connection forcibly closed".

    Now, here is the strange part: we CAN connect to this SQL Server from any other machine on the network using SSMS, AND we CAN'T connect to ANY SQL Server from the problem server. So it seems the SQL Server instance is fine and accepting remote connections; however, the SSMS on that machine will not connect to any SQL Server, even remotely. When we try an ADO.NET connection from C# remotely, we can connect; run that same code on the console of the trouble server and we get the same errors. How can this be solved?
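
    Since every client on the box fails against every server, forcing the protocol per connection can narrow down which client layer is broken; sqlcmd accepts protocol prefixes (a sketch; default-instance names and port are assumptions):

        REM Shared Memory, TCP, and Named Pipes in turn:
        sqlcmd -S lpc:localhost -E
        sqlcmd -S tcp:localhost,1433 -E
        sqlcmd -S np:\\.\pipe\sql\query -E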

  • Using IIS6 to run kill process. Executable hangs

    - by David
    I'm using the following code (and have tried many variations) in a web page that is supposed to kill a process on the server:

        Process scriptProc = new Process();
        SecureString password = new SecureString();
        password.AppendChar('p');
        password.AppendChar('s');
        password.AppendChar('s');
        password.AppendChar('w');
        password.AppendChar('d');
        scriptProc.StartInfo.UserName = "mylocaluser";
        scriptProc.StartInfo.Password = password;
        scriptProc.StartInfo.FileName = @"C:\WINDOWS\System32\WScript.exe";
        scriptProc.StartInfo.Arguments = @"c:\windows\system32\killMyApp.vbs";
        scriptProc.StartInfo.UseShellExecute = false;
        scriptProc.Start();
        scriptProc.WaitForExit();
        scriptProc.Close();

    The VBS file is supposed to kill a w3wp.exe process, but it never works, and there are no errors in the application log. It works locally. I noticed WScript.exe is in Task Manager every time I run the page, and never goes away. The WScript.exe process (and I have tried others, such as psexec.exe) is being run as a local user with admin rights (and I have tried other types of users, including domain admins) when run from IIS, but it works when run from the command line on the server.
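
    One detail that often bites here: WScript.exe is the windowed script host and reports errors via dialog boxes, which in a non-interactive IIS session can leave it hanging forever with nobody to click OK. CScript.exe is the console host; a hedged thing to try, with the same script path as above (//B batch mode suppresses all prompts):

        C:\WINDOWS\System32\CScript.exe //nologo //B c:\windows\system32\killMyApp.vbs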

  • Windows 7 Install: No drives were found

    - by Albert Bori
    I was building a computer for my wife with an older SATA hard drive that I had lying around, and when attempting to do a new install of Windows 7 on it, the installer says: "No drives were found. Click Load Driver to provide a mass storage driver for installation."

    I ran the diskpart command list volume, and the drive showed up as "Raw". So I formatted it to NTFS, after which it showed up as a healthy drive in diskpart. I also ran check disk on it with no errors. The Windows 7 installer STILL can't find the drive.

    As far as BIOS settings go, I have tried "Native IDE", AHCI, and mixed AHCI/IDE mode (SATA slots 0-2 AHCI, 3-4 IDE). I tried all combinations... still "no drives were found". At this point, I'm just scratching my head. Using the installation DOS window, I can see and talk to the drive just fine, but the installer just doesn't see it at all. I've even written folders and files to the drive, and it still "can't be seen". Any help would be great.

    Items of interest:
    Motherboard model: Gigabyte GA-A75M-UD2H - BIOS Version F5 (latest)
    Hard drive model: 80GB Seagate Barracuda 7200.7 ST380817AS (no other drives)
    Installing Windows 7 using a FAT32-formatted USB drive, which I've used for other installs
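
    Since diskpart inside the setup environment can already talk to the drive, one blunt remaining option is to wipe the partition table completely so setup re-detects a blank disk (this destroys everything on it; Shift+F10 opens the console from the installer):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 0
        DISKPART> clean
        DISKPART> exit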

  • Ubuntu 10.04 bind9 local zone include files and apparmor

    - by Gilgongo
    Rather than putting all my zones in one named.conf.local file, I'd like to have them in groups that I can manage as separate files. So, I've tried putting the following into named.conf.local:

        include "/home/zones/group1.conf";
        include "/home/zones/group2.conf";
        include "/home/zones/group3.conf";

    However, when I restart named, I see "permission denied" errors in the logs. Ubuntu uses AppArmor for bind, so I also added the following in /etc/apparmor.d/usr.sbin.named:

        /home/zones/group1.conf r,
        /home/zones/group1.conf r,
        /home/zones/group1.conf r,

    Now, when I restart named, all appears to be well. Zones are loaded (I think). However, a day or two later, I see my secondary name server complaining that the primary is telling it that it's not authoritative for those domains. I then have to put all the domains back into the named.conf.local file again. How can I get bind9 to use include files in this way? I don't know much about AppArmor, so that may or may not be the issue here, but I've used include files in this way on Debian OK.
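
    Note that the AppArmor lines quoted above grant group1.conf three times and never mention group2.conf or group3.conf; whether that is a transcription slip or the actual profile, a glob covers all three, and the standard tooling will confirm what named can really load:

        # in /etc/apparmor.d/usr.sbin.named, one line instead of three:
        #   /home/zones/*.conf r,
        sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named
        # verify every include parses and every zone loads:
        sudo named-checkconf -z /etc/bind/named.conf
        # look for fresh denials after a restart:
        dmesg | grep -i 'apparmor.*denied'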

  • Looking for way to log process terminations on OS X (Mac)

    - by Stan Sieler
    I'm looking for a way to log all process terminations on my Mac (OS X 10.6.8), and to see the pid, timestamp, and process name for each. I've implemented something similar for HP-UX, but it required a kernel-level driver and intercepting several variations of "exit()" (the normal one, and the one invoked on behalf of a process while it's aborting).

    Why do I want the info? I've been seeing messages in my system log file (dmesg) like:

        CODE SIGNING: cs_invalid_page(0x1000): p=91550[GoogleSoftwareUp] clearing CS_VALID
        CODE SIGNING: cs_invalid_page(0x1000): p=92088[GoogleSoftwareUp] clearing CS_VALID

    Although dmesg lacks timestamps, apps/Utilities/Console : Database : all : search for CS_VALID shows that the message appears about once every 58 1/2 minutes. I suspect the number after "p=" is a process id (pid)... but for a process that has long since terminated by the time I see the message.

    So, if there were a process termination log mechanism that recorded the pid, the time of termination, the reason for termination, and the process name (at time of termination), that would probably allow me to determine who's causing those errors to be logged!

    (No, I'm not running Chrome on my Mac, and "ps -ef | grep -i goog" gets no hits either... I'm not consciously running any Google apps on the Mac.)

    Thanks, Stan
    [email protected]
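
    OS X 10.6 ships with DTrace, which can watch process exits without a kernel driver; a one-liner to leave running in a terminal until the hourly message reappears (prints wall-clock time, pid, and name on every exit):

        sudo dtrace -qn 'proc:::exit { printf("%Y pid=%d name=%s\n", walltimestamp, pid, execname); }'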

  • Multiple logins with pam_mount means multiple (redundant) mounts ...

    - by Jamie
    I've configured pam_mount.so to automagically mount a CIFS share when users log in. The problem is that if a user logs in multiple times simultaneously, the mount command is repeated multiple times. This so far isn't a problem, but it's messy when you look at the output of a mount command:

        # mount
        /dev/sda1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand)
        //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand)
        //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand)

    I'm assuming I need to fiddle with either the pam.d/common-auth file or pam_mount.conf.xml to accomplish this. How can I instruct pam_mount.so to avoid duplicate mounts?
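
    One hedged workaround, if no config option pans out: pam_mount.conf.xml lets you substitute your own mount program (see the lclmount element in pam_mount.conf.xml(5)), so a tiny wrapper can make the mount idempotent. A sketch with hypothetical paths; verify how your pam_mount version orders its mount arguments before using it:

        #!/bin/sh
        # /usr/local/sbin/mount-once: skip the CIFS mount if the target is already mounted.
        # Assumes the mountpoint is the last argument pam_mount passes.
        eval "MNTPT=\${$#}"
        mountpoint -q "$MNTPT" && exit 0
        exec /bin/mount "$@"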

  • Apache httpd: Send error logs to syslog and local disk? Without touching /etc/syslog.conf?

    - by Stefan Lasiewski
    I have an Apache httpd 2.2 server. I want to log all messages using syslog, so that requests are sent to our central syslog server. I also want to ensure that all log messages are sent to local disk, so that a sysadmin can have easy access to the log files on the local system.

    It is easy to send HTTP access logs to both the local disk and to syslog. One common method is:

        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        CustomLog logs/access_log combined
        CustomLog "|/usr/bin/logger -t httpd -i -p local4.info" combined

    But it is not easy to do this for error logs. The following configuration doesn't work, because the error logs only use the last ErrorLog stanza; the first one is ignored:

        ErrorLog logs/error_log
        ErrorLog syslog:local4.error

    How can I ensure that Apache error logs are written to the local disk and are sent to syslog? Is it possible to do this without touching /etc/syslog.conf? I am fine if my users want to manage their own Apache configuration files, but I do not want them touching system files such as /etc/syslog.conf.
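
    Since ErrorLog also accepts a piped program, one approach that needs no syslog.conf changes is to tee the stream to disk and pass it on to logger; a sketch, mirroring the paths and facility of the access-log example above:

        ErrorLog "|/usr/bin/tee -a /var/log/httpd/error_log | /usr/bin/logger -t httpd -p local4.err"

    The piped command is started through the shell, so the tee-to-logger chain works as written, at the cost of two extra processes in the error path.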

  • Create image (EBS AMI) takes forever - possibly caused MySQL Server to break?

    - by fuzzybee
    I'm trying to create an EBS AMI from my running EC2 instance, to reuse my fully configured (for my needs) LAMP setup. I got my website up and running yesterday on this EC2 instance, and MySQL was working fine until this morning (it's not that difficult to install LAMP thanks to yum, so I can't see how I could have gone wrong with this; having said that, it's always difficult for one to realise his own errors). I have now seen "Loading, please wait ..." for a few hours. How do I know whether the image creation has completed, and how do I check its progress?

    Shortly after I tried to create the AMI image from my EC2 instance, I encountered a database connection error:

        Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'

    I was able to restart mysqld at first, but the database connection went down again, and this time I could not restart mysqld any more; it shows "MySQL Daemon failed to start". Could my attempt to create the AMI by any chance cause the MySQL server to reboot or become corrupted?

    I did a lot of searching and have done the following, although I think I shouldn't have to do any workaround for the MySQL server to work here:

        chown -R mysql.mysql /var/lib/mysql/

    I also found this workaround, but I'm very reluctant to follow it due to my belief and the fact that I would need to understand this problem first. Any help would be greatly appreciated. Getting back to searching for a solution for the MySQL server problem ... Thanks, Eric
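
    Worth knowing: creating an EBS-backed AMI reboots the instance by default so the snapshot is consistent, which could well explain mysqld going down right after the image request. With the classic EC2 API tools the reboot can be skipped, and the local logs usually say why mysqld now refuses to start (instance ID below is a placeholder):

        ec2-create-image i-12345678 --name "lamp-configured" --no-reboot
        sudo tail -n 50 /var/log/mysqld.log
        df -h /var/lib/mysql    # a full filesystem also yields "can't connect through socket"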

  • Why does Windows Event Log stop logging events before maximum log size is reached?

    - by Tuure Laurinolli
    I have a service that produces a lot of event log output. Currently the event log is configured to overwrite any old events, to keep the log from ever getting full. We have also increased the event log size considerably (to about 600 MB).

    Recently the service started reporting errors to its clients, and the error message it was sending was "The event log file is full". How can this be, when the event log is configured to overwrite as necessary? In our hurry to get the service back up, we cleared the event log without saving its contents, but most likely it had not reached 600 MB yet, judging from the sizes of some earlier log dumps.

    There is also MS KB entry 312571, which reports that a hotfix for a similar issue is available, but the configuration that the fix applies to is not exactly the same as ours. Specifically, the fix only applies if event logs are configured to never overwrite old events.

    I wonder if this has something to do with the fact that the log files apparently are memory-mapped. What happens if the system runs out of address space to map files to?
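
    The settings the Event Log service actually uses live in the registry, so it is worth confirming the effective values rather than the snap-in's view (Application log shown; a Retention value of 0 means overwrite as needed):

        reg query HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Application /v MaxSize
        reg query HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Application /v Retention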

  • HP tx2510us shuts down without warning, now won't boot, no BIOS codes, screen doesn't light

    - by Tim S
    Hey all, this HP tx2510us worked great for a year and a half; rarely, it would shut down instead of sleeping. Then it started running hot sometimes. I installed Win7, and it worked great, though still running hot. Before Christmas, it gave me evidence of video errors: jagged lines, rows missing. That same night, it started rebooting, then it shut off and wouldn't boot. It gave me a BIOS code, and I shut it off to look up the code online. When I turned it on again, it wouldn't boot at all. The LEDs all light up, and the fan, HD, and optical drive all spin up, but the screen never lights up and it doesn't try to boot, nor does it blink any BIOS codes. It just acts like it's sleeping, and won't wake up.

    I suspected heat problems, so I disassembled it, cleaned the crap out of the fan, and reassembled it, breaking the stereo mic connector (oh well). When I reassembled it, it booted up into Win7 again, but kept shutting down for no discernible reason. After a dozen or so random reboots like that, it is now back to where it was: it turns on but doesn't boot or give BIOS codes. The screen never lights, and everything spins up then idles. Any ideas? I really can't afford to buy a new one, and I use(d) it to take ALL my notes; that's why I got a tablet in the first place.

  • MRTG Monitoring of Disk

    - by Antitribu
    I am trying to monitor disk usage over SNMP using MRTG on CentOS 5.2. I'm open to any suggestions as to the best way to achieve this (I would also like to do other metrics, like CPU). Please don't assume I know anything about MRTG. I am using the following config:

        LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt,/usr/share/snmp/mibs/TCP-MIB.txt
        workdir: /var/www/html/mrtg/temp/
        #
        # Disk Usage Monitoring
        #
        Target[servername.]: dskPercent.0&dskPercent.0:[email protected]
        Title[servername.]: / on servername
        routers.cgi*Desc[servername.]: / on servername
        routers.cgi*ShortDesc[servername.]: /
        MaxBytes[servername.]: 100
        AbsMax[servername.]: 100
        Options[servername.]: growright,nopercent,gauge
        YLegend[servername.]: used disk space
        ShortLegend[servername.]: % used
        Legend1[servername.]: usage
        Legend2[servername.]: usage
        Legend3[servername.]: peak usage
        Legend4[servername.]: peak usage
        LegendI[servername.]: usage
        LegendO[servername.]: usage
        routers.cgi*Icon[servername.]: disk-sm.gif
        routers.cgi*Options[servername.]: noo,nomax,noabsmax
        Unscaled[servername.]: dwmy

    I receive the errors:

        Unknown SNMP var dskPercent.0 at /usr/bin/mrtg line 2035
        Unknown SNMP var dskPercent.0 at /usr/bin/mrtg line 2035

    From forum surfing etc. the suggestion is to use the fully qualified OIDs, but I'd like to avoid that (for readability). So essentially I'm wondering where I can find a MIB file compatible with MRTG for it to reference, or a working config file.
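
    Two things usually have to be true before dskPercent resolves: the agent's snmpd.conf must define disk entries (the dskTable is empty otherwise), and the index starts at 1, not 0. A quick way to see what the agent really exposes (hostname and community from the config above):

        # on the monitored host, snmpd.conf needs e.g.:
        #   disk / 10%
        # then verify from the MRTG box:
        snmpwalk -v 2c -c public servername UCD-SNMP-MIB::dskTable
        # and point the target at the index the walk shows, e.g. dskPercent.1&dskPercent.1:...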

  • EC2 configuration for medium load service on Django

    - by Luberg
    I have created a very basic Django application which puts an email address into the database (a coming-soon page for a startup). I launched a t1.micro instance to try out what load it can carry: nginx + FastCGI from Django + sqlite/postgres (tried both). A blitz.io test gave me a pretty unhappy result (just 100 users within 1 minute):

        This rush generated 542 successful hits in 1.0 min and we transferred 809.01 KB
        of data in and out of your app. The average hit rate of 8.81/second translates
        to about 761,612 hits/day. You got bigger problems though: 87.28% of the users
        during this rush experienced timeouts or errors!

    I tried putting Varnish in front, disabled debug mode in Django, and started FastCGI in threaded mode; nothing helps. This is not going to be a super-high-load page, just a coming-soon page to save subscribers' email addresses, but it should carry at least 500-1000 simultaneous users at peak... I believe t1.micro is far too small for that, but I have also tried a small instance, with no better result.

    Please let me know whether I should use something different from Amazon EC2, pick something better than t1.micro, or whether this is definitely a configuration issue.
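
    Before changing instance types again, it can help to measure nginx and Django separately: if a static file survives the same concurrency but the Django URL does not, the FastCGI layer is the bottleneck. A sketch (hostname and test file are placeholders):

        ab -n 500 -c 100 http://your-host/                 # full Django path
        ab -n 500 -c 100 http://your-host/static/ping.txt  # nginx only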
