Search Results

Search found 15680 results on 628 pages for 'action filter'.


  • Cannot delete flash9.ocx

    - by Kara Marfia
    Some kinda voodoo, indeed. Bought a new boot drive, and it's time to use the old drive for data. I thought I'd save some time by just wiping the unneeded system folders, instead of backing up, formatting, and restoring. Whoops! I have a single Adobe file that absolutely will not be deleted: G:\Windows\SysWOW64\Macromed\Flash\flash9.ocx - though it may have started with a different file name. I'm able to rename it, oddly enough.

    To be clear, the drive is currently plugged in externally. So I can boot the computer, plug this drive in afterward, and immediately attempt to delete. The "File in use" box reads "the action can't be completed because the file is open in another program". I'd format at this point, but it's my white whale, and I have to know if Adobe has inserted some nasty little registry hack - or whatever it is - making this impossible.

    Since I'm sure it'll come up: I've taken ownership of the file (this was the trick preventing me from deleting anything else on the drive), given myself full rights in the file permissions, you name it, I've fiddled with the file itself. I'm about to try uninstalling Flash from the system drive, in case that aligns the planets properly. Sometimes I wish I were less stubborn, and could just format already.

    Read the article

  • Missing whole disk device in OpenSolaris

    - by Jeff Mc
    I have begun experimenting with Solaris and ZFS as a NAS. All was going very smoothly until I had a drive failure. When I replaced the drive, I no longer have a device file mapped to the whole disk. /dev/dsk/c7t3d0 does not exist, but c7t2d0 and c7t4d0 both do. Also the sd@3,0:wd file under the /devices/ tree is non-existent. Do I have to prepare/partition the disk somehow to cause the whole-disk device to exist? Here are a few outputs that might be useful.

      jeffmc@ats-ds2:/dev/dsk$ zpool status
        pool: datapool
       state: DEGRADED
      status: One or more devices could not be opened. Sufficient replicas exist for
              the pool to continue functioning in a degraded state.
      action: Attach the missing device and online it using 'zpool online'.
         see: http://www.sun.com/msg/ZFS-8000-2Q
       scrub: none requested
      config:
              NAME        STATE     READ WRITE CKSUM
              datapool    DEGRADED     0     0     0
                mirror-0  DEGRADED     0     0     0
                  c7t2d0  ONLINE       0     0     0
                  c7t3d0  UNAVAIL      0     0     0  cannot open
                mirror-1  ONLINE       0     0     0
                  c7t4d0  ONLINE       0     0     0
                  c7t5d0  ONLINE       0     0     0

      jeffmc@ats-ds2:/dev/dsk$ zpool replace datapool c7t3d0
      cannot open 'c7t3d0': no such device in /dev/dsk
      must be a full path or shorthand device name

      jeffmc@ats-ds2:/dev/dsk$ sudo format
      Searching for disks...done
      AVAILABLE DISK SELECTIONS:
        0. c7t0d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@0,0
        1. c7t1d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@1,0
        2. c7t2d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@2,0
        3. c7t3d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@3,0
        4. c7t4d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@4,0
        5. c7t5d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@5,0
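    A hedged sketch of the usual next steps on Solaris when a swapped-in disk has no /dev/dsk entries: rebuild the device links, confirm the target is configured, label the disk, then retry the replace. The target numbers come from the format output above, but verify them before running anything:

      # Rebuild /dev/dsk and /dev/rdsk links, cleaning up stale entries for the old disk
      devfsadm -Cv
      # Confirm the controller/target is present and configured
      cfgadm -al
      # Label the new disk if format reports it as unlabeled (interactive)
      format
      # Retry the replace with the shorthand name, or fall back to the full slice path
      zpool replace datapool c7t3d0        # or: zpool replace datapool /dev/dsk/c7t3d0s0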

    Read the article

  • VMM 2012 Adding Hosts in Trusted Forest

    - by Steve Evans
    I have two forests with a two-way trust between them. VMM 2012 sits in ForestA and I can discover hosts in ForestA with no issue. When I try to discover hosts in ForestB I hit one of two issues:

    1. If I go through the GUI or use PowerShell just like I normally do, I get the following error on the job:

       Error (10407) Virtual Machine Manager could not query Active Directory Domain Services.
       Recommended Action: Verify that the domain name and the credentials, if provided, are correct and then try the operation again.

       It doesn't matter which account I use. I've tried accounts from both forests, with Admin/Domain Admin permissions all over the place, etc.

    2. Going through the GUI (can't find the switch in PowerShell to duplicate this), I check the box "Skip AD Verification" and it causes the GUI to crash during discovery.

    I found an article (http://technet.microsoft.com/en-us/library/gg610641.aspx) that describes how to add a host in a disjoint namespace (even though that doesn't apply to me) and it says that VMM creates an SPN if one does not exist. So I verified that the correct SPNs exist in ForestB; that did not help the issue. I have a case open with PSS but they are stuck. I have VMM traces if anyone would like to see them. Any suggestions or ideas?

    Read the article

  • Updating WordPress 3.6 to 3.7 via admin area on Nginx VPS hangs and fails

    - by harryg
    So I have a few WordPress sites running on my VPS (Ubuntu 12.10, Nginx, php-fpm 5.4). The sites are all on separate vhosts and use their own config files (albeit similar to each other) and vary in complexity. One is very simple and uses minimal plugins.

    When I try to update core on any site via the admin area, I click the "Update Now" button (which should run the script in wp-admin/update-core.php). The page hangs for a minute or two before going to a blank admin page (i.e. the wp-admin menu bars and header bar are there but there is no content in the body of the page). Visiting another admin page via the still-present menu bar reveals that the core has not been updated. Checking the error log I see this entry:

      2013/10/29 23:20:48 [error] 9384#0: *5318248 upstream timed out (110: Connection timed out) while reading upstream, client: --.---.--.---, server: www.mysite.com, request: "POST /wp-admin/update-core.php?action=do-core-upgrade HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "mysite.com", referrer: "http://mysite.com/wp-admin/update-core.php"

    This didn't happen in the past on older updates, and the rest of the site, including updating plugins, works fine. Any ideas? Could it be as simple as a time-out error? I find that unlikely as the server should munch through a WP upgrade in seconds.
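    If it does turn out to be a plain timeout between nginx and php-fpm, one hedged thing to try is raising the FastCGI read timeout for that vhost and the matching PHP limits, then retrying the update. A minimal sketch, assuming stock Ubuntu paths and a 300-second limit (both the values and the paths are assumptions):

      # Inside the PHP location block of the affected vhost, add:
      #     fastcgi_read_timeout 300;
      # Raise the PHP-side limits to match:
      #     max_execution_time = 300         in /etc/php5/fpm/php.ini
      #     request_terminate_timeout = 300  in /etc/php5/fpm/pool.d/www.conf
      sudo nginx -t && sudo service nginx reload
      sudo service php5-fpm restart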

    Read the article

  • Custom keyboard map is causing issues with stuck keys

    - by Grumbel
    I have a Microsoft Ergonomic 4000 keyboard and I am running a custom keymap (dvorak with some stuff for umlauts):

      http://pingus.seul.org/~grumbel/tmp/md5/b054e11505c88e1bfc6ebd5da46bdb78-xmodmap_pke
      http://pingus.seul.org/~grumbel/tmp/md5/f5e42a5b8ba4a034c5945f719b3d2608-xmodmap_pm

    This used to work fine for years and it still does, except that I am now having issues with a stuck Mode_switch key. When I hit Control_R and Mode_switch at the same time (happens a lot by accident), the Mode_switch key gets into a 'stuck' state, and all letters I type afterwards come out in their umlaut form as if Mode_switch were pressed. I can un-stick Mode_switch by again hitting Control_R and Mode_switch at the same time, but that leaves Gnome in a broken state where it doesn't react to my Gnome keyboard shortcuts any longer. The key presses themselves are still registered by the window manager, as one can see changes in the applications (the cursor in Gnome Terminal will turn into an unfilled rect, as if the application lost focus), but they don't trigger the bound action.

    Does anybody have a clue what could be causing this? Or does anybody have an idea how I could debug this? xev doesn't seem to help here, as it is reporting normal KeyPress/KeyRelease events even when the key is stuck. Also the Gnome key bindings don't get reported at all when it's in the 'broken' state; I assume they are captured by the window manager before they even reach xev.

    I am using Ubuntu 10.04 with Gnome and Metacity, and I have disabled all OpenGL related effects, so Compiz shouldn't interfere. Some general info on which applications are involved in Gnome's key binding handling would be helpful as well, as I assume it's Metacity, but restarting Metacity doesn't fix the issue.
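    A hedged way to see and clear what the X server thinks is held down, without logging out. xdotool, the device name, and the xmodmap file path below are assumptions (install xdotool if missing, get the device id from xinput list, and use whatever map file you normally load):

      # Show the current key state bitmap for the keyboard device
      xinput list
      xinput query-state "AT Translated Set 2 keyboard"
      # Force-release the modifiers the server believes are stuck
      xdotool keyup Mode_switch
      xdotool keyup Control_R
      # Or reset the keymap to the server default and reapply the custom map
      setxkbmap
      xmodmap ~/.xmodmap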

    Read the article

  • Accidentally dd'ed an image to wrong drive / overwrote partition table + NTFS partition start

    - by Kento Locatelli
    I screwed up and set the wrong output for dd when trying to copy a FreeNAS iso, overwriting the wrong external hard drive. Ironically, I was trying to set up a FreeNAS server for data backup...

    - External drive is only used for data storage; the system is entirely intact
    - Drive had a single NTFS partition filling the entire device (2TB WD Elements)
    - Drive originally had an MBR partition table; it now shows as having a GPT, presumably from the FreeNAS image
    - Drive was mounted at the time, with maybe a couple kB of data written/read after running dd
    - Drive is just a few months old and healthy (regular SMART / fs checks)
    - I have not rebooted the OS (CrunchBang)
    - /proc/partitions still holds the correct information (and has been stored)
    - I have dd's output (records in / out / bytes)
    - TestDisk did not find any partitions on quick or deep search
    - I am running photorec to recover the more important data (a couple recent plaintext files that hadn't been backed up yet); the vast majority of disk content (80%) is unnecessary media files

    My current plan is to let photorec do its thing, then recreate the MBR with gparted and use cfdisk to create another NTFS partition using the sector information from /sys/block/.../. Is that a good course of action (that is, a chance of success)? Or anything else I should try first?

    Possibly relevant information:

      dd if=FreeNAS-8.0.4-RELEASE-p3-x86.iso of=/dev/sdc:
      194568+0 records in
      194568+0 records out
      99618816 bytes (100 MB) copied

      grep . /sys/block/sdc/sdc*/{start,size}:
      /sys/block/sdc/sdc1/start:2048
      /sys/block/sdc/sdc1/size:3907022848

      cat /proc/partitions:
      major minor  #blocks  name
      ** snipped **
      8       32  1953512448  sdc
      8       33  1953511424  sdc1

      current fdisk -l output:
      WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
      Disk /dev/sdc: 2000.4 GB, 2000396746752 bytes
      255 heads, 63 sectors/track, 243201 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdc doesn't contain a valid partition table
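    A hedged sketch of what the partition-table rebuild could look like once photorec has finished, using the saved start/size from /sys/block above. This only rewrites the table, and since dd overwrote the first ~100 MB of the old filesystem the NTFS structures at the front are gone, so treat it as an experiment rather than a fix; sgdisk availability is an assumption, and the device name must be double-checked first:

      # Clear the GPT structures the FreeNAS image left behind (destructive to partition tables only)
      sudo sgdisk --zap-all /dev/sdc
      # Recreate an MBR label and a single partition at the saved sector boundaries
      # (end sector = 2048 + 3907022848 - 1)
      sudo parted /dev/sdc mklabel msdos
      sudo parted /dev/sdc unit s mkpart primary ntfs 2048 3907024895
      sudo partprobe /dev/sdc
      # See whether anything NTFS-shaped is still recognisable before trying to mount (-n = dry run)
      sudo ntfsfix -n /dev/sdc1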

    Read the article

  • CIFS Mounting Permissions

    - by malco
    I have an issue that I'm going round in circles with; I hope you can help.

    The setup:
    - Server 1 (CIFS client) - CentOS 6.3, AD integrated using Samba/Winbind & idmap_ad
    - Server 2 (CIFS server) - CentOS 6.3, AD integrated using Samba/Winbind & idmap_ad
    - All users (apart from root) are AD authenticated, and this, including groups etc., works happily.

    What's working: I have created a share on Server 2:

      [share2]
      path = /srv/samba/share2
      writeable = yes

    Permissions on the share:

      drwxrwx---. 2 root domain users 4096 Oct 12 09:21 share2

    I can log into a Windows machine as user5 (member of domain users) and everything works as it should; for example, if I create a file it shows the correct permissions and attributes on both the MS and the Linux sides.

    Where I fall down: I mount the share on Server 1 using:

      # mount //server2/share2 /mnt/share2/ -o username=cifsmount,password=blah,domain=blah

    Or using fstab:

      //server2/share2 /mnt/share2 cifs credentials=/blah/.creds 0 0

    This mounts fine, but... if I su to, or log onto, Server 1 as a normal user (say user5) and try to create a file I get:

      $ touch test
      touch: cannot touch `test': Permission denied

    Then if I check the folder, the file was created, but as the cifsmount user:

      -rw-r--r--. 1 cifsmount domain users 0 Oct 12 09:21 test

    I can rename, delete, move or copy stuff around as user5, I just can't create anything. What am I doing wrong? I'm guessing it's something to do with the mount action, as when I log onto Server 2 as user5 and access the folder locally it all works as it should. Can anyone point me in the right direction?
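    One hedged possibility: with a plain credentials= mount, every file operation on the client is performed as the single cifsmount account, so creates land as that user regardless of who is logged in. If both boxes have working Kerberos against AD, a multiuser mount lets each local user's own credentials be used instead. A sketch; support for these options depends on the kernel cifs module and cifs-utils version shipped with CentOS 6.3, so treat it as something to test:

      # fstab entry: sec=krb5 plus multiuser are the interesting options; the rest mirrors the original mount
      //server2/share2  /mnt/share2  cifs  sec=krb5,multiuser,credentials=/blah/.creds  0 0
      # Each user then needs a ticket (or stored credentials) of their own after logging in
      kinit user5@EXAMPLE.COM        # realm name is an assumption
      cifscreds add server2          # cifs-utils helper for storing per-user credentials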

    Read the article

  • Windows 7 access denied to executables.. by what?

    - by stijn
    Ever since I started using Windows 7 this problem has been bothering me. From time to time I see similar questions popping up on misc forums, but never did I see an answer. Here are two scenarios that nearly always reproduce it:

    The explorer way:
    - with explorer, navigate to a directory containing at least one exe file
    - go one directory up
    - immediately delete the directory just navigated to
    - this yields a Folder Access Denied dialog stating "You need permission to perform this action / You require permission from Administrators to make changes to this folder", with the buttons Try Again and Cancel
    - hitting Try Again never works immediately; waiting a minute or so and then clicking it again does work
    - note: if in step 2 I wait a minute or more before going up one directory, the problem does not occur and the folder can be deleted

    The visual studio way:
    - build a project producing an exe file
    - run the executable, then close it
    - immediately build the project again (by changing a single character in a source file, for example)
    - this yields: fatal error LNK1168: cannot open /path/to/the.exe for writing
    - note: if in step 2 I wait a minute or more before building again, the problem does not occur

    Some specs:
    - happens on both Windows 7 32 and 64 bit, with VS2008/2010/2011
    - happens on 3 different machines
    - I do not have a virus scanner of any kind
    - I do have a bunch of services disabled, but nothing that prevents Windows from running normally; UAC is disabled as well
    - happens on any type of disc
    - I always use a user account that is in the Administrators group

    Obviously both scenarios are very similar and extremely reproducible. So I figured some process must have the file open for some reason, and release it again later. However, using Sysinternals' handle -a, the exe file in question never shows up. (That is the correct way to use handle, right?) So while explorer/VS are reporting they cannot access the file, handle.exe says it's not in use anywhere. This leaves me rather clueless, so I'm wondering if someone can come up with a solution: why does this happen, and how do I solve it?

    Read the article

  • Why am I not able to create a backup plan for TFS?

    - by noocyte
    I am trying to create a backup plan using the TFS Power Tools but I keep running into this error message: I have checked that the account has Full Control on the share; I can edit, create and delete files there. From the log:

      [Info @07:15:00.403] Starting creating backup test validation
      [Error @07:15:00.700] Microsoft.SqlServer.Management.Smo.FailedOperationException: Backup failed for Server 'WMSI003714N\SqlExpress'. ---> Microsoft.SqlServer.Management.Common.ExecutionFailureException: An exception occurred while executing a Transact-SQL statement or batch. ---> System.Data.SqlClient.SqlException: Cannot open backup device '\\wmsi003714n\sql dump\Tfs_Configuration_20100910091500.bak'. Operating system error 5(failed to retrieve text for this error. Reason: 1815). BACKUP DATABASE is terminating abnormally.
         at Microsoft.SqlServer.Management.Common.ConnectionManager.ExecuteTSql(ExecuteTSqlAction action, Object execObject, DataSet fillDataSet, Boolean catchException)
         at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
         --- End of inner exception stack trace ---
         at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
         at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(StringCollection sqlCommands, ExecutionTypes executionType)
         at Microsoft.SqlServer.Management.Smo.ExecutionManager.ExecuteNonQuery(StringCollection queries)
         at Microsoft.SqlServer.Management.Smo.BackupRestoreBase.ExecuteSql(Server server, StringCollection queries)
         at Microsoft.SqlServer.Management.Smo.Backup.SqlBackup(Server srv)
         --- End of inner exception stack trace ---
         at Microsoft.SqlServer.Management.Smo.Backup.SqlBackup(Server srv)
         at Microsoft.TeamFoundation.PowerTools.Admin.Helpers.BackupFactory.TestBackupCreation(String path)
      [Error @07:15:00.731] !Verify Error!: Account GROUPINFRA\SA-NO-TeamService failed to create backups using path \\wmsi003714n\sql dump
      [Info @07:15:00.731] "Verify: Grant Backup Plan Permissions\Root\VerifyDummyBackupCreation(VerifyTestBackupCreatedSuccessfully): Exiting Verification with state Completed and result Error"

    Any ideas?

    Read the article

  • Merely installing PHP5 causes my AWS Ubuntu server to die minutes later from a massive CPU spike

    - by Mark Amery
    I have an AWS server with Ubuntu 11.04 as the OS that is running an Apache2 webserver (incidentally Python-based and using Django). We recently needed to add support for PHP5 to let us use a third-party PHP library (incidentally for serving minified versions of js and css files).

    However, for no reason any of us can discern, if we simply run sudo apt-get install php5 on the server, the install appears to finish successfully, but a few minutes later the server becomes impossible to connect to - without us taking any further action (including not yet running sudo apt-get install libapache2-mod-php5, which I think would be the next step for us if everything worked) or actually running any PHP scripts on the server. Looking at the 'Monitoring' tab for the server in the EC2 Management Console reveals that a while after the installation, CPU usage spikes to 100% and stays there permanently (until we reboot the server from the AWS Console). After rebooting, the server also reliably dies within a few (between 0 and 10) minutes.

    We restored the server to a pre-PHP state from an AMI image, observed that it was stable, and then tried installing PHP5 again and observed the server die in exactly the same way, so we're pretty much certain that installing PHP5 is what causes the symptoms. What on earth could be causing this behaviour, and how can we get PHP installed on the server without it dying?
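    Not an answer, but a hedged diagnostic sketch for the next attempt: run these in a second SSH session right after the install finishes, so there is something on disk showing which process owns the spike before the box becomes unreachable (nothing here is specific to this bug; the paths are standard Ubuntu locations):

      # Log process activity for the next few minutes so the culprit is captured even if SSH dies
      top -b -d 5 -n 60 > /tmp/top-after-php5.log &
      # See exactly what the install pulled in and which maintainer scripts/triggers ran
      tail -n 100 /var/log/apt/term.log
      tail -n 50 /var/log/dpkg.log
      # Check whether Apache was restarted or a new service appeared as part of the install
      sudo service apache2 status
      grep -iE 'php|apache' /var/log/syslog | tail -n 50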

    Read the article

  • How to allow users to transfer files to other users on linux

    - by Jon Bringhurst
    We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files (sometimes 1PB) controlled by traditional unix permissions (ACLs usually aren't available or practical due to the specialized nature of the filesystem).

    We currently have a program called "give", which is a suid-root program that allows a user to "give" a file to another user when group permissions are insufficient. So, a user would type something like the following to give a file to another user:

      > give username-to-give-to filename-to-give

    The receiving user can then use a command called "take" (part of the give program) to receive the file:

      > take filename-to-receive

    The permissions of the file are then effectively transferred over to the receiving user. This program has been around for years and we'd like to revisit things from a security and functional point of view.

    Our current plan of action is to remove the bit rot in our current implementation of "give" and package it up as an open source app before we redeploy it into production. Does anyone have another method they use to transfer extremely large files between users when only traditional unix permissions are available?
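    For comparison, a hedged sketch of one alternative that keeps the privileged part out of a custom suid binary: a tiny wrapper that chowns the file to the recipient, allowed through a narrow sudoers rule. The script name, path, and group below are assumptions, and it is deliberately minimal - not hardened against races or clever path tricks - just enough to show the shape of the approach:

      #!/bin/sh
      # /usr/local/sbin/give-file RECIPIENT FILE  (run via sudo)
      # Transfers ownership of FILE to RECIPIENT without copying, so huge files stay in place.
      set -eu
      recipient="$1"; file="$2"
      # Refuse symlinks so a user can't hand over something they merely point at
      [ -L "$file" ] && { echo "refusing to give a symlink" >&2; exit 1; }
      # Only allow giving away files the invoking user actually owns
      owner=$(stat -c %U "$file")
      [ "$owner" = "${SUDO_USER:-}" ] || { echo "not your file" >&2; exit 1; }
      chown "$recipient" "$file"

      # sudoers entry (also an assumption): let cluster users run only this one script as root
      #   %clusterusers ALL=(root) NOPASSWD: /usr/local/sbin/give-file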

    Read the article

  • Microsoft Office 2003 applications crash on 'Save As' to a network mapped drive

    - by Archit Baweja
    Hey guys, so I'm not sure if it belongs on the ServerFault forums, so I figured I'd ask here first because it's a workstation/client side issue.

    I have a client where we have Windows Server 2003 set up, with Windows XP Professional on all the workstations. We've set up a domain and all workstations log on to the domain (authenticated by the Windows Domain Controller), and in the logon script we map drives on to each workstation. Everything is working peachy except for one workstation, where when I open a file in Excel from a mapped drive, it opens fine, but when I go to hit Save As, the Save As dialog pops up and hangs. I cannot perform any other action in Excel. When I try to cancel the Save As dialog, Excel crashes. The mapped drive opens up fine in Windows Explorer.

    To further investigate this issue, I created a new blank text document on the network drive in Windows Explorer. I then opened it, hit Save As, and the Save As dialog opened up fine and let me save the document. I repeated the above steps for a Word document. However, this time the Save As dialog hung/froze again. So I'd imagine it's a Microsoft Office issue. Any ideas?

    Read the article

  • Using a nat rule to translate 80/443 traffic to web server, but internal users cannot access it using external ip/domain name

    - by Josh
    I am using Cisco ASDM for ASA. I have my internal network called soa. My outside interface is called outside. Let's say my outside IP given to me by my ISP is y.y.y.y. I have a web server inside my network with a static IP of x.x.x.110. I have configured 2 static NAT rules (one for http, the other for https). Source is x.x.x.110, interface is outside, service (http or https).

    Maybe I am doing this wrong, but when I run the packet tracer, I choose the outside interface, for the source IP I use 8.8.8.8, and the destination IP is my outside IP address, y.y.y.y. When I run that, it shows the packet traversing successfully, using 9 steps. For my other test, I switch to the soa interface, input an IP on that network, and leave the destination the same. This test comes up with 2 steps and then fails on my access list. When I see the rule that fails, it is my catch-all, which is source: any, destination: any, service: ip, action: deny.

    What rule do I need to make to allow my soa network to go out and come back in by my external IP address (using a domain name attached to that IP in my DNS, of course)?

    Read the article

  • Win7 'locking' process/files/folders?

    - by Dynde
    I've had a fair number of problems with files/folders/processes sometimes being 'locked' by Windows. The weird thing is, it's not like the traditional sense, I think, where tools like UnlockIT and WhoLockMe would work. It seems that just giving it a little time often helps, making me think it could be the HDD, the memory, or something in Windows.

    A scenario: I go into a folder (don't open anything at all), go back up, cut or drag-move the folder to someplace else, and it says "Action can't be completed because the folder or a file in it is open in another program". Waiting sometimes 20 seconds, sometimes a little longer, and I can move it. Another scenario is deleting a bunch of files in a folder: it appears that everything is gone, but then suddenly after a few seconds an .exe file pops back up, and I can't delete it. Waiting a few minutes, then pressing refresh, and it's gone.

    I have the strangest feeling that there's a problem with either the HDD or memory. I already tried disabling the Windows indexing service, with no luck. Does anyone have any ideas?

    EDIT: I should say that I have a very fast system - 16 GB DDR3 RAM, i7-2600k CPU, SSD main HDD - so I really should not be experiencing any sort of problems where one might say that it's "reasonable" for the system not to respond right away.

    Edit 2: And I updated the SSD firmware a couple months ago, so it shouldn't be bad release FW either.

    Read the article

  • Weird rendering artefact in vim (terminal, not MacVim)

    - by Tobi Lehman
    Running Mac OS X, using either Terminal.app or iTerm2, there is a strange artefact with the character rendering that I have a hard time explaining and an even harder time understanding. I'll start with a video of my screen so that you can see an example of it in action:

    From the video you can see a few ways it is weird. For example, sometimes when I hit a letter in insert mode, the character is double printed. When I go into normal mode, the artefact remains. When I re-enter insert mode, hitting backspace copies the characters on the left to the position under the cursor.

    This has happened in OS X Lion and Mountain Lion, under both Terminal.app and iTerm 2. This never happens under MacVim. Also, I use GNU/Linux on my other machine, and have never had this happen; I am pretty sure it is strictly a Mac OS X issue, but I do not know how to fix it. For a while, I've been working around it by using MacVim most of the time, but I prefer working in a terminal. Does anyone know what is happening here, and if so, how can I fix it?

    EDIT: I tried using the macvim Vim executable, and I still get strange artefacts, but they are localized to the left side of the screen; here is an example:

    Read the article

  • Opera's problem with magnet-links

    - by Dmitriy Matveev
    I've encountered the following problem while using the Opera web browser. When I click on a magnet link on some web page like

      magnet:?xt=urn:tree:tiger:CXW6MJFRNOEFU2STCBWWOIYZLVCR2FTR37SQCXY&xl=352342016&dn=ER%20-%207x16%20-%20Witch%20Hunt.avi

    Opera asks me if I want to open that link with my DC++ client. If I click the 'Yes' button then my DC++ client correctly "opens" the clicked magnet link and performs some action on it. There is also an option "Do not show this dialog again" in Opera's dialog, but it doesn't seem to be working correctly: if I check that option before answering 'Yes' and then click on another magnet link of the same kind, Opera will again ask me how to open the new link.

    I haven't found a protocol association in the 'Control Panel > Default Programs > Set Associations' part of my Windows Vista settings, but if I paste a magnet link in the "Run" dialog then Vista will handle that link perfectly. I've tried to find out how to manually set the protocol association in Opera and found the 'Programs' page of the browser's advanced settings. There I discovered that instead of storing protocol-to-application associations, Opera tries to store per-link associations (there are several entries with the exact links as they were on the web page as the value of the protocol field). If I click on the links which are already stored in Opera's protocol associations, the browser will ask me about them again.

    I haven't found any information on how to resolve this problem on the internet; maybe someone on this site will be able to help me.

    Read the article

  • How do I restore tab-completion on shell variables on the bash command-line?

    - by Eric
    I've long set my most-recently visited directories to shell variables d1, d2, etc. On an ancient Fedora machine I could type a command like

      $ cp $d1/

    hit Tab, and the shell would replace $d1 with text like /home/acctname/projects/blog/ and would then show me the contents of .../blog, like any tab-completion. Now, both ubuntu wheezy/sid and fedora 16 just backslash-escape the '$', and naturally there are no completions to show. You can see this behavior in action in an OSX Terminal window: on 10.8, do something like ls $HOME/ to see what I mean.

    Is there a bash shell variable or option that can restore the old behavior? man bash suggests this is a bug:

      complete (TAB)
          Attempt to perform completion on the text before point. Bash attempts completion treating the text as a variable (if the text begins with $), username (if the text begins with ~), hostname (if the text begins with @), or command (including aliases and functions) in turn. If none of these produces a match, filename completion is attempted.

    I get the above described completion when a token starts with '~' or a letter. It's just '$'-completion that's broken.
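    One hedged option, assuming your bash is new enough to have it (the shopt was added around bash 4.2.29/4.3): direxpand makes completion expand the variable again instead of escaping the dollar sign. A sketch:

      # Check whether this bash build knows the option
      shopt | grep direxpand
      # Turn it on for the current shell, then make it permanent
      shopt -s direxpand
      echo 'shopt -s direxpand' >> ~/.bashrc
      # Fallback that works on any bash: expand the line in place with readline's shell-expand-line,
      # bound to Esc Ctrl-e by default, after typing: cp $d1/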

    Read the article

  • Recovering ZFS pool with errors on import.

    - by Sqeaky
    I have a machine that had some trouble with some bad RAM. After I diagnosed it and removed the offending stick of RAM, the ZFS pool in the machine was trying to access drives by using incorrect device names. I simply exported the pool and re-imported it to correct this. However I am now getting this error. The pool Storage no longer automatically mounts:

      sqeaky@sqeaky-media-server:/$ sudo zpool status
      no pools available

    A regular import says it's corrupt:

      sqeaky@sqeaky-media-server:/$ sudo zpool import
        pool: Storage
          id: 13247750448079582452
       state: UNAVAIL
      status: The pool is formatted using an older on-disk version.
      action: The pool cannot be imported due to damaged devices or data.
      config:
              Storage                 UNAVAIL  insufficient replicas
                raidz1                UNAVAIL  corrupted data
                  805066522130738790  ONLINE
                  sdd3                ONLINE
                  sda3                ONLINE
                  sdc                 ONLINE

    A specific import says the vdev configuration is invalid:

      sqeaky@sqeaky-media-server:/$ sudo zpool import Storage
      cannot import 'Storage': invalid vdev configuration

    I should have 4 devices in my ZFS pool: /dev/sda3, /dev/sdd3, /dev/sdc, /dev/sdb. I have no clue what 805066522130738790 is but I plan on investigating further. I am also trying to figure out how to use zdb to get more information about what the pool thinks is going on.

    For reference, this was set up this way because at the time this machine/pool was set up it needed certain Linux features and booting from ZFS wasn't yet supported on Linux. The partitions sda1 and sdd1 are in a raid 1 for the operating system, and sdd2 and sda2 are in a raid 1 for the swap.

    Any clue on how to recover this ZFS pool?
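    A hedged starting point for the zdb investigation: the long number in the config is how zpool refers to a member device by its stored GUID when it can't match it to a real device node, so comparing the on-disk labels usually shows which device it is expecting. A sketch (read-only commands first; the -d option just restricts where ZFS looks for devices):

      # Dump the ZFS label (including the GUIDs) from each device that should be in the pool
      zdb -l /dev/sda3
      zdb -l /dev/sdd3
      zdb -l /dev/sdc
      zdb -l /dev/sdb
      # Retry the import pointing explicitly at the device directory, by pool id
      sudo zpool import -d /dev 13247750448079582452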

    Read the article

  • Spam or exchange issue?

    - by John
    I am getting an error message to an unknown user on my domain. I would like to know: is this just a phishing spam email, or was it really sent from our domain? I have changed our domain name to OURDOMAIN.COM. I have Exchange 2010 installed. The body of the email is:

      Delivery has failed to these recipients or distribution lists:

      sales
      The recipient's e-mail address was not found in the recipient's e-mail system. Microsoft Exchange will not try to redeliver this message for you. Please check the e-mail address and try resending this message, or provide the following diagnostic text to your system administrator.

      Sent by Microsoft Exchange Server 2007

      Diagnostic information for administrators:
      Generating server: murraygroup.local
      [email protected]
      #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##

      Original message headers:
      Received: from ironport.mih.co.uk (10.10.29.9) by mih-exca-01.murraygroup.local (10.10.29.133) with Microsoft SMTP Server id 8.3.106.1; Fri, 29 Jun 2012 12:36:12 +0100
      Received: from glamf04.netintelligence.com (HELO mailfilter.iomart.com) ([62.128.193.114]) by ironport.mih.co.uk with SMTP; 29 Jun 2012 12:42:48 +0100
      Received: from glamta4.netintelligence.com(localhost.localdomain[127.0.0.1]) by mailfilter.iomart.com ; Fri, 29 Jun 2012 12:37:18 BST
      Received: from [195.43.137.66] ([195.43.137.66]) by glamta4.netintelligence.com (8.13.1/8.12.8) with ESMTP id q5TBbH4j022142 for <[email protected]>; Fri, 29 Jun 2012 12:37:18 +0100
      Date: Fri, 29 Jun 2012 12:37:17 +0100
      Message-ID: <20120629145229.4C2A817231D8A7958044@SONW>
      From: Ines Hampton <[email protected]>
      To: sales <[email protected]>
      Reply-To: Marguerite Soto <[email protected]>
      Subject: User sales
      MIME-Version: 1.0
      Content-Type: text/plain; charset="utf-8"
      Content-Transfer-Encoding: 7bit
      Return-Path: [email protected]

      Reporting-MTA: dns;murraygroup.local
      Received-From-MTA: dns;ironport.mih.co.uk
      Arrival-Date: Fri, 29 Jun 2012 11:36:12 +0000
      Final-Recipient: rfc822;[email protected]
      Action: failed
      Status: 5.1.1
      Diagnostic-Code: smtp;550 5.1.1 RESOLVER.ADR.RecipNotFound; not found
      X-Display-Name: sales

    Read the article

  • Upgrading only certain packages via the getdeb repo

    - by intuited
    I'm a bit confused about how getdeb.net works now. The last time I got a package from there was a while ago; at that point the procedure was that you would just download a .deb for each package that you wanted to install/upgrade and then install it using dpkg -i. However, the inexorable march of progress has lent its trumpets to this system as well, and getdeb installs are now done via their repo, which is registered with apt in /etc/apt/sources.list.d after you install a single package that makes the changes to the apt database.

    I've installed that package, and I've discovered that aptitude dist-upgrade now wants to upgrade a lot of packages on my system that weren't ready for upgrades prior to the installation of the getdeb package. If I rename the file /etc/apt/sources.list.d/getdeb.list to something with a different extension, then do aptitude update && aptitude dist-upgrade, it stops wanting to upgrade packages. So I gather that the default behaviour is now to upgrade all packages to the version available at getdeb. This is not particularly appropriate, since these packages are not as well tested as the officially released versions.

    Is there a config setting somewhere that will prevent upgrading packages to versions from the getdeb repo unless this action is specifically selected? I'd like to be able to pick and choose what packages are upgraded via getdeb.
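    A hedged sketch using apt pinning, which is the usual way to keep a repo's versions out of routine upgrades while still allowing explicit installs from it. The priority, file name, and origin string below are assumptions - check apt-cache policy to see how apt actually identifies the getdeb source before pinning it:

      # See the origin/label apt reports for the getdeb source
      apt-cache policy
      # Pin everything from that origin below the priority of already-installed packages
      printf 'Package: *\nPin: origin packages.getdeb.net\nPin-Priority: 50\n' | sudo tee /etc/apt/preferences.d/getdeb
      # dist-upgrade now ignores getdeb; pull a specific package from it on demand, e.g.:
      sudo aptitude install somepackage=1.2.3-1getdeb1    # package and version string are assumptions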

    Read the article

  • Full-text search locks up database - error 0x8001010e

    - by Stewart May
    Hi. We have a full-text catalog that is populated via a job every 15 minutes like so:

      ALTER FULLTEXT INDEX ON [dbo].[WorkItemLongTexts] START INCREMENTAL POPULATION

    We have encountered a problem where the database containing this catalog locks up. There are a couple of scenarios: we either see the job execute and the process hang with a wait type of UNKNOWN TOKEN, or we see another process hang with a wait type of MSSEARCH. Once this happens, the job continues to run but informs us that the request to start a full-text index population is ignored because a population is currently active. Looking in the full-text log files we see the following error each time these problems occur:

      2010-04-21 08:15:00.76 spid21s The full-text catalog health monitor reported a failure for full-text catalog "XXXFullTextCatalog" (5) in database "YYY" (14). Reason code: 0. Error: 0x8001010e(The application called an interface that was marshalled for a different thread.). The system will restart any in-progress population from the previous checkpoint. If this message occurs frequently, consult SQL Server Books Online for troubleshooting assistance. This is an informational message only. No user action is required.

    The only solution is to restart the SQL Server service and then the full-text service. This is now occurring on a daily basis, so any help would be appreciated.

    Read the article

  • Is there any way to synchronize AD users with Office 365 but still be able to edit them online?

    - by Massimo
    I'm performing a migration to Office 365 from a third-party mail server (MDaemon); the local Active Directory doesn't include any Exchange server, and never had any. We will need directory synchronization in order to enable users to log on to Office 365 using their domain credentials; but it seems that as soon as you enable directory synchronization, you can't perform any action anymore on Office 365 users: all changes need to be made on the local Active Directory and then replicated by the synchronization process.

    For ordinary users with a single e-mail address and standard features, this is not a big problem; but what about users which need an additional address? What if I need to configure some nonstandard setting, like "hide from address list" or a custom mailbox quota? From what I've gathered, the only supported way to do this, as you can't directly edit Office 365 objects anymore after synchronization is enabled, is to extend the local AD schema with Exchange attributes, and then manually edit them (!). Or, you can install at least one local Exchange server, and then use the Exchange administrative tools to configure the required settings.

    Is this correct or am I missing something? Is there any way to synchronize user accounts and passwords, but still be able to edit user settings directly in Office 365? If not (everything really needs to be set locally and then synchronized), is there any simpler way to do this than manually editing LDAP attributes or installing a local Exchange server?

    Read the article

  • Recommended drive encryption solution

    - by Chris Driver
    Hello, I will soon be purchasing a number of laptops running Windows 7 for our mobile staff. Due to the nature of our business I will need drive encryption. Windows BitLocker seems the obvious choice, but it looks like I need to purchase either Windows 7 Enterprise or Ultimate editions to get it. Can anyone offer suggestions on the best course of action:

    a) Use BitLocker, bite the bullet and pay to upgrade to Enterprise/Ultimate
    b) Pay for another 3rd party drive encryption product that is cheaper (suggestions appreciated)
    c) Use a free drive encryption product such as TrueCrypt

    Ideally I am also interested in 'real world' experience from people who are using drive encryption software and any pitfalls to look out for. Many thanks in advance...

    UPDATE: Decided to go with TrueCrypt for the following reasons:

    a) The product has a good track record
    b) I am not managing a large quantity of laptops so integration with Active Directory, management consoles etc. is not a huge benefit
    c) Although eks did make a good point about Evil Maid (EM) attacks, our data is not that desirable to consider it a major factor
    d) The cost (free) is a big plus but not the primary motivator

    The next problem I face is imaging (Acronis/Ghost/..) - encrypted drives will not work unless I perform sector-by-sector imaging. That means an 80Gb encrypted partition creates an 80Gb image file :(

    Read the article

  • Linux RAID: Replacing Failed Drive...permanantly

    - by user137519
    Okay, odd question here. I have a server with RAID 5. A drive failed, physically, in a really odd way: on the machine it boots and is seen by the BIOS but... no partition can be seen on the drive consistently (it comes and goes). 2 out of 3 drives working... I made a new spare disk and added it, and the RAID 5 rebuilt clean. All appears well but... when I reboot it keeps trying to use the 2nd drive which doesn't give any partition data, so of course the RAID 5 gets 2 out of 3... again.

    The status of my drives is as follows: /dev/sda2: good; /dev/sdb2: bad (drive has a physical problem so no partition data); /dev/sdc2: good; /dev/sdd2: good. Every time I reboot, the mdadm system seems to keep trying to use /dev/sdb, which has a physical failure (although it spins and is detected). /dev/sdd is the new drive I created. I added /dev/sdd to the raid and it rebuilds the raid, but this action isn't remembered upon reboot, so it keeps listing /dev/sda and /dev/sdc but doesn't use the perfectly good /dev/sdd until I re-add it manually.

    I've tried removing the dead drive with the mdadm tool, but as it cannot see the /dev/sdb partitions it will not fail or remove it (says the partition doesn't exist). The /etc/mdadm.conf was automatically made on the original OS install and only lists:

      DEVICE partitions
      MAILADDR root
      ARRAY /dev/md2 super-minor=2
      ARRAY /dev/md0 super-minor=0
      ARRAY /dev/md1 super-minor=1

    Basically just the raids to use on boot. I need to remove this semi-dead drive (/dev/sdb) but I'd prefer to know why this is happening before I do. Any ideas or suggestions? I suppose I could attempt to clone/replace /dev/sdb (the partitions on the drive show up, then disappear shortly after), but given the partition "Cheshire cat" behaviour this seems risky to me, and as I have a working "spare" it seems unnecessary. Thanks in advance for your insight.
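    For whenever the "why" is settled, a hedged sketch of the usual way to make the eviction of the bad member and the addition of the new one stick across reboots. The RAID 5 array is shown as /dev/md0 purely as a placeholder, and the member names follow the description above, so confirm everything with mdadm --detail first; the superblock-zeroing step is what stops auto-assembly from picking the half-dead disk up again:

      # Mark the failing member as failed and remove it (works only if md can still address it)
      mdadm --manage /dev/md0 --fail /dev/sdb2 --remove /dev/sdb2
      # Wipe the md superblock on the bad disk so it is never auto-assembled again
      mdadm --zero-superblock /dev/sdb2
      # Make sure the new member is part of the array
      mdadm --manage /dev/md0 --add /dev/sdd2
      # Regenerate the ARRAY lines so boot-time assembly matches reality
      # (review the output and replace the old super-minor lines rather than appending duplicates)
      mdadm --detail --scan
      # If the distro builds the array config into the initramfs, rebuild it afterwards
      # (e.g. update-initramfs -u on Debian/Ubuntu, dracut -f on RHEL-style systems)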

    Read the article

  • Hard Disk based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long term backups right now. We decided that we're not backing up nearly everything that we would like to, and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (i.e. LaCie RAID box or a simple portable hard drive) doing backups, but what we want to do is get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 file server. So to back up anything to tape, it has to be funneled through the file server. With the current increase that we have planned, I don't think that funneling everything through the file server is the right course of action, and I'm thinking that maybe a second backup device would be more appropriate. I would like your input on a couple of ideas:

    1) Doing HDD instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so they don't need years and years of shelf life, so I'm wondering if something HDD-based would be better.

    2) Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network. Our file server is a 12-disk RAID 6 setup. I was thinking something like that, but with no RAID involved; all disks are stand-alone so they can be used/installed/removed on an individual basis. Does any such thing exist?

    Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

    Read the article
