Search Results

Search found 34489 results on 1380 pages for 'folder view'.


  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem for most shops is backups: their size, and the window in which you have to back up the data. What we are working with:
      - VMware vSphere 4.1 cluster
      - PS4000XV EqualLogic storage array (1.6 TB volume dedicated to Backup to Disk)
      - Physical backup server with a single LTO4 drive
      - Backup Exec 2010 R3 with the Exchange, SQL, Active Directory, and VMware agents
      - Dual Gigabit MPIO connections between all devices (storage array, backup server, VM hosts)
    What we would like to accomplish: I would like to implement an efficient Backup to Disk to Tape solution where all of our VMs are backed up to the storage array first, and then, once completely backed up to the array, are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape. Where we are currently: of the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time, and as soon as a job finishes backing up to disk it starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of a restore I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • Adding network share as the system account breaks Win7 backup to network

    - by ChrisBenn
    (On Win7 Ultimate x64 SP1) I've been using Win7 backup to \\192.168.0.100\Backup\main-desktop\ for a while without issue. Yesterday I tried to set up CrashPlan to synchronize my Dropbox folder and a network share. I then found out that CrashPlan, as it runs under the SYSTEM account, can't see my user-mapped drives. So I created a startup script:
      net use O: \\192.168.0.100\Documents /USER:192.168.0.100\username password
    and set it to run on startup, after the network interface is up, as the SYSTEM account. (The username and password are the same for the net use script above, the locally logged-in user, and the explicit username/password entered in Windows Backup.) I woke up this morning to find error flags from Windows Backup and the message "Network location cannot be used" (0x800704B3). If I disable the startup task and reboot, Windows Backup works fine. I'm not sure why having a mapped drive for another user is killing Windows Backup (same server, different folder). I can work around the issue by using another program to synchronize the two folders, but I'm completely in the dark as to why this happens (and it's 100% repeatable). Uninstalling the CrashPlan client doesn't change anything - it's the net use run under the SYSTEM account that breaks Win7 backup (to a network location).

    Read the article

  • SQL Server Installation error 0x84B40000

    - by Kurtevich
    I have a problem installing SQL Server 2008 R2. A long time ago I had it installed, and then uninstalled it. It was left in "Add/remove programs", but I didn't pay attention to that. I had 2005 installed, and now there is a need to install 2008. I removed 2005 and started installing 2008, but it said there was not enough space on C:. That's when I found out that "Add/remove programs" shows it occupying more than 4 gigabytes, even though I had uninstalled it. So I clicked "Remove"; it showed all those many screens and validations and reported that removal completed, but the size of the Program Files folder is still more than 4 GB. I removed from "Add/remove programs" everything that had "SQL Server" in its name, but that main "SQL Server 2008" item is still there, still 4 GB, and uninstalling does nothing. Because the SQL Server installation did not show existing instances, and I don't see any running services related to SQL Server (well, almost none; more details at the end), I thought that this folder contained just some leftover stuff and data, and deleted it manually. Then I agreed to remove the item in "Add/remove programs" and everything looked clean. Now every time I try to install SQL Server (even in the minimum configuration), I receive the following error: "SQL Server Setup has encountered the following error: The specified credentials that were provided for the SQL Server service are not valid. To continue, provide a valid account and password for the SQL Server service. Error code 0x84B40000." What is the service mentioned here? This error looks like I'm trying to add features to an existing server and it can't log in. But the setup didn't ask me for any credentials, except one username that couldn't be changed. Here are the services shown that could be related, both disabled and pointing to non-existing executables:
      SQL Active Directory Helper Service
      SQL Full-text Filter Daemon Launcher (MSSQLSERVER)
    I understand that this must be because of my manual deletion, but is there a way to clean it up now?

    Read the article

  • Umount stale glusterfs partition

    - by Khaled
    I am using glusterfs on several Ubuntu servers; two of them are running glusterfs servers in replication mode. Without any clear error, the glusterfs partition became stale, and the system shows this error when I try to access the stale partition: "Transport endpoint is not connected". Also, when running ls -l on the parent folder I get:
      d????????? ? ? ? ? ? myfolder
    I tried every command I could find to umount this partition, but I could not get it done:
      umount -l /path/to/mount/point
      umount -f /path/to/mount/point
    Also, using the fuser command to show processes accessing this folder did not work. Unloading the fuse kernel module cannot be done, as it is clear from the kernel config that fuse is built into the kernel and not a loadable module; I found this line in /boot/config-2.6.32-24-server:
      CONFIG_FUSE_FS=y
    I have been left with two options:
      1. Reboot the system.
      2. Create another mount point like myfolder2 and mount it again using sudo glusterfs -f /etc/glusterfs/glusterfs.vol /path/to/folder2.
    Of course, I have chosen to go with option 2. Has anyone faced such an issue before? Does anyone have a better solution for such a case?
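
    A minimal recovery sketch, assuming the mount point and volfile paths above (the exact paths, and whether fusermount is available, will vary per system); glusterfs clients mount via FUSE, so fusermount -u is worth trying before a lazy umount:
      # try to detach the stale FUSE mount, falling back to a lazy unmount
      sudo fusermount -u /path/to/mount/point || sudo umount -l /path/to/mount/point
      # verify it is gone
      mount | grep /path/to/mount/point
      # remount at a fresh mount point so applications can carry on
      sudo mkdir -p /path/to/folder2
      sudo glusterfs -f /etc/glusterfs/glusterfs.vol /path/to/folder2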

    Read the article

  • Nginx ignores HTTP Authentication for WordPress login directory

    - by MrNerdy
    I am running WordPress in a subfolder of my domain for testing and development purposes on a VPS LEMP stack. In order to password-protect wp-login.php with an extra layer, I used HTTP authentication for the wp-admin folder. The problem is that the HTTP authentication is ignored: when wp-login.php or the wp-admin folder is requested, it goes directly to the normal WordPress login. I installed everything from the command line in the following way:
      sudo apt-get install apache2-utils
      sudo htpasswd -c /var/www/bitmall/wp-admin/.htpasswd exampleuser
      New password:
      Re-type new password:
      Adding password for user exampleuser
    My Nginx configuration file looks like this:
      server {
          listen 80;
          root /var/www;
          index index.php index.html index.htm;
          server_name example.com;
          location / {
              try_files $uri $uri/ /index.html;
          }
          location /bitmall/wp-admin/ {
              auth_basic "Restricted Section";
              auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;
          }
          location ~ /\.ht {
              deny all;
          }
          error_page 404 /404.html;
          error_page 500 502 503 504 /50x.html;
          location = /50x.html {
              root /var/www;
          }
          # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
          location ~ \.php$ {
              try_files $uri =404;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              include fastcgi_params;
          }
      }
    I would appreciate your advice on this.
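
    Two things are likely at play in a config like the one above: .php requests are matched by the regex location, which carries no auth_basic, so they bypass the protected prefix location; and wp-login.php itself lives in /bitmall/, not under /bitmall/wp-admin/. A quick, hedged way to see which URLs actually demand credentials (example.com and exampleuser are the placeholders from the question):
      # expect 401 where basic auth is enforced; 200/302 means the request bypassed it
      curl -s -o /dev/null -w "%{http_code}\n" http://example.com/bitmall/wp-admin/
      curl -s -o /dev/null -w "%{http_code}\n" http://example.com/bitmall/wp-login.php
      curl -s -o /dev/null -w "%{http_code}\n" http://example.com/bitmall/wp-admin/index.php
      # with credentials, a protected URL should return 200/302 instead of 401
      curl -s -o /dev/null -w "%{http_code}\n" -u exampleuser:password http://example.com/bitmall/wp-admin/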

    Read the article

  • Finding the user - without knowing the innards of IIS

    - by LosManos
    Place of crime is WinSrv2008 with IIS7. My IIS app pool user is trying to create a folder but fails. How do I find out which user it is? Let's say I don't know much about IIS7 and ASP.NET but need to trace whatever is happening through tools. So I fire up Sysinternals Process Monitor to find out what is happening. I find Access denied on a folder, just as I suspected. But which user? I add the User column to the output of Process Monitor and it says IIS Apppool\Defaultapppool in capitals. Well... that isn't a user, is it? If I go to IIS and its app pools and Advanced settings and Process model and Identity, I can see clues about which user it is, but that is only because I know IIS. What if it had been Apache or LightHttpd or whatever? How do I see which user to give the appropriate rights to?

    Read the article

  • Disabling Skype automatic update

    - by user13267
    How do I stop Skype from searching for, or at least downloading, updates without consent? I want to stop that annoying "Update Skype now" dialog box, which keeps popping up before and after I log in to Skype, from appearing at all. A few months ago this used to work:
      1) Go to the C:\Users\"YourName"\AppData\Local\Temp folder.
      2) Find the file called SkypeSetup.exe, and delete it.
      3) Create a text file in the folder and rename it to SkypeSetup.exe.
      4) Right-click on the new file you just created and open its properties.
      5) Left-click the Security tab, then left-click the Advanced button.
      6) Left-click "Change Permissions" and then "Add". Enter "Everyone" (without the quotes) where it says "Enter the object name to select (examples):" and click "OK".
      7) Check the "Deny" box for "Full control" and click "OK".
    (Obtained from HERE.) But now it seems this has stopped working. The worst part is that Skype seems to download a ~30 MB executable setup file without my knowledge before bugging me with the dialog box to update it, and there seems to be no direct way to disable this download. Disabling the Skype updater service does not seem to work either. Is there any kind of patch or registry hack I can use to stop Skype from auto-updating? Or should I start looking for an alternative to Skype altogether?

    Read the article

  • How to find hidden/cloaked files in Windows 2003?

    - by homemdelata
    Here is the point: I set Windows to display all hidden files and protected operating system files, but even after that, my antivirus (Kaspersky) keeps detecting a ".dll" file in "c:\windows\system32" and flagging it as the riskware 'Hidden.Object'. I have tried to find this file every time, but it's not there. So I asked one of the developers to create a service that checks the folder every 5 seconds and, if it finds the file, copies it to another place. If it copies it to another place with the same name and extension, I still can't find the file in the other folder, but Kaspersky now finds both. If I keep the same name but use a different extension, like ".temp123", I still can't find the file. Lastly, I created an empty text file and renamed it with the same name as the other one, and that file just disappeared too. After all this research it's clear that every file with this same name on this specific server gets cloaked, regardless of the file extension. I created a file with the same name on another server and nothing happens; the file is still there without problems. How can I find this kind of file? How can I "uncloak" it? How can I know what this file is doing?

    Read the article

  • Cannot log in to Zabbix web portal

    - by hlx98007
    I've managed to install zabbix22-server on CentOS 6.x along with php-fpm and nginx. I can view the page at 127.0.0.1, but I can only see this: (screenshot) After clicking the "Login" button, the page is the same: (screenshot) What can I do to make it work as expected, so that I can log in as admin? Here are some confs. nginx_zabbix.conf:
      server {
          listen 80;
          add_header X-Frame-Options "SAMEORIGIN";
          access_log /var/log/nginx/zabbix.log;
          error_log /var/log/nginx/zabbix.err.log;
          client_max_body_size 500M;
          # This folder is a soft link to /usr/share/zabbix
          # the permission has been set to nginx:nginx recursively.
          root /var/www/zabbix;
          location / {
              index index.html index.htm index.php;
          }
          location ~ \.php$ {
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              include fastcgi.conf;
              fastcgi_param PATH_INFO $path_info;
          }
      }
    php-fpm is using its default values, with the permission user/group set to nginx (rather than apache). The folder /var/lib/php/session has been set to nginx:nginx with permission 770. SELinux is set to disabled. I've restarted everything up to this point.
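
    A short diagnostic sketch, assuming the log paths from the config above and the default php-fpm log location on CentOS 6 (which may differ on your box). The usual suspects for a login form that silently reloads are PHP session storage permissions and php-fpm errors; note also that $path_info is never defined in the config shown, which is worth checking in the logs:
      # confirm which user the php-fpm workers actually run as
      ps -o user,comm -C php-fpm
      # session files must be writable by that user
      ls -ld /var/lib/php/session
      # watch for errors from nginx and php-fpm while clicking "Login"
      tail -f /var/log/nginx/zabbix.err.log /var/log/php-fpm/error.log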

    Read the article

  • Apache directive for authenticated users?

    - by Alex Leach
    Using Apache 2.2, I would like to use mod_rewrite to redirect unauthenticated users to https if they are on http. Is there a directive or condition one can test for whether a user is (not) authenticated? For example, I could have set up the restricted /foo location on my server:
      <Location "/foo/">
          Order deny,allow
          # Deny everyone, until authenticated...
          Deny from all
          # Authentication mechanism
          AuthType Basic
          AuthName "Members only"
          # AuthBasicProvider ...
          # ... Other authentication stuff here.
          # Users must be valid.
          Require valid-user
          # Logged-in users authorised to view child URLs:
          Satisfy any
          # If not SSL, respond with HTTP-redirect
          RewriteCond ${HTTPS} off
          RewriteRule /foo/?(.*)$ https://${SERVER_NAME}/foo/$2 [R=301,L]
          # SSL enforcement.
          SSLOptions FakeBasicAuth StrictRequire
          SSLRequireSSL
          SSLRequire %{SSL_CIPHER_USEKEYSIZE} >= 128
      </Location>
    The problem here is that every file, in every subfolder, will be encrypted. This is quite unnecessary, but I see no reason to disallow it. What I would like is for the RewriteRule to be triggered only during authentication. If a user is already authorised to view a folder, then I don't want the RewriteRule to be triggered. Is this possible? EDIT: I am not using any front-end HTML here. This is only using Apache's built-in directory browsing interface and its built-in authentication mechanisms. My <Directory> config is:
      <Directory ~ "/foo/">
          Order allow,deny
          Allow from all
          AllowOverride None
          Options +Indexes +FollowSymLinks +Includes +MultiViews
          IndexOptions +FancyIndexing
          IndexOptions +XHTML
          IndexOptions NameWidth=*
          IndexOptions +TrackModified
          IndexOptions +SuppressHTMLPreamble
          IndexOptions +FoldersFirst
          IndexOptions +IgnoreCase
          IndexOptions Type=text/html
      </Directory>

    Read the article

  • SpamAssassin bayesian score discrepancies

    - by CaptSaltyJack
    This makes my brain hurt. For some reason, SpamAssassin is giving high scores to certain emails, but when I test them on the command line, they get a low score. This one particular email has this in the header:
      X-Spam-Flag: YES
      X-Spam-Score: 8.521
      X-Spam-Level: ********
      X-Spam-Status: Yes, score=8.521 tagged_above=-9999 required=5
          tests=[BAYES_99=3.5, BAYES_999=0.2, HTML_MESSAGE=0.001, NO_RECEIVED=-0.001,
          NO_RELAYS=-0.001, RAZOR2_CF_RANGE_51_100=0.5, RAZOR2_CF_RANGE_E8_51_100=1.886,
          RAZOR2_CHECK=0.922, URIBL_RHS_DOB=1.514] autolearn=no
    Yet when I dump the raw email into a file msg and run sudo su amavis -c 'spamassassin -t msg', I get this output:
      Content analysis details: (3.8 points, 5.0 required)
       pts rule name                 description
      ---- ------------------------- ------------------------------------------------
       1.5 URIBL_RHS_DOB             Contains an URI of a new domain (Day Old Bread)
                                     [URIs: cliobeads.com]
      -1.0 ALL_TRUSTED               Passed through trusted hosts only via SMTP
       0.0 HTML_MESSAGE              BODY: HTML included in message
      -0.0 BAYES_20                  BODY: Bayes spam probability is 5 to 20%
                                     [score: 0.1855]
       1.9 RAZOR2_CF_RANGE_E8_51_100 Razor2 gives engine 8 confidence level above 50%
                                     [cf: 100]
       0.5 RAZOR2_CF_RANGE_51_100    Razor2 gives confidence level above 50% [cf: 100]
       0.9 RAZOR2_CHECK              Listed in Razor2 (http://razor.sf.net/)
    I'm really confused as to why, when the email comes in, it gets a completely different score attached to it than when I run spamassassin -t. Is there some other way I should be testing emails? Also, my users have the ability to drag false positives into a folder called "False Positives," and every day a cron job fires off that runs this on every message in every user's folder:
      sa-learn --dbpath=/var/lib/amavis/.spamassassin --ham /tmp/*-*.eml >/dev/null
    I ran sudo locate bayes_toks and there's definitely only one bayes DB on the system, in /var/lib/amavis/.spamassassin. I'm clueless; any help would be great and may help restore my sanity!
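
    A couple of hedged checks that often explain this kind of gap, using the amavis bayes path from the question: the command-line run picks up ALL_TRUSTED (-1.0) because the dumped file has no untrusted relay path, and the Bayes verdict (BAYES_99 at delivery vs BAYES_20 later) can differ if the database has since been retrained or a different database/user is consulted:
      # run the test as the amavis user with bayes debugging to see which DB is consulted
      sudo su amavis -c 'spamassassin -t -D bayes msg 2>&1 | grep -i bayes'
      # inspect the bayes database amavis actually uses (token, ham and spam counts, last expiry)
      sudo su amavis -c 'sa-learn --dbpath=/var/lib/amavis/.spamassassin --dump magic'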

    Read the article

  • How can I set Thunderbird's "Recipient" column to display my email address rather than a friendly name?

    - by Howiecamp
    After configuring a single, unified Inbox within Outlook 2007 to unify multiple email accounts, I found Thunderbird 3's Smart Folder feature. It works great, providing individual inboxes for each of your email accounts and a unified inbox which provides a unified, virtual view of those other inboxes. Thunderbird is smart enough that when I reply to an email addressed to a specific email account, the reply is "From" that email address. In order to know which inbound email went to which of my accounts, I added the "Recipient" column to the inbox Smart Folder. What's displayed in the Recipient column depends on how the sender/sender's email client addresses the email. If they send it to just "[email protected]" without specifying a friendly name, the Recipient column displays "[email protected]" and there's no ambiguity about which account the email was sent to. However, if the sender has me in their address book (likely stored with a friendly name), it will be addressed as "Howard Camp [[email protected]]" and then show in the Recipient column as "Howard Camp". The problem is that if someone emails me with a friendly name at another of my email accounts (e.g. "Howard Camp [[email protected]]"), the Recipient column will also display "Howard Camp" and I can't tell which account it went to until I open the message and/or look at the details. How can I configure Thunderbird to always display my email address rather than the sender-specified friendly name in the Recipient column?

    Read the article

  • Exchange users' mailboxes moved, now they can't open some calendars

    - by Kip
    OK, so the environment is Exchange sp on a Windows 2003 server. This weekend we had to move a bunch of users off one information store that was corrupt onto a temp store, delete the original dodgy store, and then move the users back from the temp store to one of the three other stores under the same original storage group. Since then we have been having some weird access issues relating to calendars. I am assuming it is all related, but it might not be. The problem is that users are unable to see any calendars that they previously had access to. The weird thing is that some of the users in question are not ones who have been moved, nor are they trying to access calendars that belong to people whose accounts have been moved - hence my assumption that it's related, but possibly not. The message received is "Unable to display the folder. The calendar folder could not be found." Here is the kicker: if I move someone who is trying to access other calendars to a different mailbox store (thereby creating a new email account and sending stuff over), things start to work again. This to me indicates a permissions problem; however, I am unsure in what way. Looking for help out there please guys :)

    Read the article

  • Best practices for settings for Oracle database creation

    - by Gary
    When installing an Oracle database, what non-default settings would you normally apply (or consider applying)? I'm not after hardware-dependent settings (e.g. memory allocation) or file locations, but more general items. Similarly, anything that is a particular requirement for a specific application rather than generally applicable isn't really useful. Do you separate out code/API schemas (PL/SQL owners) from data schemas (table owners)? Do you use default or non-default roles, and if the latter, do you password-protect the role? I'm also interested in whether there are any places where you do a REVOKE of a GRANT that is installed by default. That may be version-dependent, as 11g seems more locked down for its default install. These are the ones I used in a recent setup; I'd like to know whether I missed anything or where you disagree (and why).
    Database parameters:
      - Auditing (AUDIT_TRAIL to DB and AUDIT_SYS_OPERATIONS to YES)
      - DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING (both to FULL)
      - GLOBAL_NAMES to true
      - OPEN_LINKS to 0 (did not expect them to be used in this environment)
      - Character set - AL32UTF8
    Profiles: I created an amended password verify function that used the apex dictionary table (FLOWS_030000.wwv_flow_dictionary$) as an extra check to prevent simple passwords.
    Developer logins:
      CREATE PROFILE profile_dev LIMIT
        FAILED_LOGIN_ATTEMPTS 8 PASSWORD_LIFE_TIME 32 PASSWORD_REUSE_TIME 366 PASSWORD_REUSE_MAX 12
        PASSWORD_LOCK_TIME 6 PASSWORD_GRACE_TIME 8 PASSWORD_VERIFY_FUNCTION verify_function_11g
        SESSIONS_PER_USER unlimited CPU_PER_SESSION unlimited CPU_PER_CALL unlimited PRIVATE_SGA unlimited
        CONNECT_TIME 1080 IDLE_TIME 180 LOGICAL_READS_PER_SESSION unlimited LOGICAL_READS_PER_CALL unlimited;
    Application login:
      CREATE PROFILE profile_app LIMIT
        FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LIFE_TIME 999 PASSWORD_REUSE_TIME 999 PASSWORD_REUSE_MAX 1
        PASSWORD_LOCK_TIME 999 PASSWORD_GRACE_TIME 999 PASSWORD_VERIFY_FUNCTION verify_function_11g
        SESSIONS_PER_USER unlimited CPU_PER_SESSION unlimited CPU_PER_CALL unlimited PRIVATE_SGA unlimited
        CONNECT_TIME unlimited IDLE_TIME unlimited LOGICAL_READS_PER_SESSION unlimited LOGICAL_READS_PER_CALL unlimited;
    Privileges for a standard schema owner account:
      CREATE CLUSTER, CREATE TYPE, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE, CREATE JOB,
      CREATE MATERIALIZED VIEW, CREATE SEQUENCE, CREATE SYNONYM, CREATE TRIGGER

    Read the article

  • NFSv3 + ACL: mask is gone on clients

    - by Jorge Suárez de Lis
    I'm sharing an NFS folder among a user group. The default umask on the clients is 0700, and this is a problem because newly created files won't be readable/writable by other users. So I'm using ACLs to force the umask 0770 on the shared folder, and this works OK on the server, but not on the clients.
      server # getfacl /export/proyectos
      getfacl: Removing leading '/' from absolute path names
      # file: export/proyectos
      # owner: root
      # group: root
      user::rwx
      group::rwx
      other::r-x
      default:user::rwx
      default:group::rwx
      default:mask::rwx
      default:other::r-x

      server # getfacl /export/proyectos/innovacion
      getfacl: Removing leading '/' from absolute path names
      # file: export/proyectos/innovacion
      # owner: root
      # group: proyecto-innovacion
      # flags: ss-
      user::rwx
      group::rwx
      mask::rwx
      other::---
      default:user::rwx
      default:group::rwx
      default:mask::rwx
      default:other::---
    As you see, the default (and, on the second directory, also a specific) mask ACLs are being applied. I mount the whole share on the client:
      172.16.54.56:/export/proyectos on /proyectos type nfs (rw,noatime,rsize=131072,wsize=131072,acregmin=10,acl,nfsvers=3,addr=172.16.54.56)
    But the mask and default:mask ACLs are gone:
      client $ getfacl /proyectos/
      getfacl: Removing leading '/' from absolute path names
      # file: proyectos/
      # owner: root
      # group: root
      user::rwx
      group::rwx
      other::r-x
      default:user::rwx
      default:group::rwx
      default:other::r-x

      client $ getfacl /proyectos/innovacion
      getfacl: Removing leading '/' from absolute path names
      # file: proyectos/innovacion
      # owner: root
      # group: proyecto-innovacion
      # flags: ss-
      user::rwx
      group::rwx
      other::---
      default:user::rwx
      default:group::rwx
      default:other::---
    It lacks the default:mask and mask ACLs, the only ones I've set. So the proposed solution to enforce the umask won't work for me. Why is this happening?
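
    For reference, a sketch of the server-side commands that would produce the mask entries shown above, plus a client-side check; the paths and group come from the question, and whether an NFSv3 client reports the mask at all depends on its ACL support, so treat this as a way to confirm what the server holds rather than a fix:
      # on the server: group rwx plus a full mask, as both access and default ACLs
      setfacl -R -m g::rwx,m::rwx -m d:g::rwx,d:m::rwx /export/proyectos
      # verify on the server
      getfacl /export/proyectos/innovacion
      # on the client: create a file and see which permissions it actually ends up with
      touch /proyectos/innovacion/testfile && ls -l /proyectos/innovacion/testfile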

    Read the article

  • Rsync when run in cron doesn't work. Rsync between Mac OS X Server and Linux CentOS

    - by Brady
    I have a working rsync setup between Mac OS X Server and Linux CentOS when run manually in a terminal. I enter the rsync command, it asks for the password, I enter it, and off it goes; it runs and completes. Now that I know it works, I set out to fully automate it via cron. First off, I create an SSH key by running this command on the Mac server:
      ssh-keygen -t dsa -b 1024 -f /Users/admin/Documents/Backup/rsync-key
    entering the password and then confirming it. I then copy the rsync-key.pub file across to the Linux server, place it in the rsync user's .ssh folder, and rename it to authorized_keys:
      /home/philosophy/.ssh/authorized_keys
    I then make sure that the authorized_keys file is chmod 600 and the folder chmod 700. I then set up a shell script for cron to run:
      #!/bin/bash
      RSYNC=/usr/bin/rsync
      SSH=/usr/bin/ssh
      KEY=/Users/admin/Documents/Backup/rsync-key
      RUSER=philosophy
      RHOST=example.com
      RPATH=data/
      LPATH="/Volumes/G Technology G Speed eS/Backup"
      $RSYNC -avz --delete --progress -e "$SSH -i $KEY" "$LPATH" $RUSER@$RHOST:$RPATH
    Then I give the shell file execute permissions and add the following to the crontab using crontab -e:
      29 12 * * * /Users/admin/Documents/Backup/backup.sh
    I check my crontab log file after the above command should have run, and I get this in the log and nothing else:
      Feb 21 12:29:00 fileserver /usr/sbin/cron[80598]: (admin) CMD (/Users/admin/Documents/Backup/backup.sh)
    So I assume everything has run as it should. But when I check the remote server, no files have been copied across. If I run the backup.sh file in a terminal as normal it still prompts for a password, but this time it's through the Mac keychain system rather than typing into the console window. With the Mac keychain I can set it to save the password so that it doesn't ask for it again, but I'm sure that when run from cron this password isn't picked up. This is where I'm assuming rsync in cron is failing, because it needs a password to connect, but I thought the whole idea of making the SSH keys was to prevent the use of a password. Have I missed a step or done something wrong here? Thanks Scott
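
    A minimal sketch of the likely fix, reusing the paths from the question: the prompt described is ssh asking for the key's passphrase (cached by Keychain when run interactively, but unavailable under cron), so either regenerate the key with an empty passphrase or prove the key works non-interactively before blaming cron:
      # regenerate the key with no passphrase (or create a new one just for this job)
      ssh-keygen -t dsa -b 1024 -N "" -f /Users/admin/Documents/Backup/rsync-key
      # re-copy the new public key into /home/philosophy/.ssh/authorized_keys on the server, then:
      ssh -i /Users/admin/Documents/Backup/rsync-key -o BatchMode=yes philosophy@example.com true \
          && echo "key login works without a passphrase"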

    Read the article

  • Input/output error when connecting to a server via ssh

    - by Shehzad009
    Hello, I seem to be having a problem with an Ubuntu server when connecting via ssh. When I log in, I get this error:
      Could not chdir to home directory /home/username: Input/output error
    It seems like my home folder is corrupt or something. I cannot ls in the home folder directory, and I can't cd into my username directory. As root I cannot ls in the home directory either, or in any directory under it. I also notice that when I save or quit in vim, I get this error at the bottom of the page:
      E138: Cannot write viminfo file /home/root/.viminfo!
    Any ideas? EDIT: this is what happens if I type in these commands:
      mount
      proc on /proc type proc (rw,noexec,nosuid,nodev)
      none on /sys type sysfs (rw,noexec,nosuid,nodev)
      fusectl on /sys/fs/fuse/connections type fusectl (rw)
      none on /sys/kernel/debug type debugfs (rw)
      none on /sys/kernel/security type securityfs (rw)
      none on /dev type devtmpfs (rw,mode=0755)
      none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
      none on /dev/shm type tmpfs (rw,nosuid,nodev)
      none on /var/run type tmpfs (rw,nosuid,mode=0755)
      none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
      /dev/mapper/RAID1-lvvar on /var type xfs (rw)
      /dev/mapper/RAID5-lvsrv on /srv type xfs (rw)
      /dev/mapper/RAID5-lvhome on /home type xfs (rw)
      /dev/mapper/RAID1-lvtmp on /tmp type reiserfs (rw)

      dmesg | tail
      [1213273.364040] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213274.084081] Filesystem "dm-4": xfs_log_force: error 5 returned.
      [1213309.364038] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213310.084041] Filesystem "dm-4": xfs_log_force: error 5 returned.
      [1213345.364039] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213346.084042] Filesystem "dm-4": xfs_log_force: error 5 returned.
      [1213381.365036] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213382.084047] Filesystem "dm-4": xfs_log_force: error 5 returned.
      [1213417.364039] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213418.084063] Filesystem "dm-4": xfs_log_force: error 5 returned.

      fdisk -l /dev/sda
      Cannot open /dev/sda
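
    The repeated xfs_log_force error 5 (EIO) messages suggest the XFS filesystems backing /home and another LV have shut down after an I/O error, and the failure to even open /dev/sda points at the underlying disk or controller. A hedged diagnostic outline, using the device names from the output above; xfs_repair can make things worse on failing hardware, so check the disks first and treat this as a sketch only:
      # look for disk/controller errors and check array health before touching the filesystem
      dmesg | grep -iE 'sd[a-z]|raid|i/o error' | tail -50
      # if the hardware looks sound, unmount the affected filesystem...
      umount /home
      # ...and dry-run xfs_repair (no-modify mode) before letting it change anything
      xfs_repair -n /dev/mapper/RAID5-lvhome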

    Read the article

  • What folders to encrypt with EFS on Windows 7 laptop?

    - by Joe Schmoe
    Since I've been using my laptop more as a laptop recently (carrying it around), I am now evaluating my strategy for protecting confidential information in case it is stolen. Keep in mind that my laptop is 6 years old (Lenovo T61 with 8 GB of RAM, 2 GHz dual-core CPU). It runs Windows 7 fine, but it is no speed demon, and it doesn't support the AES instruction set. I've been using a TrueCrypt volume mounted on demand for really important stuff like financial statements forever. Nothing else is encrypted. I just finished my evaluation of EFS and Bitlocker and took a closer look at TrueCrypt again. I've come to the conclusion that boot partition encryption via Bitlocker or TrueCrypt is not worth the hassle. I may decide in the future to use Bitlocker or TrueCrypt to encrypt one of the data volumes, but at this point I intend to use EFS to encrypt the parts of my hard drive that contain data I wouldn't want exposed. The purpose of this post is to get your feedback about which folders should be encrypted from a general point of view (of course everyone will have something specific in addition). Here is what I thought of so far (will update if I think of something else):
      1) AppData\Local\Microsoft\Outlook - Outlook files
      2) AppData\Local\Thunderbird\Profiles and AppData\Roaming\Thunderbird\Profiles - Thunderbird profiles; not sure yet where exactly the data is stored.
      3) AppData\Roaming\Mozilla\Firefox\Profiles\djdsakdjh.default\bookmarkbackups - Firefox bookmark backups. Is there a separate location for the "main" Firefox bookmark file? I haven't figured it out yet.
      4) Bookmarks for Chrome (don't know where its bookmarks are) and Internet Explorer ($Username\Favorites) - I don't really use them, but why not secure those as well.
      5) Downloads\, My Documents\ and My Pictures\ - I don't think I need to encrypt everything here (say, the latest service pack for Visual Studio), so I will probably create a subfolder called "Secure" in each of these folders and set it to "Encrypted". Anything sensitive I will save in that folder.
    Any other suggestions? Again, this is from the point of view of your "regular office user".

    Read the article

  • Disabled FRS replication on a DFS link, but the targets still list the replica set in their FRS configuration

    - by Graeme Donaldson
    It's been a while since I've had to deal with the wonders of FRS, so I'm doing some testing to refresh my memory. I am stuck with FRS rather than DFS-R for the moment, since not all of my link targets are running R2. This is what I've done so far:
      - Created a domain-based DFS root, hosted on 4 servers.
      - Created a DFS link under the root, targeted at 2 servers. The shares on both servers were empty.
      - Dropped about 500 MB of data into the target folder on one server and waited for replication to complete.
      - Added/removed/modified files on both targets and confirmed that changes are replicated within a few seconds.
      - Deleted the contents of the target folder on 1 server and waited for the other server to replicate the deletion.
    All of this worked perfectly, so now I want to remove my DFS link since I only created it for testing purposes. This is where it gets weird. I'm pretty sure that in the past I've disabled replication on the DFS link and, after a short amount of time, each target would log an info event in the FRS event log, something along the lines of "this server is no longer a member of replica set X". I have waited about 3 hours and I haven't seen this happen. ntfrsutl ds tells me that the server is not a member of any set, which is expected because when I disable replication on the link, the AD attributes on the computer object are removed. The weird part is... ntfrsutl sets still shows me the replica set, with all the properties, etc. So it seems like the FRS-related attributes of the target server's AD object are gone, but the FRS service for some reason hasn't removed the replica set. Can anyone see what I have done wrong?

    Read the article

  • MacBook Pro - Aquamacs - spell check

    - by peggy Li
    I have tried to use spell check in Aquamacs. I highlighted a region of text, then clicked Edit, then Spell Check Region. I got the error message:
      Error: No word lists can be found for the language "en_US".
    Then I went to the website to download the following dictionaries:
      1. CocoAspell: I just clicked the download button, and it reported that the download was successful. However, when I tested it by highlighting a text region and clicking spell check, the same error message came out. Do I need to put the downloaded .pkg in a certain place, such as the Applications folder, before I open it? Or what else do I need to do to make it work?
      2. I also downloaded the base package Aspell (for Intel) and the pre-built dictionaries (as instructed on the website), in the same way as point 1. I still got the same error message. Again, do I need to put the downloaded .pkg in a certain place before I open it? Or what else do I need to do to make it work?
    I would greatly appreciate it if someone could give me some help. Peggy Li
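
    A quick way to check from Terminal whether Aspell itself and an en_US word list are actually installed and visible on the path Aquamacs will use (a hedged sketch; the exact install locations depend on which package was used):
      # is an aspell binary installed and on the PATH?
      which aspell
      # which dictionaries does it know about? en_US should be listed
      aspell dump dicts
      # try a spell-check round trip outside of Aquamacs
      echo "helo world" | aspell -d en_US list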

    Read the article

  • Backup and Archive Strategy Question

    - by OneNerd
    I am having trouble finding a backup strategy for our code assets that "just works" without any manual intervention. The goal is to have an off-site, synchronized backup, so that when we check in files, create builds, etc. on the network drive, the entire folder structure is automatically synchronized and backed up (in real time, or once per day) to some off-site location; that way, if our office blows up, we don't lose all of our data. I have looked into some online backup services, but have not yet had any success. Some are quirky/buggy, others limit file size and/or kinds of files (which doesn't work well for developer files). Everything gets checked in and saved to a single server (on a RAID mirror), so we just need to have one folder on that server backed up/synchronized to some off-site location. So my question is this: what are you using for your off-site backup strategy? What software, system, or service? Is there a be-all/end-all system for backing up code assets that I just haven't found yet? Thanks
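
    For what it's worth, a minimal self-hosted sketch of the "synchronize one folder off-site once a day" idea, using rsync over ssh from the file server to a machine at another location; the host name, paths, and key file are placeholders, and this assumes there is an off-site box to push to:
      #!/bin/bash
      # nightly push of the code-assets share to an off-site box (placeholder names throughout)
      SRC="/srv/code-assets/"
      DEST="backup@offsite.example.com:/backups/code-assets/"
      KEY="/root/.ssh/offsite-backup-key"
      # -a preserves permissions/times, --delete mirrors removals, -z compresses over the wire
      rsync -az --delete -e "ssh -i $KEY -o BatchMode=yes" "$SRC" "$DEST"
    plus a crontab entry along the lines of:
      0 2 * * * /usr/local/bin/offsite-backup.sh >> /var/log/offsite-backup.log 2>&1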

    Read the article

  • Why can't I see installed AIR applications in /Applications on Mac OS Lion?

    - by Damir Zekic
    Yesterday I did a clean install of Mac OS Lion and installed a bunch of apps, including the HipChat desktop application (an Adobe AIR app). I used it for some time and eventually closed it. Today I wanted to start it again, but couldn't find it in Launchpad. I looked in the /Applications folder, but I couldn't find it there either (I still don't have ~/Applications). I went back to the HipChat website (from where I installed it) and a Flash plugin allowed me to run the app immediately. It shows up in my Dock, and if I "keep it" there I can launch it again (it still doesn't appear in my /Applications dir). However, it asks me for confirmation about launching an app that has been downloaded from the Internet (every time I start it, which is annoying). I also see the app in the Adobe AIR uninstaller. I then went to my old Snow Leopard installation, found HipChat in my /Applications folder there, copied it to the Lion disk, and it works. Now I have two HipChat entries in the Adobe AIR uninstaller. I guess I solved the issue at hand, but I still don't understand where the original app is located and how I can access it and move it to /Applications. I couldn't find it anywhere with either Finder search or the find command in Terminal (I used find / -name "HipChat*" -print). So, how does Adobe AIR store installed applications on OS X Lion, and is there any way to get them to show up in Launchpad?
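
    A couple of Spotlight-based checks that may turn up where the .app bundle actually landed (a hedged sketch; find / can miss things Spotlight has indexed, and the location will depend on how the web-launched AIR installer ran):
      # ask Spotlight for anything named HipChat
      mdfind -name HipChat
      # restrict to application bundles
      mdfind "kMDItemContentType == 'com.apple.application-bundle'" | grep -i hipchat
      # also look in per-user locations a web-launched install might use
      ls -d ~/Applications ~/Library/Application\ Support/*HipChat* 2>/dev/null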

    Read the article

  • Network share not always available on Windows 2003

    - by JP Hellemons
    Hello everybody, we have a Windows 2003 server with a shared directory/folder. I've seen this thread, but it wasn't any help: http://superuser.com/questions/58890/the-specified-network-name-is-no-longer-available I have a ping -t running from 3 PCs (one Vista and two Windows 7); they all work. The problem occurs when two users enter the network share: the 'network share is no longer available' message appears and the Explorer windows turn white. After F5 or a refresh, the shared directory is back. This is really strange. There is no antivirus or Kaspersky running on either end, and this is all on the same LAN. The internet connection is really stable, so it's really strange, because a stable internet connection should imply that the local network connection is also stable and that this is a Windows issue. Can it be a router issue? I have checked the event log on the server for disk-failure-related messages, but there are none. EDIT: Can this be related to mapping a shared directory to a drive letter, and the fact that there is a router between me and the mapped network drive? Or is it just Windows not working well with two users on the same shared folder? Should I install Samba or something?

    Read the article

  • MDT 2010 Litetouch.vbs Fails to Launch

    - by Mitch
    I have the custom image captured, imported the image and files, and prepared customsettings.ini and boot.ini to minimize the questions the deployment team will need to answer. Everything works like a charm on virtual machines, but when I map to the scripts folder on the deployment share and double-click litetouch.vbs, it creates the c:\minint folder, subfolders, and a couple of log files, then nothing. Here's what the log files look like:
      <![LOG[Property LogPath is now = C:\MININT\SMSOSD\OSDLOGS]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
      <![LOG[Property CleanStart is now = ]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
      <![LOG[Microsoft Deployment Toolkit version: 5.1.1642.01]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
      <![LOG[Property Debug is now = FALSE]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
      <![LOG[GetAllFixedDrives(False)]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
    Has anyone encountered this before, or does anyone know what might be happening (or not happening) and can point me in the right direction? I've only found a couple of other references to this anywhere, and they had no solution or cause listed either. I'm stumped.

    Read the article

  • SPDIF passthrough not working in Windows 7

    - by adriangrigore
    Hi, I'm running Windows 7 on a computer with an Audigy Platinum eX sound card connected to a surround receiver via optical cabling. Sound works fine when listening to non-surround audio sources, such as Windows sounds or MP3s. However, when I view a DVD in Media Center and the SPDIF passthrough kicks in, I can only hear an awful noise instead of the movie soundtrack. Also, the receiver does not show the Dolby Digital or DTS symbol, but stays at Dolby Pro Logic, so it seems it doesn't identify the sound encoding properly. I could switch off SPDIF passthrough and use the sound card's decoder instead, but that's not an option for me since it would create more problems with regular MP3 playback via an additional stereo receiver that is also connected to the same sound card. I've tried both the default Audigy drivers that come with Windows 7 and the latest drivers from the SoundBlaster website, but the problem remains unchanged. Also, I have ensured that the receiver's Dolby Digital decoder is not broken by successfully connecting it to my PS3 to view a Dolby Digital DVD. Besides, SPDIF passthrough was working fine in Vista before I upgraded to Windows 7. Is there anything else I could try?

    Read the article
