Search Results

Search found 49518 results on 1981 pages for 'configuration files'.


  • Outlook 2010 PST not indexing

    - by kellyllek
    I've had this problem for months: Outlook is unable to search recent items in the inbox. I was running the Outlook 2010 Beta and have just moved to the Trial version, hoping to solve the issue. I have tons of PST files, but there is one central one I'm mainly concerned with, and as of now it seems none of it is indexing. I've been through all the sites and made all the suggested changes: rebuilt the index, changed the name of the PST files, run scanpst, stopped and started the search services, made sure the indexing option is checked under Windows Features, etc. Status now says "zero items left to index" and 150,000 items have been indexed. I think I have a lot more items than that, and nothing shows up in any search either. I'm not sure what else to do. Side question: I'm going to be moving away from Outlook, but I have 10 GB+ of PST files built up over the years. I want to merge them and make them searchable in the easiest way possible. Any idea how to do that? Could I move over to Thunderbird right now and be able to index and search my PST files? Google Desktop won't index Outlook 2010 email either...

    Read the article

  • Setting up logging for a remote backup script

    - by Brian Dainis
    So I wrote up a short script that I plan to run daily via a cron job to package up my site files and send them to a remote location. I also plan to incorporate DB dumps, but I haven't gotten that far yet. My issue today is that I am uncertain how to log the output of each command for errors, warnings, or other pertinent information the command may produce. I would also like to add some type of fail-safe so that if something goes horribly wrong, the script stops dead in its tracks and notifies me via email or something. OK, the email part is not critical, but it would be nice. Does anybody have any ideas? Here is what I have so far. By the way, both servers are CentOS 6.2 running a standard LAMP stack.

        #!/bin/sh
        #################################
        ### Set Vars
        #################################
        THEDATE=`date +%m%d%y%H%M`

        #################################
        ### Create Archives
        #################################
        tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
        gzip /root/backups/files/server_BAK_${THEDATE}.tar

        #################################
        ### Send Data to Remote Server
        #################################
        scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/

        #################################
        ### Remove Data from this Server
        #################################
        rm -rf /root/backups/files/server_BAK_${THEDATE}.tar.gz
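    A minimal sketch of one common approach (an editorial suggestion, not from the thread): run under set -e so the script stops dead on the first failed command, redirect all output into a dated log, and mail that log if anything errors out. Assumes bash and a working mail/mailx on the box; the log path and address are placeholders.

        #!/bin/bash
        set -e                                  # abort on the first failed command
        THEDATE=`date +%m%d%y%H%M`
        LOG="/root/backups/logs/server_BAK_${THEDATE}.log"
        mkdir -p /root/backups/logs
        exec >"$LOG" 2>&1                       # stdout+stderr of everything below goes to the log
        trap 'mail -s "backup FAILED on $(hostname)" admin@example.com < "$LOG"' ERR

        tar -czf "/root/backups/files/server_BAK_${THEDATE}.tar.gz" -C / var/www/vhosts
        scp "/root/backups/files/server_BAK_${THEDATE}.tar.gz" user@host:/home/bak1/ftp/backups/
        rm -f "/root/backups/files/server_BAK_${THEDATE}.tar.gz"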

    Read the article

  • Seagate 3TB hard drive loses format information

    - by Victor Bugarin
    I have Windows 7 x64 Ultimate, 6 GB of memory, and a 1 TB system drive, plus a 3 TB Barracuda XT installed in a StarTech four-bay external enclosure. After some initial trouble I converted the drive to GPT, created one partition, and formatted it NTFS. I can read from and write to the drive, but it becomes unreadable at some point, either while I am copying files to it or afterwards. I have copied large Blu-ray movies and assorted video files, 32 GB of pictures, and about 86,000 music files in different formats. At some point the partition becomes unreadable, and I have to format it again (all files lost) and start the whole process over. At one point I was unable to copy large ISO images (Blu-ray movies) at all. I have also partitioned the HDD into two partitions (P1 = 2 TB, P2 = 1 TB) and lost every single file in both partitions the same way. After a reformat the HDD seems fine, and I have run SeaTools to check it; it reports the drive is OK. What gives?

    Read the article

  • CDN recommendation

    - by michaeld79
    Hey all, I am looking for a CDN service that can update the files at its end points on demand via an API within at most 10 minutes, or that supports an expiration time for files of 10 minutes or less. In addition, the CDN must offer a way to upload files via an API (I'm working with PHP on this project). Thanks in advance, Michael D

    Read the article

  • nginx won't serve an error_page in a subdirectory of the document root

    - by Brandan
    (Cross-posted from Stack Overflow; could possibly be migrated from there.) Here's a snippet of my nginx configuration:

        server {
            error_page 500 /errors/500.html;
        }

    When I cause a 500 in my application, Chrome just shows its default 500 page (Firefox and Safari show a blank page) rather than my custom error page. I know the file exists because I can visit http://server/errors/500.html and see the page. I can also move the file to the document root and change the configuration to this:

        server {
            error_page 500 /500.html;
        }

    and nginx serves the page correctly, so it doesn't seem like something else is misconfigured on the server. I've also tried:

        server {
            error_page 500 $document_root/errors/500.html;
        }

    and:

        server {
            error_page 500 http://$http_host/errors/500.html;
        }

    and:

        server {
            error_page 500 /500.html;
            location = /500.html {
                root /path/to/errors/;
            }
        }

    with no luck. Is this expected behavior? Do error pages have to exist at the document root, or am I missing something obvious?

    Update 1: This also fails:

        server {
            error_page 500 /foo.html;
        }

    when foo.html does indeed exist in the document root. It almost seems like something else is overwriting my configuration, but this block is the only place anywhere in /etc/nginx/* that references the error_page directive. Is there any other place that could set nginx configuration?
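    Two hedged guesses, neither confirmed by the thread. First, if the 500 is produced by an upstream (proxy_pass or fastcgi_pass), nginx relays the upstream's own error body and ignores error_page unless told to intercept. Second, the error_page URI goes through location matching again on the internal redirect, so another location block (say, one that proxies everything) can capture /errors/500.html even though a direct visit works. A sketch guarding against both; the root path is a placeholder:

        server {
            error_page 500 /errors/500.html;

            # let nginx replace upstream error bodies with the error_page target
            proxy_intercept_errors on;          # use fastcgi_intercept_errors for FastCGI

            # pin the internal redirect to a static file so no other location
            # or upstream can capture it
            location = /errors/500.html {
                root /path/to/document_root;    # file lives at .../errors/500.html
                internal;
            }
        }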

    Read the article

  • How to save transferred MP3s on Centova Cast?

    - by cHAKE
    We have successfully uploaded and transferred the MP3 files to Centova Cast with FileZilla.

        We know how to update the media library.
        We know how to create playlists.
        We know how to drag the songs into different playlists.
        We know how to save the media and the playlists.

    BUT we lose all of the saved files (songs) in the media library the moment new files arrive in it. We want the media library and playlists to stay saved.

    Read the article

  • Windows 7 Explorer shortcut for "Replace All"

    - by chris
    If you copy/paste some files into a directory that already contains files with the same names, you'll get a confirmation dialog: "There is already a file with the same name in this location..." To replace all files, you have to check "Do this for the next N conflicts" and then click "Copy and Replace". Is there a keyboard shortcut for this (like in XP, where you could simply press 'a')?

    Read the article

  • SFTP over double server hop

    - by josh.trow
    I'm trying to work out a method to access files on an SFTP server that I cannot reach from my local machine. Currently, I have to SSH to a remote server (it is in an IP block that the final SFTP server accepts connections from), then from there SFTP to the destination server. From there I get the files I am interested in, thereby dropping them onto the middleman server, from which I can fetch them over a Samba share or with a direct scp. I also work in reverse: I drop the files onto the middleman, SSH to it, then SFTP to the destination and put them into the appropriate folders. My goal is to shorten this. The unfortunate restrictions are that my machine runs Windows (I use KiTTY and/or Cygwin) and that I cannot modify the middleman server (or destination server) in any way. I am willing to use command-line or GUI programs as long as they work and are free. Any ideas?
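    A sketch of the usual way to collapse the hop with stock OpenSSH (Cygwin ships it); hostnames and usernames are placeholders:

        # OpenSSH 7.3+: jump through the middleman in one command
        sftp -o ProxyJump=me@middleman.example.com me@sftp-dest.example.com

        # Older clients (OpenSSH 5.4+): tunnel stdio through the middleman
        sftp -o ProxyCommand="ssh -W %h:%p me@middleman.example.com" me@sftp-dest.example.com

    Files then stream through the middleman without ever being stored on it. WinSCP can do the same from a GUI via its tunnel option, if a graphical client is preferred.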

    Read the article

  • EXECUTE master.dbo.xp_delete_file folder permission issue

    - by Alex
    I'm trying to run a Maintenance Cleanup Task to remove .bak files older than 2 days (simple enough). I've been trying all varieties of .bak, BAK, and .*, and editing the path, but the files are still not getting removed, even though I receive a "job succeeded" log message. I'm now at the point where I believe it's a folder permission issue. How do I make sure my SA has the proper permissions to remove files from a folder? Thanks.
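    Two things worth checking, offered as hedged suggestions rather than confirmed fixes. The Maintenance Cleanup Task's extension field expects bak with no leading dot, so ".bak" silently matches nothing while the job still reports success. And the delete runs as the SQL Server service account, not as your SA login, so that Windows account needs NTFS rights on the folder; a hypothetical grant (path and per-service account name assumed, 2008 R2 default instance):

        icacls "D:\Backups" /grant "NT SERVICE\MSSQLSERVER":(OI)(CI)M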

    Read the article

  • Replicate a big, dense Windows volume over a WAN -- too big for DFS-R

    - by Jesse
    I've got a server with a LOT of small files: many millions of them, over 1.5 TB of data in total. I need a decent backup strategy. Any filesystem-based backup takes too long; just enumerating which files need to be copied takes a day. Acronis can do a disk image in 24 hours, but fails when it tries to do a differential backup the next day. DFS-R won't replicate a volume with this many files. I'm starting to look at Double-Take, which seems to be able to do continuous replication. Are there other solutions that can do continuous replication at a block or sector level, rather than file-by-file, over a WAN?

    Read the article

  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking it. Of course the reverse DNS will change when we do that, even though we don't change the actual hostname of the system. We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:

        1. basic OS installation
        2. basic network configuration, enough to reach the internet and resolve DNS
        3. install puppet and set up certname (see the sketch below)
        4. start puppet and let it manage the whole configuration
        5. test, fix problems in config (via puppet), re-test, and so on...
        6. manually stop puppet
        7. set up the new network configuration for the datacenter network
        8. move the machine to the DC and turn it on
        9. puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in Puppet manifests (e.g. based on subnet, like they do at Wikimedia) and modifying the configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change over the whole system life cycle. Is there any problem with this approach? Could it be improved?
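    For reference, a minimal sketch of step 3 (the path matches a typical 2.7-era layout; the FQDN is a placeholder). The key point is that it must be in place before the very first agent run, because the agent generates its key and CSR under this name:

        # /etc/puppet/puppet.conf
        [agent]
        certname = web01.example.com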

    Read the article

  • Apache, Permissions, and Convenience

    - by Mike
    I'm on Mac OS X and I have apache2 installed via MacPorts, running as the _www user. I have some files I want to serve in the /Users/Me/Documents/abc folder. Right now, though, the permissions on /Users/Me/Documents are 700, so _www can't get in, even if abc is chmod 777. I recognize the following options:

        1. Allow _www access to my Documents folder.
        2. Put the files I want to share outside of my Documents folder.
        3. Hard-link the files outside of my Documents folder, and point Apache at the hard links.

    None of these solutions is acceptable to me, however. I don't feel safe allowing _www access to my entire Documents folder; I really want to keep the files in my Documents folder for other reasons; and the files change all the time, so hard links would not always reflect the right file structure, and, as I understand it, you can't hard-link a directory (though, if you could, that would solve it). Any ideas for a solution? Is there a way to run a few httpd processes under my user account so they can get in there? Or is there some way to hard-link a directory, or to get httpd to follow a symlink past a directory that is 700 and not owned by _www? Thanks!
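    One further option, sketched under the assumption that the volume has OS X ACL support enabled (the default on 10.5+): give _www traverse-only access on Documents via an ACL. The "search" right lets a process pass through a directory to reach a known path without being able to list or read anything else in it:

        # _www can traverse Documents but not list or read its other contents
        chmod +a "_www allow search" /Users/Me/Documents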

    Read the article

  • Samba/Winbind issues joining an Active Directory domain

    - by Frap
    I'm currently in the process of setting up winbind/samba and running into a few issues. I can test connectivity with wbinfo fine:

        [root@buildmirror ~]# wbinfo -u
        hostname
        username
        administrator
        guest
        krbtgt
        username
        [root@buildmirror ~]# wbinfo -a username%password
        plaintext password authentication succeeded
        challenge/response password authentication succeeded

    However, when I do a getent I don't get any AD accounts returned:

        [root@buildmirror ~]# getent passwd
        root:x:0:0:root:/root:/bin/bash
        bin:x:1:1:bin:/bin:/sbin/nologin
        daemon:x:2:2:daemon:/sbin:/sbin/nologin
        adm:x:3:4:adm:/var/adm:/sbin/nologin
        lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
        sync:x:5:0:sync:/sbin:/bin/sync
        shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
        halt:x:7:0:halt:/sbin:/sbin/halt
        mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
        uucp:x:10:14:uucp:/var/spool/uucp:/sbin/nologin
        operator:x:11:0:operator:/root:/sbin/nologin
        puppet:x:52:52:Puppet:/var/lib/puppet:/sbin/nologin

    My nsswitch.conf looks like this:

        passwd: files winbind
        shadow: files winbind
        group:  files winbind
        #hosts: db files nisplus nis dns
        hosts:  files dns

    and I'm definitely joined to the domain:

        [root@buildmirror ~]# net ads info
        LDAP server: 192.168.4.4
        LDAP server name: pdc.domain.local
        Realm: domain.local
        Bind Path: dc=DOMAIN,dc=LOCAL
        LDAP port: 389
        Server time: Sun, 05 Aug 2012 17:11:27 BST
        KDC server: 192.168.4.4
        Server time offset: -1

    So what am I missing?
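    A hedged guess at the missing piece: wbinfo talks to winbindd directly, but getent goes through NSS, which only enumerates domain accounts if winbind is told to allow it, and can only print them if idmap can assign UIDs. A smb.conf sketch; the range is a placeholder, and the idmap config syntax is for Samba 3.6+ (older releases use idmap uid / idmap gid instead):

        [global]
            winbind enum users  = yes   # getent passwd with no argument needs enumeration
            winbind enum groups = yes
            idmap config * : backend = tdb
            idmap config * : range   = 10000-19999

    Even with enumeration off, getent passwd DOMAIN\\username for a single account should still resolve once idmap is set up; if it doesn't, restart winbindd and re-test.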

    Read the article

  • How can I view a .eml file from command line in Windows Vista?

    - by Nosrettap
    OK, so my parents' computer crashed (horribly: a corrupted registry) and I'm trying to access one of their e-mail files that is saved locally on the hard drive. I can't launch Windows itself, so right now I am in a boot-up command prompt. I've navigated to where the e-mails appear to be stored, C:\Users\[userName]\AppData\Local\Microsoft\Windows Mail\Local Folders\Inbox, and it shows a list of what appear to be the files. The problem is, they are .eml files and I can't seem to open them. I've tried the 'vim' and 'vi' commands, but it tells me that 'vim' is not recognized as an internal or external command. Does anyone know how I can view .eml files from the command line? Thanks
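    vim isn't part of a bare Windows command prompt, but .eml files are plain RFC 822 text, so the built-ins can display them; a sketch (the filename is a placeholder):

        rem page through the raw message, headers and body
        more < "message.eml"
        rem if GUI programs can launch from this prompt, Notepad opens it as text too
        notepad "message.eml"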

    Read the article

  • nginx not returning 304 on cached content

    - by Don H
    I'm using nginx as a reverse proxy with an Apache back-end handling some PHP files. The files return the right expiry headers, and proxy_cache does a good job of caching them, but I've noticed that the cached content returns a 200 on every refresh, when it might be more efficient to return a 304 for the cached files. The files in question are generated by PHP; the URLs do not have .php in them, as they've been prettified. Any idea why nginx might not be returning 304s on repeated visits to cached PHP output? To clarify: it's using proxy_cache to cache dynamic PHP pages (not static HTML pages generated by PHP). I'm setting Expires headers in the PHP file of time + 24 hours. With that in mind, I was hoping nginx would return 304s for its cached versions during that 24-hour window.
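    A hedged explanation, not from the thread: a 304 is only possible when the client sends a validator (If-Modified-Since or If-None-Match), and browsers only send those if the response carried Last-Modified or an ETag in the first place; Expires and Cache-Control alone never produce one, and PHP output has no Last-Modified unless you add it. A minimal PHP sketch ($generatedAt is a placeholder for whenever the content really changed; whether nginx then answers conditionals from its cache also depends on the nginx version):

        <?php
        $generatedAt = filemtime(__FILE__);  // placeholder: use the content's real change time

        header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $generatedAt) . ' GMT');
        header('Cache-Control: public, max-age=86400');

        // answer conditional requests instead of re-sending the body
        if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
            strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $generatedAt) {
            header($_SERVER['SERVER_PROTOCOL'] . ' 304 Not Modified');
            exit;
        }
        // ...render the page as usual...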

    Read the article

  • Import EML emails into Outlook 2010 64-bit

    - by nness
    Evening, everyone. I'm helping set up a small office network where a number of old PCs are being replaced with new ones running a 64-bit copy of Outlook 2010. The old emails were stored in Windows Live Mail and were exported as .eml files (since we were replacing the machines). All the support I can find indicates that .eml files can simply be dragged and dropped into a folder in Outlook 2010, and it will import them correctly. However, it seems this is not the case in the 64-bit version, where dropping in .eml files results in a new message being created with those files as attachments. We can re-download most of the emails from the server if need be, but there were user folders that were not on the server which we were hoping to import. Any advice would be fantastic at this point!

    Read the article

  • VPS showing low disk space despite having nothing major on it

    - by SheoNarayyan
    Hello experts, I was trying to see the used disk space on my VPS server, and when I open My Computer it shows 17.9 GB free out of 39.8 GB, which means 21.9 GB is used. However, when I select all files and folders on C: and check their total size, it comes to only about 11 GB. The difference is around 10 GB. Where is this 10 GB going if I have not stored anything else here? I asked the above question of my VPS provider, and he responded: "Check hidden files/system files/etc. This is a default Windows OS and its utilization, and not specific to your setup. If you want specifics of usage, you can get in touch with the Microsoft support team and they'll provide you with the exact specification of the same." I am sure that Windows must not be taking up 10 GB of space for hidden files and folders. My VPS has Windows Server 2008 R2 installed. Can anyone help me figure out who is right?
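    A few hedged checks before blaming hidden files in general: on a Server 2008 R2 VPS the usual invisible consumers are the pagefile (often sized to RAM), a hibernation file, and shadow-copy storage, none of which appear when you select everything in Explorer:

        rem hidden/system files in the root, e.g. pagefile.sys and hiberfil.sys
        dir C:\ /a:hs
        rem space reserved by Volume Shadow Copy (System Restore, backups)
        vssadmin list shadowstorage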

    Read the article

  • Input multiple file names in windows open file dialog box

    - by goodiet
    Windows 7 allows you to select multiple files to open at once by using the Ctrl or Shift key, and the "File name" input field at the bottom of the dialog box then auto-populates with something like: "aaa.txt" "bbb.txt" "ccc.txt" "ddd.txt". I have 14,000 files in a folder and I only need a range of them (approx. 500). When I use the Shift key to select a range of files, the "File name" field auto-populates with all 500 file names, but Windows cuts me off at the 260th character when I try to paste a pre-generated string into the field. Is there a way to bypass the 260-character limit so it will accept my entire string with 500 file names?

    Read the article

  • Prevent rmdir -p from traversing above a certain directory

    - by thepurplepixel
    I hacked together this script to rsync some files over SSH. The --remove-source-files option of rsync removes the files it transfers, which is what I want. However, I also want the directories those files were placed in to be gone as well. The current part of the find command, -exec rmdir -p {} \;, tries to remove the parent directory (in this case, /srv/torrents) but fails because it doesn't have the right permissions. What I'd like to do is stop rmdir from traversing above the directory find is run in, or find another solution to get rid of all the empty folders. I've thought of using some kind of loop with find and running rmdir without the -p switch, but I figured it wouldn't work out. Essentially, is there an alternative way to remove all the empty directories under the parent directory? Thanks in advance!

        #!/bin/bash
        HOST='<hostname>'
        USER='<username>'
        DIR='<destination directory>'
        SOURCE='/srv/torrents/'

        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats -m --progress -i $SOURCE $HOST:$DIR
        find $SOURCE -mindepth 1 -type d -empty -prune -exec rmdir -p \{\} \;
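    One alternative, sketched: let find delete the directories itself. GNU find's -delete processes children before parents, only removes directories that are empty at that moment, and never climbs above the starting point, so /srv/torrents itself is left alone as long as -mindepth 1 stays:

        find "$SOURCE" -mindepth 1 -type d -empty -delete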

    Read the article

  • Disk Cleanup on Server 2008 R2 is ineffective

    - by cparker4486
    I have a user profile with ~2.9 GB of error reports backed up in the ReportQueue folder (C:\Users\UserName\AppData\Local\Microsoft\Windows\WER\ReportQueue). Running Disk Cleanup as the administrator does not detect these files and therefore does not clean them up. However, running the utility as the user shows an even larger amount (12.4 GB!) of error-reporting files. The problem is that after running the cleanup utility, the disk space used does not decrease by anywhere near 12.4 GB, and running the utility again detects the same 12.4 GB of files. What is the problem here? Alternatively, can I manually delete the files in the ReportQueue folder?
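    Manually clearing the queue is generally considered safe; the reports are just crash data queued for upload to Microsoft, and WER recreates the folder on demand (an assumption worth verifying on a test profile first). A sketch from an elevated prompt, with the username as a placeholder:

        del /q /s "C:\Users\UserName\AppData\Local\Microsoft\Windows\WER\ReportQueue\*"
        rem the now-empty per-report subfolders can then be removed with:
        rd /q /s "C:\Users\UserName\AppData\Local\Microsoft\Windows\WER\ReportQueue"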

    Read the article

  • Minimal backup for Windows 7 system recovery [migrated]

    - by JIm
    There might not be an answer to this, but for a home Windows 7 system, what files/directories must be backed up to recover after a Windows crash? I can reinstall software, and I keep data files elsewhere. When I use Acronis home backup software to back up my "critical" files, it seems to choose the entire partition, and the updates are mostly browser cache files and the like. Or, after a crash, should I just reinstall Windows? I dread the hours of Windows updates that would require. Thanks.

    Read the article

  • How to restrict deletion of a folder on NTFS share, but still allow modify access within folder

    - by thinkdreams
    I am setting up a set of scan folders for a scanning copier, and I would like to know the best way to protect each department's folder from being moved or deleted while still allowing users to modify (i.e. create/add/delete) the scanned files within it. The structure is:

        Share name
            Departmental folder
                User files

    The initial writing of the files is handled by a service account which has full control. We'd just like to ensure the users cannot accidentally delete the folder containing all the files (which has already happened). This is a Windows 2003 server with NTFS permissions. Suggestions would be most appreciated.
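    A hypothetical sketch of one way to express this with icacls (included from Server 2003 SP2 onward; domain, group, and path are assumed): grant the department Modify that inherits to the contents, then add a non-inherited deny of just the Delete right on the folder object itself, so the files inside remain fully editable but the folder cannot be deleted or moved:

        icacls "D:\Scans\Marketing" /grant "DOMAIN\Marketing Users":(OI)(CI)M
        icacls "D:\Scans\Marketing" /deny  "DOMAIN\Marketing Users":(DE)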

    Read the article

  • Is there a filesystem that is "friendly" to both windows and Linux?

    - by Somebody still uses you MS-DOS
    I'm planning to install Ubuntu 10.04 alongside Windows 7. (I'm new to Linux but have to use it at work, so I'm planning to install it at home to learn more.) I plan to keep a partition for my Windows system files (C:), the partition for my personal files that already exists (D:), and a new partition for Linux. What I want is for the personal-files partition to work across both systems, so that whether I boot into Windows or Linux, I see the same "Videos", "Pictures", and "Projects" folders. Is this possible? Is there a filesystem capable of taking writes from both systems without too much risk of corruption or the like? (It can't be FAT32; I need to store 4 GB+ files.) I've read some horror stories of corruption and would like to know, from a sysadmin's POV, all the risks involved in such a scenario.

    Read the article

  • Windows - Decrypt encrypted file when user account is destroyed

    - by dc2
    I have a virtual machine running on my Windows Server 2008 computer that was encrypted when I received it, as the builder of the VM made it on a Mac, which decrypts files transparently by default. I never thought to decrypt these files, since they automatically "decrypt" when you have permission over them, so the VM has been running for over a year despite the encryption. I just promoted the computer to a domain controller (dcpromo.exe), and now when I try to access or run the VM, I can't, because I don't have permission to decrypt the files: that permission belonged to another logon (the local administrator), and I am now the domain administrator. Apparently the local admin account is totally nuked when you promote to domain controller. I have tried EVERYTHING: taking ownership of the files, which works but doesn't do anything for me, and granting Full Control to Everyone on the files. Under File Properties > Advanced > Details (under encryption) > "Users who can access this file", the only user is administrator@localcomputername, with a certificate thumbprint. I tried adding a new certificate; I don't have permission. I don't have permission to decrypt the file (access is denied) or to copy the file to another computer (access denied). I am totally stumped, and this VM is a production machine that needs to be up right now. Does anyone have any ideas?

    Read the article
