Search Results

Search found 10242 results on 410 pages for 'stored proc'.

  • How can I unobtrusively back up a few clients' email?

    - by tladuke
    This is a small office. Our web/email server is a shared host. In the office we have a Windows 2008 box up all the time that runs our NAS and a couple of other services. I don't have access to the ISP admin stuff, but I assume it has cPanel or something like that; I can get access if I ask. I want to get email backed up from the server to our NAS without the users having to do anything. I suppose I could set up Outlook on that server with everyone's account, but that may be a terrible idea (would it even capture sent mail?). The boss uses Outlook, but we have Apple Mail and Thunderbird clients too. I guess the important thing is that Outlook can read the backups, so the boss is happy. Then again, maybe the mail should be stored in whatever is the most portable format that will work on NTFS. This is for about 10 users.
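
    One unobtrusive approach, sketched below, is a scheduled script on the always-on Windows 2008 box that pulls each mailbox over IMAP and writes every message to the NAS as a plain .eml file, a format Outlook, Apple Mail and Thunderbird can all read and which stores fine on NTFS. This is only a sketch and assumes the shared host exposes IMAP; the host name, account list and NAS path are placeholders.

        import imaplib
        import os

        IMAP_HOST = "mail.example.com"                    # placeholder: the shared host's IMAP server
        ACCOUNTS = [("alice@example.com", "secret1"),
                    ("bob@example.com", "secret2")]       # placeholder credentials
        BACKUP_ROOT = r"\\nas\email-backup"               # UNC path to the NAS share

        for user, password in ACCOUNTS:
            conn = imaplib.IMAP4_SSL(IMAP_HOST)
            conn.login(user, password)
            conn.select("INBOX", readonly=True)           # never modify the live mailbox
            status, data = conn.search(None, "ALL")
            target = os.path.join(BACKUP_ROOT, user)
            os.makedirs(target, exist_ok=True)
            for num in data[0].split():
                status, msg_data = conn.fetch(num, "(RFC822)")
                raw = msg_data[0][1]                      # raw RFC 822 bytes of the message
                with open(os.path.join(target, num.decode() + ".eml"), "wb") as f:
                    f.write(raw)
            conn.logout()

    Run from Task Scheduler, users never notice it; a real version would track already-saved UIDs instead of refetching everything and would cover folders beyond the inbox.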

    Read the article

  • How to choose NoSQL database engine?

    - by Poma
    We have a database with the following specs:
      - 30k records, 7 MB in size
      - 20 inserts/second
      - 1000 updates/second
      - 1000 range selects/second, by secondary index, approx 10 rows each
      - needs at least one secondary index
      - needs some mechanism to expire keys if they are not updated for 75 secs (can be done via a programmatic garbage collector, but that requires an additional 'last_update' index and adds some load)
      - consistency is not required
      - durability is not required
      - the DB should be stored in memory
    For now we use Redis, but it has no secondary indexes and its index:foo:* key scan is too slow. Membase also has no secondary indexes (as far as I know). MongoDB and the MySQL MEMORY engine have table-level locks. What engine will fit our use case?
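
    As an aside on the expiry requirement: the programmatic garbage collector the poster mentions is straightforward to build on top of Redis itself, using a sorted set scored by last-update time as the 'last_update' index. A rough sketch (assumes the redis-py client; the key names are made up):

        import time
        import redis

        r = redis.Redis()   # assumes a Redis instance on localhost

        def touch(key, value):
            """Write or update a record and remember when it was last touched."""
            r.set(key, value)
            r.zadd("last_update", {key: time.time()})   # score = last-update timestamp

        def expire_stale(max_age=75):
            """Drop every key that has not been updated in the last max_age seconds."""
            cutoff = time.time() - max_age
            for key in r.zrangebyscore("last_update", 0, cutoff):
                r.delete(key)
                r.zrem("last_update", key)

        # e.g. call expire_stale() from a background thread or timer every few seconds

    It adds one ZADD per write plus a periodic range scan, which is exactly the extra load the poster anticipates.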

    Read the article

  • Syncing Large Directories/Filesystems using USB Drive [closed]

    - by Alan Lue
    Does anyone have a solution for syncing large directories/filesystems using just a USB flash drive (and specifically without using a network connection)? The objective is simply to sync a user directory between two computers. The contents of the user directory could amount to a large quantity of data—say, a quantity larger than could be stored on any single USB drive—but the aggregate size of changes that must be propagated by a single sync could easily fit on a USB drive. As an example, suppose a user directory is already synchronized between a desktop and a laptop computer. Here's a use case: Some changes are made in the user directory on the desktop. We mount a USB drive onto the desktop and copy whatever changes need to be applied to the laptop user directory in order to synchronize the desktop and laptop user directories. We now mount the USB drive onto the laptop and apply the changes. The desktop and laptop user directories are now synchronized. Any ideas? Alan
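
    For what it's worth, the workflow described above can be automated with nothing more than a manifest of modification times kept on the USB drive: each sync copies onto the stick only the files that changed since the manifest was written. A rough, untested sketch of the "export changes" half (paths are placeholders):

        import json
        import os
        import shutil

        USER_DIR = "/home/alice"          # placeholder: the directory being synced
        USB_DIR = "/media/usb/sync"       # placeholder: mount point of the USB drive
        MANIFEST = os.path.join(USB_DIR, "manifest.json")

        # Load the modification times recorded at the last sync (empty on first run).
        try:
            with open(MANIFEST) as f:
                last_seen = json.load(f)
        except FileNotFoundError:
            last_seen = {}

        seen = {}
        for root, dirs, files in os.walk(USER_DIR):
            for name in files:
                path = os.path.join(root, name)
                rel = os.path.relpath(path, USER_DIR)
                mtime = os.path.getmtime(path)
                if last_seen.get(rel) != mtime:
                    # Copy only files that are new or modified since the last sync.
                    dest = os.path.join(USB_DIR, "changes", rel)
                    os.makedirs(os.path.dirname(dest), exist_ok=True)
                    shutil.copy2(path, dest)
                seen[rel] = mtime

        with open(MANIFEST, "w") as f:
            json.dump(seen, f)

        # On the laptop, the "apply" half is just copying USB_DIR/changes back over the
        # user directory; deletions and conflicts need extra bookkeeping not shown here.

    rsync can also do this natively with its --write-batch/--read-batch options, which record the delta into a file on the stick and replay it on the other machine.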

    Read the article

  • create symlink to another machine

    - by microchasm
    Hi, I have 2 machines, both running CentOS. Box1 is a web server with Apache and PHP. Box2 runs MySQL and file storage. The files will only be accessible from Box1 within the webapp. I'd like to somehow create a symlink or some such on Box1 to a folder on Box2 where uploaded files can be stored and retrieved. With security in mind, what would be the best way to go about linking these 2 boxes up in a way that is transparent to Apache? NB: the boxes are connected directly to each other via a crossover cable; no LAN access to Box2. Much thanks!

    Read the article

  • QNAP TS-419p as a VPN Gateway?

    - by heisenberg
    Hello, I am hoping one of you might be able to help. I want to make files stored in shared folders on a QNAP TS-409p available to users over a VPN link. How is this possible? Can someone explain what I need to do at the router and what I need to do on the QNAP NAS? Effectively, what I want to do is use the built-in Windows VPN client to connect to my home network and then be able to browse the shared folders. Thanks in advance.

    Read the article

  • Information about a file or directory

    - by Tim
    In Linux, the information about a file or directory is stored in its inode. I was wondering what the equivalent data structure for a file or directory is in Windows 7? How do I get the information about a file or directory in Linux and in Windows 7, from a terminal or command-line window? Is the owner of a file or directory always its creator, and can it be changed? Is there a creation timestamp for a file in Linux and in Windows 7, and how do I get it? Thanks and regards!
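
    On the command-line part: on Linux, stat <file> prints the inode contents and ls -l shows owner and timestamps, while on Windows 7, dir /q shows the owner. From a script, Python's os.stat exposes much the same fields on both systems; a small illustration (the file name is a placeholder):

        import os
        import stat
        import time

        info = os.stat("example.txt")          # placeholder file name

        print("inode number :", info.st_ino)   # inode on Linux, a file ID on Windows
        print("owner uid    :", info.st_uid)   # always 0 on Windows
        print("size (bytes) :", info.st_size)
        print("permissions  :", stat.filemode(info.st_mode))
        print("modified     :", time.ctime(info.st_mtime))
        print("ctime        :", time.ctime(info.st_ctime))   # metadata change time on Linux, creation time on Windows

    As for ownership: the creator usually becomes the owner, but it can be changed afterwards (chown on Linux, takeown/icacls or the Security tab on Windows). Linux filesystems traditionally do not record a creation timestamp at all, whereas NTFS does.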

    Read the article

  • Nagios and rrd on an old server

    - by Pier
    I have an old server (P4 based) on which Nagios (and all the other monitoring tools) is running. In the last few weeks we have been seeing strange behaviour. In /var/spool/pnp4nagios (where temporary files are stored before getting processed by the pnp4nagios daemon) we have many files like perfdata.1274949941-PID-18839, and we get errors in npcd.log: [05-27-2010 11:17:46] NPCD: ThreadCounter 0/15 File is perfdata.1274951306-PID-27849 [05-27-2010 11:17:46] NPCD: File 'perfdata.1274951306-PID-27849' is an already in process PNP file. Leaving it untouched. Sometimes some graphs are not drawn. The server is pretty loaded (load around 5-6 normally) and I suspect that npcd times out and leaves those files behind. What could I do (apart from changing the server)? Some info about the system: CentOS 5.5, Nagios 3.2.1, pnp4nagios 0.6 (built from source). Thanks

    Read the article

  • RAID-3-like software backup tool

    - by Chronial
    I have a lot of data (about 7 TB), stored across multiple hard drives of varying sizes. I would like to have a backup of that data to be safe against drive failure. A RAID is not a good option for me, as I want to keep my cost low and be able to easily extend the storage capacity of my setup by buying an additional HD. I remember seeing a piece of software that generates parity data over all drives and stores it on an extra drive. That solution protects the setup from hard drive failure and works with varying drive sizes (as long as the parity drive is the biggest one). But I can’t seem to find that software again. Does anybody know what I’m talking about, or have any other solution for my situation?
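
    The tool being described sounds a lot like SnapRAID (FlexRAID worked similarly), which computes parity across existing, differently sized data drives and stores it on a dedicated parity drive at least as large as the biggest data drive. The underlying idea is plain block-wise XOR; a toy sketch of the concept, not the actual tool:

        from functools import reduce

        BLOCK = 1024 * 1024    # work in 1 MiB parity blocks

        def xor_blocks(blocks):
            """XOR byte blocks together, padding shorter ones with zeros."""
            size = max(len(b) for b in blocks)
            padded = [b.ljust(size, b"\x00") for b in blocks]
            return bytes(reduce(lambda a, c: a ^ c, column) for column in zip(*padded))

        def build_parity(data_files, parity_file):
            """Write a naive XOR parity stream over the given data files (illustrative only)."""
            handles = [open(p, "rb") for p in data_files]
            with open(parity_file, "wb") as out:
                while True:
                    blocks = [h.read(BLOCK) for h in handles]
                    blocks = [b for b in blocks if b]      # skip drives that have run out of data
                    if not blocks:
                        break
                    out.write(xor_blocks(blocks))
            for h in handles:
                h.close()

        # build_parity(["/mnt/hd1/big.img", "/mnt/hd2/big.img"], "/mnt/parity/parity.bin")

    With parity like this, the contents of any single failed drive can be rebuilt by XOR-ing the surviving drives against the parity data, and adding another data drive only means recomputing parity, not rebuilding an array.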

    Read the article

  • How can I view a .eml file from command line in Windows Vista?

    - by Nosrettap
    Ok, so my parents' computer crashed (horribly, with a corrupted registry) and I'm trying to access one of their e-mail files that is saved locally on the hard drive. I can't launch Windows itself, so right now I am in a "bootup command prompt". I've navigated to where the e-mails appear to be stored, C:\Users\[userName]\AppData\Local\Microsoft\Windows Mail\Local Folders\Inbox, and it shows a list of what appear to be the files. The problem is, they are .eml files and I can't seem to be able to open them. I've tried the 'vim' and 'vi' commands, but it tells me that 'vim' is not recognized as an internal or external command. Does anyone know how I can view .eml files from the command line? Thanks
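
    Since .eml files are just plain RFC 822 text, the prompt's built-in type or more commands will already show the headers and body (e.g. more "filename.eml"). If the drive can be read from any machine with Python on it, the standard-library email module renders them a bit more cleanly; a small sketch (the path is a placeholder):

        from email import policy
        from email.parser import BytesParser

        with open("message.eml", "rb") as f:              # placeholder path to a .eml file
            msg = BytesParser(policy=policy.default).parse(f)

        print("From:   ", msg["From"])
        print("To:     ", msg["To"])
        print("Date:   ", msg["Date"])
        print("Subject:", msg["Subject"])

        body = msg.get_body(preferencelist=("plain",))    # prefer the text/plain part
        if body is not None:
            print()
            print(body.get_content())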

    Read the article

  • Puppet: hanging at Schedule[weekly]

    - by Andrei Serdeliuc
    Why would puppet hang at Schedule[weekly]? I'm running puppet in a masterless setup, so to apply by manifest I'm just running puppet apply /etc/puppet/manifests/site.pp In debug mode, these are the last things it says before it just hangs debug: /Schedule[never]: Skipping device resources because running on a host debug: /Schedule[daily]: Skipping device resources because running on a host debug: /Schedule[monthly]: Skipping device resources because running on a host debug: /Schedule[puppet]: Skipping device resources because running on a host debug: /Schedule[hourly]: Skipping device resources because running on a host debug: /Schedule[weekly]: Skipping device resources because running on a host If I send a SIGINT, it says Exiting debug: Storing state debug: Stored state in 0.03 seconds debug: Finishing transaction 69992657242500 Thanks

    Read the article

  • ESXi disaster recovery plan

    - by Marlin
    I have a VMware infrastructure where I am using the free version of ESXi 5. As a result I cannot use vMotion and the other cool features that come with paid ESXi. I am using snapshots for backups, but they are stored on the local hard drive. I need a better backup scenario where I can recover in the event of a hard drive failure. I tried Openfiler but could not get it right. What backup method can I try, given my situation?

    Read the article

  • Extract photo stills from .vob files

    - by Eric Rath
    My parents had all the family slides scanned by a photo lab. The lab returned the digital photos on two DVDs as movies; there's some stock music over a slideshow with fades between each photo. The discs contain only a handful of files, including some very large VOB files. I'd like to extract these photos and import them into iPhoto. I saw this answer about capturing stills, and that might work if I can figure out the right offset from the beginning and the right capture rate. But this approach seems very error-prone for this purpose. Is there a better way? I wish the individual photo files were stored in a directory on the discs, but they're not there.

    Read the article

  • Is rsync corrupting my RAR?

    - by Mark Henderson
    We have two qnap devices - one in our datacentre and one off-site. We have hundreds of password protected RAR files stored on the qnap that contain virtual machine image snapshots, with approx 20 of them being created each day. We synchronise the two devices using rsync, and it looks like all the files are being rsynced OK - they come over and have the same file size and all the files are present and accounted for. However, when I try to open the RAR files on the remote site, I get Cannot open \\qnap01\FromDatacentre\Snapshots\DB001SQL1-20110626.rar I can open the RAR files on the local site just fine, so I assume that something is getting mangled during the rsync procedure. However, the older files (pre 2011-06-20) work just fine, it's something that's only started happening in the last week. There haven't been (as far as I know) any changes to any of the devices, setup or configuration in that time. Obviously something has changed though. Where should I start investigating?
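
    A reasonable first step is to prove or rule out corruption in transit by comparing checksums of the same archive on both sides rather than trusting the file size. Something along these lines would show whether the bytes really differ (the second path is a placeholder for the datacentre copy):

        import hashlib

        def md5sum(path, chunk=1024 * 1024):
            """Stream the file through MD5 so multi-GB RAR archives don't need to fit in RAM."""
            h = hashlib.md5()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(chunk), b""):
                    h.update(block)
            return h.hexdigest()

        remote = md5sum(r"\\qnap01\FromDatacentre\Snapshots\DB001SQL1-20110626.rar")
        local = md5sum(r"\\qnap-dc\Snapshots\DB001SQL1-20110626.rar")   # placeholder: datacentre copy
        print("identical" if remote == local else "the copies differ at the byte level")

    If they differ, re-running the transfer with rsync --checksum (or testing the remote archive with rar's built-in verify) narrows it down to the sync versus the storage; if they match, the problem is more likely in how the remote share is being opened.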

    Read the article

  • Very frequent "Server not found" messages

    - by Village
    Recently, while browsing and clicking or typing an address, I get "Server not found", "Connection was reset", or a half-loaded page. Usually the page loads instantly after clicking reload, but this means I must reload on every page. Images embedded in one site but stored on a different server don't load until I reload a second time. Clicking "submit" on many sites frequently doesn't work unless I reload many times. Sometimes sites load, but without colors and formatting, appearing as they would in Lynx. This seems to happen with every Web site. My Internet service claims everything on their end is fine. It started a day after running an update in aptitude. I have not updated any hardware. I have tried clearing Iceweasel's cache. I do not have any router or other equipment. What could be going on? How can I troubleshoot this? PPPoE connection, Iceweasel 3.5.16, Debian 6.

    Read the article

  • ZFS on top of iSCSI

    - by Solipsism
    I'm planning on building out a file server using ZFS and BSD, and I was hoping to make it more expandable by attaching drives stored in other machines in the same rack via iSCSI (e.g., one machine is running ZFS, and others have iSCSI targets available to be connected to by the ZFS box and added to zpools). Looking for other people who have tried this has pretty much led me to resources about exposing iSCSI shares on top of ZFS, but nothing about the reverse. Primarily I have the following questions:
      - Is iSCSI over gigabit Ethernet fast enough for this purpose, or would I have to switch to 10GbE to get decent performance?
      - What would happen when one of the machines running iSCSI targets disconnects from the network?
      - Is there a better way to do this that I just am not clever enough to have realized?
    Thanks for any help.

    Read the article

  • iTunes + External Hard Drive problem

    - by Samwho
    Okay, so I've always stored my music on my external hard drive and run it from there with iTunes. Recently, however, it's being a little awkward. I think it was because I accidentally tried to run a song off my hard drive when it wasn't plugged in and, as would be expected, it said it couldn't find it. So I plug the hard drive in, locate the file and off it goes. But now it won't find any of the files... When I go to the "Get Info" page on the right click menu on a song I notice it has prepended file://localhost/ to everything, so my paths look like this: "file://localhost/E:/Sam/Media/Music/[song name]" I went into the iTunes Music Library.xml file and did a search and replace for file://localhost/ and replaced it with nothing and tried opening iTunes again and it just added file://localhost/ to every file again! Anyone have any idea why it does this and how to fix it without reimporting my library?
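
    Two things may help here. First, the file://localhost/ prefix is how iTunes normally writes Location entries in the XML, so it is probably not the cause, and iTunes regenerates that XML from its binary iTunes Library.itl file, which would explain why the hand edit keeps being undone. Second, since the XML is a standard plist, it is easy to check which tracks point at paths that no longer exist; a rough sketch (the library path is a placeholder):

        import os
        import plistlib
        from urllib.parse import unquote, urlparse

        LIB = r"C:\Users\Sam\Music\iTunes\iTunes Music Library.xml"   # placeholder path

        with open(LIB, "rb") as f:
            library = plistlib.load(f)

        missing = []
        for track in library["Tracks"].values():
            loc = track.get("Location", "")
            # Locations look like file://localhost/E:/Sam/Media/Music/song.mp3 (URL-encoded)
            path = unquote(urlparse(loc).path).lstrip("/")
            if path and not os.path.exists(path):
                missing.append(path)

        print(len(missing), "tracks point at files that do not exist")
        for p in missing[:20]:
            print(" ", p)

    If the external drive's letter genuinely changed, the usual fix is to give it its old letter back in Disk Management rather than rewriting the library.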

    Read the article

  • TPM had to be reinitialized: Does a new recovery password have to be uploaded to AD?

    - by MDMoore313
    Somehow, a user's machine couldn't read the BitLocker password off the TPM chip, and I had to enter the recovery key (stored in AD) to get in. No big deal, but once in the machine I tried to suspend BitLocker per the recovery documentation and got an error message about the TPM not being initialized. I knew the TPM was on and activated in the BIOS, but Windows still made me reinitialize the TPM chip, and in the process it created a new TPM owner password. I found that odd because it prompted me to save or print this password (there wasn't an option not to), but it made no reference to a recovery password, nor did it back this password up to AD. After the user took her laptop and left, I started thinking: if the TPM password changes, does the recovery password change also? If so, that new recovery password will need to be uploaded to AD, but MS's documentation doesn't make that clear, and the machine doesn't back the new recovery key (if one exists) up to AD automatically even though group policy says it must, and AD is reachable from a network standpoint.

    Read the article

  • Database implementation question?

    - by gundam
    Consider a disk with a sector size of 512 bytes, 2000 tracks/surface, 50 sectors/track, 5 double-sided platters, and an average seek time of 10 msec. Assume a block size of 1024 bytes is selected, and a file containing 100,000 records of 100 bytes each is to be stored on the disk, with no record allowed to span 2 blocks. How many blocks are needed to store the entire file? If the file is arranged sequentially on disk, how many surfaces are required? Now, I have calculated that 10,000 blocks are needed to store the 100,000 records, but I am not sure how to work out the number of surfaces required. I have calculated that the capacity of a track is 25 KB and the capacity of a surface is 50,000 KB, but I don't know how to calculate the number of surfaces. Could anyone help me get to the answer? Thanks a lot!
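
    Working the arithmetic through (a sketch of the calculation, not an authoritative answer to the exercise):

        import math

        sector = 512                   # bytes per sector
        sectors_per_track = 50
        tracks_per_surface = 2000
        surfaces = 5 * 2               # 5 double-sided platters

        block = 1024                   # chosen block size = 2 sectors
        record = 100                   # bytes per record
        records = 100_000

        records_per_block = block // record                            # 10 (records may not span blocks)
        blocks_needed = math.ceil(records / records_per_block)         # 10,000 blocks

        blocks_per_track = (sectors_per_track * sector) // block       # 25 blocks per track
        tracks_needed = math.ceil(blocks_needed / blocks_per_track)    # 400 tracks
        surfaces_needed = math.ceil(tracks_needed / tracks_per_surface)  # 1

        print(blocks_needed, tracks_needed, surfaces_needed)           # 10000 400 1

    In other words, the 10,000 blocks occupy 400 of the 2,000 tracks on a surface, so a sequential layout fits on a single surface. (If instead the file were laid out cylinder by cylinder across all 10 surfaces, the same 400 tracks would span 40 cylinders.)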

    Read the article

  • Import EML emails into Outlook 2010 64-bit

    - by nness
    Evening everyone. I'm helping set up a small office network where a number of old PCs are being replaced with new ones running a 64-bit copy of Outlook 2010. The old emails were stored in Windows Live Mail and were exported as .eml files (since we were replacing the machines). All the support I can find indicates that .eml files can simply be dragged and dropped into a folder in Outlook 2010 and it will import them correctly. However, it seems this is not the case in the 64-bit version, where dropping in .eml files results in a new message being created with those files as attachments. We can re-download most of the emails off the server if need be, but there were user folders which were not on the server that we were hoping to import. Any advice would be fantastic at this point!

    Read the article

  • VPS showing low disk space despite there being nothing major on it

    - by SheoNarayyan
    Hello experts, on my VPS server I was trying to see the used disk space. When I open My Computer it shows 17.9 GB free out of 39.8 GB, which means that 21.9 GB is used. However, when I select all files and folders on C: and look at the total size, it only counts approximately 11 GB. The difference is around 10 GB. Where is this 10 GB going if I have not stored anything else here? I asked my VPS provider the above question and he responded: "Check hidden files/system files/etc. This is the default Windows OS and its utilization and not specific to the setup. If you want specifics of usage, you can go ahead and get in touch with the Microsoft support team and they'll provide you with the exact specification of the same." I am sure that the Windows OS cannot be taking up 10 GB for hidden files and folders. My VPS has Windows Server 2008 R2 installed. Can anyone help me work out who is right?
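
    Selecting everything in Explorer undercounts because hidden and system files (pagefile.sys, hiberfil.sys, the System Volume Information folder used by shadow copies, and so on) are excluded, as are directories the current user cannot read. One way to see what is really on the disk is to walk it from an elevated prompt and sum the file sizes; a rough sketch:

        import os

        total = 0
        unreadable = 0

        for root, dirs, files in os.walk("C:\\"):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    unreadable += 1     # files we were not allowed to stat

        print("counted: %.1f GB" % (total / 1024 ** 3))
        print("files that could not be read:", unreadable)

    On Windows Server 2008 R2, the page file alone is often sized close to the installed RAM and can account for several GB, and shadow copy storage (visible with vssadmin list shadowstorage) is another common culprit, so a 10 GB gap is not unusual.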

    Read the article

  • How to sort time column by value instead of alphabetically

    - by Turch
    I'm creating a pivot table by connecting to an SSAS tabular model (Data - From Other Sources - From Analysis Services) . The model has a "time" column that I want to sort by. The default (database) sorting is earliest to latest: When I click the triangle next to 'Row Labels' and select "Sort A to Z", I get alphabetically sorted times: How can I get the times to sort by time? Changing the number format from "General" to "Time" does nothing. The times aren't stored as text either - the data type of the column in the SSAS model is Auto (Date)
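
    The root cause is that "Sort A to Z" sorts the labels as text, so the times are ordered by their leading characters rather than by value, which is easy to reproduce outside Excel:

        from datetime import datetime

        times = ["9:00:00 AM", "10:30:00 AM", "1:00:00 PM", "11:15:00 AM"]

        print(sorted(times))
        # ['10:30:00 AM', '11:15:00 AM', '1:00:00 PM', '9:00:00 AM']   <- alphabetical

        print(sorted(times, key=lambda t: datetime.strptime(t, "%I:%M:%S %p")))
        # ['9:00:00 AM', '10:30:00 AM', '11:15:00 AM', '1:00:00 PM']   <- chronological

    In an SSAS tabular model the usual remedy is to keep (or add) a real time- or integer-typed column and set the visible column's Sort By Column property to it, so the pivot table inherits a by-value order instead of a textual one.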

    Read the article

  • Leaving job, PC personal info deletion guide?

    - by bangoker
    I'm quitting my current job, and I want to make sure all my personal stuff gets wiped off the computer. I know everybody stores stuff in different places, but is there any quick guide to the locations (folders) and apps that might need deletion? You know, just to be sure I erase everything and don't forget anywhere (like the places where apps save stuff automatically, their data folders and such). I have erased most of my docs in the My Documents folder, and I am going to erase all the cookies and stored history in my browsers (Opera, IE, FF, Chrome). By the way, is there any tool for this?

    Read the article

  • Ways to deduplicate files

    - by User1
    I simply want to back up and archive the files on several machines. Unfortunately, among them are some large files that are the same file but stored in different places on different machines. For instance, there may be a few hundred photos that were copied from one computer to the other as an ad-hoc backup. Now that I want to make a common repository of files, I don't want several copies of the same photo. If I copy all of these files to a single directory, is there a tool that can go through, recognize duplicate files, and give me a list or even delete one of the duplicates?
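
    Tools such as fdupes (on Linux) do exactly this, and the idea is simple enough to script yourself: group files by size, hash only the files whose sizes collide, and report anything with a matching hash. A rough sketch:

        import hashlib
        import os
        import sys
        from collections import defaultdict

        # usage: python find_dupes.py <directory>

        def sha256(path, chunk=1024 * 1024):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(chunk), b""):
                    h.update(block)
            return h.hexdigest()

        by_size = defaultdict(list)
        for root, dirs, files in os.walk(sys.argv[1]):
            for name in files:
                path = os.path.join(root, name)
                by_size[os.path.getsize(path)].append(path)

        # Only hash files that share a size with something else -- a cheap pre-filter.
        by_hash = defaultdict(list)
        for paths in by_size.values():
            if len(paths) > 1:
                for path in paths:
                    by_hash[sha256(path)].append(path)

        for digest, paths in by_hash.items():
            if len(paths) > 1:
                print("duplicates:", *paths)

    Listing first and deleting by hand is safer than letting any tool delete automatically, since two different originals can legitimately have identical content.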

    Read the article

  • Securing internal data accessed by a website on the big, bad internet

    - by aehiilrs
    A close relative of this question on Stack Overflow: When you have a web site in your DMZ that needs to access production data stored on an internal DB, what strategies do you recommend using to lower the risks that come from accessing live data? Is it even considered acceptable to have a connection initiated from the DMZ come inside of your network? An extra detail about the nature of the site that kind of throws a monkey wrench into the machinery is that people using the web site will be competing for "spots" on a first-come, first-serve basis with others using the internal software. Because of this, as close to zero lag time between the two applications as possible is ideal.

    Read the article
