Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 1027/1338

  • Backup and Archive Strategy Question

    - by OneNerd
    I am having trouble finding a backup strategy for our code assets that 'just works' without any manual intervention. The goal is to have an off-site backup (a synchronized one) so that when we check in files, create builds, etc. to the network drive, the entire folder structure is automatically synchronized and backed up (in real time, or once per day) at some off-site location, so if our office blows up, we don't lose all of our data. I have looked into some online backup services, but have not yet had any success. Some are quirky/buggy, others limit file size and/or kinds of files (which doesn't work well for developer files). Everything gets checked in and saved to a single server (on a RAID mirror), so we just need to have a folder on that server backed up/synchronized to some off-site location. So my question is this: what are you using for your off-site backup strategy? What software, system, or service? Is there a be-all/end-all system for backing up your code assets that I just haven't found yet? Thanks
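
    A minimal sketch of the kind of hands-off nightly sync being described, assuming rsync over SSH with key-based login; the host name, paths and schedule are placeholders, not a recommendation of a specific service:

        # /etc/cron.d/offsite-sync -- mirror the code share to an off-site box at 02:30 daily
        30 2 * * * root rsync -az --delete /srv/code/ backup@offsite.example.com:/backups/code/ >> /var/log/offsite-sync.log 2>&1

    The same job can be pointed at any rsync-friendly storage target; --delete keeps the remote copy an exact mirror, so pair it with snapshots on the remote side if accidental deletions are a concern.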

    Read the article

  • Using native resolution on external display results in stretched, out of bounds image

    - by Roni Yaniv
    I have an HP Mini 311 netbook with Windows XP, which I've connected to a Samsung SyncMaster 2043BW display via the supplied analog cable. The external display's native res is 1680x1050, which the netbook's ION GPU supports. I've configured the external display as the single display (no cloning or any such fancy stuff). However, once I set the native res, the image just stretches out. It looks squashed, and it goes outside the monitor's edges. In contrast, lower resolutions manage to stay within the monitor's display edges, though obviously they are skewed in some way (vertically or horizontally). BTW, the only res which seems to be displayed relatively clearly (it's the least blurry) is 1280x720. I tried looking all over the web for an explanation or advice but could not find any. I already played with the settings on the external display itself several times. So either it's not that, or I missed something. Has someone run into this issue? I need help.

    Read the article

  • EC2: How dangerous is it to turn off fsck for EBS volumes?

    - by Janine
    I have been tearing my hair out trying to figure out why my EC2 instances (made from my own custom AMIs) were taking many tries to come up properly. They would fail with the following error:

        fsck.ext3: No such file or directory while trying to open /dev/sdf

    For both of the EBS volumes I was attaching during startup. Finally, I figured out the problem. I had put this in /etc/fstab:

        /dev/sdf   /export    ext3   defaults   1 2
        /dev/sdi   /export2   ext3   defaults   1 2

    The 2 tells the system to fsck the drives on the way up. Changing this to

        /dev/sdf   /export    ext3   defaults   1 0
        /dev/sdi   /export2   ext3   defaults   1 0

    avoids the problem completely, but now the volumes are never going to be fsck'd. How much does this matter? Once the instance goes into production it's going to be running pretty much 24/7, so not many fscks would be happening anyway, but still... this just feels like a bad idea. I have not been able to find anyone else even reporting this problem (there are people with the same error message, but different causes). It seems unbelievable that I could be the only person to ever make this mistake, but perhaps I'm just talented that way. :) If there is another solution to the problem I would love to hear it; I have not been able to find one.
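
    One way to live with a pass number of 0, sketched on the assumption that the volumes stay ext3 on /dev/sdf and /dev/sdi: leave boot-time fsck off so attachment timing can't break the boot, and run an occasional manual check during a maintenance window instead:

        # /etc/fstab -- last field 0 = never fsck automatically at boot (as in the question)
        /dev/sdf   /export    ext3   defaults   1 0
        /dev/sdi   /export2   ext3   defaults   1 0

        # occasional manual check; the filesystem must be unmounted (or the check run on a snapshot)
        umount /export
        fsck.ext3 -f /dev/sdf
        mount /export

    A journaling filesystem that is always cleanly unmounted rarely needs the periodic check, so the main risk being accepted is silent corruption, which a scheduled manual fsck (or an EBS-snapshot-and-check on another instance) can cover.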

    Read the article

  • Apache2 process stuck at 100% cpu, CLOSE_WAIT socket lingering

    - by mmazing
    I've troubleshooted the heck out of this today, and I can't seem to find any information on how to determine what is happening exactly. Basically, on my development server, another developer is causing CLOSE_WAIT connections that eat up one or more apache2 processes for several hours if I don't restart apache2. strace on any of the processes yields no information, only that it was able to attach. mod_proxy is not enabled. KeepAlive is on, KeepAliveTimeout is 15 seconds, MaxKeepAliveRequests is 100. From what I've been reading, this may or may not be an apache issue at all, just that that's how CLOSE_WAIT works (the server is waiting for a FIN packet to close the connection). I just can't believe that a server would be crippled so easily by not receiving a packet from a remote host telling it to close the connection. Especially without any intervention for well over an hour. Any tips? I'm about to pull my hair out. Edit : Also, there are no unusual entries in any apache log files. Edit 2: lsof -i shows only a single CLOSE_WAIT per hanging process. (That's what has been bothering me about this, as most other discussions talk about many CLOSE_WAIT connections, while I only have one per process.) The nature of the code that is running (php) doesn't really lend itself to closing open connections and whatnot. I can run the same code that he is executing with the same session data, and not result in a hanging process.
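
    A few generic Linux diagnostics that can narrow down which request is holding the socket; the PID below is a placeholder for the spinning apache2 child:

        # sockets stuck in CLOSE_WAIT and the processes that own them
        ss -tanp state close-wait
        netstat -tnp | grep CLOSE_WAIT      # older alternative

        # what that child has open, and a C-level backtrace of where it is spinning
        lsof -a -p 12345 -i
        gdb -p 12345 -batch -ex 'bt'

    Pairing this with mod_status (ExtendedStatus On) usually reveals which URL the stuck child was serving, which is often more useful than the socket state itself.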

    Read the article

  • OpenSSL: how to setup an OCSP server for checking third-party certificates?

    - by StackedCrooked
    I am testing the Certificate Revocation functionality of a CMTS device. This requires me to set up an OCSP responder. Since it will only be used for testing, I assume that the minimal implementation provided by OpenSSL should suffice. I have extracted a certificate from a cable modem, copied it to my PC and converted it to the PEM format. Now I want to register it in the OpenSSL OCSP database and start a server. I have completed all these steps, but when I do a client request my server invariably responds with "unknown". It seems to be completely unaware of my certificate's existence. I would greatly appreciate it if anyone would be willing to have a look at my code. For your convenience, I have created a single script consisting of a sequential list of all used commands, from setting up the CA until starting the server: http://code.google.com/p/stacked-crooked/source/browse/trunk/Misc/OpenSSL/AllCommands.sh You can also find the custom config file and the certificate that I am testing with: http://code.google.com/p/stacked-crooked/source/browse/trunk/Misc/OpenSSL/ Any help would be greatly appreciated.
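
    For reference, a stripped-down version of the OpenSSL test-responder workflow; file names are placeholders. An "unknown" answer usually means the responder's CA database (index.txt) has no entry for the certificate's serial number, or that the -CA handed to the responder is not the certificate's actual issuer:

        # record the third-party certificate in the CA database (requires an OpenSSL build
        # whose 'ca' command supports -valid); otherwise add an index.txt line by hand
        openssl ca -config ca.cnf -valid modem.pem

        # minimal responder
        openssl ocsp -index demoCA/index.txt -port 8888 \
            -rsigner ocsp-signer.pem -rkey ocsp-signer.key -CA cacert.pem -text

        # client-side test query
        openssl ocsp -CAfile cacert.pem -issuer cacert.pem \
            -cert modem.pem -url http://127.0.0.1:8888 -resp_text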

    Read the article

  • Which server software and configuration to retrieve from multiple POP servers, routing by address to correct user

    - by rolinger
    I am setting up a small email server on a Debian machine, which needs to pick up mail from a variety of POP servers and figure out who to send it to from the address, but I'm not clear what software will do what I need, although it seems like a very simple question! For example, I have 2 users, Alice and Bob:

        Any email to [email protected] ([email protected] etc.) should go to Alice; all other mail to domain.example.com should go to Bob.
        Any email to [email protected] should go to Bob, and [email protected] should go to Alice.
        Anything to *@bobs.place.com should go to Bob.
        And so on...

    The idea is to pull together a load of mail addresses that have built up over the years and present them all as a single mailbox for Bob and another one for Alice. I'm expecting something like Postfix + Dovecot + Amavis + SpamAssassin + SquirrelMail to fit the bill, but I'm not sure where the above comes in: can Postfix deal with it as a set of defined regular expressions, or is it a job for Amavis, or something else entirely? Do I need fetchmail in this mix, or is its role now included in one of the other components above? I think of it as content-filtering, but everything I read about content-filtering is focussed on detecting spam rather than routing email.
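
    One common division of labour, sketched with placeholder credentials and only the domains already mentioned above: fetchmail drains each legacy POP account and hands the mail to the local Postfix over SMTP, and a regexp virtual alias map in Postfix does the per-address routing, so no content filter is involved in that decision:

        # /etc/fetchmailrc -- one poll block per old POP account
        poll pop.otherisp.example proto pop3 user "bobs-old-login" pass "secret" is bob here
        # for a catch-all/multidrop account, pass the original recipients through instead:
        poll pop.domain.example.com proto pop3 localdomains domain.example.com
            user "catchall" pass "secret" to * here

        # /etc/postfix/main.cf
        virtual_alias_maps = regexp:/etc/postfix/virtual_routing

        # /etc/postfix/virtual_routing -- first matching pattern wins
        /^alice(\..*)?@domain\.example\.com$/   alice
        /^.*@domain\.example\.com$/             bob
        /^.*@bobs\.place\.com$/                 bob

    Dovecot then only has to serve the two resulting local mailboxes; Amavis/SpamAssassin plug into the Postfix pipeline independently of this routing.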

    Read the article

  • What does a status of "Backup" mean for Windows 7 local user profiles?

    - by Howiecamp
    Summary: Upon logging on to Windows 7 RTM I get a message that my profile can't be loaded and a temporary user profile is created. I logged off and back on as Administrator. The user profiles dialog shows my user profile with a Type of "Local" and a Status of "Backup" rather than "Local" which it should be. How can I change this to make my user profile accessible? The long story: My PC has a single hard drive partitioned into a C: and a D:. I'd moved my user profile directory (c:\Users) to d:\Users, removed c:\Users and then used mklink.exe to create a directory symbolic link c:\Users -- d:\Users. Worked like a charm since I did it. Today, I make a System Restore Point for drives C: and D:. Next, I dismounted D: and used the Disk Management tool to remove the "D:" drive letter from the D volume. (My plan was to reboot and then redirect the symbolic link.) Upon reboot, I got the user profile error described above. Finally, I restored the System Restore Points that I'd created for both drives and then rebooted again. Same issue.
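
    For what it's worth, the "Backup" status in that dialog normally corresponds to the profile's registry entry having been sidelined under the ProfileList key (the SID subkey renamed with a .bak suffix while a temporary one takes its place), so the registry is the first place to look; the SID below is a placeholder:

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /v ProfileImagePath
        rem if your SID appears only as ...-1001.bak, remove the temporary ...-1001 key,
        rem rename the .bak key back to the plain SID, then check its state:
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxxxxxx-1001" /v State
        rem State should be 0 (and RefCount 0) before logging that user on again

    In this case the profile path also has to be reachable again, so restoring the drive letter (or repointing the c:\Users symlink and ProfileImagePath) before logging on as that user is part of the fix.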

    Read the article

  • Make a snapshot of a live mySQL database with myISAM & innoDB tables without locking

    - by Artem
    We have a live database in production where we are running out of space on the server. So I would like to transfer to a new server without any downtime (or as little downtime as possible). In general, I would also like to have a hot failover copy of the database available. I would like to use replication to get all of the data copied to the new machine, and then at some point flip a switch and have that new machine become the master (normal failover scenario). My problem is that I am not sure how to initialize replication without locking the db to make the initial snapshot I will use. Is there any way to do this? I know I could do it using --single-transaction if I was using InnoDB, but very unfortunately we have some MyISAM tables in there (in fact the largest 150GB table is MyISAM and I want to switch it to InnoDB but I can't do it until I have more space & a hot copy to switch to). Any ideas? Is there some way to make such a snapshot? Or is there alternatively a way to get replication to "catch up" without a snapshot for initialization?
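
    A sketch of the usual low-impact approach for a mixed MyISAM/InnoDB server: hold a global read lock only long enough to note the binlog position and take a filesystem-level snapshot, then seed the new server from that snapshot. The LVM volume name is an assumption; if the datadir is not on LVM, the same idea works with a longer lock and a plain copy:

        -- session 1 (keep this client open so the lock is held)
        FLUSH TABLES WITH READ LOCK;
        SHOW MASTER STATUS;          -- record File and Position

        # session 2, while the lock is held: snapshot the datadir
        lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql

        -- session 1: release the lock (tables were read-only for seconds, not hours)
        UNLOCK TABLES;

        # copy the snapshot's datadir to the new server, start mysqld there, then:
        -- CHANGE MASTER TO MASTER_HOST='old-master', MASTER_LOG_FILE='<File>', MASTER_LOG_POS=<Position>;
        -- START SLAVE;

    Without any lock at all there is no consistent point for MyISAM tables to replicate from, so the aim is to shrink the lock window rather than remove it entirely.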

    Read the article

  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server and the other performs double-duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine which I use from remote connections (via NoMachine) for many thin-client purposes which are irrelevant to my question. What I'd like to accomplish is to consolidate the two physical machines into one which is a dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it on a daily basis at both home and work. Specifically, I plan to have one VM set up as file server, another as the firewall/router/DHCP/VPN (or possibly split those a bit) and a third, which is the only current VM (already VirtualBox), which is the thin-client host. My question comes down to performance and/or recommendations about the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantage/disadvantage that would have as compared to using shared folders from the VM host and basically just have the whole drive served as a shared folder to the VM which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario and I certainly wouldn't want a drive to be filled with just a single file which is 1.5 TB (disk image). To add understanding of context, but not to get additional advice, I want to virtualize these machines because I intend to regularly use the snapshot capabilities of VirtualBox for the system disks (which will be virtual drives) of the VMs and I have some physical space/power needs to address (as I mentioned, this is at home).
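
    On the raw-disk question specifically: VirtualBox handles this with a small .vmdk wrapper that points at the physical device, which avoids any 1.5 TB image files. A sketch with placeholder device names, VM name and paths:

        # create one wrapper per physical data disk (the user running the VM needs
        # read/write access to the device node)
        VBoxManage internalcommands createrawvmdk \
            -filename /vmstore/fileserver-sdb.vmdk -rawdisk /dev/sdb

        # attach it to the file-server VM like any other disk
        VBoxManage storageattach "fileserver" --storagectl "SATA" \
            --port 1 --device 0 --type hdd --medium /vmstore/fileserver-sdb.vmdk

    Raw access generally gets closer to native throughput than exporting the host's filesystem back into the guest via shared folders, at the cost of the host no longer mounting those disks itself while the VM owns them; snapshots also don't apply to raw disks, which fits the stated plan of snapshotting only the system drives.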

    Read the article

  • TV-out worked, now doesn't. May the problem be the cable, TV, driver, OS, graphic card?

    - by Petruza
    I have a CRT TV hooked to the PC, which once worked great but now doesn't. I can't consider getting a newer TV; this one is used in a MAME arcade cabinet, so it has to be a CRT for the best old-school look and feel. It's connected through the TV-out connector of my graphics card. When it worked, I had Windows XP, the same PC and the same card. Now I have Windows 7; I'm not sure if the OS switch caused the malfunction, as I don't use the TV-out all the time. Could it be an upgrade of the Nvidia driver? I thought it might be the S-video to RCA cable, but I tried 3 different cables and none worked. In fact, one of them, which unlike the other two has a single RCA output connector instead of two, behaves differently. It still doesn't work, but it does the following: when I open the Nvidia settings panel, or when I change a setting and click Apply, the TV flashes for a split second and you can see the Windows screen, but then it goes back to blank. So, any clues what can be failing here, and some advice? Possible failures (please comment on the one you suspect the most):

        Nvidia driver version
        Windows version
        Cable
        Graphics card's TV out
        other?

    Read the article

  • Make isolinux 4.0.3 chainload itself in VMWare

    - by chainloader
    I have a bootable ISO which boots into isolinux 4.0.3 and I want to make it chainload itself (my actual goal is to chainload isolinux.bin v4.0.1-debian, which should start up the Ubuntu 10.10 Live CD, but for now I just want to make it chainload itself). I can't get isolinux to chainload any isolinux.bin, no matter what version. It either freezes or shows a "checksum error" message. I'm using VMware to test the ISO. Things I have tried (chainload self):

        .com32 /boot/isolinux/chain.c32 /boot/isolinux/isolinux-debug.bin

    This shows:

        Loading the boot file... Booting...
        ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al
        isolinux: Starting up, DL = 9F
        isolinux: Loaded spec packet OK, drive = 9F
        isolinux: Main image LBA = 53F00100

    ...and the machine freezes. Then I've tried this (chainload GRUB4DOS 0.4.5b):

        chainloader /boot/isolinux/isolinux-debug.bin

    Result:

        Error 13: Invalid or unsupported executable format

    Next try (chainload GRUB4DOS 0.4.5b):

        chainloader --force /boot/isolinux/isolinux-debug.bin
        boot

    Result:

        ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al
        isolinux: Starting up, DL = 9F
        isolinux: Loaded spec packet OK, drive = 9F
        isolinux: No boot info table, assuming single session disk...
        isolinux: Spec packet missing LBA information, trying to wing it...
        isolinux: Main image LBA = 00000686
        isolinux: Image checksum error, sorry...
        Boot failed: press a key to retry...

    I have tried other things, but all of them failed miserably. Any suggestions?
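
    One hedged observation rather than a confirmed fix: the "No boot info table" and checksum messages are what isolinux prints when the isolinux.bin being started was never patched with a boot info table at ISO mastering time, so it cannot locate itself on the disc. Only the image named with -b gets patched when the ISO is built, so a second copy (isolinux-debug.bin here, or one lifted from another distribution's already-mastered disc) generally carries no table valid for this ISO. For reference, the usual genisoimage/mkisofs invocation that does the patching looks like this (paths and volume label are placeholders):

        genisoimage -o chaintest.iso \
            -b boot/isolinux/isolinux.bin -c boot/isolinux/boot.cat \
            -no-emul-boot -boot-load-size 4 -boot-info-table \
            -J -R -V "CHAINTEST" iso_root/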

    Read the article

  • Forward Apache to Django dev server

    - by Alex Jillard
    I'm trying to get Apache to forward all requests on port 80 to 127.0.0.1:8000, which is where the Django dev server runs. I think I have it forwarding properly, but there must be an issue with 127.0.0.1:8000 not being run by Apache? I'm running the Django dev server in an Ubuntu VMware instance, and I'd like other people in the office to see the apps in development without having to promote anything to our actual dev/staging servers. Right now the virtual machine picks up an IP for itself, and when I point a browser to that URL with the default Apache config, I get the default Apache page. I've since changed the httpd.conf file to the following to try and get it to forward the requests to the Django dev server:

        ServerName localhost
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        <VirtualHost *>
            ServerName localhost
            ServerAdmin [email protected]
            ProxyRequests off
            ProxyPass * http://127.0.0.1:8000
        </VirtualHost>

    All I get are 404s with this, and in error.log I get the following (192.168.1.101 is the IP of my computer, 192.168.1.142 is the IP of the virtual machine):

        [Mon Mar 08 08:42:30 2010] [error] [client 192.168.1.101] File does not exist: /htdocs
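
    Two details commonly bite in this exact setup, offered as things to check rather than a guaranteed fix: ProxyPass wants a URL path (not "*") as its first argument, and mod_proxy plus mod_proxy_http must both be loaded; the dev server also has to listen where Apache is pointing. A hedged sketch:

        # in the virtual host: proxy everything under / to the dev server
        ProxyRequests Off
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:8000/
        ProxyPassReverse / http://127.0.0.1:8000/

        # inside the VM: bind the dev server explicitly (0.0.0.0 if Apache ever proxies
        # to the VM's external address rather than loopback)
        python manage.py runserver 0.0.0.0:8000

    The "File does not exist: /htdocs" error is what Apache logs when a request falls through to the filesystem instead of the proxy, which fits a ProxyPass line that never matched.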

    Read the article

  • apache 2.4 redirect within virtualhost

    - by user129545
    I have a couple of http (port 80) vhosts that I want to redirect to http if an https request is made to them. Apparently some things have changed with Apache 2.4 (NameVirtualHost not used like it was in the past, etc). Apache 2.4 on CentOS 5.5, all using a single IP for the vhosts below; I don't have multiple IPs on this box. My /usr/local/apache2/conf/extra/httpd-vhosts.conf:

        <VirtualHost www.dom1.com:80>
            ServerName www.dom1.com
            ServerAlias dom1.com
            DocumentRoot /usr/local/apache2/htdocs/dom1/wordpress
        </VirtualHost>

        <VirtualHost webmail.dom2.com:443>
            ServerName webmail.dom2.com
            DocumentRoot /usr/local/apache2/htdocs/webmail
            SSLEngine On
            SSLCertificateFile /usr/local/apache2/webmail.crt
            SSLCertificateKeyFile /usr/local/apache2/webmail.key
        </VirtualHost>

    My /usr/local/apache2/conf/extra/httpd-ssl.conf:

        Listen 443
        SSLPassPhraseDialog builtin
        SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000)
        SSLSessionCacheTimeout 300
        Mutex default
        SSLRandomSeed startup file:/dev/urandom 512
        SSLRandomSeed connect builtin
        SSLCryptoDevice builtin

    webmail.dom2.com works fine. The problem is I can connect to https://www.dom1.com, and it serves up the content from webmail.dom2.com. I want any https requests for www.dom1.com on port 443 to simply redirect to http://www.dom1.com on port 80. Thanks
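
    What is being described (the webmail vhost answering for https://www.dom1.com) is Apache's normal fallback: with a single IP, any name that has no matching 443 vhost lands in the first/only SSL vhost. One hedged way to get the redirect is to give the name its own 443 vhost that does nothing but redirect; reusing the only available certificate keeps Apache happy, although browsers will still warn about the name mismatch before following the redirect:

        <VirtualHost *:443>
            ServerName www.dom1.com
            ServerAlias dom1.com
            SSLEngine On
            SSLCertificateFile    /usr/local/apache2/webmail.crt
            SSLCertificateKeyFile /usr/local/apache2/webmail.key
            Redirect permanent / http://www.dom1.com/
        </VirtualHost>

    A warning-free version needs a certificate that covers www.dom1.com, or simply not answering on 443 for that name at all and accepting that https://www.dom1.com then fails to connect rather than redirecting.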

    Read the article

  • what web based tool, to allow a non-technical user to manage authorized keys files on a Linux (fedora/centos/ubuntu/debian) server

    - by Tom H
    (Edit: clarification below) We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding id_dsa.pub to their authorized keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage the user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users. The developers are logging in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when Webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using Webmin clustered users and groups. However, looking at the currently installed Webmin, it is not as easy to create the authorized keys as it is to create user accounts and passwords. (It's possible, but it's not easy: some functionality is in the Usermin module, or it would require some tedious steps.) Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to the user's authorized keys file. If the tool sync'ed the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
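
    For context, the operation such a web front-end has to wrap is small, which is why a purpose-built tool feels like it should exist; a sketch of the underlying steps for one user, with names and the pasted key as placeholders:

        # create the account and install a pasted public key
        useradd -m -s /bin/bash jdeveloper
        install -d -m 700 -o jdeveloper -g jdeveloper /home/jdeveloper/.ssh
        echo "ssh-rsa AAAA...pasted-key... jdeveloper@laptop" >> /home/jdeveloper/.ssh/authorized_keys
        chown jdeveloper:jdeveloper /home/jdeveloper/.ssh/authorized_keys
        chmod 600 /home/jdeveloper/.ssh/authorized_keys

        # or generate a pair on the server and hand the private key to the developer
        ssh-keygen -t rsa -f /tmp/jdeveloper_key -N ''

    Since Chef is already syncing accounts, one option is to keep the keys as node or data bag attributes and let a simple web form edit that data, which gives the "master server" interface without adding LDAP.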

    Read the article

  • Resize a RAID 1 volume on OSX Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OSX-related topics, I often don't find any deep-dive technical explanations sufficient enough to feel confident doing dangerous things. Here is my question: I have a Mac Pro, running OSX 10.6.2. I have, as my main root/boot disk, a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because I originally took a 640GB disk, bought a terabyte disk, mirrored it (using diskutil appleraid enable...), and when it synced I removed the 640GB disk and replaced it with a second 1 TB disk, and synced again. Voila! A single 640 GB disk replaced by two 1 TB disks in a mirror... Actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help:

        -> diskutil list
        /dev/disk0
           #:                   TYPE NAME         SIZE        IDENTIFIER
           0:  GUID_partition_scheme              *1.0 TB     disk0
           1:                    EFI              209.7 MB    disk0s1
           2:             Apple_RAID              999.9 GB    disk0s2
           3:             Apple_Boot Boot OSX     134.2 MB    disk0s3
        /dev/disk1
           #:                   TYPE NAME         SIZE        IDENTIFIER
           0:  GUID_partition_scheme              *1.0 TB     disk1
           1:                    EFI              209.7 MB    disk1s1
           2:             Apple_RAID              999.9 GB    disk1s2
           3:             Apple_Boot Boot OSX     134.2 MB    disk1s3
        /dev/disk2
           #:                   TYPE NAME         SIZE        IDENTIFIER
           0:  GUID_partition_scheme              *640.1 GB   disk2
           1:                    EFI              209.7 MB    disk2s1
           2:              Apple_HFS Mac Disk 2   536.7 GB    disk2s2
           3:   Microsoft Basic Data BOOTCAMP     103.1 GB    disk2s3
        /dev/disk3
           #:                   TYPE NAME         SIZE        IDENTIFIER
           0:              Apple_HFS Mirror1      *639.8 GB   disk3

        -> diskutil appleraid list
        AppleRAID sets (1 found)
        ===============================================================================
        Name:                 Macintosh HD
        Unique ID:            1953F864-B474-4EB6-8E69-41834EBD0247
        Type:                 Mirror
        Status:               Online
        Size:                 639.8 GB (639791038464 Bytes)
        Rebuild:              manual
        Device Node:          disk3
        -------------------------------------------------------------------------------
        #  Device Node   UUID                                   Status
        -------------------------------------------------------------------------------
        0  disk1s2       25109BAE-5697-40EA-B612-0217851444F7   Online
        1  disk0s2       11B83AB0-8148-4DB6-8761-DEF08C855F8D   Online
        ===============================================================================

    Thanks in advance.

    Read the article

  • CryptSvc not matched by Windows 7 Firewall rule

    - by theultramage
    I am using Windows Firewall in conjunction with a third-party tool to get notified about new outbound connection attempts (Windows Firewall Notifier or Windows Firewall Control). The way these tools do it is by setting the firewall to deny by default, and to add an auditing policy to log blocked connections into the Security event log. Then they watch the log, and display notification about newly added entries.

        netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound
        auditpol /set /subcategory:{0CCE9226-69AE-11D9-BED3-505054503030} /failure:enable

    With this configuration in place, I now need to craft outbound allow rules for applications and system services. Here is the rule for CryptSvc, the service frequently used for certificate validation and revocation checking:

        netsh advfirewall firewall add rule name="Windows Cryptographic Services" action=allow enable=yes profile=any program="%SystemRoot%\system32\svchost.exe" service="CryptSvc" dir=out protocol=tcp remoteport=80,443

    The problem is, this rule does not work. Unless I change the scope to "all programs and services" (which is really unhealthy), connection denied events like the following will keep appearing in the Security log:

        Event 5157, Microsoft Windows security auditing.
        The Windows Filtering Platform has blocked a connection.
        Application Information:
            Process ID: 1476 (<- svchost.exe with CryptSvc and nothing else)
            Application Name: \device\harddiskvolume1\windows\system32\svchost.exe
        Network Information:
            Direction: Outbound
            Source Address: 192.168.0.1
            Source Port: 49616
            Destination Address: 2.16.52.16
            Destination Port: 80
            Protocol: 6 (<- TCP)

    To make sure it's CryptSvc, I have let the connection through and reviewed its traffic; I also configured CryptSvc to run in its own svchost instance to make it more obvious:

        ;sc config CryptSvc type= share
        sc config CryptSvc type= own

    So... why is it not matching the firewall rule, and how to fix that?
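
    One avenue worth checking, offered as a guess rather than a confirmed cause: rules that specify service= can only be matched if the per-service SID for that service is present in the process token, which depends on the service's SID type. The query and the change look like this:

        rem show the current SID type for the service (NONE means no per-service SID in the token)
        sc qsidtype CryptSvc
        rem give the service its own SID so service-restricted rules can be matched against it
        sc sidtype CryptSvc unrestricted

    If changing the SID type makes the existing rule start matching, the problem was service SID filtering rather than the rule syntax.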

    Read the article

  • Gerrit ssh key setup on windows server

    - by hotpotato
    I am attempting to configure Google's 'Gerrit' code review web app on a Windows Server 2008 virtual machine on our internal network. We are using Apache Tomcat (6.0.36) to host the web app and have deployed the gerrit.war to Tomcat's webapp folder, and set up the context.xml, web.xml etc. for the web app correctly, I believe. However, when I start up Tomcat using $CATALINA_HOME/bin/startup.bat I get the following message in the Tomcat logs:

        Dec 07, 2012 1:03:54 PM org.apache.catalina.core.StandardContext listenerStart
        SEVERE: Exception sending context initialized event to listener instance of class com.google.gerrit.httpd.WebAppInitializer
        com.google.inject.CreationException: Guice creation errors:

        1) No SSH keys under C:\Gerrit\config\etc
           while locating com.google.gerrit.sshd.HostKeyProvider
           at com.google.gerrit.sshd.SshModule.configure(SshModule.java:90)

    I have created an is_rsa.pub SSH key and placed it in the specified directory to no avail. I have been googling this for about a week now and can't seem to find any information about the file or format it is expecting... documentation on setting Gerrit up on Windows seems hard to come by! Can anyone provide useful information about how to correctly configure a host SSH key in this context?
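
    Judging only from the class named in the stack trace (com.google.gerrit.sshd.HostKeyProvider), Gerrit is looking for SSH host key files for its built-in SSH daemon, not a user's public key. A hedged sketch of creating them with ssh-keygen (the exact file names Gerrit expects can vary by version; commonly ssh_host_rsa_key and ssh_host_dsa_key, or a combined ssh_host_key):

        cd C:\Gerrit\config\etc
        ssh-keygen -t rsa -P "" -f ssh_host_rsa_key
        ssh-keygen -t dsa -P "" -f ssh_host_dsa_key
        rem leaves ssh_host_rsa_key / ssh_host_rsa_key.pub (and the DSA pair) in etc\

    On Windows the ssh-keygen binary can come from Git for Windows/msysGit or Cygwin; the private key files need an empty passphrase so the daemon can load them unattended.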

    Read the article

  • Copy a harddrive from a failed desktop machine using a second working one. [closed]

    - by MrEyes
    Here's the scenario: I have PC-A, an old PC that runs Windows XP but now refuses to boot due to a failed motherboard (or maybe PSU). This PC has a single 80 GB IDE drive. I also have PC-B, running Windows Vista, which is working fine. I want to copy all the data off PC-A's HDD onto PC-B. To do this I have taken the HDD out of PC-A and connected it as a slave to PC-B. PC-B now boots and sees the additional drive. However, when I attempt to access/copy user folders (i.e. Documents and Settings/[username]/*) I am told that I cannot access the folders due to user permissions. I am doing this under an administrator account on PC-B. So the question is, how can I "back up" the data? Preferably without making any changes to the drive contents. The reason for this is that it is possible that PC-A is failing due to a bad PSU, so I intend to replace it before writing off the machine. However, I would feel much happier if I had a backup of the data on the HDD.
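
    One approach that reads the old profiles without first rewriting their ACLs: copy in backup mode with robocopy (included with Vista), which uses the administrator's backup privilege to read past the old XP permissions. Drive letters and the destination path are placeholders:

        rem E: = the old XP disk attached as a slave, D:\pc-a-backup = destination
        robocopy "E:\Documents and Settings" "D:\pc-a-backup\Documents and Settings" ^
            /E /B /COPY:DAT /R:1 /W:1 /LOG:D:\pc-a-backup\copy.log

    Taking ownership instead (takeown /R plus icacls) also works, but it modifies the ACLs on the old drive, which conflicts with the goal of leaving its contents untouched.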

    Read the article

  • Brick-level backup and restore with exchange 2007

    - by V. Romanov
    In the company I'm working for, we use Exchange 2007 and back it up using NetBackup. The backup is a daily complete backup of the information store, and the direct corollary of this is that restores are hell. We need to restore the entire information store (over 80 GB) and somehow merge it back with the original store, which causes problems. Alternatively, we tried using Quest software to emulate Exchange and restore mails from the emulation. However, this proved unreliable. The main problem with this entire situation is that we have to restore the whole information store and walk it through the restore process manually, and it's quite absurd to be forced to spend more than a day restoring even one erased email. (We have erased mail retention, but sometimes we need to restore older mail.) In comparison, back in the day of XCH2003 and Backup Exec 12, we had complete brick-level backup and restore at the push of a button. I've spoken to one of our chief sysadmins, who claimed that the official response from Microsoft to this issue was "sorry guys, no brick-level backup in XCH2007", which sounds ridiculous to me. Can someone shed some light on the situation? How do you back up your Exchange 2007 stores? Can you restore a single email quickly? A mailbox, perhaps?
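
    For what it's worth, Exchange 2007 does have a supported single-mailbox path once a copy of the database has been restored into a Recovery Storage Group; a hedged sketch of the Management Shell side, with server, database and mailbox names as placeholders (the database files still have to be restored into the RSG path from the NetBackup image first):

        New-StorageGroup -Server MBX01 -Name RSG1 -Recovery
        New-MailboxDatabase -MailboxDatabaseToRecover "Mailbox Database" -StorageGroup MBX01\RSG1
        # ...restore the .edb/logs into the RSG folder, mount the RSG database, then:
        Restore-Mailbox -Identity jdoe -RSGDatabase "MBX01\RSG1\Mailbox Database" `
            -RSGMailbox jdoe -TargetFolder "Recovered Items"

    It still means restoring the whole store from tape, so it shortens the merge step rather than the restore itself; true per-item restore needs either granular-recovery support in the backup product or a longer deleted-item retention window.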

    Read the article

  • Can a hardware firewall block a server accessing its OWN UNC shares?

    - by Simon
    I need to set up a UNC share for my hosted dedicated server to access a share on itself. Unfortunately TFS requires a UNC share. I am on a Windows Server 2008 Standard SP2 64-bit dedicated server behind a PIX 501 firewall hosted with GoDaddy. I just cannot get the server to access itself; I get this error:

        Windows cannot access \\SERVER\SHARE
        Check the spelling of the name... etc.

    I've found numerous questions about this but no answer to my problem.

        Server 2008 Standard x64 SP2
        Workgroup - not domain
        Windows Firewall is off
        Computer Browser service is on
        I am trying to access \\MYMACHINE\TFS-BUILDS by typing it in - or double clicking. Neither works.
        Machine has a single network card
        File sharing wizard says the share was OK
        Share was showing under 'Computer Management'
        Permissions are set to 'Everyone' full control
        No obvious errors in the event log
        Reboot didn't fix it

    Unfortunately I cannot try to access other shares in or out of this machine because it is a hosted dedicated server and the only machine behind a hardware firewall. The only thing left I can think of is that the hardware firewall needs to be configured. Is this possible? Does 'UNC traffic' go out of the machine and then back in again?
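
    On the title question: SMB access to the machine's own name normally stays on the loopback path and never reaches the PIX, so the firewall is an unlikely culprit. A few hedged checks that separate name resolution from the sharing service itself (share name as in the question):

        rem does the share answer when the name is taken out of the picture?
        net use \\127.0.0.1\TFS-BUILDS
        net use \\localhost\TFS-BUILDS
        rem is the Server service running and the share actually published?
        sc query lanmanserver
        net share
        rem does the machine resolve its own name the way it is being typed?
        ping MYMACHINE
        nbtstat -n

    If the IP/localhost forms work but \\MYMACHINE does not, the problem is name resolution (hosts file or NetBIOS) rather than sharing; if nothing works, the Server service or the SMB bindings on the single NIC are the place to look.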

    Read the article

  • Processing-time billing in Amazon EC2

    - by Rafael Almeida
    Hi all! I think my question is fairly basic, but I would like a clarification: in the Pricing part of AWS we can see that Amazon charges around $0.10 per 'instance computing hour'. I've seen in a blog post somewhere (can't remember where exactly, and even if I did I think it was in Portuguese anyway) that this way your minimum monthly payment would be $72 (= $0.10/hour x 24 hours x 30 days). Is this correct? (I don't think it is!) My understanding is that this 'virtual computing time' is only used when your machine is actually doing something (serving pages, serving the admin via ssh, whatever), so real billable usage would be less than 720 hours/month in most webserver scenarios. Is my view correct? If it is, then it leads me to another question: is it economically interesting to buy access to one of these instances for testing? I mean, would I have the 'freedom' to 'forget' about it for a month and receive a very-close-to-zero (as in, a few cents) bill? Do you do it/know of anybody who does? Any thoughts on the matter (as in, "yes, it's a good idea", or "yes, but there's this 'gotcha': ...", or "no, nobody does it because of...")? PS: sorry for the long question text. I highlighted the main questions for easy view. Also, I'm not sure if this question is actually more than one and if it's desirable for the community, so, sorry if it is too! Thanks in advance!

    Read the article

  • Best practice for Exchange 2010 HA topology considering 6 x Exchange licenses and TMG 2010

    - by MadBoy
    What would be the best topology considering that:

        6 x Exchange 2010 Standard licenses
        2 x separate locations that are supposed to provide redundancy in case of link problems
        4 x Forefront TMG 2010 with Forefront Security and Forefront Protection/Security
        Multiple locations worldwide using those Exchange servers; most locations will be connected with a VPN tunnel (the ones hosting Exchange for sure)

    I was thinking something like this:

        Location MAIN (about 70-100 people):
            2x TMG 2010 in NLB
            1x Exchange 2010 CAS/HUB role
            2x Exchange 2010 Mailbox role (Active + Passive)
        Location SUPPORT (about 20 people):
            2x TMG 2010 in NLB
            1x Exchange 2010 CAS/HUB role
            2x Exchange 2010 Mailbox role (Active + Passive)

    Management wants to make sure that in case of problems in the main location (power failure, link loss, etc.) the second location can support all traffic from around the world, and vice versa. We have 6-7 locations and more coming up (not big ones, more like 10+ people per location). I do know that the CAS/HUB is a single point of failure (and no NLB), but I simply lack the licenses to add redundancy there. What do you think about this approach? What would be a better approach, in your view?

    Read the article

  • NetBackup with VSS and Instant Recovery - Failing to delete old snapshots

    - by Jonathan Bourke
    We are attempting to implement Microsoft VSS for snap-shotting in our NetBackup 6.5.3.1 environment. The clients are both 32 & 64 bit Windows 2003 Server. Snapshot parameters are:

        Instant recovery is enabled
        Maximum snapshots = 1
        Provider type = 1 (System)
        Snapshot attribute = 1 (Differential)

    All backups successfully complete, and VSS shadows are successfully created both for the snapshot backup and for the open files (shadow copy components).

    The Issue: NetBackup is not clearing or overwriting old snapshots with each successive backup. When we list shadows and shadow storage, it is increasing and increasing. It is not honouring the Maximum Snapshots setting.

    The Logs: The bpfis log doesn't really appear to show any errors other than for methods which we are not employing (VxVM, FlashSnap, etc.). A section is as follows:

        11:54:10.744 [348.4724] <2> logparams: D:\Program Files\Veritas\NetBackup\bin\bpfis.exe delete -nbu -id htpststr001.san.mgmt.det_1248918143 -bpstart_to 300 -bpend_to 300 -clnt htpststr001.san.mgmt.det
        11:54:10.744 [348.4724] <4> bpfis: INF - BACKUP START 348
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS error 10; see following messages:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: Non-fatal method error was reported
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vfm_configure_fi_one: method: FlashSnap, type: FIM, function: FlashSnap_init
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS method error 3; see following message:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: FlashSnap_init: Veritas Volume Manager not installed.
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS error 10; see following messages:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: Non-fatal method error was reported
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vfm_configure_fi_one: method: vxvm, type: FIM, function: vxvm_init
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: VfMS method error 3; see following message:
        11:54:11.713 [348.4724] <8> onlfi_initialize_vfms: vxvm_init: Veritas Volume Manager not installed.
        11:54:11.713 [348.4724] <4> onlfi_thaw: Thawing C:\ using snapshot method VSS.
        11:54:11.713 [348.4724] <2> onlfi_vfms_logf: vfm_thaw: delete snapshot ...
        11:54:11.744 [348.4724] <2> onlfi_vfms_logf: snapshot services: emcclariionfi:Thu Jul 30 2009 11:54:11.744000 <Thread id - 4724> Unable to import any login credentials for any appliances.
        11:54:11.760 [348.4724] <2> onlfi_vfms_logf: snapshot services: hpevafi:Thu Jul 30 2009 11:54:11.760000 <Thread id - 4724> CHpEvaPlugin::init: CLI tool is not installed.
        11:54:11.760 [348.4724] <2> onlfi_vfms_logf: snapshot services: hpmsafi:Thu Jul 30 2009 11:54:11.760000 <Thread id - 4724> No array mangement credentials are available in configuration file.
        11:54:13.806 [348.4724] <4> onlfi_thaw: do_thaw return value: 0
        11:54:13.806 [348.4724] <4> onlfi_thaw: Thawing D:\ using snapshot method VSS.
        11:54:15.806 [348.4724] <4> onlfi_thaw: do_thaw return value: 0
        11:54:19.806 [348.4724] <2> fis_delete_id: removing D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.htpststr001.san.mgmt.det_1248918143.0
        11:54:19.806 [348.4724] <2> fis_delete_id: removing D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.htpststr001.san.mgmt.det_1248918143.0.fiid
        11:54:19.853 [348.4724] <4> bpfis: INF - EXIT STATUS 0: the requested operation was successfully completed

    The Question: Has anyone any experience of NetBackup / VSS not clearing snapshots after backups? We will ultimately be using an HP EVA for the snapshots, but we want to ensure correct functioning at a VSS level before we go further. Regards, Jonathan (PS: Question previously posted by my colleague on entsupport.symantec.com)

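    Independent of the NetBackup side, the accumulated shadows can be inspected and pruned with the standard VSS tooling on the Windows 2003 clients, which at least keeps disk usage under control while the root cause is chased; drive letters below are placeholders:

        rem what is currently being kept, and how much space it uses
        vssadmin list shadows
        vssadmin list shadowstorage
        rem cap the diff area so old shadows age out instead of growing without bound
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=10GB
        rem last resort once backups are verified: drop all shadows on a volume
        vssadmin delete shadows /for=C: /all
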
    Read the article

  • How to recover data files from xampp-windows to xampp-linux after crash?

    - by David Buehler
    My Windows box died after I developed a database in XAMPP on it; fortunately I have a backup of the entire F:/TestWeb/Xampp partition. Unfortunately, I did not do an Export (nor dump) of the "Lws2" database before the crash. I have replaced the defunct machine with one running Mint 7 (based on Ubuntu 9.04 "Jaunty Jackalope") and installed xampp-linux into the /opt partition, so the new XAMPP now runs fine in /opt/lampp, and says all the elements are secured by passwords (which I just assigned during this installation). I assumed that xampp-windows installed in November would migrate easily to xampp-linux installed in February -- a bad assumption. It apparently would have been simple if I had known enough to do an Export or a Dump before the crash, but...

    The backup was done to a Network Attached Storage drive, which is formatted as "vfat", so the backup does not carry with it any valid ownership permissions from MySQL on NTFS. I now see from my backup that the old data resided in \TestWeb\Xampp\Mysql\Data\Lws2\ and consists of 7 ".frm" files which define my tables. The actual data -- I suppose a ".sql" file or files -- has disappeared, and I am resigning myself to two days of retyping it. But I do not wish to do the table layouts all over again.

    So I copied the Data tree to /opt/lampp/Data -- PhpMyAdmin does not see it. So I copied the Lws2 tree to /opt/lampp/Lws2 -- PhpMyAdmin does not see it. So I copied the Data tree to /opt/lampp/var/mysql/Data -- PhpMyAdmin does not see it. So I copied the Lws2 tree to /opt/lampp/var/mysql/Lws2 -- PhpMyAdmin does not see it. So I adjusted all the permissions to stop saying owner "nobody", to say owner "root", and gave full permissions to all groups and to all others, with permissions percolating down, in all 4 trees. You guessed it -- PhpMyAdmin does not see any database named Lws2, only its 4 default ones. I double-checked the permissions and rebooted Linux and repeated the tests. At some point in that process I did see PhpMyAdmin showing "lws2(7)", but when I clicked on it I saw a "no table found" message. I have not been able to recreate that experience.

    Apparently there are some setup files for MySQL and for PhpMyAdmin which need to be set up by running a wizard or two or by editing the files directly. I grepped the TestWeb tree and found an old "ldir = "C:TestWeb\Xampp\MySql\" and a "DataDir = C:TestWeb\Xampp\MySql\" in a .php file and in a .bat file, but I cannot find the corresponding config file names on the /opt partition -- so it looks as if these wizards have not been run to create them. What config files does Linux use to set up MySQL config files for PhpMyAdmin? What wizards do I need to run to point the MySQL engine and PhpMyAdmin at the folder /opt/lampp/data/ with its lws2 folder inside it? Or which files do I need to edit, with a sample of what it normally says under Linux? Incidentally, I remember I converted from MyISAM with its .MYD and .MYI files to InnoDB after entering only a small amount of the data -- and I do not know what file types to look for -- perhaps my data is still there but under another guise or in another place? Is it something as simple as Linux needing to see "/data/" instead of /Data? I will check that out while waiting for a response. If anyone can point me to documentation that discusses this level of detail -- I will read it avidly! In any case, thanks for any clarification you can give on this thorny problem. wizdum
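
    One hedged pointer, since some tables had been converted to InnoDB before the crash: the InnoDB row data does not live in the Lws2 folder at all, but in the shared ibdata1 file (plus ib_logfile0/ib_logfile1) at the top of the old MySQL data directory; the .frm files in Lws2 are only the table definitions, so the recoverable data may well still be in the backup. A sketch of moving the whole old datadir across, assuming the XAMPP-for-Linux layout (/opt/lampp/var/mysql for data, /opt/lampp/etc/my.cnf for config) and a placeholder mount point for the NAS backup:

        /opt/lampp/lampp stopmysql
        # copy the ENTIRE old data directory, not just the Lws2 folder -- ibdata1 and the
        # ib_logfile* files hold the actual InnoDB rows that the .frm files describe
        cp -a /mnt/nas-backup/TestWeb/Xampp/Mysql/Data/. /opt/lampp/var/mysql/
        chown -R mysql:mysql /opt/lampp/var/mysql
        # note: this also brings the old mysql/ grant tables, so the Windows-era passwords apply
        /opt/lampp/lampp startmysql

    Case also matters on Linux (Data vs data, Lws2 vs lws2), which may explain the one-off "lws2(7)" sighting; checking the contents of /opt/lampp/var/mysql/ after the copy, and the datadir line in /opt/lampp/etc/my.cnf, should confirm where MySQL is actually looking.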

    Read the article
