Search Results

Search found 22641 results on 906 pages for 'use case'.


  • pam_unix(sshd:session) session opened for user NOT ROOT by (uid=0), then closes immediately when using TortoiseSVN

    - by codewaggle
    I'm having problems accessing an SVN repository using TortoiseSVN 1.7.8. The SVN repository is on a CentOS 6.3 box and appears to be functioning correctly.

        # svnadmin --version
        # svnadmin, version 1.6.11 (r934486)

    I can access the repository from another CentOS box with this command:

        svn list svn+ssh://[email protected]/var/svn/joetest

    But when I attempt to browse the repository using TortoiseSVN from a Win 7 workstation, I'm unable to do so using the following path:

        svn+ssh://[email protected]/var/svn/joetest

    I'm able to log in via SSH from the workstation using PuTTY. The results are the same if I attempt access as root. I've given ownership of the repository to USER:USER and ran chmod 2700 -R /var/svn/. Because I can access the repository via ssh from another Linux box, permissions don't appear to be the problem. When I watch the log file using tail -fn 2000 /var/log/secure, I see the following each time TortoiseSVN asks for the password:

        Sep 26 17:34:31 dev sshd[30361]: Accepted password for USER from xx.xxx.xx.xxx port 59101 ssh2
        Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session opened for user USER by (uid=0)
        Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session closed for user USER

    I'm actually able to log in, but the session is then closed immediately. It caught my eye that the session is being opened for USER by root (uid=0), which may be correct, but I'll mention it in case it has something to do with the problem. I looked into modifying svnserve.conf, but as far as I can tell it's not used when accessing the repository via svn+ssh; a private svnserve instance is created for each login via this method. From the manual: There's still a third way to invoke svnserve, and that's in “tunnel mode”, with the -t option. This mode assumes that a remote-service program such as RSH or SSH has successfully authenticated a user and is now invoking a private svnserve process as that user. The svnserve program behaves normally (communicating via stdin and stdout), and assumes that the traffic is being automatically redirected over some sort of tunnel back to the client. When svnserve is invoked by a tunnel agent like this, be sure that the authenticated user has full read and write access to the repository database files. (See Servers and Permissions: A Word of Warning.) It's essentially the same as a local user accessing the repository via file:/// URLs. The only non-default settings in sshd_config are:

        Protocol 2 # to disable Protocol 1
        SyslogFacility AUTHPRIV
        ChallengeResponseAuthentication no
        GSSAPIAuthentication yes
        GSSAPICleanupCredentials yes
        UsePAM yes
        AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
        AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
        AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
        AcceptEnv XMODIFIERS
        X11Forwarding no
        Subsystem sftp /usr/libexec/openssh/sftp-server

    Any thoughts?
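    For reference, a quick way to check whether the svn+ssh tunnel itself is healthy, independently of TortoiseSVN, is to invoke svnserve in tunnel mode over SSH by hand - a hedged sketch (run from any machine with an SSH client; the user@host string is the same placeholder the question uses):

        # If the tunnel works, this prints an svnserve greeting beginning with "( success ..."
        # and then waits for input (Ctrl+C to quit). If the session is killed immediately,
        # the problem is on the SSH/shell side rather than in the Windows client.
        ssh [email protected] "svnserve -t"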

    Read the article

  • VirtualBox - multiple guests, each with a single bridged adapter?

    - by Martin
    I am running a dedicated server (located at Hetzner, Germany) that runs VirtualBox in order to virtualize several services across multiple virtual guests. Those guests are supposed to:

    (1) communicate with each other (for instance, a virtual web server has to access a virtual database server);
    (2) be reachable from the dedicated server (for instance, SSH access); and
    (3) access the Internet via the dedicated server (for instance, to download security updates).

    Currently, this is achieved by having host-only adapter vboxnet0 on the dedicated server and two virtual interfaces on each guest. There, virtual adapter eth0 is attached to vboxnet0 (to achieve (1) and (2)), and virtual adapter eth1 is attached to VirtualBox's NAT (to achieve (3)). Via eth0, the guests have access to a DHCP and a DNS server, both running on the dedicated server (there, bound to vboxnet0). This allows me to assign custom IP addresses and names. Via eth1, VirtualBox pushes a proper route that enables each guest to access the Internet (via eth0 on the dedicated server). This setup with two virtual adapters frequently leads to problems and at least complicates many things. For instance, on the dedicated server there is OpenVPN, which allows access to the virtual machines via the Internet; furthermore, there is Shorewall, which controls the incoming and outgoing network traffic between the Internet, the dedicated server, and the individual virtual machines. Not to mention automatic installation of servers via PXE... Therefore, I would prefer to have only one single virtual adapter on each guest which would be used for both incoming and outgoing connections. As far as I understand, one would basically use a bridged interface for that very purpose. Now the question arises: which interface on the dedicated server would the bridge use? eth0 on the host server is not an option, as this is prohibited by the provider. A virtual interface eth0:0 would not make any sense, as a bridge always uses a physical interface (eth0 in this case). Would it be possible to create a bridged interface in each virtual machine that would "dangle in the air", i.e. without a complement on the dedicated server? How would I have to set up the routing on the host server? Please note that the host / dedicated server has only one network adapter (eth0), which is connected to the provider's network. Regards, Martin
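    For illustration only, a rough sketch of the routed alternative that is sometimes used instead of bridging when the provider forbids bridging onto eth0: keep a single host-only adapter per guest and forward/NAT its traffic on the host. The interface names eth0/vboxnet0 come from the question, but the 192.168.56.0/24 subnet and the gateway address are assumptions:

        # On the dedicated server (host): enable forwarding and NAT the host-only subnet out via eth0
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 192.168.56.0/24 -o eth0 -j MASQUERADE
        iptables -A FORWARD -i vboxnet0 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o vboxnet0 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # In each guest: use the host's vboxnet0 address (assumed 192.168.56.1) as the default gateway
        ip route add default via 192.168.56.1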

    Read the article

  • Firefox: non-Vimperator way to do mouseless browsing?

    - by Peter Mortensen
    Is it possible to do efficient browsing with Firefox using only the keyboard (like in Opera)? By efficient I mean something faster than using TAB - that takes far too long. The arrow keys should be for navigation (in Opera it is Shift + arrow key). It can be done with the Vimperator add-on, but isn't there a simpler way?

    Update 2: the closest to Opera's way is to enable caret navigation (F7 toggles this mode). It doesn't jump between links, so it is a little bit slower, but the normal navigation (arrow keys, page up, page down, etc.) works and the focus/caret/cursor follows (in contrast to a text editor for page up/down). And text can be selected and copied like in a text editor. The biggest drawback is that in practice it is necessary to switch in and out of caret mode, and there is no indication of which mode is currently active.

    Update 1: a work-around (proposed by several, but not really what I am looking for) can be used if 3 settings are changed (to make it practical). After these changes the first few letters of a link text can be typed and that link will be selected, so pressing Enter will open it. Using the work-around the screen will jump around if it is a long page, as it does not restrict itself to the currently visible page, but it is usable.

    First settings change: menu Tools/Options/Advanced/tab General/Accessibility/"Search for text when I start typing" - turn this option on.

    Second settings change: set the option to only go to links. In the address bar enter about:config followed by Enter. Then press "I'll be careful, I promise", find the line accessibility.typeaheadfind.linksonly, select it and change the value to True by either hitting Enter or Shift+F10/Toggle (accessibility.typeaheadfind.linksonly was on line 11 when I tried).

    Third settings change: turn off case sensitivity. Set accessibility.typeaheadfind.casesensitive to 0 (same procedure as for accessibility.typeaheadfind.linksonly, see above; when Enter is pressed a dialog box will appear with the current value - type 0 and press Enter).

    To use: type some part of the link. If there are several possibilities, use Ctrl+G (or F3) to jump between them. Use Ctrl+Enter to open in a new tab. Platform: Firefox 3.0.6, Windows XP 64 bit SP2.
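    As an aside, the same three preference changes can be captured in a user.js file in the Firefox profile directory so they survive profile resets - a minimal sketch, assuming the preference names quoted above (user.js is a standard Firefox mechanism; the values simply mirror the steps described in the question):

        // user.js - placed in the Firefox profile folder
        user_pref("accessibility.typeaheadfind", true);            // "Search for text when I start typing"
        user_pref("accessibility.typeaheadfind.linksonly", true);  // match links only
        user_pref("accessibility.typeaheadfind.casesensitive", 0); // case-insensitive matching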

    Read the article

  • Impact of the L3 cache on performance - worth a dual-processor system?

    - by Dan Nissenbaum
    I will be purchasing a new high-end system, and I would like to have a better sense of whether a dual-processor Xeon system (I am looking at the new, high-end Xeon E5-2687W) might, realistically, provide a noticeable performance improvement due to the doubling of the L3 cache (20 MB per CPU). (This is in addition to the occasional added advantage due to the doubling of cores and RAM.) My usage scenario is, roughly, that I have many background applications running at any time - 3 or 4 data compression/backup applications, a low-impact web server, one or two virtual machines at any given time (usually fairly idle), and perhaps 20 utility programs that utilize a noticeable (but small) portion of the CPU cores. In total, when I am not actively using the computer, about 25% of the total CPU power is utilized in my current i7-970 6-core (12 thread) system. When I am doing routine work, the CPU utilization often exceeds 50%, and occasionally hits 75%-80%. The Xeon E5-2687W is not only a second-generation i7 (so should improve performance for that reason), but also has 8 cores (16 threads), rather than 6 cores. For this reason, I expect to run into the 75% CPU range even less frequently. Nonetheless, the ability to double the cores and the RAM is a consideration. However, in the end, I believe this decision comes down to whether the doubling of the L3 cache will provide a noticeable improvement. There are many benchmarks, and a lot of discussion, regarding CPU power. However, I find very little discussion of L3 cache utilization, and how increases in the L3 cache (such as doubling it with dual processors) affect performance. For example: If there are only two processes running, but each benefits from a large L3 cache (such as might be the case for background processes that frequently scan the file system), perhaps the overall system performance might noticeably improve with dual CPU's - even if only a single core is active on each CPU - due to each process having double the effective L3 cache. I am hoping that someone has a sense of the benefits of increasing (or doubling) the L3 cache size. Note: the CPU I am considering (the Xeon E5-2687W) has 20 MB L3 cache, so a system with dual CPU's would have 40 MB L3 cache.

    Read the article

  • Ubuntu 12.04 crash analysis - strange binary data on all open files at the moment of crash

    - by lanbo
    A couple of hours ago we got a system crash on Ubuntu 12.04. We checked all the log files and there is nothing suspicious to blame. The last thing that was logged was some dovecot activity. There are no kernel panic messages. Nothing. It is a new server (new hardware) we are testing before production, and because it is new hardware, I'm suspicious the problem may be due to some faulty component. We already ran memtester with no problems detected. I'll be happy to hear about other hardware testing tools (the machine has an SSD). Anyway, the thing I wanted to ask you is a different one. The strange thing is that in every file that was open at the moment of the crash, we found the following sequence of symbols written into it: "@^@^@^@^@^@^@...". For example, in the syslog log file we got:

        Apr 16 15:53:56 odyssey dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<info>, method=PLAIN, rip=46.29.255.73, lip=5.9.58.177 ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^ [this continues for about 1000 chars...] ^@^@^@^@Apr 16 15:55:12 odyssey kernel: imklog 5.8.6, log source = /proc/kmsg started.

    We got these symbols in all open files. These include syslog, mail.log, kern.log, ... but also some logs that are written by PHP scripts run from cron under user accounts (not root). So, any idea why all open files got these characters written into them during the crash? This is pretty bad, since the crash corrupted many files (we don't even know which other ones may be affected). We suspect that every file that was open (in write mode maybe) at the moment of the crash got these symbols inserted. Why is that? BTW [in case it helps], the system automatically rebooted after the crash but Apache did not start. There were no traces in /var/apache2/*log of why Apache did not start. After running "service apache2 start" it started with no problems. Also, we rebooted the machine manually and Apache also started on reboot. But it did not start after the crash and no errors were reported. Thanks guys!

    Read the article

  • apache sendmail: trying to change user "from" address from apache to domain account

    - by Wes
    I apologize if I am asking a question already answered, but my problem isn't really that I haven't found an answer. I have, in fact, found a half-dozen different "solutions" to my problem, tried them all, in various combinations, and have been consistently unsuccessful.

    The goal

    All I want to do is change the envelope "from" address for all email sent from [email protected] to [email protected], always.

    What I've already done

    I am running Apache, PHP, and sendmail on CentOS 5.5, [email protected]. We have an SMTP server at 192.168.0.4. The domain's email accounts are all at @domain.org. I have successfully set up "smart host" using this line in the sendmail.mc file:

        define(`SMART_HOST', `192.168.0.4')dnl

    Then I set up masquerading, and was hopeful this would solve it. I have this in the .mc file:

        FEATURE(`masquerade_entire_domain')dnl
        FEATURE(`masquerade_envelope')dnl
        FEATURE(`allmasquerade')dnl
        MASQUERADE_AS(`domain.org')dnl
        MASQUERADE_DOMAIN(`domain.org.')dnl
        MASQUERADE_DOMAIN(`localhost.localdomain.')dnl

    This rewrites "to" addresses, but not "from" addresses. Testing from the command line:

        sendmail -v [email protected]

    always shows the mail as coming from the local user (in this case root, or my local user account). I had read that the sendmail command sometimes bypasses masquerading. Nevertheless, using the "mail" command has the same result. After that, I have explored several "solutions", including:

        mailertable
        virtusertable
        FEATURE(`accept_unresolvable_domains')dnl
        LOCAL_DOMAIN(`localhost.localdomain')dnl
        FEATURE(`genericstable')dnl
        /etc/mail/access file
        /etc/mail/local-host-names file
        /etc/mail/trusted-users file

    All to no effect.

    The last thing I've tried

    So, I decided to go in a different direction, and try to set the envelope "from" address via PHP, using either the configuration in /etc/php.ini, or adding the -f parameter to the mail() function or to the sendmail command. If I run this command:

        sendmail -v -f [email protected] [email protected]

    I get this error in /var/log/maillog:

        Mar 30 08:56:16 localhost sendmail[24022]: p2UCuE8w024022: [email protected], size=5, class=0, nrcpts=1, msgid=<[email protected]>, relay=user@localhost
        Mar 30 08:56:19 localhost sendmail[24022]: p2UCuE8w024022: [email protected], [email protected] (500/502), delay=00:00:05, xdelay=00:00:03, mailer=relay, pri=30005, relay=[192.168.0.4] [192.168.0.4], dsn=5.1.1, stat=User unknown
        Mar 30 08:56:19 localhost sendmail[24022]: p2UCuE8w024022: p2UCuE8x024022: DSN: User unknown
        Mar 30 08:56:23 localhost sendmail[24022]: p2UCuE8x024022: [email protected], delay=00:00:04, xdelay=00:00:04, mailer=relay, pri=31029, relay=[192.168.0.4] [192.168.0.4], dsn=2.0.0, stat=Sent (Ok: queued as B5E2E40E0A2)

    which is basically a "User unknown" 550 error.

    Help

    Please help. What do I need to change? Should I just start over in the sendmail.mc file? It has a ton of config options stuffed in it, over days of trying things. Why is changing the envelope "from" address via the command line generating a "User unknown" error?
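    For illustration, a minimal sketch of the genericstable approach that is usually paired with the masquerading features already listed above: map local sender names to the desired outside address and rebuild the maps. The GENERICS_DOMAIN value and the account names are assumptions standing in for the real hostname and users; paths are the common CentOS defaults:

        # In sendmail.mc - rewrite sender addresses (header and envelope) of local users
        FEATURE(`genericstable')dnl
        GENERICS_DOMAIN(`localhost.localdomain')dnl

        # /etc/mail/genericstable - map local users to the desired "from" address
        apache    user@domain.org
        root      user@domain.org

        # Rebuild the map and the config, then restart sendmail
        makemap hash /etc/mail/genericstable < /etc/mail/genericstable
        make -C /etc/mail && service sendmail restart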

    Read the article

  • Radeon HD4850 serious issues when using DirectX 10

    - by ricsmania
    Hello, I have a problem with my video card. Whenever I run a DirectX 10 game, it works for a few seconds (10 or so) and then starts displaying nothing but big polygons. I have tested this with Crysis and Resident Evil 5, both have the same problems. The same games running under DirectX 9 work fine, except for some small black squares once in a while. I have the following specs: Asus P7P55D LE Intel Core i5 750 Sapphire Radeon HD4850 1GB 2x2GB Patriot Viper II Sector 5, DDR3 1600 MHz OCZ Stealth X Stream 500SXS 500W At first I thought it could be the video card overheating (it has stock cooling), but the game crashes even when it's running at 50 degrees C, and it's never been higher than 70. I also thought it could be the PSU, but as far as I know 500W is enough for this computer, especially because I haven't overclocked anything. My OS is Windows 7 X64 and I am using Catalyst 10.10, but I have also tried many older versions with no success. I don't think there is a problem with the card itself, or else it wouldn't run DirectX 9 games I believe. I have spent many hours searching for a solution but I couldn't, so any help is appreciated. Thank you. EDIT: I did some further investigation about the problem, and it seems taspeotis was right, it might be related to memory. I slightly underclocked the memory from 993 to 965 MHz and the problem went away completely. Both the black squares using DirectX 9 and the weird polygons using DirectX 10. I was using RE DirectX 10 Benchmark, as it consistently crashed around the same point, and now I can play the full benchmark with no artifacts at all. Unfortunately, the underclock has an obvious hit in performance. Although it's not critical, it's definitely noticeable. So, if the video memory test software showed no erros, but the card needs an underclock to work, what might be the problem? Temperature? Voltage? By the way, I couldn't find what the default voltage for this card is. And what is a good software to try and increase it? I tried Ati Tray Tools but it has a bug that increases the clock speed dramatically whenever I change something in the Overclock tab, so I'm afraid it might fry my card. Worst case scenario, if I don't find I solution I will try to slightly increase the GPU clock to compensate for the memory clock. Thank you again.

    Read the article

  • How do large blobs affect SQL delete performance, and how can I mitigate the impact?

    - by Max Pollack
    I'm currently experiencing a strange issue that my understanding of SQL Server doesn't quite mesh with. We use SQL as our file storage for our internal storage service, and our database has about half a million rows in it. Most of the files (86%) are 1mb or under, but even on fresh copies of our database where we simply populate the table with data for the purposes of a test, it appears that rows with large amounts of data stored in a BLOB frequently cause timeouts when our SQL Server is under load. My understanding of how SQL Server deletes rows is that it's a garbage collection process, i.e. the row is marked as a ghost and the row is later deleted by the ghost cleanup process after the changes are copied to the transaction log. This suggests to me that regardless of the size of the data in the blob, row deletion should be close to instantaneous. However when deleting these rows we are definitely experiencing large numbers of timeouts and astoundingly low performance. In our test data set, its files over 30mb that cause this issue. This is an edge case, we don't frequently encounter these, and even though we're looking into SQL filestream as a solution to some of our problems, we're trying to narrow down where these issues are originating from. We ARE performing our deletes inside of a transaction. We're also performing updates to metadata such as file size stats, but these exist in a separate table away from the file data itself. Hierarchy data is stored in the table that contains the file information. Really, in the end it's not so much what we're doing around the deletes that matters, we just can't find any references to low delete performance on rows that contain a large amount of data in a BLOB. We are trying to determine if this is even an avenue worth exploring, or if it has to be one of our processes around the delete that's causing the issue. Are there any situations in which this could occur? Is it common for a database server to come to the point of complete timeouts when many of these deletes are occurring simultaneously? Is there a way to combat this issue if it exists? (cross-posted from StackOverflow )
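    For what it's worth, one common mitigation while the root cause is investigated is to delete in small batches so each transaction stays short and the ghost-cleanup and logging work is spread out - a rough T-SQL sketch (the table and column names are made up for illustration and are not taken from the question):

        -- Delete in batches of 100 rows until nothing is left to delete.
        DECLARE @rows INT = 1;
        WHILE @rows > 0
        BEGIN
            DELETE TOP (100) FROM dbo.FileStore
            WHERE IsMarkedForDeletion = 1;
            SET @rows = @@ROWCOUNT;
        END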

    Read the article

  • Operating systems on SD cards

    - by HisDudeness
    I was getting some wild ideas over the last few days, like putting some operating systems on SD cards rather than on my hard drive. I'll go into the details now and explain what led me to consider this probably abominable decision. I am on a laptop (that means I have a native SD-card reader) which is currently running a cross-distro setup, with a bunch of Linux systems (placed in dedicated ext4 logical partitions inside a huge extended one) managed by a single GRUB. To this day, my laptop hasn't even seen a Windows system through binoculars. I was thinking about placing the whole OS part of my setup on a Secure Digital card in order to save my entire 500 GB hard drive for documents, music, videos and so on, and to be able to just remove the SD card and boot my system on another computer too, as well as having the possibility of booting other systems on mine by just plugging in another SD card, without having to keep it permanently in my PC. Also, in the remote case that in the near future I wanted to boot Windows 8 on it, I have read that it causes major boot incompatibility issues with other systems by requiring a digital signature in order for them to start. By having it on a removable drive, I could just set it aside when I don't need it and swap its card with the Linux one, and so avoid any obstacle to their booting. Now, my questions are: I know that, unlike traditional rotating disk drives, integrated-circuit ones have a limited lifespan in terms of cluster rewrites. Is that an obstacle to this kind of usage? I mean, some Ultrabooks are using SSDs now - is it the same issue, or are there differences between Solid State Drives and Secure Digital cards in that sense? Maybe having them store system files which sit in fixed positions (making even usage of the clusters impossible), constantly being re-read and updated, just wears them out quickly - does it? Second question: are all motherboards and BIOSes able to boot from SD cards just like they are from USB pen drives (provided the card reader is USB-connected)? Or can bootloaders like GRUB not be installed on SD cards and still work? If they can't, would installing GRUB to the MBR and pointing its boot entry at the SD card be a solution? Would it work? Are there any other problems with installing OSes on a Secure Digital card?

    Read the article

  • Google Chrome issues

    - by Ben Hooper
    I'm an avid user of Chrome and would defend it to the death but there is no denying that it has issues, and I've been experiencing a lot of them recently. Issues Stuck tabs Issue: Around 60% of the time (higher when the system has just started), when Chrome is launched, some or all of the pinned tabs and startup pages will get stuck in the counter-clockwise-rotating "contacting server" mode and never snap out of it. Fix: Quit and launch Chrome until they load properly (this really isn't good, as quitting and launching Chrome can and has cleared all pinned tabs). Extra Information: If you stop the loading and re-enter the URL then the page will load perfectly. The amount of pinned tabs or startup pages seems to be irrelevant, but I could be wrong.   "Downloaded/out of" Issue: Each item in the download bar has a downloaded status section, formatted as "downloaded/out of". Sometimes Chrome doesn't display the "out of" part.   Collapsed settings windows Issue: In Chrome version 19(ish)+, settings are configured via overlayed / popup windows. Sometimes, the window will open fully collapsed. Fix: Resize Chrome's window or open the developer tools. "Network Error" with large downloads Issue: Sometimes, when downloading large files (500MB+) Chrome will download the entire file, the download status will freeze (for example, "1 GB/1 GB") for a few minutes, report "Network error", and delete the .crdownload file. Extra Information: The same file from the same website on the same computer downloads perfectly in other browsers. The website and file type seem to be irrelevant.   Information: I've experienced some of these issues on my home PC and some on my work PC, both of which are Windows 7 Ultimate x64. The only things that link them are my Google account (all settings are synced). Updating Chrome hasn't worked. Most of these issues presented themselves around about version 17 and have continued right through to 21 (current). Uninstalling Chrome, deleting all data in %programFiles(x86)%\Google and %localAppData%\Google, and reinstalling hasn't worked. I have yet to see whether disabling all extensions would make a difference, but it's hard to diagnose as these issues don't occur 100% of the time.   In my case, I don't know if there's an actual solution. I'm just curious as to whether anyone else is having similar issues to those that I'm experiencing. At least then I'll know it's an issue with Chrome itself.

    Read the article

  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using Intel Matrix Storage RAID solution that allowed me to use my 5 1TB drives for two RAID volumes. First one a 1TB RAID 0 striped across all 5 drives and second one a RAID 5 across the rest of the free space on all drives (around 2.85TB usable space). The RAID 0 I use for OS, applications and games while the RAID 5 I use as a more-permanent type storage (photos, etc). Now I do realize that running the OS and applications on RAID 0 across 5 drives is very dangerous, which is what brings up the following question. Is there a reliable freeware realtime backup application that can backup a set of folders from one drive to another drive (no online backups needed)? I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, Crash Plan) but none meet my requirements: Low CPU and RAM usage. Realtime Backups - as soon as a file is modified in the source folder, it is added to the backup queue which will be processed with the lowest priority when the CPU is idle. This backup queue should persist in cases of computer restarts (ie: the source and destination folders should always have the same set of files, except for the ones waiting in the backup queue). Incremental Backups - if only 10 bytes changed in a 1GB file, the app should only copy those 10 new bytes. Ability to back up locked and opened files (some apps, like Yadis, can't back up critical files like browser favorites). Ability to run as a service (no need for any user to log-in to have the app started). Optional requirements: Compression of the destination into a well-known format (RAR, Zip) that can be directly read without the use of the application. Preset source folders (such as Browser Favorites, Game Saves, Application Settings, etc). The idea is to use RAID 0 array as "semi-persistent RAM-like" storage which in case of a failure can be quickly rebuilt by reinstalling the OS, apps and games and copying over the settings, saves, favorites from the RAID 5. I'm also thinking of taking this RAID 0 as RAM idea to the extreme with SSDs (as soon as we get some nice 6Gb/s SATA III SSDs out there), where a couple of SSDs chained in RAID 0 will work as yet another semi-persistent cache layer sitting between the RAM and the HD. I'm just hoping there already exists an application that satisfies these requirements... otherwise I'll have to write one myself, which I would prefer not to do.

    Read the article

  • Server 2008/Windows 7/Samba Unspecified error 80004005

    - by ancillary
    I have a Samba share on a LAN with 2008 PDC/DNS. Smb authenticates with AD and I have several Win7 Machines that can connect fine. I recently added a couple of new computers to the LAN which were imaged the same way (same software, etc.; different hardware so different drivers) as the other machines and they have the same policies set. I can not get the new machines to connect to the samba share no matter what. I am always met with either Unspecified Error 0x80004005 or Network Path not found. I've turned off the firewall; set LANMAN auth to respond to NTLM only/send LM & NTLM responses/use NTLM session security if negotiated in Local Sec Policy SEcurity Options; tried both ip and hostname to connect. SMB log shows that authentication succeeds; but then connection is immediately killed by the client. tcpdump shows nothing remarkable except that when trying to connect from the client via hostname there is an unknown packet type error: ack 201 win 255 NBT Session Packet: Unknown packet type 0xABData: (41 bytes) Here's a couple of lines from that error: 11:18:37.964991 IP 001-client.domain.local.49372 > smb.domain.local.netbios-ssn: P 1670:2146(476) ack 201 win 255 NBT Session Packet: Unknown packet type 0xABData: (41 bytes) [000] AA 46 96 FA D5 99 33 75 0C C4 20 CE 26 42 F3 61 \252F\226\372\325\2313u \014\304 \316&B\363a [010] F0 8C FB 65 18 17 40 A5 DB 42 BB 94 37 53 92 EC \360\214\373e\030\027@\245 \333B\273\2247S\222\354 [020] 55 98 7F C4 AE 3D 6B 10 C4 U\230\177\304\256=k\020 \304 11:18:37.964998 IP smb.domain.local.netbios-ssn > 001-client.domain.local.49372: . ack 2146 win 100 Here's smb.conf just in case (though don't see how if other machines are working fine): [global] workgroup = MYDOMAIN realm = MYDOMAIN.LOCAL server string = domain|smb share interfaces = eth1 security = ADS password server = 192.168.1.3 log level = 2 log file = /var/log/samba/%m.log smb ports = 139 strict locking = no load printers = No local master = No domain master = No wins server = 192.168.1.3 wins support = Yes idmap uid = 500-10000000 idmap gid = 500-10000000 winbind separator = + winbind enum users = Yes winbind enum groups = Yes winbind use default domain = Yes [samba-share1] comment = SMB Share path = /home/share/smb/ valid users = @"MYDOMAIN+Domain Users" admin users = @"MYDOMAIN+Domain Admins" guest ok = no read only = No create mask = 0765 force directory mode = 0777 Any ideas what else I could try or look for? Or what might be the problem? Thanks.

    Read the article

  • SQL Server database filled the hard drive and freeing up space isn't possible

    - by Jon
    I have a database in SQL Server 2008 on a 1 TB hard drive and it filled the drive; there is only 4 KB free. The MDF file is 323 GB and the LDF is 653 GB. The hard disk this DB is on has no other files on it other than the MDF and LDF, so it's impossible to free up any space on the drive. The main hard disk is smaller, but there is enough room to transfer the MDF to that drive, in case that helps. This server is overseas at a customer site and it's not possible at the moment to add more disk space to the server. It's also not possible to delete any records because the DB is in a failed mode (due to no disk space) and it doesn't respond to most commands. The DB is currently in full recovery mode, which is why the LDF file is so large. This DB really doesn't need to be in full recovery, so going forward we plan on switching it to simple mode, which will save us a lot of space. I also don't care about losing the LDF file, but I need all of the data. I've spent a lot of time looking for a way out of this problem, but everything I've found involves either freeing up disk space or adding more disk space, neither of which is an option at this time. I'm stuck and any help would be greatly appreciated. I get the following log when trying to switch the DB to online mode:

        Msg 945, Level 14, State 2, Line 3
        Database 'DBNAME' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        Msg 5069, Level 16, State 1, Line 3
        ALTER DATABASE statement failed.
        Msg 1101, Level 17, State 12, Line 3
        Could not allocate a new page for database 'DBNAME' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    I've found the following solutions, but none work due to having no disk space on that drive, and since the DB is in a failed state I can't run most commands:

    - DBCC SHRINKFILE - can't be run because doing a 'use DBNAME' fails
    - Detaching the DB and then changing the location of the MDF/LDF files - this fails because the DB is in an offline mode, so you can't run detach.

    I'm at a loss about what else to try. Thanks.
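    For illustration, one approach that is sometimes possible in this situation is to give the log somewhere to grow on the drive that still has free space, so the database can come online long enough to be switched to simple recovery and shrunk - a hedged T-SQL sketch (the path D:\temp_log.ldf, the database name and the logical file names are placeholders/assumptions; it relies on the instance still accepting ALTER DATABASE against this database):

        -- Add a second log file on the drive that still has space (path is an assumption).
        ALTER DATABASE DBNAME
            ADD LOG FILE (NAME = DBNAME_log2, FILENAME = 'D:\temp_log.ldf', SIZE = 1GB);

        -- Once the database is online again, drop to simple recovery and shrink the original log.
        ALTER DATABASE DBNAME SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (DBNAME_log, 1024);   -- logical log file name is an assumption

        -- The temporary log file can then be removed.
        ALTER DATABASE DBNAME REMOVE FILE DBNAME_log2;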

    Read the article

  • Plesk Postfix Mail Server 9.5.4 very heavy load, 1000s of processes

    - by Eugene van der Merwe
    Our Plesk Linux Ubuntu 64-bit mail server has extremely high load and we don't know how to isolate it. The load was okay until two weeks ago, but since then it has seriously deteriorated. The mail server has been running for years and we have had sporadic performance issues. Normally we reduce the load by turning off all SPAM checks until the problem is sorted (which sometimes resolves itself). Currently we have turned off real-time block lists and SPF checking, and we have attempted to turn off SpamAssassin. No matter what we do, the SpamAssassin check box stays ticked in the GUI. Out of desperation we have done /etc/init.d/psa-spamassassin stop. For years we haven't been able to run SpamAssassin because it kills the server. We would like to use it, but performance is more important for now. We cannot turn off Greylisting: the moment we turn off Greylisting our help desk is inundated with calls. Out of desperation we investigated truncating the Greylisting database, which is now 2.5 GB big, but we abandoned this after noticing that turning off Greylisting doesn't improve the performance at all. We have no anti-virus. It's just more load, and Dr. Web never really worked that well for us. But we'll try that if it will make a difference. We have implemented Postfix Anvil. This seems to have made the situation worse, so we disabled it. We're not sure if this is the case. Our current mail server is configured to forward all SMTP to a relay server. We did so to reduce the load. This helped a lot because outgoing queues are generally empty. We are running in an Expand configuration. The mail server has about 12,000 accounts, of which maybe half are active. We have read through this document: http://www.postfix.org/STRESS_README.html but there are too many settings and we don't know which ones to choose. Please assist urgently. We need advice on how to fix this problem before all our clients abandon us. The only clue we have is that there are 100s of these processes:

        30 13205 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
        30 13207 1 0 11:38 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
        30 13208 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10026 before-remote
        30 13209 1 0 11:38 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10026 before-remote
        30 13213 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
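    As background, the STRESS_README linked above largely boils down to a handful of main.cf settings that shed load when the SMTP daemons are saturated - a hedged sketch of the kind of values it suggests (these are illustrative starting points taken from that document's overload advice, not tuned for this particular server):

        # /etc/postfix/main.cf - trade some politeness for survival under overload
        smtpd_timeout = 10s
        smtpd_hard_error_limit = 1

        # apply the change
        postfix reload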

    Read the article

  • Nagios shell script cannot be executed

    - by MeinAccount
    I'm trying to monitor GitLab with nagios. I've created the following command definition and shell script but when checking the service I'm receiving the following e-mail. How can I solve this? The file is executable. [...] nagios : 3 incorrect password attempts ; TTY=unknown ; PWD=/ ; USER=git ; COMMAND=/bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh Command definition: define command { command_name custom_check_gitlab command_line /var/lib/nagios/custom_plugins/check_gitlab.sh } Shell script: #! /bin/sh # [...] RAILS_ENV="production" # Script variable names should be lower-case not to conflict with internal /bin/sh variables such as PATH, EDITOR or SHELL. app_root="/home/git/gitlab" app_user="git" unicorn_conf="$app_root/config/unicorn.rb" pid_path="$app_root/tmp/pids" socket_path="$app_root/tmp/sockets" web_server_pid_path="$pid_path/unicorn.pid" sidekiq_pid_path="$pid_path/sidekiq.pid" ### Here ends user configuration ### # Switch to the app_user if it is not he/she who is running the script. if [ "$USER" != "$app_user" ]; then sudo -u "$app_user" -H -i $0 "$@"; exit; fi # Switch to the gitlab path, if it fails exit with an error. if ! cd "$app_root" ; then echo "Failed to cd into $app_root, exiting!"; exit 1 fi ### Init Script functions check_pids(){ if ! mkdir -p "$pid_path"; then echo "Could not create the path $pid_path needed to store the pids." exit 1 fi # If there exists a file which should hold the value of the Unicorn pid: read it. if [ -f "$web_server_pid_path" ]; then wpid=$(cat "$web_server_pid_path") else wpid=0 fi if [ -f "$sidekiq_pid_path" ]; then spid=$(cat "$sidekiq_pid_path") else spid=0 fi } # Checks whether the different parts of the service are already running or not. check_status(){ check_pids # If the web server is running kill -0 $wpid returns true, or rather 0. # Checks of *_status should only check for == 0 or != 0, never anything else. if [ $wpid -ne 0 ]; then kill -0 "$wpid" 2>/dev/null web_status="$?" else web_status="-1" fi if [ $spid -ne 0 ]; then kill -0 "$spid" 2>/dev/null sidekiq_status="$?" else sidekiq_status="-1" fi } check_pids check_status if [ "$web_status" != "0" -a "$sidekiq_status" != "0" ]; then echo "GitLab is not running." exit 2 fi if [ "$web_status" != "0" ]; then printf "The GitLab Unicorn webserver is \033[31mnot running\033[0m.\n" exit 1 fi if [ "$sidekiq_status" != "0" ]; then printf "The GitLab Sidekiq job dispatcher is \033[31mnot running\033[0m.\n" exit 1 fi if [ "$web_status" = "0" -a "$sidekiq_status" = "0" ]; then printf "GitLab and all it's components are \033[32mup and running\033[0m.\n" exit 0 fi
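    The log line quoted above ("nagios : 3 incorrect password attempts ... USER=git ; COMMAND=/bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh") suggests the nagios user is hitting the script's internal "sudo -u git" re-exec without passwordless rights. For illustration, a hedged sketch of the sudoers rule commonly added for this kind of check; the command string simply mirrors the COMMAND= value from the log, and the file name under /etc/sudoers.d is an assumption:

        # /etc/sudoers.d/nagios-gitlab  (edit with: visudo -f /etc/sudoers.d/nagios-gitlab)
        nagios ALL=(git) NOPASSWD: /bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh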

    Read the article

  • How to Set Up an Ubuntu Mail Server with Google Apps?

    - by Apreche
    I have a domain, let's call it foobar.com. All of the MX records for foobar.com point to Google's mail servers because I am using Google Apps for your domain to manage it. It's great because everyone gets all the advantages of GMail, but our e-mail addresses aren't @gmail.com. I also have a server. Primarily, it's a web server, but it also serves other things. One of the things it serves is the web site for foobar.com and also sites for various virtual hosts such as shop.foobar.com and forum.foobar.com. The server is running Ubuntu 8.04, because I like using LTS releases in production. The thing is, there are various applications running on the server that need the ability to send out emails. Various applications, like the cron jobs, send me e-mails in case of errors. Some of the web applications need to send e-mail to users when they forget their passwords, to confirm new registered users, etc. Lastly, it's nice to be able to send e-mail from the command line using the mail command, or mutt. How can I setup the mail on the web server to go through the Google apps mail servers? I don't need the web server to receive mail, though that would be cool. I do need it to be able to send mail as any legitimate address @foobar.com. That way the forum application can send mails with [email protected] in the from field, and the ecommerce application will have [email protected] in the from field. Also, by sending the mail through the Google servers, we can avoid a lot of the problems with the e-mails being blocked by various spam filters on the web. Google's SMTP servers are trusted a lot more than mine would be. I'm pretty good with administering Linux systems, but I am absolutely brain dead when it comes to e-mail. I need step by step directions from beginning to end on how to set this up. I need to know every thing to install, and every single change to the configuration files that is necessary. I have tried following various howtos and guides in the past, but none of them were quite right. Either they didn't work at all, or they offered a configuration that is not what I wanted. Please help. Thanks.
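    For reference, the usual shape of the answer here is Postfix with a relayhost pointed at Google plus SASL authentication - a hedged sketch (the account and password are placeholders; smtp.gmail.com:587 is the commonly documented submission endpoint, and TLS plus a real account on the Google Apps domain are assumed):

        # /etc/postfix/main.cf
        relayhost = [smtp.gmail.com]:587
        smtp_use_tls = yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous

        # /etc/postfix/sasl_passwd  (then: postmap /etc/postfix/sasl_passwd && service postfix restart)
        [smtp.gmail.com]:587    someuser@foobar.com:password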

    Read the article

  • GPU not powering on

    - by Lerp
    So I got home from work yesterday and went to turn my computer on as per usual, only to be greeted by this: the screens remained black, so I rebooted; I got as far as GRUB before my screens went black again. I rebooted again; they didn't turn on. I rebooted again; I got as far as the Windows login screen. This time I unplugged it, opened it up and cleaned it, but to no avail - the GPU was still being temperamental. I repeated the process of turning it off and on several times until one time it worked as normal. I happily played games for the rest of the night (5-6 hours?) thinking everything was jolly good now. Well, I got home from work today and it is doing the SAME thing.

    Summary and additional points:

    - Screens sometimes turn on before shortly turning off, and sometimes they don't; I cannot seem to determine any pattern for when they do or do not turn off.
    - The build has been working fine for about 8 months now, so I know it's not hardware incompatibility.
    - If I plug a monitor into the on-board graphics I can use the PC normally (just in low graphics mode).
    - I have two monitors, and it's a case of both turning on or neither, so I think I can rule out the monitors being dead.
    - I have tried replacing the GPU.
    - I have tried replacing the RAM.
    - I have tried flashing the CMOS.
    - I have tried cleaning the inside.
    - The GPU is a Radeon HD 7870.

    My questions: Is my GPU dead? It's not very old and I would rather have a way of being certain it's the GPU before I fork out money I can't really afford; I do not have a second PC here to test it in. If my GPU is dead, why does it sometimes work and sometimes not?

    Update: Okay, it was working again... at least I thought it was. I left it running for 10-20 minutes with the screens black, turned it off and straight back on, and it worked for all of 10 minutes. I was then updating the post in joy, thinking I could play some games for the rest of the night, when BAM, it went black again. So yeah, I don't know :C

    Read the article

  • RAID1 with active and spare partition

    - by Daniel Baron
    I am having the following problem with a RAID1 software RAID partition on my Ubuntu system (10.04 LTS, 2.6.32-24-server in case it matters). One of my disks (sdb5) reported I/O errors and was therefore marked faulty in the array. The array was then degraded to one active device. Hence, I replaced the hard disk, cloned the partition table and added all the new partitions to my RAID arrays. After syncing, all partitions ended up fine with 2 active devices - except one of them. The partition which reported the faulty disk before, however, did not include the new partition as an active device but as a spare disk:

        md3 : active raid1 sdb5[2] sda5[1]
              4881344 blocks [2/1] [_U]

    A detailed look reveals:

        root@server:~# mdadm --detail /dev/md3
        [...]
            Number   Major   Minor   RaidDevice   State
               2       8       21        0        spare rebuilding   /dev/sdb5
               1       8        5        1        active sync        /dev/sda5

    So here is the question: how do I tell my RAID to turn the spare disk into an active one? And why has it been added as a spare device? Recreating or reassembling the array is not an option, because it is my root partition. And I cannot find any hints on that subject in the Software RAID HOWTO. Any help would be appreciated.

    Current solution: I found a solution to my problem, but I am not sure that this is the proper way to do it. Having a closer look at my RAID I found that sdb5 was always listed as a spare device:

        mdadm --examine /dev/sdb5
        [...]
              Number   Major   Minor   RaidDevice   State
        this     2       8      21         2        spare         /dev/sdb5
           0     0       0       0         0        removed
           1     1       8       5         1        active sync   /dev/sda5
           2     2       8      21         2        spare         /dev/sdb5

    so re-adding the device sdb5 to the array md3 always ended up adding the device as a spare. Finally I just recreated the array:

        mdadm --create /dev/md3 --level=1 -n2 -x0 /dev/sda5 /dev/sdb5

    which worked. But the question remains open for me: is there a better way to manipulate the summaries in the superblock and to tell the array to turn sdb5 from a spare disk into an active disk? I am still curious for an answer.
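    For comparison, a hedged sketch of the less drastic sequence that is usually tried before recreating the array: remove the stuck spare, wipe its superblock so the stale "spare" role is forgotten, and add it back. Device names are taken from the question; zeroing the wrong superblock is destructive, so this assumes sdb5 really is the replacement partition:

        mdadm /dev/md3 --remove /dev/sdb5
        mdadm --zero-superblock /dev/sdb5      # forget the stale 'spare' metadata on the partition
        mdadm /dev/md3 --add /dev/sdb5
        cat /proc/mdstat                       # the partition should now rebuild into the active slot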

    Read the article

  • How to conform to update-rc.d with LSB standard?

    - by user34881
    This is a migrated question from Stack Overflow; as I was told, this is the place for it: http://stackoverflow.com/questions/2263567/how-to-conform-to-update-rc-d-with-lsb-standard

    I have set up a simple script to back up some directories. While I haven't had any problems setting up the functionality, I'm stuck with adding the script to the rcX.d dirs using update-rc.d. My script:

        #! /bin/sh
        ### BEGIN INIT INFO
        # Provides:          backup
        # Required-Start:    backup
        # Required-Stop:
        # Should-Stop:
        # Default-Start:     0 6
        # Default-Stop:
        # Description:       Backs up some dirs
        ### END INIT INFO

        check_mounted() {
            # Check if HD is mounted
        }

        do_backup() {
            if check_mounted; then
                # Some rsync statements.
            fi
        }

        case "$1" in
            start)
                do_backup
                ;;
            restart|reload|force-reload)
                echo "Error: argument '$1' not supported" >&2
                exit 3
                ;;
            stop|"")
                # No-op
                ;;
            *)
                echo "Usage: backup [start]" >&2
                exit 3
                ;;
        esac
        :

    Using update-rc.d backup start 10 0 6 . I get the following warnings and errors:

        update-rc.d: warning: backup start runlevel arguments (none) do not match LSB Default-Start values (0 6)
        update-rc.d: warning: backup stop runlevel arguments (0 6.) do not match LSB Default-Stop values (none)
        update-rc.d: error: start|stop arguments not terminated by "."

    The syntax I try to use is the following: update-rc.d [-n] <basename> start|stop NN runlvl [runlvl] [...] . Google wasn't that helpful at finding a solution. How can I correctly set up a script and add it via update-rc.d? I'm using Ubuntu 9.10.

    UPDATE: Using update-rc.d backup start 10 0 6 . stop 10 0 . the error disappears. The warnings about default values persist:

        update-rc.d: warning: backup start runlevel arguments (none) do not match LSB Default-Start values (0 6)
        update-rc.d: warning: backup stop runlevel arguments (0 6 0 6) do not match LSB Default-Stop values (none)

    It even gets added to the appropriate rcX dirs, but it still does not get executed...
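    For what it's worth, the conventional way to express "run this at shutdown and reboot" under LSB is to declare the work as a stop action for runlevels 0 and 6, so the header and the update-rc.d arguments agree - a hedged sketch, with runlevel choices mirroring the question (the script's case statement would then run the backup on "stop"):

        ### BEGIN INIT INFO
        # Provides:          backup
        # Required-Start:
        # Required-Stop:     $local_fs
        # Default-Start:
        # Default-Stop:      0 6
        # Description:       Backs up some dirs at shutdown/reboot
        ### END INIT INFO

        # ...and in the case statement, do the work on "stop":
        #   stop) do_backup ;;

        # Register K links in rc0.d and rc6.d to match the header:
        update-rc.d backup stop 10 0 6 .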

    Read the article

  • startup Error for Zend Server CE

    - by Jamison
    Hello! I've got a strange startup error for Zend Server CE - it's probably easy to fix, but I don't have much experience with Zend Server! I'm running the latest OSX 10.6.6 and the latest Zend Server CE for Mac. When I run the "start" command from the command line, here is what I get: /usr/local/zend/bin/apachectl start [OK] spawn-fcgi: child spawned successfully: PID: 4206 /usr/local/zend/bin/shell_functions.rc: line 133: 4210 Bus error $WATCHDOG -i $BINARY 1>&3 2>&4 /usr/local/zend/bin/shell_functions.rc: line 133: 4211 Bus error $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4 Starting Zend Server GUI [Lighttpd] [FAILED] /usr/local/zend/bin/lighttpdctl.sh: line 46: 4212 Bus error $WATCHDOG -i $BINARY Starting MySQL SUCCESS! /usr/local/zend/bin/shell_functions.rc: line 133: 4304 Bus error $WATCHDOG -i $BINARY 1>&3 2>&4 /usr/local/zend/bin/shell_functions.rc: line 133: 4425 Bus error $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4 Starting Java bridge [FAILED] /usr/local/zend/bin/java_bridge.sh: line 39: 4426 Bus error $WATCHDOG -i $BINARY Zend Server started... The challenge is that ZEND SERVER wont open the GUI with this error, and seemingly I can click on Zend Server in the Applications folder and it opens for a second and immediately closes. I've made sure that Web Sharing is turned off to avoid conflicts, and I've run Disk Utility from my recovery disk to make sure there are no file system errors. Here is what the lines that are referenced in the errors have in terms of code: shell_functions.rc: (starting on line 132 - the error message says line 133...): launch() { if [ -z "$DEBUG" ]; then exec 3>/dev/null 4>&3 else exec 3>&1 4>&2 fi $WATCHDOG -i $BINARY 1>&3 2>&4 RET=$? if [ $RET -eq 0 ];then $ECHO_CMD "$BINARY watchdog is up and running.. ${OK_COLOR}[OK]${T_RESET}" return $RET else #$WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY >> "$PREFIX/logs/watchdog_$BINARY.log" 2>&1 $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4 report $? "Starting" fi } _kill() { $WATCHDOG -i $BINARY > /dev/null 2>&1 if [ $? -eq 1 ];then $ECHO_CMD "$BINARY is not running" else $WATCHDOG -t $BINARY > /dev/null 2>&1 report $? "Stopping" fi } lighttpdctl.sh: (starting on line 45 - the error message says line 46...): status() { $WATCHDOG -i $BINARY } case "$1" in start) start status ;; stop) stop ;; restart) stop sleep 1 start ;; status) status ;; *) usage exit 1 esac exit $? java_bridge.sh: (starting on line 38 - the error message says line 39...): status() { $WATCHDOG -i $BINARY } Question: "Watchdog" is library in this zend BIN folder - it seems to handle error reporting? all the errors in my start command seem to deal with this Watchdog thing, but I don't know what to do about it... Thanks!

    Read the article

  • Nginx Proxying to Multiple IP Addresses for CMS' Website Preview

    - by Matthew Borgman
    First-time poster, so bear with me. I'm relatively new to Nginx, but have managed to figure out what I've needed... until now. Nginx v1.0.15 is proxying to PHP-FPM v.5.3.10, which is listening at http://127.0.0.1:9000. [Knock on wood] everything has been running smoothly in terms of hosting our CMS and many websites. Now, we've developed our CMS and configured Nginx such that each supported website has a preview URL (e.g. http://[WebsiteID].ourcms.com/) where the site can be, you guessed it, previewed in those situations where DNS doesn't yet resolve to our server, etc. Specifically, we use Nginx's Map module (http://wiki.nginx.org/HttpMapModule) and a regular expression in the server_name of the CMS' server{ } block to 1) lookup a website's primary domain name from its preview URL and then 2) forward the request to the "matched" primary domain. The corresponding Nginx configuration: map $host $h { 123.ourcms.com www.example1.com; 456.ourcms.com www.example2.com; 789.ourcms.com www.example3.com; } and server { listen [OurCMSIPAddress]:80; listen [OurCMSIPAddress]:443 ssl; root /var/www/ourcms.com; server_name ~^(.*)\.ourcms\.com$; ssl_certificate /etc/nginx/conf.d/ourcms.com.chained.crt; ssl_certificate_key /etc/nginx/conf.d/ourcms.com.key; location / { proxy_pass http://127.0.0.1/; proxy_set_header Host $h; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } (Note: I do realize that the regex in the server_name should be "tighter" for security reasons and match only the format of the website ID (i.e. a UUID in our case).) This configuration works for 99% of our sites... except those that have a dedicated IP address for an installed SSL certificate. A "502 Bad Gateway" is returned for these and I'm unsure as to why. This is how I think the current configuration works for any requests that match the regex (e.g. http://123.ourcms.com/): Nginx looks up the website's primary domain from the mapping, and as a result of the proxy_pass http://127.0.0.1 directive, passes the request back to Nginx itself, which since the proxied request has a hostname corresponding to the website's primary domain name, via the proxy_set_header Host $h directive, Nginx handles the request as if it was as direct request for that hostname. Please correct me if I'm wrong in this understanding. Should I be proxying to those website's dedicated IP addresses? I tried this, but it didn't seem to work? Is there a setting in the Proxy module that I'm missing? Thanks for the help. MB

    Read the article

  • OpenVPN multiple servers on the same subnet, high availability

    - by andre
    Hey everyone. Let me start by saying that my Linux experience isn't super awesome but I can usually find my way around things easily. Over at work we have an OpenVPN setup that's been due for some improvement for a while now. The main server (tap mode) runs in our office, behind a rather slow DSL connection. The main problem is that, since I'm usually out of the office, every time I want to access something on the virtual network I have to go through that server to get anywhere else. We have two servers up on 100 Mbit connections that we use for development and production purposes, about 3 more servers in the office (one of them behind a different T1 line for VOIP) and about two dozen clients who use the network on a daily basis from various locations. We've had situations where network routing (outside of our control) would not allow people to reach our main OpenVPN server whilst the other locations were connectable. Also any time someone outside the office wants to fetch something from any of the servers (say, a 500 MB code repository), a whopping 20 KB/s download speed is just unacceptable these days (did I mention slow DSL? ok). We had to implement traffic shaping on this server since maxing out this connection was fairly trivial. I had the thought of running two (or more) OpenVPN servers in the network. These would have to have the same subnet though, as our application relies on virtual network's IP addresses for some of its core functionality. The clients would also preferably retain the same IP addresses but that's not vital. For simplicity, lets call the current server office and the second server I'm setting up, cloud. Call the server on the T1 phone. This proved to be rather complex because as soon as I connect to cloud, I cannot see office. Any routes to a server that would go through office also do not work while I'm connected to cloud (no ping, nothing) and vice-versa. There's no rules for iptables that would be blocking the traffic either. Recently I came across this article on linuxjournal but the solution they provide seems to only cover the use of two servers and somewhat outdated (can't even find much documentation, their wiki is offline). They also state that adding more servers would be a complex task. Ideally I would like to keep the existing server office running the virtual network and also run the OpenVPN daemon on the cloud and phone servers (100 Mbit and very reliable connection, respectively) so that we're on safe ground in case of a hardware failure, DSL failure, etc. So, in essence, I'm looking for a highly available OpenVPN solution (fix, patch, hack, tweak, whatever you want to call it) that will accept connections on multiple hosts (2 or more) whilst keeping the same IP address subnet regardless of the server to which you connect to. Thanks for reading and sorry for the long post, I hope it gets the point across :P
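    As an aside, plain OpenVPN already covers a small piece of this: a client config can list several remote servers and fail over between them, which addresses the "office DSL is unreachable" case even though it does not merge the servers into one shared virtual network - a hedged client-side sketch (hostnames and port are placeholders, and tap mode is assumed to match the existing setup):

        # client.ovpn - try the fast server first, fall back to the office
        client
        dev tap
        proto udp
        remote cloud.example.com 1194
        remote office.example.com 1194
        resolv-retry infinite
        # OpenVPN tries the remotes in order and moves to the next if one is unreachable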

    Read the article

  • Should I use EXT4 or XFS to be able to 'sync'/back up to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor) Here's the setup, a brand new dedicated server (8GB RAM, some 140+ GB disk, Raid 1 via HW controller, 15000 RPM) it's a production web server (with MySQL in it, too, not just serving web requests); not a personal desktop computer or similar. Ubuntu Server 64bit 10.04 LTS We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3, via AWS' console. We are now migrating to the dedicated server and I want to be able to backup our data to Amazon's S3. The main reason being the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of: do a "simple" file-based backup with rsync, dumping the database' and other files, and uploading to amazon via S3 API commands, or to an EC2 instance, or something. do a file-system "freeze" (using XFS) with the usual ebs/ec2 snapshot tool to take part of the file system, take a snapshot, and upload it to Amazon. Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4? Or should I use something else? would then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do, anyway? any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of a XFS system that is already an EBS volume. My intention is to do it in a "real"/metal dedicated server. Update: I just saw this on Wikipedia: XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager. I had always assumed that I could choose 2 ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize these 2 options are more like it: With XFS: 1) do xfs_freeze; 2) copy the frozen files via, eg, rsync; 3) unfreeze xfs With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze xfs; 4) somehow backup the LVM snapshot. Thanks a lot in advance, Let me know if I need to clarify something.
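    For illustration, a hedged sketch of the first option in the "With XFS" list above - freeze, copy, unfreeze - with placeholder mount points and an assumed S3 upload step at the end (s3cmd is one common choice, not something stated in the question; note that writes to the filesystem block for as long as it stays frozen):

        #!/bin/sh
        # Freeze the XFS filesystem so the copy sees a consistent state
        xfs_freeze -f /data

        # Copy the frozen files to a location outside the frozen filesystem
        rsync -a --delete /data/ /backup/data-snapshot/

        # Unfreeze as soon as the copy is done
        xfs_freeze -u /data

        # Ship the copy to S3 (tool and bucket name are assumptions)
        s3cmd sync /backup/data-snapshot/ s3://my-backup-bucket/data/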

    Read the article

  • Suggestions for splitting server roles amongst Hyper-V virtual servers / RAID6 or RAID10? / AppAssure

    - by Anon
    We have 2 Hyper-V hosts at present running 1 virtual server that was converted from a physical box running all roles. My plan is to split the roles over various virtual machines, upgrading to the latest software versions as I go, and use the backup server as a standby in case the main server fails. AppAssure backup software has a feature called Virtual Standby, so the VHD's can be ready to be fired up on the backup server if necessary. Off-site backups will be done via external USB drive for now. I'm just seeking some input/suggestions into how I'm planning to split the roles out amongst various virtual servers. Also, I'm curious how to setup the storage on the servers. We do not have any NAS's, SAN'S or any budget for this. What would the best RAID level be to use? I'm thinking either RAID6 (which is currently used) however I'm concerned about the write speeds, or RAID10 but again I'm worried that I can only lose 1 drive (from the same mirror) as opposed to any 2 with RAID6. I realise I have a hot swap for this, but what if a further drive fails during a rebuild? Is the write penalty of RAID6 worth the extra reliability over RAID10? Or will it be too slow with all the roles I am planning, therefore RAID10 is my only real option? The reason for the needed redundancy is I am the only technician and I'm not always on-site. Options I've considered: 1) 5 drives in RAID6 set, 200gb for host OS, rest for VM storage. 1 drive for hot swap - this is how it is currently setup 2) 4 drives in RAID10 set, 200gb for host OS, rest for VM storage. 2 drives for hot swap 3) 4 drives in RAID10 set for VM storage, 2 drives in RAID1 set for host OS. No drives for hot swap - While this is probably the best option with the amount of drives I have, I don't like the idea of having no hot swap 4) 3 drives in RAID6 set for VM storage, 2 drives in RAID1 set for host OS. 1 drive for hot swap All options give us enough storage capacity for our files, etc. We don't have any budget for extra drives or extra hot swap HD chassis for the servers. We have about 70 clients and about 150 users. MAIN SERVER Intel Xeon 5520 @ 2.27 GHz (2 processors) 16GB RAM 6 x 1TB Seagate Barracuda ES.2 Enterprise SATA drives Intel SRCSATAWB RAID controller Virtual machine workload using Hyper-V on Windows Server 2008 R2: DC01 - Active Directory Domain Controller / DNS server / Global catalog - 1GB RAM DC02 - Active Directory Domain Controller / DNS server / Global catalog - 1GB RAM Member Server - DHCP server, File server, Print server - 1GB RAM SCCM Member Server - 4GB RAM Third Party Software Member Server - A/V server, Ticketing software, etc - 4GB RAM Exchange 2007 - 4GB RAM - however we are probably migrating to a hosted solution, therefore freeing up resources BACKUP SERVER Intel Xeon E5410 @ 2.33GHz (2 processors) 16GB RAM 6 x 2TB WD RE4 SATA drives Intel SRCSASRB RAID controller Virtual machine workload using Hyper-V on Windows Server 2008 R2: AppAssure backup software - 8GB RAM

    Read the article

  • SQL Clustering on Hyper-V - is a cluster within a cluster a benefit?

    - by Chris W
    This is a re-hash of a question I asked a while back - after a consultant came in firing ideas into other teams in the department, the whole issue has been raised again, hence I'm looking for more detailed answers. We're intending to set up a multi-instance SQL cluster across a number of physical blades which will run a variety of different systems across each SQL instance. In general use there will be one virtual SQL instance running on each VM host. Again, in general operation each VM host will run on a dedicated underlying blade. The set-up should give us lots of flexibility for maintenance of any individual VM or underlying blade, with all the SQL instances able to fail over as required. My original plan had been to do the following:

    1. Install 2008 R2 on each blade
    2. Add Hyper-V to each blade
    3. Install a 2008 R2 VM on each blade
    4. Within the VMs - create a failover cluster and then install SQL Server clustering.

    The consultant has suggested that we instead do the following:

    1. Install 2008 R2 on each blade
    2. Add Hyper-V to each blade
    3. Install a 2008 R2 VM on each blade
    4. Create a cluster on the HOST machines which will host all the VMs.
    5. Within the VMs - create a failover cluster and then install SQL Server clustering.

    The big difference is the addition of step 4, whereby we cluster all of the guest VMs as well. The argument is that it improves maintenance further, since we have no ties at all between the SQL cluster and the physical hardware. We can in theory live migrate the guest VMs around the hosts without affecting the SQL cluster at all, so for routine maintenance of the physical blades we can move the SQL cluster around without interruption and without needing to fail over. It sounds like a nice idea, but I've not come across anything on the internet where people say they've done this and it works OK. Can I actually do the live migrations of the guests without the SQL cluster hosted within them getting upset? Does anyone have any experience of this set-up, good or bad? Are there some pros and cons that I've not considered? I appreciate that mirroring is also a valuable option to consider - in this case we're favouring clustering since it covers the whole of each instance and we have a good number of databases. Some DBs are for lumbering 3rd-party systems that may not even work kindly with mirroring (and my understanding of clustering is that failovers are completely transparent to the clients). Thanks.

    Read the article
