Search Results

Search found 21717 results on 869 pages for 'setup versions'.

  • Best Practical RT, sorting email into queues automatically using procmail

    - by user52095
    I'm trying to get incoming e-mail to go directly into whichever queue/ticket it relates to, or to create a new ticket if none exists and the right queue e-mail address set up in the web interface is used. I will have too many queues to have two line items within mailgate per queue. A similar issue was discussed here (http://serverfault.com/questions/104779/procmail-pipe-to-program-otherwise-return-error-to-sender), but I thought it best to open a new case instead of tagging onto what appeared to be an answer to that person's query. I'm able to send and receive e-mail (via Postfix) to the default rt user, and this user successfully accepts all e-mail for the relevant domain. I have no idea where the e-mail goes - it's successfully delivered, but it does not update existing tickets (with a Subject line match) and it does not create any new ones. Here's an example of my ./procmail.log: procmail: [23048] Mon Aug 23 14:26:01 2010 procmail: Assigning "MAILDOMAIN=rt.mydomain.com " procmail: Assigning "RT_MAILGATE=/opt/rt3/bin/rt-mailgate " procmail: Assigning "RT_URL=http://rt.mydomain.com/ " procmail: Assigning "LOGABSTRACT=all " procmail: Skipped " " procmail: Skipped " " procmail: Assigning "LASTFOLDER={ " procmail: Opening "{ " procmail: Acquiring kernel-lock procmail: Notified comsat: "rt@18337:./{ " From [email protected] Mon Aug 23 14:26:01 2010 Subject: RE: [RT.mydomain.com #1] Test Ticket Folder: { 1616 Does the notified comsat portion mean that it notified RT? The contents of my ./procmailrc: #Preliminaries SHELL=/bin/sh #Use the Bourne shell (check your path!) #MAILDIR=${HOME} #First check what your mail directory is! MAILDIR="/var/mail/rt/" LOGFILE="home/rt//procmail.log" LOG="--- Logging ${LOGFILE} for ${LOGNAME}, " VERBOSE=yes MAILDOMAIN="rt.mydomain.com" RT_MAILGATE="/opt/rt3/bin/rt-mailgate" #RT_MAILGATE="/usr/local/bin/rt-mailgate" RT_URL="http://rt.mydomain.com/" LOGABSTRACT=all :0 { # the following line extracts the recipient from Received-headers. # Simply using the To: does not work, as tickets are often created # by sending a CC/BCC to RT TO=`formail -c -xReceived: |grep $MAILDOMAIN |sed -e 's/.*for *<*\(.*\)>* *;.*$/\1/'` QUEUE=`echo $TO| $HOME/get_queue.pl` ACTION=`echo $TO| $HOME/get_action.pl` :0 h b w |/usr/bin/perl $RT_MAILGATE --queue $QUEUE --action $ACTION --url $RT_URL } I know that my get_queue.pl and get_action.pl scripts work, as those have been previously tested. Any help and/or guidance you can give would be greatly appreciated. Nicôle
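
    For reference, a minimal procmail recipe that pipes a message to rt-mailgate looks like the sketch below (assuming a single fixed queue named General; in the setup above the queue and action come from get_queue.pl/get_action.pl instead). The w flag makes procmail wait for rt-mailgate's exit code, so delivery failures show up in the log. Note also that the LOGFILE above, "home/rt//procmail.log", has no leading slash, so the log may not be written where expected.

        :0 w
        |/usr/bin/perl /opt/rt3/bin/rt-mailgate --queue General \
            --action correspond --url http://rt.mydomain.com/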

  • "Learning" Linux

    - by Strider
    I've been interested in computers for a long time and have fiddled with a lot of stuff, including Linux. I started out with Red Hat when I was young (around 13) and lost all my data converting a FAT32 drive to something else. Later it was Knoppix, which was really helpful for recovery and such. Then it was Ubuntu. I also fiddled with Arch for some time, but it breaks too often for my liking (maybe I should have been more careful). Anyway, currently I use Ubuntu 9.04. I want to dig deeper into the Linux world now. I want to learn how things work and use the terminal more. I am a programmer as well, so it will help a lot. So, the things I wanted to ask are: Good books to learn and understand Linux. Good habits to use Linux more efficiently. Good tools I should know about. The amount of time you set aside to learn about new things each day. As a programmer, how do you set up and use Linux efficiently? Long list. I will be grateful for any answers.

  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up OpenVPN, but now I want to integrate a user/pass authentication method. Even though I haven't added auth-nocache to the server config, whenever I try to connect it returns the following message on the client side: ERROR: could not read Auth username from stdin. My server.conf file contains basic stuff; everything works up until I try to implement this form of authentication. mode server dev tun proto tcp port 1194 keepalive 10 120 plugin /usr/lib/openvpn/openvpn-auth-pam.so login client-cert-not-required username-as-common-name auth-user-pass-verify /etc/openvpn/auth.pl via-env ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt cert /etc/openvpn/easy-rsa/2.0/keys/server.crt key /etc/openvpn/easy-rsa/2.0/keys/server.key dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem user nobody group nogroup server 10.8.0.0 255.255.255.0 persist-key persist-tun #persist-local-ip status openvpn-status.log verb 3 client-to-client push "redirect-gateway def1" push "dhcp-option DNS 10.8.0.1" log-append /var/log/openvpn comp-lzo I searched all over the net for a solution, and all answers seem to be related to the auth-nocache param, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication. A failed authentication should result in an exit 1, while a successful one should result in an exit 0. For testing, that script auth.pl returns exit 0 no matter what the input is, but it seems that the file is not executed before the error is raised. auth.pl file contents: #!/usr/bin/perl my $user = $ENV{username}; my $passwd = $ENV{password}; printf("$user : $passwd\n"); exit 0; Any ideas?
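
    For reference, this particular error is produced by the client, not the server: the client was told to send credentials but has no terminal to prompt on (or no auth-user-pass directive at all), so the server-side auth.pl never runs. A sketch of the client-side fragment, with a hypothetical hostname:

        client
        dev tun
        proto tcp
        remote vpn.example.com 1194
        # prompt interactively for the username/password ...
        auth-user-pass
        # ... or read them from a file instead (username on line 1, password on
        # line 2), which avoids the stdin prompt when no terminal is attached:
        # auth-user-pass /etc/openvpn/credentials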

  • How do I make dnsmasq serve IP addresses via IPoIB?

    - by Matt
    I have a cluster farm that I'm setting up. The nodes (computers in the farm) are connected via Ethernet and IP over InfiniBand. I need to netboot the nodes and thought dnsmasq would fit well, as it provides all the features, including support for DHCP over IB, and it works great for our Ethernet setup. However, I can't seem to get it to provide IP addresses to the InfiniBand adapters on the nodes. Each node is running Ubuntu Desktop 12.04 LTS. The dnsmasq server is running on Ubuntu Server 12.04 LTS and has the following test config: dhcp-authoritative domain-needed bogus-priv expand-hosts no-hosts domain=local dhcp-range=eth0,10.0.0.10,10.0.0.255,12h dhcp-option=eth0,3,10.0.0.1 dhcp-range=ib0,10.1.1.10,10.1.1.255,12h dhcp-option=ib0,3,10.1.1.1 log-queries log-dhcp IPoIB works between nodes when configured statically, but not with DHCP. On the nodes the file /etc/network/interfaces contains auto lo iface lo inet loopback auto ib0 iface ib0 inet dhcp #iface ib0 inet static #address 10.1.1.5 #netmask 255.0.0.0 up echo connected >`find /sys -name mode | grep ib0` Is there something I need to do on the client or server end to make this work?
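
    One thing worth checking (an assumption, not a confirmed fix): IPoIB interfaces have no conventional 48-bit MAC, and some IPoIB clients cannot receive unicast DHCP replies, so forcing broadcast replies for the IB range may help. A dnsmasq sketch using tags:

        # tag leases on the IB range, then force broadcast replies for that tag
        dhcp-range=set:ib,10.1.1.10,10.1.1.255,12h
        dhcp-option=tag:ib,3,10.1.1.1
        dhcp-broadcast=tag:ib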

  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of 2 instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP. I am trying to decide which is the better policy: store the MySQL DB on a separate instance, or store the MySQL DB on an attached EBS volume (from what I know, auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI - is this view correct?). Regarding the AMI, I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am also looking to use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and webapp I need to be aware of? Are there any comms issues using the encrypted filesystem on the instance housing the webapp? I was going to launch a second instance or attach a second volume in the second availability zone to act as a standby for the database - I'm just looking for some suggestions about how to get the two DBs to talk - will this be a big task? Regarding updates for security, is it best to create a recent snapshot and just relaunch, allowing Amazon to install updates on launch, or is the yum update mechanism a suitable alternative - is it better practice to relaunch instead of installing updates that force a restart? I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place - is this reasonable? I just figure it is a better policy than having unnecessary additional applications included in an AMI that I intend to use. My plan for backup is to create periodic snapshots of the webapp and DB instances (if I use an additional EBS volume instead of separate instances, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes). Thanks in advance for suggestions and advice. I am new to EC2 and may have described unnecessary overkill, but I want to try to implement what can be considered a best-practice solution, so all advice is appreciated.
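
    On the backup point, periodic EBS snapshots are easy to script; a minimal sketch with the AWS CLI (the volume ID is hypothetical):

        # snapshots are incremental and persist in S3 independently of the instance
        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
            --description "nightly webapp DB snapshot"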

  • How to tackle Dell's support system? [closed]

    - by Nishant Kumar
    We purchased a Dell OptiPlex 9010 SSFV for our organization's work. Since the first installation, two of the USB keyboard keys have not worked properly. I had to press those keys twice: the first press did nothing, and the second printed the character two times (as if it were buffering the first press). The two keys that were not working properly: Grave accent (below the Esc key: `) Double quote (left of the Enter key: ") We registered our complaint with Dell and they suggested (in a hard-to-understand accent) some tests and tricks, such as switching to different ports and checking the keyboard on a different PC, and it worked well with a different PC (with Windows 7 Home Premium installed). It was clear that it was an OS fault, hence they suggested re-installing the OS. The problem began here: we have a project on the run and a video-editing project currently set up on our system, so we can't re-install the system in a hurry, and the Dell people were not providing any other solution, such as updating the keyboard driver, etc. Arguments: I am a software engineer and don't think re-installing the entire system is a feasible solution for simple problems. This problem has been present since the fresh system installation, so I don't think a re-install will solve it. Finally, I had to find the solution myself and got it here; now I want to show my disappointment to the Dell people, or at least tell them that they should improve their support system and not advise re-installing the entire system for such simple problems. Notes: We have purchased 5 years of next-business-day support from Dell for around 8000 INR (not for that kind of solution from Dell). So can anyone tell me how to tackle Dell's support system officially, so that they will pay more attention in the near future? Thanks

  • LCD monitor flicker when connected to a laptop using VGA

    - by Björn Lindqvist
    I have a dual-screen setup with two AOC e2450Sw monitors connected to a laptop. The laptop has one HDMI and one VGA output. When one of the monitors is connected using VGA, it flickers or displays static noise. The flickering is fairly subtle and only visible on darker colors, but it is there and noticeable, and appears as horizontal lines. The problem only appears on the monitor connected to the laptop using the VGA cable. If I swap the monitors, the one connected using VGA displays the flicker but not the one connected using HDMI. The simple solution would of course be to connect both monitors using HDMI, but since the laptop only has one VGA and one HDMI out, that isn't possible. I've tried tweaking the monitor settings using the OSD menu, but it had little or no effect. Update: After several more hours of troubleshooting, it seems the problem is not related to the monitor or the VGA cable, as it persists even if I swap in a display of another brand and different cables. So it may be the graphics card? Intel HD Graphics 4000. The laptop is an Acer Aspire E1-571.

  • Squid Authentication & streaming

    - by Steve Butler
    I've got Squid set up using Kerberos authentication. I'm also using squidGuard as a URL redirector to block out the usual nastiness of the web. There are some sites, though, that we allow certain users to visit and others not. This all works well, as long as no streaming is involved. From what I can determine from the squid logs and the Wireshark traces I've done, when the initial request to stream is sent, everything is good: the authenticated username is sent with the request to squidGuard. The problem is that on subsequent traffic the username is not sent to squidGuard, causing it to be blocked based on the default policy. I've tried using Squid's built-in allow/deny stuff, but it's relatively clunky, and so far squidGuard has been pretty easy and fast. Here come the questions: How do I get Squid to pass the username on all requests? (Something tells me this isn't the best way.) How do I get squidGuard to see that traffic is authenticated to a specific user even when a username isn't passed? Is there any other way of accomplishing this? A few details that may be of importance: I'm using a list of users stored in a text file for squidGuard to compare against. I'm using full Kerberos auth with Squid. CentOS 6.0, Squid 3.1.4, squidGuard 1.3.
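
    One alternative worth sketching (not the poster's current setup): since Squid itself knows the authenticated username on every request, the per-user streaming policy can be enforced in squid.conf with proxy_auth ACLs and kept out of squidGuard entirely. File names here are hypothetical:

        acl streaming_users proxy_auth "/etc/squid/streaming_users.txt"
        acl streaming_sites dstdomain "/etc/squid/streaming_sites.txt"
        # listed users may stream; the sites are blocked for everyone else
        http_access allow streaming_sites streaming_users
        http_access deny streaming_sites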

  • Unable to activate Windows XP

    - by Josh Kelley
    The latest round of Patch Tuesday updates left my Windows XP computer unbootable. ("Fatal System Error: The Windows Logon Process system process terminated unexpectedly.") After much messing around with the recovery console, an XP CD's repair mode, and manually copying registry files around, I have a system that can boot again. However, I overwrote my OEM XP installation's activation information while trying to run a retail XP CD's setup, so it needs reactivation. Here's my problem: I cannot activate it at all. I log in, Windows tells me I have to activate to continue, I click Yes, and absolutely nothing happens: no windows, no response to keyboard or mouse, no response to Ctrl-Alt-Del, nothing. Safe mode works, but I can't activate in safe mode (EDIT: not even safe mode with networking). I read a trick online of pressing [Windows Key]+U to bring up the Microsoft Narrator, and that works, but clicking its Microsoft Web Site link does nothing. My last attempt to resolve this was to reinstall Windows off of the OEM CD. Now I have two parallel Windows installations, both on the same hard drive, one with all of my stuff and no way to activate it, one fully activated with no usable programs. Any ideas? Any way to activate in safe mode? Any way to copy activation information from my activated installation to my unactivated installation (since they're both on the same hard drive)?

  • Using two ports on my ZyXel USG gateway to patch two devices together?

    - by Matthew Beza
    I don't know if this is possible, but it would save me many long drives! I have drawn my current setup with my epic MS Paint skills. I have a ZyXEL USG300 gateway with a built-in 5-port switch. It supports bridging, tunnels, VLANs, etc. I have a Cisco WLC2112 plugged into port 6 (P6). The Cisco is set to 192.168.6.2 and P6 is set to 192.168.6.1. This works, but I need to incorporate a Nomadix AG3000 to handle guests (router (A) in the picture). So I need the Cisco WLC2112 to use the Nomadix as if it were plugged into its LAN port. Right now the Nomadix WAN port is plugged into (P2) on the ZyXEL, and the Nomadix LAN port is plugged into (P3). Is it possible to set something up where (P3) and (P6) are somehow "patched" as if the Cisco were plugged directly into the Nomadix LAN port? Basically making the ZyXEL a fancy Cat5e coupler?

  • Ubuntu 10.04 - Add RAID 1 Array?

    - by N Rahl
    I have an existing Ubuntu 10.04 desktop system set up and running on a hard drive (drive A). I'd like to add 2 more hard drives (drives B & C, same size) to the system and mount them as a RAID 1 array. How do I do that? I know how to create RAID arrays during installation, but I don't want to reinstall my system, and I shouldn't have to, since my system files will stay on their own drive separate from the RAID array. I've physically added both drives to the system and formatted them as ext3 with GParted. Ubuntu's Disk Utility has a "create RAID" option, but it won't let me select any of my drives (it thinks they're all full). I don't mind using mdadm, but I've found several guides that are old and give conflicting advice. Some say I have to edit an /etc/raidtab file; some say this is done automatically. So what's the current (Ubuntu 10.04) preferred way of adding a RAID 1 array to an existing system? It should assemble at boot and mount itself at /home/myname/files/. Thanks!
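
    For reference, the mdadm route on 10.04 looks roughly like the sketch below, assuming the new drives appear as /dev/sdb and /dev/sdc with one partition each (device names are hypothetical). The array is created from the partitions and then given a filesystem; pre-formatting the bare drives with GParted may be why Disk Utility considers them in use:

        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        sudo mkfs.ext3 /dev/md0
        # record the array so it assembles at boot (no /etc/raidtab on current systems)
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
        sudo update-initramfs -u
        # mount it at boot
        echo '/dev/md0 /home/myname/files ext3 defaults 0 2' | sudo tee -a /etc/fstab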

  • Active Directory LDAP and user issues (using apache2 for svn access)

    - by CaCl
    I currently have a setup where I work that lets users use their Active Directory domain logins and passwords to authenticate and authorize access to Subversion. Currently I need to allow application accounts the same access. So our IT group creates application accounts in the Active Directory for us to use. But they want to be "secure", so they set the "Workstations Allowed" to be only a limited number of workstations. So when an application account hits the apache2 server for authentication, it can't log in for some reason, and I'm having a heck of a time trying to debug. The error logs only show me: [Tue Apr 06 11:24:25 2010] [warn] [client 24.24.24.24] [3469] auth_ldap authenticate: user appuser13 authentication failed; URI /svn [ldap_simple_bind_s() to check user credentials failed][Invalid credentials] [Tue Apr 06 11:24:25 2010] [error] [client 24.24.24.24] user appuser13: authentication failure for "/svn": Password Mismatch I've checked the password numerous times and it appears to be correct, but I can't seem to get the user to authenticate properly. Below is a snippet of the apache configuration for LDAP: # Auth providers # Active Directory <AuthnProviderAlias ldap ldap1> AuthBasicProvider ldap AuthLDAPURL "ldap://dmain.company.com:389/dc=dmain,dc=company,dc=com?sAMAccountName?sub?(objectClass=*)" AuthLDAPBindDN "CN=svnuser13,OU=Application Accounts,dc=dmain,dc=teradata,dc=com" AuthLDAPBindPassword secret3 </AuthnProviderAlias> # Another set of users from a different group <AuthnProviderAlias ldap ldap2> AuthBasicProvider ldap AuthLDAPURL ldap://diffldapserver:389/dc=specialusers,dc=com?uid </AuthnProviderAlias> # Another set of users from a different group <AuthnProviderAlias file file1> AuthUserFile /var/svn/auth/htpasswd </AuthnProviderAlias> <Location /svn> DAV svn SVNPath /var/svn Satisfy Any Require valid-user AuthType Basic AuthName "SVN Repository" AuthBasicProvider ldap1 file1 ldap2 AuthzSVNAccessFile /var/svn/auth/access AuthzLDAPAuthoritative on Require valid-user </Location> Any help, like tips for debugging, is appreciated!
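
    A useful first step is to reproduce the bind outside Apache; if the same bind fails from the web server's host, the account's "Workstations Allowed" restriction is the likely cause rather than the Apache config (AD reports workstation restrictions inside the LDAP error 49 result, as subcode data 531). A sketch, with a hypothetical DN and password:

        ldapsearch -x -H ldap://dmain.company.com:389 \
            -D "CN=appuser13,OU=Application Accounts,dc=dmain,dc=company,dc=com" \
            -w 'secret' -b "dc=dmain,dc=company,dc=com" \
            "(sAMAccountName=appuser13)" dn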

  • Are Colocation Cross Connects Worth While

    - by SvrGuy
    We currently operate three clusters of colocated machines in different data centers. Recently, I became aware that our newest data center will offer to cross-connect us to a bandwidth provider free of charge. In the past, I never really investigated a cross connect for bandwidth because I figured that the rates would be similar to what we are paying the colo now, and that it would reduce our resiliency (because we would only be using one or two carriers for IP, whereas the colo uses, say, 8 different providers). Then I saw an ad for Hurricane Electric internet services (http://he.net/cgi-bin/ip_transit_quote) that gave a price for IP transit at $1/Mbps, which is much better than the $30/Mbps we pay for the blended bandwidth. What are people out there typically paying for bandwidth via cross connect, and how hard is it to set up? Is my understanding correct that you open agreements with two or three ISPs, cross-connect to them, and then configure your top-of-rack router on their network? Can you really get IP transit down to a couple of dollars per megabit per month just by doing the routing yourself? Or is my understanding of cross connection fundamentally wrong?

  • Port forwarding to put my web server on the Internet

    - by Chadworthington
    I went to http://canyouseeme.org/ to check what my external IP address is. Regardless of what port I enter, it tells me that the port is blocked. I have a Linksys router that basically has the default settings, with the exception that I have WEP encryption set up and have forwarded a few ports, including 80 and 69. I forwarded them to the 192.x.x.103 IP address of the PC which is running IIS. That PC runs Symantec Endpoint Protection, which I right-clicked in the tray to Disable. These steps used to make my PC visible so I could host my own web site in IIS on port 80, or some other port, like 69. Yet the open-port tool cannot see my IP when it checks either port, and when I navigate to http://my external ip/ I get "page can't be displayed". At first I was thinking that maybe Comcast is blocking port 80, but 69 doesn't work either. I do not see any other blocking set up in my router and, as I mentioned, I went with the defaults except where discussed. This is a corporate PC and Symantec Endpoint Protection is new to it (this previously worked on the same PC with Symantec Protection Agent), but I thought that disabling it from the tray would effectively neutralize it. I do not have the rights to kill the program itself. Any suggestions on what else to try to make my PC externally visible?
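
    Before blaming the router or the ISP, it is worth confirming locally that the server is listening and reachable; a quick sketch from a command prompt:

        rem on the IIS machine: is anything bound to port 80?
        netstat -an | find ":80"
        rem from another LAN machine: can the web server be reached directly?
        telnet 192.x.x.103 80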

  • Test whether svn REPO changes are reflected in Working Copy

    - by user492160
    Requirement: Changes will be made to the REPO directory, and these should get updated in the WC (working copy), as opposed to the normal direction of WC → REPO. Scenario: My svn repo: /var/www/svn/drupal. My checkout dir/working copy: /var/www/html/drupalsite. So I've done the following: edited the post-commit hook to contain "/usr/bin/svn update /var/www/html/drupalsite". I won't make any change to the svn WC. I'll make changes to the svn REPO, /var/www/svn/drupal. After changes are made to the svn repo, run "svn commit /var/www/html/drupalsite". This will trigger the post-commit hook. This in turn will run "svn update /var/www/html/drupalsite", and thus my WC will get updated with the changes from the REPO. Query: a. Would the above steps 1-3 help achieve my 'Requirement'? b. I'd need advice on how to test whether the above setup works successfully or not. I'm at a loss about the success of steps 1-3, which is the reason query (a) is present. This is a bit more of a concern for me. NB: I'm new to Subversion. Whatever I've configured till now has been done by reading articles online. The reason for query (b) is that I'm not into development. It seems to be a PHP Drupal website and I happen to be setting it up, so I'm not aware of how to make a "PROPER" change in the REPO so that it gets reflected in the WC. If it is reflected, my configs are right and the team can start on development. I manually put a random file/folder into the REPO dir to see a change in the WC and ran steps 1-3, but it was of no avail, and I later learned that that was NOT the way to make a change to a REPO. Please advise. Thanks
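
    One point worth making explicit: the repository directory (/var/www/svn/drupal) is a database, not an editable file tree, so a change can only enter the REPO through a commit from some working copy; the post-commit hook then fires for every commit, whoever makes it. A sketch of the hook file itself, using the paths from the question (remember to make it executable):

        #!/bin/sh
        # /var/www/svn/drupal/hooks/post-commit
        # hooks run with an empty environment, so use absolute paths;
        # this brings the deployed working copy up to date after every commit
        /usr/bin/svn update /var/www/html/drupalsite --non-interactive

    To test, commit a trivial change from any checkout and verify that it appears in /var/www/html/drupalsite.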

  • Mail server: can't connect via POP/IMAP

    - by MelkerOVan
    I've followed this guide on setting up a mail server on my dedicated server. I've been able to send mail from the PHP application I'm using and from the Linux command line (using telnet, PHP, etc.). The problem is that I cannot connect to the server via IMAP/POP, which I've set up using Courier. I've tried using Thunderbird, but it complains that the username or password is wrong. I doubt it is the username/password, but I don't know how to troubleshoot this. Edit: Here are the messages in mail.log: Jan 9 22:43:38 mail authdaemond: received auth request, service=imap, authtype=login Jan 9 22:43:38 mail authdaemond: authmysql: trying this module Jan 9 22:43:38 mail authdaemond: SQL query: SELECT id, crypt, "", uid, gid, home, "", "", name, "" FROM users WHERE id = '[email protected]' AND (enabled=1) Jan 9 22:43:38 mail authdaemond: password matches successfully Jan 9 22:43:38 mail authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null> Jan 9 22:43:38 mail authdaemond: authmysql: clearpasswd=<null>, passwd=password Jan 9 22:43:38 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null> Jan 9 22:43:38 mail authdaemond: Authenticated: clearpasswd=peter, passwd=password Jan 9 22:43:38 mail imapd: chdir Maildir: No such file or directory
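
    The last log line is the telling one: authentication succeeds, and then the IMAP server fails to chdir into the user's Maildir. A sketch of creating it with Courier's own tool (the exact path depends on the guide's layout, so these values are assumptions; since maildir is null in the log, Courier falls back to Maildir under the homedir, and the uid/gid 5000 also comes from the log above):

        maildirmake /var/spool/mail/virtual/Maildir
        chown -R 5000:5000 /var/spool/mail/virtual/Maildir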

  • Redirecting Linux port 80 to a Windows box

    - by Richard Staehler
    I have 2 servers here at work. One is a Windows Server 2008 R2 box (for safety's sake, let's use 192.168.1.100) and the other is a Fedora 14 box (192.168.1.101). Currently, when you hit our subdomain, x.test.com, our router sends it to our Fedora box, and since Apache is installed and listening on port 80, it displays the Fedora Apache test page. Obviously I don't use port 80 on this machine; however, I do use Nagios on it, and it's always nice to be able to access that from anywhere in the world. So when I want to access it, I just type x.test.com/nagios. Now here comes the dilemma... On the Windows R2 box, we have recently installed a program that requires us to set up a web server using IIS 7. Because of this application, I'm going to be creating a new subdomain called y.test.com, but since we only have 1 WAN/router, it will still get pointed to our Fedora box. That being said, it wants to use port 80 as well (or whatever port I damn well wish to assign it). So my question is: since our router is pointing to the Fedora 14 box (.101), and I want to make sure I can access Nagios from anywhere in the world, how do I tell Apache (httpd) to redirect port 80 to the other server (.100)? If that's not possible, what are my other options? I have rinetd installed on Fedora and have even tried the option "192.168.1.101 80 192.168.1.100 80", but it didn't seem to work, "because port 80 was already bound". Thoughts? And thanks!
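
    Since only one public IP is available, one option besides a raw port redirect is to let Apache on the Fedora box answer both hostnames and reverse-proxy one of them to the Windows box. A sketch, assuming mod_proxy and mod_proxy_http are loaded; x.test.com/nagios keeps working because requests for that hostname never match this vhost:

        # Apache 2.2 needs name-based vhosts switched on first
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName y.test.com
            ProxyPreserveHost On
            ProxyPass / http://192.168.1.100/
            ProxyPassReverse / http://192.168.1.100/
        </VirtualHost>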

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem in most shops is backups: their size and the window in which you have to back up the data. What we are working with: a VMware vSphere 4.1 cluster; a PS4000XV EqualLogic storage array (1.6 TB volume dedicated to backup-to-disk); a physical backup server with a single LTO4 drive; Backup Exec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware; dual gigabit MPIO connections between all devices (storage array, backup server, VM hosts). What we would like to accomplish: I would like to implement an efficient backup-to-disk-to-tape solution where all of our VMs are backed up to the storage array first, and once completely backed up to the array are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape. Where we are currently: of the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time; as soon as a job has finished backing up to disk, it starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of restoration I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

  • OS X Client & Ubuntu Server - Best way for client to access files on server?

    - by Camsoft
    I've got a local development web server running Ubuntu. I also have an iMac running OS X 10.6, which I use as a client and as my development machine. I currently have a Samba server installed on my Ubuntu server, with shares set up for all the website directories. I then use my Mac and Coda to edit the files via these shares. This generally works really well, but I noticed that my Mac was writing loads of resource-fork ._filename files everywhere. I found out the following about these files: "These files are created on volumes that don't natively support full HFS file characteristics (e.g. ufs volumes, Windows fileshares, etc). When a Mac file is copied to such a volume, its data fork is stored under the file's regular name, and the additional HFS information (resource fork, type & creator codes, etc) is stored in a second file (in AppleDouble format), with a name that starts with "._". (These files are, of course, invisible as far as OS X is concerned, but not to other OSes; this can sometimes be annoying...)" Does anyone know of a way of sharing files between a Mac client and a Linux server that is most compatible between the two operating systems? Ideally it needs to support the HFS filesystem so that the resource forks are not created, and it also needs to support the permissions between server and client.
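
    If the AppleDouble files are the main irritation, Samba can refuse them server-side; a sketch for the share definition in smb.conf (share name and path are hypothetical):

        [websites]
            path = /var/www
            # refuse Finder metadata files and clean them up when their directory is deleted
            veto files = /._*/.DS_Store/
            delete veto files = yes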

  • 24TB RAID 6 configuration

    - by Phil
    I am in charge of a new website in a niche industry that stores lots of data (10+ TB per client, growing to 2 or 3 clients soon). We are considering ordering about $5000 worth of 3 TB drives (10 in a RAID 6 configuration and 10 for backup), which will give us approximately 24 TB of production storage. The data will be written once and remain unmodified for the lifetime of the website, so we only need to do a backup one time. I understand basic RAID theory; however, I am not experienced with it. My question is: does this sound like a good configuration? What potential problems could this setup cause? Also, what is the best way to do a one-time backup? Have two RAID 6 arrays, one for offsite backup and one for production? Or should I back up the RAID 6 production array to a JBOD? EDIT: The data server is running Windows Server 2008 x64. EDIT 2: To reduce rebuild time, what would you think about using two RAID 5s instead of one RAID 6?
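
    For reference, the capacity arithmetic behind the two layouts:

        RAID 6:      (10 - 2) x 3 TB = 24 TB usable; survives any two drive failures
        2 x RAID 5:  2 x (5 - 1) x 3 TB = 24 TB usable; each group survives only one failure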

  • "The zone can be scavenged after" keeps incrementing

    - by kce
    What are you trying to do? I'm trying to enable DNS scavenging on a DNS zone that has about a hundred stale DNS records. What have you tried in order to make it happen? I set up DNS scavenging per everyone's favorite TechNet blog post: Don't be afraid of DNS Scavenging. Just be patient. I first restricted which servers may scavenge the zone to two of our domain controllers: DNSCmd . /ZoneResetScavengeServers contoso.com 192.168.1.1 192.168.1.2 I then enabled automatic scavenging on the DNS zone. I then enabled DNS scavenging on one of the domain controllers. I then found a few records that I expected to get deleted, with timestamps from a few years ago, and ensured that the "Delete this record when it becomes stale" box was checked and that the timestamp was actually set. Finally, I reloaded the zone and waited 14 days (the sum of the Refresh + No-Refresh periods). What results did you expect? I expected to see a 2501 event in the DNS server logs noting the deletion of a bunch of DNS records. What actually happened? Nothing happened. The Zone Aging/Scavenging Properties showed that the zone could be scavenged after 6/12/2014 10:00:00 AM last week. No 2501/2502 events were recorded. All of the records with "aged" timestamps are still present. The date after which the zone can be scavenged incremented another seven days, to 6/18/2014 10:00:00 AM. As I understand it, until that date is in the past, nothing will even be eligible for scavenging, let alone actually be scavenged. The only 2501 events recorded in the event logs are ones that I triggered by right-clicking and selecting "Scavenge Stale Resource Records". They note that scavenging will try to run again in 168 hours, which was this morning. I have had DNS scavenging enabled for a few months and have waited patiently for something to happen. I have reloaded the zone multiple times (which resets this timestamp). What am I missing here?
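
    Two things worth checking from the command line while waiting (a sketch; run on the DC that has scavenging enabled):

        rem zone aging parameters, refresh/no-refresh windows, and the scavenge-server list
        dnscmd /ZoneInfo contoso.com
        rem the server-wide scavenging period
        dnscmd /Info /ScavengingInterval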

  • MySQL Cluster not working on Ubuntu

    - by user53864
    I am unable to set up MySQL Cluster on Ubuntu servers. As a starting point I followed the link, but I was not successful; the tarball version I downloaded is 6.3.45. As I wanted to test the MySQL cluster, the data node and SQL node are the same machines, but the SQL nodes never appeared as connected in the management node console, which looks like this: [ndbd(NDB)] 2 node(s) id=2 @192.168.1.107 (Version: version number, Nodegroup: 0, Master) id=3 @192.168.1.108 (Version: version number, Nodegroup: 0) [ndb_mgmd(MGM)] 1 node(s) id=1 @192.168.1.105 (Version: version number) [mysqld(API)] 2 node(s) id=4 (not connected, accepting connect from 192.168.1.107) id=5 (not connected, accepting connect from 192.168.1.108) On all 3 machines, mysql-server & client (apt-get install mysql-server mysql-client) were already installed; I completely stopped them and also removed them from system startup. Now the mysqld is the one from the extracted cluster tarball (/usr/local/mysql/support-files/mysql.server). For testing, I created a test database on both data nodes, but the tables are also not syncing to the other node. I checked many links; the configurations remain similar in all of them, but somewhere it's going wrong. Is any extra package required? Could anyone help me here? I have been trying this for the past 3 days... Thank you!
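
    Two assumptions worth checking, since the symptoms match both: mysqld only registers with the management node when started with NDB support, and tables only replicate between data nodes when created with ENGINE=NDBCLUSTER (MyISAM/InnoDB tables stay local to one SQL node). A my.cnf sketch for the combined data/SQL machines, assuming the management node is 192.168.1.105:

        [mysqld]
        ndbcluster
        ndb-connectstring=192.168.1.105

        [mysql_cluster]
        ndb-connectstring=192.168.1.105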

  • What determines what resolutions a laptop is willing to output over VGA?

    - by Joshua McKinnon
    I'm responsible for several conference rooms and have set up 1080p projectors, providing both HDMI and VGA connectivity: HDMI for DisplayPort and Mini DisplayPort, and VGA as a fallback, universal option. Contrary to what I expected, people seem to have much more trouble with HDMI than VGA, so VGA gets used a lot more than you'd think (even though most workstation laptops made in the last 3-4 years have DisplayPort or Mini DisplayPort...). Also to my surprise, VGA outputs 1080p over a 50 ft cable run with very minimal degradation on certain laptops; other laptops just don't offer 1080p as a resolution choice and top out at 1600x1200 or something else. Specific example: a ThinkPad W530 will do 1080p over VGA; a W520 won't (both do 1080p over DisplayPort/Mini DP). What determines which resolutions a laptop is willing to output over VGA? I'm thinking this will come down to either a video driver that supports only certain resolutions for output, or limitations of the RAMDAC (which wouldn't be in play, at least DAC-wise, on a digital output, but WOULD on VGA, an analog output). The basic reason for the question is that I noticed that, say, a ThinkPad W520 with a built-in 1080p display will output 1080p fine over DisplayPort to a 1080p projector, but will cap out at 1600x1200 (practically the same pixel count, just a little shy) on VGA. Now, this wouldn't be surprising at all, except SOME laptops have no issue outputting 1080p over VGA, even with lower native resolutions. Why do I care? Well, if there's some way I could enable it... for situations where my users end up using VGA anyway, it's preferable for display mirroring if they can output their laptop's native resolution, which, you guessed it, is very often 1080p on 15" models. DISCLAIMER: This is primarily a curiosity; I'm not claiming 1080p over VGA is ideal by any means, but hey, if it works. I've seen HDMI start artifacting more over same-length, same-gauge cabling (up to 50' runs in certain rooms). If you think this is better suited to Super User, please move it, but this is framed from an IT standpoint of something that affects a real pool of users in a multiple-conference-room, 50+ deployed-laptop scenario.
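
    For what it's worth, the usual mechanism: the driver reads the monitor's EDID over the VGA cable's DDC pins and intersects that with its own mode table, so a missing 1080p option can come from either side. On a Linux machine this is easy to probe by forcing a mode; a sketch (the output name VGA1 is an assumption, and the modeline is cvt's standard output for 1920x1080 at 60 Hz):

        cvt 1920 1080 60
        xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
        xrandr --addmode VGA1 "1920x1080_60.00"
        xrandr --output VGA1 --mode "1920x1080_60.00"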

  • Ubuntu 10.04 Keyboard and Mouse Freezing Problem

    - by nitbuntu
    I had a partitioned setup with Windows XP and Ubuntu 8.04 dual-booting. I recently upgraded to Ubuntu 10.04 by installing fresh from CD but leaving the previous /home folder as is. Things seemed to be working fine, but I started finding that my mouse and keyboard were freezing. After a quick search on the internet, I found the following suggestions, as shown here: Ubuntu Forums. Here the suggestion was to edit /etc/default/grub, go to the line that begins like: GRUB_CMDLINE_LINUX_DEFAULT= and change it to: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=off" After that, run this command: sudo update-grub and reboot. This seemed to resolve the issue, but after a couple of days I again found my mouse and keyboard freezing. I also found that my parallel-port printer had stopped working. I have saved the output of dmesg and my syslog. The first can be viewed here, but the syslog had too many characters, so if someone can suggest an alternative to freetexthost, I can post it there. Moreover, if there is any other information that should be provided, do let me know. I do hope we can get to the bottom of this issue. Thank you in advance for any help that could be provided.

  • DHCP Server on local machine

    - by EralpB
    Hello, I am trying to set up a dhcp3-server on Ubuntu, but my question is more generic. If the DHCP server is a black box and all clients are connected to it, I think I get what is going on; but when the DHCP server is installed on one of the "clients", that confuses me. When I send a DHCP packet from that client to the DHCP server, will my Ethernet card read and write at the same time? Or will it handle it internally, without writing any data to the Ethernet cable? It's the first time I am encountering these network things, so I am a little bit confused. Also, I wonder: if I am in a big network with a LAN IP of, let's say, 192.168.0.100, and I install a DHCP server on my computer, can any other computers accidentally get an IP from my DHCP server? Every computer has one Ethernet card (if that matters), and every computer is connected to one router. I guess the answer is no, because the broadcast message won't reach my computer: when the router receives a DHCP discovery packet, it will answer it itself and won't let the other computers know about it, because they don't need to. And without the router forwarding that packet, it cannot travel further. I'd be glad if someone enlightened me. Thank you very much.
