Search Results

Search found 24623 results on 985 pages for 'linux'.


  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application running on a few boxes (5 or so, and growing). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now and don't want to anymore (static IP addresses, disabling unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent?

    1: Individually install CentOS on all the boxes and manage the configs with Chef/cfengine/Puppet. This would be good, as I have wanted an excuse to learn to use one of these applications, but I don't know if it is actually the best solution.

    2: Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications I can just reboot the boxes from a new image.

    How do cluster people normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be generated correctly at boot (see the sketch below)? I'm leaning towards the PXE solution, but I think monitoring with Munin or Nagios will be a little more complicated with it. Does anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
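    On the ifcfg question: a rough sketch of a first-boot script a PXE image could run to regenerate the HWADDR lines from the hardware actually present (paths follow the CentOS layout mentioned above; the loop and sed are illustrative, not a tested recipe, and InfiniBand ifcfg-ib* files would need the same treatment):

        #!/bin/bash
        # rewrite HWADDR= in each ifcfg file to match the NIC's real MAC
        for dev in /sys/class/net/eth*; do
            iface=$(basename "$dev")
            mac=$(cat "$dev/address")
            cfg=/etc/sysconfig/network-scripts/ifcfg-$iface
            [ -f "$cfg" ] && sed -i "s/^HWADDR=.*/HWADDR=$mac/" "$cfg"
        done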

    Read the article

  • Configure iptables to allow a PHP app to access a port

    - by Camran
    I have a PHP application which connects to another app called Solr (a database search engine). Via this PHP app I can add and remove documents (records) from the Solr index. However, Solr's security is low, and anybody with the right port number can access Solr and remove documents. I wonder, is it possible to allow ONLY my own PHP app to access Solr somehow, preferably via iptables? I am thinking I could allow only my own server's IP on that port, which would solve my problem because PHP is server-side code, but I am not sure. About the PHP app: the website is a classifieds website, and when users want to add or remove classifieds they do so through this PHP app. The app has a function which connects to Solr and updates the database (index). I appreciate detailed answers... Thanks
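    A minimal iptables sketch of that idea, assuming Solr listens on its default port 8983 and the PHP app runs on the same host (both assumptions; adjust the source address if PHP lives on a separate web server):

        # allow only the web server's own address to reach Solr
        iptables -A INPUT -p tcp --dport 8983 -s 127.0.0.1 -j ACCEPT
        # drop everyone else on the Solr port
        iptables -A INPUT -p tcp --dport 8983 -j DROP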

    Read the article

  • Bad font anti-aliasing in Ubuntu

    - by Juliano
    I'm switching from Fedora 8 to Ubuntu 9.04, and I can't seem to get good font anti-aliasing to work. It seems that Ubuntu's fontconfig tries to keep characters at integral pixel widths. This makes text more difficult to read when 1 pixel is too thin and 2 pixels is too thick. Check the image below. In Fedora, when fontconfig anti-aliasing is enabled, fonts have their thickness proportional to the font size. Below, the thickness is different for the 8, 9 and 10pt sizes. In Ubuntu, on the other hand, even when anti-aliasing is enabled, the 8, 9 and 10pt sizes all have 1-pixel thickness. This makes reading large amounts of text difficult. I'm using the very same home directory, and I have already checked that the X resources are the same on both systems:

        ~% xrdb -query | grep Xft
        Xft.antialias:  1
        Xft.dpi:        96
        Xft.hinting:    1
        Xft.hintstyle:  hintfull
        Xft.rgba:       none

    GNOME settings:

        ~% gconftool-2 -a /desktop/gnome/font_rendering
        antialiasing = grayscale
        hinting = full
        dpi = 96
        rgba_order = rgb

    So, the question is: what should I change on the new box (Ubuntu) to get anti-aliasing like on the old box (Fedora)?
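    A hedged sketch of the usual fontconfig-level knob, via a per-user ~/.fonts.conf: relaxing the hinting lets stroke widths fall on fractional pixels, as on the Fedora box ("hintslight" is an assumption here, not a confirmed mapping of Fedora's defaults):

        <?xml version="1.0"?>
        <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
        <fontconfig>
          <!-- soften hinting so glyph stems are not snapped to whole pixels -->
          <match target="font">
            <edit name="hintstyle" mode="assign"><const>hintslight</const></edit>
          </match>
        </fontconfig>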

    Read the article

  • Vim: tab-align multiple lines?

    - by Andrew Bolster
    In GUI-style editors, you can generally select multiple lines and press Tab a few times to move all the lines across (or Shift-Tab to go back). I have no idea how to do this in Vim. I googled around and couldn't find any straight answer, so I came here.
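    For comparison, a short sketch of the visual-mode equivalent in Vim (standard commands; the shiftwidth value is just an example):

        Vjj                 " visually select this line and the two below it
        >                   " shift the selection right by one shiftwidth
        gv                  " reselect the same block to shift it again
        :set shiftwidth=4   " the width that > and < move by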

    Read the article

  • rsync general question

    - by CaptnLenz
    I'm trying to use rsync. At first, everything looks very good:

        rsync -Pniahv -e ssh /home/xxx/Videos/ [email protected]:"/shares/Public/Shared\ Videos/" --stats
        ...
        <f+++++++++ Serien/blah.avi
        <f+++++++++ Serien/blah S01E01
        <f+++++++++ Serien/blah - S01E02
        <f+++++++++ Serien/blah - S01E03
        <f+++++++++ Serien/blah - S01E04
        <f+++++++++ Serien/blah - S01E05
        <f+++++++++ Serien/blah - S01E06
        <f+++++++++ Serien/blah - S01E07
        ...
        Number of files: 232
        Number of files transferred: 223
        Total file size: 118.24G bytes
        Total transferred file size: 117.51G bytes
        Literal data: 0 bytes
        Matched data: 0 bytes
        File list size: 9.46K
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 10.18K
        Total bytes received: 712

    After that, I copied some of the files manually and ran rsync again in dry-run mode:

        rsync -Pniahv -e ssh /home/xxx/Videos/ [email protected]:"/shares/Public/Shared\ Videos/" --stats
        ...
        <f..tpo.... Serien/blah.avi
        <f..tpo.... Serien/blah S01E01
        <f..tpo.... Serien/blah - S01E02
        <f..tpo.... Serien/blah - S01E03
        <f..tpo.... Serien/blah - S01E04
        <f..tpo.... Serien/blah - S01E05
        <f..tpo.... Serien/blah - S01E06
        <f..tpo.... Serien/blah - S01E07
        ...
        (exactly the same --stats output as the first run)

    Why has nothing changed in the --stats, given that only the permissions and the timestamps need to be updated and the full files do not need to be copied?
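    For readers decoding the itemized output above: rsync's -i column is positional, and in a dry run it marks what would change (a summary of the relevant flags from the rsync man page):

        # <f..tpo.... decodes as:
        #   <   the change is a transfer to the remote host
        #   f   the item is a regular file
        #   .   that attribute matches (each position is one attribute)
        #   t   modification time differs
        #   p   permissions differ
        #   o   owner differs
        #   +++++++++  (first run) the item is newly created on the receiver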

    Read the article

  • How do I hook into Tar with BASH?

    - by orb
    Long Story Short: I am working with tar archives that contain PNG images in base64 encoding. I would like to use Bash (or whatever else works) to hook into tar's extraction so the PNG images are decoded from base64 back to standard PNG encoding after the files are unpacked. A simple

        cat $input-file | base64 -d > $output-file

    will successfully decode the images. Is there a way I can hook into tar -xf so that users have to do no (or minimal) extra work to decode the images? In the GNU Tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions to be hooked into various moments of Tar's execution. However, the documentation explains that these variables, along with other variables that configure Tar, live in a file named backup-specs. Unfortunately, the path to this file is not given, and running sudo find / -name backup-specs tells me it is not present on my Ubuntu 13.04 system.

    Background information not included in the Long Story Short: I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a tar archive (haven't pushed that to GitHub yet). However, the images present in said tar archive cannot be manipulated unless they are decoded from base64.
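    Those variables belong to GNU tar's backup/restore scripts rather than to plain tar -xf, so one pragmatic route is a small shell wrapper; a minimal sketch (the function name is mine, and it assumes every extracted .png really is base64 text):

        untar_decode() {
            local archive=$1 dest=${2:-.}
            tar -xf "$archive" -C "$dest" &&
            # decode every extracted .png in place
            find "$dest" -name '*.png' -print0 |
            while IFS= read -r -d '' f; do
                base64 -d "$f" > "$f.tmp" && mv "$f.tmp" "$f"
            done
        }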

    Read the article

  • Sendmail SMART_HOST not working

    - by daniel
    Hello, I've defined SMART_HOST to be a specific server, let's call it foo.bar.com. However, when I send a test mail using sendmail -t, sendmail tries to use mx.bar.com, which subsequently rejects my mail. I've verified that foo.bar.com works and that mx.bar.com does not (yay telnet). I've recompiled sendmail.mc via make, make -C and m4. I've verified the DS entry in sendmail.cf. I've restarted sendmail correctly. I'm not sure how to proceed at this point. Any ideas? Here is my SMART_HOST line:

        define(`SMART_HOST',`foo.bar.com')dnl

    ...and here is the result of a test mail. It never tries to use foo.bar.com; instead it uses mx.bar.com.

        $ echo subject: test; echo | sendmail -Am -v -flocaluser -- [email protected]
        subject: test
        [email protected]... Connecting to mx.bar.com via relay...
        220 mx.bar.com ESMTP
        >>> EHLO myhost.bar.com
        250-mx.bar.com
        250-8BITMIME
        250 SIZE 52428800
        >>> MAIL From:<[email protected]> SIZE=1
        250 sender <[email protected]> ok
        >>> RCPT To:<[email protected]>
        550 #5.1.0 Address rejected.
        >>> RSET
        250 reset
        localuser... Connecting to local...
        localuser... Sent
        Closing connection to mx.bar.com.
        >>> QUIT
        221 mx.bar.com

    And last, here is a test mail sent using foo.bar.com:

        $ hostname
        myhost.bar.com
        $ telnet foo.bar.com 25
        Trying ***.***.***.***...
        Connected to foo.bar.com (***.***.***.***).
        Escape character is '^]'.
        220 foo.bar.com ESMTP Sendmail 8.14.1/8.14.1/ITS-7.0/ldap2-1+tls; Tue, 21 Dec 2010 13:27:44 -0700 (MST)
        helo foo
        250 foo.bar.com Hello myhost.bar.com [***.***.***.***], pleased to meet you
        mail from: [email protected]
        250 2.1.0 [email protected]... Sender ok
        rcpt to: [email protected]
        250 2.1.5 [email protected]... Recipient ok
        data
        354 Enter mail, end with "." on a line by itself
        testing
        .
        250 2.0.0 oBLKRikZ003758 Message accepted for delivery
        quit
        221 2.0.0 foo.bar.com closing connection
        Connection closed by foreign host.

    Any ideas? Thanks

    Read the article

  • How does Kerberos work with SSH?

    - by Phil
    Suppose I have four computers: a laptop (L), Server1 (S1), Server2 (S2), and a Kerberos server (K):

    - I log in from L to S1 using PuTTY or SSH, giving my username and password
    - From S1 I then SSH to S2. No password is needed, as Kerberos authenticates me

    Describe all the important SSH and KRB5 protocol exchanges: "L sends username to S1", "K sends ... to S1", etc. (This question is intended to be community-edited; please improve it for the non-expert reader.)
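    For background, the client-side knobs involved are OpenSSH's GSSAPI options; a sketch of what ~/.ssh/config on L might contain (host names are placeholders):

        Host s1 s2
            GSSAPIAuthentication yes
            GSSAPIDelegateCredentials yes

    GSSAPIDelegateCredentials forwards the Kerberos ticket-granting ticket to S1, which is what allows the S1-to-S2 hop to authenticate without a password.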

    Read the article

  • Dovecot/Postfix: can send & receive via Webmin, however SquirrelMail and Outlook fail to connect

    - by Jonathan
    I have just finished setting up Dovecot and Postfix on my server (CentOS 5.5/Apache) earlier today. So far I've been able to get email working through Webmin (I can send/receive to and from external domains). However, attempting to telnet xxx.xxx.xx.xxx 110 returns the following errors:

        Connected to xxx.xxx.xx.xxx.
        Escape character is '^]'.
        +OK Dovecot ready.
        USER mailtest
        +OK
        PASS *********
        +OK Logged in.
        -ERR [IN-USE] Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 22:55:48]
        Connection closed by foreign host.

    which further logs the following errors:

        dovecot: Feb 11 21:32:48 Info: pop3-login: Login: user=, method=PLAIN, rip=::ffff:xxx.xxx.xx.xxx, lip=::ffff:xxx.xxx.xx.xxx, TLS
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 21:32:48]
        dovecot: Feb 11 21:32:48 Info: POP3(mailtest): Couldn't open INBOX top=0/0, retr=0/0, del=0/0, size=0

    Also, when attempting to log in to SquirrelMail or access the account via Thunderbird/Live Mail etc., it obviously fails with a similar issue. Any suggestions or outside thinking on this would be a massive help! I've pretty much exhausted every resource and tried every suggestion for my dovecot.conf file, but so far nothing seems to work :( I feel like it may be a permissions/ownership issue, but I'm lost as to specifics.
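    Given the stat() failures in the log, a quick way to test the permissions theory (the chown/chmod lines are a guess based on the paths shown, not a confirmed fix):

        ls -ld /home/mailtest /home/mailtest/MailDir /home/mailtest/MailDir/cur
        # if cur/ is missing or not owned by the mail user, one possible repair:
        chown -R mailtest:mailtest /home/mailtest/MailDir
        chmod -R u+rwX /home/mailtest/MailDir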

    Read the article

  • insserv: Script <SCRIPT_NAME> is broken: missing end of LSB comment.

    - by udo
    I am getting this error when running insserv -r udo-startup.sh:

        insserv: Script udo-startup.sh is broken: missing end of LSB comment.
        insserv: exiting now!

    The content of udo-startup.sh is this:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          udo-startup.sh
        # Required-Start:    $local_fs $remote_fs $network $syslog
        # Required-Stop:     $local_fs $remote_fs $network $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: -
        # Description:       -
        ### END INIT INF

        ID=$(xinput list | grep -i touchpad | sed '/TouchPad/s/^.*id=\([0-9]*\).*$/\1/')
        xinput set-prop $ID "Device Enabled" 0
        exit 0

    Read the article

  • Nagios Wouldn't Start, Now Won't Stop!

    - by Bart B
    I ran an update on a CentOS server running Nagios. After the update, Nagios failed to start. The error in the logs was:

        Failed to obtain lock on file /var/run/nagios.pid: Permission denied

    So I checked, and there was no PID file for Nagios in /var/run. I created one and gave it the following permissions:

        -rwxr--r-- 1 nagios nagios 6 May 31 11:58 nagios.pid

    Nagios then started and seems to be running normally. The only problem is that it now refuses to stop, so I can't restart it to add new servers and services to be monitored! When I issue the command service nagios stop, I get [FAILED], but nothing at all gets output to the log, and the service remains up. Any ideas on how I can get the service to stop now? I'm running the RPM version, installed via yum from the RPMForge repositories. The server is CentOS 5.5.
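    One thing worth checking, since the PID file was created by hand: whether it actually contains the PID of the running daemon, which is what the init script signals on stop (a diagnostic sketch; the placeholder PID comes from the ps output):

        cat /var/run/nagios.pid
        ps aux | grep '[n]agios'
        # if the two disagree, the init script cannot find the daemon;
        # as a last resort, stop it by hand with the PID from ps:
        kill <nagios-pid>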

    Read the article

  • Problem with testsaslauthd and kerberos5 ("saslauthd internal error")

    - by danorton
    The error message “saslauthd internal error” seems like a catch-all for saslauthd, so I’m not sure if it’s a red herring, but here’s the brief description of my problem. This Kerberos command works fine:

        $ echo getprivs | kadmin -p username -w password
        Authenticating as principal username with password.
        kadmin: getprivs
        current privileges: GET ADD MODIFY DELETE

    But this SASL test command fails:

        $ testsaslauthd -u username -p password
        0: NO "authentication failed"

    saslauthd works fine with "-a sasldb", but the above is with "-a kerberos5". This is the most detail I seem to be able to get from saslauthd:

        saslauthd[]: auth_krb5: krb5_get_init_creds_password: -1765328353
        saslauthd[]: do_auth : auth failure: [user=username] [service=imap] [realm=] [mech=kerberos5] [reason=saslauthd internal error]

    Kerberos seems happy:

        krb5kdc[](info): AS_REQ (4 etypes {18 17 16 23}) 127.0.0.1: ISSUE: authtime 1298779891, etypes {rep=18 tkt=18 ses=18}, username@REALM for krbtgt/DOMAIN@REALM

    I’m running Ubuntu 10.04 (lucid) with the latest updates, namely:

        Kerberos 5 release 1.8.1
        saslauthd 2.1.23

    Thanks for any clues.

    Read the article

  • rm -rf not erasing directory

    - by chief
    I am attempting to erase a directory called apps. When I run rm -rf apps, it looks like it erases it for the moment. When I log back on to the server, the directory is still there, though it is highlighted in green:

        drwxrwxrwx 3 user user 4096 2010-04-24 18:33 apps

    This is on Ubuntu 9.10.
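    Two quick checks that may narrow this down (standard tools; the lsof line assumes some process is holding the directory open or recreating it):

        ls -ld apps    # green in common ls color schemes marks an other-writable (777) directory
        lsof +D apps   # list any processes with files open under apps/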

    Read the article

  • Copy a bootable partition

    - by Dima
    I have a disk image with 3 partitions. The first partition, (hd0,0), is bootable and carries this GRUB (GRUB 1) configuration file:

        default=0
        timeout=5
        title Bank A
                root (hd0,1)
                chainloader +1
        title Bank B
                root (hd0,2)
                chainloader +1

    The partitions (hd0,1) and (hd0,2) are also bootable. I'm trying to clone partition (hd0,1) to (hd0,2) by creating a device map using kpartx and copying the whole partition with the dd command. The problem is that after the cloning, the cloned partition does not boot (but all the files are OK). What is wrong? I need both partitions to be identical (I'm using them for fail-over purposes on an embedded device).
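    For reference, a sketch of the cloning steps as described (the image name and mapper paths are assumptions; kpartx numbers the mappings from p1):

        kpartx -av disk.img        # map the partitions to /dev/mapper/loop0p1..p3
        dd if=/dev/mapper/loop0p2 of=/dev/mapper/loop0p3 bs=4M conv=fsync
        kpartx -d disk.img         # tear the mappings down again

    One thing worth ruling out: if the filesystem on (hd0,1) carries a UUID or label that anything in the boot chain looks up, the clone now duplicates it, which can misdirect the boot.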

    Read the article

  • Ubuntu: Multiple NICs, one used only for Wake-On-LAN

    - by jcwx86
    This is similar to some other questions, but I have a specific need which is not covered in them. I have an Ubuntu server (11.10) with two NICs: one is built into the motherboard and the other is a PCI Express card. I want to have my server connected to the internet via my NAT router and also have it able to wake from suspend using a magic packet (henceforth referred to as Wake-on-LAN, WOL). I can't do this with just one of the NICs, because each has an issue: the built-in NIC will crash the system if it is placed under heavy load (typically downloading data), whilst the PCI Express NIC will crash the system if it is used for WOL. I have spent some time investigating these individual problems, to no avail. My plan is thus: use the built-in NIC solely for WOL, and use the PCI Express card for all other network communication. Since I send the WOL magic packet to a specific MAC address, there is no danger of hitting the wrong NIC, but there is a danger of using the built-in NIC for general network access, overloading it and crashing the system. Both NICs are wired to the same LAN with address space 192.168.0.0/24. The built-in Ethernet card is set to have interface name eth1 and the PCI Express card is eth0 in Ubuntu's udev persistent rules (so they stay the same upon reboot). I have been trying to set this up with the /etc/network/interfaces file. Here is where I am currently:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
                address 192.168.0.3
                netmask 255.255.255.0
                network 192.168.0.0
                broadcast 192.168.0.255
                gateway 192.168.0.1

        auto eth1
        iface eth1 inet static
                address 192.168.0.254
                netmask 255.255.255.0

    I think that by not specifying a gateway for eth1, I prevent it being used for outgoing requests. I don't mind if it can be reached on 192.168.0.254 on the LAN, i.e. via SSH -- its IP is irrelevant to WOL, which is based on MAC addresses -- I just don't want it to be used to access internet resources. My kernel routing table (from route -n) is:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1

    My question is this: is this sufficient for what I want to achieve? My research has thrown up the idea of using static routing to specify that eth1 should only be used for WOL on the local network, but I'm not sure that is necessary. I have been monitoring the activity of the interfaces using iptraf, and it seems that eth0 takes the vast majority of the packets, though I am not sure this will be consistent given my configuration. Since a misconfiguration will likely crash the system, it is important to me to have this set up correctly!

    Read the article

  • Web-based HPC cluster node management

    - by Skuja
    Hello, I am working on my school diploma thesis. The main goal is to create a web-based application where logged-in users can see free and busy nodes, turn them on and off, see what processes they are running, etc. I figured I could do something like this: write a cron job that runs every 30 seconds or so and pings each node to find out whether it is on or off, then writes the results to a file. My web app (which I will write in PHP) could then read that info from the file. Would that be a good solution? How would you suggest I do it? And finally, are there any existing solutions (not necessarily web-based) for management of cluster nodes?
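    A minimal sketch of the polling side of that design (the node names and output path are hypothetical):

        #!/bin/bash
        # ping each node once and record up/down status for the PHP app to read
        NODES="node01 node02 node03"
        OUT=/var/run/cluster-status.txt
        : > "$OUT"
        for n in $NODES; do
            if ping -c 1 -W 1 "$n" >/dev/null 2>&1; then
                echo "$n up" >> "$OUT"
            else
                echo "$n down" >> "$OUT"
            fi
        done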

    Read the article

  • How do I set up unison to sync a folder one way

    - by Rob
    I have a 1 TB NAS with a 1 TB USB external hard drive attached. I have prepared the filesystem on the USB disk and mounted it. I want to 100% sync my data from my NAS to the USB disk, but I want it to be incremental and to treat the NAS as the only 'master': if a file changes on the USB external hard drive, I want the sync to ignore that change, since it is not the live version (not that I think the files will change on the USB disk, but I'm paranoid the live copy could get overwritten). Also, if a file gets deleted on the live side, I want to retain the deleted file on the USB disk. Can unison sync one-way and achieve the above for me? If so, will simply unison source/ target/ work? Thanks, Rob
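    A hedged sketch of two approaches, with placeholder mount points. Note that unison's -force makes one side win every difference but will still propagate deletions, so plain rsync without --delete matches the keep-deleted-files requirement more directly:

        # unison, with the NAS side winning every difference
        unison /mnt/nas /mnt/usb -force /mnt/nas -batch
        # rsync alternative: changes flow NAS -> USB only, and files deleted
        # on the NAS are kept on the USB disk because --delete is not given
        rsync -a /mnt/nas/ /mnt/usb/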

    Read the article

  • Post-compile PHP 5.4 cURL installation

    - by user140657
    I recently compiled PHP 5.4 from source on CentOS 6, using this configuration:

        # ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql
        # make
        # make install
        # cp php.ini-dist /usr/local/lib/php.ini

    I realize now that I do not have cURL installed, and I don't know how to install cURL after a compiled installation of PHP. Using yum install php-curl installs cURL for PHP 5.3. I already tried that with an Apache restart, and it did not show up in my phpinfo file. How do I install cURL under these circumstances?
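    One common approach, sketched under the assumption that the libcurl headers are not yet installed: rebuild PHP with cURL support compiled in rather than packaged separately:

        yum install libcurl-devel
        ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --with-curl
        make && make install

    --with-curl is a stock PHP configure switch; passing it a prefix (e.g. --with-curl=/usr) is only needed if the headers live somewhere non-standard.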

    Read the article

  • Reset user passwd when you don't know it

    - by warren
    I have a small problem. I have shared keys set up on my domain, so I never type my password to log in anymore, and now I've forgotten that password. This is a problem because only my user can sudo, and password authentication for root has been disabled, so without my password I cannot do maintenance on my web server. Is there a way to reset my password as my [now only] key-authenticated user? Specifically, can this be done on CentOS 4?
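    Without the old password, sudo is out, so any reset needs a path to root that bypasses it. If physical or console access is available, the classic CentOS 4-era route is single-user mode (a sketch; 'warren' stands in for the actual account, and this assumes single-user mode hasn't itself been password-protected):

        # at the GRUB menu, append the word 'single' to the kernel line and boot;
        # a root shell comes up without a password prompt, then:
        passwd warren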

    Read the article

  • Secure OpenVPN using iptables

    - by bob franklin smith harriet
    Hey, I set up an OpenVPN server and it works OK. The next step is to secure it. I opted to use iptables to only allow certain connections through, but so far it is not working. I want to enable access to the network behind my OpenVPN server and allow other services (web access). When iptables is disabled or set to allow all, this works fine; with my rules below, it does not. Also note that I have already configured OpenVPN itself to do what I want, and it works fine; it only fails when iptables is started. Any help in telling me why this isn't working will be appreciated.

    These are the lines I added in accordance with OpenVPN's recommendations. Unfortunately, testing shows that they are required. They seem incredibly insecure, though; is there any way to get around using them?

        # Allow TUN interface connections to OpenVPN server
        -A INPUT -i tun+ -j ACCEPT
        # Allow TUN interface connections to be forwarded through other interfaces
        -A FORWARD -i tun+ -j ACCEPT
        # Allow TAP interface connections to OpenVPN server
        -A INPUT -i tap+ -j ACCEPT
        # Allow TAP interface connections to be forwarded through other interfaces
        -A FORWARD -i tap+ -j ACCEPT

    These are the new chains and commands I added to restrict access as much as possible. Unfortunately, with these enabled, all that happens is that the OpenVPN connection establishes fine, and then there is no access to the rest of the network behind the OpenVPN server. Note that I am editing the main iptables file, and I am paranoid, so all ports and IP addresses have been altered; the -N lines that create the chains appear before this, so ignore that they are not shown. I have added some explanations of what I intended these rules to do, so you don't waste time figuring out where I went wrong:

        # accept the VPN over port 1192
        -A INPUT -p udp -m udp --dport 1192 -j ACCEPT
        -A INPUT -j INPUT-FIREWALL
        -A OUTPUT -j ACCEPT
        # packets forwarded from the 10.10.1.0 network (all OpenVPN clients)
        # to the internal network (192.168.5.0) jump to the [sic] FOWARD-FIREWALL chain
        -A FORWARD -s 10.10.1.0/24 -d 192.168.5.0/24 -j FOWARD-FIREWALL
        # same as above, except for a different internal network
        -A FORWARD -s 10.10.1.0/24 -d 10.100.5.0/24 -j FOWARD-FIREWALL
        # reject anything not from either of those two ranges
        -A FORWARD -j REJECT
        -A INPUT-FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT-FIREWALL -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT-FIREWALL -j REJECT
        -A FOWARD-FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT
        # 80, 443 and 53 are accepted
        -A FOWARD-FIREWALL -m tcp -p tcp --dport 80 -j ACCEPT
        -A FOWARD-FIREWALL -m tcp -p tcp --dport 443 -j ACCEPT
        # 192.168.5.150 = OpenVPN server
        -A FOWARD-FIREWALL -m tcp -p tcp -d 192.168.5.150 --dport 53 -j ACCEPT
        -A FOWARD-FIREWALL -m udp -p udp -d 192.168.5.150 --dport 53 -j ACCEPT
        -A FOWARD-FIREWALL -j REJECT
        COMMIT

    Now I wait :D

    Read the article

  • Set up a symbolic link that users can access with FTP

    - by Dan Shields
    I have a folder on a server where a client of mine has a bunch of folders into which they upload images and other assets for a site; I symlink those folders into the root of the website. This way I can give them FTP access to upload whatever they need without having access to the root level of the website. I have another folder that I can't set up as a symbolic link into their folder, which has images they need to upload to. I know that if I create the symbolic link the other way around, so that the link sits in their folder, they can't access it through FTP. There has to be a way, without creating two separate FTP accounts, to give a user the ability to upload to a directory that is outside of their home directory. I see that this is FTP-server specific and that there are settings that can be changed, but I haven't seen any clear-cut answers for the best way to handle it.
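    Since most FTP daemons refuse to follow symlinks out of a chrooted home directory, a bind mount is the usual workaround; a sketch with hypothetical paths:

        mount --bind /var/www/site/images /home/client/images
        # to make it survive reboots, the matching /etc/fstab entry would be:
        # /var/www/site/images  /home/client/images  none  bind  0  0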

    Read the article
