Search Results

Search found 26810 results on 1073 pages for 'linux scripting'.

Page 340/1073

  • Server Security

    - by mahatmanich
    I want to run my own root server (directly accessible from the web, without a hardware firewall) with Debian Lenny, Apache 2, PHP 5, MySQL, the Postfix MTA, SFTP (based on SSH) and maybe a DNS server. What measures/software would you recommend, and why, to lock this server down and minimize the attack surface? Web applications aside, this is what I have so far:
    - iptables (for general packet filtering)
    - fail2ban (brute-force attack defense)
    - ssh (change the default port, disable root access)
    - mod_security - is really clumsy and a pain (any alternative here?)
    - sudo - why should I use it? What is the advantage over normal user handling?
    - thinking about GreenSQL for MySQL (www.greensql.net)
    Is Tripwire worth looking at? Snort? What am I missing? What is hot and what is not? Best practices? I like "KISS" - keep it simple and secure. I know, it would be nice! Thanks in advance ...
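    Since the question is mostly about baseline hardening, here is a minimal sketch of the kind of iptables and sshd_config settings being discussed. The port numbers and the set of allowed services are assumptions for illustration, not a complete policy.

        # default-deny inbound, allow loopback and established traffic,
        # then open only the services this server actually exposes
        iptables -P INPUT DROP
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 2222 -j ACCEPT              # SSH on a non-default port (assumed)
        iptables -A INPUT -p tcp -m multiport --dports 25,80,443 -j ACCEPT

        # matching sshd_config lines for the "change default port, disable root access" item
        #   Port 2222
        #   PermitRootLogin no
        #   PasswordAuthentication no    # if key-based logins are already in place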

    Read the article

  • Do I need a DNS server?

    - by ajsie
    I have set up a website (LAMP) on a VPS from a hosting company. I'm wondering: in what circumstances would I want to set up a DNS server on my VPS? From what I have learned, DNS basically just converts domain names into IP addresses, and at the moment my domain provider is doing this with their DNS. So in what situations would I benefit from setting up my own DNS server?
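    Since the provider's nameservers are already doing the resolution, a quick way to see that in practice is to query them directly. The domain below is a stand-in for the asker's actual domain.

        # which nameservers are authoritative for the domain, and what they answer
        dig NS example.com +short
        dig A www.example.com +short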

    Read the article

  • Indirect Postfix bounces create new user directories

    - by hheimbuerger
    I'm running Postfix on my personal server in a data centre. I am not a professional mail hoster and not a Postfix expert; it is just used for a few domains served from that server. IIRC, I mostly followed this howto when setting up Postfix. Mail addressed to one of the domains the server manages is delivered locally (to /srv/mail) to be fetched with Dovecot. Mail to other domains requires SMTPS. The mailbox configuration is stored in MySQL.
    The problem I have is that I suddenly found new mailboxes being created on the disk. Let's say I have the domain 'example.com'. Then I would have lots of new directories, e.g. /srv/mail/example.com/abenaackart, /srv/mail/example.com/abenaacton, etc. There are no entries for these addresses in my database, neither as a mailbox nor as an alias. It's clearly spam from auto-generated names: most of them start with 'a', a few with 'b' and a couple of random ones with other letters.
    At first I was afraid of an attack, but all security restrictions seem to work. If I try to send mail to these addresses, I get a "Recipient address rejected: User unknown in virtual mailbox table" during the 'RCPT TO' stage. So I looked into the mail stored in these mailboxes. It turns out that all of them are bounces. It seems like all of them were sent from a randomly generated name to an alias that really exists on my system, but which points to an invalid destination address on another host. So Postfix accepted it, then tried to redirect it to another mail server, which rejected it. This bounced back to my Postfix server, which then took the bounce and stored it locally -- because it seemed to originate from one of the addresses it manages.
    Example: my Postfix server handles the example.com domain. [email protected] is configured to redirect to [email protected]. [email protected] has since been deleted from the Hotmail servers. A spammer sends mail with FROM:[email protected] and TO:[email protected]. My Postfix server accepts the mail and tries to hand it off to hotmail.com. hotmail.com sends a bounce back. My Postfix server accepts the bounce and delivers it to /srv/mail/example.com/bob.
    The last step is what I don't want. I'm not quite sure what it should do instead, but creating hundreds of new mailboxes on my disk is not what I want... Any ideas how to get rid of this behaviour? I'll happily post parts of my configuration, but I'm not really sure where to start debugging the problem at this point.
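    To see the forwarding chain the question describes (a local alias pointing at a dead remote address), Postfix's own lookup tools can replay the resolution. The map file name and the address below are illustrative assumptions; the actual map definitions come from virtual_alias_maps in main.cf.

        # show which alias maps are configured, then ask one how an address resolves
        postconf virtual_alias_maps
        postmap -q bob@example.com mysql:/etc/postfix/mysql-virtual-alias-maps.cf   # address and map path are illustrative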

    Read the article

  • Solr error; JNDI not configured for solr; Anybody know what this means?

    - by Camran
    I am installing Solr on my VPS (Ubuntu 9.10) via PuTTY. First I thought about installing Solr with Tomcat, but after installing Tomcat I changed my mind and went for the Jetty that comes with Solr. Now that I have set up everything on my server and try to start the "start.jar" file, I get some errors. Here is some text from the log file:
        2010-05-29 00:22:42.074::INFO:  jetty-6.1.3
        2010-05-29 00:22:42.134::INFO:  Extract jar:file:/var/www/webapps/solr.war!/ to /var/www/work/Jetty_0_0_0_0_8983_solr.war__solr__k1kf17/webapp
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: JNDI not configured for solr (NoInitialContextEx)
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader <init>
        INFO: Solr home set to 'solr/'
        May 29, 2010 12:22:42 AM org.apache.solr.servlet.SolrDispatchFilter init
        INFO: SolrDispatchFilter.init()
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: JNDI not configured for solr (NoInitialContextEx)
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
        May 29, 2010 12:22:42 AM org.apache.solr.core.CoreContainer$Initializer initialize
        INFO: looking for solr.xml: /var/www/solr/solr.xml
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader <init>
        INFO: Solr home set to 'solr/'
    Anybody know what this is? Thanks
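    Those INFO lines are not errors by themselves; they just say Solr fell back to the relative 'solr/' directory because neither a JNDI entry nor the solr.solr.home system property was set. One hedged way to make the home directory explicit (the path is the one the log was already probing):

        # start Jetty with an explicit Solr home instead of relying on the 'solr/' default
        cd /var/www
        java -Dsolr.solr.home=/var/www/solr -jar start.jar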

    Read the article

  • Alternatives to native LDAP

    - by Matt
    We've implemented an LDAP to NIS solution and have begun transitioning some systems to native LDAP binding for authentication and automount maps. Unfortunately we have a very mixed environment with more than 20 *nix variants. The setup for each variant is of course unique and has required various workarounds to get full functionality. We're now at the point where we're willing to revisit the solution and possibly migrate toward something like Likewise (http://www.likewise.org), but would like to know what others are using to solve this problem.

    Read the article

  • Limit Apache 2 Memory Usage

    - by UltraNurd
    I am running a hobby webserver off of an ancient Blue & White G3/300 running Debian PPC Squeeze (kernel 2.6.30). The performance is okay for a while after a restart, but it eventually gets more and more bogged down. Right now it's at 76 days uptime, and the main culprit seems to be the memory usage of 10+ apache2 processes. I think I need to lower the values for StartServers, MinSpareServers, and/or MaxSpareServers, but I'm not sure which one to adjust, and there are three sections for each depending on which MPM module is in use. How do I tell which of the following sections I need to change, and what are some reasonable values given that the box has 448 MB physical memory (weird upgrade history of one each 64, 128, and 256 MB sticks)?
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>
        <IfModule mpm_event_module>
            StartServers          2
            MaxClients          150
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>
    There aren't any other instances of StartServers in my apache2.conf, but none of those MPM modules appear in mods-available or mods-enabled. Ideas? Thanks!
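    On Debian, the MPM is built into whichever apache2-mpm-* package is installed rather than enabled through mods-enabled, which is a quick thing to check before editing any of the three sections. A small sketch of how one might confirm it and eyeball the per-process memory cost:

        # which MPM this build uses
        apache2ctl -V | grep -i 'server mpm'
        dpkg -l 'apache2-mpm-*' | grep ^ii

        # rough resident size of the running apache2 children, smallest to largest
        ps -C apache2 -o rss=,comm= | sort -n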

    Read the article

  • Duplicate IP address detection with multiple NICs

    - by sfink
    I am using arping -D to detect duplicate IP addresses within a network when setting up servers. (The network is controlled by someone else, and we have had many issues with IP allocation in the past.) It works fine as long as my host has a single NIC on a given VLAN, but when my host has more than one (I have one with 9 NICs on one VLAN and 1 on the other), arping -D always returns false collisions. The problem is that all 9 of my NICs respond to an ARP request for any of the IPs on those NICs. (These are real physical NICs, not aliases or anything.) I send out one ARP request packet and get 9 "is-at" ARP replies, one for each MAC address. I could implement my own solution by sniffing packets and checking for any replies with a MAC address other than the local NICs', but it seems like there ought to be an easier way.
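    The behaviour described (every NIC answering ARP for every local address) is the kernel's default ARP policy, so one direction worth sketching is to tighten that policy and to pin the probe to a single interface. The values below are the usual strict settings; whether they suit this routing setup is an assumption.

        # answer ARP only on the interface that actually owns the address
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2

        # and run the duplicate-address probe bound to one NIC
        arping -D -I eth0 192.0.2.10    # interface and address are illustrative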

    Read the article

  • Cannot run logwatch due to Date::Manip issue

    - by Quintin Par
    I tried to run logwatch as follows:
        [root@machine cron.daily]# ./0logwatch
        ERROR: Date::Manip unable to determine TimeZone.
        Execute the following command in a shell prompt: perldoc Date::Manip
        The section titled TIMEZONES describes valid TimeZones and where they can be defined.
    My date is as follows:
        [root@machine cron.daily]# date
        Thu Aug 23 06:25:21 GMT 2012
    Based on details in various forums I tried to fix this by setting /etc/timezone to "+0800", but it didn't work. My /etc/localtime points to /usr/share/zoneinfo/GMT and is managed by Puppet. How do I go about fixing this? I still want all my machines to be in the GMT timezone.
    EDIT: Sadly, neither change is working:
        [root@machine cron.daily]# cat /etc/TIMEZONE
        UTC
    Quanta's suggestion:
        [root@machine cron.daily]# cat ~/.bash_profile
        # .bash_profile

        # Get the aliases and functions
        if [ -f ~/.bashrc ]; then
                . ~/.bashrc
        fi

        # User specific environment and startup programs
        PATH=$PATH:$HOME/bin
        export TZ=GMT
        export PATH
        [root@machine cron.daily]# source ~/.bash_profile
        [root@machine cron.daily]# ./0logwatch
        ERROR: Date::Manip unable to determine TimeZone.
        Execute the following command in a shell prompt: perldoc Date::Manip
        The section titled TIMEZONES describes valid TimeZones and where they can be defined.
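    Old Date::Manip (the 5.x series shipped on RHEL-era systems) resolves the zone itself rather than trusting glibc, so it wants a zone name it recognizes (e.g. GMT), not an offset like "+0800". A small diagnostic sketch; Date_TimeZone is the 5.x helper function, and the script path follows the question:

        # what Date::Manip itself thinks the zone is (dying here reproduces the logwatch error)
        perl -MDate::Manip -e 'print Date_TimeZone(), "\n"'

        # try forcing a zone *name* just for one run
        env TZ=GMT perl -MDate::Manip -e 'print Date_TimeZone(), "\n"'
        env TZ=GMT /etc/cron.daily/0logwatch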

    Read the article

  • Is it possible to change the mount point used for external USB devices from /media to something else under GNOME?

    - by slm
    I'm using CentOS 5.x and am trying to change the mount point that gets used when I insert a USB thumb drive or external USB drive. They're showing up under /media/KINGSTON, for example. I'd like to change this so that they show up under /external/KINGSTON instead. If you must know my reasons for asking: I have a domain where /media is already used for something else, and it would be more work to move that domain's automount from /media to something else. I'm trying to explore all my options before I decide on a path forward. Thanks!

    Read the article

  • Getting more helpful tab completion prompts in bash?

    - by Rory McCann
    Let's say I have a directory with a few files in it like this:
        $ ls
        file1 file2 file3
    And I want to do some tab completion in bash:
        $ cat file<tab>
        file1 file2 file3
    I remember seeing someone doing tab completion where the shell bolded the next parts - in this case it would bold the 1, 2 and 3 of the filenames, which tells you what you should type next. I think this was a feature of zsh, but is there any way to get it in bash?

    Read the article

  • Memcached Debugging / Server Logs - Monitoring the Memcached Servers?

    - by user1179459
    I have a chat engine which is based on Memcached variables, putting them into arrays and reading them at the other end via jQuery. It works fine 95% of the time, but when the server load is high, Memcached (I presume it's Memcached) crashes and the browser gets stuck. I don't think it's a jQuery issue, since this only happens when the server load is very high. I need a way to monitor the Memcached servers, or somehow write a log file recording where the failures/errors come in. Any idea how I can do this, or any idea why the Memcached servers fail? I connect to Memcached as follows:
        $GLOBALS['MemCached'] = FALSE;
        $GLOBALS['MemCached'] = new Memcache;
        $GLOBALS['MemCached']->pconnect('localhost', 11211);
    My memcached init script is as follows:
        #! /bin/sh
        #
        # chkconfig: - 55 45
        # description:  The memcached daemon is a network memory cache service.
        # processname: memcached
        # config: /etc/sysconfig/memcached
        # pidfile: /var/run/memcached/memcached.pid

        # Standard LSB functions
        #. /lib/lsb/init-functions

        # Source function library.
        . /etc/init.d/functions

        PORT=11211
        USER=memcached
        MAXCONN=1024
        CACHESIZE=128
        OPTIONS=""

        if [ -f /etc/sysconfig/memcached ];then
            . /etc/sysconfig/memcached
        fi

        # Check that networking is up.
        . /etc/sysconfig/network

        if [ "$NETWORKING" = "no" ]
        then
            exit 0
        fi

        RETVAL=0
        prog="memcached"
        pidfile=${PIDFILE-/var/run/memcached/memcached.pid}
        lockfile=${LOCKFILE-/var/lock/subsys/memcached}

        start () {
            echo -n $"Starting $prog: "
            # Ensure that /var/run/memcached has proper permissions
            if [ "`stat -c %U /var/run/memcached`" != "$USER" ]; then
                chown $USER /var/run/memcached
            fi
            daemon --pidfile ${pidfile} memcached -d -p $PORT -u $USER -m $CACHESIZE -c $MAXCONN -P ${pidfile} $OPTIONS
            RETVAL=$?
            echo
            [ $RETVAL -eq 0 ] && touch ${lockfile}
        }
        stop () {
            echo -n $"Stopping $prog: "
            killproc -p ${pidfile} /usr/bin/memcached
            RETVAL=$?
            echo
            if [ $RETVAL -eq 0 ] ; then
                rm -f ${lockfile} ${pidfile}
            fi
        }

        restart () {
            stop
            start
        }

        # See how we were called.
        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            status)
                status -p ${pidfile} memcached
                RETVAL=$?
                ;;
            restart|reload|force-reload)
                restart
                ;;
            condrestart|try-restart)
                [ -f ${lockfile} ] && restart || :
                ;;
            *)
                echo $"Usage: $0 {start|stop|status|restart|reload|force-reload|condrestart|try-restart}"
                RETVAL=2
                ;;
        esac

        exit $RETVAL
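    memcached itself only logs to stderr when started with -v/-vv, so for a first look the text-protocol "stats" command is usually the lighter option. A sketch of both, assuming the init script above; the log path and the nc flags are illustrative (netcat variants differ on -q):

        # poll hit/miss/eviction/connection counters every few seconds
        watch -n 5 'echo stats | nc -q 1 localhost 11211'

        # or run the daemon in the foreground with verbose logging while reproducing the load spike
        memcached -u memcached -p 11211 -m 128 -c 1024 -vv >> /var/log/memcached-debug.log 2>&1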

    Read the article

  • VMware Server 2.0.2 and Firefox 3.6 RC1

    - by Mads
    Hi, it seems that VMware Server 2.0.2 and Firefox 3.6 RC1 don't like each other. I have a reproducible problem on different networks, each with the same software configuration (FF 3.6 RC1 and VMware Server 2.0.2 on Ubuntu 8.04.3 LTS 64-bit). The login screen doesn't show up and Firefox reports that it cannot load the page at all. The redirect is done (from http://:8222 to https://:8333). Maybe someone can help?

    Read the article

  • Trouble serving vhosts when trying to set up wildcard subdomains with dnsmasq in local development environment

    - by Jeremy Kendall
    I'm trying to get wildcard DNS enabled on my laptop using dnsmasq. I realize that this has been asked and answered more than once on this forum, but I can't get the solution to work for me. Steps taken so far:
    - Installed dnsmasq
    - Set address=/example.dev/127.0.0.1 in dnsmasq.conf
    - Set listen-address=127.0.0.1 in dnsmasq.conf
    - Ensured nameserver 127.0.0.1 is in /etc/resolv.conf
    - Set prepend domain-name-servers 127.0.0.1; in /etc/dhcp3/dhclient.conf
    - Created a vhost for example.dev
    - Restarted apache and dnsmasq
    Note: example.dev is not set in /etc/hosts. My vhost for example.dev:
        <VirtualHost *:80>
            ServerName example.dev
            DocumentRoot /home/jkendall/public_html/example/public
            ServerAlias *.example.dev
            # This should be omitted in the production environment
            SetEnv APPLICATION_ENV development
            <Directory /home/jkendall/public_html/example/public>
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
    The setup above will serve example.dev locally without any problem. It will also serve test.example.dev, but test.example.dev returns the default Apache "It works!" index.html from /var/www rather than my index.php in /home/jkendall/public_html/example/public. The solution in this Server Fault thread suggests that address=/.example.dev/127.0.0.1 would resolve my problem, but when I try to use that solution, restarting dnsmasq fails with the error message "dnsmasq: error at line 62 of /etc/dnsmasq.conf". For grins, I moved my project over to /var/www/example and modified the vhost appropriately. I got the same result as described above. At this point I'm not sure what other steps I can take to resolve the issue. Thoughts?
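    Since test.example.dev already resolves (it reaches Apache, just the wrong vhost), the name-resolution and vhost layers can be checked separately. A small diagnostic sketch; the Host-header trick shows which vhost Apache picks independently of DNS:

        # confirm dnsmasq answers for the apex and for an arbitrary subdomain
        dig +short example.dev @127.0.0.1
        dig +short test.example.dev @127.0.0.1

        # ask Apache directly which vhost handles the subdomain
        curl -s -H 'Host: test.example.dev' http://127.0.0.1/ | head
        apache2ctl -S    # lists parsed vhosts and which one is the default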

    Read the article

  • MySQL open files limit

    - by Brian
    This question is similar to set open_files_limit, but there was no good answer. I need to increase my table_open_cache, but first I need to increase the open_files_limit. I set the option in /etc/mysql/my.cnf:
        open-files-limit = 8192
    This worked fine in my previous install (Ubuntu 8.04), but now in Ubuntu 10.04, when I start the server up, open_files_limit is reported to be 1710. That seems like a pretty random number for the limit to be clipped to. Anyway, I tried getting around it by adding a line like this in /etc/security/limits.conf:
        mysql hard nofile 8192
    I also tried adding this to the pre-start script in MySQL's upstart config (/etc/init/mysql.conf):
        ulimit -n 8192
    Obviously neither of those things worked. So where is the hoop that has been added between Ubuntu 8.04 and 10.04 through which I must jump in order to actually increase the open files limit?
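    On 10.04 mysqld is started by upstart, which does not go through PAM (so /etc/security/limits.conf is ignored), and a ulimit set in the pre-start script does not carry over to the main process. One direction worth sketching is raising the limit in the upstart job itself; the numbers are illustrative:

        # /etc/init/mysql.conf -- add near the top of the job definition
        limit nofile 16384 16384

        # then restart the job and check what mysqld actually got
        restart mysql
        mysql -e "SHOW VARIABLES LIKE 'open_files_limit'"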

    Read the article

  • How to get AMD Catalyst working on Arch x86_64

    - by gh403
    I've got a Dell Inspiron 15R 7520 with AMD's hybrid "PowerXpress" graphics. The integrated graphics card is (if I understand it correctly) integrated with the i7-3612QM processor, and the discrete graphics card is a "Southern Islands" Radeon HD 7730M. The integrated graphics work perfectly under Arch. However, the discrete graphics don't. I have tried several different methods, and the one that seems to get me the farthest with the least effort is the AUR package catalyst-total-pxp. After installing, rebooting, and issuing the commands
        # aticonfig --initial
        # pxp_switch_catalyst amd
        # X
    X completely fails to start. The X log can be found here. I don't understand what is failing; potentially it has something to do with the way my card is hooked up -- I think it's muxless, but I really don't know. What is the matter here? Any help would be appreciated.
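    Whether the setup is muxless can usually be read from how the two adapters enumerate on the PCI bus; this is only a diagnostic sketch, not a Catalyst fix:

        # list both GPUs; on a typical muxless laptop the Intel IGP stays the primary
        # (boot) VGA device and drives the panel, with the Radeon as a secondary adapter
        lspci -nn | grep -Ei 'vga|display'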

    Read the article

  • Remote Sending of Emails via SMTP/EXIM Issue

    - by Christian Noel
    I have been encountering a problem when sending messages via Exim. Here is the scenario: I have two servers, let's just say:
    - host1.com = where all my apps and programs are hosted.
    - host2.com = another server which handles some apps but is also my SMTP mail server.
    WHM and cPanel are installed on both hosts, as well as Exim. Right now, messages are being sent out as [email protected] to clients. host1.com uses [email protected] so that it can send messages outbound as well. Here's the problem: a few hours after a fresh reboot of host1.com, sending messages from host1.com is no longer possible, because I encounter an error that states:
        system/vendor/swift/Swift/Connection/SMTP.php [309]: The SMTP connection failed to start [tls://mail.host2.com]: fsockopen returned Error Number 110 and Error String 'Connection timed out'
    Also note that this was working fine earlier (like 10 hours ago), but then it suddenly fails. Every time I restart host1.com, sending messages works again. I have checked logs and traces, but to no avail; the only means of fixing this problem is restarting host1.com.
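    Error 110 is host1's TCP connect attempt timing out, so it helps to reproduce the failure outside PHP/Swift while it is happening. The port is an assumption (465 is the usual tls:// SMTPS port; use whatever the Swift transport is configured with), and the log path follows cPanel's usual layout:

        # can host1 reach the SMTPS port at all right now?
        nc -vz mail.host2.com 465

        # if the TCP connection opens, check the TLS handshake and banner
        openssl s_client -connect mail.host2.com:465 -quiet

        # on host2, see whether exim is still accepting and delivering mail
        exim -bp | head
        tail -f /var/log/exim_mainlog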

    Read the article

  • Using an allowed user list with VSFTPD

    - by Naftuli Tzvi Kay
    According to the wiki here, you can allow only certain users to log in over FTP using the following configuration in your /etc/vsftp.conf file:
        userlist_enable=YES
        userlist_file=/etc/vsftp.user_list
        userlist_deny=NO
    I've configured my system this way, and I only have one user which I'd like to expose over FTP, named streams, so my /etc/vsftp.user_list looks like this:
        streams
    Interestingly enough, I cannot log in once I enable the user list. If I change userlist_enable to NO, then things work properly, but if I enable it, I can't log in at all; it just keeps trying to reconnect. I don't get a "login failed" message, it just keeps trying to reconnect when using lftp. My /etc/vsftp.conf file is available on Pastebin here and my /etc/vsftp.user_list is available here. What am I doing wrong here? I'd just like to make only the streams user able to log in.
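    With userlist_deny=NO the file becomes an allow list that is checked before the password prompt, so the usual culprits are the file not being the one vsftpd actually reads, stray whitespace in it, or a separate PAM-level deny list. A small diagnostic sketch; the ftpusers paths are common defaults and may not all exist on this system:

        # show the entry exactly as vsftpd will read it (each line should end in a bare "$")
        cat -A /etc/vsftp.user_list

        # check PAM-level deny lists that block users regardless of the vsftpd allow list
        grep -n streams /etc/ftpusers /etc/vsftpd.ftpusers /etc/vsftpd/ftpusers 2>/dev/null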

    Read the article

  • running vmstat around a command

    - by svrist
    I'm trying to automate a long list of database experiments. I would like to save vmstat output while specific parts of the experiments are running. Does something exist along the lines of time(1) or script(1) that I could use to say: "while this command is running, save vmstat 2 (or similar) output to a file"? I am aware that vmstat gives me stats for the complete system, including other processes, but that is OK for my usage. (Running Ubuntu 9.10, if that makes a difference.)
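    Nothing standard wraps a command the way time(1) does for vmstat, but a few lines of shell get the same effect. A minimal sketch, assuming the experiment is a single command passed as arguments (the wrapper name is illustrative):

        #!/bin/sh
        # usage: ./with-vmstat.sh ./run_experiment.sh arg1 arg2
        logfile="vmstat-$(date +%Y%m%d-%H%M%S).log"
        vmstat 2 > "$logfile" &        # start sampling in the background
        sampler=$!
        "$@"                           # run the command under test
        status=$?
        kill "$sampler"                # stop sampling when the command exits
        exit "$status"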

    Read the article

  • "File" exists or not

    - by SnailTang
    Output of ls -il:
        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        total 55456
        ? -????????? ? ? ? ? ? éaj/p+st.ó·e
        ? -????????? ? ? ? ? ? éaj/p+st.ó·e
        ? -????????? ? ? ? ? ? é@j/p¦ft.¦·N
        ? -????????? ? ? ? ? ? é@j/p¦ft.¦·N
    When I try to display these files I get the names p+st.ó·e and p¦ft.¦·N. Where do these files (or whatever they are) actually exist, and what makes them show up here?
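    When directory entries are too mangled to type, they can sometimes be addressed by inode instead of by name; when even the inode column comes back as "?" (as in the listing above), stat() itself is failing and a filesystem check is the more likely next step. A sketch, with the inode number purely illustrative:

        # list inode numbers; if a real number shows up, target it directly
        ls -li
        find . -maxdepth 1 -inum 1234567 -ls        # confirm it is the right entry
        find . -maxdepth 1 -inum 1234567 -delete    # then remove it

        # if inodes also show as "?", check the underlying filesystem
        touch /forcefsck    # or run fsck from a rescue environment on the unmounted device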

    Read the article

  • MySQL - Why would SHOW SLAVE HOSTS cause a binlog dump?

    - by Rory McCann
    We're getting loads of binlog files in our MySQL 5.0.x. We have a normal master/slave replication setup going, with 1 master and 1 slave. Looking at /var/log/mysql.log, nearly 90% of the time the replicator connects, does a SHOW SLAVE HOSTS, and that is followed by a binlog dump. For example:
        7020 Query       SHOW SLAVE HOSTS
        7020 Binlog Dump Log: 'mysql-bin.029634'  Pos: 13273
    However, when I do a SHOW SLAVE HOSTS on the MySQL server myself, I get no results. Occasionally, when the replicator does a SHOW SLAVE HOSTS, MySQL will hang for hours. I see nothing in /var/log/syslog at the same time... What's going on here? How can I debug this more? For the record, the MySQL master and slave servers are Ubuntu Dapper.
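    Two details may help interpret that log: "Binlog Dump" is the command a slave's I/O thread issues to start streaming the binary log (so seeing it right after the connection is normal), and SHOW SLAVE HOSTS only lists slaves that were started with report-host set. A small sketch of checking both from the shell; credentials and host options are omitted:

        # is a Binlog Dump thread (i.e. a connected slave) currently running?
        mysql -e 'SHOW PROCESSLIST' | grep -i 'binlog dump'

        # slaves appear here only if their my.cnf sets report-host
        mysql -e 'SHOW SLAVE HOSTS'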

    Read the article

  • How to tell X.org to reload input device module?

    - by Vi
    When X.org boots up, the Synaptics touchpad works well. But when I remove the module, it falls back to /dev/input/mice and doesn't use the normal driver even when the touchpad is available again. Xorg.0.log:
        ...
        (II) XINPUT: Adding extended input device "Synaptics Touchpad" (type: TOUCHPAD)
        (--) Synaptics Touchpad: touchpad found
    Then, after running
        # { rmmod psmouse && echo mem > /sys/power/state && modprobe psmouse; }
    the log shows:
        (WW) : No Device specified, looking for one...
        (II) : Setting Device option to "/dev/input/mice"
        ...
    How do I tell X.org to try its InputDevice again (without restarting the X server)? P.S. rmmod psmouse is needed to prevent the Acer Extensa 5220 from crashing when resuming from suspend-to-RAM.
    Update: Found the answer myself. Doing
        xinput set-int-prop "Synaptics Touchpad" "Device Enabled" 8 1
    after reloading the kernel module brings the touchpad back. Now suspend-to-RAM works OK.

    Read the article

  • Install 64-bit Ubuntu or 32-bit?

    - by nitbuntu
    I'll be receiving a new notebook in a few days and was planning on running Ubuntu on it, as it's compatible and the notebook has no OS pre-installed. The specifications are: Core 2 Duo T6600, 4 GB RAM, Intel integrated graphics. I know that a year or two ago running a 64-bit version of Ubuntu was not advised, because many applications and plugins (e.g. Flash) only ran on 32-bit. Is this still the case? Would I get better performance with 64-bit Ubuntu since I have 4 GB of RAM? Are there any downsides anymore?

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives: an HP LTO-2 with 200/400 GB cartridges. The st driver reports it like this:
        scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D
    I can store and retrieve files like a charm using tar; both tar cf /dev/st0 somedirectory and tar xf /dev/st0 work flawlessly. However, what I really would like to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like
        dd if=/dev/VG/LV bs=64k of=/dev/st0
    to achieve this, but there seem to be various problems associated with this approach.
    Firstly, I would like to be able to store more than one LV on a single tape. Now I guess I could seek to concatenate the data on the tape, but I think this would not work very well in an automated scenario with many different LVs of various sizes.
    Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up -- not very desirable, since I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this?
    Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device:
        mbuffer -t -m 10M -p 80 -f -o $TAPE
    So I've tried this:
        dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0
    Note the added "-d 64k" to account for the 64k block size of the tape. However, reading data back from a tape written in this way never seems to yield any useful results -- dd has been running for ages now, and has managed to transfer only 361M of data from the tape. What's wrong here?
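    On the "more than one LV per tape" point, the usual pattern is the non-rewinding device node plus mt to move between archives: each close of /dev/nst0 after a write leaves a filemark, and fsf skips over them on the way back. A sketch with illustrative LV names; it does not address the mbuffer question:

        mt -f /dev/nst0 rewind
        dd if=/dev/VG/LV1 bs=64k of=/dev/nst0     # archive 1 (a filemark is written on close)
        dd if=/dev/VG/LV2 bs=64k of=/dev/nst0     # archive 2, directly after the filemark

        # later: skip the first archive and read the second one back
        mt -f /dev/nst0 rewind
        mt -f /dev/nst0 fsf 1
        dd if=/dev/nst0 bs=64k of=/tmp/LV2.img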

    Read the article
