Search Results

Search found 33182 results on 1328 pages for 'linux port'.

Page 497/1328

  • Iptables rules make communication very slow

    - by mmc18
    When I send a request to an application running on a machine with the following firewall rules applied, the response takes a very long time. When I deactivate the iptables rules, it responds immediately. What makes communication so slow?
      -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
      -A INPUT -p esp -j ACCEPT
      -A INPUT -i ppp+ -j ACCEPT
      -A INPUT -p udp -m udp --dport 500 -j ACCEPT
      -A INPUT -p udp -m udp --dport 4500 -j ACCEPT
      -A INPUT -p udp -m udp --dport 1701 -j ACCEPT
      -A INPUT -i lo -j ACCEPT
      -A INPUT -i lo -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
      -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
      -A FORWARD -i ppp+ -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT

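    A debugging sketch rather than a fix: the per-rule packet counters usually show which rule the slow traffic is actually hitting, and the excerpt above does not show the chain policies, which matter just as much. Whether the level-7 LOG lines end up in this file depends on the syslog configuration.
      iptables -L INPUT -v -n --line-numbers   # per-rule packet/byte counters
      iptables -S                              # includes the -P policy lines the excerpt omits
      tail -f /var/log/syslog | grep 'iptables denied: '   # watch the LOG target while reproducing the delay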

  • Tool to run the same keystrokes on multiple Unix machines

    - by virtualvoid
    I want to run the same commands on multiple machines. I know I can do this using ssh scripting or tools like clusterssh, but I don't want to install anything on the servers (I don't have the rights). What I want is to simply clone keystrokes across multiple machines: for example, run cat /etc/oratab in one window and have the same command run in several other windows, e.g. in PuTTY. Is there a tool to do that from a Windows client?

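    If a purely client-side approach is acceptable, a plain loop over ssh needs nothing installed on the servers; the hostnames below are placeholders and this is only a minimal sketch, not a keystroke-mirroring tool.
      for h in host1 host2 host3; do
          echo "== $h =="
          ssh "$h" 'cat /etc/oratab'
      done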

  • How do I minimize Evolution to the system tray in Ubuntu?

    - by Jephir
    In Ubuntu some applications can be set to minimize instead of exit on close. For example, Empathy minimizes to the system tray (mail icon) when the close button is pressed in the application window. How do I make Evolution do this as well? Essentially I would like to have Evolution hidden in the system tray instead of having to re-launch it every ten minutes to check for new messages (or leaving it open and cluttering the taskbar).


  • Why doesn't the value in /proc/meminfo seem to map exactly to the system RAM?

    - by Eric Asberry
    The values in /proc/meminfo for MemTotal don't make sense. As a human eyeballing it, the number roughly corresponds to the installed RAM, but for displaying the installed RAM from an automated utility it appears to be inexact and inconsistent. For a system with 1G of RAM, I would expect the MemTotal line to have a value of 1048576 (1024*1024). But instead, I'm seeing 1029392. On another 4G box, I'm seeing 3870172, which is not a multiple of 1024, and it's not even close to 1029392*4. On an 8G box, I get 8128204, which again seems to have no correlation to the other values, nor is it a multiple of 1024. I'm trying to use this information to report the RAM on a status web page. My work-around is to just "round" it to the nearest 1G multiple, but I'd like to understand why these values seem inconsistent and don't match my expectations. Can somebody fill me in on what I'm missing here?
    EDIT: To expand on the accepted answer below... the reference can be found here. Also of interest to me from that page, which explains the inconsistency, is this bit: "meminfo: Provides information about distribution and utilization of memory. This varies by architecture and compile options. ..."

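    For the status-page case described above, a one-liner that rounds MemTotal (reported in kB) to the nearest GiB is a reasonable sketch of the "round to the nearest 1G multiple" workaround; the shortfall itself is memory the kernel and firmware reserve at boot, which is why it varies from machine to machine.
      awk '/^MemTotal:/ { printf "%.0f GiB\n", $2 / 1048576 }' /proc/meminfo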

  • Problem running Solr on a VPS

    - by Camran
    I have a VPS with Ubuntu OS. I run Solr on my local machine (a Windows XP laptop) just fine. I have configured Jetty and Solr on the server in just the same way as on my computer. I have also downloaded the JRE and installed it on the server. However, whenever I try to run the start.jar file, the PuTTY terminal shows a bunch of text but gets stuck. I could paste the text here but it is very long, so unless somebody wants to see it I won't. Also, I can't view the Solr admin page at all. Does anybody have experience with this kind of problem? Maybe Java isn't correctly installed? It is a VPS so maybe installation is different. Thanks
    UPDATE: These are the last lines from the terminal; in other words, this is where it stops every time:
      INFO: [] webapp=null path=null params={event=firstSearcher&q=static+firstSearcher+warming+query+from+solrconfig.xml} hits=0 status=0 QTime=9
      May 28, 2010 8:58:42 PM org.apache.solr.core.QuerySenderListener newSearcher
      INFO: QuerySenderListener done.
      May 28, 2010 8:58:42 PM org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener newSearcher
      INFO: Loading spell index for spellchecker: default
      May 28, 2010 8:58:42 PM org.apache.solr.core.SolrCore registerSearcher
      INFO: [] Registered new searcher Searcher@63a721 main
    Also you should know that I installed Jetty by just dragging the folders from my HD to the VPS server.

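    One thing worth ruling out, as a sketch only: java -jar start.jar keeps Jetty in the foreground, so a console that "stops" after the Registered new searcher line may simply be a server that is up and holding the terminal. The path below is an assumption; 8983 is the default Jetty/Solr port. If the admin page answers locally but not remotely, the VPS firewall is the next thing to check.
      cd /path/to/solr/example                           # wherever start.jar lives on the VPS
      nohup java -jar start.jar > solr.log 2>&1 &        # keep it running after the session closes
      curl -s http://localhost:8983/solr/admin/ | head   # does Solr answer locally?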

  • Additional Hard Drives for Servers

    - by Abs
    Hello all, I am developing a web app where I will have to save lots of files, and I am just trying to work out the directory structure and where things should be saved. I have had a look at the dedicated server I want to buy, and for storage it shows this: 2x 1TB SATA in RAID1. The space is enough, but I am guessing this will not be on one hard drive? Will I have to save files on one hard drive and, when that fills up, use the other? For the Fedora distro, what is the path for the second drive? Is there a primary drive where I will be able to set up my webroot? I am sorry, this is all new to me. It would be great to get links and advice on how things actually work when it comes to additional hard drives etc. Thanks all

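    Assuming the host presents the two disks as a software RAID1 mirror (the usual arrangement for "2x 1TB SATA in RAID1", but still an assumption), they appear as one block device and one filesystem, so there is no second drive path to manage; a quick way to confirm on the installed system:
      cat /proc/mdstat   # a software RAID1 pair shows up as a single md device
      lsblk              # shows how the two physical disks roll up into that device
      df -h              # the mirrored array is mounted as one filesystem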

  • Where to get the network NIC's firmware rtl_nic/rtl8105e-1.fw?

    - by Kyrol
    I'm installing the Debian testing version (wheezy) on my Asus X53Sc with an Intel Centrino Wireless-N 100. I'm having a problem with my wifi connection. When I try to connect to the internet over the wireless connection, an error occurs: Possible missing firmware /lib/firmware/rtl_nic/rtl8105e-1.fw for module r8169. I installed the iwlwifi-100-5.ucode successfully, but now I have this error. Any ideas or suggestions to resolve the problem?

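    A sketch, assuming non-free is enabled in sources.list: on Debian the rtl_nic/*.fw blobs ship in the firmware-realtek package. Note that this firmware belongs to the wired Realtek NIC handled by r8169, which is separate from the Intel wireless firmware already installed.
      apt-get update
      apt-get install firmware-realtek
      modprobe -r r8169 && modprobe r8169   # reload the driver so it finds the firmware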

  • rsync invocation to replace symlinks pointing to source?

    - by bdbaddog
    Currently I'm moving a big filesystem to a new server, as the original fileserver is no longer able to handle the filesystem writes. To make this quick I made symlinks at the target filesystem pointing to the original filesystem.
    Initially: /company/release (mountpoint of the original filesystem)
    After migration:
      /company/release.old (points to the original filesystem after the automount map update)
      /company/release (points to the new fileserver/filesystem after the automount map update)
    In /company/release there are symlinks like the following:
      /company/release/product-1.0.tar.gz -> /company/release.old/product-1.0.tar.gz
      /company/release/product-1.0 -> /company/release.old/product-1.0 (this is a tree of files)
    Using symlinks allowed me to move the writes to the new filesystem quickly. Now I'd like to slowly migrate the existing files and directories to the new filesystem. The problem I'm running into is that since the symlinks point back at the original files, rsync doesn't see any difference, so it doesn't actually copy the files or directories and remove/overwrite the symlinks. Is there a set of rsync flags which will do what I want?


  • Launch script after SFTP disconnect

    - by Mates
    I'm currently using Caja (basically the same as Nautilus) to connect to my server over SSH and work with files. What I'm looking for is a way to launch a simple script when I disconnect. I can launch a script after disconnecting from the TTY by putting it into the ~/.bash_logout file, but that is not executed when disconnecting from a file manager. The only idea I have is to set up a cronjob which would check for existing sftp-server or sshd processes periodically and launch the script when there's no such process running. Is there any easier way to do this?

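    A minimal sketch of the cron idea described above, with the script path and username as placeholders; note that if sshd is configured with the internal-sftp subsystem there is no separate sftp-server process to look for.
      #!/bin/bash
      # run from cron every few minutes; fire the script only when no sftp-server is left for the user
      # (a state/flag file would be needed to avoid re-running it on every cron tick)
      if ! pgrep -u myuser -x sftp-server > /dev/null; then
          /path/to/after-disconnect.sh
      fi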

  • RHEL 6.5 and LDAP

    - by zuboje
    I am trying to connect our Active Directory server to a brand new RHEL 6.5 server. I want to authenticate users using AD credentials, but I want to restrict logins to certain users; I don't want to allow just anybody from AD to connect. I would like to use something like this: CN=linuxtest,OU=SecurityGroups,DC=mydomain,DC=local, but I am not sure how I would set up the OU and CN. I use sssd for authentication and my id_provider = ad. I wanted to use id_provider = ldap, but that did not work at all and RHEL customer service told me to set it up this way. But I want to have a little bit more control over who can do what. I know I can restrict access with simple_allow_users = user1, user2, but I have 400+ users and I really don't want to go and type them all. The question is: how would I set up the OU or CN for my search?

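    One hedged sketch, assuming sssd's "simple" access provider is acceptable alongside id_provider = ad: since simple_allow_users is already on the table, the group form avoids listing 400+ accounts. Put the AD group (linuxtest in the CN above) in the domain section of /etc/sssd/sssd.conf and restart sssd.
      # in the [domain/...] section of /etc/sssd/sssd.conf (shown here as comments; edit by hand):
      #   access_provider     = simple
      #   simple_allow_groups = linuxtest
      service sssd restart
      id someaduser   # hypothetical AD account, to confirm lookups and group membership resolve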

  • Is it logical that file system acls would be corrupted in a way that adds permission for another user?

    - by wilbbe01
    I was having issues on a shared hosting provider with the host's web server instance not serving some files. I asked the company's support about the issue and they responded with the results of getfacl on my home directory, and added the necessary line to allow their web server to obtain the necessary permissions. All is working happily now, but I noticed a line in the getfacl output for what appeared to be another username to which I had no relation. I asked them about this and their response was that it was likely some minor corruption and that I could remove the unwanted line with the setfacl -x option. I know I never added that user to my home directory, and I also find it strange that this could truly happen due to corruption. So now that it is fixed, I'm a little bit wary of whether they were trying to cover up accidentally giving someone permissions to my account, or whether this kind of thing can really get corrupted in that way, especially when that user is a real user on the same server. Any thoughts? Thanks.


  • Prevent rmdir -p from traversing above a certain directory

    - by thepurplepixel
    I hacked together this script to rsync some files over ssh. The --remove-source-files option of rsync seems to remove the files it transfers, which is what I want. However, I also want the directories those files are placed in to be gone as well. The current part of the find command, -exec rmdir -p {} \;, tries to remove the parent directory (in this case, /srv/torrents), but fails because it doesn't have the right permissions. What I'd like to do is stop rmdir from traversing above the directory find is run in, or find another solution to get rid of all the empty folders. I've thought of using some kind of loop with find and running rmdir without the -p switch, but I thought it wouldn't work out. Essentially, is there an alternative way to remove all the empty directories under the parent directory? Thanks in advance!
      #!/bin/bash
      HOST='<hostname>'
      USER='<username>'
      DIR='<destination directory>'
      SOURCE='/srv/torrents/'
      rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats -m --progress -i $SOURCE $HOST:$DIR
      find $SOURCE -mindepth 1 -type d -empty -prune -exec rmdir -p \{\} \;

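    A sketch of one alternative to the rmdir -p call: find can delete the empty directories itself, depth-first, and never touches anything above $SOURCE (-delete implies -depth; -mindepth 1 keeps $SOURCE itself out of consideration).
      find "$SOURCE" -mindepth 1 -depth -type d -empty -delete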

  • How do I change the logrotate extension?

    - by Jayakrishnan T
    Hi all, currently my logrotate configuration adds a single number to the rotated log file: mylogfile.log is rotated to mylogfile.log.1. I would like to change the extension to mylogfile.log.<current date>. Does anyone know a way to do this? My logrotate config is:
      /usr/local/jboss/jboss-3.2.7-ND1/server/default/log/consolelog.log {
          copytruncate
          rotate 1
          missingok
          notifempty
      }
    Currently I am renaming the rotated file with a script. Is there any option to change the extension in the default logrotate configuration? Please help me.

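    A hedged sketch: logrotate's dateext directive switches the suffix from a number to a date (by default -YYYYMMDD; dateformat can adjust it). Adding it to the stanza above and dry-running shows the resulting filename without rotating anything; the config path below is an assumption.
      # add "dateext" (and optionally "dateformat -%Y%m%d") to the consolelog.log stanza, then:
      logrotate -d /etc/logrotate.d/consolelog   # -d = debug/dry run, prints what would happen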

  • My server's been hacked EMERGENCY

    - by Grant unwin
    I'm on my way into work at 9.30 pm on a Sunday because our server has been compromised somehow and was being used for a DoS attack on our provider. The server's access to the Internet has been shut down, which means over 500-600 of our clients' sites are now down. Now this could be an FTP hack, or some weakness in code somewhere; I'm not sure until I get there. Does anyone have any tips on how I can track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help appreciated.


  • Directory permissions on Ubuntu Server 10.04 LTS

    - by SebastianOpperman
    I have set up a second drive on Ubuntu Server. The directory displays correctly, but Windows users cannot write or create files in the directory. I have Samba set up so Windows can access the drives. Here is the last bit of my /etc/samba/smb.conf:
      [personeel]
      path = /media/windows
      browsable = yes
      guest ok = yes
      writable = yes
      read only = no
      create mask = 0775
      directory mask = 0775
    I want the directory to be shared with writable permissions for everyone who can access the Ubuntu Server. I have tried sudo chmod but with no success. Any help would be appreciated

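    Two things worth checking, as a sketch rather than a definitive fix: Samba cannot grant more than the underlying filesystem allows, and guest access on Ubuntu normally maps to the nobody account; testparm also catches smb.conf typos.
      testparm -s             # validate smb.conf and show the effective share settings
      ls -ld /media/windows   # the Unix permissions must let the guest account write here
      # one possible adjustment, assuming guest maps to nobody/nogroup and that is acceptable:
      chown nobody:nogroup /media/windows
      chmod 775 /media/windows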

  • puppet onlyif specified nodes

    - by Valintinr
    I'm trying to write a puppet template. I have a puppet-master and a few puppet-agents, and they all must be treated differently. I think it's good to do this by the node's hostname, but when I tried this I encountered the error "puppet-agent[169037]: (/Stage[main]//Exec[adduser]) Could not evaluate: Could not find command 'ru1'" - see the code below:
      exec { 'adduser':
        command => 'sudo adduser -m -p pawSfQewWrUAA test -G wheel',
        path    => [ '/bin','/usr/bin' ],
        onlyif  => "$hostname == ru1"
      }
    I need to run this task on only one node, with the hostname ru1. How can I do this? Thanks.

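    The error above is consistent with how onlyif works: it runs its value as a command and gates the exec on the exit status, so "$hostname == ru1" interpolates the fact and then tries to execute ru1 as a program. A check command of this shape is what onlyif actually needs (a sketch; a node definition or an if on $hostname in the manifest is the more idiomatic Puppet route, and exec checks are not run through a shell by default, so shell syntax may need wrapping):
      test "$(hostname)" = "ru1"; echo "exit status: $?"   # 0 on ru1, non-zero elsewhere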

  • Why does my CentOS logrotate run at random times?

    - by Mike Pennington
    I put a logrotate configuration file in /etc/logrotate.d/ and expected the logs to rotate at a consistent time; however, they do not... log rotation times are seemingly random +/- one hour. Why are the log rotation start times random, and how can I change this? Informational: my logrotate config file looks like this...
      /opt/backups/network/*.conf {
          copytruncate
          rotate 30
          daily
          create 644 root root
          dateext
          maxage 30
          missingok
          notifempty
          compress
          delaycompress
          postrotate
              ## Create symbolic links in daily/
              PATH=`/usr/bin/dirname $1`; FILE=`/bin/basename $1`; /bin/ln -s $1 $PATH/daily/$FILE
          endscript
      }

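    A hedged explanation sketch, assuming a stock CentOS 6 layout: stanzas in /etc/logrotate.d/ are run by the daily logrotate job in /etc/cron.daily, and on CentOS 6 cron.daily is handed off to anacron, which applies a configurable random delay - that is usually where the +/- variation comes from. Lowering RANDOM_DELAY, or moving the job to a fixed crontab entry, pins the start time.
      grep -E 'RANDOM_DELAY|START_HOURS_RANGE|cron.daily' /etc/anacrontab
      cat /etc/cron.d/0hourly   # the hourly cron job that triggers anacron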

  • Do I have a bad SD card?

    - by User1
    I'm trying to copy data from my computer to an SD card. After a few hundred megs, I keep getting the following errors in dmesg:
      [34542.836192] end_request: I/O error, dev mmcblk0, sector 855936
      [34542.836284] FAT: unable to read inode block for updating (i_pos 13694981)
      [34542.836306] MMC: killing requests for dead queue
      [34542.836310] end_request: I/O error, dev mmcblk0, sector 9280
      [34542.837035] FAT: unable to read inode block for updating (i_pos 148486)
      [34542.837062] MMC: killing requests for dead queue
      [34542.837066] end_request: I/O error, dev mmcblk0, sector 1
      [34542.837074] FAT: bread failed in fat_clusters_flush
      [34542.837085] MMC: killing requests for dead queue
    These were all files I copied from a smaller SD card. I just want to transfer them to my new, larger card for my phone. I tried the same experiment with different files on a different machine and the card failed again. Reading data from the old card went fine. My systems are older and the SD card is new (16GB Class 4). Could it be that my computers are too old? Is there a definitive test to verify whether my SD card is bad?

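    For a "definitive" check, a destructive surface test is the usual sketch; it wipes the card, and the device name is taken from the dmesg output above (double-check it with lsblk before running). If the card passes, counterfeit capacity is another possibility worth testing - the f3 tools (f3write/f3read) target exactly that.
      umount /dev/mmcblk0p1 2>/dev/null   # unmount first; the partition name is an assumption
      badblocks -wsv /dev/mmcblk0         # -w write-mode test (destroys data), -s progress, -v verbose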

  • How can I let my users set PHP.ini settings for wordpress?

    - by jldugger
    I set up a WordPress server from a fairly standard Ubuntu 9.10 install for a class, and they're constantly running into problems with the default php.ini settings. First the memory settings were too low, then the file upload limits were too small, etc. More concerning was a WordPress-wide blank page that I suspect was killed for RAM consumption, but turning on PHP errors in php.ini didn't reveal anything! I'm not familiar with shared hosting, but I feel there's a way such places allow users to edit these things without needing me to intervene and restart Apache.


  • Website & Forum sharing the same login credentials ?

    - by Brian
    I am going to be running a small site (100 hits a week maybe) and I am looking for a quick and easy way to share login information between the main website, a control panel (Webmin, cPanel, or something), and the forum. One login needed to access any of the three. The website won't have a use for the login, per se, but it will display "logged in" when you are on the website. Any custom solutions, any thoughts, logic, examples?


  • Getting PAM/user info into php - something like Net_Finger instead of a db?

    - by digitaltoast
    I've got a very small user group who just need to log in, upload, check and then move specific files to a different area when ready. Right now, I use the nginx PAM auth module to log them in against their Unix accounts. As their login is their home directory, I've already got the info to send the uploads to the right area - one line of PHP and no database needed. But I'm maintaining a separate DB just so PHP can welcome them, grab their email and send them an email when processed. Yes, sure, I could use NoSQL or SQLite instead so as not to need a whole MySQL install. But it occurred to me that, as I've got all these blank user fields for phone numbers that I could populate with any data, I could use something like PHP's Net_Finger. Which failed for me with:
      sudo pear install Net_Finger
      Starting to download Net_Finger-1.0.1.tgz (1,618 bytes)
      ....done: 1,618 bytes
      could not extract the package.xml file from "/build/buildd/php5-5.5.9+dfsg/pear-build-download/Net_Finger-1.0.1.tgz"
      Download of "pear/Net_Finger" succeeded, but it is not a valid package archive
      Error: cannot download "pear/Net_Finger"
    At which point I thought I'd stop and take a ServerFault reality check: is this a really bad/dangerous/stupid idea just to stop me having to maintain details in two places rather than one? Is there a better way? Googling shows that it's not an oft-asked thing, so perhaps with good reason?

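    If the goal is just "name and email without a second database", the GECOS comment field of the Unix account is already reachable through NSS - no finger daemon or PEAR package involved. A sketch; the comma-separated GECOS layout is only a convention, and the account name is a placeholder.
      getent passwd someuser | cut -d: -f5                      # GECOS: full name,room,work phone,home phone,other
      usermod -c 'Some User,,,,someuser@example.com' someuser   # one way to stash extra data there
      # PHP can read the same field via posix_getpwnam('someuser')['gecos']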

  • How do I debug an upstart job?

    - by Cerales
    I have the following job in /etc/init/collector:
      start on runlevel [2345]
      stop on runlevel [!2345]
      expect daemon
      exec /usr/bin/twistd -y /path/to/my/tac/file
    When I start the job with sudo service collector start, it hangs. If I ctrl-c and run initctl list, I see this: collector start/killed, process 616. I can't see an instance of the twistd daemon in ps, and the HTTP server it's supposed to be providing does not exist. I even tried this without 'expect daemon' and with a simple call to a one-line bash script using a script stanza, and it still doesn't work. I think I'm doing something very wrong. What could it be?

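    Two debugging angles, offered as a sketch: validate the job file itself, and take daemonizing out of the picture, since an expect stanza that does not match how twistd forks leaves upstart tracking a dead PID - which looks exactly like start/killed. Assumes the job file is /etc/init/collector.conf.
      init-checkconf /etc/init/collector.conf   # syntax-checks the stanza (if the tool is available)
      grep collector /var/log/syslog | tail     # upstart logs job state changes here
      # simplest variant to try: run twistd in the foreground and drop "expect daemon" entirely
      #   exec /usr/bin/twistd --nodaemon -y /path/to/my/tac/file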

  • Regex that works in RedHat gives no results in Ubuntu

    - by Supratik
    My goal is to match specific files from specific subdirectories. I have the following folder structure:
      `-- data
          |-- a
          |-- a.txt
          |-- b
          |-- b.txt
          |-- c
          |-- c.txt
          |-- d
          |-- d.txt
          |-- e
          |-- e.txt
          |-- org-1
          |   |-- a.org
          |   |-- b.org
          |   |-- org.txt
          |   |-- user-0
          |   |   |-- a.txt
          |   |   |-- b.txt
    I am trying to list only the files directly inside the data directory. I am able to get the correct result using the following command in RHEL:
      find ./testdir/ -iwholename "*/data/[!/].txt"
      a.txt
      b.txt
      c.txt
      d.txt
      e.txt
    If I run the same command in Ubuntu it is not working. Can anyone please tell me why it is not working in Ubuntu?

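    A side-stepping sketch rather than an explanation of the glob difference: limiting the search depth lists just the files directly under data/ without relying on -iwholename pattern semantics at all (note that [!/] matches exactly one character, which happens to fit a.txt through e.txt here). Assumes the tree above sits at ./testdir/data.
      find ./testdir/data -maxdepth 1 -type f -name '*.txt'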

  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then I'll put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work, and doing the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always gets packages from online repositories somewhere. I know I could probably download the sources and compile them myself, but that's going to be too much of a hassle for me. I'm thinking about two solutions:
    Plan A: Run a server VM on my Mac and then use this VM as the source repository for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply do apt-get install nginx --downloadonly so that the package and its dependencies will be downloaded into my VM and my server can use the VM as the source repo for apt-get?
    Plan B: Run a server VM on my Mac (which I can set up/update easily when I'm home) and then clone the VM to the offline development server. Maybe I should make the server a VM host so I can simply copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).
    If I was working on Windows, I imagine it'd be easier because most services have an installer file that I can download and then run at the server. If you could suggest another way, it would be much appreciated.
    Update: From Michael Hampton's answer, I found a possible solution, which is apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.

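    On the Plan A question above: the actual flag is --download-only (or -d), and a sketch of the round trip looks like this, assuming the VM and the offline server run the same release and architecture. The usual gotcha is that dependencies already installed on the VM are not re-downloaded.
      apt-get install --download-only nginx                           # .debs land in /var/cache/apt/archives
      scp /var/cache/apt/archives/*.deb offline-server:/tmp/debs/     # host and path are placeholders
      ssh offline-server 'dpkg -i /tmp/debs/*.deb'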
