Search Results

Search found 27819 results on 1113 pages for 'linux intel'.


  • Single application through OpenVPN tunnel (Debian Lenny)

    - by user14124
    I'm using Debian Lenny and I want to tunnel only rtorrent through an OpenVPN tunnel. I have a tunnel running; the config file looks like this:
        client
        dev tun
        proto udp
        remote openvpn.xxx.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca /etc/openvpn/xxx/keys/ca.crt
        cert /etc/openvpn/xxx/keys/client.crt
        key /etc/openvpn/xxx/keys/client.key
        tls-auth /etc/openvpn/xxx/keys/tls.key 1
        ns-cert-type server
        comp-lzo
        verb 3
        auth-user-pass
        script-security 3
        reneg-sec 0
    My idea is to run a sockd proxy internally that redirects traffic into the OpenVPN tunnel. I could then use the *nix "proxifier" application "tsocks" to let rtorrent connect through that proxy (rtorrent doesn't support proxies natively). The trouble is configuring sockd, because my IP inside the VPN changes every time I connect. This is a config file someone said would help: http://ircpimps.org/sockd.conf Since my IP changes on each connect, I don't know what to put in that file. I have no control over the server-side config. Any help is welcome, as is any other method.
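
    One way around the changing address, sketched on the assumption that the sockd here is the Dante server packaged with Lenny (the port, paths and loopback-only access rules are illustrative): Dante's "external" directive also accepts an interface name, so pointing it at tun0 avoids hard-coding the VPN address that changes on every connect.

        # minimal /etc/danted.conf sketch
        logoutput: syslog
        internal: 127.0.0.1 port = 1080    # where tsocks/rtorrent will connect
        external: tun0                     # follow whatever address OpenVPN assigns
        method: none
        client pass { from: 127.0.0.0/8 to: 0.0.0.0/0 }
        pass { from: 127.0.0.0/8 to: 0.0.0.0/0 protocol: tcp udp }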

    Read the article

  • How expensive is a hostname in htaccess? Other solutions possible?

    - by Nanne
    To easily allow or disallow dynamic IP addresses you can add them as a hostname in a .htaccess file. As I have read in ".htaccess allow from hostname?", Apache does a reverse lookup on the connecting IP address to see whether the response matches the allowed name. (Actually, Apache does a double lookup: first a reverse lookup and then a forward lookup on the result of the reverse.) This is the reason we are currently not using dynamic-IP hostnames in the .htaccess: it sounds quite heavy, with 2 extra lookups for every request. Is it indeed that heavy, and would a reasonably busy server that is looking for less rather than more load get away with it? (E.g. how does this load compare to the rest? If a request is 1000 times more expensive than the lookups it might be negligible; on the other hand, it could be the final straw.) Are there other solutions? I could of course write a script that looks up the hostname and puts the result in the .htaccess files, but that feels a bit like a hack.
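
    A sketch of the cron-driven alternative mentioned at the end, with the hostname and path as placeholder assumptions: resolve the dynamic hostname on a schedule and rewrite the access rules, so Apache only compares plain IP addresses at request time.

        #!/bin/sh
        # placeholder dynamic hostname and target path; run from cron every few minutes
        ip=$(dig +short dyn-host.example.net | tail -n1)
        [ -n "$ip" ] || exit 0   # keep the previous rules if the lookup fails
        printf 'Order deny,allow\nDeny from all\nAllow from %s\n' "$ip" \
            > /var/www/protected/.htaccess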

    Read the article

  • What is the --daemon option?

    - by Pascal Dimassimo
    I was installing Solr with Jetty using these instructions. Basically, those instructions have you download the Jetty startup script and copy it to /etc/init.d/jetty. But it was not working: every time I started Jetty I got a "FAILED" message, with nothing to explain why. I opened the /etc/init.d/jetty script to understand what was happening and saw that it uses start-stop-daemon to launch Jetty. After some time debugging, I discovered that removing the --daemon option at the end of the start-stop-daemon call fixed my problem. I did some research and discovered that this guy had the same problem and resolved it the same way: by removing the --daemon option. What is weird is that the switch does not seem to be specific to start-stop-daemon, because it is not documented in its man page; also, I've seen it used with other commands. So what does that --daemon option do? And why did removing it resolve my problem? Note that I am working on Ubuntu 10.04.2 LTS.
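
    One detail worth knowing here, sketched with placeholder paths: start-stop-daemon treats everything after a bare "--" as arguments for the program it launches, so an option in that position never appears in start-stop-daemon's own man page, because it belongs to whatever the init script is starting.

        # options before "--" configure start-stop-daemon itself; everything after
        # the bare "--" is handed verbatim to the launched program
        start-stop-daemon --start --pidfile /var/run/example.pid \
            --chuid jetty --exec /usr/local/bin/example-launcher -- --daemon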

    Read the article

  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking it. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system. We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:
        1. basic OS installation
        2. basic network configuration, enough to reach the internet and resolve DNS
        3. install puppet and set up certname
        4. start puppet and let it manage the whole configuration
        5. test, fix problems in config (via puppet), re-test, and so on...
        6. manually stop puppet
        7. set up the new network configuration for the datacenter network
        8. move the machine to the DC and turn it on
        9. puppet should automatically start and keep on doing its job
    The process is supported by detecting the environment in Puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying the configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle. Is there any problem with this approach? Could it be improved?
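
    A minimal sketch of step 3, with the FQDN and config path as assumptions (certname can live in the [main] or [agent] section):

        # hypothetical FQDN; run before the very first agent run so the certificate
        # is requested for the permanent name rather than the office hostname
        printf '[agent]\ncertname = web01.dc1.example.com\n' >> /etc/puppet/puppet.conf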

    Read the article

  • Mail Secure & Stable Open Source Mail Server

    - by Fanar ALHAYALI
    I asked a question at http://stackoverflow.com/questions/9868426/i-need-to-know-which-email-server-i-have-to-use and someone told me it would be better suited to Server Fault. I know this is a common question and has been asked many times, but there are so many available mail servers that I am not able to decide on one. Kindly tell me which is a secure, stable and fast open source mail server for a CentOS or Red Hat server. Is there any guide that can be used to deploy the mail server with all its components, e.g. SMTP, POP3, IMAP, spam filtering, calendar server, antivirus and DNS settings? Currently I'm using Sun Messaging V6 installed on Solaris 10, and my boss asked me to make a report on the best mail server on the market today. I tried to have a look on Google but couldn't find useful information for my report. Any advice would be appreciated.

    Read the article

  • Execute script before shutting down in Ubuntu

    - by juanefren
    When I shut down my computer I want it to show some pending tasks that I have to do before leaving the office... I wrote a local application to manage those tasks, so basically I just want to run a command and shut down only after the launched app is closed. I have already tried these options:
        * /etc/gdm/PostSession/Default -- only runs when I select Log Out instead of Shutdown
        * /etc/rc0.d/K01mycustomscript -- runs the script only after X has been killed
        * $HOME/.bash_logout -- appears to do nothing here
        * ./app-to-run && sudo shutdown -h now -- I don't like this for two reasons: it prompts for the sudo password, and I can't use my laptop's shutdown button
    I am using Ubuntu 10.04.
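
    One way around the sudo prompt in the last option, sketched on the assumption that the session runs under the ConsoleKit setup Ubuntu 10.04 ships with (the application path is a placeholder): ask ConsoleKit to power off over D-Bus, which an ordinary desktop session may do without a password.

        #!/bin/sh
        # show the pending-tasks application, then power off once it exits
        ~/bin/pending-tasks      # placeholder for the local task application
        dbus-send --system --print-reply \
            --dest=org.freedesktop.ConsoleKit \
            /org/freedesktop/ConsoleKit/Manager \
            org.freedesktop.ConsoleKit.Manager.Stop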

    Read the article

  • How to restrict all services to single domain in Ubuntu?

    - by harold
    Someone has pointed an unknown domain at my server's IP address, likely via A records. I would like to reject access to ALL services (httpd, ssh, mail, etc.) requested through that domain and only allow requests made through my own domain. I want connections to that domain to be completely rejected by my server. I can block it for HTTP by changing my web server settings, but I want to do this for every single type of connection. How can I do this?

    Read the article

  • How to troubleshoot this memory usage?

    - by Camran
    I have a classifieds website. I use PHP, MySQL, and Solr. Solr runs in a servlet container, in my case Jetty, which is a Java application. I just noticed that something was terribly wrong on my website. I opened a terminal, ran the "top" command and noticed that Java was eating all the CPU and memory. I thought "OK, maybe I need more memory and CPU", so I increased them, but the Java app just started eating even more. This has never happened before, and it is either a bug or a hack of some kind. Anyway, I need to troubleshoot this now, so I wonder: how do I do that? Can I somehow pinpoint from some log exactly when the memory usage started to go up? How does one troubleshoot this, and how do I prevent it? Is it possible to limit too many requests somehow, if they arrive within a given time window? Thanks
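
    A sketch of one way to pin down what the Jetty JVM is doing, assuming the JDK's monitoring tools are installed alongside it:

        # find the Jetty JVM and watch its heap/garbage-collection behaviour
        pid=$(pgrep -f jetty)
        jstat -gcutil "$pid" 5000            # GC and heap utilisation every 5 seconds
        # snapshot the classes holding the most live heap
        jmap -histo:live "$pid" | head -n 20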

    Read the article

  • How to get the speed of a network card on the command line?

    - by nelaar
    I am trying to see what the speed of some network cards is on a remote server. Our reporting software says they are 10 Mbps, but I am sure that is wrong; they should be 1 Gbps. Our monitoring software uses SNMP to query the servers, so perhaps the servers are reporting the information incorrectly. ifconfig does not report the speed of the devices. How can I see the currently configured speed of the cards?
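
    A minimal sketch (the interface name is an assumption, and both commands generally need root):

        # negotiated link speed and duplex as the driver reports them
        ethtool eth0 | grep -iE 'speed|duplex'
        # some older NICs and drivers answer mii-tool instead
        mii-tool -v eth0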

    Read the article

  • can i use an ip-list include file for iptable blacklisting

    - by rubo77
    I would like to block all countries except mine in iptables; that is a list with about 100,000 entries. How can I define this blacklist file in a script so that iptables blocks all those IP ranges? Maybe I can use http://www.ipdeny.com/ipblocks/data/countries/ which provides lists in the form:
        117.55.192.0/20
        117.104.224.0/21
        119.59.80.0/21
        121.100.48.0/21
        ...
    I want to be able to change the blacklist file easily without having to change the iptables script.
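
    A sketch of one common approach that keeps the list outside the firewall script: load the zone file into an ipset, so iptables needs only a single rule no matter how many networks are listed (the set name and file path are assumptions; very old ipset versions use "--set" instead of "--match-set").

        #!/bin/sh
        # build (or rebuild) the blacklist set from the downloaded zone file
        ipset create blacklist hash:net 2>/dev/null   # no-op if it already exists
        ipset flush blacklist
        while read -r net; do
            ipset add blacklist "$net"
        done < /etc/firewall/blacklist-zones.txt
        # one rule consults the whole set
        iptables -I INPUT -m set --match-set blacklist src -j DROP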

    Read the article

  • php extensions & apache mods gone/not working after server restart?

    - by user1782359
    I was wondering if anyone has ever come across this before, as I'm pretty stumped to be honest, and my server admin knowledge isn't particularly good, so I'm not sure what could even be wrong, let alone how to fix it. Basically, Thursday last week everything was fine on our server. I came in on Friday and it was a mess: PHP extensions were missing or not working and Apache modules were gone (e.g. oci_* was gone completely, odbc_ was still there but not working, and the Apache ntlm_auth module for single sign-on was gone, so the website wasn't even loading in IE). I'm ruling out anything deliberate because it's just incredibly unlikely.
    The only thing that really happened between Thursday and Friday is that on Thursday evening one of the network guys did a RAM upgrade on the server and restarted it. That's it, nothing else. Now I'm wondering whether those extensions we installed months ago were somehow only saved in some kind of local memory, and the restart wiped them? But we installed them all as root, so I don't see why it should be any different from installing anything else. It makes little or no sense to me.
    To expand on an example of something that has gone very wrong, the PHP odbc_ extension: it's still on the server, and it doesn't return "undefined function" or anything. But it just cannot connect to the data source any more. I've tested the same data source and login details through the command line and they work perfectly fine, but all of a sudden the PHP odbc_connect() function just can't connect ( [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source. ). But unixODBC is set up fine. Like I say, I've tested it all through the terminal and it can connect, and we've not changed anything; it's just now all of a sudden not working through the PHP function. Does anyone have any ideas whatsoever as to what could be going on? This is on CentOS 5.x by the way.
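
    A short diagnostic sketch for narrowing down where CLI PHP and Apache's PHP diverge; only the module names already mentioned above are assumed.

        # which ini file the CLI SAPI reads (the Apache SAPI may read a different one)
        php -i | grep 'Configuration File'
        # extensions the CLI sees vs. modules httpd has loaded (Apache 2.2+)
        php -m | grep -i odbc
        httpd -M
        # SELinux can block httpd's outbound database connections even when the
        # same connection works from the command line
        getenforce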

    Read the article

  • Set LD_LIBRARY_PATH and CLASSPATH on cluster nodes before running a hadoop job

    - by Ashish Sharma
    I need to set LD_LIBRARY_PATH and CLASSPATH before running a job on a cluster. In LD_LIBRARY_PATH I need to add the location of some jars which are required while running the job, as these jars are available on my cluster; the same goes for CLASSPATH. I have a 3-node cluster, and I need to set LD_LIBRARY_PATH and CLASSPATH on all 3 data nodes so that the required jars are available while running the job.
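
    A sketch of the usual places for this in a small MR1-style cluster, with the paths as placeholder assumptions: hadoop-env.sh is read on each node by the Hadoop scripts, while per-task settings have their own knobs.

        # appended to $HADOOP_HOME/conf/hadoop-env.sh on each of the 3 nodes
        export LD_LIBRARY_PATH=/opt/native/lib:$LD_LIBRARY_PATH     # placeholder path
        export HADOOP_CLASSPATH=/opt/jars/mylib.jar:$HADOOP_CLASSPATH
        # for the task JVMs themselves, mapred.child.env in mapred-site.xml can set
        # LD_LIBRARY_PATH, and "hadoop jar ... -libjars /opt/jars/mylib.jar" ships
        # extra jars with the job instead of relying on node-local CLASSPATH entries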

    Read the article

  • Will deleting partitions affect my hard drive in any way?

    - by Portali5t
    I installed a SUSE partition of around 200 gigabytes on my hard drive, which primarily runs Windows 7. I am sick of SUSE's crap and just want to get rid of the OS and get that partition back for Windows' use. Is it as simple as deleting the partition and choosing which partition the space goes to, or does the freed space become common space that all partitions can access? I know next to nothing about partitions, so any help would be great. Also, if someone knows how to delete partitions, that would be a great help too. Thanks!

    Read the article

  • How to read iptables -L output?

    - by skrebbel
    I'm rather new to iptables, and I'm trying to understand its output. I tried to RTFM, but to no avail when it comes to little details like these. When iptables -vnL gives me a line such as:
        Chain INPUT (policy DROP 2199 packets, 304K bytes)
    I understand the first part: on incoming data, if the list below this line does not provide any exceptions, then the default policy is to DROP incoming packets. But what does the "2199 packets, 304K bytes" part mean? Is that all the packets that were dropped? Is there any way to find out which packets those were, and where they came from? Thanks!
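
    Those counters belong to the chain's default policy: packets (and bytes) that matched none of the rules and were therefore handled by the DROP policy. A minimal sketch of one way to see what they are (the log file location varies by distro):

        # log anything that is about to fall through to the policy
        iptables -A INPUT -j LOG --log-prefix "INPUT-DROP: " --log-level 4
        # dropped packets (source, destination, ports) then show up in the kernel log
        tail -f /var/log/kern.log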

    Read the article

  • cannot find java even though it is there (ubuntu 12.04)

    - by Jeff Storey
    I'm trying to just execute the java command and it's saying it cannot be found, even though it is there. Here's what my output looks like:
        root@oneiric:/usr/lib/jvm/default-java/bin# ls -al java
        -rwxrwxrwx 1 uucp 143 5750 2012-09-20 11:14 java
        root@oneiric:/usr/lib/jvm/default-java/bin# ./java
        -su: ./java: No such file or directory
    So the ls shows it's there, but it doesn't seem to execute. Can someone explain why this is?
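
    "No such file or directory" for a file that plainly exists usually means the kernel could not find the binary's interpreter, not the binary itself. A diagnostic sketch, assuming nothing beyond the path shown above:

        # what kind of binary is it, and which ELF interpreter does it expect?
        file /usr/lib/jvm/default-java/bin/java
        # a 32-bit JVM on a 64-bit host fails exactly this way when the 32-bit
        # loader is missing (on Ubuntu it comes from the libc6-i386 package)
        ls -l /lib/ld-linux.so.2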

    Read the article

  • blocking port 80 via iptables

    - by JoyIan Yee-Hernandez
    I'm having problems with iptables. I am trying to block port 80 from the outside; basically the plan is that we only need to tunnel via SSH, and then we can get to the GUI etc. on the server. I have these rules:
        Chain OUTPUT (policy ACCEPT 28145 packets, 14M bytes)
        pkts bytes target prot opt in out  source     destination
           0     0 DROP   tcp  --  *  eth1 0.0.0.0/0  0.0.0.0/0    tcp dpt:80 state NEW,ESTABLISHED
    And:
        Chain INPUT (policy DROP 41 packets, 6041 bytes)
           0     0 DROP   tcp  --  eth1 *  0.0.0.0/0  0.0.0.0/0    tcp dpt:80 state NEW,ESTABLISHED
    Does anyone want to share some insight?
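
    A sketch of one common shape for this, taking eth1 from the rules above as the external interface: inbound web traffic is a matter for the INPUT chain matching the destination port, and with a DROP policy the SSH tunnel has to be accepted explicitly.

        # with the INPUT policy at DROP, explicitly accept the SSH tunnel...
        iptables -A INPUT -i eth1 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        # ...while an explicit drop for port 80 documents the intent even though
        # the chain policy would already discard it
        iptables -A INPUT -i eth1 -p tcp --dport 80 -j DROP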

    Read the article

  • Samba domain controller: remove 1 Windows client

    - by K B
    My domain is controlled by a Samba domain controller running on openSUSE 11.3. It manages other openSUSE boxes and some Windows 7 boxes. Now the hard disk of one Windows 7 computer crashed and I had to reinstall it. I wasn't able to remove the computer name ("Win26") of the broken PC from the domain, so I couldn't add the reinstalled "Win26" to the domain again. How can I remove the entry for the old "Win26" computer from the domain controller, so that I can add the new "Win26" to the domain again? Is it one configuration file I have to edit before restarting Samba? Which file would that be? Thanks in advance for your help! Regards, KB
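
    There is usually no file to edit by hand: machine accounts live in Samba's password backend, plus a matching unix account if the add-machine script created one. A sketch, assuming the usual tdbsam or smbpasswd backend (note the trailing $ on machine account names):

        # drop the stale machine trust account ...
        pdbedit -x -u 'win26$'
        # ... and the matching unix account, if it exists
        userdel 'win26$'
        # the rebuilt Windows 7 client can then join the domain as "Win26" again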

    Read the article

  • Cannot Login as root

    - by Josh Moore
    At my work we ship our product on pre-installed servers as a software/hardware package. We use openSUSE 10.3 for the OS; we set the boxes up and always log in as the root user to do maintenance on them. Recently we had a box returned to us; the customer said they could no longer connect to it through the network interface. When I started to work on the box I ran into this problem: at the login prompt I type the user name "root" and hit enter, and even before it asks me for a password I get "Login incorrect". I have never seen this behavior before and could not find any information about it online. Does anybody know what is going on? Thanks.

    Read the article

  • Permission denied when trying to execute a binary burned to a CD-R

    - by user16654
    On an Ubuntu 9.10 (Karmic Koala) machine, I burned a CD from the command prompt using: cdrecord -v speed=16 dev=0,1,0 /FPS.iso The CD now contains an executable and some files. I tested the CD by loading it onto another machine (Red Hat 5.3) and when I try to run the program I get the following message: bash: ./FPS1_1: Permission denied I can open other files like text documents (the executable also comes with shared libraries). I realized I had burned the CD as root so I burned another one as another user but I still have the same problem. How can I remove this permission or what is the problem? P.S. the image was in / if that helps
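
    Two likely culprits, sketched with the mount point and staging directory as placeholders: the disc may be mounted with the noexec option on the Red Hat box, and an ISO mastered without Rock Ridge extensions carries no unix execute permissions at all.

        # is the disc mounted noexec?
        mount | grep -i cd
        # copying the binary off the disc sidesteps the mount options entirely
        cp /media/cdrom/FPS1_1 /tmp/ && chmod +x /tmp/FPS1_1 && /tmp/FPS1_1
        # when mastering, keep unix permissions via Rock Ridge before burning
        mkisofs -R -o FPS.iso ./payload/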

    Read the article

  • Hardware recommendations for building an Ubuntu encrypted file server

    - by Robert Mashlan
    I would like to build a file server for my home network using Ubuntu. It will serve files from RAID1 configured disks, either in the OS or in hardware. It will be connected to a Gigabit ethernet LAN. The disks will use an encrypted file system. It will serve samba shares. I would like a recommendation on what kind of processing power/memory I would need to build a box that would be able to sustain the full capacity of the Gigabit ethernet connection in a file transfer for a single connection with the overhead of serving from an encrypted disk. I'm not looking to build a dream server, I just want enough processing capacity for high performance (and reliable) file sharing and spend as little as possible for it. This may be tangential, but what kind of hardware would I need to have a server be able to reliably go into a low power mode when no requests are being made of it?
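
    A rough way to size the CPU side, assuming dm-crypt/LUKS will provide the encryption: gigabit ethernet tops out near 125 MB/s, so the box needs single-stream AES throughput comfortably above that.

        # per-core AES throughput of a candidate CPU
        openssl speed -evp aes-128-cbc
        # recent cryptsetup can benchmark the exact cipher/mode dm-crypt would use
        cryptsetup benchmark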

    Read the article

  • What is the meaning of the 'Personalities' feature under /proc/mdstat

    - by drcelus
    On some systems I see this:
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
        md1 : active raid1 sdb1[1] sda1[0]
              10485696 blocks [2/2] [UU]
        md2 : active raid1 sdb2[1] sda2[0]
              477371328 blocks [2/2] [UU]
    And other systems show:
        Personalities : [raid1]
        md0 : active raid1 sdb2[1] sda2[0]
              204788 blocks super 1.0 [2/2] [UU]
        md1 : active raid1 sdb1[1] sda1[0]
              4193272 blocks super 1.1 [2/2] [UU]
        md2 : active raid1 sda3[0] sdb3[1]
              483985276 blocks super 1.1 [2/2] [UU]
              bitmap: 0/4 pages [0KB], 65536KB chunk
    I wonder what the meaning of Personalities is, and what the impact of having different values would be.
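
    The Personalities line lists the RAID levels (md "personality" modules) that this particular kernel currently has available, whether built in or loaded as modules; arrays only need the levels they actually use, so the list varying between systems has no effect beyond which array types could be assembled there. A small sketch showing that it tracks the available modules:

        # personalities correspond to md level modules known to the running kernel
        grep ^Personalities /proc/mdstat
        lsmod | egrep 'raid|linear|multipath'
        # loading another level's module adds it to the list
        sudo modprobe raid0 && grep ^Personalities /proc/mdstat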

    Read the article

  • Is there a version of Debian-Lenny that is legal for export from the US?

    - by molecules
    I wanted to bundle my application in a Debian Lenny virtual machine so others could download it and run it without having to configure anything. However, I don't want to have to worry about US legal issues. Many of the packages in a default Debian installation include encryption algorithms. Are all default versions export-safe? If not, is there an export-safe version? If not, is there an easy way to make one?

    Read the article

  • sendmail appends server name to external domains when relaying

    - by Chris
    My server is set to send all email to a corporate relay server. For the company domain it works perfectly. I've recently found that emails sent to an outside domain are getting the hostname of my server appended to the address before being sent. Here are the log entries for one such attempt:
        Nov 6 09:46:45 myservername sendmail[45023]: rA6EkjiI045023: [email protected], delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30590, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (rA6Ekj2g045037 Message accepted for delivery)
        Nov 6 09:46:45 myservername sendmail[45061]: rA6Ekj2g045037: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=120885, relay=relay.company.com [x.x.x.x], dsn=2.0.0, stat=Sent (ok: Message 342335947 accepted)
    Notice the difference in the email address between it being accepted by my server for delivery (correct address) and being sent on to and accepted by the corporate relay (incorrect, with the server name appended). To make it more interesting, the application on my server uses email for user account verification/activation, and in August this particular user was able to register his account and activate it. I have made no configuration changes to mail since setting the server up over a year ago. DNS is also a corporate service, and I've never touched my /etc/resolv.conf configuration:
        domain company.com
        nameserver <ip1>
        nameserver <ip2>
        search myservername
    Thanks!
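
    A sketch of how to watch sendmail's own rewriting, independent of the relay (the recipient is a placeholder; address-test mode sends nothing):

        # run a recipient through the canonify and parse rulesets (3 then 0) and see
        # whether the local host name or resolver search domain gets appended
        echo '3,0 user@external-example.com' | sendmail -bt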

    Read the article
