Search Results

Search found 33182 results on 1328 pages for 'linux port'.

Page 510/1328 | < Previous Page | 506 507 508 509 510 511 512 513 514 515 516 517  | Next Page >

  • Path erased in Debian

    - by Lyon83
    I'm trying to deploy a Rails app on Debian, using Apache/Passenger. While trying to fix a problem with some gems, I executed this in the console: export PATH=/var/lib/gems/1.8/bin/:${vendor/cache} Now my PATH environment variable is gone, or at least its contents are. My server is running Debian 6. Is there a way to recover my PATH? Or at least, can someone point me to the file where that variable is stored? Some help, please. This is a BIG problem for me. Thanks in advance!
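
    A minimal recovery sketch in shell. The export below only fixes the current session, and the directory list is just a typical Debian default rather than anything read from this particular box; the grep shows where the permanent definitions usually live:

    ```
    # restore a sane PATH for the current shell session (typical Debian default)
    export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

    # the permanent definitions normally live in these files; a fresh login shell
    # re-reads them, so an export typed at the prompt does not survive a re-login
    grep -n PATH /etc/profile /etc/environment ~/.profile ~/.bashrc 2>/dev/null
    ```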

    Read the article

  • Using u32 together with extension headers (how to jump over them?)

    - by bortzmeyer
    I'm trying to filter on some parts of the payload for an IPv6 packet with extension headers (for instance Destination Options). ip6tables works fine with conditions like --proto udp or --dport 109, even when the packet has extension headers; Netfilter clearly knows how to jump over Destination Options to find the UDP header. Now I would like to use the u32 module to match a byte in the payload (say, "I want the third byte of the payload to be 42"). If the packet has no extension headers, something like --u32 "48&0x0000ff00=0x2800" (48 = 40 bytes for the IPv6 header + 8 for the UDP header) works fine. If the packet has a Destination Options header, it no longer matches. I would like to write a rule that works whether the packet has Destination Options or not. I cannot find a way to tell Netfilter to parse up to the UDP header (something it is clearly able to do, otherwise --dport 109 would not work) and then let u32 parse the rest. I'm looking for a simple way; otherwise, as BatchyX mentions, I could write a kernel module that does what I want.
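
    For reference, here is the rule as it stands for the no-extension-header case, written out as a full command; the INPUT chain and DROP target are placeholders for whatever the real policy is:

    ```
    # offset 48 = 40 (fixed IPv6 header) + 8 (UDP header); the 0x0000ff00 mask
    # picks the third byte of the UDP payload out of the 32-bit word at offset 48
    ip6tables -A INPUT -p udp --dport 109 \
        -m u32 --u32 "48&0x0000ff00=0x2800" -j DROP
    ```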

    Read the article

  • Can I use squid (or anything) to do this?

    - by user269334
    I have a really crappy VPS, and a really good computer at my office (with a really good internet connection) that sits behind a NAT. Is it possible to expose the good computer by doing this:
      1. The good computer connects to the VPS (and keeps the connection alive).
      2. A user connects to the VPS and sends http(s) requests to it.
      3. The VPS passes those http(s) requests on to the good computer (with some identification, so the servers can distinguish connections).
      4. The good computer passes the http(s) response back to the VPS.
      5. The VPS receives the http(s) response and passes it back to the client.
    Is it possible to do this? (By the way, the VPS and the good computer are located in different countries.) Also, is this a "reverse proxy"? I heard that a reverse proxy is for protecting an internal network by putting a server in the middle. And will this affect SSL configuration (or make SSL impossible)? I'm intending to run nginx on the good computer. Thanks in advance : )
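
    What is described here is usually called a reverse tunnel rather than a reverse proxy (the machine behind NAT dials out and the VPS relays). A minimal sketch of the same effect using SSH instead of Squid, with the hostname and ports as placeholders:

    ```
    # run on the good (NATed) office machine: it dials out to the VPS and asks the
    # VPS to forward its port 8080 back to the local nginx listening on port 80
    ssh -N -R 8080:localhost:80 user@vps.example.com
    # users then hit http://vps.example.com:8080/ and the VPS relays to the office box;
    # sshd on the VPS needs "GatewayPorts yes" for outside clients to reach that port
    ```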

    Read the article

  • How do shared hosting servers keep executing code from crossing accounts?

    - by acidzombie24
    I am kind of curious: how does a hosting server support multiple users with PHP while keeping each user away from the others' code? The 'easy' solution I thought of was file permissions: every user could have www-data in their group, so the server would have execute access but users couldn't read each other's files. But then I realized that the user running the PHP would be www-data, which has permission to read everyone's data. So how does a shared host prevent this from happening? PS: I personally use nginx (with FastCGI PHP), but I am somewhat familiar with how Apache works.
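
    One common answer (not the only one) is FastCGI/PHP-FPM with a separate pool per account, so each user's scripts run as that user instead of www-data; suEXEC/suPHP and open_basedir are other variations on the same idea. A hypothetical pool definition, with the account name and paths invented for illustration:

    ```
    # create a dedicated pool for a hypothetical account "alice"
    cat > /etc/php5/fpm/pool.d/alice.conf <<'EOF'
    [alice]
    user = alice                          ; scripts execute as alice, not www-data
    group = alice
    listen = /var/run/php5-fpm-alice.sock
    pm = static
    pm.max_children = 5
    EOF
    # the web server then hands alice's requests to that socket, and ordinary file
    # permissions on other accounts' directories keep her scripts out
    ```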

    Read the article

  • What is the best time to set the IP address for a server headed to a server colocation facility?

    - by jim_m_somewhere
    What is the best time to set the IP address for a server? I have a server that I am going to install the OS on and then send to a colocation facility. The server is going to run Internet-facing services (www, email, etc.). I can set up a "fake" IP address during install (by fake I mean private, as in RFC 1918) and change the "fake" IPs to the real IPs once I set up the colocation service. The other option is to set up the colocation service, wait for them to give me the "real" IPs, and use those during the OS install. The ramification is that if I use "fake" IPs during install, I will have to wait before I set up things like SSL certs. If I wait for IPs from the colocation provider, then I can set up SSL certs that use the "correct" (as in "real") IP addresses, with no changes to the certs until they expire. Do the "gotchas" of changing an IP address on a server outweigh the benefits of a quick install? The other danger with using "fake" IPs is that I could make a mistake when I go through the various files to change the IP address to the "live" one. Server OS: CentOS 6.2 or CentOS 6.3, 64-bit. Apps: Apache 2.4.x httpd, MySQL 5.x (will eventually use replication).
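
    If the install does go ahead with a temporary RFC 1918 address, these are the usual places on CentOS 6 that would need the real address before (or after) shipping the box; the interface name, the 192.168 prefix and the Apache config paths are assumptions about a fairly standard setup:

    ```
    # list every config file still carrying the temporary address
    grep -rn "192\.168\." \
        /etc/sysconfig/network-scripts/ifcfg-eth0 \
        /etc/sysconfig/network \
        /etc/hosts \
        /etc/httpd/conf /etc/httpd/conf.d 2>/dev/null
    ```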

    Read the article

  • What are the different file permission codes and what do they mean?

    - by zeckdude
    I am working with a file upload script. I am currently uploading a file and then trying to echo out an anchor linking to that file, but since I created the upload directory with mkdir() and 0700 permissions, it won't let me view the file. I am pretty sure the problem I am experiencing is because of the permission code I used. The problem is that I just don't know what all the different file permission codes are and what they mean. Can somebody please list the different file permissions and what each of them does?
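
    A quick reference, since the question is really about how the octal digits decompose; the chmod/mkdir values below are only illustrative, not a prescription for this particular script:

    ```
    # each octal digit is a sum of 4 (read) + 2 (write) + 1 (execute),
    # one digit each for owner, group and other:
    #   0700 -> rwx------   only the owner can enter or list the directory
    #   0755 -> rwxr-xr-x   common for directories the web server must read
    #   0644 -> rw-r--r--   common for plain files
    chmod 0755 uploads/        # or create it that way in PHP: mkdir('uploads', 0755)
    ls -ld uploads/            # shows the resulting rwxr-xr-x
    ```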

    Read the article

  • Ubuntu wired network (ethernet does not work)

    - by badnaam
    It was working just fine until the other day, when I yanked the cable out. Wireless works just fine on the same router, and if I log into the Windows 7 instance on this dual-boot laptop the ethernet works fine too, so it's not a hardware, cable or router issue. The card even gets an IP, but I can't connect to the internet. Here are the details from route, iptables, ifconfig, ping etc. Any ideas? I have been struggling with this for days, and no one seems to have an answer. http://pastie.org/954816
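
    A few checks that usually separate a routing problem from a DNS problem in this situation; the gateway address below is a placeholder for whatever the router really is:

    ```
    ip route                 # is there a default route via the wired interface?
    ping -c 3 192.168.1.1    # placeholder gateway: is the router reachable at all?
    ping -c 3 8.8.8.8        # does raw IP get past the router?
    ping -c 3 example.com    # if only this one fails, the link is fine and DNS is broken
    ```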

    Read the article

  • What is the best multi-server configuration with OpenVPN?

    - by sebut
    We have a number of database servers running MongoDB on Debian, plus a number of application servers, also on Debian. The db servers hold replicating db clusters, so they need to talk to each other. Application servers need to talk to all db servers (for reasons of fault tolerance). The servers are potentially spread across multiple hosting centers, so we need secure channels between all of them. The number of servers is bound to grow, so we need a VPN solution that's easy to maintain and expand. This is why I feel that the SSH tunnels we use for testing might not be up to the task, and OpenVPN seems the way to go. I have ruled out TAP, since I understand that it would mean all traffic going to all the servers (perhaps this is a misunderstanding and TAP acts more like a switch?). With TUN devices I imagine that all db servers would live in their own separate subnet, and each would also need a client configured to be able to connect to each of its peers. The application servers could live in a common subnet range with a client config only. Does this sound like a reasonable setup? Strangely, I did not find anything on the web about multi-server setups with OpenVPN. Thanks for all insights and ideas!
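
    One topology worth considering before a full mesh is a single routed (tun) OpenVPN server with client-to-client enabled: every db and app host joins one virtual subnet and they can all reach each other through the server. A minimal sketch of the server side; the subnet is arbitrary and the certificate directives (ca/cert/key/dh) are omitted for brevity:

    ```
    # /etc/openvpn/server.conf -- sketch only, certificate setup not shown
    cat > /etc/openvpn/server.conf <<'EOF'
    dev tun
    server 10.8.0.0 255.255.255.0   # each peer gets an address in this shared subnet
    client-to-client                # lets connected clients (db <-> db, app <-> db) talk
    keepalive 10 120
    persist-key
    persist-tun
    EOF
    ```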

    Read the article

  • Ubuntu 9.10 Server (minimal virtual machine) partitioning

    - by John
    I am setting up a generic Ubuntu server and am trying to figure out the (best) way to partition the machine. Again, this is just a generic one: the default drive is 20GB. Some guides show separate /home, /usr, /var and /tmp partitions. Another one suggested something like this:
      /       4GB
      /boot   512MB
      /tmp    1GB
      /home   5GB
      /usr    5GB
      /var    5GB
    What is the best way to accomplish this?

    Read the article

  • apt-get install Error

    - by LINUX4U
    Installing syslogd gives the following error on the server. How do I diagnose this problem?
      debconf: falling back to frontend: Readline
      Selecting previously deselected package sysklogd.
      (Reading database ... 32541 files and directories currently installed.)
      Unpacking sysklogd (from .../sysklogd_1.5-5ubuntu4_amd64.deb) ...
      Selecting previously deselected package klogd.
      Unpacking klogd (from .../klogd_1.5-5ubuntu4_amd64.deb) ...
      Setting up sysklogd (1.5-5ubuntu4) ...
       * Starting system log daemon...          [ OK ]
      Setting up klogd (1.5-5ubuntu4) ...
       * Starting kernel log daemon...          [fail]
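
    The package install itself succeeds; only the "Starting kernel log daemon" step fails, so the first things worth checking are whether that daemon will start on its own and whether another logger already holds its place. A small diagnostic sketch, nothing in it specific to this machine:

    ```
    sudo /etc/init.d/klogd start                 # rerun just the failing step
    ps aux | egrep 'klogd|rsyslogd|syslog-ng'    # is a competing log daemon already running?
    dmesg | tail -n 20                           # kernel-side hints around the failure
    tail -n 50 /var/log/syslog                   # whatever the system logger managed to record
    ```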

    Read the article

  • Problems using "at" with Apache

    - by Alex Padgett
    I'm trying to use a PHP script to create at jobs, but when it comes time to execute the jobs, nothing seems to happen. I've tried to output any errors to log files, but have had no luck. It seems obvious that it's a permissions issue, because when I set Apache to run as my personal user, everything works fine. However, when I exec wget directly from PHP it also works, so Apache seems to have the correct permissions to use it. The problem appears only when using at in conjunction with Apache, so I need to find a way to make this work with Apache running as its own user. Here is the command I'm using: echo "wget -qO- http://example.com/" | at now + 1 minute 2>&1 Any ideas? EDIT: Apache can create the at jobs; it just seems that nothing happens when they execute.
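
    A few checks worth making on the server; www-data is assumed to be the Apache user, and the spool and mail paths are the usual Debian/Ubuntu locations:

    ```
    ls -l /var/spool/cron/atjobs/        # are the queued job files actually created?
    sudo -u www-data at now <<'EOF'      # queue a trivial job as the Apache user
    touch /tmp/at-test-from-www-data
    EOF
    # at mails a job's output and errors to its owner, so failure messages usually
    # end up in the local mailbox rather than in Apache's logs:
    sudo tail -n 40 /var/mail/www-data
    ```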

    Read the article

  • How to diagnose computer freezing problem

    - by reinierpost
    I have a laptop (a Medion from Aldi) that tends to hang quite often - so often, in fact, that several attempts to install Windows XP or Ubuntu on it have all failed. However, I am able to boot and run Ubuntu from the standard Ubuntu 10.10 installation image, and have done this twice so far. The first time, everything ran smoothly until at some point the GUI (i.e. X) became unresponsive: the cursor kept moving with the mouse, but menus would no longer show and clicking things no longer produced any response. So I switched to the consoles (Ctrl-F1, Ctrl-F2, etc.), which in this setup automatically run shells. The shells were still responsive, and the cd command still worked, but any command that invoked an executable (e.g. /bin/ls, or cd /bin; ./find) caused the shell to hang uninterruptibly. My hypothesis was that all attempts at disk access were hanging, but I didn't actually try a command like echo /proc/$$ or while read line; do echo $line; done < /var/log/syslog to verify this. Another possibility is that an essential system library is cached in memory and somehow failing to function properly. The second time, I left the system running overnight and it didn't hang spontaneously. I'm not sure I have the patience to just twiddle with the running system until the condition reappears, and I'm not sure what to do once it does. Clearly we can rule out a software cause. It seems disk-access related, but it's clearly not permanent hard disk failure, because the system reboots just fine. What kind of hardware problem might produce these symptoms? Could it be a memory problem?

    Read the article

  • Intermittent SSH with ssh_exchange_identification error

    - by rafamvc
    My SSH connection to my server only works for about 10 minutes out of every 30. Things I have figured might be relevant:
      - The server is under load (it is a database server), but in those spare moments when I can connect it is still under the same load, which doesn't make sense.
      - The server runs Ubuntu, and consolekit was using a lot of virtual memory. I restarted consolekit and it seems to be using a reasonable amount of memory now.
      - It is not hosts.allow or hosts.deny; those are set up properly.
      - It is not a firewall problem; those settings were working, and the same settings work for other, similar machines.
      - It is on EC2, the Amazon cloud.
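
    ssh_exchange_identification errors that come and go often mean the server is dropping new connections before authentication even starts, for example when sshd's MaxStartups limit is hit or the machine is too short on memory to fork. A few hedged checks on the server side:

    ```
    grep -i maxstartups /etc/ssh/sshd_config   # a low limit drops surplus pre-auth connections
    sudo tail -f /var/log/auth.log             # what sshd says at the moment a client is refused
    free -m                                    # is there memory left for sshd to fork at all?
    # and from the client, ssh -vvv user@server shows exactly how far the handshake gets
    ```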

    Read the article

  • Squid 2.7.6 not honoring ACL rules

    - by peppery
    Hello there, I have a /24 block of IP addresses assigned to a single server, on which I have been attempting to install Squid (an Ubuntu server machine). All of the IP addresses are set up correctly (aliases of eth0) in /etc/networking and work as they should; using cURL I can specify an interface and requests go out on the correct address. I would like Squid to take the incoming IP address a request arrives on and proxy the request out on that same IP (e.g. incoming on 123.123.123.1:3128 goes out on 123.123.123.1, .2 on .2, etc.), and have set up these ACL rules in /etc/squid.conf:
      acl ip1 myip x.x.x.1
      tcp_outgoing_address x.x.x.1 ip1
      acl ip2 myip x.x.x.2
      tcp_outgoing_address x.x.x.2 ip2
      acl ip3 myip x.x.x.3
      tcp_outgoing_address x.x.x.3 ip3
    and so on, as this seems to be the only way to do what I want (from research). However, after much frustration, Squid seems to be ignoring these rules and sending requests out on the default interface. Does anybody have any suggestions? Thanks.

    Read the article

  • How do I upgrade to PHP 5.4 in CentOS 6.3 with yum?

    - by Vicary
    I found some blog posts about this, but they rather lack descriptions of possible side effects. I could really use some details on these steps:
      1. How do I add a repo that provides PHP 5.4 to yum?
      2. Can this seamlessly replace the current PHP version in CentOS?
      3. How can I switch back to the official repo once it supports PHP 5.4? (Currently 5.3.3 on my system.)
      4. Is there any potential to break the PHP modules I am currently using?
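
    A hedged sketch of the usual workflow with a third-party repository. Remi is one repo commonly used for PHP 5.4 on EL6; the release-RPM URL below is the historical one and may have moved, so treat it (and the exact package globs) as assumptions to verify:

    ```
    # add the repo (EPEL is typically wanted alongside Remi)
    rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
    # upgrade the stock 5.3.3 packages in place from that repo
    yum --enablerepo=remi update 'php*'
    php -v
    # switching back later generally means yum downgrade (or reinstalling from the base
    # repo); either way, extension modules built against 5.3 may need rebuilding
    ```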

    Read the article

  • Are there any OpenGL implementations which can use a server to do the rendering?

    - by user1973386
    Assume I have two independent machines: one running Debian sid and the other running Windows 7. The one running Debian sid has a decent graphics card; the Windows 7 machine has no graphics card and a weak processor. The two are connected over a fast local network. Are there any OpenGL implementations where Windows 7 would use the Debian machine's graphics card to do OpenGL rendering "over the network"?

    Read the article

  • How to jump back to the first character in *nix command line?

    - by clami219
    When writing a long command on the *nix command line and having to go back to the first character in order to add something at the beginning (for instance a nohup, when you realize the process will be a long one, or a sudo, when you realize you need root permissions), it can take a long time for the cursor to make its way back to the first character. Is there a shortcut that lets you jump straight there? I'm using a Mac, so Home is not an option.
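
    For the stock bash/zsh line editor (emacs mode, which is the default and works in the macOS Terminal), the usual keystrokes are below; the sudo line is just the common special case from the question:

    ```
    # Ctrl-A                             jump to the beginning of the line
    # Ctrl-E                             jump back to the end of the line
    # Alt-B / Alt-F (or Esc, then b/f)   move backwards/forwards one word
    sudo !!    # for the "forgot sudo" case: re-run the previous command under sudo
    ```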

    Read the article

  • How do I enable subfolders in Dovecot?

    - by yarun can
    In the past I was able to drag and drop a folder with subfolders (local emails) into my IMAP accounts inside Thunderbird. Now I have moved to my own VPS and it's running Dovecot. So far so good with email. Today I wanted to copy some folders of messages again, but I realized that it does not let me copy folders: I can drag and drop individual emails into folders on the IMAP account, but dragging whole folders does not work. I am not sure what this feature is called; the previous mail servers might have been using some other IMAP server, so I really don't know what it might even be called. Is this a Dovecot thing or a Thunderbird thing? If it is a Dovecot feature, how do I enable it on my server? Dovecot is running on a Debian Wheezy 64-bit VPS. Thanks
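
    A hedged guess at the usual culprit: with mbox storage a mailbox cannot hold both messages and subfolders, while Maildir can, and Thunderbird refuses the folder drop when the server advertises that limitation. It is worth checking which storage Dovecot is actually using (doveconf is the Dovecot 2.x tool shipped with Wheezy):

    ```
    doveconf mail_location
    # a Maildir-style location, which supports arbitrary folder hierarchies,
    # typically looks like this (sketch only, paths vary per setup):
    #   mail_location = maildir:~/Maildir
    ```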

    Read the article

  • Using cd to go up multiple directory levels

    - by Tossrock
    I'm dealing with Java projects, which often result in deeply nested folders (/path/to/project/com/java/lang/whatever, etc.), and I sometimes want to be able to jump, say, 4 directory levels upwards. Typing cd ../../../.. is a pain, and I don't want to symlink. Is there some flag to cd that lets you go up multiple directory levels (in my head, it would be something like cd -u 4)? Unfortunately I can't find any man page for cd specifically; I just get the unhelpful "builtins" page.
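
    There is no such flag in a stock cd (being a shell builtin, its documentation lives in "help cd" rather than a man page), but a tiny shell function gives the same effect; the name "up" is arbitrary:

    ```
    up() {
        local n=${1:-1} path=""
        while (( n-- > 0 )); do path+="../"; done
        cd "$path" || return
    }
    # usage: up 4     # equivalent to cd ../../../..
    ```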

    Read the article

  • How do I configure a swap partition using swapspace?

    - by jcalfee314
    I finally have the swapspace project installed and running (via init.d). The purpose is to have a dynamically resizing swap space. I'm clueless, however, about how to use it: it has good documentation, but it just does not cover that last step. How do I configure a swap partition using swapspace? The process is probably the same for any third-party program that provides a swap space implementation to the kernel. I know this was intended to run as a daemon, because the project provides an init.d script.
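
    For what it's worth, swapspace does not need a pre-made partition at all: once the daemon is running it creates and removes swap files by itself (by default under /var/lib/swapspace, though that path is worth confirming against its config). A quick way to watch it work, using only standard tools:

    ```
    sudo /etc/init.d/swapspace start
    swapon -s        # swapspace-created files should appear here as extra swap entries
    free -m          # total swap grows and shrinks as the daemon adds or removes them
    ```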

    Read the article
