Search Results

Search found 45752 results on 1831 pages for 'ubuntu linux'.

Page 652 of 1831

  • If I ssh to a domain provided by dyndns, does my password go through them?

    - by D Connors
    I'm running Ubuntu on my work PC, and my workplace provides me with a static IP address but not with a domain. It's sometimes useful for me to connect to that PC through ssh, but not often enough for me to instantly remember the IP number. So I set up a dyndns account and associated a short, intuitive domain name with that IP. Here's my question: when I try to ssh to the domain, it asks me

        $ ssh [email protected]
        The authenticity of host 'something.there.foo (xx.xx.xx.xx)' can't be established.
        RSA key fingerprint is 'ALPHANUMERIC STRING'
        Are you sure you want to continue connecting (yes/no)?

    That surprised me a little bit. I have already registered the RSA fingerprint by connecting directly to the IP address. I thought the domain name was simply a convenient way of pointing me in the right direction (i.e. the IP address), but that message makes me think my data is actually going through their servers or something. Which one is it? Am I sending my password through someone else's server? Or is ssh just being really careful, warning me even though the final destination is a known host? The ssh server I'm using is the openssh-server package.
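    The warning usually just reflects how OpenSSH keys its known_hosts entries by the name you typed, not by the route the traffic takes. A minimal check, assuming the default ~/.ssh/known_hosts location:

        # the IP entry exists from the earlier direct connection...
        ssh-keygen -F xx.xx.xx.xx
        # ...but there is no entry yet for the new hostname, hence the prompt
        ssh-keygen -F something.there.foo
        # comparing the printed fingerprints confirms both names point at the same host key

    DynDNS only answers the DNS lookup; the TCP connection, and therefore the password, goes straight to the resolved IP.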

    Read the article

  • Can I use Upstart to start a script which requires the user's X session?

    - by ledneb
    I wrote a script which greps through the output of synclient to determine whether a laptop's touchpad has miraculously turned itself off (Ubuntu seems to /love/ doing this recently) and, if so, turns it back on. The script is something like this:

        #!/bin/bash
        while true ; do
            if [ `synclient | grep -e "TouchpadOff[[:space:]]*1" | wc -l` -ge 1 ] ; then
                synclient TouchpadOff=0
            fi
            sleep 3
        done

    (I don't have the laptop to hand right now, but you get the point! I will update later, when I'm at my laptop, if that's incorrect.) So I tried running this as an upstart job so my touchpad can heal itself without any interaction. But it seems synclient doesn't find the current user's X session when my script is started from upstart. I tried running it with something like su -c myscript.sh ledneb in my script stanza, but to no avail. Should I be looking in the direction of /etc/X11/xinit/xinitrc rather than upstart? Is there a proper way to have this script run in the context of the current (or even a hard-coded) user's X session?
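    synclient talks to the X server, so whatever launches the script has to provide a valid DISPLAY and XAUTHORITY. A hedged upstart sketch under those assumptions (the job name, display number, user and .Xauthority path are illustrative and would need adjusting):

        # /etc/init/touchpad-watchdog.conf -- hypothetical job name
        start on started lightdm    # adjust to the display manager actually in use (gdm, lightdm, ...)
        stop on shutdown
        respawn
        script
            export DISPLAY=:0
            export XAUTHORITY=/home/ledneb/.Xauthority
            exec su ledneb -c "/home/ledneb/myscript.sh"
        end script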

    Read the article

  • How can I recover my data from a damaged hard drive?

    - by krk
    A few days ago, while I was working in Windows, my laptop was knocked on the side where the hard drive is located. As a result the drive was damaged and I couldn't access the Windows partition. I had to boot the Linux one, which is working without any trouble. I have two partitions formatted with NTFS: the one with Windows on it, and another one intended to store data. I mounted the Windows partition from Ubuntu and I could see all my files. But when I tried to mount the data partition it was impossible; it threw an error saying it couldn't recognize the NTFS partition. I tried to copy the damaged disk onto an external hard drive using the command:

        dd if=/dev/sda of=/dev/sdb conv=noerror,sync

    The progress stopped at 60%, and I was still unable to mount the data partition. Now I'm trying to back up my files using a utility called PhotoRec. The problem is that it recovers my files in a disorderly way; everything is mixed up, and I need my original directory structure, so it would be an endless task to organize the files as they were before. Is there any way I can get my partition back?
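    For a drive with bad sectors, GNU ddrescue is usually a better fit than plain dd, because it keeps a log of unreadable regions and can retry them later instead of stalling. A minimal sketch, assuming the damaged disk is /dev/sda, the external disk is /dev/sdb, and the gddrescue package is available:

        sudo apt-get install gddrescue
        # first pass: copy everything readable, skip over bad areas quickly
        sudo ddrescue -f -n /dev/sda /dev/sdb rescue.log
        # second pass: retry the bad areas recorded in the log
        sudo ddrescue -f -r3 /dev/sda /dev/sdb rescue.log

    Once as much as possible has been cloned, running ntfsfix or a Windows chkdsk against the copied partition stands a better chance of restoring the directory structure than file-carving tools like PhotoRec.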

    Read the article

  • Removing all traces of GNU java and openjdk and replacing with Sun JDK

    - by user61766
    I have installed the latest Sun JDK. But when I run java -version I still get the OpenJDK version, so I completely removed OpenJDK. Now when I run java -version I get an even older GNU Java 1.5 (libgcj), so I completely removed that too, although it asked to remove a bunch of dependent apps like OpenOffice.org Writer. Even though I need Writer, I let it go, because I never want to see any GNU Java on my Linux again. So everything related to GNU Java is removed. Luckily I am still able to start Eclipse and it works fine and starts normally (apparently using the installed Sun JDK, which is what I want). But now when I run java -version I get:

        bash: /usr/bin/java: No such file or directory

    What do I need to do so that when I open any terminal window and enter java -version I get the Sun JDK version? Sun JDK is installed in /usr/java/jdk1.6.021, and I also have symlinks /usr/java/latest and /usr/java/defaults pointing to the Sun JDK.
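    On Debian-style systems, /usr/bin/java is normally just an alternatives symlink, so the missing file can be recreated and pointed at the Sun JDK with update-alternatives. A sketch, assuming the /usr/java/latest symlink quoted above:

        sudo update-alternatives --install /usr/bin/java java /usr/java/latest/bin/java 1
        sudo update-alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 1
        sudo update-alternatives --config java    # pick the Sun entry if more than one is listed
        java -version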

    Read the article

  • Explain why .bash_logout won't run commands?

    - by Droogans
    So I've been wondering how to run these two lines of code every time I close an open instance of Terminal:

        history -c
        cat /dev/null > ~/.bash_history

    I export HISTFILE=5 on startup, but I still want to flush the history when I'm done. I've looked around a bit in a couple of places and haven't had much luck. I run Linux Mint, and I'd also note that I ran into a similar issue with .bash_profile; eventually I discovered I needed to place all start-up code in .bashrc, so maybe that has something to do with it. Here's my .bash_logout file:

        #!/bin/bash
        # ~/.bash_logout: executed by bash(1) when login shell exits.

        # when leaving the console clear the screen to increase privacy
        if [ "$SHLVL" = 1 ]; then
            history -c
            cat /dev/null > ~/.bash_history
            [ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
        fi

        # this does nothing on exit...
        echo 'logout'; sleep 2s

    I've tried rearranging this script many ways. I'm not sure whether I misunderstand how bash works, or whether any of this is running in the first place. Does the fact that I run an X server mean bash doesn't consider closing a Terminal to be a logout?
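    ~/.bash_logout only runs when a login shell exits, and terminal emulators under X normally start interactive non-login shells, which would explain why the file is ignored. One possible workaround, sketched here rather than taken from the original thread, is an exit trap in ~/.bashrc so the cleanup runs whenever an interactive shell terminates:

        # ~/.bashrc
        wipe_history() {
            history -c
            cat /dev/null > ~/.bash_history
        }
        # EXIT covers a normal exit; HUP covers the terminal window being closed
        trap wipe_history EXIT HUP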

    Read the article

  • Very high CPU and low RAM usage - is it possible to shift some of the CPU usage onto the RAM (with CloudLinux LVE Manager installed)?

    - by Chriswede
    I had to install CloudLinux so that I could somewhat control the CPU usage and, more importantly, the concurrent connections the websites use. But as you can see the server load is way too high, and that's why some sites take up to 10 seconds to load!

        Server load   22.46 (8 CPUs)                    (!)
        Memory Used   36.32% (2,959,188 of 8,146,632)   (ok)
        Swap Used     0.01% (132 of 2,104,504)          (ok)

        Server: 8 x Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
        Memory: 8143680k/9437184k available (2621k kernel code, 234872k reserved, 1403k data, 244k init)
        Linux
        Yesterday: total of 214,514 page views (AWStats)

    Now my question: can I shift some of the CPU usage to the RAM? Or what else could I do to make the sites run faster (the websites are dynamic, so SQL-heavy)? Thanks.

        top - 06:10:14 up 29 days, 20:37,  1 user,  load average: 11.16, 13.19, 12.81
        Tasks: 526 total,   1 running, 524 sleeping,   0 stopped,   1 zombie
        Cpu(s): 42.9%us, 21.4%sy, 0.0%ni, 33.7%id, 1.9%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  8146632k total, 7427632k used,  719000k free,  131020k buffers
        Swap: 2104504k total,     132k used, 2104372k free, 4506644k cached

           PID USER    PR  NI  VIRT  RES  SHR S  %CPU %MEM     TIME+ COMMAND
        318421 mysql   15   0 1315m 754m 4964 S 474.9  9.5  95300:17 mysqld
          6928 root    10  -5     0    0    0 S   2.0  0.0  90:42.85 kondemand/3
        476047 headus  17   0  172m  19m  10m S   1.7  0.2   0:00.05 php
        476055 headus  18   0  172m  18m 9.9m S   1.7  0.2   0:00.05 php
        476056 headus  15   0  172m  19m  10m S   1.7  0.2   0:00.05 php
        476061 headus  18   0  172m  19m  10m S   1.7  0.2   0:00.05 php
          6930 root    10  -5     0    0    0 S   1.3  0.0 161:48.12 kondemand/5
          6931 root    10  -5     0    0    0 S   1.3  0.0 193:11.74 kondemand/6
        476049 headus  17   0  172m  19m  10m S   1.3  0.2   0:00.04 php
        476050 headus  15   0  172m  18m 9.9m S   1.3  0.2   0:00.04 php
        476057 headus  17   0  172m  18m 9.9m S   1.3  0.2   0:00.04 php
          6926 root    10  -5     0    0    0 S   1.0  0.0  90:13.88 kondemand/1
          6932 root    10  -5     0    0    0 S   1.0  0.0 247:47.50 kondemand/7
        476064 worldof 18   0  172m  19m  10m S   1.0  0.2   0:00.03 php
          6927 root    10  -5     0    0    0 S   0.7  0.0  93:52.80 kondemand/2
          6929 root    10  -5     0    0    0 S   0.3  0.0 161:54.38 kondemand/4
          8459 root    15   0  103m 5576 1268 S   0.3  0.1  54:45.39 lvest
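    CPU time can't be traded for RAM directly; with mysqld alone using almost five cores, the usual first step is to find the queries responsible. A hedged sketch of enabling the slow query log (the path, threshold and my.cnf location are illustrative and may differ on this server):

        # /etc/my.cnf, in the [mysqld] section
        slow_query_log                = 1
        slow_query_log_file           = /var/log/mysql-slow.log
        long_query_time               = 2
        log_queries_not_using_indexes = 1

    followed by mysqldumpslow /var/log/mysql-slow.log (or pt-query-digest) to see which statements need indexes or caching.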

    Read the article

  • Mixing both local and nonlocal addresses on three switches

    - by klew
    I have four computers with nonlocal addresses like 150.X.X.X. Now I am also getting a few more computers that should only be accessible through a gateway (it will be a computing cluster); their addresses are 10.0.0.X. I wanted to include the four older computers in this new cluster, but I want them to remain accessible from the internet on their nonlocal addresses, so I would like to set them up with both 150.X.X.X and 10.0.0.X addresses - I've configured the second address as interface eth0:0, since each machine has only one NIC. The new computers have their own switch and the old computers have their own switch, and both switches are connected to a third switch. The problem is that the old computers can see each other (I can ping them), and the new computers can see each other, but I can't ping an old computer from a new computer or vice versa. Pinging the nonlocal addresses works as expected, however. I looked into the switch configuration and didn't find anything useful. I have no idea what I've missed here. Can somebody help? All computers run Ubuntu Server 10.04.
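    For reference, a minimal sketch of how the alias interface might be declared on Ubuntu Server 10.04 (the specific 10.0.0.X address and /24 mask are illustrative):

        # /etc/network/interfaces
        auto eth0:0
        iface eth0:0 inet static
            address 10.0.0.11
            netmask 255.255.255.0

    With matching netmasks on every host, pings between the 10.0.0.X addresses only work if all three switches actually carry that subnet (same VLAN, no inter-switch filtering), which is the first thing worth confirming on the middle switch.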

    Read the article

  • how can a web page change my mouse speed?

    - by Tekaholic
    I usually have many tabs open in Firefox, and I haven't been able to find the one specific website that causes this because I don't seem to notice it right away. I go to click on something on my desktop and find myself lifting the mouse several times just to get across the screen. It doesn't seem to matter what program I'm using, because this happens on all desktops and in Firefox, too. So I go into my settings and turn the mouse speed all the way up, and it's still not really acceptable. It doesn't matter which tabs I click on, but when I close the browser my mouse is way too sensitive, like I'd expect at the maximum setting. Then I go back to Control Center and return my mouse speed and acceleration to normal, and when I restart my browser the mouse stays normal. So is there something to this, before I start wasting my time hunting through my history to discover which website or sites are having this effect? And if it is a specific site and I locate it, what can I change to stop its effect on my mouse besides not visiting it? I am using Linux Mint 13 on a box with an AMD Athlon processor and 2 GB of RAM. I never installed another browser because everything works for me.

    Read the article

  • Query specific nameserver for a particular domain upon VPN connect

    - by MT
    Some background: I have a work laptop with Ubuntu 9.10 on it. I have a small network at home where I've been running some basic services (for myself/my family) for some 10 years. In my home network there is a nameserver (Fedora) running BIND 9 with two "views". One view is the "outside" view, and it provides name resolution (to the Internet at large) for email, a wiki, and a couple of blogs. The "inside" view provides name resolution to the internal RFC1918 addresses of these servers, as well as all the inside hosts, network equipment, etc. I connect with an OpenVPN client to my home network from outside (such as work). What I'd like to be able to do is resolve names on my internal network across this VPN (so I get the RFC1918 "inside" responses) without fully changing my resolver to the DNS server at my house. For example, if I connect to the VPN from work, I can change my resolver (by editing resolv.conf) to the DNS server at my house (across the VPN) and then successfully resolve all of the inside DNS names on my home network. The issue I have with this is that I'm then no longer able to resolve "inside" names provided by my work's DNS servers (because I'm using my home DNS server). Alternatively, I can connect to the VPN and access my home servers via IP addresses directly, but this is inconvenient and causes issues with Apache name-based hosting (among other things). In the end, the effect I'm trying to achieve is as follows: when I connect to the VPN, I automatically start sending DNS requests for *.myhomedomain.com to my home nameserver, but any other requests continue to go to the nameserver I was using before (the one I received on my company LAN via DHCP). When I disconnect the VPN, requests for *.myhomedomain.com go back to the local LAN DNS server (i.e. all requests go there again). I'm looking for suggestions as to how this can be accomplished.
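    One common way to get this split behaviour (not necessarily what the poster ended up using) is to run a small local dnsmasq instance and point resolv.conf at 127.0.0.1; dnsmasq can forward one domain to a specific server and everything else to the normal resolvers. A sketch, with the home nameserver's VPN-side address as a placeholder:

        # /etc/dnsmasq.conf
        # send queries for the home domain across the VPN
        server=/myhomedomain.com/10.8.0.1
        # everything else follows the resolvers learned from DHCP
        resolv-file=/etc/resolv.dnsmasq

    The OpenVPN client's up/down scripts can then add or remove the server=/myhomedomain.com/... line (or simply start and stop dnsmasq) so the rule only applies while the tunnel is up.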

    Read the article

  • upstart scripts: run a task after networking goes up

    - by The Journeyman geek
    I'm working on moving my current server setup to newer hardware and migrating from Ubuntu Karmic Koala to Lucid Lynx. Currently I'm using gw6c (compiled from the gogo6 website, as opposed to the version from the repositories) to get IPv6 access for my systems. On the Karmic Koala system I used a simple init.d script to get the IPv6 client started:

        #! /bin/sh
        /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf

    I figured that, since this runs at any runlevel, it should translate to:

        respawn
        console none
        start on startup
        stop on shutdown
        script
            exec /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf
            emit free6_ipv6_started
        end script

    This works fine started from initctl, but it apparently fails to start when the machine boots, its status being stop/waiting. It works fine (and respawns) when started otherwise. Any ideas on where I'm going wrong, and what would be the appropriate 'start on' argument? EDIT: the exact error is 'init: gw6c main process (xxx) ended with status 8' followed by the process respawning, with xxx being a PID, I suspect. I'm also half suspecting this happens because gw6c starts before networking does, and I need my eth0 up before gw6c is.
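    If the failure really is gw6c starting before eth0 exists, Upstart on Lucid can defer the job until the interface is configured. A hedged sketch of the stanza change:

        # start only once eth0 has been brought up by the networking scripts
        start on net-device-up IFACE=eth0
        stop on shutdown
        respawn
        exec /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf

    (start on (local-filesystems and net-device-up IFACE!=lo) is the more general form if the job shouldn't depend on one specific interface name.)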

    Read the article

  • Running Solr on VPS problem

    - by Camran
    I have a VPS running Ubuntu. I run Solr on my local machine (a Windows XP laptop) just fine, and I have configured Jetty and Solr on the server in exactly the same way as on my computer. I have also downloaded the JRE and installed it on the server. However, whenever I try to run the start.jar file, the PuTTY terminal shows a bunch of text and then gets stuck. I could paste the text here, but it is very long, so unless somebody wants to see it I won't. Also, I can't view the Solr admin page at all. Does anybody have experience with this kind of problem? Maybe Java isn't correctly installed? It is a VPS, so maybe installation is different. Thanks. UPDATE: These are the last lines from the terminal; in other words, this is where it stops every time:

        INFO: [] webapp=null path=null params={event=firstSearcher&q=static+firstSearcher+warming+query+from+solrconfig.xml} hits=0 status=0 QTime=9
        May 28, 2010 8:58:42 PM org.apache.solr.core.QuerySenderListener newSearcher
        INFO: QuerySenderListener done.
        May 28, 2010 8:58:42 PM org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener newSearcher
        INFO: Loading spell index for spellchecker: default
        May 28, 2010 8:58:42 PM org.apache.solr.core.SolrCore registerSearcher
        INFO: [] Registered new searcher Searcher@63a721 main

    Also, you should know that I installed Jetty by just dragging the folders from my hard drive to the VPS server.
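    Those last lines are what a successful Solr start normally looks like; java -jar start.jar keeps Jetty running in the foreground, so the terminal appearing "stuck" is expected. A sketch of running it detached instead, assuming the default Jetty port of 8983 and that start.jar lives in Solr's example directory:

        cd /path/to/solr/example     # wherever start.jar lives on the VPS
        nohup java -jar start.jar > solr.log 2>&1 &
        # then check that it is listening
        curl http://localhost:8983/solr/admin/

    If the admin page works locally with curl but not from a browser, the VPS firewall or a Jetty bind-address restriction is the next thing to check.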

    Read the article

  • Multiple SSL certificates on one server

    - by Kyle O'Brien
    We're hosting two websites on our fairly tiny but dedicated production server. Both websites require SSL, so we have virtual hosts set up for both of them, each referencing its own domain.key, domain.crt and domain.intermediate.crt files. Each CSR and certificate file was set up with its own unique information and nothing is shared between the two sites (other than the server itself). However, whichever site's symbolic link (set up in /etc/apache2/sites-enabled) is referenced first is the site whose certificate gets served, even when we're visiting the second site. So, for example, assume our companies are Cadbury and Nestle. We set up both sites with their own certificates, but we create Cadbury's symbolic link in Apache's sites-enabled folder first and then Nestle's. You can visit Nestle perfectly fine, but if you check the certificate, it references Cadbury's certificate. We're hosting these websites on a dedicated Ubuntu 12.04.3 LTS server, and both certificates are provided by Thawte.com. I came across a few potential solutions with no degree of success, so I'm hoping someone else has a decent solution. Thanks. Edit: The only other solution that seems to have worked for some people is using SNI with Apache. However, those setups didn't seem to coincide with ours at all.
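    With a single IP address, SNI is the standard way to serve two certificates from one Apache; without it, Apache falls back to the first *:443 vhost it loads, which matches the behaviour described. A hedged sketch for Apache 2.2 on Ubuntu 12.04 (the file names and paths are illustrative):

        # /etc/apache2/ports.conf
        NameVirtualHost *:443
        Listen 443

        # /etc/apache2/sites-available/cadbury-ssl
        <VirtualHost *:443>
            ServerName www.cadbury.example
            SSLEngine on
            SSLCertificateFile      /etc/ssl/certs/cadbury.crt
            SSLCertificateKeyFile   /etc/ssl/private/cadbury.key
            SSLCertificateChainFile /etc/ssl/certs/cadbury.intermediate.crt
        </VirtualHost>

    with an equivalent <VirtualHost *:443> block for the Nestle site. Very old clients that don't send SNI (notably IE on Windows XP) will still get whichever certificate loads first; the only fix for those is a second IP address.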

    Read the article

  • Grub Autostart with timeout

    - by BetaRide
    On Ubuntu 10.04 LTS I want GRUB to start the default OS after 5 seconds, and I'd like to see the output of the startup scripts. Currently GRUB waits forever until I hit return, and the output of the startup scripts isn't visible. Can someone tell me how I have to configure /etc/default/grub, or any other settings? I tried playing with GRUB_TIMEOUT and GRUB_DEFAULT and ran sudo update-grub afterwards, but nothing changed. Any ideas?

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.

        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=5
        #GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=5
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        GRUB_SAVEDEFAULT=true

        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console

        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480

        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        # GRUB_DISABLE_LINUX_UUID=true

        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_LINUX_RECOVERY="true"

        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"
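    One combination known to cause an indefinite wait on 10.04 is the recordfail logic: after an unclean shutdown, GRUB ignores the timeout and waits for a keypress. A hedged sketch of settings that would give a visible 5-second boot with startup messages (back up the file before editing):

        # /etc/default/grub
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=5         # leave the hidden timeout disabled so GRUB_TIMEOUT applies
        GRUB_TIMEOUT=5
        GRUB_CMDLINE_LINUX_DEFAULT=""  # drop "quiet splash" to see the startup script output
        #GRUB_SAVEDEFAULT=true         # GRUB_SAVEDEFAULT needs GRUB_DEFAULT=saved; otherwise leave it off

    followed by sudo update-grub. If the menu still waits forever after a crash, the recordfail timeout set in /etc/grub.d/00_header is the other place to look.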

    Read the article

  • Fedora Core 6 Migration

    - by Matthew Sprankle
    I am at a loss as to what I should do with this server. I need it to run PHP 5.3 and the corresponding version of MySQL. I received a client today through work who is using Fedora Core 6, running 10 very small websites on a very hodgepodge setup. My original idea was to just upgrade to PHP 5.3. I have yum (version 3.0.8 installed) reconfigured for the Fedora archive, and the latest version of PHP it offers is 5.1.8. I am still relatively new to server setups and am nervous about wiping their server to upgrade it; since it is about 6-8 years old, I'm not sure it will even support the newest version of Fedora. The server specs are: Parallels Plesk Panel version 9.5.4; operating system Linux 2.6.9-023stab048.4-smp; CPU GenuineIntel, Intel(R) Xeon(R) CPU E5335 @ 2.00GHz; 10 GB disk space and 1 GB of memory. I use Fedora for my personal server, so I was a little familiar with it, though I haven't done anything too extravagant. Is there a way I can escape this nightmare by installing PHP 5.3, or do I need to migrate these sites to a new server?

    Read the article

  • How to keep multiple servers in sync file wise?

    - by GForceSys
    I'm currently managing a cluster of PHP-FPM servers, all of which tend to get out of sync with each other. The application I'm running on top of the app servers (Magento) allows admins to modify various files on the system, but now that the site is in a clustered setup, modifying a file only changes it on a single instance (one of the app servers) of the various machines in the cluster. Is there an open-source application for Linux that would allow me to keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from the machines to sync. In theory, the perfect application would have small clients running on each machine to be synced, which would talk to a master server that decides how and what to sync from each machine. I have already examined the possibility of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes this unfeasible. As there are multiple app servers (some of which are created dynamically depending on the load of the site), simply setting up an rsync cron job is not efficient, as the cron job would have to be modified on each machine to send files to every other machine in the cluster, and that would just be a whole bunch of unnecessary data transfers and ssh connections.
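    lsyncd is one open-source option that matches this shape: it watches a directory with inotify on the machine where admins edit files and pushes changes to the other app servers over rsync/ssh. A minimal sketch (host names and paths are placeholders, not a statement of what the poster ultimately chose):

        -- /etc/lsyncd/lsyncd.conf.lua
        settings {
            logfile    = "/var/log/lsyncd.log",
            statusFile = "/var/log/lsyncd-status.log",
        }

        -- one sync block per app server that should receive changes
        sync {
            default.rsyncssh,
            source    = "/var/www/magento",
            host      = "appserver2.internal",
            targetdir = "/var/www/magento",
        }

    For dynamically created servers, tools like csync2 or unison, or moving the shared state onto something like GlusterFS, are the usual alternatives; lsyncd alone still needs its config updated whenever the pool changes.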

    Read the article

  • Network external hard drive reports not enough free space

    - by mzhang
    I'm running an Ubuntu (10.04) Samba server on a local network. The server has a 50 GB internal drive with only 24 MB free, and I've shared a folder /samba from that drive. I also have a 1 TB NTFS external hard drive mounted on the system. There is a symbolic link from the Samba shared folder on the nearly-full internal drive to the plenty-of-free-space external drive (i.e. /samba/external_hd). I wish to copy a 3.25 GB folder onto the (remote) external hard drive from a Mac (10.6.8). The Mac reports (correctly) that there are 24 MB free on the server, and so will not let me copy the folder over to the external drive (dragging the folder into /samba/external_hd), failing with a "server does not have enough free space" error. However, it seems that I can still scp the folder onto the external drive via the symbolic link. Is there a reason why this is happening (and are there any ways to prevent it)? Is this even good practice (mounting a drive and linking into the shared directory)?
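    The Mac checks free space at the root of the share, which sits on the nearly-full internal drive; the symlinked directory isn't consulted. One way around that (a sketch, not necessarily the only fix) is to export the external drive as its own Samba share so its own free space is reported:

        # /etc/samba/smb.conf
        [external]
            path = /samba/external_hd
            read only = no
            browseable = yes

    followed by restarting Samba (service smbd restart, or /etc/init.d/samba restart depending on the release) and connecting to the new [external] share from the Mac.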

    Read the article

  • MAC-Address based routing

    - by d-fens
    Here is what I want to do: I have a bunch of systems, some of which might have the same public IP, and I disable ARP on them. I have a firewall (either an IP-layer or bridging firewall) between these systems and the internet. Depending on the destination port of incoming IP packets to some of these public IPs, I want to set the destination Ethernet address. For instance:

        System A has IP 8.8.8.8, MAC de:ad:be:ef:de:ad, ARP disabled
        System B has IP 8.8.8.8, MAC 1f:1f:1f:1f:1f:1f, ARP disabled
        Firewall has IP 8.8.8.1, ARP disabled on that interface

        1.) Incoming packet to IP 8.8.8.8, TCP dest port 100
        2.) Incoming packet to IP 8.8.8.8, TCP dest port 101

        Firewall sets dest MAC for 1.) to de:ad:be:ef:de:ad
        Firewall sets dest MAC for 2.) to 1f:1f:1f:1f:1f:1f

    Second scenario: System A and System B establish outgoing TCP connections, and the firewall matches the destination MAC of the incoming IP packets (the response packets) to the sender's MAC address. Is this possible in any way with Linux and iptables? Edit: I read that ebtables might "work" in a hackish way for this purpose, but I am not sure...
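    On a bridging firewall, ebtables can indeed rewrite the destination MAC based on IP/port matches in the nat PREROUTING chain. A hedged sketch for the first scenario:

        # port 100 -> System A
        ebtables -t nat -A PREROUTING -p IPv4 --ip-dst 8.8.8.8 --ip-proto tcp --ip-dport 100 \
            -j dnat --to-destination de:ad:be:ef:de:ad
        # port 101 -> System B
        ebtables -t nat -A PREROUTING -p IPv4 --ip-dst 8.8.8.8 --ip-proto tcp --ip-dport 101 \
            -j dnat --to-destination 1f:1f:1f:1f:1f:1f

    The reverse direction (matching return traffic back to the original sender's MAC) is harder, since ebtables is stateless; that part usually ends up needing connection-tracking tricks or separate IPs per system.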

    Read the article

  • Tunnel network requests with Windows 7

    - by mark
    I have a Windows 7 64-bit Pro client in a private LAN behind a Netgear WGR614v7 router, and a remote Debian server machine outside. I'd like to tunnel all traffic (or specified ports/protocols) over this outside server, so that when I'm on the Windows machine and I request serverfault.com, the request appears to come not from the WGR614v7's public IP but from the server. It's not only about HTTP traffic; it's basically about everything: other TCP ports, even UDP, etc. It must be transparent to the applications, i.e. they shouldn't be aware of it - all their requests should just appear to come from the server, with the tunnel between the two machines taking care of the packets. I'm aware of PuTTY and forwarding individual ports or using it as a SOCKS proxy, but not many applications support this, and the support in Windows itself looks non-existent to me. I might add it should be something "reasonably" easy to set up. I've heard about PPTP, but I'm unsure about its security implications (by design). Should I go for a VPN? There seem to be two common IPsec solutions for Linux (Openswan and strongSwan); why would I pick one over the other? I also fear that setting up a VPN might be quite complex, but on the other hand maybe it's the only sane way to do things right? Or is OpenVPN sufficient? I'm looking for open-source solutions; what other options do I have, or which direction should I head in?
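    OpenVPN covers this case fairly directly: the Windows client gets a TUN/TAP adapter, and the server can push a default-route override so every application's traffic leaves via the Debian box without per-application configuration. A hedged sketch of the relevant server-side lines (the addresses are the stock OpenVPN example subnet):

        # /etc/openvpn/server.conf (excerpt)
        server 10.8.0.0 255.255.255.0
        # make connected clients send all their traffic through the tunnel
        push "redirect-gateway def1"
        # hand out a resolver so DNS also goes through the tunnel
        push "dhcp-option DNS 10.8.0.1"

    The server additionally needs IP forwarding enabled and an iptables MASQUERADE rule on its outbound interface so the tunnelled traffic can reach the internet.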

    Read the article

  • very slow connection to ssh server from client (but not other servers)

    - by AntonOfTheWoods
    I have an Ubuntu 12.04 laptop that takes so long to connect to various servers (in different data centres) that it seems like a bit of a lottery whether I'll actually get a connection. Connections between the servers themselves are instantaneous, and I've set the following on the servers I'm connecting to (and rebooted for good measure):

        UseDNS no
        AddressFamily inet

    I also added the reverse DNS and IP of the cable connection I'm connecting from. If I connect from the laptop via telnet (telnet my.server 22), the connection is also instantaneous, so it doesn't appear to be a problem with an intervening firewall. I get the same behaviour whether I connect with the IP, a short name from my hosts file, or the FQDN. I'm connecting over a 50 Mbps (cable, sync) connection, so that doesn't appear to be the problem, and when I do finally get a connection it's a good, quick, stable one. I have tried listening on another port (8000) and that makes no difference. Web and other connections from the laptop to the machine are also very good. Does anyone have any ideas?
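    Since the servers already have UseDNS no, the delay is more likely on the client side; GSSAPI negotiation and IPv6 lookups are common culprits on a 12.04 desktop. A hedged sketch of client-side settings to try in ~/.ssh/config, plus a verbose run to see where the handshake stalls:

        # ~/.ssh/config
        Host *
            GSSAPIAuthentication no
            AddressFamily inet

        # watch the gaps between debug lines to find the slow step
        ssh -vvv user@my.server

    If the verbose output pauses around "SSH2_MSG_SERVICE_ACCEPT", authentication-method negotiation (GSSAPI/Kerberos) is usually the cause; a pause before the banner appears points back at DNS or routing.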

    Read the article

  • Why would my network slow down?

    - by monkthemighty
    The network at my work has about 40 computers on it and quite a few printers. When a lot of people are working, the network gets slow. If I ping the router from my computer, the latency keeps rising, sometimes to the point that it times out. The router we are using runs Ubuntu on an Atom processor with 4 GB of RAM. When the network slows down, the ksoftirqd process uses most if not all of the processing power; I have found that ksoftirqd is the kernel thread that handles IRQ requests. Also, when the network slows down, I have captured packets on the router with tshark and looked at them in Wireshark on my laptop. The capture shows a lot of packets flagged as TCP Dup ACK and TCP Retransmission. The destinations of these are spread across most of the computers on the network, but some show far more than others. What could be causing this problem?
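    A quick way to quantify the problem per host, sketched here with tshark's analysis filters (older tshark versions take -R instead of -Y for the display filter):

        # count retransmissions in the capture
        tshark -r capture.pcap -Y "tcp.analysis.retransmission" | wc -l
        # list which destination addresses they go to
        tshark -r capture.pcap -Y "tcp.analysis.retransmission" -T fields -e ip.dst | sort | uniq -c | sort -rn

    Heavy ksoftirqd load plus widespread retransmissions often points at an overloaded NIC or a duplex/flow-control mismatch on the router's interface, so ethtool output for that interface is worth checking alongside the capture.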

    Read the article

  • Apache Virtualhosts with PHP and custom logs - how to isolate PHP errors?

    - by Repox
    I'm trying to set up a simple hosting environment for my application on an Ubuntu server. I created a virtual host like this:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.example.com
            ServerAlias example.com
            DocumentRoot /home/owner/example.com/docs

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/owner/example.com/docs/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /home/owner/example.com/logs/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /home/owner/example.com/logs/access.log combined

            php_flag log_errors on
            php_value error_log /home/owner/example.com/logs/php-error.log
        </VirtualHost>

    Now, my problem is that PHP errors and warnings are written to error.log, not to php-error.log as I was hoping. How can I achieve this?
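    A common reason for this setup failing is that the php-error.log file doesn't exist or isn't writable by the Apache user, in which case PHP falls back to the SAPI logger, i.e. the virtual host's ErrorLog. A hedged checklist, assuming Apache runs as www-data:

        sudo touch /home/owner/example.com/logs/php-error.log
        sudo chown www-data:www-data /home/owner/example.com/logs/php-error.log

    It is also worth using php_admin_flag/php_admin_value instead of php_flag/php_value so a stray ini_set() or .htaccess can't override the log location, and checking phpinfo() for that vhost to confirm error_log actually picked up the new value.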

    Read the article

  • Route all traffic of home network through VPN [migrated]

    - by user436118
    I have a typical semi-advanced home network scenario:

        A cable modem (eth)
        A wireless router (Netgear N600), eth and wlan
        A home server (running Ubuntu 12.04 LTS, connected over wlan)
        A bunch of wireless clients (wlan)

    Lying around I also have another, cheaper wlan router and two different USB wlan NICs that are known to work with Linux. ACTA struck. I want to route ALL of my WAN traffic through a remote server via a VPN. For the sake of completeness, let's say there is a remote server running Debian Squeeze where a VPN server is to be installed. The network should then behave so that if the VPN is not up, the home network is cut off from the outside world. I am familiar with general system/network practices, but I lack the specific detailed knowledge to accomplish this. Please suggest the right approach, packages and configurations you'd use to reach this solution. I've also envisioned the following network configuration; please improve it if you see fit:

        Client:             ip 10.1.1.x    nm 255.0.0.0      gw 10.1.1.1    reached via WLAN
        Wlan router 1:      ip 10.1.1.1    nm 255.0.0.0      gw 10.10.10.1  reached via ETH
        Homeserver (eth0):  ip 10.10.10.1  nm 0.0.0.0        gw 192.168.0.1 reached via WLAN
                            <<< the VPN is initiated here, and the other endpoint is somewhere on the internet
        Homeserver (wlan0): ip 192.168.0.2 nm 255.255.255.0  gw 192.168.0.1 reached via WLAN
        Wlan router 2:      ip 192.168.0.1 nm 0.0.0.0        gw set via DHCP, uplink connector: cable modem
        Cable modem:        remote DHCP; has an on-board DHCP server for the ethernet device that connects to it, and only works this way

    All this WLAN fussery is because my home server is located in a part of the house where a cable link isn't possible, unfortunately.
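    For the "cut off when the VPN is down" requirement, the usual pattern is OpenVPN on the home server plus a forwarding policy that only permits traffic out through the tunnel interface. A hedged sketch on the home server (the interface names and LAN side follow the layout above and would need adjusting):

        # NAT the LAN out through the VPN tunnel only
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
        # allow forwarding LAN -> tunnel and the replies back
        iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
        iptables -A FORWARD -i tun0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # default: drop everything else, so nothing leaks via wlan0 if tun0 is down
        iptables -P FORWARD DROP

    With the clients using the home server (10.10.10.1) as their gateway, a downed tunnel means the DROP policy simply blackholes their WAN traffic until the VPN comes back up.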

    Read the article

  • Disk (EXT4) suddenly empty without any sign of why

    - by Ohnomydisk
    I have an Ubuntu 10.04 server with several disks in it. The disks are set up with a union filesystem, which presents them all as one logical /home. A few days ago, one of the disks appears to have suddenly 'become empty', for lack of a better explanation. The amount of data on the /home mount almost halved within minutes - the disk appears to have held just over 400 GB of data before 'becoming empty'. I have absolutely no idea what happened. I was not using the server at the time, but there are half a dozen other users who may have been (without root access and without the ability to hose a whole disk). I've run SMART tests on the disk and it comes back clean. The filesystem checks out fine (it has 12 GB used now, as some user software continued downloading after the incident). All I know is that at around midnight on October 19 the disk usage changed dramatically. The data points are every 15 minutes, and the full loss occurred between captures:

        2012-10-18 23:58:03.399647 - has 953.97/2059.07 GB [46.33 percent]
        2012-10-19 00:13:15.909010 - has 515.18/2059.07 GB [25.02 percent]

    Other than that I have not much to go on :-( I know that:

        There's nothing interesting in the log files at that time
        Nobody appeared to be logged in via SSH when it occurred (most users do not even use SSH)
        The server stayed online through whatever occurred (3 months uptime)
        None of the other disks were affected, and everything else on the server looks completely normal
        I have tried using "extundelete" on the disk and it didn't really find anything (some temporary files, but they looked new anyway)

    I am completely at a loss as to what could have caused this. I was initially thinking maybe a root escalation exploit, but even if someone did maliciously "rm" the disk contents, wouldn't it take more than 15 minutes for 400 GB?

    Read the article

  • Global Email Forwarding with EXIM?

    - by Dexirian
    I've been trying to find a solution to this for a while without success, so here I go. I was given the task of building a high-availability, load-balanced network cluster for our 2 Linux servers. I did some work and managed to get DNS + SQL + web folders + mail synchronisation going between both. Now I would like server 2 to only handle mail and server 1 to only do web hosting. I transferred all the accounts from 1 to 2 using the WHM built-in account transfer feature, and created 2 different rsync jobs that sync, update, and delete the files for mail and websites. I was able to successfully transfer 1 mail account from 1 to 2, and server 2 works flawlessly; all I had to do was change the MX entries to point to the new server and bingo. Now my problem is that some clients have their mail software configured to point to oldserver.domain.com. I can't make the (A) entry of oldserver.domain.com point to the new server, for obvious reasons. I thought of using .forward files and adding them to the home directories of the users concerned, but that would be very difficult. So my question is: is there a way to configure Exim so that it will simply forward mail to the new server? I need all the users to end up with their mail on server 2 without them doing anything. Thanks! EDIT - TO CLARIFY MY PROBLEM: Some clients have their mail pointing to oldserver.xyz instead of mail.oldserver.xyz. I want to know if I can do something that avoids modifying the clients' configuration. I would also like to know if there is a way to find out which clients aren't properly configured.
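    On a cPanel/WHM box, one way to do this at the Exim level (sketched here, with example.com standing in for the real domains) is a manualroute router that sends everything for the moved domains to the new server instead of delivering locally:

        # placed early in the router section of the Exim configuration
        forward_to_new_server:
          driver = manualroute
          domains = example.com : otherdomain.com
          transport = remote_smtp
          route_list = * newserver.example.com

    The old server also needs the domains taken out of its local-delivery lists (on cPanel, /etc/localdomains vs /etc/remotedomains) so Exim treats them as remote. Exim's main log (exim_mainlog) would then show which client IPs are still submitting or polling against the old server, which answers the "which clients aren't reconfigured" part.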

    Read the article

  • How to prevent response to who-has requests on virtual eth interface?

    - by user42881
    Hi, we use small embedded x86 Linux servers equipped with a single physical Ethernet port as a gateway for an IP video surveillance application. Each downstream IP cam is mapped to a separate virtual IP address like this: real eth0 IP address = 192.168.1.1, camera 1 (eth0:1) = 192.168.1.61, camera 2 (eth0:2) = 192.168.1.62, etc., all on the same eth0 physical port. This approach works well, except that a specific third-party Windows video recording application running on a separate PC on the same LAN automatically pings the virtual IPs looking for unique who-has responses on system startup and, when it gets back the same eth0 MAC address for each virtual interface, freaks out and won't let us subsequently enter those addresses manually. The Windows app doesn't mind, though, if it receives no answer to the who-has request. My question: how can we either (a) shut off the who-has responses just for the virtual eth0:x interfaces while keeping them for the primary physical eth0 port, or (b) spoof a valid but different MAC address for each virtual interface? Thanks!
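    eth0:x aliases can't have their own MAC, but macvlan sub-interfaces can, which addresses option (b) directly. A hedged sketch with made-up locally administered MAC addresses (kernel macvlan support is assumed):

        # camera 1: its own MAC and IP on top of eth0
        ip link add link eth0 name cam1 address 02:00:00:00:00:61 type macvlan
        ip addr add 192.168.1.61/24 dev cam1
        ip link set cam1 up

        # camera 2
        ip link add link eth0 name cam2 address 02:00:00:00:00:62 type macvlan
        ip addr add 192.168.1.62/24 dev cam2
        ip link set cam2 up

    Each macvlan device answers ARP with its own address, so the Windows recorder sees a distinct MAC per camera IP. For option (a), arptables rules dropping outbound ARP replies for the .61/.62 addresses would be the alternative, at the cost of the recorder getting no answer at all.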

    Read the article
