Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.


  • TCP Keepalive and firewall killing idle sessions

    - by Carlos A. Ibarra
    In a customer site, the network team added a firewall between the client and the server. This is causing idle connections to get disconnected after about 40 minutes of idle time. The network people say that the firewall doesn't have any idle connection timeout, but the fact is that the idle connections get broken.

    In order to get around this, we first configured the server (a Linux machine) with TCP keepalives turned on with tcp_keepalive_time=300, tcp_keepalive_intvl=300, and tcp_keepalive_probes=30000. This works, and the connections stay viable for days or more.

    However, we would also like the server to detect dead clients and kill the connection, so we changed the settings to time=300, intvl=180, probes=10, thinking that if the client was indeed alive, the server would probe every 300s (5 minutes) and the client would respond with an ACK, and that would keep the firewall from seeing this as an idle connection and killing it. If the client was dead, after 10 probes the server would abort the connection.

    To our surprise, the idle but alive connections get killed after about 40 minutes as before. Wireshark running on the client side shows no keepalives at all between the server and client, even when keepalives are enabled on the server. What could be happening here?

    If the keepalive settings on the server are time=300, intvl=180, probes=10, I would expect that if the client is alive but idle, the server would send keepalive probes every 300 seconds and leave the connection alone, and if the client is dead, it would send one after 300 seconds, then 9 more probes every 180 seconds before killing the connection. Am I right?

    One possibility is that the firewall is somehow intercepting the keepalive probes from the server and failing to pass them on to the client, and the fact that it got a probe makes it think that the connection is active. Is this common behavior for a firewall? We don't know what kind of firewall is involved.

    The server is a Teradata node and the connection is from a Teradata client utility to the database server, port 1025 on the server side, but we have seen the same problem with an SSH connection, so we think it affects all TCP connections.
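
    One point worth checking that the question does not mention: the tcp_keepalive_* sysctls only take effect on sockets whose owning application has enabled SO_KEEPALIVE, so a capture showing no probes at all may simply mean the listening service never requested keepalives in the first place. A quick way to confirm the kernel-side values and watch the wire from the server itself (the interface name is an assumption; port 1025 is taken from the question):

      # confirm the effective kernel-side values
      sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes

      # capture on the server as well; keepalive probes show up as zero-length ACKs on the idle session
      tcpdump -ni eth0 tcp port 1025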

    Read the article

  • How to completely disable IPv6 for loopback interface on RHEL 5.6

    - by Marc D
    I've done lots of research on how to disable IPv6 on RedHat Linux and I have it almost completely disabled. However, the loopback interface is still getting an inet6 loopback address (::1/128). I can't find where IPv6 is still enabled for loopback. To disable IPv6 I added the following settings to /etc/sysctl.conf:

      net.ipv6.conf.default.disable_ipv6=1
      net.ipv6.conf.all.disable_ipv6=1

    And also added the following line to /etc/sysconfig/network:

      NETWORKING_IPV6=no

    After rebooting, the inet6 address is gone from my physical interface (eth0), but is still there for lo:

      # ip addr show
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
          link/ether 00:50:56:xx:xx:xx brd ff:ff:ff:ff:ff:ff
          inet 10.x.x.x/21 brd 10.x.x.x scope global eth0

    If I manually remove the IPv6 address from loopback and then bounce the interface, it comes back:

      # ip addr del ::1/128 dev lo
      # ip addr show lo
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
      # ip link set lo down
      # ip link set lo up
      # ip addr show lo
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever

    I believe IPv6 should be disabled at the kernel level, as confirmed by sysctl:

      # sysctl net.ipv6.conf.lo.disable_ipv6
      net.ipv6.conf.lo.disable_ipv6 = 1

    Any ideas on what else would cause the loopback interface to get an IPv6 address?
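
    The output above already shows net.ipv6.conf.lo.disable_ipv6 = 1 while ::1 keeps coming back, which suggests the sysctl route is a dead end on this particular kernel. The approach usually suggested for RHEL 5 is to keep the ipv6 module from loading at all. A sketch (it assumes ipv6 is built as a module on this kernel rather than compiled in):

      # /etc/modprobe.conf
      options ipv6 disable=1
      alias net-pf-10 off

      # after a reboot, verify:
      lsmod | grep -w ipv6     # should print nothing
      ip addr show lo          # should list only 127.0.0.1/8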

    Read the article

  • Plesk FTP not working but SFTP and Shell is working

    - by shamittomar
    I am facing a strange problem. The FTP on my Plesk VPS is not working. Whenever I try to connect, the FileZilla FTP client says:

      Status: Resolving address of xxxxxxxxxxxxx.com
      Status: Connecting to xxx.xxx.xxx.xxx:21...
      Status: Connection established, waiting for welcome message...
      Error:  Could not connect to server

    So it's not even getting to the step of asking for a username/password; it's something else. SFTP on port 22 is working fine, and I can successfully get shell access and run commands. But I NEED FTP access too, on port 21. I have searched everywhere but cannot find any setting to enable it. This is the Plesk version info:

      Parallels Plesk Panel version 9.5.2
      Operating system   Linux 2.6.26.8-57.fc8
      CPU                GenuineIntel, Intel(R) Pentium(R) 4 CPU 3.00GHz

    Any help is appreciated.

    [EDIT]: The firewall is not blocking it. I have checked on the server and there are absolutely no blocking rules; the firewall states "All incoming/outgoing connections are accepted on FTP". And on the client side (my PC), I can connect to other FTP servers, so this is not an issue with my PC's firewall. Moreover, I cannot even connect to the FTP from online FTP clients like net2ftp.
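
    Since the TCP handshake on port 21 never completes, a sensible first check is whether anything is listening there at all. Plesk 9 normally runs ProFTPD out of xinetd; the service file name below is the usual one on Plesk but should be treated as an assumption:

      netstat -tlnp | grep ':21 '            # is any daemon bound to port 21?
      grep disable /etc/xinetd.d/ftp_psa     # Plesk's ProFTPD xinetd entry; "disable = yes" would explain the symptom
      /etc/init.d/xinetd restart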

    Read the article

  • Intermittently uncommunicative subnets

    - by mhd
    Last week proved me a veritable Cassandra: I've always said that it's a bad idea to have only one firewall/router, without a backup or failover. And thus our Cisco PIX went haywire, refusing to route properly. And of course, the only one available here on short notice is me, and while I'm quite grounded in Linux, I'm really a developer, not a sysadmin (the fact that this hit me on sysadmin appreciation day is a bit ironic).

    Anyway, this weekend I tried to hack up a temporary solution: I used an old server with enough NICs (two built-in, four on a card) to serve as a gateway and firewall. Due to some problems with the RAID controller, I only got two router distros running, and between Untangle and eBox I decided on the latter. Now everything is mostly okay: I've got all the different subnets we've got here (all with separate switches) talking to each other and even to the internet (Cisco 2800 router, T1 lines).

    But from time to time (at 20-60 minute intervals), I get a total routing failure. Our main office subnet can't talk to our server subnet and can't connect to the internet. This is not the end of a gradual slowdown; either everything is working perfectly, or I get a total lack of communication for about two minutes each time. Now I'm a bit at wits' end about what to check. At least with the default eBox setup, nothing in /var/log shows anything weird, and it doesn't exactly have lots of built-in monitoring tools. So I'm hoping someone here could give me some pointers about what to look out for. I did change the ethernet cable from the office switch to the firewall, with no results. I might change switches, although within the switch it seems to work well enough.

    Edit: I'm not sure whether this is the sole cause of the problem, but after I noticed a few DHCP entries just before the last drop of connectivity, I tried to reproduce that. And alas, whenever I renew a DHCP lease, I can't access other subnets anymore. Running ISC DHCPD 3.0.6.

    Read the article

  • Getting Started in SuSE as an Ubuntu User

    - by Subhamoy Sengupta
    I am not a Linux newbie, but I haven't touched SuSE in a very, very long time (last time I tried it, it was SuSE 7!). Now I finally felt like giving it a try, and many things seem strange or unnecessarily complex. I have a series of questions.

    1. How do I ensure that my packages are up to date? It sounds silly, but I already tried the obvious methods. I have disabled the default repositories that show up when you do zypper lr, and added the Tumbleweed and Packman repositories (Essentials, Multimedia, Extra). Then I did a sudo zypper ref --force and then sudo zypper dup, and it tells me many dependencies are not met. I have already added solver.allowVendorChange=true to /etc/zypp/zypp.conf, so it should not care which repository the latest versions are in, and just upgrade to them. Even when I chose to skip the packages with unmet dependencies, and quite a bit seemed to happen in the background, I opened Firefox afterwards and the version was 7! I am guessing things did not go as expected. But of course this is not a problem with SuSE; I am just not understanding the system right. How do I do it right?

    2. When I start typing the arguments of a command, for example sudo zypper install, and I type sudo zypper ins and keep hitting TAB, nothing happens! This always worked in Ubuntu and I feel very uneasy without it. Is this how SuSE is supposed to be?

    3. When I try to install something and start writing its name, even though the package exists and I am sure of it, hitting TAB does not autocomplete it. This is also quite inconvenient. Why is it not happening?

    There are many things in SuSE that are really great, and I think I will stay with it and not go back to Ubuntu once I settle these very rudimentary issues. But right now they are giving me a lot of grief! Please help!

    Read the article

  • How to set umask globally?

    - by DevSolar
    I am using a private user group setup, i.e. a user foo's home directory is owned by foo:foo, not foo:users. For this to work, I need to set the umask to 002 globally. After a quick grep -RIi umask /etc/*, it seemed for a moment that modifying the UMASK entry in /etc/login.defs should do the trick. It does, too -- but only for console logins. If I log in to my desktop, and open a terminal there, I still get to see the default umask 022. Same goes for files created from apps started through the menu. Apparently, the display manager (or whatever X11 component responsible) does source some different setting than a console login does, and damned if I could tell which one it is. (I tried changing the setting in /etc/init.d/rc, and no, it did not help.) How / where do I set umask globally (and for all users), so that the X11 desktop environment gets the memo as well? (The system is Linux Mint / Ubuntu, in case that changes anything...)
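
    One approach that reaches display-manager logins as well as console logins (assuming a PAM-based Ubuntu/Mint setup) is pam_umask, since the graphical session still goes through PAM even though it never reads login.defs the way a console login does. A minimal sketch:

      # /etc/pam.d/common-session  (covers console, ssh and the display manager)
      session optional pam_umask.so umask=002

    After logging out of the desktop and back in, a terminal started from the menu should report 0002 when you run umask.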

    Read the article

  • DHCPD (Slackware) - Disabling auto-generation of gateway as DNS server

    - by Dogbert
    Good day, I am using a Linux workstation on Slackware 13.37. One "problem" I have had to deal with ever since 11.0 is the following:

    - DNS servers are queried and determined at startup by the DHCP daemon (DHCPD).
    - This is invoked at startup by a script located at /etc/rc.d/rc.dhcpd.
    - My DNS servers for my ISP are resolved correctly, and are stored in a list located at /etc/resolv.conf.
    - However, the one annoying problem is that my gateway IP (ie: 192.168.1.1) is always automatically put at the top of the list in resolv.conf, meaning I have to always wait for a timeout before a valid DNS server is used to resolve an address (ie: timeout on 192.168.1.1 because it is not actually a DNS server, then the next server in the list is used).

    I could lower my DNS resolution timeout so the gateway query times out quicker, but that's not what I want, as I don't want to degrade the abilities of legitimate DNS servers. What I would like to do is change how DHCPD operates so that it does NOT put my gateway IP address at the beginning of this list. I've searched via "man dhcpd", etc., and haven't found the exact answer yet. Any help on this issue is appreciated. Thank you all in advance for your time and assistance.
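
    The gateway usually ends up in resolv.conf because the router's DHCP server advertises itself as a DNS server, so the cleaner fix lives in the DHCP client's configuration rather than in dhcpd. Which client Slackware is actually running matters (dhcpcd is the usual one); both common clients can override the advertised servers. A sketch, with placeholder nameserver addresses:

      # if the client is dhcpcd (5.x or later): /etc/dhcpcd.conf
      static domain_name_servers=208.67.222.222 208.67.220.220

      # if the client is dhclient: /etc/dhclient.conf
      supersede domain-name-servers 208.67.222.222, 208.67.220.220;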

    Read the article

  • Ubuntu xrandr rotate issue

    - by user83544
    I've just bought a second monitor for my PC which happens to be a pivot monitor. I've already read lots of forums related to my problem but haven't come across a solution; I have the same symptoms as dozens of posts, but no matter what I try, it just doesn't work. I've already changed the xorg.conf file and added, in the Device section just under Driver "nvidia", the following for my second monitor:

      Option "RandRRotation" "on"

    When I save and reboot, I try to rotate my screen with the NVIDIA X Server Settings tool by choosing the second monitor and clicking either "left" or "right" for the rotation. It immediately exits the settings window and does nothing. I tried within the terminal by typing:

      xrandr -o right

    and I get the following error:

      X Error of failed request:  BadMatch (invalid parameter attributes)
        Major opcode of failed request:  154 (RANDR)
        Minor opcode of failed request:  2 (RRSetScreenConfig)
        Serial number of failed request:  14
        Current serial number in output stream:  14

    I actually manage to rotate it with Option "Rotate" "CCW" instead of "RandRRotation". The problem with this solution is that you get the second monitor in the right position, but any window you open on that screen is practically unchangeable: you can't resize it or move it, making it useless for reading PDFs, which is the main reason I bought this second screen to help me write my thesis. Any help is really appreciated. Output of sudo lshw -c video:

      hiram@hiram-linux:~$ sudo lshw -c video
        *-display
             description: VGA compatible controller
             product: nVidia Corporation
             vendor: nVidia Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
             configuration: driver=nvidia latency=0
             resources: irq:16 memory:f8000000-f9ffffff memory:d8000000-dfffffff memory:d4000000-d7ffffff ioport:dc00(size=12 memory:fbd80000-fbdfffff
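
    If the installed driver exposes RandR 1.2 (newer NVIDIA releases do), rotation can be applied to the second output only, which avoids the whole-screen xrandr -o call that returns BadMatch. A sketch; the output name is an assumption and should be taken from xrandr -q:

      xrandr -q                                 # list output names, e.g. DVI-I-1, HDMI-0
      xrandr --output DVI-I-1 --rotate left     # rotate only the pivot monitor
      xrandr --output DVI-I-1 --rotate normal   # undo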

    Read the article

  • Virtualizing an Inline network appliance with VirtualBox (or VMWare)

    - by Tzury Bar Yochay
    My device, which is a Linux-based IP in-liner, is transparent to the network peripherals; that is, no IP address is assigned to any of its interfaces. For the sake of the conversation, let's use an ADSL connection as an example: while the device is inspecting the bi-directional traffic, the network behaves the same as if the device were not there, attached to the wire (see "Physical setup" in the attached diagram).

    I wonder if I can enclose that "device" within a Windows machine and have it operate virtually, so that it still sits inline between the ADSL router and the Windows networking interface, by using virtual NICs (or whatever their name is in Windows), and inspect the traffic the same as if it were on a separate physical device. The drawing under "Virtual Setup" in the attached diagram shows what I am trying to achieve.

    Reading a bit of the VirtualBox docs, it seems like binding the right side is relatively simple: perhaps I should have one network adapter set to Bridged Networking, and VirtualBox will connect it to the physical NIC on the host machine, so network packets are exchanged directly, circumventing the host operating system's network stack (WinXP in my case). However, I have no idea how to achieve the left side of my diagram, which requires adding virtual NICs to Windows and configuring them correctly in a way that makes that pipeline possible. I would appreciate any help. By the way, if that is not possible with VirtualBox but is with another virtualization solution (e.g. VMware), I would accept that as well.
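
    A commonly suggested wiring for the VirtualBox side (a sketch only; the VM name and adapter names are assumptions) is one bridged adapter on the NIC that faces the ADSL router and one host-only adapter that the Windows host then treats as its own uplink:

      VBoxManage hostonlyif create
      VBoxManage modifyvm "inliner" --nic1 bridged  --bridgeadapter1  "Local Area Connection"
      VBoxManage modifyvm "inliner" --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"

    The harder part is exactly the left side of the diagram: Windows has to be persuaded to send its traffic out through the host-only adapter (for example by unbinding TCP/IP from the physical NIC), otherwise it keeps using its normal path and bypasses the VM entirely.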

    Read the article

  • GVIM hangs when saving through GVFS' FTP

    - by Lie Ryan
    I loved Gnome's Nautilus and its FTP integration: being able to mount a remote FTP directory as a regular bookmark/directory and double-click any remote file to open it in any unmodified program. I also loved editing text files with GVim. However, if I double-click a text file in Nautilus to open it in GVim, saving the file takes about 10 seconds and GVim hangs for that amount of time. The major irritant is that I cannot continue editing while the text editor is waiting for the write to finish; this delay interrupts my workflow and thought process, and saving becomes a painful process. The other problem is that I don't think simply uploading a file should take that much time. I'm aware of GVim's internal FTP support, but it is not as well integrated with Nautilus's FTP. So, a few questions:

    1. Is there a way to make GVim or GVFS save in the background while I continue editing?
    2. Why is GVFS so slow?
    3. Is there any way to set GVFS to use a single persistent FTP connection instead of creating a new FTP connection each time?

    I'm on Gentoo Linux x86-64.

    Read the article

  • Start Chrome by command line, but adding some arguments to make it login into your Google account automatically

    - by jim
    Is there a way to start Chrome from the command line (using Linux), providing it some argument to make it log into some Google account automatically? I'm looking for something like google-chrome -account foo -pass bar that I can easily put in a bash script later.

    A little background: I have a laptop connected to my TV, which is currently using just a mouse for user interaction. There's no Google account logged in by default, and that's the way I want to keep it, so my kids can't come across videos and pictures in Google and YouTube that they are not supposed to see (e.g. adult content, or anything marked as not appropriate for kids by Google's safe-search filters). The bad thing about this is that there are some music videos on YouTube that require you to be logged in to see them, usually those we (the adults) sing when playing karaoke. As the only input available is a mouse, I'm looking for a way to start with my Google account without having to type the whole thing using the on-screen keyboard.

    You may think, "Why can't you use the keyboard, if the laptop is right there?" Well, it's in a kind of uncomfortable position, too high for me without a chair or something, as it's right above the furniture the TV sits on.

    Is there a way to make this scriptable? If not, do you know any other workaround? Note: using the "remember me after logging off" or similar options is ruled out, as the safe-search Chrome version must always be the default version to run.
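
    There is no documented Chrome switch that takes a Google username and password, but Chrome can be pointed at a separate profile directory with --user-data-dir, and a profile stays signed in once you log into it a single time. A sketch of a wrapper that keeps a dedicated "karaoke" profile without touching the default, kid-safe one (the directory path is an assumption):

      #!/bin/bash
      # First run: sign in to the Google account by hand, then close Chrome.
      # Every later run reuses the stored session, so nothing has to be typed on the on-screen keyboard.
      google-chrome --user-data-dir="$HOME/.config/chrome-karaoke" "https://www.youtube.com"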

    Read the article

  • Fedora installed in Legacy mode, how to make it work in UEFI?

    - by TryntaLearn
    I am trying to install a Linux distribution on my new laptop. It's an MSI GE40, which comes preinstalled with Windows 8; it's a UEFI machine. I have tried installing Ubuntu and Fedora with limited success. I've tried running it in UEFI, in UEFI with CSM mode, with Secure Boot enabled, with Secure Boot disabled, and with Secure Boot enabled but in user mode. I have had no success with any of these methods. With Ubuntu the GRUB loader shows up, but when I pick 'Try Ubuntu' or 'Install Ubuntu', it's just a blank screen (I've been using live USBs, by the way). With Fedora, it'll show me the next screen, on which it says 'binary authorised by vendor certificate' or 'Secure boot not enabled', and then it stops doing anything. The closest thing to success I reached was switching to Legacy mode to install Ubuntu, in which case I was able to get to the Ubuntu installer, but it wouldn't recognize Windows 8 on my computer. So instead of continuing on, I rebooted and removed the USB pen drive, only to find my computer couldn't find Windows 8. After a little fiddling about I got it to find Windows 8 again. Any ideas on how I should go about trying to install a distro on my computer?

    UPDATE: So I ended up installing Fedora using Legacy mode. To use both it and Windows at boot, I manually enter automatic repair so I can get to my UEFI settings and switch the boot mode to UEFI to boot Windows 8. I guess my question needs to be modified to: how do I get all of this to work in UEFI mode, so I can dual boot via selection through a bootloader, and not by repeatedly switching boot modes?

    Read the article

  • No sound out of headphone port on laptop

    - by Thanatos
    I cannot get sound out of the headphone port on a laptop. Headphones are plugged in, and sound comes out of the internal speakers. Windows behaves normally (sound switches to headphones when headphones are inserted). It did work in Linux at one point, but something changed, and we're just not sure what. Rebooting doesn't fix it. This appears to occur whether or not PulseAudio is running.

    Things I've tried:

    - Rebooting. No effect.
    - Booting into Windows. It works properly, so probably not a hardware issue.
    - All of alsamixer. My only controls are these:
        "Master": volume bar, mutable; unmuted. Controls volume.
        "PCM": volume bar only. 100%.
        "S/PDIF": mutable only, currently muted; has no effect.
        "S/PDIF Default PCM": mutable only, currently unmuted; has no effect.
    - Killing PulseAudio. No effect. (It also won't stay dead! Something appears to be restarting it, and I can't tell what, but it is annoying as fuck.)
    - alsactl init 0, no effect.
    - sudo rm -f /var/lib/alsa/asound.state, no effect.

    General system info:

    - Ubuntu 10.04 LTS
    - Toshiba Satellite T135D-S1324
    - lspci says I have:
        00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA)
        01:05.1 Audio device: ATI Technologies Inc RS780 Azalia controller

    Some edits: Yes, the headphones are in all the way. This works in Windows: you plug headphones in, the internal speakers stop making noise, and noise comes out the headphones. Windows says I only have two sound cards: the HDMI port (which I don't care about) and the "sound card", which it claims is a "Conexant Pebble High Definition SmartAudio". In Windows, both the internal speakers and the headphone jack show up as one sound card, which in my experience is typical. (This is a laptop.)

    Read the article

  • Looking to get a small server – need web, PHP, PostgreSQL.

    - by Javawag
    Hi all! I'm looking to get a cheap (low-end) server to serve web pages (XHTML/PHP), but I also need to be able to set up PostgreSQL on the system too. Ideally the server would have low power consumption, run Linux (I prefer Mac OS X, but a Mac Mini, although the size I'm looking for, is too much money!) and cost around £100 (~$160 US).

    EDIT: Just to make it clearer, I'm looking to purchase the server hardware myself, but I want something about Mac Mini sized. I don't want to pay for hosting!

    Also, a quick question: if it's to serve web pages from my home (standard ISP connection, no static IP!), what do I need in place to get this working? I'm guessing I would sign up with some service like No-IP, and register a domain to point to my No-IP address (then install the No-IP software on the server to update it with the current IP). I know the idea of running a server behind a normal ISP connection isn't very elegant, but I'd prefer to have the server where I can see it than pay over the odds for a hosting service where I have little to no control over what happens. Also, I could write my own server software for apps etc. to connect to as well. Anyway, I'm rambling! What do you guys think?! Javawag

    Read the article

  • Raspberry pi slows down my entire network

    - by gnusouth
    Whenever my Raspberry Pi is connected to the network (via Ethernet), the entire network is slowed to a crawl. On my main computer, ping times for google.com go from ~10ms to ~200ms and it takes forever to load web pages. Connections are also slow on the Pi, with an apt-get update showing pathetic speeds on the order of 1 KB/s. Turning off the Pi completely removes the drag from the network.

    I've tried static and dynamic IP addresses for the Pi, but both have the same problems. I'm currently using Raspbian (downloaded today), but also had this problem with Arch Linux. I've checked the connection's duplex with dmesg | grep -i duplex, which shows that the Pi's connection is running at 100Mbps, full-duplex, as expected.

    My modem/router is a Billion 7404VNPX (an Australian thing); relatively high-end, albeit a bit buggy at times (it will occasionally delete all its firewall settings). It assigns IPs in the range 192.168.1.1 to 192.168.1.20 and has 192.168.1.254 as its own IP. When I assign static IPs I tend to use the 192.168.1.200 area.

    Does anyone have any idea as to what could be causing this weird slowdown? Or any tests I could try? Thanks

    Read the article

  • Ping with explicit next-hop selection (aka Monitoring multiple default gateways)

    - by Michuelnik
    I have a Linux (Debian) router with two internet connections, (A) and (B). (A) is preferred, (B) is fallback. I want to monitor the internet connection (and not only the availability of the gateways!) and change the default route appropriately:

    1. If (A) is not providing internet, switch to (B).
    2. If (A) is providing internet again, switch back to (A).

    The only problem I have is in case (2). My routing table points towards a working internet, so I cannot easily detect whether the internet is working over link (A) again. I am searching for a ping or traceroute (or other diagnostic tool) which can select the next hop explicitly. ping -r looks promising, but can only ping a host on the LAN. (It only has to write another destination address in the packet, damnit!) traceroute -g gateway looks even more promising and nearly does what I want, but it sets source-routing options which my next hops deny. (Not within my administrative boundary...)

    I just want a ping that can:

    - select a source interface (and address)
    - select a next hop on that interface
    - ping any arbitrary IP address

    I could do evil trickery with policy-based routing, but that would have production impact for all users. I would like to see a side-effect-free solution...
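
    One low-impact way to test "is the internet reachable through gateway A" while the default route points at B is a /32 host route to a dedicated probe address: only that single destination is forced through A, so production traffic is untouched. A sketch with placeholder addresses:

      GW_A=192.0.2.1      # next-hop on link (A)
      PROBE=8.8.8.8       # an address you are happy to ping, reached only via (A)

      ip route replace "$PROBE/32" via "$GW_A"
      if ping -c 3 -W 2 "$PROBE" > /dev/null; then
          echo "link (A) provides internet again"
      fi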

    Read the article

  • What's the easiest route to trying out Mono 2.6?

    - by E J
    We have several web applications built on Microsoft technologies (ASP.NET + MVC framework, built using VS2008, MS SQL Server). I have recently been playing with Ubuntu (9.10), installed using Wubi, and wanted to see if I can get our apps running on a FOSS software stack. I have got the hang of the very basics of PostgreSQL, and I have read that there is some support for LINQ to SQL in Mono (as of 2.6) as well as ASP.NET/MVC. However, I am unsure how to go about getting Mono 2.6 up and running. Here is what I have discovered so far:

    - Ubuntu is not meant for the 'cutting edge'; it is designed to be stable, hence it sometimes takes a release cycle or two for new software to make it to the repositories.
    - Mono is already installed by default, but it is likely to stay at version 2.4 for at least the 10.04 release.
    - You can install parallel environments of Mono, if you know what you're doing.

    I have had a go at setting up parallel environments, but haven't had any luck yet. (And TBH I am not certain that that will do what I think it's going to do.)

    (tl;dr start here) Is there a distribution of Linux similar enough to Ubuntu that I wouldn't have to start the learning curve all over again, but that will let me install Mono 2.6, PostgreSQL (and possibly MonoDevelop 2.4)? Or should I persist with Ubuntu?

    Read the article

  • Installing sphinx on a web hosting server

    - by fiftyeight
    I want to install Sphinx search on a web-hosting server. I'm on a Linux VPS with HostGator, but I have never actually installed anything on a remote server, so this will be a first for me. If there's anyone here who has installed Sphinx, it would really help me. I had some problems when using Sphinx on my PC with the permissions and the MySQL files; eventually I got it working on my PC. Anyway, I'd be really grateful if anyone can help me with some questions:

    1. Do I need root access to install Sphinx? I have root access to the server, but I'd connect to it as a normal user, since doing stuff as root is always less secure.
    2. Can anyone tell me as what user I need to execute the indexer and the search daemon? Should I use root access for this? When I did it as a normal user on my PC, it gave me some trouble with the PID file and the log files.
    3. The last time I executed the search daemon, I executed it as a normal user and it gave me some trouble. I created the folder /var/log/ for log files and did chmod 777 on it, but still, when I executed the search daemon it created the PID file "searchd.pid" but with no permissions for some reason. Any idea why?

    Thanks in advance. fiftyeight
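
    On the root question: root is only needed if searchd has to write to root-owned locations or bind a privileged port. With the PID file and logs pointed somewhere the shell user can write, both indexer and searchd run fine unprivileged. A sketch of the relevant searchd section (the paths are assumptions; anything under the account's home directory works, created beforehand with mkdir -p by the same user that will run searchd):

      searchd
      {
          listen     = 127.0.0.1:9312
          log        = /home/youruser/sphinx/log/searchd.log
          query_log  = /home/youruser/sphinx/log/query.log
          pid_file   = /home/youruser/sphinx/log/searchd.pid
      }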

    Read the article

  • Ubuntu and Windows and Separate HDs, oh my!

    - by LuxuryMode
    Need some major help. Running a Dell XPS/Dimension 630i. It came with "SATA 2 RAID 0 With Dual 500GB Hard Drives." I have installed a new, third non-raided drive and installed Ubuntu on it. So now I have Windows on the original hard drive and Ubuntu Linux on the new HD. When I get to the boot menu where I can select an OS, if I select windows I get an error: "No such drive, no such disk." Also, strangely in the first place, in order to even get to the bootloader menu I have had to disable ALL ports under the RAID config. Unless I do this, I will just get to a never-ending blinking cursor. I have tried every conceivable CMOS config and nothing else works. Tried setting port 3 (the new HD w/ Ubuntu) to first hard disk boot priority. Tried disabling all other ports and enabling the Ubuntu HD port and vice versa. Here's a pic of the error I get when I try to boot to Windows: http://imgur.com/TJ1mS. Also, please note that I can actually access all files from the raided Windows drive through Ubuntu. (Someone suggested just reinstalling windows from installation CD. Agree?)

    Read the article

  • Simple Distributed Disconnected way to sync a directory

    - by Rory
    I want to start regularly backing up my home directory on my Ubuntu laptop, machine X. Suppose I have access to two different remote (Linux) servers that I can back up to, machines A and B. Machine X will be the master, and should be synced to A and B. I could just regularly run rsync from X to A and then from X to B; that's all I need. However, I'm curious if there's a more bandwidth-efficient, and hence faster, way to do it.

    Assuming X is going to be on residential-style broadband lines, and since I don't want to soak up the bandwidth, I would limit the transfer from X. A and B will be on all the time; however, X will not be, so I'd also like to reduce the amount of time that X is transferring, potentially allowing A and B to spend more time transferring. Also, X won't be connected all the time.

    What's the best way to do this? rsync from X to A, then from A to B? Timing that right could be troublesome. I don't want to keep old files around, so if I were to rsync, the --del option would be used. Could that mean something might get transferred from A to B, then deleted from B, then transferred from A to B again? That's suboptimal. I know there are fancy distributed filesystems like Gluster, but I think that's overkill in this case, and it might not fit with the disconnected nature.
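
    For the plain X-to-A and X-to-B case, the bandwidth-friendly pieces are already built into rsync: --bwlimit caps the upload from the laptop and --delete keeps the mirrors exact. A sketch of the simple variant (host names, paths and the limit in KB/s are placeholders):

      rsync -az --delete --bwlimit=100 ~/ backup@hostA:backups/home/
      rsync -az --delete --bwlimit=100 ~/ backup@hostB:backups/home/

    The chained X-to-A, then A-to-B variant halves the laptop's upload time and avoids the timing worry if the second hop is simply triggered once the first command returns, e.g. ssh hostA 'rsync -az --delete backups/home/ backup@hostB:backups/home/'.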

    Read the article

  • The munin plugin always times out

    - by haoX
    I want to use munin to make a graph of ttyACM0 in Linux, but munin cannot create the graph. I found some information in "munin-node.log"; it shows that "Service 'temperature' timed out". So I changed the timeout to 60 or 120 in /munin/plugin-conf.d/munin-node, but it does not work; it still times out. Here is part of my code:

      if [ "$1" = "config" ]; then
          echo 'graph_title Temperature of board'
          echo 'graph_args --base 1000 -l 0'
          echo 'graph_vlabel temperature(°C)'
          echo 'graph_category temperature'
          echo 'graph_scale no'
          echo 'graph_info This graph shows the temperature of board'
          for i in 1 2 3 4 5; do
              case $i in
                  1) TYPE="Under PCB" ;;
                  2) TYPE="HDD" ;;
                  3) TYPE="PHY" ;;
                  4) TYPE="CPU" ;;
                  5) TYPE="Ambience" ;;
              esac
              name=$(clean_name $TYPE)
              if [ "$TYPE" != "NA" ]; then
                  echo "temp_$name.label $TYPE"
              fi
          done
          exit 0
      fi

      for i in 1 2 3 4 5; do
          case $i in
              1) TYPE="Under PCB"
                 VALUE=$(head -1 /dev/ttyACM0 | awk '{print $1}') ;;
              2) TYPE="HDD"
                 VALUE=$(head -1 /dev/ttyACM0 | awk '{print $2}') ;;
              3) TYPE="PHY"
                 VALUE=$(head -1 /dev/ttyACM0 | awk '{print $3}') ;;
              4) TYPE="CPU"
                 VALUE=$(head -1 /dev/ttyACM0 | awk '{print $4}') ;;
              5) TYPE="Ambience"
                 VALUE=$(head -1 /dev/ttyACM0 | awk '{print $5}') ;;
          esac
          name=$(clean_name $TYPE)
          if [ "$TYPE" != "NA" ]; then
              echo "temp_$name.value $VALUE"
          fi
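
    One plausible cause of the timeout (an assumption, but consistent with the symptom): the values loop opens /dev/ttyACM0 five separate times, and each head -1 blocks until the board emits a complete line, so a board that prints once a second already costs several seconds per run, more if the port re-initialises on every open. Reading the line once and splitting it avoids that. A sketch of the values part only, keeping the plugin's existing clean_name helper:

      line=$(head -1 /dev/ttyACM0)    # a single read covers all five sensors
      i=0
      for TYPE in "Under PCB" "HDD" "PHY" "CPU" "Ambience"; do
          i=$((i + 1))
          VALUE=$(echo "$line" | awk -v n="$i" '{print $n}')
          name=$(clean_name "$TYPE")
          if [ "$TYPE" != "NA" ]; then
              echo "temp_$name.value $VALUE"
          fi
      done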

    Read the article

  • krenew command not working: Permission Denied

    - by prathmesh.kallurkar
    I am using a Linux server to perform my simulations. The login and the file system of the server are protected using Kerberos; the file system is served over NFS. Since my simulations take a lot of time to run, my ssh sessions used to hang regularly, so I have started running my simulations in byobu (similar to screen). In order to make sure that my Kerberos session remains active, I am using the krenew command. I have entered the following commands in my .bash_profile file (I am sure that it is called for every login):

      killall -9 krenew 2> /dev/null
      krenew -b -t -K 10

    So every time I ssh to the server, I kill the existing krenew command. Then I spawn a new krenew command with -b (which runs it in the background), -t (I forgot why I was using this option!), and -K 10 (it must run every 10 minutes and refresh the Kerberos cache). When I run the simulations, they run for 14 hours and then suddenly I get a Permission Denied error when reading a file. Is the command that I am running incorrect?
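
    One detail worth noting: each new ssh login runs the killall, which also kills the krenew that was renewing the ticket cache of a byobu session started earlier, and the replacement krenew renews the new login's cache (a different KRB5CCNAME), not the old one. Tying krenew to the job itself sidesteps both problems, since it exits when the command exits and nothing has to kill it. A sketch, run inside byobu (the script name is a placeholder):

      krenew -t -K 10 ./run_simulation.sh

    It is also worth confirming with klist that the ticket's renewable lifetime ("renew until") extends past the 14-hour run; krenew can only renew up to that limit, however often it fires.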

    Read the article

  • Centos repository packages vs latest developer release

    - by fran
    I have started to run a personal server using CentOS, and I have noticed that many packages that are available to install from the repository are old compared with the latest release from the developer. I know that installing packages from the repository is very easy, and I guess that the supplied versions are stable and prepared to work without any trouble, but I still find it odd having so much software that lags behind the current version. It's my first time with Linux and I don't know what the "normal" thing is: should I stick to whatever version the repository supplies, or try to get the latest from the developer?

    To be more precise, the repository supplies the Apache httpd web server at version 2.2. I wanted to update to 2.4, so I started removing Apache and the dependency packages that come with CentOS in order to use the latest ones, but when I was about to remove pcre v6 to replace it with v8, I found out that 132 installed packages depend on it and that it is probably not a good idea to remove it. That made me think twice about getting the latest software instead of using the packages supplied by the official repositories. Should I leave things as they are instead of going on an upgrade rampage? Thanks

    Read the article

  • mysql_query missing during installation

    - by Arsenal
    Hi, I'm trying to install the pdo_mysql extension. I managed to install it successfully, but ever since I upgraded MySQL to 5.1.34 (using RPM packages) it seems to have stopped working, so I tried to reinstall it. However, it seems to fail at ./configure, which gives a 'mysql_query not found' error:

      configure:3961: checking for mysql_query in -lmysqlclient
      configure:3991: gcc -o conftest -g -O2 -I/usr/local/include/php -Wl,-rpath,/usr/lib/mysql -L/usr/lib/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc conftest.c -lmysqlclient -rdynamic -L/usr/lib/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc >&5
      /usr/bin/ld: skipping incompatible /usr/lib/mysql/libmysqlclient.a when searching for -lmysqlclient
      /usr/bin/ld: skipping incompatible /usr/lib/mysql/libmysqlclient.a when searching for -lmysqlclient
      /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../libmysqlclient.so when searching for -lmysqlclient
      /usr/bin/ld: skipping incompatible /usr/lib/libmysqlclient.so when searching for -lmysqlclient
      /usr/bin/ld: cannot find -lmysqlclient
      collect2: ld returned 1 exit status
      configure:3997: $? = 1
      configure: failed program was:
      | /* confdefs.h. */
      ...

    In that file there seems to be a mysql_query(); statement. I'm pretty sure mysql_query works, however, since all of my websites are running normally. However, the current setup is a mess (previous students kind of messed it up) and there is a whole lot of libmysqlclient copies in /etc:

      libmysqlclient.so.10.0.0
      libmysqlclient.so.12.0.0
      libmysqlclient.so.14.0.0
      libmysqlclient.so.15.0.0
      libmysqlclient.so.16.0.0
      libmysqlclient_r.so.10.0.0
      libmysqlclient_r.so.12.0.0
      libmysqlclient_r.so.14.0.0
      libmysqlclient_r.so.15.0.0
      libmysqlclient_r.so.16.0.0

    And just as many symlinks. Does anyone know how to get this right? Many thanks! (Oh, and no, pecl install pdo_mysql doesn't get me any further.) I'm running CentOS 4 with PHP 5.2.9 compiled from source and MySQL 5.1.34.
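
    The "skipping incompatible" lines are the linker rejecting 32-bit copies of libmysqlclient while building 64-bit code, so the real question is where the 5.1.34 RPMs put the 64-bit client library (on x86_64 it usually lands under /usr/lib64/mysql). A sketch of how one might locate it and point the build at it; the package names are assumptions:

      rpm -qa | grep -i mysql                               # is a -devel package installed at all?
      find /usr -name 'libmysqlclient.so*'                  # where did the new RPMs put the library?
      ./configure --with-pdo-mysql=/usr/bin/mysql_config    # let mysql_config supply the right -L path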

    Read the article

  • User-unique .vimrc file for servers as root user

    - by Scott
    I'm getting thrown into an IDE war at the office, where multiple users have root access on our servers and like to have everything their own way with Vim. Unfortunately, we have our servers locked down enough that if you want to do anything, you need root access. Although this is obviously frowned upon, we get tired of typing sudo before each command, which would require us to constantly type in the wonderfully complex passwords that are mandated on us over and over again, so naturally we all just execute sudo su - upon login to avoid all of this. Of course, when it comes to Vim and custom .vimrc files, we are often stepping on someone else's custom .vimrc file, and these files contain some whacked-out functionality that may override behaviour we have no idea about, much less the patience to learn. When we're working as root on a Linux box, is there any way for all of us to still maintain our own .vimrc files without having to overwrite the file over and over again every time someone wants to use Vim? Ideally, we have many virtual machines all with Vim installed, so a universal solution across all servers would be best, and we do have our Microsoft Windows user-specific home directories mounted on the servers under /home/username. Any recommendations for accommodating this?
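
    Since everyone lands in the same root shell, one way to keep per-person settings is to leave each admin's .vimrc in their own home directory and have root's shell pick the correct one based on who actually logged in on the terminal. logname usually still reports the original user after sudo su - (unlike $SUDO_USER, which su - discards), though that is worth verifying on these servers. A sketch for root's ~/.bashrc:

      # pick up the invoking admin's vimrc; fall back to root's own if the lookup fails
      real_user=$(logname 2>/dev/null)
      if [ -n "$real_user" ] && [ -f "/home/$real_user/.vimrc" ]; then
          alias vim="vim -u /home/$real_user/.vimrc"
          alias vi="vim -u /home/$real_user/.vimrc"
      fi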

    Read the article
