Search Results

Search found 36619 results on 1465 pages for 'damn small linux'.

Page 281/1465

  • Best practice for scaling a single application source to multiple nodes

    - by Andrew Waters
    I have an application which needs to scale horizontally to cover web and service nodes (at the moment they're all on one) but interact with the same set of databases and source files (both application code and custom assets). The database is no problem; it's already handled with replication in MongoDB. Also, the configuration of the servers is the same (100% Linux). This question is literally about sharing a filesystem between machines so that its content is always correct, regardless of the node accessing it. My two thoughts so far have been NFS and SAN: SAN being prohibitively expensive, and NFS seeing some performance issues on the second node with regard to glob()ing in PHP. Does anyone have recommended strategies or other techniques that don't involve sharding data across nodes, or any potential gotchas in NFS that may cause slow disk seek times? To give you an idea of the scale, the main node initialises its application modules in ~0.01 seconds; the secondary takes ~2.2 seconds. They're VMs inside a local virtual network in ESXi, and ping time between them is ~0.3 ms.
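
    Since the slowdown shows up around glob() (lots of stat/readdir round trips over NFS), client-side attribute caching is usually the first knob to turn. A minimal fstab sketch for the secondary node; the server name, export path and cache lifetimes are assumptions, not values from the question:

        # /etc/fstab on the secondary node (example values only)
        # actimeo caches file attributes, lookupcache caches directory lookups,
        # nocto relaxes close-to-open consistency; drop 'ro' if this node must write
        filer:/srv/app  /var/www/app  nfs  ro,noatime,actimeo=60,lookupcache=all,nocto  0  0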

    Read the article

  • DHCPOFFER delay VLAN

    - by john883
    I have configured two VLANs [15 and 16] and a trunk port on a Cisco Catalyst 2960. The trunk port is connected to eth2 on a Linux server. The server is configured to support VLANs, and the interfaces eth2.15 and eth2.16 are configured with IP addresses on two different subnets. dhcp3-server runs on the same server and hands out IP addresses to the VLANs. When I connect a client to a port configured in, say, VLAN 15 and request an IP address, I experience a long delay before receiving a DHCPOFFER, around 30 seconds or so; the client needs to send a DHCPDISCOVER about five times, but will always receive a DHCPOFFER eventually. Any suggestion why this delay is happening?
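
    A delay of roughly 30 seconds on a freshly connected access port is the classic symptom of spanning tree sitting in its listening/learning states before it starts forwarding. If that turns out to be the cause here, enabling PortFast on the edge ports is the usual fix; a sketch, with the interface number being an example:

        ! Catalyst 2960, access port for a client in VLAN 15
        interface FastEthernet0/5
         switchport mode access
         switchport access vlan 15
         spanning-tree portfast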

    Read the article

  • breaking mdadm raid and moving to NTFS

    - by daveyt
    I'm running Ubuntu 8-something and my data is on a mirrored pair of 1 TB disks formatted as ext3; the RAID is via mdadm. I want to move to Windows 7 (yeah yeah, I know, but Linux isn't doing it for me at the moment) and migrate the disks to NTFS. My plan is: break the mdadm RAID (by logically failing one disk), format the 'failed' disk as NTFS, copy the data from the RAID array to the NTFS disk (I don't care about permissions), then install Windows on a new, separate non-RAID disk, with my data disk available alongside it. I've researched this and it seems the easiest way. I don't have another disk to back up to, so I think this is my only option. Can anyone see a better or easier way?
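
    A minimal sketch of the sequence described; the device names /dev/md0 and /dev/sdb1 are assumptions, so check /proc/mdstat for the real ones first:

        mdadm --manage /dev/md0 --fail /dev/sdb1      # logically fail one mirror half
        mdadm --manage /dev/md0 --remove /dev/sdb1    # take it out of the array
        mkfs.ntfs -f -L data /dev/sdb1                # quick NTFS format (needs ntfsprogs/ntfs-3g)
        mount -t ntfs-3g /dev/sdb1 /mnt/ntfs
        rsync -rtv /mnt/raid/ /mnt/ntfs/              # permissions won't carry over to NTFS anyway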

    Read the article

  • Virtual Hosting in RHEL5

    - by Kumar P
    We have an RHEL5 Linux server with a few Windows XP clients. We do web development in PHP, and my developers have asked for a common local PHP server so they can keep their projects in one place. A proxy server and Samba sharing are already running on the RHEL5 machine, and I have installed httpd, PHP and MySQL on it. I would now like to configure virtual hosting for the LAN as well; what do I need to do? The server has two Ethernet ports, one for the local network and one for the Internet, which is provided over ADSL (the 192.168.0.0 range is used for the ADSL modem connection and 10.1.1.0 for the LAN). If I want to use virtual hosting, do I need to set up a local DNS server? My requirement is to set up PHP with MySQL for local clients, with multiple virtual hosts, without disturbing the proxy and Samba. Please help me solve this.
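
    Name-based virtual hosts in Apache cover the "multiple hosting" requirement without touching the proxy or Samba, and a full DNS server is not strictly required: the development hostnames can go into each client's hosts file or be served by something lightweight like dnsmasq. A minimal sketch, where the LAN IP, hostnames and document roots are assumptions:

        # /etc/httpd/conf.d/dev-vhosts.conf (example names and paths)
        NameVirtualHost 10.1.1.1:80

        <VirtualHost 10.1.1.1:80>
            ServerName project1.lan
            DocumentRoot /var/www/project1
        </VirtualHost>

        <VirtualHost 10.1.1.1:80>
            ServerName project2.lan
            DocumentRoot /var/www/project2
        </VirtualHost>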

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check for bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know why a) the file system is modified at all and b) why this seems to happen every time I check, and not only in case of an error (like bad blocks)? Here's the output:

        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks
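
    For comparison, the bad-block scan can be run without giving e2fsck anything to write back (the -c switch has e2fsck update the filesystem's bad-block list, which is one plausible source of the "modified" message) by splitting the two steps; a sketch using the same device name as above:

        badblocks -sv /dev/sdx1     # read-only surface scan, just reports any bad blocks
        e2fsck -fn /dev/sdx1        # forced check; -n keeps the filesystem read-only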

    Read the article

  • Global hotkeys: songbird on KDE

    - by alpha1
    I'm running Songbird on openSUSE 11.2 with KDE 4.3.1 on my EEE PC. On Windows there is a hotkey feature inside Songbird, so I set Meta+F9 through Meta+F12 as media keys and they work just fine. On Linux there is no hotkey feature in Songbird, and I would like to set up those same hotkeys. I've played around with the Amarok hotkeys, which are now set up that way, and looked through all the KDE shortcuts, but cannot find a way to add a new program and new hotkeys. I know it's possible (I did it once before), but the KDE shortcut tools have changed and I no longer see the options I used. I'd like to do the same for Banshee at some point, but Songbird is the important program. Any ideas? Any way to bind those keys to generic media buttons?
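
    One desktop-agnostic fallback for a player with no built-in hotkey support is xbindkeys, which binds key combinations to arbitrary commands. A minimal sketch of ~/.xbindkeysrc; the songbird-remote command is purely a placeholder, since it is not clear what command-line remote (if any) Songbird exposes:

        # ~/.xbindkeysrc  (start 'xbindkeys' with the session)
        # 'songbird-remote' is a placeholder, not a real Songbird command
        "songbird-remote --play-pause"
            Mod4 + F9
        "songbird-remote --next"
            Mod4 + F12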

    Read the article

  • Virtual OS using same Wallpaper as Host

    - by Jeff
    Greetings, I'm running a guest Linux OS on top of Windows XP, which rotates its wallpapers using the PowerToy wallpaper changer. I'm hoping for a way for my guest OS to somehow detect which wallpaper the host is using and automatically switch to it. Why? Because if I run my guest OS in seamless mode with transparent windows, I want the transparent background to match the host OS. It looks nice that way :). A couple of relevant tidbits: the guest OS is Peppermint Ice (Ubuntu based), the host OS is Windows XP, and the virtualization software is VirtualBox. I realize this somewhat breaks the border between host and guest, but I want my pretty rotating wallpaper! I'm guessing there is a way using scripts and shared folders or something similar, if not by simply querying the host OS.
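
    A shared folder plus a small polling script in the guest is probably the simplest route: let the host drop (or rotate) its current wallpaper into a VirtualBox shared folder, and have the guest re-apply it whenever the file changes. A rough sketch of the guest side; the mount point and the use of feh as the wallpaper setter are assumptions:

        #!/bin/sh
        # Re-apply the host's wallpaper whenever the shared copy changes (paths are examples)
        WALL=/media/sf_wallpaper/current.jpg
        LAST=""
        while true; do
            STAMP=$(stat -c %Y "$WALL" 2>/dev/null)
            if [ -n "$STAMP" ] && [ "$STAMP" != "$LAST" ]; then
                feh --bg-fill "$WALL"    # or whatever Peppermint's desktop uses to set wallpaper
                LAST=$STAMP
            fi
            sleep 10
        done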

    Read the article

  • Can't figure out how to make Slitaz USB persistent

    - by Dennis Hodapp
    I installed SliTaz on my USB stick. However, I can't figure out how to make it persistent automatically. Different sources tell me different ways to do it. One told me to add "slitaz home=usb" to the syslinux.cfg file, like this: append initrd=/boot/rootfs.gz rw root=/dev/null vga=normal autologin slitaz home=usb, but it didn't work for me. http://www.slitaz.org/en/doc/handbook/liveusb.html gives an example of how to do it manually, but I haven't tried it, and I want it to happen automatically anyway. custompc.co.uk/features/602451/make-any-pc-your-own-with-linux-on-a-usb-key.html is an older article that also explains how to make the USB persistent, but I don't want to try it because it looks outdated (from 2008). Does anyone know the best way to make the USB automatically persistent?

    Read the article

  • How do I configure networking on CentOS 6 running on hyper-v?

    - by LonelyLonelyNetworkN00b
    I'm not using legacy adapters, and I've installed Linux Integration Components 3.2. The problem I'm facing is that neither 'setup' nor 'system-config-network' lists any network interfaces. If I run ifconfig -a I can see both of the network cards I've attached, and by setting an IP with ifconfig I can get network connectivity; the problem is that it doesn't persist after a reboot. I'm a 100% CentOS newbie, but I figure it has something to do with the CentOS installer not being able to see the NICs at install time. How can I fix this?
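
    On CentOS the persistent configuration lives in per-interface files under /etc/sysconfig/network-scripts, which the installer would normally have written; if the NICs were invisible at install time those files simply don't exist, and they can be created by hand. A minimal static example, with all addresses as placeholders:

        # /etc/sysconfig/network-scripts/ifcfg-eth0  (example values)
        DEVICE=eth0
        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=192.168.1.50
        NETMASK=255.255.255.0
        GATEWAY=192.168.1.1

    After that, 'service network restart' should bring the interface up, and the settings survive reboots.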

    Read the article

  • df command shows no output

    - by user119720
    I'm running a Linux distro on my server. When I want to check the disk space, I issue this command:

        df -h

    But it produces no output at all. Strangely enough, when I issue other commands such as fdisk -l or du -h, they show output normally. Does anyone know why this is happening? Thanks.

    Edit: here is the output of cat /etc/fstab:

        none /dev/pts devpts rw 0 0

    and this is from the mount command:

        none on /dev/pts type devpts (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

    Edit (2): here is the output of cat /proc/mounts:

        /dev/vzfs / vzfs rw,relatime,usrquota,grpquota 0 0
        proc /proc proc rw,relatime 0 0
        sysfs /sys sysfs rw,relatime 0 0
        none /dev tmpfs rw,relatime 0 0
        none /dev/pts devpts rw,relatime 0 0
        none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
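
    The vzfs root in /proc/mounts points at an OpenVZ/Virtuozzo container, and df reads /etc/mtab rather than /proc/mounts; if /etc/mtab has ended up nearly empty (which the very short mount output above also suggests, since mount reads the same file), df prints nothing even though the kernel's mount table is fine. A hedged way to confirm and repair:

        cat /etc/mtab                            # compare with /proc/mounts
        grep -v rootfs /proc/mounts > /etc/mtab  # rebuild mtab from the kernel's view
        df -h                                    # should now list the vzfs filesystem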

    Read the article

  • Wine not finding some files

    - by Levans
    I'm having a strange issue with Wine: if I look at C:\windows\system32\drivers\ in the Wine explorer, the directory looks empty, while the directory ~/.wine/drive_c/windows/system32/drivers is not. On top of that, with the H: drive mapped to my home directory, I can look at H:\.wine\drive_c\windows\system32\drivers and it is not empty; the files are there, so it seems Wine has the rights to access them. So why don't they appear on the C: drive? Some of my programs need them. I'm using Gentoo Linux, and Wine is version 1.7.0 compiled with these USE flags (from eix): X alsa cups fontconfig gecko jpeg lcms ldap mono mp3 ncurses nls openal opengl perl png prelink run-exes ssl threads truetype udisks xcomposite xinerama xml -capi -custom-cflags -dos -gphoto2 -gsm -gstreamer -odbc -opencl -osmesa -oss -pulseaudio -samba -scanner -selinux -test -v4l ABI_MIPS="-n32 -n64 -o32" ABI_X86="32 64 -x32" ELIBC="glibc" EDIT: I just updated to Wine 1.7.4 and nothing changed.
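
    It can help to pin down whether the mismatch is in Wine's path mapping or only in the explorer view; a few harmless diagnostics using standard Wine tooling (nothing here is specific to this particular bug):

        ls -la ~/.wine/drive_c/windows/system32/drivers         # what is really on disk
        winepath -w ~/.wine/drive_c/windows/system32/drivers    # how Wine maps that path back to a DOS path
        wine cmd /c dir 'C:\windows\system32\drivers'           # what programs inside Wine actually see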

    Read the article

  • setting up bridged adapter for VPN server

    - by B. VB.
    I have an Ubuntu Linux Linode server that I am trying to install OpenVPN on. I'm following the tutorials (which, it turns out, are quite incomplete).

        auto br0
        iface br0 inet static
            address 192.168.0.10
            network 192.168.0.0
            netmask 255.255.255.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            bridge_ports eth0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

    When I add this chunk to my /etc/network/interfaces and restart networking, my eth0 interface no longer has an IP and I cannot get on the network (and I have to use a buggy, slow and annoying AJAX terminal to repair the damage). Why does adding this break everything? Any tips on how to set up this bridged adapter? Thanks in advance!
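
    One thing to note about the stanza above: once eth0 is enslaved to a bridge, the address has to live on br0, and 192.168.0.10 is a private address while a Linode normally carries a single public IP on eth0, so the host loses its connectivity. A sketch of a bridge stanza that keeps the machine reachable by moving eth0's existing addressing onto the bridge (the DHCP choice is an assumption; use 'inet static' with eth0's original public details if that is how it was configured):

        # /etc/network/interfaces (sketch)
        auto br0
        iface br0 inet dhcp
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0

        # eth0 carries no address of its own once it is a bridge port
        iface eth0 inet manual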

    Read the article

  • Routing application traffic through specific interface

    - by UnicornsAndRainbows
    Hello all! First question here, so please go easy: I have a Debian Linux 5.0 server with two public interfaces. I would like to route outbound traffic from one instance of an application via one interface and from the second instance through the second interface. There are some challenges: both instances of the application use the same protocol, both instances can access the entire internet (so I can't route based on destination network), and I can't change the application's code. I don't think a typical approach to load-balancing all traffic will work well, because relatively few destination servers are accessed by the outbound traffic, and the traffic would really need to be distributed fairly evenly across those few servers. I could probably run two virtualized servers on the box and bind each of them to a different external IP, but I'm looking for a simpler solution, maybe using iproute or iptables? Any ideas for me? Thanks in advance, and I'm happy to answer any questions.
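
    If the two instances can run as two different system users, the iptables owner match plus a policy-routing rule gives exactly this split without touching the application. A sketch; the username, table number, gateway and device names are all assumptions:

        # mark packets generated by the second instance's user
        iptables -t mangle -A OUTPUT -m owner --uid-owner app2 -j MARK --set-mark 2

        # route marked packets out via the second interface
        ip rule add fwmark 2 table 102
        ip route add default via 203.0.113.1 dev eth1 table 102

        # optionally rewrite the source address to the second public IP
        iptables -t nat -A POSTROUTING -o eth1 -m mark --mark 2 -j SNAT --to-source 203.0.113.10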

    Read the article

  • Where can I configure wireless to be passed on to the Virtualbox guest?

    - by huahsin68
    I have Windows XP installed in VirtualBox, with Linux as the host. I have a TP-Link (TP-WN321G) USB wifi adapter and have the driver installed inside Windows XP. When I plug in the wifi adapter, an option labelled "Ralink 802.11g WLAN [0101]" shows up under VirtualBox's USB icon; when I tick that option, Device Manager detects the hardware and shows it as TP-Link, but when I look into its properties it says no driver is installed. I did try installing the Ralink driver, but still no luck. Just curious: why is my wifi adapter a TP-Link while the option shows Ralink? And how can I get the wireless network working inside Windows XP?

    Read the article

  • Finding proof of server being compromised by Black Hole Toolkit exploit

    - by cosmicsafari
    I recently took over maintenance of a company server (Just Host, cPanel, Linux); there are a tonne of websites on it which I know nothing about. It came to my attention that a client had attempted to access one of the websites hosted on this server and was met with a warning from Windows Defender. It blocked access because, it said, the website had been compromised by the Black Hole Toolkit, or something to that effect. Anyway, I went in, updated various plugins and deleted some old suspect websites. I have since run the website in question through a few online malware scanners, and it comes up clean every time. However, I'm not convinced. Does anyone know more thorough ways I can check that the server isn't still compromised? I have no way to install any malware scanners or antivirus programs on the server, as it is horribly locked down by Just Host.
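
    Even on a locked-down shared host, a fair amount can be checked with nothing more than shell access and standard tools; a few hedged examples of the kind of sweep that tends to surface injected PHP (the paths and the 30-day window are assumptions):

        # PHP files modified recently, often a sign of re-infection after a cleanup
        find ~/public_html -name '*.php' -mtime -30 -ls

        # common obfuscation patterns used by injected code
        grep -rlE 'eval\(base64_decode|gzinflate\(base64_decode' ~/public_html

        # rogue .htaccess rewrites pointing visitors at exploit-kit landing pages
        find ~/public_html -name .htaccess -exec grep -l RewriteRule {} \;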

    Read the article

  • PostgreSQL disaster recovery options

    - by Alex
    My customer has quite a large PostgreSQL database (the total "data" folder size is 200G) and we are working on a disaster recovery plan. We have identified three different types of disaster so far: hardware outage, too much load, and unintentional data loss due to an erroneously executed bad migration (like DELETE or ALTER TABLE DROP COLUMN). The first two types seem easy to mitigate, but we can't come up with a good mitigation plan for the third. I proposed using ZFS and frequent (hourly) snapshots, but "ZFS" means "OpenIndiana" these days and our ops engineers do not have much expertise in it, so using OpenIndiana introduces another risk. Colleagues try to convince me that restoring from a PostgreSQL PITR backup can be as fast as restoring from a ZFS snapshot, but I highly doubt that replaying, say, 50G of archived WALs can be considered "fast". What other options are we missing? Is ZFS the only viable alternative? Can we get a fast Pg DB restore time in a Linux environment?
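
    For the third scenario, PostgreSQL's point-in-time recovery does let replay stop just before the bad migration, so the restore time is really the time to replay WAL since the last base backup, which argues for frequent base backups rather than abandoning PITR; on Linux, LVM snapshots of the data directory are the closest analogue to the ZFS idea. A sketch of the PITR pieces, with paths and the timestamp as assumptions:

        # postgresql.conf on the primary: ship WAL continuously
        archive_mode = on
        archive_command = 'cp %p /backup/wal/%f'

        # recovery.conf on the restore host: stop just before the bad migration ran
        restore_command = 'cp /backup/wal/%f %p'
        recovery_target_time = '2012-06-01 13:59:00'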

    Read the article

  • Remote Control Home PC from Corporate Work PC

    - by muncherelli
    Here is my situation: I am currently on a Windows XP workstation at work. I have an Android tablet that I use to Splashtop into my home PC. I would like to be able to use my work keyboard and mouse to control my home PC while I am Splashtop'd into it from the tablet. My work PC is on a corporate LAN, not on the same network as my tablet. The company I work for provides wifi for personal devices, but it has no access to the internal network. I thought about going the Synergy route, but that would require my home PC to be able to connect to my work PC, which isn't really possible. The opposite would work, though, if I could reverse-connect the server to the client, but the Synergy software doesn't really support that. I do have a couple of Linux boxes running at home, so I can SSH into my home network and tunnel ports over SSH if needed. With what I have, how can I accomplish seamless keyboard and mouse sharing between my work PC and either my home PC or my Android tablet?
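
    Since SSH out to the home network works, an SSH reverse tunnel can stand in for the reverse connection Synergy lacks: the work PC (running the Synergy server) opens the tunnel, and the Synergy client at home connects to its own end of it. A sketch, assuming Synergy's default port 24800, placeholder hostnames, and that the SSH login lands on the same machine that will run the client (otherwise sshd's GatewayPorts setting comes into play):

        # on the work PC (Synergy server), e.g. via plink/PuTTY on Windows XP
        ssh -R 24800:localhost:24800 user@home-box

        # on the home machine, point the Synergy client at the tunnelled port
        synergyc localhost:24800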

    Read the article

  • Plesk directory structure problems

    - by johnnietheblack
    I have an entire website with the following directory structure:

        /example.com
            /html            (public)
                /css
                /js
                index.php
            /lib
                session.php
                other_lib_files.php
            /views
                index.php
            /models
            /controllers

    As illustrated, the html directory is public, and anything above it is private. My site now needs to move to a new server (Linux with Plesk), which has the following structure (reduced to the problematic parts below):

        /myplesksite.com
            /httpdocs
                /css
                /js
                index.php
            /private
                /lib
                /models
                /views

    What I would THINK is that I should be able to put my /lib, /views, /models, etc. in the directory directly above /httpdocs, the same way I had it on my previous server. Is that possible? Or do I have to put them in /private? I would really love not to have to adjust my internal paths throughout the site if it isn't necessary...
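
    If the old layout can't be reproduced exactly, one low-churn option is to make every internal include relative to a single base-path constant, so moving /lib, /views and /models under /private means changing one line rather than paths throughout the site. A tiny sketch; the constant name and file locations are assumptions:

        <?php
        // config.php, living inside httpdocs: go one level up, then into /private
        define('APP_BASE', dirname(__DIR__) . '/private');

        // elsewhere, instead of a hard-coded '../lib/session.php':
        require APP_BASE . '/lib/session.php';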

    Read the article

  • What does dd conv=sync,noerror do?

    - by dding
    So in what case does adding conv=sync,noerror make a difference when backing up an entire hard disk to an image file? Is conv=sync,noerror a requirement when doing forensic work? If so, why, with reference to Linux (Fedora)? Edit: OK, so if I do dd without conv=sync,noerror and dd encounters a read error while reading a block (let's say the block size is 100M), does dd just skip that 100M block and read the next one without writing anything (dd with conv=sync,noerror writes zeros for the 100M of output, so what happens in this case)? And is the hash of the original hard disk different from that of the output file when done without conv=sync,noerror? Or is that only when a read error has occurred?
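
    For context, this is the typical imaging invocation under discussion: noerror tells dd to keep going after a read error (which would otherwise abort the copy), and sync pads each failed or short read with NUL bytes so every later byte keeps its original offset in the image. A sketch, with the device and block size as examples:

        # without noerror, dd stops at the first unreadable block;
        # without sync, a short read shifts everything after it in the image
        dd if=/dev/sdb of=/evidence/disk.img bs=4096 conv=sync,noerror

    If the source has no read errors the options change nothing and the hashes match; once errors occur, the zero padding (or, without sync, the shifted offsets) means the image no longer hashes the same as the source.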

    Read the article

  • fail2ban on server with LXC Containers

    - by RoboTamer
    The issue is that modprobe and iptables don't work inside an LXC container. LXC is the userspace control package for Linux Containers, a lightweight virtual system mechanism sometimes described as "chroot on steroids". The iptables error inside the container is:

        # iptables -I INPUT -s 122.129.126.194 -j DROP
        iptables v1.4.8: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
        Perhaps iptables or your kernel needs to be upgraded.

    I am guessing it can't work because the LXC containers share one kernel, the main server's kernel. How do I run fail2ban in this case? modprobe and iptables do work on the main server, so I could install it there and link to the log files somehow, I guess? Any suggestions?
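
    The usual workaround is to run fail2ban on the host and point its jails at the containers' log files through their rootfs paths, since the resulting bans are enforced by the host's netfilter anyway. A sketch of a jail.local entry; the container name and log path are assumptions:

        # /etc/fail2ban/jail.local on the host
        [ssh-web1]
        enabled  = true
        filter   = sshd
        action   = iptables[name=SSH-web1, port=ssh, protocol=tcp]
        logpath  = /var/lib/lxc/web1/rootfs/var/log/auth.log
        maxretry = 5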

    Read the article

  • Kernel appears to have no modules

    - by George Reith
    Useful info: OS: CentOS 5.8 final, kernel: 2.6.32-042stab056.8. My kernel came prebuilt with the server. I don't know anything about kernels and not a lot about Linux, but as far as I know I should have some modules loaded by the kernel. I came across this problem because I am unable to run iSCSI, as it expects certain modules to be loaded. lsmod returns nothing, and depmod -a returns:

        WARNING: Couldn't open directory /lib/modules/2.6.32-042stab056.8: No such file or directory
        FATAL: Could not open /lib/modules/2.6.32-042stab056.8/modules.dep.temp for writing: No such file or directory

    I have rebooted and nothing has changed. Does anyone know why this is happening?
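
    The -042stab version string is the naming used by OpenVZ/Virtuozzo kernels, which suggests this "server" is actually a container: inside a container there is no /lib/modules tree, lsmod shows nothing, and modules (including the iSCSI ones) can only be loaded on the host node. A couple of quick, harmless checks for that hypothesis:

        uname -r                        # a -042stab... suffix is an OpenVZ/Virtuozzo kernel build
        ls /lib/modules/                # typically absent or empty inside a container
        cat /proc/user_beancounters     # present (for root) only on OpenVZ kernels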

    Read the article

  • "Installing" GD for PHP

    - by gbuckingham89
    I'm new to server administration and Linux, and have just got a VPS running CentOS 6. Apache, MySQL and PHP all came installed (along with cPanel and WHM), and I'm now also trying to install the GD library. I've run "yum install php-gd" and it installed OK. If I run it again I get "Package php-gd-5.3.2-6.el6_0.1.x86_64 already installed and latest version". However, when I do a phpinfo() or run "php -m" from the command line, there is no mention of GD. Is there anything else I need to do?
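
    Two things usually explain this on a CentOS/cPanel box: the web server hasn't been restarted since the extension was installed, or Apache's PHP was built by EasyApache rather than from the distribution packages, in which case yum's php-gd isn't the PHP that httpd loads and GD has to be enabled through EasyApache instead. Some quick checks, hedged as the generic CentOS case:

        php -m | grep -i gd                    # is the module visible to the CLI PHP at all?
        ls /etc/php.d/                         # yum's php-gd normally drops a gd.ini here
        php -i | grep 'Loaded Configuration'   # confirm which php.ini is actually read
        service httpd restart                  # phpinfo() only sees new extensions after a restart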

    Read the article

  • Build of expect v5.43 fails with Tcl v8.5.8

    - by E Brown
    Hi, I'm trying to build "expect" from source (v5.43), using Tcl built from source (v8.5.8), on Red Hat Linux. Tcl built fine, but my attempt to build expect fails: I run configure, then make, which gives me the error `TCL_REG_BOSONLY' undeclared when compiling exp_inter.c. I did some digging around and found the TCL_REG_BOSONLY value defined in the Tcl file tclInt.h, but there is no #include for that in exp_inter.c. My question is: can "expect" be built from source against Tcl 8.5.8, or does it require an earlier version? Version 5.43 is the latest "expect" I can find, and the current Tcl version is 8.5.8, but something doesn't seem compatible between the two. Any help appreciated.
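
    Expect relies on Tcl's private headers, so its configure script has a switch for pointing at the Tcl source tree (where tclInt.h lives) rather than only the installed headers; whether that alone is enough for 5.43 against 8.5.8 is not certain, but it is the first thing to try. A sketch, with the paths as assumptions:

        cd expect-5.43
        ./configure --with-tcl=/usr/local/lib \
                    --with-tclinclude=/path/to/tcl8.5.8/generic
        make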

    Read the article

  • How useful is mounting /tmp noexec?

    - by Novelocrat
    Many people (including the Securing Debian Manual) recommend mounting /tmp with the noexec,nodev,nosuid set of options. This is generally presented as one element of a defence-in-depth strategy, preventing the escalation of an attack that lets someone write a file, or an attack by a user with a legitimate account but no other writable space. Over time, however, I've encountered arguments (most prominently by Debian/Ubuntu developer Colin Watson) that noexec is a useless measure, for a couple of potential reasons: the user can run /lib/ld-linux.so <binary> in an attempt to get the same effect, and the user can still run system-provided interpreters on scripts that can't be executed directly. Given these arguments, the potential need for more configuration (e.g. debconf likes an executable temporary directory), and the potential loss of convenience, is this a worthwhile security measure? What other holes do you know of that enable circumvention?
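
    For reference, this is the kind of fstab entry being debated, together with the two bypasses mentioned; whether the ld.so trick still works depends on the kernel and glibc in use, so treat it as illustrative:

        # /etc/fstab: the commonly recommended hardening for /tmp
        tmpfs  /tmp  tmpfs  defaults,noexec,nosuid,nodev  0  0

        cp /bin/ls /tmp/ls && /tmp/ls    # direct execution fails with "Permission denied"
        /lib/ld-linux.so.2 /tmp/ls       # bypass 1: the loader, which works on some systems
        sh /tmp/script.sh                # bypass 2: interpreters ignore noexec on the script file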

    Read the article

  • Watch Filesystem in Real Time on OS X and Ubuntu

    - by Adrian Schneider
    I'm looking for a CLI tool which will watch a directory and spit out the names of files that change, in real time:

        some_watch_command /path/to/some/folder | xargs some_callback

    I'm aware of inotify (inotify-tools?) and it seems to be what I need, but I need something that is compatible with both Linux (in my case Ubuntu) and OS X. It doesn't need to be lightning fast, but it does need to trigger on changes (within a second is reasonable). Also, I don't necessarily need the exact CLI program sketched above; if some underlying tech exists and is easily scriptable on both platforms, that would be great too.
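
    fswatch is one tool in this space that targets both platforms (FSEvents on OS X, inotify on Linux) and fits the pipeline shape described; a hedged usage sketch, keeping the callback from the question as the placeholder it is:

        # print the path of each changed file as it changes
        fswatch /path/to/some/folder | xargs -n1 some_callback

        # or, with -o, emit one line per batch of changes instead of per file
        fswatch -o /path/to/some/folder | xargs -n1 -I{} some_callback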

    Read the article
