Search Results

Search found 9136 results on 366 pages for 'dev drupal'.

Page 319/366

  • OpenVPN access to a private network

    - by Gior312
    There are many similar topics about my issue, however I cannot figure out a solution for myself. There are three hosts: A, without a routable address but with Internet access; server S, with a routable Internet address; and host B, behind NAT in a private network. What I've managed to do is an OpenVPN connection between A and B via S. Everything works fine so far according to this manual: VPN Setup. What I want to do is connect A to B's private network 10.A.B.x. I tried this manual but had no luck. So A has a VPN address 10.9.0.10, B's VPN address is 10.9.0.6 and B's private network is 10.20.20.0/24. When, at the server, I try to make a route to B's private network like this: sudo route add 10.20.20.0 netmask 255.255.255.0 gw 10.9.0.6 dev tun0 it says "route: netmask 000000ff doesn't make sense with host route", but I don't know how to tell the server to look for the private network in a different way. Do you know how I can make it right?
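
    A likely cause of the "doesn't make sense with host route" error is that route was not told the target is a network, so it treated 10.20.20.0 as a single host and rejected the netmask. A hedged sketch of the two usual forms, assuming the addresses above:

        # tell route explicitly that this is a network route
        sudo route add -net 10.20.20.0 netmask 255.255.255.0 gw 10.9.0.6 dev tun0

        # equivalent iproute2 form
        sudo ip route add 10.20.20.0/24 via 10.9.0.6 dev tun0

    Note that for OpenVPN itself to forward traffic for a subnet behind a client, the server config normally also needs a matching route/iroute entry for B's network; that part is not shown here.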

    Read the article

  • 2008 R2 Server Starts Then Fails to Initialize DNS

    - by ThaKidd
    Got a weird situation going on. Background: I just transferred Active Directory from a 2003 server to 2008 R2. I removed the 2003 server from AD, but I have not raised the Active Directory functional level to 2008 yet. After the transition a problem started. Whenever I reboot the server and log in, the DNS Server service is "stopping". After a few minutes it finishes and I can restart it at that point. Once it is restarted, all services come up. Now, I did try to install Hyper-V (this is a dev server, by the way). After the reboot for Hyper-V, everything was broken, as in I could not ping anything. I uninstalled it and was back to the DNS server issue. I messed with IPv6 settings (which I am not using) and the problem was resolved for a bit. I also installed an Intel Pro/1000 card and had a bit of success with DNS; then it failed again. The weird thing is, outside of an error in syslog stating that the DNS server failed to start, there is no specific error generated in either the System or DNS Server logs. Ideas are much appreciated! Thanks in advance.
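
    If the DNS Server service is simply racing Active Directory and the network stack at boot, one common mitigation (a workaround rather than a root-cause fix) is to set the service to delayed automatic start, for example:

        sc config DNS start= delayed-auto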

    Read the article

  • krenew command not working : Permission Denied

    - by prathmesh.kallurkar
    I am using a Linux server to perform my simulations. The login and the file system of the server are protected using Kerberos. The file system is served over NFS. Since my simulations take a lot of time to run, my ssh sessions used to hang regularly, so I have started running my simulations in byobu (similar to screen). In order to make sure that my Kerberos session remains active, I am using the krenew command. I have entered the following commands in my .bash_profile file (I am sure that it is called for every login): killall -9 krenew 2> /dev/null krenew -b -t -K 10 So every time I ssh to the server, I kill the existing krenew command. Then I spawn a new krenew command with -b (which runs it in the background), -t (I forgot why I was using this option!), and -K 10 (it must run every 10 minutes and refresh the Kerberos cache). When I run the simulations, a run goes for about 14 hours and then suddenly I get a Permission Denied error when reading files. Is the command that I am running incorrect?
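
    One thing worth checking is the ticket's renewable lifetime: krenew can only renew a ticket up to the "renew until" time granted when the ticket was obtained, so a job that loses access after roughly 14 hours often just means the ticket expired rather than the krenew invocation being wrong. A hedged way to inspect and extend it (the 7-day value is only an example; the KDC caps what it will actually grant):

        # show the ticket's expiry and "renew until" times
        klist

        # request a ticket with a longer renewable lifetime, then keep it refreshed
        kinit -r 7d
        krenew -b -K 10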

    Read the article

  • less maximum buffer size?

    - by Tyzoid
    I was messing around with my system and found a novel way to use up memory, but it seems that the less command only holds a limited amount of data before stopping/killing the command. To test, run (careful! uses lots of system memory very fast!) $ cat /dev/zero | less From my testing, it looks like the command is killed after less reaches 2.5 gigabytes of memory, but I can't find anything in the man page that suggests that it would limit it in such a way. In addition, I couldn't find any documentation via the google on the subject. Any light to this quite surprising discovery would be great! System Information: Quad core intel i7, 8gb ram. $ uname -a Linux Tyler-Work 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux $ less --version less 458 (GNU regular expressions) Copyright (C) 1984-2012 Mark Nudelman less comes with NO WARRANTY, to the extent permitted by law. For information about the terms of redistribution, see the file named README in the less distribution. Homepage: http://www.greenwoodsoftware.com/less $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04 LTS Release: 14.04 Codename: trusty
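
    For what it's worth, less does document a way to cap how much of a pipe it buffers: by default it allocates buffers as needed for piped input, and -B disables that automatic allocation, at the cost of only keeping the most recent data around. A hedged example:

        # limit less to its fixed buffer space instead of growing without bound
        cat /dev/zero | less -B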

    Read the article

  • Stepmania + KDE4 = sound problem

    - by picca
    I cannot manage to get KDE4 + StepMania working. If I run StepMania I always get: StepMania 3.9 Log starting 2010-12-24 14:52:48 Loading window: gtk OS: Linux ver 020636 Crash backtrace component: x86 custom backtrace Crash lookup component: dladdr Crash demangle component: cxa_demangle Runtime library: glibc 2.11.2 Threads library: NPTL 2.11.2 TLS is available ALSA: Advanced Linux Sound Architecture Driver Version 1.0.23. ALSA Driver: 0: HDA ATI SB [SB], device 0: STAC92xx Analog [STAC92xx Analog], 0/1 subdevices avail ALSA Driver: 0: HDA ATI SB [SB], device 1: STAC92xx Digital [STAC92xx Digital], 1/1 subdevices avail Couldn't load driver ALSA: dsnd_pcm_open(hw:0): Device or resource busy Mixing 0.000000 ahead in 0 Mix() calls Couldn't load driver ALSA-sw: dsnd_pcm_open(hw:0): Device or resource busy Mixing 0.000000 ahead in 0 Mix() calls Couldn't load driver OSS: RageSound_OSS: Couldn't open /dev/dsp: Device or resource busy Language: english Theme: default Error: Couldn't find a sound driver that works I found that in StepMania/Data/StepMania.ini I should add the following line: SoundDevice=default That enables me to run StepMania, but I don't have any sound, which is pretty bad for an application like this one. I'm quite sure the problem is that Phonon is blocking the audio device that StepMania needs to access directly. I think I could fix this by running a lighter window manager than KDE4, but that is not a solution for an occasional Linux user. Is there any chance of getting StepMania completely working under KDE4?
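
    The repeated "Device or resource busy" lines suggest another process (most likely the desktop's sound server) is holding hw:0 while StepMania tries to open it directly. A hedged pair of checks/workarounds, assuming a standard desktop setup:

        # see which process currently holds the sound devices
        fuser -v /dev/snd/* /dev/dsp

        # if PulseAudio turns out to be the culprit, suspend it for the duration of the game
        pasuspender -- stepmania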

    Read the article

  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup: private network vboxnet1 10.0.7.0/24, 1 host (Ubuntu desktop), 1 VM (Ubuntu server, VirtualBox). Addressing layout: HOST: 10.0.7.1 VM: 10.0.7.101 VM MAC NAMESPACE: 10.0.7.102 On the VM, I ran the following commands: ip netns add mac # create a new namespace ip link add link eth0 mac0 type macvlan # create a new macvlan interface ip link set mac0 netns mac In the mac namespace, inside the VM: ip link set lo up ip link set mac up ip addr add 10.0.7.102/24 dev mac0 So that we basically end up with (like Inception?): +------------------------+ | Host: 10.0.7.1 | | | | +--------------------+ | | | VM: 10.0.7.101 | | | | | | | | +----------------+ | | | | | NS: 10.0.7.102 | | | | | | | | | | | +----------------+ | | | +--------------------+ | +------------------------+ What works: ping between Host and VM, ping between NS and NS, dhclient from NS. What does not work: ping between NS and VM, ping between NS and Host. Where I started to go nuts: tcpdump on the host (the real machine) actually shows ARP requests AND replies; tcpdump on the NS shows ARP requests sent to the host; tcpdump on the VM makes the whole mess work (!) -- ping starts to get answers when tcpdump is started on the VM ?!? So, I bet you were eager for it, my question is: how do I make it work? I suspect something's wrong with ARP on the macvlan inside the NS but can't figure out what exactly... By the way, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
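
    Two macvlan details may be relevant here, offered as hedged suggestions rather than a confirmed fix. First, macvlan interfaces default to VEPA mode and the parent interface can never talk directly to its own macvlan children, so "NS cannot ping VM" is expected behaviour; creating the interface in bridge mode at least lets macvlan endpoints on the same parent reach each other:

        ip netns add mac
        ip link add link eth0 mac0 type macvlan mode bridge
        ip link set mac0 netns mac
        ip netns exec mac ip link set lo up
        ip netns exec mac ip link set mac0 up
        ip netns exec mac ip addr add 10.0.7.102/24 dev mac0

    Second, the fact that running tcpdump on the VM makes pings work is a strong hint: tcpdump puts eth0 into promiscuous mode, so frames addressed to the macvlan's separate MAC start being accepted. Setting the VirtualBox adapter's Promiscuous Mode to "Allow All" for vboxnet1 may achieve the same thing permanently.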

    Read the article

  • Difficulty restoring a differential backup in SQL Server, 2 media families are expected or no files are ready for rollforward

    - by digiguru
    I have SQL backups copied from server A to server B on a nightly basis. We want to move the SQL server from server A to server B without much downtime, but the files are very large. I assumed that performing a differential backup and restore would solve the problem with the databases. Copy the full backup from server A to server B (10+ GB). Open SQL Server Management Studio on server B. Right mouse on Databases, Restore Database. Type in the new DB name. Choose "From Device" and browse to the backup file. Click Okay. This is now restoring the original "full" backup. Test the new db with the dev application - everything works :) On the original database right mouse on the DB, Tasks, Backup... Backup Type = Differential, Backup to disk, add a new file, and remove the old one (it needs to be a small file to transfer for the smallest amount of outage). Copy the diff backup onto the new db. Right mouse on the DB, Tasks, Restore Database. This is where I get stuck. If I add both the new differential file and the original backup to the restore process I get an error: The media loaded on "M:\path\to\backup\full.bak" is formatted to support 1 media families, but 2 media families are expected according to the backup device specification. RESTORE HEADERONLY is terminating abnormally. But if I try to restore using just the differential file I get System.Data.SqlClient.SqlError: The log or differential backup cannot be restored because no files are ready to rollforward. (Microsoft.SqlServer.Smo) Any idea how to do it? Is there a better way of restoring backups with limited downtime?
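
    For the record, the usual sequence for this kind of cutover is to restore the full backup WITH NORECOVERY (so the database stays ready to roll forward) and only then apply the differential WITH RECOVERY; restoring the full backup the normal way first is exactly what produces the "no files are ready to rollforward" error. A hedged T-SQL sketch, with placeholder names and paths:

        RESTORE DATABASE [NewDB]
            FROM DISK = N'D:\backups\full.bak'
            WITH NORECOVERY, REPLACE;

        RESTORE DATABASE [NewDB]
            FROM DISK = N'D:\backups\diff.bak'
            WITH RECOVERY;

    The "2 media families are expected" message is a separate issue: it usually means the original full backup was striped across two backup files, so every file of that media set has to be listed in the same restore statement.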

    Read the article

  • What should I encrypt in Debian during install?

    - by ianfuture
    I have seen various guides and recommendations on the web about how best to do this, but nothing that clearly explains the best way and why. I understand there is a need for part of Debian to be left unencrypted on its own partition during install to allow it to boot; most info I have seen says to call this /boot and set the boot flag. Next, I believe the best approach is to create another partition out of all the rest of the disk space, encrypt it, then on top of that create an LVM and within the LVM create my various partitions, name them, and select a size and file system type for each. Can I include swap in the encrypted LVM part? Is this approach sound? If so, what are the partitions I should use (this is going to be a minimal server install, with a view to installing things as and when I need them for a dev server)? Finally, how does the installer know what to put in each partition I define? I appreciate there is more than one question here, but any help and suggestions would be appreciated. If further clarification is needed please mention it in the comments. Thanks.. Ian
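
    For what it's worth, the layout described (small cleartext /boot, one big LUKS container, LVM inside it) is essentially what the Debian installer's guided "encrypted LVM" option produces, and swap as a logical volume inside the encrypted LVM is fine. A hedged sketch of one possible minimal-server layout, with purely illustrative sizes:

        /dev/sda1   /boot   ext2, ~300 MB, unencrypted, boot flag set
        /dev/sda2   LUKS container holding LVM volume group "vg0"
            vg0/root    /       ext4, 10-20 GB
            vg0/swap    swap    1-2x RAM
            vg0/var     /var    remainder (or split out /home as needed)

    The installer knows what goes where because each partition or logical volume is assigned a mount point during partitioning; packages then land under whichever mount point their paths fall inside.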

    Read the article

  • How to make quicksilver remember custom trigger

    - by corroded
    I am trying to make a custom trigger for my shell/apple script file to run so I can just launch my dev environment at the push of a button. So basically: I have a shell script(and some apple script included) in ~ named start_server.sh which does 3 things: start up solr server start up memcached start up script/server I have a saved quicksilver command(.qs) that opens up start_server.sh(so start_server.sh, then the action is "Run in Terminal") I created a custom trigger that calls this saved qs command. I did that then tested it and it works. I then tried to double check it so I quit quicksilver and when I checked the triggers it just said: "Open (null)" as the action. I set the trigger again and when i restarted QS the same thing happened again. I don't know why but my old custom trigger to open terminal has worked since forever so why doesn't this one work? Here's a screenie of the triggers after I restart QS: http://grab.by/4XWW If you have any other suggestion on how to make a "push button" start for my server then please do so :) Thanks! As an added note, I have already tried the steps on this thread but to no avail: http://groups.google.com/group/blacktree-quicksilver/browse_thread/thread/7b65ecf6625f8989

    Read the article

  • Strange File-Server I/O Spikes - What Is Causing This?

    - by CruftRemover
    I am currently having a problem with a small Linux server that is providing file-sharing services to four Windows 7 32-bit clients. The server is an AMD PhenomX3 with two Western Digital 10EADS (1TB) drives, attached to a Gigabyte GA-MA770T-UD3 mainboard and running Ubuntu Server 10.04.1 LTS. The client machines are taking an extremely long time to access/transfer data on the file server. Applications often become non-responsive while trying to open files located remotely, or one program attempting to open a file but having to wait will prevent other software from accessing network resources at all. Other examples include one image taking 20 seconds or more to open, and in one instance a user waited 110 seconds for Microsoft Word 2007 to save a document. I had initially thought the problem was network-related, but this appears not to be the case. All cables and switches have been tested (one cable was replaced) for verification. This was additionally confirmed when closing down all client machines and rebooting the server resulted in the hard-drive light staying on solid during the startup process. For the first 15 minutes during boot, logon and after logging on (with no client machines attached), the system displayed a load average of 4 or higher. Symptoms included waiting several minutes for the logon prompt to appear, and then several minutes for the password prompt to appear after typing in a user name. After logon, it also took upwards of 45 seconds for the 'smartctl' man page to appear after the command 'man smartctl' was issued. After 15 minutes of this behaviour, the load average dropped to around 0.02 and the machine behaved normally. I have also considered that the problem is hard-drive-related, however diagnostic programs reveal no drive problems. Western Digital DLG, Spinrite and SMARTUDM show no abnormal characteristics - the drives are in perfect health as far as the hardware is concerned. I have thus far been completely unable to track down the cause of this problem, so any help is greatly appreciated. Requested Information: Output of 'free' hxxp://pastebin.com/mfsJS8HS (stupid spam filter) The command 'hdparm -d /dev/sda1' reports: HDIO_GET_DMA failed: Inappropriate ioctl for device (the BIOS is set to AHCI - I probably should have mentioned that).
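
    Since the drive light stays on and the load spike happens with no clients attached, it may still be worth ruling out a background disk job or a quietly struggling drive, even though the vendor diagnostics came back clean. A hedged set of read-only checks:

        cat /proc/mdstat                 # any md RAID resync/check in progress?
        smartctl -a /dev/sda | less      # reallocated/pending sector counts, error log
        iostat -x 5                      # which device shows high await/%util during a spike
        dmesg | grep -iE 'ata|error'     # kernel-level disk errors or bus resets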

    Read the article

  • How to improve Samba performance on VirtualBox machine?

    - by ColinM
    I am running a Windows 7 64-bit host and an Ubuntu 9.04 32-bit guest inside VirtualBox 4.0.0 on a laptop which has internet connectivity via WiFi. The main use is writing code, for which I use NetBeans. My dev environment is the virtual machine, and I use Samba on the VM to share the code directory so that I can use NetBeans on the host as my IDE. Unfortunately NetBeans does a lot of disk access, and due to the poor Samba performance it makes the IDE hardly usable. How can I improve the performance of the Samba share? On my desktop it isn't so bad, but I don't know what the difference would be since they are similar setups (Win 7 hosts, cloned guests, SSDs, VBox guests using SATA in AHCI mode, etc.). With bridged networking, is the performance between the host and guest limited by the physical hardware (Intel 6200 AGN on the laptop)? I switched to host-only and it didn't seem to improve performance at all. To clarify "bad performance": I used 7-Zip to zip a project directory and got 19 KB/s to 500 KB/s depending on the size of the files being zipped. On my desktop it was in the ~10 MB/s range. Any tips for VirtualBox/Samba configuration to improve the performance? I am using Samba 3.3.2. Hopefully Samba with SMB2 support will be released soon..
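
    A few smb.conf tweaks are commonly suggested for this kind of small-file, chatty workload; they are worth trying but are no guarantee, since the virtualised network path itself adds latency. A hedged example for the [global] section of Samba 3.3:

        socket options = TCP_NODELAY IPTOS_LOWDELAY
        read raw = yes
        write raw = yes
        use sendfile = yes
        aio read size = 16384
        aio write size = 16384

    Switching the VM's network adapter type to the paravirtualised virtio-net device in the VirtualBox settings may also help more than any Samba tuning, assuming the Ubuntu 9.04 guest has the virtio drivers available.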

    Read the article

  • Audio server with best API?

    - by Wintermute
    I'm a web dev, working in a small studio with a couple of other devs and some crayon-munchers (or, "designers"). Like all the best and trendiest creative studios, we have tunes. Our tunes consists of a set of speakers that whoever wants to can plug into their machine, and DJ their little socks off via iTunes, Spotify, VLC or whatever their music player of choice happens to be. Obviously, this lacks finesse. What we WANT is this: a single, dedicated machine running some sort of audio player (ideally Win-based, but a Linux flavour isn't impossible), that exposes an API. We (ie: me and the other devs) want to write a web-based client onto it, that'll let us remotely do all sorts of funky stuff like generating on-the-fly genre-based playlists, and voting for tracks, and making tea. My question - and please forgive me if this isn't the place for such a question, I was going to ask on Stackoverflow but that didn't seem right either - is this: what's the best player to start with? What can do all of this? I know VLC can function as a streaming server, but know nothing of any API it may have. I'd rather chop my pinky off than use iTunes, but if it does what we want, then... Anyhow, thanks for reading. All comments and suggestions gratefully received.

    Read the article

  • How to set a static route for an external IP address

    - by HorusKol
    Further to my earlier question about bridging different subnets - I now need to route requests for one particular IP address differently to all other traffic. I have the following routing in my iptables on our router: # Allow established connections, and those !not! coming from the public interface # eth0 = public interface # eth1 = private interface #1 (10.1.1.0/24) # eth2 = private interface #2 (129.2.2.0/25) iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -m state --state NEW ! -i eth0 -j ACCEPT iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A FORWARD -i eth0 -o eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT # Allow outgoing connections from the private interfaces iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT # Allow the two private connections to talk to each other iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT iptables -A FORWARD -i eth2 -o eth1 -j ACCEPT # Masquerade (NAT) iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE # Don't forward any other traffic from the public to the private iptables -A FORWARD -i eth0 -o eth1 -j REJECT iptables -A FORWARD -i eth0 -o eth2 -j REJECT This configuration means that users will be forwarded through a modem/router with a public address - this is all well and good for most purposes, and in the main it doesn't matter that all computers are hidden behind the one public IP. However, some users need to be able to access a proxy at 192.111.222.111:8080 - and the proxy needs to identify this traffic as coming through a gateway at 129.2.2.126 - it won't respond otherwise. I tried adding a static route on our local gateway with: route add -host 192.111.222.111 gw 129.2.2.126 dev eth2 I can successfully ping 192.111.222.111 from the router. When I trace the route, it lists the 129.2.2.126 gateway, but I just get * on each of the following hops (I think this makes sense since this is just a web-proxy and requires authentication). When I try to ping this address from a host on the 129.2.2.0/25 network it fails. Should I do this in the iptables chain instead? How would I configure this routing?

    Read the article

  • Same native and tagged vlan possible on Redhat?

    - by Chris Phillips
    Hi guys and gals, I'm looking at implementing a system using a number of tagged VLANs and a native VLAN connected to a server over an active/passive bonded interface. The untagged VLAN is for physical machine access; the tagged VLANs are connected to bridges and then to QEMU VMs inside the machine. Hopefully this plan is fine, but I'm trying to implement a crippled version of it in a dev environment, due to a lack of underlying network config in this location, where I just have the same single VLAN delivered to the machine both tagged AND plain. I'm not clear if this is going to work (and whether I should just be confident that it will work using different VLANs), as I'm seeing odd things like a VM ARPing out over the VLAN to the core switch, but the ARP reply coming back on the untagged interface. Now, an ARP reply is unicast, right? So it's a deliberate thing to send the ARP response on the untagged interface, and not a case of a broadcast response not being passed on the tagged side... i.e. there's some underlying logic pushing it that way. Something about the MACs somehow? This is on a CentOS 5.5 machine, VLANs from vconfig. (I've seen reference to the Linux mac-vlan project work, but that's not available here by default.) So: 1) Should having the SAME VLAN tagged and untagged work? 2) Will different tagged VLANs alongside the untagged interface work nicely and easily?

    Read the article

  • Strange ssh login

    - by Hikaru
    I am running a Debian server and I have received a strange email warning about an ssh login. It says that user mail logged in using ssh from a remote address: Environment info: USER=mail SSH_CLIENT=92.46.127.173 40814 22 MAIL=/var/mail/mail HOME=/var/mail SSH_TTY=/dev/pts/7 LOGNAME=mail TERM=xterm PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games LANG=en_US.UTF-8 SHELL=/bin/sh KRB5CCNAME=FILE:/tmp/krb5cc_8 PWD=/var/mail SSH_CONNECTION=92.46.127.173 40814 my-ip-here 22 I looked in /etc/shadow and found out that the password for mail is not set: mail:*:15316:0:99999:7::: I found these lines for the login in auth.log: Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): getting password (0x00000388) Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): pam_get_item returned a password Jun 3 02:57:09 gw sshd[2091]: pam_winbind(sshd:auth): user 'mail' granted access Jun 3 02:57:09 gw sshd[2091]: Accepted password for mail from 92.46.127.173 port 45194 ssh2 Jun 3 02:57:09 gw sshd[2091]: pam_unix(sshd:session): session opened for user mail by (uid=0) Jun 3 02:57:10 gw CRON[2051]: pam_unix(cron:session): session closed for user root and lots of auth failures for this user. There are no lines with a COMMAND string for this user. Nothing was found with rkhunter or with "ps aux" process inspection, and no suspicious connections were found with netstat (as far as I can see). Can anyone tell me how this is possible and what else should be done? Thanks in advance.
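
    Whatever the entry vector turns out to be, it is probably wise to make sure system accounts like mail cannot be used for interactive ssh logins at all. A hedged set of containment steps:

        passwd -l mail                        # lock the account's password
        usermod -s /usr/sbin/nologin mail     # remove its login shell
        # in /etc/ssh/sshd_config, whitelist who may log in at all, e.g.
        #   AllowUsers youradminuser
        # then reload ssh: service ssh restart

    The auth.log lines show pam_winbind granting access, so it is also worth checking whether winbind (i.e. a domain/AD account called mail) is supplying the credentials independently of /etc/shadow; the '*' entry only disables the local Unix password.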

    Read the article

  • Rescue system running TFS that BSODs, into vmware esxi

    - by 3molo
    Hi, After moving to new facilities, one of our old Dell servers running Windows Server 2003 R2 on PowerEdge 2650 HW BSODs with 0x8e. The server runs Team Foundation Server, so we have a few guys dependent on it. No one here knows TFS, so we have no idea how difficult it would be to setup from scratch. We have the MSSQL database(s) backed up, recent and fresh copy. Tried removing/refitting memory modules, but with no success. The system boots into safe mode but hangs occasionally. I booted a linux livecd and did a dd of both c: and d:, so I have all the data in compressed images on a vmware machine. For the guest, I created a 38G (actually it became 40GB) partition to act as C:, and booted a live cd. I then uncompressed the compressed disk image of c: and dd'd it to the new c: using 'gunzip -dc c.img.gz | dd of=/dev/sda1 bs=1M'. The operation ran for about 1000 seconds, and completed successfully. I assumed it would at least try to boot windows (but most likely BSOD due to not having correct drivers), but the Vmware ESXi guest does not seem to recognize it as a bootable disk. We don't have the vmware enterprise license, so the vmware converter cold cloning is not an option. Did I do something wrong in my dd's etc with the ISOs, or why would it not (try to) boot? Am I wasting my time? What other approach is there? Will continue to try to remove services and drivers to make the physical machine at least work reasonably well in safe mode. What do you suggest? 1. Continue to get the dd'd images to the virtual disk and get it to boot. 2. Install a new windows server, get team foundation server and restore from backup. 3. Focus on the old problematic hardware Any help appreciated
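
    One likely reason the guest does not even try to boot is that dd'ing a partition image onto /dev/sda1 restores the filesystem but not the MBR, boot code or partition table, so from the VM's point of view the virtual disk has no bootable structure at all. A hedged outline of what may be missing (details depend on the original disk layout):

        # partition the virtual disk first, matching the original layout,
        # and mark the Windows partition active
        fdisk /dev/sda

        # then restore the partition image into the new partition
        gunzip -dc c.img.gz | dd of=/dev/sda1 bs=1M

        # finally rebuild the boot code from a Windows Server 2003 CD:
        # boot the CD, enter the Recovery Console and run
        #   fixmbr
        #   fixboot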

    Read the article

  • OpenVPN server will not redirect traffic

    - by skerit
    I set up an OpenVPN server on my VPS, using this guide: http://vpsnoc.com/blog/how-to-install-openvpn-on-a-debianubuntu-vps-instantly/ And I can connect to it without problems. Connect, that is, because no traffic is being redirected. When I try to load a webpage when connected to the vpn I just get an error. This is the config file it generated: dev tun server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt ca ca.crt cert server.crt key server.key dh dh1024.pem push "route 10.8.0.0 255.255.255.0" push "redirect-gateway" comp-lzo keepalive 10 60 ping-timer-rem persist-tun persist-key group daemon daemon This is my iptables.conf # Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011 *raw :PREROUTING ACCEPT [37938267:10998335127] :OUTPUT ACCEPT [35616847:14165347907] COMMIT # Completed on Sat May 7 13:09:44 2011 # Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011 *nat :PREROUTING ACCEPT [794948:91051460] :POSTROUTING ACCEPT [1603974:108147033] :OUTPUT ACCEPT [1603974:108147033] -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE -A POSTROUTING -s 10.8.0.0/24 -o eth1 -j MASQUERADE -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE COMMIT # Completed on Sat May 7 13:09:44 2011 # Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011 *mangle :PREROUTING ACCEPT [37938267:10998335127] :INPUT ACCEPT [37677226:10960834925] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [35616847:14165347907] :POSTROUTING ACCEPT [35680187:14169930490] COMMIT # Completed on Sat May 7 13:09:44 2011 # Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011 *filter :INPUT ACCEPT [37677226:10960834925] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [35616848:14165347947] -A INPUT -i eth0 -j LOG --log-prefix "BANDWIDTH_IN:" --log-level 7 -A FORWARD -o eth0 -j LOG --log-prefix "BANDWIDTH_OUT:" --log-level 7 -A FORWARD -i eth0 -j LOG --log-prefix "BANDWIDTH_IN:" --log-level 7 -A OUTPUT -o eth0 -j LOG --log-prefix "BANDWIDTH_OUT:" --log-level 7 COMMIT # Completed on Sat May 7 13:09:44 2011
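
    One thing the generated config does not cover is whether IP forwarding is actually switched on in the kernel; with forwarding off, clients connect fine but none of their traffic goes anywhere, which matches the symptom. A hedged check and fix:

        # 0 means forwarding is disabled
        cat /proc/sys/net/ipv4/ip_forward

        # enable it now, and make it persistent via /etc/sysctl.conf (net.ipv4.ip_forward=1)
        echo 1 > /proc/sys/net/ipv4/ip_forward

    It may also be worth pushing "redirect-gateway def1" instead of the plain "redirect-gateway", and pushing a DNS server (e.g. push "dhcp-option DNS 8.8.8.8"), since a redirected client that can no longer reach its old resolver also shows up as web pages simply failing to load.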

    Read the article

  • Can I autoregister my servers hostname in my local DNS? [on hold]

    - by Christian Wattengård
    We have evaluated a W2k12 server as a domain controller at work. This has the extra benefit of registering every "subordinate" computer's name in its DNS, so that I don't have to go around remembering IPs all the time. (And it lets me easily run DHCP for my "pop-up" dev servers too.) We need to rework our work network for several odd reasons, and in this new scenario there was no money for an extra Windows 2012 license. We have at our disposal several old boxes that run Linux quite well. Is it possible to set up a DNS-server "appliance" where each machine somehow autoregisters its own hostname? Scenario: Router (N66U) on 172.20.20.1, running DHCP on the 172.20.20.100-200 range. Server [verdant] of a *nix flavor on 172.20.20.2. Laptop [speedy] of W8 flavor, DHCP assigned. Laptop [canary] of W8 flavor, DHCP assigned. Desktop [lianyu] of Ubuntu flavor, DHCP assigned. What I would like is for all of the above machines (except possibly the router) to be available as verdant.starling.lan, canary.starling.lan and so on. This is how it works right now (except the Ubuntu box... I haven't cracked that one yet) because Windows just does this for you. I would also like to be able to do this without any manual labor on the server: when I tell my box its name is smoak, it should "immediately" be available as smoak.starling.lan without any extra configuration on my part. How can I do this in a Linux (Ubuntu) environment?
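
    A common way to get this behaviour without a Windows DC is dnsmasq, which combines DHCP and DNS and automatically answers for the hostnames its DHCP clients supply. A hedged sketch of /etc/dnsmasq.conf for the layout described (run it on verdant, disable the N66U's own DHCP, and point clients at verdant for DNS):

        domain=starling.lan
        expand-hosts
        local=/starling.lan/
        dhcp-range=172.20.20.100,172.20.20.200,12h
        dhcp-option=option:router,172.20.20.1

    Clients that send their hostname in the DHCP request (Windows and most Linux DHCP clients do) then resolve as name.starling.lan with no per-host work; statically addressed boxes like verdant itself can be listed in /etc/hosts, which dnsmasq also serves.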

    Read the article

  • Looking for advice on using dd to backup a dual boot laptop.

    - by AvatarOfChronos
    My question boils down to this: if I do "dd if=/dev/sda of=usbdrive", can anybody confirm that this will get everything, including the MBR, partition information and all four partitions, and create a drive that I can swap with the failing internal drive without losing anything? If this is done while the computer is running, will it still copy everything? At this point I'm afraid to shut down the computer for fear of it never starting again. Secondly, how tolerant is dd of failing drives? Has anybody used it to recover a half-dead drive before who can share any potential pitfalls? Did it get the data OK, or is this going to be a hope-for-the-best kind of situation? And lastly, if the USB drive is larger than the failing internal drive, I'll still be able to expand the partitions later so I'm not losing space? This last part seems silly to ask, but with my current streak of bad luck I'll end up overwriting some magic bit and forever turning a 640 GB HDD into a 500 GB HDD. Also, if anybody has a better solution to create a complete clone that gets everything, I'm all for hearing about it. PostScript: I had been making periodic backups, however whatever miasma killed the laptop also got the NAS :( Post PostScript: both devices were on a UPS system.
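
    Two hedged notes on the plan: imaging a mounted, running system with dd gives an inconsistent copy (anything written during the copy may be only half captured), so doing the copy from a live CD/USB is safer; and for a drive that is already failing, GNU ddrescue is generally a better tool than dd because it skips ahead on read errors and retries the bad areas later instead of stalling or aborting. A sketch, assuming the internal disk is /dev/sda and the USB disk is /dev/sdb (double-check the device names first, this overwrites sdb):

        ddrescue -f -n /dev/sda /dev/sdb rescue.log    # fast first pass, skip bad areas
        ddrescue -f -r3 /dev/sda /dev/sdb rescue.log   # second pass, retry the bad areas

    A whole-disk copy like this does include the MBR and partition table, and if the destination is larger the extra space simply remains unpartitioned afterwards, so nothing is lost by going to a bigger drive.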

    Read the article

  • backup an existing linux server to a virtualbox virtual machine

    - by user146526
    I have some servers and VPSs with many companies across the world. I want to back them up locally. I have some backup solutions going to remote hosts, but I also want a local backup on a computer at home. What I am thinking is: 1) Create a VirtualBox virtual machine and install the same version of Linux as the server. 2) Use rsync to back up the server to the local VirtualBox machine (something like rsync -av --delete --progress --exclude '/dev/' --exclude '/proc/' root@server_ip:// / ). 3) Repeat the command every few days to update the files. 4) In case of a hard disk failure, or any other bad event, reverse the rsync command, get the files back and continue my business. I tried it with two OpenVZ VPSs, one being a backup of the other. I also tried transferring a normal Linux server host to an OpenVZ machine and it worked great. That approach looks pretty clean and easy to me; this is the kind of solution I am looking for. However, I need to be sure that this will work if I am going to do it. The question is, will that work OK? Does anyone see any problem with it? Do you have any other suggestions? Thanks
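
    The approach is workable; the main things to get right are the exclude list (pseudo-filesystems should not be copied) and preserving ACLs, xattrs and hard links if the copy is ever meant to be restored as a bootable system. A hedged variant of the command (the double slash in root@server_ip:// is better written as a single /):

        rsync -aAXHv --numeric-ids --delete --progress \
          --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
          root@server_ip:/ /

    Run it as root inside the backup VM; restoring is indeed the same command with source and destination swapped, ideally from a live CD so the target filesystem is not in use while it is being written.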

    Read the article

  • How to install Red Hat Enterprise Linux on Apple Macbook Pro MacBookPro4,1

    - by Todd V. Rovito
    I have a one year old Mac Book Pro that I am trying to get RHEL 5.4 installed on via bootcamp. No matter what I do I can't get the installer to boot. I have tried multiple DVD's and even verified the install works on a new Mac Book Pro. Most of the time the installer simply locks up. I usually use Linux text with all-generic-ide on the boot line. I removed the ide parameter and I just used linux text. The results I get are that a bunch of kernel messages appear then the background turns blue and a thin text box pops up saying its loading ata..... something it disappears too fast for me to read. Then the machine freezes. I pressed the alt function keys to see if I could look at the system log, here is what it says: Alt-f3 says "trying to mount CD device hda" Alt-f4 says status error: hda: lastFailedSense Hda: Failed opcode was: unknown Hda: Lost interrupt Hda: Drive not ready for command Ide-cd: command 0x3 timed out Above this junk it looks like it found the partition because it knew it was 20 GB and listed as /dev/sda3. I think it has something to do with the CD drive, is that possible? Thanks again for the support. PS I posted in the apple support forums ( Apple.com Support Discussions Boot Camp Installation and Storage) and didn't get an answer.

    Read the article

  • Virtual box host-only adapter configuration

    - by Xoundboy
    I have VirtualBox 4 running on Win 7 with a CentOS 6 guest VM set up for hosting my dev server. When I'm connected to my home network the guest can be accessed via a static IP address that I configured (192.168.56.2), but not when I'm in the office. I'm guessing that the DHCP server in the office doesn't have a gateway configured for the 192.168.56.x IP range. I read something about the VirtualBox host-only adapter that should allow me to set this guest VM up in such a way that I don't need to be on any network to be able to access the guest from the host using a static IP. I've not been able to find out exactly how to configure this, though. Can anyone give me an example configuration? Thanks. UPDATE: Thanks for your responses. I've now set up a single virtual network adapter in VirtualBox and set it to host-only: C:\Users\Ben>vboxmanage list hostonlyifs Name: VirtualBox Host-Only Ethernet Adapter GUID: d419ef62-3c46-4525-ad2d-be506c90459a Dhcp: Disabled IPAddress: 192.168.56.2 NetworkMask: 255.255.255.0 IPV6Address: fe80:0000:0000:0000:78e3:b200:5af3:2a57 IPV6NetworkMaskPrefixLength: 64 HardwareAddress: 08:00:27:00:94:e8 MediumType: Ethernet Status: Up VBoxNetworkName: HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter On the guest I've set up eth0 to use the same IP address as the host-only adapter (192.168.56.2), but when I try to log in using PuTTY I still get "Network Error : connection refused". The VirtualBox DHCP server is enabled, but I can't ping the gateway (192.168.56.1) from either the host or the guest. There's no firewall running on either OS. What next?
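
    One detail in the output above looks like the actual culprit: the host-only adapter on the Windows host already owns 192.168.56.2, so giving the guest's eth0 the same address creates an IP conflict rather than a working route. A hedged fix is to leave the host side alone and give the guest any other address in that subnet, e.g. on the CentOS guest:

        # /etc/sysconfig/network-scripts/ifcfg-eth0  (assuming eth0 is the host-only NIC)
        DEVICE=eth0
        BOOTPROTO=none
        IPADDR=192.168.56.10
        NETMASK=255.255.255.0
        ONBOOT=yes

    Then run "service network restart" and point PuTTY at 192.168.56.10. No gateway is needed on this interface; host-only traffic never leaves the physical machine, which is also why it keeps working even when the office DHCP server knows nothing about 192.168.56.x.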

    Read the article

  • ubuntu's average load never below "0.00 0.01 0.05"

    - by Karma Fusebox
    I have several ubuntu 12.04 VMs running on a ubuntu 12.04 KVM host. Those of the virtual machines that are totally idle with no services (except syslog and the other "small" standard stuff of a fresh installation) show a constant load of "0.00 0.01 0.05" in top/htop as average 1/5/15. When there are "real" applications running, the load averages behave perfectly normal but they never fall below the mentioned values. While this doesn't affect performance at all and could easily be ignored, it screws up the monitoring graphs in a very annoying way: (Notice how load15 behaves nicely if 0.05 for a short time in the right half of the pic) Unfortunately I don't know what diagnostic outputs might be helpful for you, so here's some default stuff: # top top - 16:31:01 up 1:05, 1 user, load average: 0.00, 0.01, 0.05 Tasks: 62 total, 1 running, 61 sleeping, 0 stopped, 0 zombie Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 99.2%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1019464k total, 73452k used, 946012k free, 6140k buffers Swap: 0k total, 0k used, 0k free, 22504k cached . # free -m total used free shared buffers cached Mem: 995 72 923 0 6 21 -/+ buffers/cache: 43 951 Swap: 0 0 0 . # iostat -x /dev/vda Linux 3.2.0-32-virtual (vm3) 11/15/2012 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 0.25 0.00 0.65 0.20 0.24 98.66 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util vda 0.14 0.12 0.51 0.22 6.74 1.46 22.50 0.02 23.26 20.64 29.30 7.63 0.56 Need something else? Has anyone ever seen this behavior? Might this be a bug in kvm/ubuntu/kernel 3.x in the end? Thanks a lot!

    Read the article

  • Kickstarting an Ubuntu Server 10.04 installation (DHCP fails)

    - by William
    I'm trying to automate the network installation of Ubuntu 10.04 LTS with an anaconda kickstart and everything seems to running except for the initial DHCP autoconfiguration. The installer attempts to configure the install via DHCP but fails on its first attempt. This brings me to a prompt where I can retry DHCP and it seems to always work on the second attempt. My issue is that this is not really automated if I have to hit retry for DHCP. Is there something I can add to the kickstart file so that it will automatically retry or better yet not fail the first time? Thanks. Kickstart: # System language lang en_US # Language modules to install langsupport en_US # System keyboard keyboard us # System mouse mouse # System timezone timezone America/New_York # Root password rootpw --iscrypted $1$unrsWyF2$B0W.k2h1roBSSFmUDsW0r/ # Initial user user --disabled # Reboot after installation reboot # Use text mode install text # Install OS instead of upgrade install # Use Web installation url --url=http://10.16.0.1/cobbler/ks_mirror/ubuntu-10.04-x86_64/ # System bootloader configuration bootloader --location=mbr # Clear the Master Boot Record zerombr yes # Partition clearing information clearpart --all --initlabel # Disk partitioning information part swap --size 512 part / --fstype ext3 --size 1 --grow # System authorization infomation auth --useshadow --enablemd5 %include /tmp/pre_install_ubuntu_network_config # Always install the server kernel. preseed --owner d-i base-installer/kernel/override-image string linux-server # Install the Ubuntu Server seed. preseed --owner tasksel tasksel/force-tasks string server # Firewall configuration firewall --disabled # Do not configure the X Window System skipx %pre wget "http://10.16.0.1/cblr/svc/op/trig/mode/pre/system/Test-D" -O /dev/null # Network information # Start pre_install_network_config generated code # Start of code to match cobbler system interfaces to physical interfaces by their mac addresses # Start eth0 # Configuring eth0 (00:1A:64:36:B1:C8) if ip -o link show | grep -i 00:1A:64:36:B1:C8 then IFNAME=$(ip -o link show | grep -i 00:1A:64:36:B1:C8 | cut -d" " -f2 | tr -d :) echo "network --device=$IFNAME --bootproto=dhcp" >> /tmp/pre_install_ubuntu_network_config fi # End pre_install_network_config generated code %packages openssh-server

    Read the article

  • Server not resolving after restart

    - by DomainSoil
    I restarted our server today, and now cannot for the life of me get anything to resolve... I suspect it has something to do with our routes. I've tried numerous Google results to no avail. Here is as far as I've gotten: [root@www ~]# route -n Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.1.101 0.0.0.0 255.255.255.255 UH 0 0 0 eth1 0.0.0.0 192.168.1.101 0.0.0.0 UG 0 0 0 eth1 Things you need to know: Our server (CentOS 6.3) runs two virtual machines, one live, and one development. They mirror each other as much as possible, but I can't find where I've went wrong with the live server. The dev server works fine. [root@www ~]# ifconfig eth1 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx inet6 addr: xxxx:xxx:xxxx:xxxx:xxxx/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:118206 errors:0 dropped:0 overruns:0 frame:0 TX packets:165 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:7825749 (7.4 Mib) TX bytes:7146 (69.2 KiB) Interrupt:28 [root@www ~]# /etc/init.d/network status Configured devices: lo Auto eth0 Currently active devices: lo eth1 If there is any other information you need, please don't hesitate to ask!

    Read the article
