Search Results

  • Can I completely remove the Windows DNS in favour of BIND9 in an AD network?

    - by Vinícius Ferrão
    I would like to remove the DNS feature from our Windows Domain Controllers and point the DNS servers to our BIND9 servers. I know it's possible to set up coexistence, but this requires a number of extra Windows DNS servers equal to the number of Domain Controllers in the network. Active Directory expects the _msdcs zone and other things like _tcp, _udp, etc. The main question is: how do I make BIND9 take care of all this AD-specific data? And with dynamic updates, to make AD even happier. Thanks. PS: Making BIND9 point to the Windows DNS servers to resolve the Active Directory specific zones isn't an option. We already do this... EDIT: As of today, I'm running without Windows DNS. I'm writing up a guide on how to do this, and I'll update this topic.
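
    A minimal named.conf sketch of the kind of zone this needs (the domain name and DC address below are placeholders, not from the question); for secure dynamic updates from the DCs you would use GSS-TSIG rather than the simpler IP-based allow-update shown here:

        zone "corp.example.com" {
            type master;
            file "dynamic/corp.example.com.db";
            check-names ignore;              // AD registers underscore names (_msdcs, _tcp, _udp)
            allow-update { 192.168.0.10; };  // the domain controller(s)
        };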

    Read the article

  • Protect Gnome Screen Saver Settings

    - by Jared Brown
    By default in Gnome, standard users can access their screensaver preferences and change settings such as the idle time and whether or not it locks the screen. I want to set each user's screensaver settings as the root user and only allow the root user to adjust them. What is the best (read: simplest + foolproof) way to accomplish this?
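
    A sketch using GConf's mandatory configuration source, which applies to all users and is writable only by root (the key names are the usual GNOME 2 gnome-screensaver keys; verify them with gconftool-2 -R /apps/gnome-screensaver first):

        sudo gconftool-2 --direct \
          --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory \
          --set --type bool /apps/gnome-screensaver/lock_enabled true
        sudo gconftool-2 --direct \
          --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory \
          --set --type int /apps/gnome-screensaver/idle_delay 10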

    Read the article

  • "mv: cannot stat file" in for loop

    - by F.C.
    I wanted to rename a lot of files with a pattern, so I tried this for loop:

        $ for f in *; do mv \""$f"\" \""HouseMD-S06E${f#*Episode }"\"; done

    But I got this error:

        mv: cannot stat `"House MD Season 6 Episode 01 - Broken (Parts 1 & 2).avi"': No such file or directory

    So what I did was echo the mv commands to a file like this:

        $ for f in *; do echo mv \""$f"\" \""HouseMD-S06E${f#*Episode }"\" >> mv.txt; done

    And then run the file with source. Any ideas why the first for loop didn't work, and how can I fix it?
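
    The \" escapes put literal double-quote characters into mv's arguments, so mv looks for a file whose name actually begins and ends with a quote character, which doesn't exist — the error message shows the stray quotes. The echo version works because the quotes written into mv.txt get re-parsed as shell quoting when the file is sourced. Since "$f" alone already handles spaces, a plausible fix is simply:

        for f in *; do mv "$f" "HouseMD-S06E${f#*Episode }"; done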

    Read the article

  • mysql is not connecting after data directory change

    - by user123827
    I've changed the data directory in /etc/my.cnf:

        datadir=/data/mysql
        socket=/data/mysql/mysql.sock

    I also moved the mysql folder from /var/lib/mysql/ to /data/mysql. Now when I connect to mysql I get the following error:

        [root@youradstats-copy mysql]# mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

    Also, when I look at /var/logs/msqld.log I get the following messages:

        InnoDB: Setting log file /data/mysql/ib_logfile0 size to 512 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Progress in MB: 100 200 300 400 500
        120704  7:43:31  InnoDB: Log file /data/mysql/ib_logfile1 did not exist: new to be created
        InnoDB: Setting log file /data/mysql/ib_logfile1 size to 512 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Progress in MB: 100 200 300 400 500
        InnoDB: Cannot initialize created log files because
        InnoDB: data files are corrupt, or new data files were
        InnoDB: created when the database was started previous
        InnoDB: time but the database was not shut down
        InnoDB: normally after that.
        120704  7:43:36 [ERROR] Plugin 'InnoDB' init function returned error.
        120704  7:43:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.

    I shut down mysql properly before making these changes and started it properly afterwards, but I don't know why I'm getting these messages. Please help me solve this issue: I have changed the socket path in my.cnf but it is still pointing to the old path...
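
    Two things stand out, sketched here assuming the config above is complete: the mysql command-line client reads its socket path from the [client] section (or falls back to its compiled-in default, hence the old /var/lib/mysql path), so setting it under [mysqld] alone is not enough; and the InnoDB errors suggest the old ib_logfile0/ib_logfile1 did not move along with the data files, so InnoDB created fresh ones that no longer match the data:

        [mysqld]
        datadir=/data/mysql
        socket=/data/mysql/mysql.sock

        [client]
        socket=/data/mysql/mysql.sock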

    Read the article

  • pfSense + DDoS Protection

    - by Jeremy
    I run a gaming community on a colo with a 100Mbps port. I want to buy a very cheap $35 server with the same 100Mbps port and run pfSense on it as a hardware firewall. I'm dealing with a bunch of 14-year-old kids who have access to botnets, so something like this can become a bit necessary. My overall question: is using pfSense on a cheap server with identical datacenter/port speed actually worth it for blocking DDoS attacks? In a bit more detail, since I assume you will ask: the attacks we receive are normally around 1Gbps. We currently run CentOS using the CSF firewall, and even with a software firewall we block 500Mbps UDP floods, or just generic attacks, pretty easily. Thanks, - Necro

    Read the article

  • droid cam makefile understanding and error

    - by nerorevenge
    I tried installing DroidCam on my Fedora 19 (64-bit). The link to the DroidCam application is here, and whenever I try to install it, the following Makefile is invoked:

        obj-m := v4l2loopback-dc.o

        all:
        	make -C /lib/modules/`uname -r`/build M=`pwd`

        test:
        	gcc test.c -o test

        clean:
        	make -C /lib/modules/`uname -r`/build M=`pwd` clean

        insmod:
        	sudo insmod v4l2loopback-dc.ko width=320 height=240

        rmmod:
        	sudo rmmod v4l2loopback-dc.ko

    And here is the error:

        -- INSTALL: Webcam parameters: '320' and '240'
        -- INSTALL: Building v4l2loopback-dc.ko
        make -C /lib/modules/`uname -r`/build M=`pwd`
        make: *** /lib/modules/3.9.5-301.fc19.x86_64/build: No such file or directory.  Stop.
        make: *** [all] Error 2
        -- INSTALL: v4l2loopback-dc.ko not built.. Failure

    build happens to be a symbolic link. I was wondering what exactly the Makefile is trying to do, and why is it failing?
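
    The all target compiles the kernel module against the headers of the running kernel via the /lib/modules/$(uname -r)/build symlink; the error means that symlink is dangling because the matching kernel headers are not installed. A likely fix on Fedora (package names assumed to follow Fedora's usual naming):

        sudo yum install gcc make kernel-devel
        # the headers must match the running kernel exactly:
        sudo yum install "kernel-devel-$(uname -r)"
        ls -l /lib/modules/$(uname -r)/build   # should now resolve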

    Read the article

  • Best option for storage clustering

    - by sam
    I'm working on an application that requires a large amount of storage space and I want to handle storage 'in-house' (much cheaper than, say, S3), so we will have multiple servers (initially 4) with large amounts of storage (6TB each). The storage will need to be very flexible and configurable: each piece of data should be replicated on at least 2 servers and must be easily readable/writable from either an API or a UNIX device/file/folder like a normal drive, I don't mind which. We must also be able to easily offload content to our HTTP CDN (Edgecast). It doesn't need to have built-in HTTP support, but if it doesn't I'm going to have to write something to get the files onto HTTP so they can be pulled by the CDN. I've looked at a lot of solutions, including Eucalyptus Walrus, OpenStack Object Storage, MogileFS, and some others which I can't remember. All the servers will be running RHEL 6; they have 4x1.5TB drives which will be RAID1'd into a single partition. All the servers have 1GB/s connections between them and 100MB/s connections to the internet with unlimited bandwidth. They have 2x2.66GHz processors. I understand there isn't a single, perfect answer, but it would be nice to get some pointers.
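
    One option not on the list above that fits the "mounts like a normal folder, every file on at least 2 servers" requirement is GlusterFS; a minimal sketch of a 4-node replica-2 volume (hostnames and brick paths are hypothetical):

        # on server1, after installing glusterfs-server on all four nodes:
        gluster peer probe server2
        gluster peer probe server3
        gluster peer probe server4
        gluster volume create storevol replica 2 transport tcp \
            server1:/export/brick server2:/export/brick \
            server3:/export/brick server4:/export/brick
        gluster volume start storevol
        # any node (or client) can then mount it like a normal drive:
        mount -t glusterfs server1:/storevol /mnt/store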

    Read the article

  • Machine check events logged

    - by GoldenNewby
    In /var/log/messages, this error occurred:

        Sep 19 13:18:15 wdc kernel: [2772302.630416] Machine check events logged

    Shortly thereafter, the entire server became unresponsive. This is in the log of the Dom0 for a Xen server (running the latest version on Debian Squeeze). Can anyone shed some light on what this error means? Should I be ordering new hardware? Edit: Also, it seems to imply it logged something; where can I find that?
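
    Machine check events are hardware error reports raised by the CPU (commonly memory, cache, or bus errors); the kernel message only says they were recorded, and decoding them takes mcelog. A sketch for Squeeze, assuming the stock mcelog package (which periodically drains /dev/mcelog):

        aptitude install mcelog
        # decoded events end up here:
        cat /var/log/mcelog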

    Read the article

  • Squid Authentication & streaming

    - by Steve Butler
    I've got Squid set up using Kerberos authentication. I'm also using squidGuard as a URL redirector to block out the usual nastiness of the web. There are some sites, though, that we allow certain users to, and others not. This all works well, assuming I'm not using any streaming. From what I can determine from the Squid logs and the Wireshark traces I've done, when the initial request to stream is sent, everything is good: the authenticated username is sent with the request to squidGuard. The problem is that on subsequent traffic the username is not sent to squidGuard, causing it to be blocked based on the default policy. I've tried using Squid's built-in allow/deny stuff, but it's relatively clunky, and so far squidGuard has been pretty easy and fast. Here come the questions: How do I get Squid to pass the username on all requests? (Something tells me this isn't the best way.) How do I get squidGuard to see that traffic is authenticated to a specific user even when a username isn't passed? Is there any other way of accomplishing this? A few details that may be of importance: I'm using a list of users stored in a text file for squidGuard to compare against. I'm using full Kerberos auth with Squid. CentOS 6.0, Squid 3.1.4, Squidguard 1.3
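
    For reference, a rough sketch of the wiring involved (helper paths and the Kerberos principal are placeholders, not from the question); Squid includes the authenticated username on each line it hands to the URL rewriter, which is what squidGuard's user lists match against:

        auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
        auth_param negotiate children 10
        acl auth_users proxy_auth REQUIRED
        http_access allow auth_users
        url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
        url_rewrite_children 5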

    Read the article

  • Cron stopped working, partially working.

    - by Robi
    Our cron scripts stopped working on different dates in August. What could the possible reasons be? We did not change anything. Our hosting company showed us a log where we can see that cron is executing our scripts, but nothing happens in our scripts. If we execute the scripts manually, we get correct results like before. I showed the commands to the hosting company and they showed me that the commands are working. What should I tell my hosting company? What should I do? They are PHP scripts executed by cron, and they just post to Facebook and Twitter; they don't do anything hard or huge. I even asked my hosting company if we broke any rules.
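
    A useful next step when "cron runs it but nothing happens" is to capture exactly what the job prints, since cron's environment (PATH, working directory) differs from an interactive shell; a sketch, with the PHP binary and script path as placeholders:

        */5 * * * * /usr/bin/php /home/user/scripts/post.php >> /tmp/cron_debug.log 2>&1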

    Read the article

  • Bash Script Exits su or ssh Session Rather than Script

    - by Russ
    I am using CentOS 5.4. I created a bash script that does some checking before running any commands. If the check fails, it will simply exit 0. The problem I am having is that on our server, the script will exit the su or ssh session when the exit 0 is called.

        #!/bin/bash
        # check if directory has contents and exit if none
        if [ -z "`ls /ebs_raid/import/*.txt 2>/dev/null`" ]; then
          echo "ok"
          exit 0
        fi

    Here is the output:

        [root@ip-10-251-86-31 ebs_raid]# . test.sh
        ok
        [russ@ip-10-251-86-31 ebs_raid]$

    As you can see, I was removed from my su session; if I hadn't been in the su session, it would have logged me out of my ssh session. I am not sure what I am doing wrong here or where to start.
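
    The output contains the clue: the script is being run as ". test.sh", which sources it into the current shell, so exit 0 exits that shell — the su session itself. A sketch of two ways around it:

        bash test.sh     # or ./test.sh -- runs in a child shell, so exit only ends the child
        # or, if the script must be sourced, use "return 0" in place of "exit 0"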

    Read the article

  • Setting up Windows SBS 2008 network on Xen

    - by samyboy
    I'm trying to install a Windows SBS 2008 server in a Xen environment. The OS is booting fine. Unfortunately I can't figure out how to set up the network settings. Dom0 is a Debian Lenny hosting around 10 virtual servers. Here are the settings I'm using in the hosted Windows SBS:

        IP address:   10.20.0.8
        Network mask: 255.255.0.0
        Gateway:      10.20.0.1

    Note that during the installation stage, Windows set the netmask to 255.255.255.0 without letting me choose. Gross. Windows SBS tells me I have a "limited connection". I can't ping the gateway nor any other IP except localhost and its own IP (10.20.0.8). Here is the Xen config file:

        kernel = '/usr/lib/xen-3.2-1/boot/hvmloader'
        builder = 'hvm'
        memory = '4096'
        device_model='/usr/lib/xen-3.2-1/bin/qemu-dm'
        acpi=1
        apic=1
        pae=1
        vcpus=1
        name = 'winexchange'

        # Disks
        disk = [ 'phy:/dev/wnghosts/exchange-disk,ioemu:hda,w', 'file:/mnt/freespace/ISO/DVD1_Installation.iso,ioemu:hdc:cdrom,r' ]

        # Networking
        vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, bridge=xenbr0' ]

        # video
        stdvga=0
        serial='pty'
        ne2000=0

        # Behaviour
        boot='c'
        sdl=0

        # VNC
        vfb = [ 'type=vnc' ]
        vnc=1
        vncdisplay=1
        vncunused=1
        usbdevice='tablet'

    This config is working with other Windows XP domUs. I tried changing the ne2000 value between 0 and 1 with no effect. I am far from having good Windows administration skills, so I guess I definitely need some help on this case. Thanks.
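
    One thing worth trying, offered as an assumption rather than a known fix: with ne2000=0, qemu-dm falls back to emulating a Realtek rtl8139, and Windows Server 2008 tends to get along better with the emulated Intel e1000 until proper paravirtual (GPLPV) drivers are installed. The vif line would then become:

        vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, model=e1000, bridge=xenbr0' ]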

    Read the article

  • What's the best way of handling permissions for apache2's user www-data in /var/www ?

    - by gyaresu
    Has anyone got a nice solution for handling files in /var/www/? We're running name-based virtual hosts and the Apache 2 user is www-data. We've got two regular users plus root, so when messing with files in /var/www, rather than having to run

        chown -R www-data:www-data

    all the time, what's a good way of handling this? Supplementary question: how hardcore do you then go on permissions? This one has always been a problem in collaborative development environments. Cheers.
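
    A common approach is a shared group plus the setgid bit on directories, so new files inherit the group and stay editable by every developer while remaining readable by www-data; a sketch (group and usernames are placeholders):

        addgroup webdev
        adduser alice webdev
        adduser bob webdev
        chgrp -R webdev /var/www
        chmod -R g+w /var/www
        # setgid on directories so new files inherit the webdev group:
        find /var/www -type d -exec chmod g+s {} \;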

    Read the article

  • Determine the time difference between two linux servers

    - by Paul
    I am troubleshooting a network latency issue. It is probably a NIC or cabling issue, but while I was going through the process of figuring it out, I was looking at the timings of a ping packet leaving one network card and arriving at another server, both Linux. So I have tcpdump running on both, I issue a ping from one to the other and back again, and looking at the timing differences might have shed light on where the latency is coming from. It is an academic exercise now, as I need to eliminate some more fundamental causes, but I was curious as to how this could be achieved. Given that ntpd is installed and running on both servers, how can I confirm the current time discrepancy between the two servers, to whatever level of accuracy is possible — given that we are talking about latency on a local LAN, which is ideally a millisecond or so? NTP itself is accurate to a couple of ms under good conditions, and as both servers are in the same environment, they should (presumably) achieve a similar level of accuracy, and so should have a time discrepancy between them of only a few ms — but how can I check this?
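
    A quick way to measure the offset directly, assuming the standard NTP client tools are installed: ntpdate in query-only mode reports the offset between the local clock and the named server without changing anything, and ntpq shows each server's offset to its configured sources:

        # run on server A; prints the offset (in seconds) relative to server B:
        ntpdate -q serverB
        # per-peer offset/jitter of the local ntpd, in milliseconds:
        ntpq -p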

    Read the article

  • IPv6 static routes

    - by user98651
    I am looking to configure a few hosts with IPv6 on my network. The router (running CentOS 5) is configured with a Hurricane Electric (HE) tunnel, which works fine on that host. However, I would like to statically give a few additional hosts on the same LAN IPv6 connectivity through this tunnel. No, I don't want radvd or dhcpv6 to do the work for me in this case. I already have IPv6 forwarding enabled in sysctl.conf. I am looking for help with the next steps (statically adding the addresses and routes). Let's say the IP addresses are as follows: Router: 2001:470:1b07:1:: Host1: 2001:470:1b07:2:: How would I go about making them see each other? Thanks in advance for the help.
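
    A sketch with ip(8), under the assumption that the router and host share one LAN segment and can live in a single /64 — note the question's two addresses sit in different /64s and are the all-zeros anchor addresses, so this renumbers slightly:

        # on the router (which already has the HE tunnel):
        ip -6 addr add 2001:470:1b07:1::1/64 dev eth0
        # on Host1:
        ip -6 addr add 2001:470:1b07:1::2/64 dev eth0
        ip -6 route add default via 2001:470:1b07:1::1 dev eth0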

    Read the article

  • How to upgrade XBMC Live from 9.04.1 to 9.11 via command line?

    - by sunpech
    I've been unable to do a fresh install of XBMC Live 9.11 to my hard drive. Every time, it fails at the Install System step. But I am able to get XBMC Live 9.04.1 to install successfully. How do I upgrade XBMC Live 9.04.1 to 9.11? I understand that Ctrl+Shift+F2 brings up the command line, but what is the next set of commands to run?
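
    XBMC Live is Ubuntu-based, so the usual route from the command line is apt; a sketch, assuming the 9.11 packages are available from the repositories the 9.04.1 install is already configured with (this may also pull in a distribution upgrade):

        sudo apt-get update
        sudo apt-get upgrade
        sudo apt-get dist-upgrade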

    Read the article

  • difference between compiled and installed via rpm (zypper)

    - by cherouvim
    On openSUSE 11.1 I download, compile and install ImageMagick via:

        wget ftp://.../pub/graphics/ImageMagick/ImageMagick-6.7.7-0.zip
        unzip ImageMagick-6.7.7-0.zip
        cd ImageMagick-6.7.7-0
        ./configure --prefix=/usr/local/ImageMagick
        make
        make install

    Everything works nicely until I discover that JPG is not supported:

        identify -list format | grep -i jpg
        [nothing related to JPG returned]

    So I reconfigure and recompile using:

        ./configure --prefix=/usr/local/ImageMagick --with-jpeg=yes --with-jp2=yes
        make
        make install

    But that changes nothing. I end up uninstalling (make uninstall) and installing via zypper:

        zypper install ImageMagick

    This installed version 6.4.3, and now it does support JPG:

        identify -list format | grep -i jpg
        JPG* JPEG rw- Joint Photographic Experts Group JFIF format

    Any idea what is going on here? What is a possible reason that this capability of ImageMagick was not there when compiled from source but was there when installed from rpm? Note that I don't necessarily care a lot about ImageMagick (since it now works), but generally about this kind of behaviour, because in one way or another I've seen this happen on other occasions as well.
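
    The usual cause: ./configure silently disables any delegate whose development headers it can't find, even with --with-jpeg=yes, whereas the distro rpm was built on a machine with all the -devel packages present. A sketch of the fix (package name assumed to follow openSUSE's usual naming):

        zypper install libjpeg-devel
        ./configure --prefix=/usr/local/ImageMagick
        # before running make, check the delegate summary configure prints at the
        # end and confirm jpeg is now listed as supported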

    Read the article

  • needing storage integrity (write/read) test - for BASH

    - by Mr. Bash
    In need of shell scripts / bash commands to verify the data integrity of local hard drives, USB drives, etc. Like the famous www.heise.de/download/h2testw, or something that is at least common within repositories. (h2testw writes a specific data string over and over onto the medium, then reads it again to verify that it was written correctly, and displays write/read time/speed.) Please, no

        dd if=/dev/random of=/dev/sdx bs=1k && dd if=/dev/sdx of=/dev/null bs=1k

    since it won't verify that everything was written correctly; it only tests whether read/write to the device succeeds. So far, I'm not too happy with badblocks -w -v /dev/sdx1 either, since it seems rather slow, I don't know exactly what it writes, and I don't know whether it considers wear-leveling on flash media. There is also a program named F3 (http://oss.digirati.com.br/f3/) that needs to be compiled. Designed after h2testw, the concept sounds interesting; I'd just rather have it as a ready-to-go bash script.
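
    A rough bash sketch in the spirit of h2testw (not a replacement for it): fill the mount point with random chunks, record a checksum per chunk, drop the page cache so the re-read really hits the medium, then verify. Chunk size and paths are arbitrary choices:

        #!/bin/bash
        # usage: ./fillverify.sh /mnt/medium   (run as root, for drop_caches)
        target=${1:?usage: $0 /mnt/medium}
        sums=$(mktemp)
        i=0
        # write 256MiB chunks until the medium is full (dd fails on ENOSPC)
        while dd if=/dev/urandom of="$target/chunk_$i" bs=1M count=256 conv=fsync 2>/dev/null; do
            md5sum "$target/chunk_$i" >> "$sums"
            i=$((i+1))
        done
        sync
        echo 3 > /proc/sys/vm/drop_caches   # force re-reads from the device
        md5sum -c "$sums"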

    Read the article

  • Increasing file descriptor limit on Debian does not work! Help!

    - by Aco
    I am running Debian 6 and I am trying to increase the file descriptor limit, but it does not want to work. This is what I have done: I edited /etc/sysctl.conf by adding

        fs.file-max = 64000

    at the end and applied the changes using sysctl -p. I then edited /etc/security/limits.conf and added the following lines:

        * soft nofile 64000
        * hard nofile 64000

    Now when I execute ulimit -Hn and ulimit -Sn I still see 1024. I rebooted the server and I still get the same result. What have I failed to do?
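
    Two usual culprits, sketched here as things to check rather than a confirmed diagnosis: the * wildcard in limits.conf does not apply to root, and limits.conf is only read if pam_limits runs when the session starts — fs.file-max is a system-wide kernel limit and does not affect ulimit at all:

        # if testing as root, add explicit entries:
        root soft nofile 64000
        root hard nofile 64000

        # confirm pam_limits is enabled, then log out and back in:
        grep pam_limits /etc/pam.d/common-session
        # expected: session required pam_limits.so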

    Read the article

  • sendmail redhat

    - by lepricon123
    For some reason, even after providing the sender's from address, my mails are not being delivered; the from is missing, as in the maillog below. Any suggestions?

        May 8 20:08:43 tawq02 sendmail[13443]: o4938hJD013443: ruleset=check_mail, arg1=<{}, relay=localhost.localdomain [127.0.0.1], reject=553 5.5.4 <{}... Domain name required for sender address {}
        May 8 20:08:43 tawq02 sendmail[13443]: o4938hJD013443: from=<{}, size=0, class=0, nrcpts=0, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
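
    Reading the log: the envelope sender sendmail received is literally {}, which is not a fully qualified address, hence the 553 "Domain name required for sender address" rejection — whatever submits the mail is passing a placeholder instead of a real sender. If the mail is submitted via the sendmail binary, a sketch of a valid invocation (the address is a placeholder):

        /usr/sbin/sendmail -f noreply@example.com -t < message.txt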

    Read the article

  • KVM machine does not start ssh, network is started, used to work

    - by lleto
    I have been searching and pulling my hair out for the last 6 hours. I have a virtual machine that has been running fine for the last six months. I was happily ssh'ing into it and it was running a database and some small apps. Tonight ssh stopped working, so I decided to reboot the machine. I now have the following situation:

    - virsh list --all states the machine as running
    - I can ping the machine and get a reply
    - When I ssh to the machine I see "ssh: connect to host [myserver] port 22: Connection refused"
    - nmap does not show port 22 as open

    I have tried to:

    - reboot the machine once more (no luck)
    - mount the filesystem and check /etc/ssh/sshd.conf (it has not changed since the working situation)
    - install virsh console, however this does not seem to work

    When I mount the fs directly using losetup, the strange thing is that file dates seem to be frozen in /var/log/ around the time of the crash. If I look in /var/run/ I can see an sshd.pid, but the time is 6 hours ago (and numerous reboots). My virsh XML looks like this:

        <domain type='kvm' id='21'>
          <name>myserver</name>
          <uuid>09678c8d-a99b-1d18-a7af-88d027cc8f93</uuid>
          <memory>1048576</memory>
          <currentMemory>1048576</currentMemory>
          <vcpu>1</vcpu>
          <os>
            <type arch='x86_64' machine='pc-1.0'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
          </features>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>destroy</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/dev/disk01/myserver'/>
              <target dev='hda' bus='ide'/>
              <alias name='ide0-0-0'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>
            <controller type='ide' index='0'>
              <alias name='ide0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:e3:13:86'/>
              <source bridge='br0'/>
              <target dev='vnet0'/>
              <model type='virtio'/>
              <alias name='net0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <serial type='pty'>
              <source path='/dev/pts/1'/>
              <target port='0'/>
              <alias name='serial0'/>
            </serial>
            <console type='pty' tty='/dev/pts/1'>
              <source path='/dev/pts/1'/>
              <target type='serial' port='0'/>
              <alias name='serial0'/>
            </console>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
              <listen type='address' address='127.0.0.1'/>
            </graphics>
            <video>
              <model type='cirrus' vram='9216' heads='1'/>
              <alias name='video0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </video>
            <memballoon model='virtio'>
              <alias name='balloon0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </memballoon>
          </devices>
          <seclabel type='dynamic' model='apparmor' relabel='yes'>
            <label>libvirt-09678c8d-a99b-1d18-a7af-88d027cc8f93</label>
            <imagelabel>libvirt-09678c8d-a99b-1d18-a7af-88d027cc8f93</imagelabel>
          </seclabel>
        </domain>

    I'm sort of lost as to where I can look to get the machine up and running again. On the same instance of KVM I have another server running which is working fine. Both are Ubuntu 12.04. All help is welcome....
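
    Since virsh console is reported not to work: the XML above already defines a serial console, but the guest also needs a getty listening on ttyS0 before a login prompt appears. A sketch for an Ubuntu 12.04 guest (upstart), which can be dropped in from the mounted filesystem — this is an assumption about the guest's setup, not taken from the question:

        # /etc/init/ttyS0.conf
        start on stopped rc RUNLEVEL=[2345]
        stop on runlevel [!2345]
        respawn
        exec /sbin/getty -L 115200 ttyS0 vt102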

    Read the article

  • How to restore a dd overwritten disk partition?

    - by DairyKnight
    First of all, I admit I'm stupid and I didn't run proper backups of my data, but you know, crap happens... So, I've used dd to overwrite the first 2GB of my 750GB NTFS partition with a FAT32 partition. I've run PhotoRec and EasyRecovery, but all I can restore is the 2GB FAT32 partition and the files on it. Is there a way to "roll back" to the NTFS partition and recover at least some part of the 750GB of data? Thanks.
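
    One avenue worth trying, as a sketch rather than a promise: NTFS keeps a backup boot sector at the end of the partition, and TestDisk can restore the boot sector from it, which may bring the filesystem back minus whatever lived in the overwritten first 2GB (which likely included much of the MFT):

        testdisk /dev/sdX      # device name is a placeholder
        # Analyse -> Quick Search, then Deeper Search if nothing shows up;
        # Advanced -> Boot offers the backup-boot-sector repair for NTFS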

    Read the article

  • Rsyslog stops sending data to remote server after log rotation

    - by Vincent B.
    In my configuration, rsyslog is in charge of following changes to /home/user/my_app/shared/log/unicorn.stderr.log using imfile. The content is sent to a remote logging server using TCP. When the log file rotates, rsyslog stops sending data to the remote server. I tried reloading rsyslog, sending a HUP signal and restarting it altogether, but nothing worked. The only ways I could find that actually worked were dirty: stop the service, delete the rsyslog stat files and start rsyslog again (all that in a postrotate hook in my logrotate file), or kill -9 rsyslog and start it over. Is there a proper way for me to do this without touching rsyslog internals?

    Rsyslog file:

        $ModLoad immark
        $ModLoad imudp
        $ModLoad imtcp
        $ModLoad imuxsock
        $ModLoad imklog
        $ModLoad imfile

        $template WithoutTimeFormat,"[environment] [%syslogtag%] -- %msg%"
        $WorkDirectory /var/spool/rsyslog

        $InputFileName /home/user/my_app/shared/log/unicorn.stderr.log
        $InputFileTag unicorn-stderr
        $InputFileStateFile stat-unicorn-stderr
        $InputFileSeverity info
        $InputFileFacility local8
        $InputFilePollInterval 1
        $InputFilePersistStateInterval 1
        $InputRunFileMonitor

        # Forward to remote server
        if $syslogtag contains 'apache-' then @@my_server:5000;WithoutTimeFormat
        :syslogtag, contains, "apache-" ~
        *.* @@my_server:5000;SyslFormat

    Logrotate file:

        /home/user/shared/log/*.log {
            daily
            missingok
            dateext
            rotate 30
            compress
            notifempty
            extension gz
            copytruncate
            create 640 user user
            sharedscripts
            postrotate
                (stop rsyslog && rm /var/spool/rsyslog/stat-* && start rsyslog 2>&1) || true
            endscript
        }

    FYI, the file is readable by the rsyslog user, my server is reachable, and other log files which do not rotate on the same cycle continue to be tracked properly. I'm running Ubuntu 12.04.
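
    The likely root cause: the imfile in rsyslog v5 (what Ubuntu 12.04 ships) tracks the file by inode plus offset and cannot detect copytruncate-style rotation, so after truncation it keeps waiting past the now-reset end of file. Newer rsyslog (v8) can handle this itself; a sketch of the equivalent input in the newer config syntax, with reopenOnTruncate doing the work the stat-file deletion hack does today:

        module(load="imfile")
        input(type="imfile"
              File="/home/user/my_app/shared/log/unicorn.stderr.log"
              Tag="unicorn-stderr"
              Severity="info"
              Facility="local8"
              reopenOnTruncate="on")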

    Read the article
