Search Results

Search found 41598 results on 1664 pages for 'segmentation fault'.

Page 318/1664 | < Previous Page | 314 315 316 317 318 319 320 321 322 323 324 325  | Next Page >

  • How can I prevent OpenVPN from clobbering local route?

    - by ataylor
    I have a local network on 192.168.1.0 with netmask 255.255.255.0. When I connect to a VPN through OpenVPN (as a client), it pushes a route for 192.168.1.0 that clobbers the existing one, making my local network inaccessible. I don't need to access anything on 192.168.1.0 on the remote side; I'd like to just ignore that route while accepting the other routes that are pushed. My client is Ubuntu 10.10. How can I skip the one offending route?
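
    One possible approach, sketched under the assumption that OpenVPN 2.1+ is in use and not verified against this setup: the client-side route-nopull directive makes OpenVPN ignore all pushed routes, after which the routes you do want can be re-added by hand. The 10.8.0.0/24 network below is a placeholder for whatever the server actually pushes.

        # client.conf -- ignore every route the server pushes...
        route-nopull
        # ...then re-add only the ones you want (10.8.0.0/24 is an assumed example)
        route 10.8.0.0 255.255.255.0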

    Read the article

  • Time not propagating to machines on Windows domain

    - by rbeier
    We have a two-domain Active Directory forest: ourcompany.com at the root, and prod.ourcompany.com for production servers. Time is propagating properly through the root domain, but servers in the child domain are unable to sync via NTP, so the time on these servers is starting to drift since they're relying only on the hardware clock. When I type "net time" on one of the production servers, I get the following error:

        Could not locate a time-server. More help is available by typing NET HELPMSG 3912.

    When I type "w32tm /resync", I get the following:

        Sending resync command to local computer
        The computer did not resync because no time data was available.

    "w32tm /query /source" shows the following:

        Free-running System Clock

    We have three domain controllers in the prod.ourcompany.com subdomain (overkill, but the result of a migration - we haven't gotten rid of one of the old ones yet). To complicate matters, the domain controllers are all virtualized, running on two different physical hosts. But the time on the domain controllers themselves is accurate - the servers that aren't DCs are the ones having problems. Two of the DCs are running Server 2003, including the PDC emulator. The third DC is running Server 2008. (I could move the PDC emulator role to the 2008 machine if that would help.) The non-DC servers are all running Server 2008. All other Active Directory functionality works fine in the production domain - we're only seeing problems with NTP. I can manually sync each machine to the time source (the PDC emulator) by doing the following:

        net time \\dc1.prod.ourcompany.com /set /y

    But this is just a one-off, and it doesn't cause automated time syncing to start working. I guess I could create a scheduled task which runs the above command periodically, but I'm hoping there's a better way. Does anyone have any ideas as to why this isn't working, and what we can do to fix it? Thanks for your help, Richard
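
    One thing worth trying, offered as a hedged sketch since the root cause isn't confirmed: explicitly point the affected servers back at the domain hierarchy and force a rediscovery; afterwards, w32tm /query /source should name a DC instead of "Free-running System Clock". These are standard w32tm commands on Server 2008:

        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time
        w32tm /resync /rediscover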

    Read the article

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions: dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many filesystems use COW at a block level (all updates are done via writes to new blocks), but this is not what I want.
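
    For what it's worth, on a filesystem that supports reflinks (e.g. Btrfs), GNU cp can create exactly this kind of per-file copy-on-write clone - a sketch, assuming a reflink-capable filesystem and coreutils 7.5 or newer:

        cp -r --reflink=always original-tree/ work-tree/   # shares data blocks until a file is written to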

    Read the article

  • Exchange Transport Service Started but not working

    - by Philippe
    Good day, here is the problem: we are hosting a Microsoft Exchange server. Everything was working fine until recently, when the mail transport started going wrong; we almost have to restart the service every morning. The thing is that the transport service is started, but mail is not delivered to the users, and senders to our server get a delayed delivery notification. When we restart the service, all the mail is then delivered to the users and we're good to go for a day or two. Things I've noticed: the store service grows to around 6 GB of used RAM, and the w3wp.exe process hangs around 700 MB of RAM. Is there a way to schedule a restart of the transport role every 4 hours or so while I'm solving the issue, so I don't have to worry when I leave for the weekend? And most of all... does anyone have any idea how to solve this issue? Thanks, Philippe
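
    As a stopgap for the scheduling part of the question - a sketch only, since restarting the transport service on a timer masks the underlying problem rather than fixing it - the built-in schtasks can restart the Microsoft Exchange Transport service (service name MSExchangeTransport) every 4 hours:

        schtasks /create /tn "Restart Exchange Transport" /sc hourly /mo 4 /ru SYSTEM ^
            /tr "cmd /c net stop MSExchangeTransport && net start MSExchangeTransport"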

    Read the article

  • rsync unicode filename error

    - by Mirage
    I am getting this error while using rsync:

        Could not convert filename to Unicode: 'H20 dinkus_.pdf': Invalid or incomplete multibyte or wide character
        Could not convert filename to Unicode: 'ANT0012 H20 Brochure_OFFSET_paths_.pdf': Invalid or incomplete multibyte or wide character
        ntfs_mst_post_read_fixup: magic: 0x00000000 size: 1024 usa_ofs: 0 usa_count: 65535: Invalid argument

    What should I do?
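
    Two separate things seem to be going on here, so a hedged note: the "Could not convert filename to Unicode" messages mean some filenames are not valid in the expected character set; rsync 3.0+ built with iconv support can transcode names on the fly (the charset pair below is an assumption - adjust it to whatever the source side actually uses). The ntfs_mst_post_read_fixup error comes from the NTFS driver and usually indicates metadata corruption on the NTFS volume, which is normally repaired by running chkdsk /f on it from Windows.

        rsync -av --iconv=utf-8,iso8859-1 /mnt/ntfs-disk/ /backup/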

    Read the article

  • What is this in error_log ? Invalid method in request \x16\x03\x01

    - by valter
    Hello. I found the line "Invalid method in request \x16\x03\x01" in the error_log file, along with some other similar lines like:

        [Wed Oct 27 23:16:37 2010] [error] [client 187.117.240.164] Invalid URI in request x\xb2\xa1:SMl\xcc{\xfd"\xd1\x91\x84!d\x0e~\xf6:\xfbVu\xdf\xc3\xdb[\xa9\xfe\xd3lpz\x92\xbf\x9f5\xa3\xbbvF\xbc\xee\x1a\xb1\xb0\xf8K\xecE\xbc\xe8r\xacx=\xc7>\xb5\xbd\xa3\xda\xe9\xf09\x95"fd\x1c\x05\x1c\xd5\xf3#:\x91\xe6WE\xdb\xadN;k14;\xdcr\xad\x9e\xa8\xde\x95\xc3\xebw\xa0\xb1N\x8c~\xf1\xcfSY\xd5zX\xd7\x0f\vH\xe4\xb5(\xcf,3\xc98\x19\xefYq@\xd2I\x96\xfb\xc7\xa9\xae._{S\xd1\x9c\xad\x17\xdci\x9b\xca\x93\xafSM\xb8\x99\xd9|\xc2\xd8\xc9\xe7\xe9O\x99\xad\x19\xc3V]\xcc\xddR\xf7$\xaa\xb8\x18\xe0f\xb8\xff

    Apache did a graceful restart a few seconds after the first error...
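
    For context - a near-certain reading of the bytes themselves, though the client's intent can't be known from the log alone - \x16\x03\x01 is the start of a TLS handshake record, i.e. a client spoke HTTPS (or another TLS-wrapped protocol) to a port where Apache expected plain HTTP:

        \x16        TLS record type 22: handshake
        \x03\x01    record-layer protocol version 3.1, i.e. TLS 1.0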

    Read the article

  • Munin "Disk usage" is too high?

    - by f-aminov
    I've recently installed Munin on my server (a VMware guest) and saw that the "Disk usage" graph shows about 80-90%. Everything else (CPU load, RAM, etc.) seems to be running fine. I have only two virtual hosts on my server with 1000 users/day in total, so I don't think that's too much. Here is the graph for the disk usage. Server info: Debian Lenny, 510 MHz CPU, 512 MB RAM. Is this bad? What could possibly cause it? Thank you for any suggestions.

    Read the article

  • How can ShadowProtect SBS backup to alternating external drives?

    - by detly
    I am trying to configure ShadowProtect SBS (v. 4.1.5.10129) on Windows Server 2003 SBS to back up my server hard drives to two alternating external drives. What I want is to be able to swap one drive for another every Friday and have ShadowProtect continue on the same schedule. Ideally, this would require absolutely no user interaction whatsoever, apart from physically unplugging one drive and reconnecting the other. The trouble is, Windows Server 2003 does not allow you to assign the same drive letter to two different devices. So if I plug in drive #1 and assign it drive letter X:, then the next week when I unplug it and plug in drive #2, it gets some other letter. But since ShadowProtect is set to back up to X:\, it can't find it and the backup fails. The drives are Samsung STORY Station 3.0 2TB drives. How can I configure things so I can just swap the drives over every week and not worry about reconfiguring drive letters every time?
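
    One workaround to sketch - untested, and the volume GUIDs below are placeholders: Windows tracks drive letters per volume GUID, so a small scheduled batch file can force X: onto whichever of the two backup volumes is currently attached, using the built-in mountvol. Each drive's GUID is read once by running mountvol (with no arguments) while that drive is connected.

        @echo off
        rem Placeholders: substitute the real GUIDs that mountvol prints for each drive
        set VOL1=\\?\Volume{11111111-2222-3333-4444-555555555555}\
        set VOL2=\\?\Volume{66666666-7777-8888-9999-000000000000}\
        mountvol X: /D
        mountvol X: %VOL1% 2>nul || mountvol X: %VOL2%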

    Read the article

  • Persuading openldap to work with SSL on Ubuntu with cn=config

    - by Roger
    I simply cannot get this (a TLS connection to OpenLDAP) to work and would appreciate some assistance. I have a working OpenLDAP server on Ubuntu 10.04 LTS; it is configured to use cn=config, and most of the info I can find for TLS seems to use the older slapd.conf file :-( I've been largely following the instructions at https://help.ubuntu.com/10.04/serverguide/C/openldap-server.html plus stuff I've read here and elsewhere - which of course could be part of the problem, as I don't totally understand all of this yet! I have created an ssl.ldif file as follows:

        dn: cn=config
        add: olcTLSCipherSuite
        olcTLSCipherSuite: TLSV1+RSA:!NULL
        add: olcTLSCRLCheck
        olcTLSCRLCheck: none
        add: olcTLSVerifyClient
        olcTLSVerifyClient: never
        add: olcTLSCACertificateFile
        olcTLSCACertificateFile: /etc/ssl/certs/ldap_cacert.pem
        add: olcTLSCertificateFile
        olcTLSCertificateFile: /etc/ssl/certs/my.domain.com_slapd_cert.pem
        add: olcTLSCertificateKeyFile
        olcTLSCertificateKeyFile: /etc/ssl/private/my.domain.com_slapd_key.pem

    and I import it using the following command line:

        ldapmodify -x -D cn=admin,dc=mydomain,dc=com -W -f ssl.ldif

    I have edited /etc/default/slapd so that it has the following services line:

        SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///"

    And every time I make a change, I restart slapd with /etc/init.d/slapd restart. The following command line to test the non-TLS connection works fine:

        ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \
          -b dc=mydomain,dc=com -H "ldap://mydomain.com" "cn=roger*"

    But when I switch to ldaps using this command line:

        ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \
          -b dc=mydomain,dc=com -H "ldaps://mydomain.com" "cn=roger*"

    this is what I get:

        ldap_url_parse_ext(ldaps://mydomain.com)
        ldap_create
        ldap_url_parse_ext(ldaps://mydomain.com:636/??base)
        ldap_sasl_bind
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP mydomain.com:636
        ldap_new_socket: 3
        ldap_prepare_socket: 3
        ldap_connect_to_host: Trying 127.0.0.1:636
        ldap_pvt_connect: fd: 3 tm: -1 async: 0
        TLS: can't connect: A TLS packet with unexpected length was received..
        ldap_err2string
        ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

    Now if I check netstat -al I can see:

        tcp        0      0 *:www     *:*     LISTEN
        tcp        0      0 *:ssh     *:*     LISTEN
        tcp        0      0 *:https   *:*     LISTEN
        tcp        0      0 *:ldaps   *:*     LISTEN
        tcp        0      0 *:ldap    *:*     LISTEN

    I'm not sure if this is significant as well ... I suspect it is:

        openssl s_client -connect mydomain.com:636 -showcerts
        CONNECTED(00000003)
        916:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188:

    I think I've made all my certificates etc. OK, and here are the results of some checks. If I do this:

        certtool -e --infile /etc/ssl/certs/ldap_cacert.pem

    I get "Chain verification output: Verified." And:

        certtool -e --infile /etc/ssl/certs/mydomain.com_slapd_cert.pem

    gives "certtool: the last certificate is not self signed", but it otherwise seems OK? Where have I gone wrong? Surely getting OpenLDAP to run securely on Ubuntu shouldn't require a degree in rocket science! Any ideas?
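
    One thing that stands out, though it may not be the whole story: as written, ssl.ldif is missing the "-" separators that ldapmodify requires between attribute blocks in a single modify record (an explicit changetype: modify line is also conventional). A corrected sketch of the first few blocks:

        dn: cn=config
        changetype: modify
        add: olcTLSCipherSuite
        olcTLSCipherSuite: TLSV1+RSA:!NULL
        -
        add: olcTLSCRLCheck
        olcTLSCRLCheck: none
        -
        add: olcTLSCACertificateFile
        olcTLSCACertificateFile: /etc/ssl/certs/ldap_cacert.pem

    ...and so on for the remaining attributes. On Ubuntu 10.04, cn=config changes are also commonly applied as root over the local socket, e.g. ldapmodify -Y EXTERNAL -H ldapi:/// -f ssl.ldif, in case the cn=admin bind lacks rights to cn=config.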

    Read the article

  • How to set per user mail quota for postfix using policyd v2?

    - by ACHAL
    I have configured Cluebringer 2.0.7 (policyd v2) with MySQL and httpd, and all services are running well. Now I want to set a per-user quota for outgoing mail and restrict users to a fixed number of messages. I have tried to set up a quota for my host r10.4reseller.org, but it is not working:

        Quota List:
          Policy:        Default Outbound
          Name:          Default Outbound
          Track:         Sender:user@domain
          Period:        60
          Verdict:       REJECT
          Data:          (empty)
          Disabled:      no

        Quota Limits:
          Type:          MessageCount
          Counter Limit: 1
          Disabled:      no

    Do I need to do any more settings for the quota?

    Read the article

  • Scan all domain workstations for specific registry key/environmental variable

    - by Trevor
    I'm looking for scripts or software that can scan workstations on a domain for a particular environment variable (for interest, it was used to store the SOE build version) and generate a report. Accuracy is key: I don't want any workstations skipped or missed. And considering workstations will need to be powered on for anything to remotely read from the registry (and there's no guarantee they will be), that means something that can sit and run continuously for a while, updating its own records as it goes. Does anyone know of such a beast?
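
    In case it helps as a starting point, here is a rough PowerShell sketch. Assumptions throughout: the variable is stored as a machine-level environment variable named SOEBuild, the workstation names are in computers.txt, and the Remote Registry service is running on the targets. Machine environment variables live under a well-known registry key, and re-running the loop on a schedule picks up machines that were offline earlier.

        # Reads a machine-level environment variable from each workstation's registry.
        foreach ($pc in Get-Content computers.txt) {
            try {
                $hive = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $pc)
                $key  = $hive.OpenSubKey('SYSTEM\CurrentControlSet\Control\Session Manager\Environment')
                Add-Content report.csv "$pc,$($key.GetValue('SOEBuild'))"
            } catch {
                Add-Content report.csv "$pc,UNREACHABLE"
            }
        }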

    Read the article

  • Ubuntu Server 12.04 LTS on Hyper-V 2012

    - by user137533
    I have the following scenario: a Hyper-V 2012 server core installation. On top of this I created a virtual machine on which I tried installing Ubuntu Server 12.04, which should not have any compatibility issues according to what Microsoft and Ubuntu say (although it is not officially supported). I start the installation and everything is OK - no problems detecting the network device or the hard drive (unlike Debian, which didn't even detect the hard drive). Once the installation is complete it asks me to reboot, unmounts the "DVD drive" and reboots. When it tries to start again I get the following error:

        Boot failure. Reboot and Select proper Boot device or Insert Boot Media in the selected Boot device.

    It seems not to be booting from the virtual hard drive. The hard drive is set up in SCSI mode, with nothing mounted on the IDE controller (no ISO image or anything else). Does anyone have any ideas on what I can do to solve this?
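
    A likely culprit, stated as a strong hunch rather than a confirmed diagnosis: Hyper-V 2012 virtual machines boot only from the IDE controller; the SCSI controller is for data disks. Reattaching the existing virtual disk to IDE 0:0 should make it bootable. A sketch using the Hyper-V PowerShell module, with the VM name and disk path as placeholders:

        Remove-VMHardDiskDrive -VMName "UbuntuServer" -ControllerType SCSI `
            -ControllerNumber 0 -ControllerLocation 0
        Add-VMHardDiskDrive -VMName "UbuntuServer" -ControllerType IDE `
            -ControllerNumber 0 -ControllerLocation 0 -Path "C:\VMs\UbuntuServer.vhdx"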

    Read the article

  • Stress test speed on a gateway?

    - by TheLQ
    I'm interested in stress testing my gateway server but am lost on how. Most of the stress-testing applications I've seen only measure how much load an app like Apache can handle, not this. Essentially I want to send as many packets as I can into this box, with one computer on one card, and see how many come out the other side on another computer, just to get an idea of what kind of load it can handle. I'm also interested in how Snort will perform. I'm not really sure how to do this, though. What tools could you recommend that can do this?
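
    One common way to get a first approximation - a sketch; note that iperf measures throughput rather than raw packets per second, so for small-packet flood testing tools like hping3 or tcpreplay may be a better fit - is to run iperf on machines on either side of the gateway (10.0.0.2 is a placeholder for the far-side machine):

        iperf -s                          # on the machine behind the gateway
        iperf -c 10.0.0.2 -t 60           # on the machine in front: 60-second TCP test
        iperf -c 10.0.0.2 -u -b 500M -l 64   # UDP variant with small 64-byte datagrams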

    Read the article

  • gentoo install error

    - by alleria
    I installed Gentoo following the handbook from the official site. When I got to step 7.b, "Installing the Sources", the book says (Code Listing 2.2: Viewing the kernel source symlink): "When you take a look in /usr/src you should see a symlink called linux pointing to your kernel source." But in my VirtualBox there is no such file - only a linux-3.3.38-gentoo directory in src. And when I tried cd linux-3.3.38-gentoo and make menuconfig, an error occurred:

        init/Kconfig:389: can't open file "kernel/irq/Kconfig"

    How can I solve the problem?
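
    Two hedged suggestions, based on what the handbook expects at this step: the missing symlink can simply be created (by hand or with eselect), and the "can't open file kernel/irq/Kconfig" error usually means the unpacked source tree is incomplete, in which case reinstalling the sources is worth a try:

        cd /usr/src && ln -s linux-3.3.38-gentoo linux   # or: eselect kernel set 1
        emerge --oneshot gentoo-sources                  # re-extract the tree if Kconfig files are missing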

    Read the article

  • nginx static files caching doesn't work

    - by user74344
    Here is my conf file (/usr/local/nginx/sites-available/default):

        server {
            listen 80;
            server_name localhost;

            location / {
                root html;
                index index.php index.html index.htm;
            }

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }

            # serve static files directly
            location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|swf)$ {
                expires 30d;
            }
        }

    But it doesn't cache static files. How should I fix it? Thanks a lot.
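
    Two things worth checking, hedged since the rest of the setup isn't shown: the regex location declares no root, so those requests are served from whatever root it inherits (here the compiled-in default, since root is only set inside location /) - adding root html; to the static-file location is a cheap experiment. Also, expires does not cache anything on the server; it only sets client-side cache headers, which can be verified with curl:

        curl -I http://localhost/style.css
        # expect "Expires: ..." and "Cache-Control: max-age=2592000" (30 days) in the response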

    Read the article

  • How to get Faster RDP

    - by Jay
    We are using Windows 2008 R2 and connecting to it from Windows 7 boxes. RDP is faster than before, but we want to know if there are any other solutions that would make it faster. Due to a licensing issue, we got a piece of software that multiple people need to access (not all at the same time), and the only way I can find to do this is to install it on a server and give users RDP access. Is there a faster remote desktop tool than RDP? Thanks, Jay

    Read the article

  • Kunagi LDAP configuration problems

    - by Willem de Vries
    We recently started with Scrum at our company and we wanted to start using Kunagi to test and see how it works. So I installed the kunagi_0.23.2.deb package that I downloaded from their website on my Ubuntu 11.04, running in tomcat6 using openjdk-6-jre. Everything works fine except that I can't get LDAP to work. I have one AD server and one LDAP server at my disposal for testing. For the LDAP server I use the following info:

        URI:         ldap://192.168.1.11:389
        User:        some_tested_user
        Password:    the_pass
        DN:          dc=colosa,dc=net
        LDAP Filter: (&(objectClass=user))

    I have tested various LDAP filters; I don't know if I have the right one. However, I get an error when clicking "test LDAP". The error refers to the DN:

        Server service call error
        Calling service TestLdap failed.
        java.lang.RuntimeException: InvalidNameException: [LDAP: error code 34 - invalid DN]

    With the AD server I get no error while testing, yet I am not able to log in - I get "Login faild" every time. I don't know if this is because of the LDAP filter I entered, yet I can't get it to work. I have read http://kunagi.org/iss652.html stating that I need to create my accounts inside Kunagi before I can log in. So I did this, with no effect. So basically my question is: what causes this DN string error (I am sure mine is right), and what LDAP filter should I use? Any help would be highly appreciated.
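
    A hedged debugging step: LDAP error code 34 (invalid DN syntax) is typically raised for the bind DN rather than the base DN, so if the user field above holds a bare username, the server may be rejecting that - many directories require a full DN there (or, for AD, the user@domain form). Reproducing the bind outside Kunagi with ldapsearch narrows it down (the cn=... bind DN below is an assumed example):

        ldapsearch -x -H ldap://192.168.1.11:389 \
            -D "cn=some_tested_user,dc=colosa,dc=net" -w the_pass \
            -b "dc=colosa,dc=net" "(objectClass=user)"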

    Read the article

  • How to Change Sharepoint Port

    - by Jack Levin
    I have a SharePoint web application whose port I want to change. I am not sure how to do that, as it seems that it is not enough just to go to the IIS console and change the web application's port. I guess I need to make certain changes in the SharePoint Central Administration console as well.
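
    For what it's worth, a sketch covering only part of the question: if the site in question were the Central Administration site itself, stsadm has a dedicated operation for changing its port; for an ordinary content web application, the usual guidance is to pair the IIS binding change with an updated Alternate Access Mapping in Central Administration so SharePoint generates links with the new port.

        stsadm -o setadminport -port 8080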

    Read the article

  • Linux buffer cache effect on IO writes?

    - by Patrick LeBoutillier
    I'm copying large files (3 x 30G) between 2 filesystems on a Linux server (kernel 2.6.37, 16 cores, 32G RAM) and I'm getting poor performance. I suspect that the usage of the buffer cache is killing the I/O performance. To try and narrow down the problem I used fio directly on the SAS disk to monitor the performance. Here is the output of 2 fio runs (the first with direct=1, the second one direct=0). Config:

        [test]
        rw=write
        blocksize=32k
        size=20G
        filename=/dev/sda
        # direct=1

    Run 1:

        test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1
        Starting 1 process
        Jobs: 1 (f=1): [W] [100.0% done] [0K/205M /s] [0/6K iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=4667
          write: io=20,480MB, bw=199MB/s, iops=6,381, runt=102698msec
            clat (usec): min=104, max=13,388, avg=152.06, stdev=72.43
            bw (KB/s) : min=192448, max=213824, per=100.01%, avg=204232.82, stdev=4084.67
          cpu : usr=3.37%, sys=16.55%, ctx=655410, majf=0, minf=29
          IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
             submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             issued r/w: total=0/655360, short=0/0
             lat (usec): 250=99.50%, 500=0.45%, 750=0.01%, 1000=0.01%
             lat (msec): 2=0.01%, 4=0.02%, 10=0.01%, 20=0.01%

        Run status group 0 (all jobs):
          WRITE: io=20,480MB, aggrb=199MB/s, minb=204MB/s, maxb=204MB/s, mint=102698msec, maxt=102698msec

        Disk stats (read/write):
          sda: ios=0/655238, merge=0/0, ticks=0/79552, in_queue=78640, util=76.55%

    Run 2:

        test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1
        Starting 1 process
        Jobs: 1 (f=1): [W] [100.0% done] [0K/0K /s] [0/0 iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=4733
          write: io=20,480MB, bw=91,265KB/s, iops=2,852, runt=229786msec
            clat (usec): min=16, max=127K, avg=349.53, stdev=4694.98
            bw (KB/s) : min=56013, max=1390016, per=101.47%, avg=92607.31, stdev=167453.17
          cpu : usr=0.41%, sys=6.93%, ctx=21128, majf=0, minf=33
          IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
             submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             issued r/w: total=0/655360, short=0/0
             lat (usec): 20=5.53%, 50=93.89%, 100=0.02%, 250=0.01%, 500=0.01%
             lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.12%
             lat (msec): 100=0.38%, 250=0.04%

        Run status group 0 (all jobs):
          WRITE: io=20,480MB, aggrb=91,265KB/s, minb=93,455KB/s, maxb=93,455KB/s, mint=229786msec, maxt=229786msec

        Disk stats (read/write):
          sda: ios=8/79811, merge=7/7721388, ticks=9/32418456, in_queue=32471983, util=98.98%

    I'm not knowledgeable enough with fio to interpret the results, but I don't expect the overall performance using the buffer cache to be 50% less than with O_DIRECT. Can someone help me interpret the fio output? Are there any kernel tunings that could fix/minimize the problem? Thanks a lot,
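
    On the kernel-tuning part of the question, one knob commonly adjusted - a hedged suggestion, not a guaranteed fix for this workload - is the dirty-page writeback thresholds, so the page cache starts flushing earlier instead of accumulating many gigabytes of dirty data and then stalling writers:

        sysctl -w vm.dirty_background_ratio=5   # start background writeback at 5% of RAM
        sysctl -w vm.dirty_ratio=10             # throttle writers once dirty pages reach 10%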

    Read the article

  • Joining Samba to Active Directory with local user authentication

    - by Ansel Pol
    I apologise that this is somewhat incoherent, but hopefully someone will be able to make enough sense of this to understand what I'm trying to achieve and provide pointers. I have a machine with two network interfaces connected to two different networks (one of which it's providing several other services for, such as DNS), running two separate instances of Samba, one bound to each interface. One of the instances is just a workgroup-style setup using share-level authentication, which is all working fine. The problem is that I'm looking to join the other instance to an MS Active Directory domain (provided by MS Windows Small Business Server 2003) to enable a subset of the domain users to access the shares from Windows machines on the other network. The users who need access from the domain environment have accounts (whose names are all-lowercase versions of their domain usernames) on the machine running Samba, but I'm not sure about how to map the UIDs and everything I've read concerns authenticating accounts on that machine against either AD or another LDAP server. To clarify: I only want the credentials for AD users accessing the non-workgroup Samba instance to be authenticated against AD, not the accounts on the machine running Samba. I hope this is sufficiently clear. EDIT: In addition to being able to access the Samba shares from AD, I do also need to be able to access a share on the domain from the machine running Samba but would still like everything non-Samba-related to authenticate locally.
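
    For whatever it's worth, a minimal smb.conf sketch for the domain-joined instance, with heavy caveats: the realm and workgroup values are placeholders, this is untested against SBS 2003, and whether the domain users actually map onto the existing local accounts depends on name-service order and the winbind settings below (since the local names are lowercase versions of the domain usernames, "winbind use default domain" may line the two up).

        [global]
            security = ads
            realm = OURDOMAIN.LOCAL
            workgroup = OURDOMAIN
            # strip the DOMAIN\ prefix so DOMAIN\jsmith is looked up as plain "jsmith"
            winbind use default domain = yes
            idmap uid = 10000-20000
            idmap gid = 10000-20000
        # then join the domain with: net ads join -U Administrator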

    Read the article

  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is a very high disk I/O load on the virtual machines (and therefore also on the storage system), I get this error message on the VM consoles:

        [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
        [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
        [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
        [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand this message is the kernel reporting that these processes haven't run for 120 seconds, most likely because a disk access to the storage system has not yet been processed. But what is the effect on the processes? For example, will the postgres process eventually write its data when the storage system is idle again after a few minutes, so that all data is still consistent? Or will it abort the write, leaving some tables in an inconsistent state? I certainly expect the former to be the case - if disk access is slow, postgres (or any other affected process) should just wait as long as it takes. I can live with the application hanging for a few minutes. But if there is a chance of data corruption then any of these errors is really bad news. Please advise what to do here.

    Read the article

  • Hyper-V Manager: right-clicking on remote VM crashes MMC snap-in

    - by Greg Bray
    I have a Windows Server 2008 R2 Enterprise SP1 machine that I log into and use to manage virtual machines running on multiple Hyper-V servers on our domain. Sometimes, when I right-click on a remote VM, the Hyper-V Manager will crash and display the error message detailed below. If I use the Actions menu on the lower right, it works just fine, but for some reason right-clicking causes MMC to stop working. Is there any way to fix this issue? Here are the full details of the error message:

        Description: Stopped working

        Problem signature:
          Problem Event Name:    CLR20r3
          Problem Signature 01:  mmc.exe
          Problem Signature 02:  6.1.7600.16385
          Problem Signature 03:  4a5bc808
          Problem Signature 04:  Microsoft.Virtualization.Client
          Problem Signature 05:  6.1.0.0
          Problem Signature 06:  4ce7c9e3
          Problem Signature 07:  342
          Problem Signature 08:  1f
          Problem Signature 09:  System.OverflowException
          OS Version:            6.1.7601.2.1.0.274.10
          Locale ID:             1033

        Read our privacy statement online:
        http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409
        If the online privacy statement is not available, please read our privacy statement offline:
        C:\Windows\system32\en-US\erofflps.txt

    Read the article

  • backup of KVM VM's running on Ubuntu 12.4.1 precise edition from a remote machine

    - by Dr. Death
    I am creating a library API which will take a backup of all the VMs running on a KVM hypervisor. My VMs can be of any type. I am taking this backup from a remote machine and need to put the backup on a remote server. I have KVM and libvirt installed on my system. Some of my VMs are LVM-based and some are normal VMs running on KVM. I researched and found an excellent Perl script for taking the backup: http://pof.eslack.org/2010/12/23/best-solution-to-fully-backup-kvm-virtual-machines/ Since I am developing this library in C++ I cannot use it, but it has given me a good understanding of how this will work. One thing I have not been able to sort out: if my VMs are not created using virt-manager, or are created using some other tool, then the virsh list command does not show them, even though they are running perfectly on my KVM server. Is there a way to list these VMs anyhow? Secondly, when I am taking a backup from the remote machine, I drop out of my ssh session as soon as my libvirt command finishes, and for every command I need to ssh again. Is there a way to avoid ssh-ing each and every time? I have already set up an RSA key for ssh, but once my command finishes, control moves back to the remote machine, which then tries to find the source VM location on its own local drives, and that fails. This is the main problem I am facing. Also, for the LVM-based VMs I am able to take a live backup, but the non-LVM-based machines get suspended, so I am not able to take a live backup of them. Since my library will work on the remote machine only, I might not know the VM configurations on the KVM server, so I need to make this consistent for all the VMs. Please share anything related to this issue so that I may be able to take live backups of the non-LVM VMs as well. I'll update my findings for all of you as I go. Thanks in advance for your suggestions.
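
    On the two mechanical sub-questions, a hedged sketch (kvmhost and somevm are placeholders): VMs started directly with qemu/kvm outside of libvirt are invisible to virsh by design, since virsh lists only libvirt-managed domains - connecting explicitly to the system URI at least rules out a session/system mix-up. And rather than opening a new ssh session per command, libvirt can tunnel its own connection over ssh, so every virsh call is issued from the local machine:

        virsh -c qemu:///system list --all                      # all libvirt-managed domains, running or not
        virsh -c qemu+ssh://root@kvmhost/system list --all      # the same, driven from the remote machine
        virsh -c qemu+ssh://root@kvmhost/system dumpxml somevm  # e.g. fetch a domain's XML over the tunnel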

    Read the article
