Search Results

Search found 5885 results on 236 pages for 'finally'.

Page 58/236

  • Multi-account Google Apps Calendar Free-busy status?

    - by Andrew Bolster
    I have 3 Google Apps domain accounts and a personal Google account, and until recently there was little need for the Google Apps calendars to have any real use. Now, however, the powers that be have finally discovered the smart rescheduler and the other Google tools for managing meetings and schedules. Unfortunately, I'm now in a position where I have event notifications all over the place, and because the calendars do not know about each other, I'm losing all of the advantages of the rescheduler / free-busy status. TL;DR: 4 calendars, a unified free/busy status without having to manually copy every event, please.

    Read the article

  • Best network tuning variables for a Linux proxy

    - by smarthall
    What are the best settings to tune so that Linux can handle a very large number of TCP connections, such as would be seen by a proxy server or a web server? I'm using CentOS 6 and Squid, and I am seeing a large number of TIME_WAIT connections backing up until the machine finally stops responding. The machine isn't loaded at the time, but it has trouble making incoming and outgoing connections. I've had several suggestions to tune /proc/sys/net/ipv4/tcp_tw_reuse and /proc/sys/net/ipv4/tcp_tw_recycle, but those suggestions mention bad interactions with load balancers and NAT, both of which are used in my situation.
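
    For reference, a minimal sketch of how one might experiment with TCP-related knobs programmatically. The sysctl names below are real Linux keys, but the values are illustrative assumptions only, not tuning recommendations, and the usual place to persist such settings is /etc/sysctl.conf (run as root):

        #!/usr/bin/env python3
        """Illustrative sketch: apply a set of TCP-related sysctls by writing to /proc/sys."""
        import os

        # Hypothetical starting points often discussed for proxy-style workloads.
        SYSCTLS = {
            "net/ipv4/tcp_fin_timeout": "30",               # shorten the FIN-WAIT-2 timeout
            "net/ipv4/tcp_tw_reuse": "1",                   # allow reuse of TIME_WAIT sockets for outgoing connections
            "net/ipv4/ip_local_port_range": "1024 65000",   # widen the ephemeral port range
            "net/core/somaxconn": "4096",                   # larger accept queue
        }

        def apply_sysctls(settings):
            for key, value in settings.items():
                path = os.path.join("/proc/sys", key)
                try:
                    with open(path, "w") as f:
                        f.write(value)
                    print(f"set {key} = {value}")
                except OSError as err:   # missing key or insufficient privileges
                    print(f"skipped {key}: {err}")

        if __name__ == "__main__":
            apply_sysctls(SYSCTLS)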

    Read the article

  • Slow write speeds on new Gigabit home file server

    - by Ryan Holder
    So I finally got all my parts delivered this week to set up a home file/backup server. It's currently running Ubuntu Server, and I'm using Samba to share files on my network. The server currently has a 2 TB WD Green drive in it, connected to an Asus M5A78L-M motherboard. This is then connected via Cat 6a cable to my new Gigabit switch (TP-Link TL-SG1005D). My home desktop is also connected to this switch, again through Cat 6a cable. Currently, when transferring files, I get a perfect 100 MB/s read from the server to my Windows machine. When copying from my Windows machine to the server I get around 30-38 MB/s. I know this drive is capable of faster speeds, so would anybody have an idea of where the bottleneck is? Any help would be greatly appreciated :) EDIT: I have found that FTP's write speed is much closer to my Samba read speed, so I'm going to guess this is a software problem rather than hardware.

    Read the article

  • How to determine a keyboard variation when the manufacturer changes it

    - by Maksee
    When I decided to purchase a Toshiba Z830, I specifically noticed in photos that the keyboard was good for me (wide Enter, left Shift and Backspace); you can check this at images.google.com, where on most photos they're all wide. When I finally bought it (Z830-A2S), the keyboard was different: the Enter is narrow and the left Shift is "split" into Shift and backslash keys (probably 5% of the photos at images.google.com). Is it normal for manufacturers to change this during the production cycle, or can this vary between different contractors? But the main point: is it possible to determine this from the full model name, or from somewhere else, without visiting a store?

    Read the article

  • Not able to safely remove external disk after having mounted and unmounted a VHD on it

    - by Agnel Kurian
    I am using Windows 7 SP1. I have an external hard disk (Seagate 500 GB) which I am able to use without problems most of the time. I am able to plug it in, use it and then safely unmount it via the "Eject USB Mass Storage Device" option in the taskbar tray. However, if I attach a VHD file located on this disk using "Disk Management", then detach the VHD and finally try to safely disconnect the disk via the system tray, I get an error which says: "Problem Ejecting USB Mass Storage Device: Windows can't stop your 'Generic volume' device because a program is still using it. Close any programs that might be using the device, and then try again later." How do I avoid this problem? Which process could still be accessing the device (even after I have closed the "Disk Management" application)?

    Read the article

  • Host forwarding fails, server is up, domain name tests ambiguous

    - by jayunit100
    I have a domain name registered with http://www.registryrocket.com/. The "main" site, which is called rudolfcode.net, is registered under GoDaddy and forwards to a Heroku site (rudolfcode.herokuapp.com). I have found that the main site, rudolfcode.net, works, but the HostGator forwarding has stopped working (Firefox simply fails when you point it to http://www.rudolflabs.com, which is the domain name registered by HostGator). How can I debug this issue? Finally, I have tried to run some DNS tests, and here are the results: I'm not sure what the failures mean, but I'm pretty sure that "Conecting to WWW Home Page" failing is a pretty bad sign! Thanks.
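
    One way to separate "the name does not resolve" from "the HTTP forwarding is broken" is a quick resolution-plus-request check. A rough sketch (the hostnames come from the question; everything else is an illustrative assumption):

        #!/usr/bin/env python3
        """Sketch: check DNS resolution first, then the HTTP forward, for each hostname."""
        import socket
        import urllib.request

        HOSTS = ["rudolfcode.net", "www.rudolflabs.com"]

        for host in HOSTS:
            try:
                addr = socket.gethostbyname(host)
                print(f"{host} resolves to {addr}")
            except socket.gaierror as err:
                print(f"{host} does not resolve: {err}")
                continue
            try:
                # A plain GET; follows the redirect if the registrar's forwarding is working.
                with urllib.request.urlopen(f"http://{host}", timeout=10) as resp:
                    print(f"  HTTP {resp.status}, final URL: {resp.geturl()}")
            except OSError as err:
                print(f"  HTTP request failed: {err}")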

    Read the article

  • Advanced ID3 tags handling and audio files ordering

    - by Juhele
    Some of my files do not have complete ID3 tags, and some have typos or small differences in spelling - so finally, my portable player sees “Mr. President” as a different artist from “Mr President”, and so on. I need some tool that can find similar tags and then allow me to correct the typos or, for example, override the artist in all selected files with manually entered text. The same goes for empty tag fields - sometimes the track name, album etc. are OK, but the artist is missing. I'd like to do this without touching the audio quality, of course (but this should be no problem, I think). I have already tried tools like Winamp, Songbird, other players, and Tagscanner - the most advanced free tool I tried - but it is not able to solve the problem with similar tags. Do you know such a tool? Preferably free and for Windows, if possible. However, if you know some commercial app able to do this, please let me know.
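
    If no GUI tool fits, batch tag fixes can also be scripted. A minimal sketch using the mutagen library (not mentioned in the question; assumes `pip install mutagen`, and the typo-to-canonical mapping below is a made-up example of the table you would build by hand). mutagen rewrites only the tag frames, not the audio stream:

        #!/usr/bin/env python3
        """Sketch: normalise near-duplicate artist tags with mutagen's EasyID3."""
        from pathlib import Path
        from mutagen.easyid3 import EasyID3

        CANONICAL = {
            "mr president": "Mr. President",
            "mr. president": "Mr. President",
        }

        def fix_artist(mp3_path: Path) -> None:
            try:
                tags = EasyID3(str(mp3_path))
            except Exception:            # file without an ID3 header -> skip it
                return
            artist = (tags.get("artist") or [""])[0]
            wanted = CANONICAL.get(artist.strip().lower())
            if wanted and artist != wanted:
                tags["artist"] = [wanted]
                tags.save()              # tags only; audio data is untouched
                print(f"{mp3_path.name}: '{artist}' -> '{wanted}'")

        if __name__ == "__main__":
            for path in Path("Music").rglob("*.mp3"):
                fix_artist(path)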

    Read the article

  • Deactivate SYN flooding mechanism

    - by mlaug
    I am running a server with a service on port 59380. There are more than 1000 machines out there connecting to that service. Once I need to restart the service, all those machines connect at the same time. That caused some trouble, as I saw this entry in kern.log: "TCP: Possible SYN flooding on port 59380. Sending cookies. Check SNMP counters." So I changed the sysctl net.ipv4.tcp_syncookies to 0, because the endpoints do not handle TCP SYN cookies correctly, and finally I restarted my network to get the changes into production. The next time I had to restart the service, the following message was logged: "TCP: Possible SYN flooding on port 59380. Dropping request. Check SNMP counters." How can I prevent the system from doing this? All necessary countermeasures are done by iptables...
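
    Beyond kernel tuning, one common mitigation for this kind of reconnect stampede is client-side jitter, so the 1000+ machines do not all hit the listener in the same instant. A minimal sketch (the hostname, port usage and back-off figures are assumptions for illustration):

        #!/usr/bin/env python3
        """Sketch: client reconnect with random jitter and capped exponential back-off."""
        import random
        import socket
        import time

        HOST, PORT = "service.example.net", 59380   # placeholder endpoint

        def connect_with_backoff(max_attempts: int = 8) -> socket.socket:
            delay = 1.0
            for attempt in range(1, max_attempts + 1):
                # Spread reconnects out so all clients do not arrive at once.
                time.sleep(random.uniform(0, delay))
                try:
                    return socket.create_connection((HOST, PORT), timeout=10)
                except OSError as err:
                    print(f"attempt {attempt} failed: {err}")
                    delay = min(delay * 2, 60)      # cap the back-off window
            raise ConnectionError("could not reach the service")

        if __name__ == "__main__":
            sock = connect_with_backoff()
            print("connected from", sock.getsockname())
            sock.close()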

    Read the article

  • The speed of copying a file from a PC to a USB flash drive started at 30 MB/s and decreased to 5.8 MB/s

    - by Jian Lin
    If I copy an 8 GB file from the PC to a USB flash drive, the speed will start at around 30 MB/s... maybe 28 MB/s, and then gradually, after a minute, it will go down to 15 MB/s and finally settle at 5.8 MB/s. I thought that with a hard drive, the RAM cache and the drive's internal cache would make copying a file from the PC to the hard drive appear fast at first. But for a USB flash drive, there should be no internal cache in the drive itself. Is there a RAM cache for it as well, and is that why the initial copying seems so fast?
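
    The caching idea can be checked directly: the initial burst is the operating system's write-back cache absorbing the data, and the steady rate only appears once writes are actually forced out to the stick. A rough sketch of how one might measure both numbers (the target path and sizes are placeholders):

        #!/usr/bin/env python3
        """Sketch: compare "apparent" (cached) write speed against fsync'd write speed."""
        import os
        import time

        TARGET = r"E:\speedtest.bin"       # placeholder path on the flash drive
        CHUNK = b"\0" * (4 * 1024 * 1024)  # 4 MiB per write
        CHUNKS = 64                        # 256 MiB total

        def timed_write(sync_every_chunk: bool) -> float:
            start = time.time()
            with open(TARGET, "wb") as f:
                for _ in range(CHUNKS):
                    f.write(CHUNK)
                    if sync_every_chunk:
                        f.flush()
                        os.fsync(f.fileno())   # force the data out of the OS cache
            os.remove(TARGET)
            mib = CHUNKS * len(CHUNK) / (1024 * 1024)
            return mib / (time.time() - start)

        if __name__ == "__main__":
            print(f"cached writes : {timed_write(False):6.1f} MiB/s")
            print(f"fsync'd writes: {timed_write(True):6.1f} MiB/s")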

    Read the article

  • Easily manage vsftpd virtual users?

    - by Phil
    I have a vsftpd server configured with many virtual users. Logins are stored in a Berkeley DB file, and one configuration file exists for each user to define his permissions (read-only or read-write, home directory, etc.); to do that, I use the user_config_dir parameter (set in vsftpd.conf). I am wondering if it would be possible to manage these virtual users from a simple GUI (such as a web interface). I have found some tools, but they are limited to generic vsftpd configuration, not virtual user management. Otherwise, PAM-MySQL seems to be a good way to manage users efficiently, but only the username/password and logs can be stored in the database, not the permissions. Finally, I've found this thread, but the solution is a bit awkward... Is there any way to easily manage vsftpd users?
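
    In the absence of a ready-made GUI, one low-tech option is to keep the per-user permissions in a small table and regenerate the user_config_dir files from it. A sketch under stated assumptions (the directory path and user table are invented; local_root and write_enable are standard vsftpd options):

        #!/usr/bin/env python3
        """Sketch: regenerate per-user vsftpd config files from a simple table."""
        from pathlib import Path

        USER_CONFIG_DIR = Path("/etc/vsftpd_user_conf")   # must match user_config_dir in vsftpd.conf

        # username -> (home directory, read-write?)
        USERS = {
            "alice": ("/srv/ftp/alice", True),
            "bob":   ("/srv/ftp/shared", False),
        }

        def write_configs() -> None:
            USER_CONFIG_DIR.mkdir(parents=True, exist_ok=True)
            for name, (home, writable) in USERS.items():
                lines = [
                    f"local_root={home}",
                    f"write_enable={'YES' if writable else 'NO'}",
                ]
                (USER_CONFIG_DIR / name).write_text("\n".join(lines) + "\n")
                print(f"wrote config for {name}")

        if __name__ == "__main__":
            write_configs()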

    Read the article

  • What naming pattern can I use for sequential file naming (photos) when only their relative sequence is known?

    - by Juhele
    I have some old scanned photos and I want to put them in the correct order. Unfortunately, I have no way to find out the exact order, only a relative one, like: "hm, this photo was surely taken after this one", which means organizing them step by step and manually changing the numbering again and again. Is there any program (ideally free or open source) where I could interactively put the photos in the correct order straight away (maybe by dragging them with the mouse) and finally apply some file renaming to preserve that order? Thank you in advance. PS: I'm running Windows (XP and 7), but if you know something for Linux, please let me know too.
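
    If no interactive tool turns up, the renaming step itself is easy to script once the relative order has been decided, for example by writing the filenames one per line into a text file in the desired order. A minimal sketch (folder name, list file and zero-padding width are assumptions):

        #!/usr/bin/env python3
        """Sketch: rename photos to a zero-padded sequence given a hand-ordered list."""
        from pathlib import Path

        PHOTO_DIR = Path("scans")              # folder with the scanned photos
        ORDER_FILE = PHOTO_DIR / "order.txt"   # one existing filename per line, in the desired order

        def rename_in_order() -> None:
            names = [line.strip() for line in ORDER_FILE.read_text().splitlines() if line.strip()]
            for index, name in enumerate(names, start=1):
                src = PHOTO_DIR / name
                dst = PHOTO_DIR / f"{index:04d}_{name}"   # prefix keeps the original name visible
                if src.exists() and not dst.exists():
                    src.rename(dst)
                    print(f"{name} -> {dst.name}")

        if __name__ == "__main__":
            rename_in_order()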

    Read the article

  • Why is Windows 7 so slow to log-on when my Wireless USB dongle is plugged in?

    - by Sam
    I'm using an Edimax Wireless USB dongle (model EW-7711UTN) on Windows 7 64-bit. The dongle works brilliantly (it was truly plug-and-play) except for one thing. I've noticed that if the dongle is plugged in whilst I'm logging on to Windows it introduces a massive delay - after I type in my password and hit enter, Windows shows the "Welcome" screen, then the screen goes black for 2 minutes or so (with just the cursor showing) before finally displaying my desktop. If I unplug the dongle before I enter my password, Windows goes straight to my desktop. Even though the dongle didn't appear to need drivers for Windows 7, I did try downloading the latest ones from the Edimax website, but that made no difference.

    Read the article

  • No blocked ports on internal interface of ASA

    - by blsub6
    I have a Cisco ASA 5505 with three interfaces: internal (security level 100), DMZ (50) and external (0). The internal interface has an IPsec VPN tunnel to my internal network. I couldn't log in to my domain because of all of the port restrictions and such. I tried monitoring the traffic through the interface, seeing what it was blocking and then unblocking those ports, but even then it didn't work completely correctly. I finally just added a rule to permit any IP traffic from any network to any network on the internal interface and, of course, it worked fine. But is that good security practice? Should I be blocking ports on an interface that is internal, over a VPN, and at the highest security level?

    Read the article

  • Postgres backup

    - by Abbass
    Hello, I have a Bacula script that does an automatic backup of a Postgres database. The script makes two backups of the database using pg_dump: the schema only and the data only.

        /usr/bin/pg_dump --format=c -s $dbname --file=$DUMPDIR/$dbname.schema.dump
        /usr/bin/pg_dump --format=c -a $dbname --file=$DUMPDIR/$dbname.data.dump

    The problem is that I can't figure out how to restore it with pg_restore. Do I need to create the database and the users first, then restore the schema and finally the data? I did the following:

        pg_restore --format=c -s -C -d template1 xxx.schema.dump
        pg_restore --format=c -a -d xxx xxx.data.dump

    The first restore creates the database with empty tables, but the second gives many errors like this one:

        pg_restore: [archiver (db)] COPY failed: ERROR: insert or update on table "Table1" violates foreign key constraint "fkf6977a478dd41734"
        DETAIL: Key (contentid)=(1474566) is not present in table "Table23".

    Any ideas?

    Read the article

  • Simplification of Apache+Subversion multidirectory configuration

    - by Reinderien
    Hello. With your excellent advice, I've finally pieced together this functional Apache configuration for my Subversion service:

        # Macro to make an SVN repo set
        <Macro SVNDir $user>
          <Location /svn/$user>
            # Mandatory HTTPS, log in using Active Domain
            SSLRequireSSL
            AuthPAM_Enabled on
            AuthType Basic
            AuthBasicAuthoritative off
            AuthName "PAM"
            Require user AD\$user
            # Needed to squash spurious error messages
            AuthUserFile /dev/null
            # SVN stuff
            DAV svn
            SVNParentPath /var/www/svn/$user
          </Location>
        </Macro>

        # List of accounts
        Use SVNDir user1
        Use SVNDir user2
        # ...

    It works, but it isn't optimal. I'd like to somehow redo this so that it can just scan the list of directories in /var/www/svn and automatically do this for each of them. Is that possible? Thanks.
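
    mod_macro itself cannot iterate over a directory, but the account list can be generated. A sketch of one possible approach: a small script writes the Use lines into a separate file that the main configuration pulls in with Include (the paths and output filename are assumptions, not part of the original setup):

        #!/usr/bin/env python3
        """Sketch: regenerate the per-user `Use SVNDir ...` lines from /var/www/svn."""
        from pathlib import Path

        SVN_PARENT = Path("/var/www/svn")
        OUTPUT = Path("/etc/apache2/conf.d/svn-users.conf")   # Include this file from the main config

        def generate() -> None:
            users = sorted(p.name for p in SVN_PARENT.iterdir() if p.is_dir())
            lines = ["# Generated file - do not edit by hand"]
            lines += [f"Use SVNDir {user}" for user in users]
            OUTPUT.write_text("\n".join(lines) + "\n")
            print(f"wrote {len(users)} account(s) to {OUTPUT}")

        if __name__ == "__main__":
            generate()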

    Read the article

  • Mail sent from local Postfix marked as "possible phishing" in Outlook

    - by leo grrr
    Hi folks, Sorry for the newbie question--this is not my area of expertise by a long shot. I work at a small development shop and we finally got around to doing code reviews. (Yay!) I set up an instance of Review Board -- an open-source code review tool -- on one of our local servers but it doesn't seem to like talking to our hosted Exchange server to send notification emails. I decided to just install Postfix on that same box and send mail from localhost, which is working much more reliably, but Outlook disables all links in the email announcements and marks it as possible phishing. What is making these emails look suspicious and what can I change? Would the best thing be to figure out how to relay to Exchange from Postfix? Thanks!

    Read the article

  • USB Format Error

    - by Dan Finan
    I'm having a real headache trying to reformat a USB drive. Initially it had a 200 MB EFI partition, which caused the drive to disappear altogether. Since then I have opened a command prompt and wiped the disk using diskpart; it took a few attempts, but it finally cleaned the drive. The drive has since reappeared as drive E:, however I am unable to access it and Windows prevents me from reformatting it - I am just presented with "Windows was unable to complete the format". It's now acting like a CD drive instead of removable storage. I've tried going through Disk Management and I'm presented with the same error. I've also removed the USB controllers from Device Manager - when the drive is connected again it re-installs the drivers and acts the same way. Any help given will be greatly appreciated, thank you. (Windows 7 machine)

    Read the article

  • GDI+ & Delphi, PNG resource, DrawImage, ColorConversion -> Out of Memory

    - by Paul
    I have started to toy around with GDI+ in Delphi 2009. Among the things I wanted to do was to load a PNG resource and apply a color conversion to it when drawing it to the Graphics object. I am using the code provided at http://www.bilsen.com/gdiplus/. To do that I just added a new constructor to TGPBitmap that uses the same code found at www.codeproject.com/KB/GDI-plus/cgdiplusbitmap.aspx (C++) or www.masm32.com/board/index.php?topic=10191.0 (MASM), converted to Delphi. For reference, the converted code is as follows:

        constructor TGPBitmap.Create(const Instance: HInst; const PngName: String; dummy: PngResource_t);
        const
          cPngType: string = 'PNG';
        var
          hResource: HRSRC;
          imageSize: DWORD;
          pResourceData: Pointer;
          hBuffer: HGLOBAL;
          pBuffer: Pointer;
          pStream: IStream;
        begin
          inherited Create;
          hResource := FindResource(Instance, PWideChar(PngName), PWideChar(cPngType));
          if hResource = 0 then Exit;
          imageSize := SizeofResource(Instance, hResource);
          if imageSize = 0 then Exit;
          pResourceData := LockResource(LoadResource(Instance, hResource));
          if pResourceData = nil then Exit;
          hBuffer := GlobalAlloc(GMEM_MOVEABLE, imageSize);
          if hBuffer <> 0 then
          begin
            try
              pBuffer := GlobalLock(hBuffer);
              if pBuffer <> nil then
              begin
                try
                  CopyMemory(pBuffer, pResourceData, imageSize);
                  if CreateStreamOnHGlobal(hBuffer, FALSE, pStream) = S_OK then
                  begin
                    GdipCheck(GdipCreateBitmapFromStream(pStream, FNativeHandle));
                  end;
                finally
                  GlobalUnlock(hBuffer);
                  pStream := nil;
                end;
              end;
            finally
              GlobalFree(hBuffer);
            end;
          end;
        end;

    The code seems to work fine, as I am able to draw the loaded image without any problems. However, if I try to apply a color conversion when drawing it, I get a lovely error: (GDI+ Error) Out of Memory. If I load the bitmap from a file, or if I create a temporary bitmap to which I draw the initial bitmap and then use the temporary, it works just fine. What bugs me is that if I take the C++ project from CodeProject, add the same PNG as a resource and use the same color conversion (in other words, do the exact same thing I am doing in Delphi, in the same order and with the same function calls that happen to go to the same DLL), then it works. The C++ code looks like this:

        const Gdiplus::ColorMatrix cTrMatrix = { {
          {1.0, 0.0, 0.0, 0.0, 0.0},
          {0.0, 1.0, 0.0, 0.0, 0.0},
          {0.0, 0.0, 1.0, 0.0, 0.0},
          {0.0, 0.0, 0.0, 0.5, 0.0},
          {0.0, 0.0, 0.0, 0.0, 1.0}
        } };
        Gdiplus::ImageAttributes imgAttrs;
        imgAttrs.SetColorMatrix(&cTrMatrix, Gdiplus::ColorMatrixFlagsDefault, Gdiplus::ColorAdjustTypeBitmap);
        graphics.DrawImage(*pBitmap,
          Gdiplus::Rect(0, 0, pBitmap->m_pBitmap->GetWidth(), pBitmap->m_pBitmap->GetHeight()),
          0, 0, pBitmap->m_pBitmap->GetWidth(), pBitmap->m_pBitmap->GetHeight(),
          Gdiplus::UnitPixel, &imgAttrs);

    The Delphi counterpart is:

        const
          cTrMatrix: TGPColorMatrix = (
            M: ((1.0, 0.0, 0.0, 0.0, 0.0),
                (0.0, 1.0, 0.0, 0.0, 0.0),
                (0.0, 0.0, 1.0, 0.0, 0.0),
                (0.0, 0.0, 0.0, 0.5, 0.0),
                (0.0, 0.0, 0.0, 0.0, 1.0)));
        var
          lImgAttrTr: IGPImageAttributes;
          lBitmap: IGPBitmap;
        begin
          // ...
          lImgAttrTr := TGPImageAttributes.Create;
          lImgAttrTr.SetColorMatrix(cTrMatrix, ColorMatrixFlagsDefault, ColorAdjustTypeBitmap);
          aGraphics.DrawImage(lBitmap,
            TGPRect.Create(0, 0, lBitmap.Width, lBitmap.Height),
            0, 0, lBitmap.Width, lBitmap.Height,
            UnitPixel, lImgAttrTr);

    I am completely clueless as to what may be causing the issue, and Google has not been of any help. Any ideas, comments and explanations are highly appreciated.

    Read the article

  • Gitosis installation of public key not working...

    - by user29600
    I've been following this tutorial to install and set up Git on Ubuntu Server 10.04, using Windows 7 as a client. However, after finally figuring out how it works (I executed gitosis-init a bunch of times on the wrong key), I copied the id_rsa.pub file over to the server into the /tmp folder and ran it again. Unfortunately it still doesn't work, and when I execute git clone [email protected]:gitosis-admin.git it asks for gitosis's password rather than the RSA passphrase. I'm assuming it's the same problem this guy is having here... However, after following his instructions (purging git-core and gitosis, manually removing the /srv/gitosis folder, and following the installation instructions again with the proper id_rsa.pub file this time), I'm still having the same issue. Does anyone know what I'm doing wrong? Is there any way to probe for more information that might help in solving this?

    Read the article

  • New folder doesn't appear in Finder

    - by rxt
    Many times I have the following problem. I create a new folder, but it doesn't appear in the Finder. Right now I have this problem with a folder created in Eclipse. After a while it appears, but I cannot tell when. I can see the folder in the terminal. A similar problem happens when I rename a folder or file. Then I have to move out of the parent folder, open another folder, go back, and most of the time it's there. (Finally the new folder appeared after creating another folder in the finder itself.) I'm using 10.8 Mountain Lion now, but this is not a problem of this release. I have this problem for several years now. Maybe I'm the only one with this problem? Is there a way to get this working normally?

    Read the article

  • Intermittent graphics issue

    - by Rob
    An older (~3 year old) PC I have passed on to my younger brother has a Sony 17" LCD monitor and uses on-board Intel GMA 3100 graphics (supports Aero under Windows 7 and no more is required). Recently he has complained to me twice that after booting up the screen looks all 'fuzzy'. Initially I thought maybe he messed up the ClearType settings by mistake, but resetting that did nothing. The only thing that finally worked was changing the resolution to something else temporarily, then setting it back to the original native (highest supported) resolution. The second time this happened the same 'fix' worked. This is happening only intermittently as of now and is not predictable/reproducible. I suspect either the monitor or on-board graphics is dying. How can I check and confirm which of the two it is (or maybe something else you guys can think of)? If it's the on-board graphics, would I be able to keep using the same motherboard with the addition of a cheap PCI-E graphics card?

    Read the article

  • Justification of Amazon EC2 Performance

    - by Adroidist
    I have a .jar file that represents a server which receives an image as bytes over TCP (at most 500 KB), writes it to a file, applies a Sobel filter to the image, and sends the result back over the TCP socket to the client side. I ran it on my laptop and it was very fast, but when I put it on an Amazon EC2 m1.large instance, I found it to be very slow - around 10 times slower. It might be inefficiency in the code, but in fact my code does nothing more than receive the image (like any byte file), run the Sobel algorithm, and send it back. I have the following questions:

    1. Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2.
    2. Even if the code is not that efficient, the server is ultimately handling a very low load (just one client); does the "inefficient" code justify such performance?
    3. My laptop is only dual core. Why would the Amazon EC2 server have worse performance than my laptop? How is this explained?

    Excuse me for my ignorance.

    Read the article

  • Non-blocking client/server chat application in Java using NIO

    - by Amith
    I built a simple chat application using NIO channels. I am very much new to networking as well as threads. This application is for communicating with a server (a server/client chat application). My problem is that multiple clients are not supported by the server. How do I solve this problem? What's the bug in my code?

        public class Clientcore extends Thread {
          SelectionKey selkey = null;
          Selector sckt_manager = null;

          public void coreClient() {
            System.out.println("please enter the text");
            BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
            SocketChannel sc = null;
            try {
              sc = SocketChannel.open();
              sc.configureBlocking(false);
              sc.connect(new InetSocketAddress(8888));
              int i = 0;
              while (!sc.finishConnect()) {
              }
              for (int ii = 0; ii > -22; ii++) {
                System.out.println("Enter the text");
                String HELLO_REQUEST = stdin.readLine().toString();
                if (HELLO_REQUEST.equalsIgnoreCase("end")) {
                  break;
                }
                System.out.println("Sending a request to HelloServer");
                ByteBuffer buffer = ByteBuffer.wrap(HELLO_REQUEST.getBytes());
                sc.write(buffer);
              }
            } catch (IOException e) {
              e.printStackTrace();
            } finally {
              if (sc != null) {
                try {
                  sc.close();
                } catch (Exception e) {
                  e.printStackTrace();
                }
              }
            }
          }

          public void run() {
            try {
              coreClient();
            } catch (Exception ej) {
              ej.printStackTrace();
            }
          }
        }

        public class ServerCore extends Thread {
          SelectionKey selkey = null;
          Selector sckt_manager = null;

          public void run() {
            try {
              coreServer();
            } catch (Exception ej) {
              ej.printStackTrace();
            }
          }

          private void coreServer() {
            try {
              ServerSocketChannel ssc = ServerSocketChannel.open();
              try {
                ssc.socket().bind(new InetSocketAddress(8888));
                while (true) {
                  sckt_manager = SelectorProvider.provider().openSelector();
                  ssc.configureBlocking(false);
                  SocketChannel sc = ssc.accept();
                  register_server(ssc, SelectionKey.OP_ACCEPT);
                  if (sc == null) {
                  } else {
                    System.out.println("Received an incoming connection from " + sc.socket().getRemoteSocketAddress());
                    printRequest(sc);
                    System.err.println("testing 1");
                    String HELLO_REPLY = "Sample Display";
                    ByteBuffer buffer = ByteBuffer.wrap(HELLO_REPLY.getBytes());
                    System.err.println("testing 2");
                    sc.write(buffer);
                    System.err.println("testing 3");
                    sc.close();
                  }
                }
              } catch (IOException e) {
                e.printStackTrace();
              } finally {
                if (ssc != null) {
                  try {
                    ssc.close();
                  } catch (IOException e) {
                    e.printStackTrace();
                  }
                }
              }
            } catch (Exception E) {
              System.out.println("Ex in servCORE " + E);
            }
          }

          private static void printRequest(SocketChannel sc) throws IOException {
            ReadableByteChannel rbc = Channels.newChannel(sc.socket().getInputStream());
            WritableByteChannel wbc = Channels.newChannel(System.out);
            ByteBuffer b = ByteBuffer.allocate(1024); // read 1024 bytes
            while (rbc.read(b) != -1) {
              b.flip();
              while (b.hasRemaining()) {
                wbc.write(b);
                System.out.println();
              }
              b.clear();
            }
          }

          public void register_server(ServerSocketChannel ssc, int selectionkey_ops) throws Exception {
            ssc.register(sckt_manager, selectionkey_ops);
          }
        }

        public class HelloClient {
          public void coreClientChat() {
            Clientcore t = new Clientcore();
            new Thread(t).start();
          }

          public static void main(String[] args) throws Exception {
            HelloClient cl = new HelloClient();
            cl.coreClientChat();
          }
        }

        public class HelloServer {
          public void coreServerChat() {
            ServerCore t = new ServerCore();
            new Thread(t).start();
          }

          public static void main(String[] args) {
            HelloServer st = new HelloServer();
            st.coreServerChat();
          }
        }

    Read the article

  • How can I make the storage of C++ lambda objects more efficient?

    - by Peter Ruderman
    I've been thinking about storing C++ lambdas lately. The standard advice you see on the Internet is to store the lambda in a std::function object. However, none of this advice ever considers the storage implications. It occurred to me that there must be some seriously black voodoo going on behind the scenes to make this work. Consider the following class that stores an integer value:

        class Simple {
        public:
          Simple( int value ) {
            puts( "Constructing simple!" );
            this->value = value;
          }

          Simple( const Simple& rhs ) {
            puts( "Copying simple!" );
            this->value = rhs.value;
          }

          Simple( Simple&& rhs ) {
            puts( "Moving simple!" );
            this->value = rhs.value;
          }

          ~Simple() {
            puts( "Destroying simple!" );
          }

          int Get() const { return this->value; }

        private:
          int value;
        };

    Now, consider this simple program:

        int main() {
          Simple test( 5 );
          std::function<int ()> f = [test] () { return test.Get(); };
          printf( "%d\n", f() );
        }

    This is the output I would hope to see from this program:

        Constructing simple!
        Copying simple!
        Moving simple!
        Destroying simple!
        5
        Destroying simple!
        Destroying simple!

    First, we create the value test. We create a local copy on the stack for the temporary lambda object. We then move the temporary lambda object into memory allocated by std::function. We destroy the temporary lambda. We print our output. We destroy the std::function. And finally, we destroy the test object. Needless to say, this is not what I see. When I compile this on Visual C++ 2010 (release or debug mode), I get this output:

        Constructing simple!
        Copying simple!
        Copying simple!
        Copying simple!
        Copying simple!
        Destroying simple!
        Destroying simple!
        Destroying simple!
        5
        Destroying simple!
        Destroying simple!

    Holy crap, that's inefficient! Not only did the compiler fail to use my move constructor, but it generated and destroyed two apparently superfluous copies of the lambda during the assignment. So, here finally are the questions: (1) Is all this copying really necessary? (2) Is there some way to coerce the compiler into generating better code? Thanks for reading!

    Read the article

  • How to setup VM in KVM? Qcow or LVM etc.

    - by JohnAdams
    Finally, after quite a bit of this-versus-that, I have chosen to virtualize a couple of my servers with KVM. I did do a test setup as well, but I have a few questions about setting up VMs in KVM; I would appreciate pointers. What is the best storage to use - qcow2 or LVM? I like the fact that I can copy the VM file easily with qcow2, but what about LVM - how do I take a backup or make a copy on a development server to play with? I know I can clone an LVM volume, but how do I bring it to my development server? How do I set up the guest partitioning? For example, when setting up Ubuntu inside Ubuntu, do I choose LVM for that VM or regular fdisk partitioning? Can I increase the partition size later if I need a bigger disk?

    Read the article
