Search Results

Search found 25946 results on 1038 pages for 'cost based optimizer'.


  • What patterns book for iOS development contains this specific information? [closed]

    - by Brett Ryan
    I've read several books on iOS development and Objective-C, but most of them teach how to work with interfaces while keeping the model inside the view controller; i.e. a UITableViewController-based view will simply have an NSArray as its model. I'm interested in best practices for designing the structure of an application. Specifically: How to separate a model from the view controller. I think I know how to do this by replacing the NSArray-style example with a specific model object; what I do not know is how to alert the view when the model changes. In .NET I would solve this by conforming to INotifyPropertyChanged and data binding, and similarly in Java I would use PropertyChangeListener. How to create a service layer for my domain objects. For example, I want to learn the best way to create a service for a hypothetical Widget object that manages an internal DB, plus services for communicating with remote endpoints, in a way that lets interface components subscribe to events such as widgetUpdated. These services should be singleton classes, somehow dependency-injected into model/controller objects. Books I've read so far: Programming in Objective-C (4th Edition); Beginning iOS 5 Development: Exploring the iOS SDK; The iOS 5 Developer's Cookbook: Expanded Electronic Edition: Essentials and Advanced Recipes for iOS Programmers; Learn Objective-C on the Mac: For OS X and iOS. I've also purchased, but not yet read, The Core iOS 6 Developer's Cookbook (4th Edition) and Programming in Objective-C (5th Edition). I come from a Java and C# background with 15 years of experience, and I understand that many of the ways I would do things in those languages may not fit the Objective-C way of developing applications. Could someone recommend a book that covers this specific subject matter?
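    A minimal sketch of Key-Value Observing, the closest Cocoa analogue to the INotifyPropertyChanged pattern mentioned above; the widgetStore object and its widgets key are hypothetical names, not taken from any of the books:

        // In the view controller: observe a (hypothetical) model property.
        [self.widgetStore addObserver:self
                           forKeyPath:@"widgets"
                              options:NSKeyValueObservingOptionNew
                              context:NULL];

        // KVO callback, invoked whenever the observed property changes.
        - (void)observeValueForKeyPath:(NSString *)keyPath
                              ofObject:(id)object
                                change:(NSDictionary *)change
                               context:(void *)context
        {
            [self.tableView reloadData]; // refresh the view from the model
        }

        // Balance the registration, e.g. in dealloc.
        [self.widgetStore removeObserver:self forKeyPath:@"widgets"];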

    Read the article

  • MS Marketing Strategy

    - by Aaron Kowall
    I found this week's Windows Phone 8 event interesting, not just because the new OS looks to have some fantastic new features, but because of the wait for release. If I were a Nokia shareholder (which I am not) I'd be very unhappy with MS announcing that Windows Phone 8 will NOT work with current hardware. So, some very nice Lumia devices that arrived relatively recently at carriers and retailers are now end-of-life. I understand that MS needs to demonstrate progress against iOS and Android, and that there is a Windows 8 tie-in they are trying to capitalize on (and MS IS still all about Windows). However, it's a bit of a kick to the partners that have invested in the platform with pretty decent devices (Samsung, HTC and of course Nokia). Personally, I'm still using a Samsung Focus. I was seriously considering upgrading to a Lumia 900 (we just got Lync mobile available) but will now wait it out until new devices arrive with Windows Phone 8. If MS had waited to announce, I would happily have upgraded to the Lumia; if I then found out it couldn't be upgraded, that would be a gamble I took and lost, and I'd live with it. Now, however, I can see the future and know that waiting is the better option for me, so that is one sale Nokia will miss out on. Based on some chats I've seen on mobile forums, I'm certainly far from the only one. I'm sure glad I'm not in charge of marketing at MS. There are tough decisions to be made there, and I'm pretty sure you piss somebody off regardless. Technorati Tags: WP8,Lumia,Nokia,Samsung

    Read the article

  • A space-efficient filesystem for grow-as-needed virtual disks?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks: since they only grow as needed, they are perfect for fast backup, overallocation, and quick creation. Since file systems are usually designed for physical disks, they have a tendency to use the whole available area [1] in order to increase speed [2] or reliability [3]. I'm searching for a filesystem that does the exact opposite: one that tries to touch the minimum number of blocks needed, through aggressive block reuse. I would happily trade some performance for space usage. There is already a similar question, but it is rather general; I have a very specific goal: space-efficiency.
    [1] Like page caching uses all the free physical memory
    [2] Canonical example: online defragmentation
    [3] Canonical example: snapshotting
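    To make the grow-as-needed behaviour concrete, a minimal sketch using a sparse file on Linux; the file name is illustrative:

        # Create a 10 GiB sparse file: the apparent size is 10G,
        # but almost no blocks are actually allocated.
        truncate -s 10G disk.img
        ls -lh disk.img   # reports the apparent size (10G)
        du -h disk.img    # reports the blocks actually allocated (~0)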

    Read the article

  • How to make TortoiseHg pull certain branch only?

    - by mark
    I have cloned the default branch of a big repository and now I wish to pull from the server using the TortoiseHg client. However, TortoiseHg proposes to pull from all the branches. Is it possible to instruct it to pull from the current branch only? So far I have seen suggestions to: set up a hook on the client side to reject pulls from unwanted branches; check incoming revisions in TortoiseHg and only pull the ones belonging to the current branch; use the Mercurial ACL extension to deny access to all the branches but the current one. I dislike all of these solutions, since they are all client-based: in all of them TortoiseHg actually pulls all of the branches (even in the second, where the pulled revisions are arranged into a bundle presented in the incoming-revisions view). Is there an hg pull -b BRANCH equivalent in TortoiseHg? Thanks.
    EDIT: I know how to do all of this using the Mercurial command-line client, hg.exe. This question is specifically about the TortoiseHg GUI client.

    Read the article

  • gtk2/mate apps are choppy on debian testing [migrated]

    - by b0ti
    I have recently upgraded to a Core i7 system and now all the MATE apps (GTK2-based) are very slow. Basically, when I switch to a workspace that has a couple of mate-terminals open, they are redrawn as if they were being sent over the network. GTK3 apps and Firefox work properly and refresh instantly. I'm running Debian testing. I have tried reinstalling xorg and related packages, and everything seems to work fine except for this. Here is my Xorg.log. Any hints?

    Read the article

  • Choosing a monitoring system for a dynamically scaling environment: Nagios v. Zabbix

    - by wickett
    When operating in the cloud and scaling boxes automatically, there are certain monitoring issues that one experiences. Sometimes we might be monitoring 10 boxes and sometimes 100; the machines scale up and down based on demand. Right now, I think the best solution is to choose a monitoring system that allows instantiation of targets via calls to an API. But is this really the best? I like the idea of dynamic discovery, but that is also a problem in the cloud, seeing that the targets are not all in the same subnet. What monitoring solutions allow for a scaling environment like this? Zabbix currently has a draft API, but I have been unable to find a similar API for Nagios. Is there one? Does anyone have any alternate suggestions besides Nagios and Zabbix?
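    To illustrate the kind of API-driven registration the question has in mind, a minimal Python sketch against the Zabbix JSON-RPC API; the URL, credentials, host name, and group id are placeholders, and parameter names vary somewhat between Zabbix versions:

        import json, urllib.request

        ZABBIX_URL = "http://zabbix.example.com/api_jsonrpc.php"  # placeholder

        def rpc(method, params, auth=None):
            """Send one JSON-RPC request to the Zabbix API and return its result."""
            payload = json.dumps({"jsonrpc": "2.0", "method": method,
                                  "params": params, "auth": auth, "id": 1}).encode()
            req = urllib.request.Request(ZABBIX_URL, payload,
                                         {"Content-Type": "application/json-rpc"})
            return json.load(urllib.request.urlopen(req))["result"]

        # Log in, then register a freshly booted box as a monitored host.
        token = rpc("user.login", {"user": "Admin", "password": "zabbix"})
        rpc("host.create", {"host": "web-042",
                            "interfaces": [{"type": 1, "main": 1, "useip": 1,
                                            "ip": "10.0.0.42", "dns": "", "port": "10050"}],
                            "groups": [{"groupid": "2"}]}, auth=token)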

    Read the article

  • people_dl_import shows millions of records

    - by amit lohogaonkar
    We have a situation now on prod in a SharePoint 2007-based intranet platform: it shows thousands of records under the people_dl_import category with the format spsimport://?$$dl$$/domain1/domain2/domain3/. The import was not stopping; it added millions of records to the database and the disk was on the verge of filling up. On other servers, like dev, we have very little data in this category, and the format is like spsimport://domainname?$$dl$$?..., which is good: only 6,000 rows there, while prod has 2 million crawled under the people_dl_import category. I need to know the cause of this garbage data and how to fix it. I tried resetting the content source, and I will do a full import this weekend to see if the garbage data gets cleared. Any idea on the cause of this issue?

    Read the article

  • Mutt: apply command to all tagged messages

    - by mrucci
    From the mutt manual: Once you have tagged the desired messages, you can use the tag-prefix operator, which is the ; (semicolon) key by default. When the tag-prefix operator is used, the next operation will be applied to all tagged messages if that operation can be used in that manner. But it seems that I can only execute functions that are already bound to a specific keyboard shortcut. For example, I can use ;d to delete all selected messages. What if I want to apply an "unbound" function (such as purge-message)? I have also tried something based on :exec tag-prefix or :push tag-prefix, without success.
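    One commonly used workaround is to give the pair a binding of its own via a macro; a minimal .muttrc sketch, with an arbitrary Esc-p key choice:

        # Apply purge-message to all tagged messages. <tag-prefix> is the same
        # operator as the ; key; chaining it with an otherwise-unbound function
        # inside a macro effectively binds the combination.
        macro index \ep "<tag-prefix><purge-message>" "purge all tagged messages"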

    Read the article

  • UDK - How to make sure a PhysicalMaterial mask actually works?

    - by tomacmuni
    Hello, I have been reading the UDK documentation about physical materials and masks. I have my 1-bit BMP mask and the two physical-material assets I want to fire off in the black and white channels. I have applied my material to both a rigid body and a skeletal mesh, and neither apparently uses the mask. If I assign a regular physical material (one that doesn't use a mask) it works fine, but this defeats the point, because it gives only one hit reaction. The documentation states that it is possible to extend a class on which we want to use a physical material, based on the KActor class's usage. How do I do that? Here is the quote: "The following properties [i.e., ImpactEffect, a particle system to spawn at the point of impact, and ImpactSound, a sound to play when an impact occurs] allow you to attach sounds and effects to physical collisions. These only work on classes which support them, which at the moment is only KActor. By looking at the implementation in KActor though, you can add this functionality to other classes (or you can subclass KActor)." Essentially: how do I make sure a PhysicalMaterial mask actually works? What code could be added to a skeletal mesh class, perhaps, to get it going? Any help appreciated.
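    Following the quoted advice, a bare UnrealScript sketch of the subclassing route; the class name is made up, and the real work is transplanting KActor's impact-handling code if you need it on another base class:

        // MaskedPhysicsActor.uc (hypothetical name)
        class MaskedPhysicsActor extends KActor
            placeable;

        // KActor already implements the ImpactEffect / ImpactSound plumbing,
        // so a subclass inherits it. For a skeletal-mesh-based class you would
        // instead copy that implementation across, as the documentation suggests.

        defaultproperties
        {
        }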

    Read the article

  • Prompt for user group when logging into OSX domain

    - by mattdwen
    When a user is a member of more than one group, logging in to a 10.6 machine shows a prompt asking which group to apply settings for. We're using the groups to mount different shares, e.g. Production and Accounts, based on user membership. Often a user is a member of more than one group and needs all the drives available. The Open Directory server is also running 10.6. Is there a way to skip this prompt and apply the settings for all groups? I can foresee that there may be conflicts between group settings, but perhaps a priority can be set too? Or is this totally the wrong way to go about this?

    Read the article

  • Build vs Buy Webcast: November 8, 2012

    - by TammyBednar
    Date: Thursday, November 8, 2012, 1:00 PM EST. You have a choice: do you build your own database platform, or buy a pre-engineered database appliance? Building a high-availability database platform presents unique challenges. Combining servers, storage, networking, OS, firmware, and database is complicated and raises important concerns: Will coordination between multiple SMEs delay deployment? Will it be reliable? Will it scale? Will routine maintenance consume precious IT-staff time? Ultimately, will it work? Enter the Oracle Database Appliance, a complete package of software, server, storage, and networking that's engineered for simplicity. It saves time and money by simplifying deployment, maintenance, and support of database workloads. Plus, it's based on Intel Xeon processors to ensure a high level of performance and scalability. Attend this Webcast to hear customer stories and discover how the Oracle Database Appliance: increases ROI by reducing capital and operational expenses; frees IT staff by reducing deployment and management time from weeks to hours; and takes the worry out of supporting mission-critical application workloads. Register for this Webcast today!

    Read the article

  • Mark packets across computers?

    - by eudemo
    I use Transmission on Ubuntu and I'm having this issue, which basically says that QoS is broken because there is no way to limit which outgoing ports Transmission uses. I was thinking of doing a dirty, ugly hack: create an interface alias and define QoS based on source address. But I was wondering whether there is another way. Is it possible to mark the packets on the originating machine in some way, using the owner and mark modules of iptables, and send this to the router that does the QoS? From what I understand, marks in iptables only apply to the local machine, so this will not work; but is there another way?
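    One technique that does survive the trip to the router is DSCP tagging: a mark from the mark module stays on the local machine, but the DSCP field rides inside the IP header itself. A sketch, assuming Transmission runs as the debian-transmission user:

        # Tag every packet generated by the transmission user with the
        # low-priority CS1 class; the router can then match on DSCP in its QoS.
        iptables -t mangle -A OUTPUT -m owner --uid-owner debian-transmission \
                 -j DSCP --set-dscp-class CS1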

    Read the article

  • How do I create an AMI for openSUSE in Amazon EC2?

    - by jgaa
    I need to scale some services from local servers to Amazon EC2. The current production environment is based on the latest openSUSE. In order to keep things simple, I want to run the EC2 instances in the same environment. However, I'm unable to find any public SUSE AMIs, or even how-tos on this subject. I've seen a few similar questions in different forums, without any resolutions. So actually, I have two questions: 1) Is this at all doable? 2) If so, is there any documentation or how-to available somewhere? Jarle

    Read the article

  • Why isn't Adobe software multilingual?

    - by Takowaki
    I work in a design studio with several non-native English speakers (in this case, Japanese and Chinese). I have installed the latest Creative Suite (CS5) on our Mac stations and was once again disappointed that, unlike so many modern software packages, there is still no option to change the language of the software. Most of the team have been good enough to work on their English, but it would be much more helpful for them to work in their native language. Why does Adobe continue to require separate licenses based on language? Are they operating under the assumption that only a single language is ever spoken in any given country? Are there any third-party options, or does Adobe at least have some sort of statement regarding this policy?

    Read the article

  • Java and C# in web development [on hold]

    - by azalut
    I am wondering whether C# development (ASP.NET) is more of a "rapid development" experience, or something "big" like Java EE/Spring. We all know that RoR and Django are really rapid-development frameworks; so is C# closer to Java's long development cycles, or to frameworks like those two? I am, for now, an amateur Java programmer, and sometimes I get annoyed by the amount of code that has to be written to create even a short CRUD app; we need a lot of skills to create even a small one. I want a change, at least for some time, and to learn something new. I have tried (just a few hours each): first RoR, then Django, and now I am writing in C#. It seems to be like Java, but a little bit extended. With respect to future work as a professional coder: is it profitable to know both competing technologies, Java (and its frameworks) and C# with .NET (ASP.NET, for example)? Maybe the better choice is Python? Or should I stop being stupid, stick with Java but with another framework (and master my Java skills), or with JavaScript and jQuery, to be better at web development? Admittedly this question depends on your own opinions, which is why I know it could be blocked by admins. But the main question stands at the top of the post: is C# web development rapid, or closer to Java? I am afraid that if I don't try, I will regret it in the future, when I wake up and think: oh my god, how could I not get familiar with (another_technology_or_language)? Thanks for your attention :) ps I asked the same question on Stack Overflow, but it was put on hold for being opinion-based. Hope it fits here ;)

    Read the article

  • Getting the total number of processors a computer has (C#)

    - by mbcrump
    Here is a code snippet for getting the total number of processors a computer has without using Environment.ProcessorCount. I found out that Environment.ProcessorCount does not necessarily return the correct value on some Intel-based CPUs.

        using System;
        using System.Runtime.InteropServices;

        namespace ConsoleApplication4
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int c = ProcessorCount;
                    Console.WriteLine("The computer has {0} processors", c);
                    Console.ReadLine();
                }

                private static class NativeMethods
                {
                    [StructLayout(LayoutKind.Sequential)]
                    internal struct SYSTEM_INFO
                    {
                        public ushort wProcessorArchitecture;
                        public ushort wReserved;
                        public uint dwPageSize;
                        public IntPtr lpMinimumApplicationAddress;
                        public IntPtr lpMaximumApplicationAddress;
                        public UIntPtr dwActiveProcessorMask;
                        public uint dwNumberOfProcessors;
                        public uint dwProcessorType;
                        public uint dwAllocationGranularity;
                        public ushort wProcessorLevel;
                        public ushort wProcessorRevision;
                    }

                    [DllImport("kernel32.dll", CharSet = CharSet.Auto, ExactSpelling = true)]
                    internal static extern void GetNativeSystemInfo(ref SYSTEM_INFO lpSystemInfo);
                }

                public static int ProcessorCount
                {
                    get
                    {
                        NativeMethods.SYSTEM_INFO lpSystemInfo = new NativeMethods.SYSTEM_INFO();
                        NativeMethods.GetNativeSystemInfo(ref lpSystemInfo);
                        return (int)lpSystemInfo.dwNumberOfProcessors;
                    }
                }
            }
        }

    Read the article

  • Booting off a ZFS root in 14.04

    - by RJVB
    I've been running a Debian derivative (LMDE) on a ZFS root for half a year now. It was created by cloning a regular ext4-based install, with all the necessary packages, onto a ZFS pool, chrooting into that pool, and recreating the grub menu and bootloader. The system uses a dedicated ext3 /boot partition. I would like to do the same with Ubuntu 14.04, but have encountered several obstacles. There is no Trusty zfs-grub package. The default grub package doesn't have ZFS support built in; I found a small bug in the build system responsible for that (report with patch created) and built my own grub packages. That built-in ZFS support turned out to be dysfunctional anyway: it does not add the proper arguments to the kernel command line. I therefore installed the ZoL grub package I also use on my LMDE system, which does give me a correct grub.cfg. However, even with that correct grub.cfg, the boot process apparently doesn't retrieve the bootfs parameter from the ZFS pool; the variable that's supposed to receive the value remains empty. As a result, initrd tries to import the default pool ("rpool"), which of course fails. I can, however, import the pool by hand and complete the boot process manually. If memory serves, I also had to disable AppArmor to keep the boot process from blocking after importing the pool. Am I overlooking something? Just for comparison: I installed the Ubuntu 3.13 kernel on my LMDE system, and it works just fine (i.e. the identical kernel and grub binaries boot without glitches on LMDE but not on Ubuntu).
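    For reference, a sketch of the kind of by-hand rescue described above, as typed at the initramfs shell; the pool name is the "rpool" from the question, and the root dataset name is an assumption:

        # At the initramfs prompt: import the pool without mounting,
        # mount the root dataset manually, then resume booting.
        zpool import -N rpool
        zfs mount rpool/ROOT/ubuntu-1404   # dataset name is a guess
        exit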

    Read the article

  • Client/Server game even in solo: any big problem?

    - by Klaim
    I'm making a game whose basic design is strongly multiplayer-oriented, but which should also provide a really interesting, self-sufficient solo game, a bit like a real-time strategy game. The events and actions taken shouldn't be as massive and immediate as in an FPS, so you can also think of the networking as being like an RTS's. It's a PC game targeting Windows, Mac OS X and Linux (Ubuntu and Fedora). It's programmed in C++, using a variety of open-source libraries, so I have great (potential) control over the performance. So far I have always considered that just making the game work as two applications, client and server, even in solo mode, was OK. However, as I'm starting on the network code, I'm having doubts about whether it's a good idea. I'm not a specialist, so I might be missing something in my analysis. I see these pros and cons:
    Pros: The game works only one way, so if I fix a bug it should apply in all game modes, whatever the distance to the server is. Basic networking issues would be detected early, including behaviour with installed protection software (firewalls); I am not a specialist, so this might be wrong.
    Cons: I suppose that even if it should be fast enough, networking client and server on the same computer would still be slower than no networking, with message passing inside a single process's memory. Maybe debugging would be more difficult? I don't have experience with this case, but so far I assume that Visual Studio lets me debug multiple processes, so it shouldn't be really different. Also, remote debugging.
    My question is: is there a big disadvantage that I missed? Or maybe there are advantages I missed that should encourage me to just continue with client-server game sessions only?

    Read the article

  • Issue updating domain name servers from BlueHost to AWS

    - by cowls
    I am trying to migrate my site's hosting from Bluehost to an AWS cloud-based service. I have the site up and running on AWS with an Elastic IP configured; it loads fine when I specify the IP address in the browser. I have gone into Route 53 in the AWS console and created a hosted zone for the domain, then created a new record set of type A using the IP address as the value. The domain name is registered with Bluehost, so I logged into the Bluehost account and updated the domain's name servers to point to those specified by Route 53 in the AWS console. When I hit the IP address directly the site loads; however, it doesn't load when using the domain name (I get a Google Chrome "oops" page saying the page was not found). I've tried using http://dns.squish.net/ to debug, but it seems to give the correct results: fizaclegems.com 300 IN A 107.20.209.78, where 107.20.209.78 matches the Elastic IP configured in the AWS console. This is the result it gives for all 4 name servers. Am I missing a step here? Does anyone know what else I should be doing or looking for?
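    Two quick checks that help localize this kind of problem, sketched with dig; the Route 53 server name below is a placeholder for one of the four servers listed in the hosted zone:

        # Has the registrar's delegation actually switched to Route 53?
        dig NS fizaclegems.com +short

        # Does a Route 53 server itself answer with the Elastic IP?
        dig A fizaclegems.com @ns-1234.awsdns-56.org +short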

    Read the article

  • CUDA 4.1 Particle Update

    - by N0xus
    I'm using CUDA 4.1 to handle the update of the particle system I've made with DirectX 10. So far, my update method for the particle system is one line of code within a for loop, which makes each particle fall down the y-axis to simulate a waterfall: m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f); In my .cu file I've created a struct, copied from my particle class, as follows: struct ParticleType { float positionX, positionY, positionZ; float red, green, blue; float velocity; bool active; }; I also have an UpdateParticle method in the .cu file, which takes the 3 main parameters my particles need to update themselves, based on the initial line of code: __global__ void UpdateParticle(float* position, float* velocity, float frameTime) { } This is my first CUDA program and I'm at a loss as to what to do next. I've tried simply putting the particleList line in the UpdateParticle method, but then the particles don't fall as they should. I believe that is because I am not calling something I need to in the class where the particle-fall code used to be. Could someone please tell me what I am missing to get it working as it should? If I am doing this completely wrong in general, please let me know as well.
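    A sketch of how the kernel body and its host-side launch typically look; the widened kernel signature, buffer names, and launch configuration are assumptions, not part of the original code:

        // One thread per particle; mirrors the CPU loop above.
        __global__ void UpdateParticle(ParticleType* particles, int count, float frameTime)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < count && particles[i].active)
            {
                particles[i].positionY -= particles[i].velocity * frameTime * 0.001f;
            }
        }

        // Host side (error checking omitted): copy the particles to the device,
        // launch one thread per particle, copy the results back for rendering.
        void UpdateOnGpu(ParticleType* hostList, int count, float frameTime)
        {
            ParticleType* devList;
            size_t bytes = count * sizeof(ParticleType);
            cudaMalloc((void**)&devList, bytes);
            cudaMemcpy(devList, hostList, bytes, cudaMemcpyHostToDevice);

            int threads = 256;
            int blocks  = (count + threads - 1) / threads;
            UpdateParticle<<<blocks, threads>>>(devList, count, frameTime);

            cudaMemcpy(hostList, devList, bytes, cudaMemcpyDeviceToHost);
            cudaFree(devList);
        }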

    Read the article

  • A Dozen USB Chargers Analyzed; Or: Beware the Knockoffs

    - by Jason Fitzpatrick
    When it comes to buying a USB charger, one is just as good as another, so you might as well buy the cheapest one, right? This interesting and detailed analysis of name-brand, off-brand, and counterfeit chargers will have you rethinking that stance. Ken Shirriff gathered up a dozen USB chargers, including official Apple chargers, counterfeit Apple chargers, and offerings from Monoprice, Belkin, Motorola, and other companies. After putting them all through a battery of tests, he gave them overall rankings based on nine different categories, including power stability, power quality, and efficiency. The takeaway from his research? Quality varied widely between brands, but when sticking with big companies like Apple or HP the chargers were all safe. The counterfeit chargers (like the $2 Apple iPad charger knock-off he tested) proved to be outright dangerous; several actually melted or caught fire in the course of the project. Hit up the link below for his detailed analysis, including power output readings for the dozen chargers. A Dozen USB Chargers in the Lab [via O'Reilly Radar]

    Read the article

  • What's a good tool for collecting statistics on filesystem usage?

    - by Kamil Kisiel
    We have a number of filesystems for our computational cluster, with a lot of users that store a lot of really large files. We'd like to monitor the filesystems, help optimize their usage, and plan for expansion. To do this, we need some way to monitor how the filesystems are used. Essentially I'd like to know all sorts of statistics about the files: age; frequency of access; last-accessed times; types; sizes. Ideally this information would be available in aggregate form for any directory, so that we could monitor it based on project or user. Short of writing something up myself in Python, I haven't been able to find any tools capable of performing these duties. Any recommendations?
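    In case it helps scope the write-it-yourself option, a minimal Python sketch of per-directory aggregation with os.walk and os.stat; the six-month cutoff and the path are placeholders:

        import os, time, collections

        def directory_stats(root):
            """Aggregate size, count and last-access statistics under one directory."""
            totals = collections.Counter()
            now = time.time()
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    try:
                        st = os.stat(os.path.join(dirpath, name))
                    except OSError:
                        continue  # file vanished or permission denied
                    totals["files"] += 1
                    totals["bytes"] += st.st_size
                    if now - st.st_atime > 180 * 86400:  # untouched for ~6 months
                        totals["cold_files"] += 1
            return totals

        print(directory_stats("/cluster/projects/example"))  # placeholder path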

    Read the article

  • OpenXML error “file is corrupt and cannot be opened.”

    - by nmgomes
    From time to time I hear people saying their new web application supports data export to Excel format. So far so good... but they don't tell the whole story. In fact, almost all the time what is actually happening is that they are exporting data to a comma-separated file, or simply dumping a GridView's rendered HTML into an .xls file. OK, it works, but it's not something I would be proud of. So, yesterday I decided to take a look at the Office Open XML file format specification (the Microsoft Office 2007+ format), which is based on well-known technologies: ZIP and XML. I started by installing the Open XML SDK 2.0 for Microsoft Office and playing with some samples. Then I tried it in a more complex web application, and the "file is corrupt and cannot be opened" message started appearing. Google shows that many people suffer from the same problem, and it seems there are many reasons that can trigger this message: some are related to the process itself, others to encodings or even styling. Well, none of them solved my problem, and I had to dig... well, not that much: I simply changed the output file's extension to .zip and extracted the contents. Then I did the same to the output file from my first sample, compared both zip contents with SourceGear DiffMerge, and found that my problem was culture-related. Yes, my complex application sets Thread.CurrentThread.CurrentCulture to a non-English culture. For sample purposes I was simply using the ToString method to convert numbers and dates to their string representations, but forgot that XML is culture-invariant: using a decimal separator other than "." results in a deserialization problem. I solved the "file is corrupt and cannot be opened" error by using the Convert.ToString(object, CultureInfo.InvariantCulture) method instead of ToString. Hope this can help someone.
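    A minimal before/after sketch of the fix; the pt-PT culture and the price value are illustrative only:

        using System;
        using System.Globalization;
        using System.Threading;

        decimal price = 1234.56m;
        Thread.CurrentThread.CurrentCulture = new CultureInfo("pt-PT");

        // Culture-sensitive: yields "1234,56", which breaks the OpenXML part.
        string bad = price.ToString();

        // Culture-invariant: yields "1234.56", which deserializes correctly.
        string good = Convert.ToString(price, CultureInfo.InvariantCulture);

        Console.WriteLine("{0} vs {1}", bad, good);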

    Read the article

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, it's probable that you will need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario.
    A different database per tenant. A new database is created and assigned when a tenant is provisioned.
    Pros: Complete isolation between tenants; all the data for a tenant lives in a database only that tenant can access.
    Cons: It's not cost-effective: SQL Azure databases are not cheap, and the minimum size for a database is 1 GB, so you might be paying for storage that you don't really use. A different connection pool is required per database. Updates must be replicated across all the databases. You need multiple backup strategies across all the databases.
    Multiple schemas in a database shared by all the tenants. A single database is shared among all the tenants, but every tenant is assigned to a different schema and database user.
    Pros: You only pay for a single database. Data is isolated at the database level: if the credentials for one tenant are compromised, the data of the other tenants is not.
    Cons: You need to replicate all the database objects in every schema, so the number of objects can grow indefinitely. Updates must be replicated across all the schemas. The connection pool for the database must maintain a different connection per tenant (or set of credentials). A different user is required per tenant, stored at the server level, and you have to back that user up independently.
    Centralizing database access with stored procedures in a database shared by all the tenants. A single database is shared among all the tenants, but nobody can read the data directly from the tables. All data operations are performed through stored procedures that centralize access to the tenant data; the stored procedures contain logic to map the database user to a specific tenant.
    Pros: You only pay for a single database. You only have one set of objects to maintain and back up.
    Cons: There is no real isolation; all the data for the different tenants is shared in the same tables. You cannot use a traditional ORM like EF Code First for consuming the data. A different user is required per tenant, stored at the server level, and you have to back that user up independently.
    SQL Federations. A single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure, which basically uses logical partitions to distribute data based on certain criteria.
    Pros: You only have a single database with multiple federations. You can use filtering in the connections to pick the right federation, so any ORM can be used to consume the data.
    Cons: There is no real isolation at the database level; the isolation is enforced programmatically with federations.
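    To make the last option concrete, a hedged T-SQL sketch of the federation statements SQL Azure used at the time; the federation and column names are illustrative:

        -- Create a federation keyed on the tenant id.
        CREATE FEDERATION TenantFederation (tid BIGINT RANGE);

        -- Route this connection to the member holding tenant 42; with
        -- FILTERING = ON, queries see only rows where tid = 42.
        USE FEDERATION TenantFederation (tid = 42) WITH RESET, FILTERING = ON;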

    Read the article

  • Cloning from a given point in the snapshot tree

    - by Fat Bloke
    Although we have just released VirtualBox 4.3, this quick blog entry is about a longer-standing ability of VirtualBox when it comes to snapshots and cloning, and was prompted by a question posed internally here in Oracle: "Is there a way I can create a new VM from a point in my snapshot tree?" Here's the scenario: let's say you have your favourite work VM, which is Oracle Linux based, and as you installed different packages, such as the database, middleware, and apps, you took snapshots at each point. But you then need to create a new VM for some other testing, or to share with a colleague who will be using the same Linux and database layers but may want to reconfigure the middleware tier and install his own apps. All you have to do is right-click on the snapshot that you're happy with and clone. Give the VM that you are about to create a name and, if you plan to use it on the same host machine as the original VM, it's a good idea to "Reinitialize the MAC address" so there's no clash on the same network. Now choose the clone type: if you plan to use this new VM on the same host as the original, you can use Linked cloning, else choose Full. At this point you have a choice about what to do with your snapshot tree. In our example, we're happy with the Linux and database layers, but we may want to allow our colleague to change the upper tiers, with the option of reverting back to our known-good state, so we'll retain the snapshot data in the new VM from this point on. The cloning process then chugs along, and may take a while if you chose a Full Clone. Finally, the newly cloned VM is ready with the subset of the snapshot tree that we wanted to retain. Pretty powerful, and very useful. Cheers, -FB
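    The same operation is also scriptable from the command line; a sketch, where the VM, snapshot, and clone names are the illustrative ones from this walkthrough:

        # Clone from a specific snapshot rather than from the current state.
        VBoxManage clonevm "WorkVM" --snapshot "Database" \
            --name "ColleagueVM" --register

        # For a linked clone on the same host, add: --options link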

    Read the article
