Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 528/1338 | < Previous Page | 524 525 526 527 528 529 530 531 532 533 534 535  | Next Page >

  • Best way to address this Magento issue

    - by robgt
    I am in need of some advice/pointers on how best to attack a problem I am faced with in Magento 1.4.0.1. Here is the scenario: Consider a product catalog of multiple thousands of products in which there are many and various retail markup percentages. There is a need to sell some of those products to other retailers, and I need a way to easily categorise them into percentage markup groups, so that the other traders can easily purchase items from us without needing to call and confirm pricing. All product prices are added to the Magento database including VAT - so the actual retail value is the starting point. I think this needs to be done by adding an attribute to the products that determines the maximum discount level that can be applied to any given product. Obviously, this would entail a massive amount of work to update every product. Is there a way to achieve what we need that I don't yet know about (within Magento)? Can single attribute values be mass-populated somehow, without affecting other attributes such as name/description/etc?

    Read the article

  • ADF Mobile Application in the Apple AppStore

    - by Joe Huang
    Hi, everyone: I would like to announce that there is now an iPhone App in the Apple AppStore that's built using Oracle ADF Mobile.  This app is the Hudson Mobile Monitor application for the iPhone, which allows you to monitor build status for the Hudson Continuous Integration server.  By default it points to the production instance of the Hudson CI server hosted at Oracle.  One of the key goals for ADF Mobile in general is to ensure an application built using ADF Mobile is compliant with Apple iOS SDK terms.  We designed the framework to ensure applications can go through the approval process - of course there are still things that can be done which would cause issues during the approval process, and we will be publishing more best practices in the coming articles. Thanks, ADF Mobile Product Management Team.

    Read the article

  • Why are software schedules so hard to define?

    - by 0A0D
    It seems that, in my experience, getting us engineers to accurately estimate and determine the tasks to be completed is like pulling teeth. Rather than just giving a swag estimate of 2-3 weeks or 3-6 months... what is the simplest way to define software schedules so they are not so painful to put together? For instance, customer A wants a feature by 02/01/2011. How do you schedule time to implement this feature, knowing that other bug fixes may be needed along the way and take up additional engineering time?

    Read the article

  • What is the state of the art in OOP?

    - by Ollie Saunders
    I used to do a lot of object-oriented programming and found myself reading up a lot on how to do it well. When C++ was the dominant OOP language there was a very different set of best practices from those that have emerged since. Some of the newer ideas I know of are BDD, internal DSLs, and the importing of ideas from functional programming. My question is: is there any consensus on the best way to develop object-oriented software today in the more modern languages such as C#, Ruby, and Python? And what are those practices? For instance, I rather like the idea of stateless objects, but how many people are actually using that in practice? Or is the state of the art to de-emphasize the importance of OOP? This might be the case for some Python programmers but would be difficult for Rubyists.

    Read the article

  • Motivating developers in a project perceived as "dull"?

    - by Fanatic23
    As a manager, I can't always end up generating work that'd be cutting edge. Some of the projects do run on maintenance mode, and generate a healthy free cash flow for the company. As a developer what would it take for you to stick around in this project? I have been thinking of re-branding the work, but I could do with a lot of help here. Appreciate a single response per post. Please don't suggest an increased pay-packet, this creates more problems than it solves.

    Read the article

  • Best language or tool for automating tedious manual tasks

    - by Jon Hopkins
    We all have tasks that come up from time to time that we think we'd be better off scripting or automating than doing manually. Obviously some tools or languages are better for this than others - no one (in their right mind) is doing a one-off job of cross-referencing a bunch of text lists their PM has just given them in assembler, for instance. What one tool or language would you recommend for the sort of general quick-and-dirty jobs you get asked to do, where time (rather than elegance) is of the essence? Background: I'm a former programmer, now a development manager/PM, looking to learn a new language for fun. If I'm going to learn something for fun I'd like it to be useful, and this sort of use case is the most likely to come up.

    Read the article

  • Where are the LXDE sound preferences?

    - by Uri Herrera
    I'm confused: how can I change the sound output in LXDE? There isn't even a single option anywhere, unlike in other environments. I've tried everything with alsamixer but still nothing; the output device is the HDMI audio of a Radeon HD card. Edit: As it turns out, I was able to change the output device, but I had to use GNOME and its GNOME Sound applet for that; that is, log out, log in to GNOME, change the device, then log out and log back in to LXDE. Now the question becomes: why can't I do this in LXDE?

    Read the article

  • Microsoft and Joyent Announce Node.js Windows Port

    With the Node.js command line tool, developers can type 'node my_app.js' to run JavaScript programs. It gives developers a JavaScript application programming interface (API) that supplies access to the network and file system as well. One instance where Node.js often comes in handy is in the creation of scalable networked programs that emphasize high concurrency and low response times. Developers who wish to use Node.js on Windows at this time must do so by running a virtual machine with Linux. Claudia Caldato, Principal Program Manager of Microsoft's Interoperability Strategy Team, offered...

    Read the article

  • How should I select continuous integration tool?

    - by DeveloperDon
    I found this cool comparison table for integration servers on Wikipedia, but I am a little uncertain how to rank the tools against my needs and interests. The chart itself seems to have a lot of boxes marked unknown, so if you are comfortable updating it on Wikipedia, that could be great too. Are there a few top-performing products, so I can quickly narrow down to four or five options? Which products seem to have the largest user communities and the most ongoing enhancements and integration with new tools? Are the open source offerings best, or are there high quality tools that can be a great deal for a single user at home? Will use of multiple systems (primary desktop, local-only home network server, personal and work notebooks, multiple virtual machines spread across all) create problems, and how can they be managed?

    Read the article

  • Impact of variable-length loops on GPU shaders

    - by Will
    It's popular to render procedural content inside the GPU, e.g. in the demoscene (drawing a single quad to fill the screen and letting the GPU compute the pixels). Ray marching is popular: this means the GPU is executing some unknown number of loop iterations per pixel (although you can have an upper bound like maxIterations). How does having a variable-length loop affect shader performance? Imagine the simple ray-marching pseudocode: t = 0.f; while(t < maxDist) { p = rayStart + rayDir * t; d = DistanceFunc(p); t += d; if(d < epsilon) { ... emit p return; } } How are the various mainstream GPU families (Nvidia, ATI, PowerVR, Mali, Intel, etc.) affected? Vertex shaders, but particularly fragment shaders? How can it be optimised?

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs etc. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen. How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, kind of packet (e.g. voice vs. data), SLAs associated with the “owner” of the packet etc. It looks up the internal database of “rules” of how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally · Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability etc. · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp) · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router, in order to make this work. · If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop. · Update various statistics about the packet. In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
    Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?
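
    For a concrete flavor of the "single transaction" point above, here is a minimal sketch using Berkeley DB Java Edition (the pure-Java sibling of the C library discussed in the post). The environment path, database names, keys, and record layout are invented for illustration; a real router would use its own schema.

        import com.sleepycat.je.*;
        import java.io.File;
        import java.nio.charset.StandardCharsets;

        // Sketch only: a route lookup and a statistics update committed atomically.
        public class RouterLookupSketch {
            public static void main(String[] args) {
                EnvironmentConfig envCfg = new EnvironmentConfig();
                envCfg.setAllowCreate(true);
                envCfg.setTransactional(true);
                Environment env = new Environment(new File("/tmp/bdb-env"), envCfg);

                DatabaseConfig dbCfg = new DatabaseConfig();
                dbCfg.setAllowCreate(true);
                dbCfg.setTransactional(true);
                Database routes = env.openDatabase(null, "routes", dbCfg);
                Database stats = env.openDatabase(null, "stats", dbCfg);

                Transaction txn = env.beginTransaction(null, null);
                try {
                    // 1. Look up the next hop for the packet's destination prefix.
                    DatabaseEntry key = new DatabaseEntry("10.1.2.0/24".getBytes(StandardCharsets.UTF_8));
                    DatabaseEntry data = new DatabaseEntry();
                    String nextHop = (routes.get(txn, key, data, LockMode.RMW) == OperationStatus.SUCCESS)
                            ? new String(data.getData(), StandardCharsets.UTF_8)
                            : "default-gateway";

                    // 2. Bump the per-destination packet counter in the same transaction,
                    //    so the forwarding decision and the statistics never drift apart.
                    DatabaseEntry counter = new DatabaseEntry();
                    long count = (stats.get(txn, key, counter, LockMode.RMW) == OperationStatus.SUCCESS)
                            ? Long.parseLong(new String(counter.getData(), StandardCharsets.UTF_8))
                            : 0L;
                    stats.put(txn, key, new DatabaseEntry(
                            Long.toString(count + 1).getBytes(StandardCharsets.UTF_8)));

                    txn.commit();  // both operations become visible atomically
                    System.out.println("forward to " + nextHop);
                } catch (RuntimeException e) {
                    txn.abort();   // standard JE pattern: abort on failure, then rethrow
                    throw e;
                } finally {
                    routes.close();
                    stats.close();
                    env.close();
                }
            }
        }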

    Read the article

  • Getting My Head Around Immutability

    - by Michael Mangold
    I'm new to object-oriented programming, and one concept that has been taking me a while to grasp is immutability. I think the light bulb went off last night but I want to verify: When I come across statements that an immutable object cannot be changed, I'm puzzled because I can, for instance, do the following: NSString *myName = @"Bob"; myName = @"Mike"; There, I just changed myName, of immutable type NSString. My problem is that the word "object" can refer to the physical object in memory, or the abstraction, "myName." The former definition applies to the concept of immutability. As for the variable, a clearer (to me) definition of immutability is that the value of an immutable object can only be changed by also changing its location in memory, i.e. its reference (also known as its pointer). Is this correct, or am I still lost in the woods?
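
    The same distinction can be shown with a minimal Java analogy (using Java's immutable String and mutable StringBuilder): reassigning a variable only rebinds the reference, while a genuinely mutable object changes in place for everyone holding a reference to it.

        public class ImmutabilityDemo {
            public static void main(String[] args) {
                // Reassigning the variable only rebinds the reference; the String
                // object "Bob" itself is never modified.
                String myName = "Bob";
                String alias = myName;        // both variables point at the same object
                myName = "Mike";              // myName now points at a different object
                System.out.println(myName);   // Mike
                System.out.println(alias);    // Bob (the original object is unchanged)

                // A mutable object, by contrast, changes in place for every holder.
                StringBuilder sb = new StringBuilder("Bob");
                StringBuilder sbAlias = sb;
                sb.append("by");              // mutates the single shared object
                System.out.println(sbAlias);  // Bobby
            }
        }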

    Read the article

  • How can I add the version of a file to the file name with Tortoise-SVN?

    - by Eric Belair
    I would like to start giving unique names to "cache-able" files - i.e. *.css and *.js - in order to prevent caching of stale versions, without requiring changes to the web-server settings (as is currently done in IIS). For instance, let's say I have a JavaScript file called global.js. Going forward I would like it to have the name global.123.js when revision 123 is checked in. This would also require the following: The previous version of the file - perhaps it was global.115.js - is removed when the file is deployed. All references to the file are updated with the new file name. How do I go about doing this? What concerns do I need to consider?
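
    As a rough illustration of the renaming step described above (not an SVN feature, just a build/deploy-time sketch in Java), the revision number could be injected into the file name and all HTML references rewritten. The paths and the hard-coded revision are placeholders; in practice the revision might come from svnversion or a post-commit hook.

        import java.io.*;
        import java.nio.file.*;
        import java.util.stream.Stream;

        // Sketch of a deploy step: copy global.js to global.<rev>.js and rewrite references.
        public class CacheBustRename {
            public static void main(String[] args) throws IOException {
                Path webRoot = Paths.get("webroot");  // placeholder
                String revision = "123";              // e.g. parsed from `svnversion` output

                Path source = webRoot.resolve("js/global.js");
                Path versioned = webRoot.resolve("js/global." + revision + ".js");
                Files.copy(source, versioned, StandardCopyOption.REPLACE_EXISTING);
                // An earlier global.<oldrev>.js could be deleted here once references are updated.

                // Rewrite references such as src="js/global.js" or src="js/global.115.js".
                try (Stream<Path> pages = Files.walk(webRoot)) {
                    pages.filter(p -> p.toString().endsWith(".html")).forEach(p -> {
                        try {
                            String html = Files.readString(p);
                            String updated = html.replaceAll("global(\\.\\d+)?\\.js",
                                    "global." + revision + ".js");
                            if (!updated.equals(html)) Files.writeString(p, updated);
                        } catch (IOException e) {
                            throw new UncheckedIOException(e);
                        }
                    });
                }
            }
        }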

    Read the article

  • Using JDBC to asynchronously read large Oracle table

    - by Ben George
    What strategies can be used to read every row in a large Oracle table, only once, but as fast as possible with JDBC & Java? Consider that each row has non-trivial amounts of data (30 columns, including large text in some columns). Some strategies I can think of are: Single thread and read table. (Too slow, but listed for clarity) Read the ids into a ConcurrentLinkedQueue, use threads to consume the queue and query by id in batches. Read ids into a JMS queue, use workers to consume the queue and query by id in batches. What other strategies could be used? For the purpose of this question, assume processing of rows to be free.
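
    A hedged sketch of the second strategy above (ids into a ConcurrentLinkedQueue, workers querying in batches) might look like the following. The connection URL, table, and column names are placeholders, and the Oracle JDBC driver is assumed to be on the classpath.

        import java.sql.*;
        import java.util.*;
        import java.util.concurrent.*;

        // Sketch of "queue of ids + batched lookups"; not tuned for any particular schema.
        public class ParallelTableReader {
            private static final int BATCH_SIZE = 500;
            private static final int WORKERS = 8;
            private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder

            public static void main(String[] args) throws Exception {
                Queue<Long> ids = new ConcurrentLinkedQueue<>();

                // Pass 1: stream only the primary keys (cheap compared to the full rows).
                try (Connection c = DriverManager.getConnection(URL, "user", "pass");
                     Statement s = c.createStatement();
                     ResultSet rs = s.executeQuery("SELECT id FROM big_table")) {
                    while (rs.next()) ids.add(rs.getLong(1));
                }

                // Pass 2: workers drain the queue and fetch full rows in IN-list batches.
                ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
                for (int w = 0; w < WORKERS; w++) {
                    pool.submit(() -> {
                        try (Connection c = DriverManager.getConnection(URL, "user", "pass")) {
                            List<Long> batch = new ArrayList<>(BATCH_SIZE);
                            Long id;
                            while ((id = ids.poll()) != null) {
                                batch.add(id);
                                if (batch.size() == BATCH_SIZE) {
                                    fetchBatch(c, batch);
                                    batch.clear();
                                }
                            }
                            if (!batch.isEmpty()) fetchBatch(c, batch); // flush the leftovers
                        } catch (SQLException e) {
                            e.printStackTrace();
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
            }

            private static void fetchBatch(Connection c, List<Long> batch) throws SQLException {
                String placeholders = String.join(",", Collections.nCopies(batch.size(), "?"));
                String sql = "SELECT * FROM big_table WHERE id IN (" + placeholders + ")";
                try (PreparedStatement ps = c.prepareStatement(sql)) {
                    ps.setFetchSize(BATCH_SIZE); // fewer round trips for the wide rows
                    for (int i = 0; i < batch.size(); i++) ps.setLong(i + 1, batch.get(i));
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            // Row processing goes here; assumed free per the question.
                        }
                    }
                }
            }
        }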

    Read the article

  • Suggest-a-Session for Oracle Develop 2010: Last chance to get your paper submitted.

    - by olaf.heimburger
    While working with Oracle Technologies at customer projects we all come across solutions and ideas that are worth sharing with a greater audience. If you missed the Call for Papers for Oracle OpenWorld and Oracle Develop, you still have a chance to get in. The Oracle Mix Community provides a tool called Suggest-a-Session for submitting and voting on the sessions you would like to attend. My Suggestions When you pass by, do not forget to vote for my sessions. These are: Real-World Single Sign-On and ADF Security The Personal Newsletter Generator: Implement Cool Applications with ADF Faces Thank you for your support.

    Read the article

  • Basic AI FSM - Handling state transition

    - by Galvanize
    I'm starting to study how to implement game AI, and it seems to me that a very simple FSM for my Pong demo would be a nice way to start. My vision for implementing this would be to have a basic state interface and a class for each state; then the NPC would have an instance of the current state. The class should have an update method and directions on which state to go to next, depending on the event received. The question is: how do I handle this event? Should I have a regular addEventListener and a custom event system? Or should I check on update for the things that could change the current state? I'm feeling a bit lost; I feel I have a good grasp of the FSM concept, but a good implementation seems tricky. Thanks in advance.
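
    One minimal way to sketch the idea described above in Java (state and event names are hypothetical, not from any particular engine):

        // Minimal FSM sketch for a Pong-style paddle.
        enum Event { BALL_APPROACHING, BALL_MOVING_AWAY }

        interface State {
            void update(Paddle paddle, double dt);  // per-frame behaviour
            State onEvent(Event e);                 // returns the next state (or itself)
        }

        class IdleState implements State {
            public void update(Paddle paddle, double dt) { /* drift back toward the centre */ }
            public State onEvent(Event e) {
                return e == Event.BALL_APPROACHING ? new TrackBallState() : this;
            }
        }

        class TrackBallState implements State {
            public void update(Paddle paddle, double dt) { /* move toward the predicted ball y */ }
            public State onEvent(Event e) {
                return e == Event.BALL_MOVING_AWAY ? new IdleState() : this;
            }
        }

        class Paddle {
            private State current = new IdleState();

            void update(double dt) { current.update(this, dt); }        // called every frame
            void handleEvent(Event e) { current = current.onEvent(e); } // called when something relevant happens
        }

    Whether transitions are driven by explicit events (handleEvent) or by checks inside update is largely a matter of taste; for a Pong-sized demo, polling inside update is usually simpler than building a custom event system.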

    Read the article

  • Google analytics: how many visitors have visited n times?

    - by Riley
    I'm trying to guess how many loyal users I have by counting the number of people that have visited the site 10 times. How can I answer this question with Google Analytics? "Visitor Loyalty" is a tempting answer, but the label for loyalty is "Visits that were the visitor's nth visit," and I want something more like "Visitors that visited n times." For example, we have 40 visits in the "51-100" visit range, but I think that could be a single user who visited 91 times. Or two users who visited 71 times each. The whole chart makes a good logic puzzle (I wonder if there's a unique solution) but doesn't easily answer the question I have.

    Read the article

  • How does a BSP tree work for Z sorting?

    - by Jenko
    I'm developing a 3D engine in software, and so I must compute Z sorting manually. I'm currently using the painter's algorithm to sort triangles and then drawing them back-to-front. This causes artifacts that I'm trying to correct. Would using a dynamic BSP tree ensure "correct Z sorting" of triangles? Why? Because the bounding volumes of triangles would be similar? Since I would have a single "world" BSP tree, would I have to remove and re-add any moved/scaled/rotated object to the tree? Is it possible to add triangles into a BSP tree without the expensive cutting process? Why do you need to cut triangles on the axis planes anyway? Is it faster to traverse a BSP tree from any angle than to sort all tris each draw like the painter's algorithm does?
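
    To illustrate why a BSP tree gives a correct back-to-front order without per-frame sorting, here is a hedged Java sketch of the classic traversal (types and fields are invented). Note that the guarantee only holds because triangles are split during tree construction so that none straddles a node's plane, which is exactly why the cutting step exists.

        class Vec3 { double x, y, z; }

        class Plane {
            Vec3 normal; double d;                      // plane equation: normal . p + d = 0
            double side(Vec3 p) {                       // signed distance of a point
                return normal.x * p.x + normal.y * p.y + normal.z * p.z + d;
            }
        }

        class Triangle { /* vertices, material, ... */ }

        class BspNode {
            Plane plane;
            java.util.List<Triangle> coplanar = new java.util.ArrayList<>();
            BspNode front, back;                        // children; null at the leaves
        }

        class BspRenderer {
            // Painter's order without per-frame sorting: a single O(n) walk of the tree.
            void drawBackToFront(BspNode node, Vec3 camera, java.util.function.Consumer<Triangle> draw) {
                if (node == null) return;
                if (node.plane.side(camera) >= 0) {             // camera is on the front side
                    drawBackToFront(node.back, camera, draw);   // far side first
                    node.coplanar.forEach(draw);                // then triangles lying in the plane
                    drawBackToFront(node.front, camera, draw);  // near side last
                } else {                                        // camera is on the back side
                    drawBackToFront(node.front, camera, draw);
                    node.coplanar.forEach(draw);
                    drawBackToFront(node.back, camera, draw);
                }
            }
        }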

    Read the article

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first RT. When you write a shader you can simply output less data than you have RTs (like outputting a single float4 when writing to three RTs) and only the first RTs will be affected, but you cannot specify to output this data anywhere else but to COLOR0, then 1, etc. Is there a way to write to several RTs but skip the first target? If I output zeroes, the data in the target will become zeroes, but I need it to remain untouched in the first target and only change in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents RTs. I write to all the RTs at one point and I need to write to only the specified ones afterwards. It must be the first texture as I have a depth buffer linked to it (XNA 4.0). Thanks.

    Read the article

  • It's Time to Chart Your Course with Oracle HCM Applications - Featuring Row Henson

    - by jay.richey
    Total human capital costs average nearly 70% of operating expenses. There's never been more pressure on HR professionals to deliver mission-critical programs to retain rising stars, develop core performers, and cut costs from workforce operations. Join Row Henson, Oracle HCM Fellow, and Scott Ewart, Senior Director of Product Marketing, to find out: What real-world strategies HR practitioners and experts are prioritizing today to optimize their investment in people Why nearly two-thirds of PeopleSoft and E-Business Suite HCM customers have chosen to upgrade to the latest releases and where Oracle's strategic product roadmaps are headed How Fusion HCM will introduce a new standard for work and innovation - not only for the HR professional, but for every single employee and manager in your business Date: January 20th, 2011 at 9:00 PST / 12:00 EST Register now!

    Read the article

  • How To Run Two Windows 8 Apps At the Same Time With the Snap Feature

    - by Chris Hoffman
    Windows 8’s Modern interface includes support for running two Windows 8 apps side-by-side. This feature, named “Snap,” isn’t explained in the tutorial – you’ll have to know it exists to make use of it. While the multitasking may be limited compared to Windows desktop multitasking, it’s more flexible than iPad and Android tablets, which can only have a single app on the screen at a time. Note: Snap only works on monitors that are at least 1366 pixels wide.

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of change against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging. If you’re not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn’t normally get back without running another query (or with a trigger, I guess, but that’s not pretty). That inserted table I referenced – that’s part of the ‘behind-the-scenes’ work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn’t mean that an update is a delete followed by an insert; it’s just the way it’s handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff. MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it’s new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don’t worry about the fact that I turned on IDENTITY_INSERT, that’s just so that I could insert the values) One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like “WHERE CURRENT OF …”, and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient. But as cool as $action is, that’s not the point of my post. If it were, I hope you’d all be disappointed, as you can’t really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a ‘src’ field that wasn’t used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you’re needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn’t need to insert that into the actual table, just into a table for audit. 
This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.

    Read the article

  • MDW Reports–New Source Code ZIP File Available

    - by billramo
    In my MDW Reports series, I attached V1 of the RDL files in my post - May the source be with you! MDW Report Series Part 6–The Final Edition. Since that post, Rachna Agarwal from MSIT in India updated the RDL files, which are now ready to go in a single ZIP. The reports assume that they will be uploaded to the Report Manager’s root folder and use a shared data source named MDW. The reports also integrate with the new Query Hash Statistics reports. You can download them from my SQLBlog.com download site.

    Read the article

  • How To Easily Send Emails From The Windows Task Scheduler

    - by Chris Hoffman
    The Windows Task Scheduler can automatically send email at a specific time or in response to a specific event, but its integrated email feature won’t work very well for most users. Instead of using the Task Scheduler’s email feature to send emails, you can use the SendEmail utility. It allows you to construct a single-line command that authenticates with an SMTP server and sends an email.

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Modifying the Default Shipped Template

    - by The Old Toxophilist
    Having installed your Exalogic virtual environment, by default you have a single template which can be used to create your vServers. Although this template is suitable for creating simple test or development vServers, it is recommended that you look at creating your own custom vServers that match the environment you wish to build and deploy. Therefore this Tea Break Snippet will take you through the simple process of modifying the standard template. Before You Start To edit the template you will need the Oracle ModifyJeos Utility, which can be downloaded from the eDelivery Site. Once the ModifyJeos Utility has been downloaded we can install the rpms onto either an existing vServer or one of the Control vServers. rpm -ivh ovm-modify-jeos-1.0.1-10.el5.noarch.rpm rpm -ivh ovm-template-config-1.0.1-5.el5.noarch.rpm Alternatively you can install the modifyjeos packages on a non-Exalogic OEL installation or a VirtualBox image. If you are doing this, assuming OEL 5u8, you will need the following rpms. rpm -ivh ovm-modify-jeos-1.0.1-10.el5.noarch.rpm rpm -ivh ovm-el5u2-xvm-jeos-1.0.1-5.el5.i386.rpm rpm -ivh ovm-template-config-1.0.1-5.el5.noarch.rpm Base Template If you have installed the modifyjeos utility onto a vServer running on the Exalogic then simply mount /export/common/images from the ZFS storage and you will be able to find the el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz (or similar, depending on which version you have) template file. Alternatively the latest can be downloaded from the eDelivery Site. Now that we have the template tgz we will need to extract it as follows: tar -zxvf  el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz This will create a directory called BASE which will contain the System.img (vServer image) and vm.cfg (vServer config information). This directory should be renamed to something more meaningful that indicates what we have done to the template, and then the simple name / name in the vm.cfg edited for the same reason. Modifying the Template Resizing Root File System By default the shipped template has a root size of 4 GB which will leave a vServer created from it running at 90% full on the root disk. We can simply resize the template by executing the following: modifyjeos -f System.img -T <New Size MB> For example to increase the default 4 GB to 40 GB we would execute: modifyjeos -f System.img -T 40960 Resizing Swap We can modify the size of the swap space within a template by executing the following: modifyjeos -f System.img -S <New Size MB> For example to increase the swap from the default 512 MB to 4 GB we would execute: modifyjeos -f System.img -S 4096 Changing RPMs Adding RPMs To add RPMs using modifyjeos, complete the following steps: Add the names of the new RPMs in a list file, such as addrpms.lst. In this file, you should list each new RPM in a separate line. Ensure that all of the new RPMs are in a single directory, such as rpms. Run the following command to add the new RPMs: modifyjeos -f System.img -a <path_to_addrpms.lst> -m <path_to_rpms> -nogpg Where <path_to_addrpms.lst> is the path to the location of the addrpms.lst file, and <path_to_rpms> is the path to the directory that contains the RPMs. The -nogpg option eliminates the signature check on the RPMs. Removing RPMs To remove RPMs using modifyjeos, complete the following steps: Add the names of the RPMs (the ones you want to remove) in a list file, such as removerpms.lst. In this file, you should list each RPM in a separate line. 
    The Oracle Exalogic Elastic Cloud Administrator's Guide provides a list of all RPMs that must not be removed from the vServer. Run the following command to remove the RPMs: modifyjeos -f System.img -e <path_to_removerpms.lst> Where <path_to_removerpms.lst> is the path to the location of the removerpms.lst file. Mounting the System.img For all other modifications that are not supported by the modifyjeos command (adding your custom yum repositories, pre-configuring NTP, modifying the default NFSv4 Nobody functionality, etc.) we can mount the System.img and access it directly. To facilitate quick and easy mounting/unmounting of the System.img I have put together the simple scripts below. MountSystemImg.sh #!/bin/sh # The script assumes it's being run from the directory containing the System.img # Export for later i.e. during unmount export LOOP=`losetup -f` export SYSTEMIMG=/mnt/elsystem # Make Temp Mount Directory mkdir -p $SYSTEMIMG # Create Loop for the System Image losetup $LOOP System.img kpartx -a $LOOP mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG # Change Dir into mounted Image cd $SYSTEMIMG UnmountSystemImg.sh #!/bin/sh # The script assumes it's being run from the directory containing the System.img # Assume the $LOOP & $SYSTEMIMG exist from a previous run of the MountSystemImg.sh umount $SYSTEMIMG kpartx -d $LOOP losetup -d $LOOP Packaging the Template Once you have finished modifying the template it can be simply repackaged and then imported into EMOC as described in "Exalogic 2.0.1 Tea Break Snippets - Importing Public Server Template". To do this we will simply cd to the directory above that containing the modified files and execute the following: tar -zcvf <New Template Name>.tgz <New Template Directory> The resulting .tgz file can be copied to the images directory on the ZFS storage and uploaded using the IB network. This entry was originally posted on The Old Toxophilist Site.

    Read the article
