Search Results

Search found 12786 results on 512 pages for 'micro management'.


  • C++ and SDL resource management for 2D game

    - by KuruptedMagi
    My first question is about state managers. I do not use the singleton pattern (I have read many posts giving various reasons not to use it); instead I have a gameStateManager which calls cCurrentGameState->render() and so on through a pointer. I want to make a transitioning game: the engine should ideally cover both a platformer and a bird's-eye RPG (with some recoding; I just mean the base engine), both of which will load different levels and events, such as a world map, dungeons, shops, etc. So I then thought, rather than having to store all this data within all the states, I would break the engine into gameStates and playStates. When the game reaches gameStatePlay(), gameStatePlay simply runs the usual handleInput, logic, and render for the playStates, just as the low-level gameStateManager does. This lets me store all the player data within the base playState class without storing useless data in the gameStates. I have now added a separate map editor, which uses editorStates from gameStateEditor. Is this too much usage of the gameState concept? It seems to work pretty well for me, so I was wondering how far off a common implementation of this I am.

    My second question is about image resources. I have my sprite class with nothing but static members, mainly loadImage, applySurface, and my screen pointer. I also have a map pairing imageName enums with actual SDL_Surface pointers, and one pairing clipNumber enums with a wrapper class for a vector of clips, so that each entry in the map can have a different number of clips with different sizes. I thought it would be better to store all these images and the screen within one static body, since 20 different goblins all use the same sprite sheet and all need to draw to the same screen, and this way I do not need to pass my screen reference to every little entity. The imageMap seems to work very well; I can even search the map when an entity type is created to see whether a particular image already exists, creating it if it doesn't and destroying it when the last entity that needs it is destroyed. The vectored clip map, however, seems to take too long to initialize, so if I run past the state that initializes it too fast, the game crashes. Plus, the clip map call is half of this line:

    SPRITE::applySurface( cEditorMap.cTiles[x][y].iX, cEditorMap.cTiles[x][y].iY, SPRITE::mImages[ IMAGE_TILEMAP ], SPRITE::screen, SPRITE::mImageClips[IMAGE_TILEMAP]->clips.at( cEditorMap.cTiles[x][y].iTileType ) );

    Again, do I have the right idea? I like the imageMap, but am I better off with each entity storing its own clips?

    My last question is about collision detection. I only grasp the basics (I will look at per-pixel and circular collision soon), but how can I determine which side a collision comes from with just basic rectangle collision detection? I tried breaking each entity into four collision zones, but that just gave me problems with walking through walls and the like. Also, is per-pixel color collision a good way to decide what collision just occurred, or is checking multiple colors for multiple entities too taxing each cycle?
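
    For illustration, here is a rough sketch of the reference-counted image map described above. The ImageId and ImageCache names are illustrative stand-ins for the enums and static sprite members mentioned in the question, and only plain SDL 1.2 calls (SDL_LoadBMP, SDL_FreeSurface) are assumed:

        // Minimal sketch of a shared, reference-counted image cache for SDL 1.2.
        #include <SDL/SDL.h>
        #include <map>
        #include <string>

        enum ImageId { IMAGE_TILEMAP, IMAGE_GOBLIN };

        // One cached surface plus the number of entities currently using it.
        struct CachedImage {
            SDL_Surface* surface;
            int refCount;
        };

        class ImageCache {
        public:
            // Load the surface on first request; afterwards just bump the reference count.
            static SDL_Surface* acquire(ImageId id, const std::string& path) {
                CachedImage& entry = images[id];               // value-initialized on first use
                if (entry.surface == NULL) {
                    entry.surface = SDL_LoadBMP(path.c_str()); // or IMG_Load() from SDL_image
                }
                if (entry.surface != NULL) {
                    ++entry.refCount;
                }
                return entry.surface;
            }

            // Called from an entity's destructor: free the surface once nobody needs it.
            static void release(ImageId id) {
                std::map<ImageId, CachedImage>::iterator it = images.find(id);
                if (it == images.end()) {
                    return;
                }
                if (--it->second.refCount <= 0) {
                    SDL_FreeSurface(it->second.surface);
                    images.erase(it);
                }
            }

        private:
            static std::map<ImageId, CachedImage> images;
        };

        std::map<ImageId, CachedImage> ImageCache::images;

    On the collision question, one common approach with plain rectangle (AABB) tests is to compare the overlap on each axis and take the side with the smallest penetration. A sketch, assuming simple x/y/w/h boxes rather than any particular engine type:

        // Which side of box 'a' is hit when it overlaps box 'b'?
        // Box is a hypothetical stand-in for whatever rectangle type the engine uses.
        struct Box { float x, y, w, h; };

        enum CollisionSide { SIDE_NONE, SIDE_LEFT, SIDE_RIGHT, SIDE_TOP, SIDE_BOTTOM };

        CollisionSide collisionSide(const Box& a, const Box& b) {
            float dx = (a.x + a.w / 2) - (b.x + b.w / 2);          // center-to-center offsets
            float dy = (a.y + a.h / 2) - (b.y + b.h / 2);
            float overlapX = (a.w + b.w) / 2 - (dx < 0 ? -dx : dx);
            float overlapY = (a.h + b.h) / 2 - (dy < 0 ? -dy : dy);
            if (overlapX <= 0 || overlapY <= 0) return SIDE_NONE;  // not touching at all
            if (overlapX < overlapY)                               // shallower overlap on x: a side was hit
                return dx > 0 ? SIDE_LEFT : SIDE_RIGHT;            // a is to the right of b, so a's left side hit
            return dy > 0 ? SIDE_TOP : SIDE_BOTTOM;                // a is below b (y grows downward in SDL)
        }

    Resolving the collision by pushing the entity back along just that one axis (by overlapX or overlapY) tends to avoid the walk-through-walls problems that come with four separate collision zones, and it is far cheaper per frame than per-pixel color checks.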

    Read the article

  • Project Management Techniques (high level)

    - by Sam J
    Our software dev team is currently using kanban for our development lifecycles, and, from the reasonably short experience of a few months, I think it's going quite well (certainly compared to a few months ago when we didn't really have a methodology). Our team, however, is directed to do work defined by project managers (not software project managers, just general business), and they're using the PMBOK methodology. Question is, how does a traditional methodology like PMBOK, Prince2 etc fit with a lean software development methodology like kanban or scrum? Is it just wasting everyone's time as all the requirements are effectively drawn up to start with (although inevitably changed along the way)?

    Read the article

  • Master Data Management

    - by Logicalj
    I am looking for a very flexible, dynamic, easy-to-integrate application with as many features as possible for Master Data Management. Since Master Data Management is used to manage operational data, analytical data and master data, I want guidance on what exactly is expected from Master Data Management and what the basic and challenging scenarios are that it needs to cover or resolve. Please guide me on all the possible aspects of Master Data Management, such as data cleansing, data management, and data analysis.

    Read the article

  • Power Management - Sleep / Wake up Server when accessed

    - by KP65
    I have a headless HP ProLiant MicroServer with Ubuntu installed. This machine has Samba shares on it serving media, and I usually RDP or SSH into it. My issue is that I want the machine to go into sleep mode after an hour of idling (so the state is saved from RAM to the hard drive) and appear as if it were turned off. If there is any attempt to access the Samba shares over the LAN, I would like it to wake up. My motherboard supports this function; can anyone point me in the right direction for achieving this easily? Thanks

    Read the article

  • Game Asset Management

    - by user964123
    I am making my first small mobile game in C# XNA. Let's say I have three screens: the main menu, options, and the game screen. A single game session usually lasts about a minute, so the user will alternate frequently between the main menu and the game screen. Therefore, once I load the textures for either screen, I want to keep them in memory to avoid frequent reloading. Both screens share some assets, like their background textures, but differ in others.

    The first solution I came up with is making two texture factory classes, MainScreenAssetFactory and GameScreenAssetFactory, each with their own content manager, and I'll store them in a globally accessible point so that they persist after either screen is destroyed. There is also an OptionsScreenAssetFactory, but I don't want to cache that one since the options screen is rarely visited. A typical factory would look something like this:

        public class MainScreenAssetFactory
        {
            private readonly ContentManager contentManager;

            public MainScreenAssetFactory(IServiceProvider serviceProvider, string rootDirectory)
            {
                contentManager = new ContentManager(serviceProvider)
                {
                    RootDirectory = rootDirectory
                };
            }

            public Texture2D ListElementBackground
            {
                get { return contentManager.Load<Texture2D>("UserTab"); }
            }

            public Texture2D ListElementBulletPoint
            {
                get { return contentManager.Load<Texture2D>("TabIcon"); }
            }

            public Texture2D LoggedOutUser
            {
                get { return contentManager.Load<Texture2D>("LoggedOutUser"); }
            }
        }

    Since the main, options, and game screens share some common resources, instead of loading them more than once I created another class, CommonAssetTexFactory, which holds the common stuff and stays in memory during the app lifetime. For example, this class gets passed to the options screen when it is created.

    However, given my small game with its few assets, I am already finding this solution cumbersome and inflexible. Changing anything requires checking whether an asset is already in the common factory and, if not, modifying the existing factories, and so on. And this is just considering textures; I didn't add sound files yet. I can't imagine bigger games with thousands of resources using this approach. A better idea must exist. Would someone please enlighten me?
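
    For what it's worth, XNA's ContentManager already caches whatever it loads until Unload() is called, so a single long-lived manager can act as the shared store that CommonAssetTexFactory is aiming at. A minimal sketch of that idea (the SharedAssets name and its methods are illustrative, not part of the code above):

        using System;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        // Hypothetical shared cache: one ContentManager kept alive for the whole app.
        // Load<T> returns the same cached instance on repeated calls for the same asset,
        // so screens can ask for textures by name without triggering a reload from disk.
        public class SharedAssets
        {
            private readonly ContentManager content;

            public SharedAssets(IServiceProvider services, string rootDirectory)
            {
                content = new ContentManager(services) { RootDirectory = rootDirectory };
            }

            public Texture2D Texture(string assetName)
            {
                return content.Load<Texture2D>(assetName);   // cached after the first call
            }

            public void UnloadAll()
            {
                content.Unload();   // frees everything, e.g. when the game shuts down
            }
        }

    Per-screen factories can then hold only the asset names they care about, while rarely visited screens such as the options screen can keep their own short-lived ContentManager and call Unload() when dismissed, which is roughly the split described above.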

    Read the article

  • Comprehensive solution for managing patches, event viewing, change management, inventory, etc

    - by Holocryptic
    I'm looking for a solution that incorporates most or all of the following: patch management, server event viewing/tracking, AD change management, ticketing and an internal/external KB, remote access (the ability to shadow user sessions or create new ones), imaging, and inventory. Our environment contains Windows servers and ESXi hosts (we're not completely virtual, but we're moving in that direction), plus various Cisco and Linksys switches and firewalls. This is a tall order, and I don't know if it can be done on a reasonable budget. I've looked and found some questions on SF that deal with some of this:
    http://serverfault.com/questions/72015/active-directory-management-tools-for-medium-sized-forest-less-than-1000-users
    http://serverfault.com/questions/4021/are-there-any-tools-to-do-change-management-with-active-directory-group-policy
    http://serverfault.com/questions/21752/what-is-a-good-patch-update-management-server
    What I'm ideally looking for is a reasonably cheap solution that integrates these features into a central interface. We're a non-profit, so money is a limiting factor (the cheaper, the better, but we have a max of $15k). What we are trying to avoid is having to deal with multiple vendors, while maintaining scalability (we're creating more sites that we'll have to manage). Is this possible, or will we have to cobble together something to make it work for us?

    Read the article

  • Microsoft SQL Server Management Studio not working after installing .Net Framework 4

    - by smith watson
    Yesterday I installed .NET Framework 4 because it was required by Liquid XML Studio, but just after installing it, my SQL Server Management Studio stopped working. As soon as I open the IDE I get the error "Package 'Microsoft SQL Management Studio Package' failed to load". I tried pretty much every solution posted on the internet but could not get it working. Today, when I removed .NET Framework 4, SQL Server Management Studio started working again. Then, just to test, I installed .NET Framework 4 back ... and I started getting the same problem in Management Studio. I want both on my machine! How can I do it? PS: The OS is Windows 7.

    Read the article

  • Windows 7 Computer Management Not Accessible

    - by David
    I am having issues trying to access Computer Management from the start menu. The following is a screenshot of exactly how I want to go to computer management. I know that there are other ways to get to computer management, but I would like to go the easier way through the start menu. I tried restarting the computer and I tried patching the computer. No error message comes up or anything. There is no response from the computer.

    Read the article

  • Case Management API by Koen van Dijk

    - by JuergenKress
    Case Management is a new addition to Oracle BPM in release 11.1.1.7 (PS6). This new release contains the Case Management engine; see Léon's blog at http://leonsmiers.blogspot.nl/ for more details. However, this release does not currently contain a case portal. The Case Management APIs, just like the already existing Oracle BPM APIs, help in developing a portal page with relative ease. This blog will use some real-life examples from the EURent case management application and portal application developed by Oracle. The Oracle BPM Case Management API is a Java-based API that enables developers to programmatically access the new Case Management functionality. It is an elaborate API that can access all the functionality of Oracle Case Management. I will describe two of those functions in this blog: retrieving case data as a DOM (http://www.w3.org/DOM/) and attaching a document to a case.

    Libraries: First of all, when creating a Case Management project you will need to attach the Case Management libraries; these contain all the classes that are in the Case Management API.

    Service client: To do anything with the BPM Case Management API, it is necessary to create a CaseManagementServiceClient object. The Case Management service client is the central piece of the Case Management API, and it can be used to retrieve two different types of services. The first is the case stream service, which contains functionality to upload and download documents to and from a case. The second is the case service, which contains all the other functionality acting upon a case, including but not limited to... Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: ACM API, adaptive Case Management, BPM, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Oracle Leader in Transportation Management

    - by John Murphy
    Oracle Named a Leader in the Transportation Management Systems Market by Leading Analyst Firm
    Redwood Shores, Calif. – October 15, 2012

    News Facts
    Gartner, Inc. has placed Oracle Transportation Management in the Leaders Quadrant of its 2012 report, "Magic Quadrant for Transportation Management Systems (TMS)." (1) Gartner Magic Quadrants position vendors within a particular market segment based on their completeness of vision and ability to execute on that vision. According to the report, "Multiple subcomponents make up a comprehensive TMS across planning (for example, load consolidation, routing, mode selection and carrier selection) and execution (for example, tendering loads to carriers, shipment track and trace, and freight audit and payment)." Built on a modern, flexible, Internet-based architecture, Oracle Transportation Management is a global transportation and logistics operations system that allows companies to minimize cost, optimize service levels, support sustainability initiatives, and create flexible business process automation within their transportation and logistics networks. With a share of 26% of worldwide software revenue for 2011, Oracle is also number one in TMS vendor share according to Gartner's report, "Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016." (2)

    Supporting Quote
    "Shippers and logistics service providers face increasingly complex challenges as they try to reduce costs, secure capacity and improve overall freight efficiency," said Derek Gittoes, vice president, logistics product strategy, Oracle. "We believe our high standing in both Gartner reports is a reflection of Oracle's commitment to addressing these challenges by delivering the industry's broadest and deepest transportation management platform. With a flexible and modern platform, we are able to support customers with both basic transportation needs, as well as those with highly complex logistics requirements."

    Supporting Resources
    Magic Quadrant for Transportation Management Systems
    Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016
    Oracle Transportation Management
    (1) Gartner, Inc., "Magic Quadrant for Transportation Management Systems," by C. Dwight Klappich, August 23, 2012
    (2) Gartner, Inc., "Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016," by Chad Eschinger and C. Dwight Klappich, September 24, 2012.

    About Oracle Applications
    Over 65,000 customers worldwide rely on Oracle's complete, open and integrated enterprise applications to achieve superior results. Oracle provides a secure path for customers to benefit from the latest technology advances that improve the customer software experience and drive better business performance. Oracle Applications Unlimited is Oracle's commitment to customer choice through continuous investment and innovation in current applications offerings. Oracle's next-generation Fusion Applications build upon that commitment, and are designed to work with and evolve Oracle's Applications Unlimited offerings. Oracle's lifetime support policy helps ensure customers will continue to have a choice in upgrade paths, based on their enterprise needs. For more information on the latest Oracle Applications releases go to www.oracle.com/applications

    About Oracle
    Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NASDAQ: ORCL), visit www.oracle.com.
Trademarks: Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. ###

    Read the article

  • How to do the transition from project manager to product manager? [on hold]

    - by E. Topp
    I'm working as project manager / head of software for a small software company, and I was working on my own before this position. I now want to make the transition from my current position to product manager. My questions: What are the differences between the two positions? What are the pitfalls of carrying project management processes and decision making over into a product manager role? What skill set is required for the product manager job? Is the transition easier for a project manager?

    Read the article

  • How can we unify business goals and technical goals?

    - by BAM
    Some background I work at a small startup: 4 devs, 1 designer, and 2 non-technical co-founders, one who provides funding, and the other who handles day-to-day management and sales. Our company produces mobile apps for target industries, and we've gotten a lot of lucky breaks lately. The outlook is good, and we're confident we can make this thing work. One reason is our product development team. Everyone on the team is passionate, driven, and has a great sense of what makes an awesome product. As a result, we've built some beautiful applications that we're all proud of. The other reason is the co-founders. Both have a brilliant business sense (one actually founded a multi-million dollar company already), and they have close ties in many of the industries we're trying to penetrate. Consequently, they've brought in some great business and continue to keep jobs in the pipeline. The problem The problem we can't seem to shake is how to bring these two awesome advantages together. On the business side, there is a huge pressure to deliver as fast as possible as much as possible, whereas on the development side there is pressure to take your time, come up with the right solution, and pay attention to all the details. Lately these two sides have been butting heads a lot. Developers are demanding quality while managers are demanding quantity. How can we handle this? Both sides are correct. We can't survive as a company if we build terrible applications, but we also can't survive if we don't sell enough. So how should we go about making compromises? Things we've done with little or no success: Work more (well, it did result in better quality and faster delivery, but the dev team has never been more stressed out before) Charge more (as a startup, we don't yet have the credibility to justify higher prices, so no one is willing to pay) Extend deadlines (if we charge the same, but take longer, we'll end up losing money) Things we've done with some success: Sacrifice pay to cut costs (everyone, from devs to management, is paid less than they could be making elsewhere. In return, however, we all have creative input and more flexibility and freedom, a typical startup trade off) Standardize project management (we recently started adhering to agile/scrum principles so we can base deadlines on actual velocity, not just arbitrary guesses) Hire more people (we used to have 2 developers and no designers, which really limited our bandwidth. However, as a startup we can only afford to hire a few extra people.) Is there anything we're missing or doing wrong? How is this handled at successful companies? Thanks in advance for any feedback :)

    Read the article

  • Company wants to write custom project management tool, rather than use third party product.

    - by Jason Evans
    At the company where I work, we really want to get into the agile methodology for developing software. One thing that I'm not excited about is the fact that management wants us to build a custom project management feature inside the company's Intranet. I think this is a total waste of time. There are many great third-party tools available (e.g. Axosoft OnTime) that can do everything we need, and more. For the amount of development time it would cost us to build our own project management module, we could buy numerous licences for a third-party product. One concern is that, while we are writing code for a client and using our custom Intranet project management module, we might find bugs in the module that need fixing ASAP. That means having to stop work on the client code to fix the Intranet. That just puts shivers down my spine. Another worry I have is lack of functionality. This custom module is going to be so basic that it will just feel really crap to use. That might sound a bit snooty, but for goodness' sake, many third-party tools are so feature-rich that the idea of having to write our own tool makes me feel very uneasy. In fact, I can't be bothered. What do you guys think? I'm going to raise this issue with my boss, since I feel it's such an important topic to talk about.

    EDIT: Thanks for the great responses, much appreciated. To summarize some of them:

    Money: Naturally my boss does want to save money by not forking out a few hundred pounds for licences. However, for us to write a custom tool, it will take x number of days multiplied by approximately £500, which is our cost. I don't see the business value in this. Management have mentioned that they want to sell the Intranet as a product in the future, but it's so custom to our needs (and downright basic) that in order to give it to another client, I can see us having to fork a version of the code and rebuild the majority of it anyway. So it's not like we're gaining anything there in reuse.

    Features: Having our own custom module means no feature bloat; only the functionality we require will be in the product. My issue is that there are plenty of free, open-source project management tools out there with minimal features already. So even if cost is an issue, we could look into open source. Again, it all boils down to the fact that I don't see the point in writing a project management tool in this day and age. It's a bit like writing your own web browser: why? What's the point? Although management are asking for this tool, it does not mean I'm going to please them and do it just because they asked for it. If something does not make sense, then I will raise it as a concern. At the end of the day, it's the developers who write the code, and it's the developers who make money for a business. Thus, as far as I'm concerned, the devs have a very big role in deciding how a company should manage projects and what tools are used. "I am Spartan, argh!" :)

    Hmm, I've not been able to make this question a wiki for some reason, so I'm going to have to pick an answer to accept. Cheers. Jas.

    Read the article

  • What's New in Database Lifecycle Management in Enterprise Manager 12c Release 3

    - by HariSrinivasan
    Enterprise Manager 12c Release 3 includes improvements and enhancements across every area of the product. This blog provides an overview of the new and enhanced features in the Database Lifecycle Management area. I will dive into specific features in more depth in subsequent posts.

    "What's New?" In this release, we focused on four things:
    1. Lifecycle management support for the new Database 12c Pluggable Databases
    2. Management of long-running processes, such as a security patch cycle (Change Activity Planner)
    3. Management of a large number of systems by
    · Leveraging new framework capabilities for lifecycle operations, such as the new advanced 'emcli' script option
    · Refining features such as configuration search and compliance
    4. Minor improvements and quality fixes to existing features
    · Rollback support for single-instance databases
    · Improved "OFFLINE" patching experience
    · Faster collection of ORACLE_HOME configurations

    Lifecycle Management Support for the new Database 12c - Pluggable Databases
    Database 12c introduces Pluggable Databases (PDBs), the brand-new addition to help you achieve your consolidation goals. Pluggable databases offer unprecedented consolidation at the database level and native lifecycle verbs for creating, plugging and unplugging the databases on a container database (CDB). Enterprise Manager can supplement the capabilities of pluggable databases by offering workflows for migrating, provisioning and cloning them using the software library and the deployment procedures. For example, Enterprise Manager can migrate an existing database to a PDB or clone a PDB by storing a versioned copy in the software library. One can also manage the planned downtime related to patching by migrating the PDBs to a new CDB. While pluggable databases offer these exciting features, they can also pose configuration management and compliance challenges if not managed properly. Enterprise Manager features like inventory management, topology associations and configuration search can mitigate the sprawl of PDBs and also lock them to predefined golden standards using configuration comparison and compliance rules. Learn More ...

    Management of Long-Running Datacenter Processes - Change Activity Planner (CAP)
    Currently, customers resort to cumbersome methods to create, execute, track and monitor change activities within their data center. Some customers use traditional tools such as spreadsheets, project planners and in-house custom-built solutions. Customers often have weekly sync-up meetings across stakeholders to collect status and updates. Some of the change activities, for example the quarterly patch set update (PSU) patch rollouts, are not single tasks but processes with multiple tasks. Some of those tasks are performed within Enterprise Manager Cloud Control (for example, Patch) and some are performed outside of Enterprise Manager Cloud Control. These tasks often run for a longer period of time and involve multiple people or teams. Enterprise Manager Cloud Control supports core data center operations such as configuration management, compliance management, and automation. Enterprise Manager Cloud Control release 12.1.0.3 leverages these capabilities and introduces the Change Activity Planner (CAP). CAP provides the ability to plan, execute, and track change activities in real time. It covers the typical datacenter activities that are spread over a long period of time, across multiple people and multiple targets (even target types).

    Here are some examples of change activity processes in a datacenter:
    · Patching large environments (PSU/CPU patching cycles)
    · Upgrading a large number of database environments
    · Rolling out compliance rules
    · Database consolidation to Exadata environments

    CAP provides user flows for compliance officers/managers (including lead administrators) and operators (DBAs and admins). Managers can create change activity plans for various projects and allocate resources, targets, and the groups affected. Upon activation of the plan, tasks are created and automatically assigned to individual administrators based on target ownership. Administrators (DBAs) can identify their tasks and understand the context, schedules, and priorities. They can complete tasks using Enterprise Manager Cloud Control automation features such as patch plans (or, in some cases, outside Enterprise Manager). Upon completion, compliance is evaluated for validation and the status of the tasks and the plans is updated. Learn More about CAP ...

    Improved Configuration & Compliance Management of a Large Number of Systems
    Improved configuration comparison: Get to the configuration comparison results faster for simple ad hoc comparisons. When performing a 1-to-1 comparison, Enterprise Manager will perform the comparison immediately and take the user directly to the results without having to wait for a job to be submitted and executed. Flattened system comparisons reduce comparison setup time and reduce complexity. In addition to the previously existing topological comparison, users now have an option to compare using a "flattened" methodology. Flattening means removing duplicate target instances within the systems and removing the hierarchy of member targets. The result is that differences are much easier to spot, particularly for specific use cases like comparing patch levels between complex systems like RAC and Fusion Apps.

    Improved Configuration Search & Advanced EMCLI Script Option for Mass Automation
    Enterprise Manager 12c introduces a new framework-level capability to script and stitch together multiple tasks using EMCLI. This powerful capability can be leveraged for lifecycle operations, especially when executing a task over a large number of targets. Specific usages include retrieving a qualified list of targets using configuration search and then using the result set for automation. Another example would be executing a patching operation and then re-executing it on targets where it may have failed. This is complemented by other enhancements, such as better usability for designing reusable configuration searches. In EM 12c Release 3, a simplified UI makes building ad hoc searches even easier. Searching for missing patches is a common use of configuration search; this required the use of the advanced options, which are now clearly defined and easy to use.

    You can also perform configuration search using the EMCLI. Users can find and execute configuration searches from the EMCLI, which can be extremely useful for building sophisticated automation scripts. For example, run the search named "Oracle Databases on Exadata", which finds all database targets running on top of Exadata, and further filter the results by refining by options like name, host, etc.:

    emcli get_targets -config_search="Databases on Exadata" -target_name="exa%"

    Use this in powerful mass automation operations using the new emcli script option, for example to solve the use case of finding all databases running on Exadata and housing E-Business Suite, and patching them.

    Create a Python script with emcli functions and invoke it in the new EMCLI script option shell. Invoke the script with the script option directly:

    $<path to emcli>/emcli @myPSU_Patch.py

    Richer compliance content: There are now over 50 Oracle-provided compliance standards, including new standards for Pluggable Database, Fusion Applications, Oracle Identity Manager, Oracle VM and Internet Directory, plus 9 Oracle-provided real-time monitoring standards containing over 900 compliance rules across 500 facets. These new real-time compliance standards cover both Exadata compute nodes and Linux servers. The result is increased Oracle software coverage and faster time to compliance monitoring on Exadata.

    Enhancements to Patch Management
    Overhauled "OFFLINE" patching experience: The patch upload UI has been simplified to improve the offline patching experience. There is now a single-step process to get patches into the software library. Customers often maintain local repositories of patches, sometimes called software depots, where they host the patches downloaded from My Oracle Support. In the past, you had to move these patches to your desktop and then upload them to Enterprise Manager's software library through the Enterprise Manager Cloud Control user interface. You can now use the following EMCLI command to upload multiple patches directly from a remote location within the data center:

    $emcli upload_patches -location <Path to Patch directory> -from_host <HOSTNAME>

    The upload process filters all of the new patches, automatically selects the relevant metadata files from the location, and uploads the patches to the software library.

    Other improvements: Patch rollback for single-instance databases: a new option in the patch plan rolls back the patches added to the patch plan. Upon execution, the procedure rolls back the patch and the SQL applied to the single-instance database. Improved and faster configuration collection of Oracle Home targets can enable more reliable automation of higher-level functions like provisioning, patching or Database as a Service.

    Just to recap, here is a list of database lifecycle management features (items new or enhanced in Release 3 are described above):
    • Discovery, inventory tracking and reporting
    • Database provisioning, including
    o Migration to pluggable databases
    o Plugging and unplugging of pluggable databases
    o Gold-image-based cloning
    o Scaling of RAC nodes
    • Schema and data change management
    • End-to-end patch management in online and offline modes, including
    o Patch advisories in online (connected with My Oracle Support) and offline mode
    o Patch pre-deployment analysis, deployment and rollback (currently only for single-instance databases)
    o Reporting
    • Upgrade planning and execution of the upgrade process
    • Configuration management
    • Compliance management with out-of-box content
    • Change Activity Planner for planning, designing and tracking long-running processes

    For more information on Enterprise Manager's database lifecycle management capabilities, visit http://www.oracle.com/technetwork/oem/lifecycle-mgmt/index.html

    Read the article

  • Chock-full of Identity Customers at Oracle OpenWorld

    - by Tanu Sood
    Oracle OpenWorld (OOW) 2012 kicks off this coming Sunday. Oracle OpenWorld is known to bring in Oracle customers, organizations big and small, from all over the world, and Identity Management is no exception. If you are looking to catch up with Oracle Identity Management customers, hear first-hand about their implementation experiences, and discuss industry trends, business drivers, solutions and more at OOW, here are some sessions we recommend you attend:

    Monday, October 1, 2012
    CON9405: Trends in Identity Management, 10:45 a.m. – 11:45 a.m., Moscone West 3003. Subject matter experts from Kaiser Permanente and SuperValu share the stage with Amit Jasuja, Senior Vice President, Oracle Identity Management and Security, to discuss how the latest advances in Identity Management are helping customers address emerging requirements for securely enabling cloud, social and mobile environments.
    CON9492: Simplifying your Identity Management Implementation, 3:15 p.m. – 4:15 p.m., Moscone West 3008. Implementation experts from British Telecom, Kaiser Permanente and UPMC participate in a panel to discuss best practices, key strategies and lessons learned based on their own experiences. Attendees will hear first-hand what they can do to streamline and simplify their identity management implementation framework for a quick return on investment and maximum efficiency.
    CON9444: Modernized and Complete Access Management, 4:45 p.m. – 5:45 p.m., Moscone West 3008. We have come a long way from the days when web single sign-on addressed the core business requirements. Today, as technology and business evolve, organizations are seeking new capabilities like federation, token services, fine-grained authorizations, web fraud prevention and strong authentication. This session will explore the emerging requirements for access management and what a complete solution looks like, complemented by real-world customer case studies from ETS, Kaiser Permanente and TURKCELL, and product demonstrations.

    Tuesday, October 2, 2012
    CON9437: Mobile Access Management, 10:15 a.m. – 11:15 a.m., Moscone West 3022. With more than 5 billion mobile devices on the planet and an increasing number of users using their own devices to access corporate data and applications, securely extending identity management to mobile devices has become a hot topic. This session will feature Identity Management evangelists from companies like Intuit, NetApp and Toyota to discuss how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access.
    CON9491: Enhancing the End-User Experience with Oracle Identity Governance Applications, 11:45 a.m. – 12:45 p.m., Moscone West 3008. As organizations seek to encourage more and more user self service, business users are now primary end users for identity management installations. Join experts from Visa and Oracle as they explore how Oracle Identity Governance solutions deliver complete identity administration and governance solutions with support for emerging requirements like cloud identities and mobile devices.
    CON9447: Enabling Access for Hundreds of Millions of Users, 1:15 p.m. – 2:15 p.m., Moscone West 3008. Dealing with scale problems? Looking to address identity management requirements with a million or so users in mind? Then take note of Cisco's implementation. Join this session to hear first-hand how Cisco tackled identity management and scaled their implementation to bolster security and enforce compliance.
    CON9465: Next Generation Directory – Oracle Unified Directory, 5:00 p.m. – 6:00 p.m., Moscone West 3008. Get the 360-degree perspective from a solution provider, an implementation services partner and the customer in this session to learn how the latest Oracle Unified Directory solutions can help you build a directory infrastructure that is optimized to support cloud, mobile and social networking and yet delivers on scale and performance.

    Wednesday, October 3, 2012
    CON9494: Sun2Oracle: Identity Management Platform Transformation, 11:45 a.m. – 12:45 p.m., Moscone West 3008. Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum.
    CON9631: Entitlement-centric Access to SOA and Cloud Services, 11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7. How do you enforce that a junior trader can submit 10 trades per day, with a total value of $5M, only if market volatility is low? How can you hide sensitive patient information from clerical workers but make it visible to specialists as long as consent has been given or there is an emergency? How do you externalize such entitlements to allow dynamic changes without having to touch the application code? In this session, UberEther and Herbalife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service, not just within your intranet but also right at the perimeter.
    CON3957: Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012, 11:45 a.m. – 12:45 p.m., Moscone West 3003. In this session, Virgin Media, the U.K.'s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand.
    CON9493: Identity Management and the Cloud, 1:15 p.m. – 2:15 p.m., Moscone West 3008. Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity Management with cloud services and how some are offering identity management as a cloud service.
    CON9624: Real-Time External Authorization for Middleware, Applications, and Databases, 3:30 p.m. – 4:30 p.m., Moscone West 3008. As organizations seek to grant access to broader and more diverse user populations, the importance of centrally defined and applied authorization policies becomes critical, both to identify who has access to what and to improve the end-user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals.
    CON9625: Taking Control of WebCenter Security, 5:00 p.m. – 6:00 p.m., Moscone West 3008. Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. Leveraging LADWP's use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions.

    Thursday, October 4, 2012
    CON9662: Securing Oracle Applications with the Oracle Enterprise Identity Management Platform, 2:15 p.m. – 3:15 p.m., Moscone West 3008. Oracle Enterprise Identity Management solutions are designed to secure access to Oracle Applications and simplify compliance. Whether you are an EBS customer looking to upgrade from Oracle Single Sign-On or a Fusion Applications customer seeking to leverage the Identity instance as an enterprise security platform, this session with Qualcomm and Oracle will help you understand how to get the most out of your investment.

    And here's the complete listing of all the Identity Management sessions at Oracle OpenWorld.

    Read the article

  • Agent versus Agentless management

    - by Owen Allen
    I got a couple of questions about Agentless asset management: "What does agentless management do for an asset?" Agentless management is one of the two ways that you can manage an operating system. Rather than installing an Agent Controller on the OS, agentless management uses SSH to regularly check the system and gather monitoring data. Many of the actions that would be available on an agent-managed system are available on an agentless system, but actions such as running reports or updating an Oracle Solaris 10 or Linux OS are not available. A table showing the capabilities of agentless management is here. "What permissions does agentless management require?" Agentless management still requires root credentials. If you can't log into the system as root, you can provide one set of credentials for the login, and then a set of root credentials to switch to.

    Read the article

  • Product News: Oracle Unveils a Waste Management Solution for the Oracle E-Business Suite

    - by Evelyn Neumayr
    Oracle recently announced a new product to help organizations reduce the cost of compliance with international hazmat (hazardous materials), recycling, and environmental protection laws. This new waste management solution for Oracle E-Business Suite extends the capabilities of Oracle Depot Repair, Oracle Transportation Management and Oracle Global Trade Management. It automates and monitors waste management processes to help ensure that hazardous materials are tracked and handled in accordance with regulatory requirements. Oracle's waste management solution for the Oracle E-Business Suite leverages Oracle Transportation Management and Oracle Global Trade Management, enabling customers to view in-transit inventory across the extended supply chain, while also providing a single repository for all legal, regulatory and compliance-related information. Read here for more information.

    Read the article

  • I've inherited 200K lines of spaghetti code -- what now?

    - by kmote
    I hope this isn't too general of a question; I could really use some seasoned advice. I am newly employed as the sole "SW Engineer" in a fairly small shop of scientists who have spent the last 10-20 years cobbling together a vast code base. (It was written in a virtually obsolete language: G2 -- think Pascal with graphics.) The program itself is a physical model of a complex chemical processing plant; the team that wrote it has incredibly deep domain knowledge but little or no formal training in programming fundamentals. They've recently learned some hard lessons about the consequences of non-existent configuration management. Their maintenance efforts are also greatly hampered by the vast accumulation of undocumented "sludge" in the code itself. I will spare you the "politics" of the situation (there's always politics!), but suffice it to say, there is not a consensus of opinion about what is needed for the path ahead. They have asked me to begin presenting to the team some of the principles of modern software development. They want me to introduce some of the industry-standard practices and strategies regarding coding conventions, lifecycle management, high-level design patterns, and source control. Frankly, it's a fairly daunting task and I'm not sure where to begin. Initially, I'm inclined to tutor them in some of the central concepts of The Pragmatic Programmer, or Fowler's Refactoring ("Code Smells", etc.). I also hope to introduce a number of Agile methodologies. But ultimately, to be effective, I think I'm going to need to hone in on 5-7 core fundamentals; in other words, what are the most important principles or practices that they can realistically start implementing that will give them the most "bang for the buck"? So that's my question: What would you include in your list of the most effective strategies to help straighten out the spaghetti (and prevent it in the future)?

    Read the article

  • Project Management Helps AmeriCares Deliver International Aid

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Alison Weiss

    Handle with Care
    Sound project management helps AmeriCares bring international aid to those in need. The stakes are always high for AmeriCares. On a mission to restore health and save lives during times of disaster, the nonprofit international relief and humanitarian aid organization delivers donated medicines, medical supplies, and humanitarian aid to people in the U.S. and around the globe. Founded in 1982 with the express mission of responding as quickly and efficiently as possible to help people in need, the Stamford, Connecticut-based AmeriCares has delivered more than US$10.5 billion in aid to 147 countries over the past three decades. "It's critically important to us that we steward all the donations and that the medical supplies and medicines get to people as quickly as possible with no loss," says Kate Sears, senior vice president for finance and technology at AmeriCares. "Whether we're shipping IV solutions to victims of cholera in Haiti or antibiotics to Somali famine victims, we need to get the medicines there sooner because it means more people will be helped and lives improved or even saved." Ten years ago, the tracking systems used by AmeriCares associates were paper-based. In recent years, staff started using spreadsheets, but the tracking processes were not standardized between teams. "Every team was tracking completely different information," says Megan McDermott, senior associate, Sub-Saharan Africa partnerships, at AmeriCares. "It was just a few key things. For example, we tracked the date a shipment was supposed to arrive and the date we got reports from our partner that a hospital received aid on their end." While the data was accurate, much detail was being lost in the process. AmeriCares management knew it could do a better job of tracking this enterprise data and in 2011 took a significant step by implementing Oracle's Primavera P6 Professional Project Management. "It's a comprehensive solution that has helped us improve the monitoring and controlling processes. It has allowed us to do our distribution better," says Sears. In addition, the implementation effort has been a change agent, helping AmeriCares leadership rethink project management across the entire organization. Initially, much of the focus was on standardizing processes, but staff members also learned the importance of thinking proactively to prevent possible problems and evaluating results to determine if goals and objectives are truly being met. Such data about process efficiency and overall results is critical not only to AmeriCares staff but also to the donors supporting the organization's life-saving missions.

    Efficiency Saves Lives
    One of AmeriCares' core operations is to gather product donations from the private sector, establish where the most-urgent needs are, and solicit monetary support to send the aid via ocean cargo or airlift to welfare- and health-oriented nongovernmental organizations, hospitals, health networks, and government ministries based in areas in need. In 2011 alone, AmeriCares sent more than 3,500 shipments to 95 countries in response to both ongoing humanitarian needs and more than two dozen emergencies, including deadly tornadoes and storms in the U.S. and the devastating tsunami in Japan. When it comes to nonprofits in general, donors want to know that the charitable organizations they support are using funds wisely.
Typically, nonprofits are evaluated by donors in terms of efficiency, an area where AmeriCares has an excellent reputation: 98 percent of expenses go directly to supporting programs and less than 2 percent represent administrative and fundraising costs. Donors, however, should look at more than simple efficiency, says Peter York, senior partner and chief research and learning officer at TCC Group, a nonprofit consultancy headquartered in New York, New York. They should also look at whether organizations have the systems in place to sustain their missions and continue to thrive. An expert on nonprofit organizational management, York has spent years studying sustainable charitable organizations. He defines them as nonprofits that are able to achieve the ongoing financial support to stay relevant and continue doing core mission work. In his analysis of well over 2,500 larger nonprofits, York has found that many are not sustaining, and are actually scaling back in size. “One of the biggest challenges of nonprofit sustainability is the general public’s perception that every dollar donated has to go only to the delivery of service,” says York. “What our data shows is that there are some fundamental capacities that have to be there in order for organizations to sustain and grow.” York’s research highlights the importance of data-driven leadership at successful nonprofits. “You’ve got to have the tools, the systems, and the technologies to get objective information on what you do, the people you serve, and the results you’re achieving,” says York. “If leaders don’t have the knowledge and the data, they can’t make the strategic decisions about programs to take organizations to the next level.” Historically, AmeriCares associates have used time-tested and cost-effective strategies to ship and then track supplies from donation to delivery to their destinations in designated time frames. When disaster strikes, AmeriCares ships by air and generally pulls out all the stops to deliver the most urgently needed aid within the first few days and weeks. Then, as situations stabilize, AmeriCares turns to delivering sea containers for the postemergency and ongoing aid so often needed over the long term. According to McDermott, getting a shipment out the door is fairly complicated, requiring as many as five different AmeriCares teams collaborating together. The entire process can take months—from when products are received in the warehouse and deciding which recipients to allocate supplies to, to getting customs and governmental approvals in place, actually shipping products, and finally ensuring that the products are received in-country. Delivering that aid is no small affair. “Our volume exceeds half a billion dollars a year worth of donated medicines and medical supplies, so it’s a sizable logistical operation to bring these products in and get them out to the right place quickly to have the most impact,” says Sears. “We really pride ourselves on our controls and efficiencies.” Adding to that complexity is the fact that the longer it takes to deliver aid, the more dire the human need can be. Any time AmeriCares associates can shave off the complicated aid delivery process can translate into lives saved. “It’s really being able to track information consistently that will help us to see where are the bottlenecks and where can we work on improving our processes,” says McDermott. 
Setting a Standard Productivity and information management improvements were key objectives for AmeriCares when staff began the process of implementing Oracle’s Primavera solution. But before configuring the software, the staff needed to take the time to analyze the systems already in place. According to Greg Loop, manager of database systems at AmeriCares, the organization received guidance from several consultants, including Rich D’Addario, consulting project manager in the Primavera Global Business Unit at Oracle, who was instrumental in shepherding the critical requirements-gathering phase. D’Addario encouraged staff to begin documenting shipping processes by considering the order in which activities occur and which ones are dependent on others to get accomplished. This exercise helped everyone realize that to be more efficient, they needed to keep track of shipments in a more standard way. “The staff didn’t recognize formal project management methodology,” says D’Addario. “But they did understand what the most important things are and that if they go wrong, an entire project can go off course.” Before, if a boatload of supplies was being sent to Haiti and there was a problem somewhere, a lot of time was taken up finding out where the problem was—because staff was not tracking things in a standard way. As a result, even more time was needed to find possible solutions to the problem and alert recipients that the aid might be delayed. “For everyone to put on the project manager hat and standardize the way every single thing is done means that now the whole organization is on the same page as to what needs to occur from the time a hurricane hits Haiti and when a boat pulls in to unload supplies,” says D’Addario. With so much care taken to put a process foundation firmly in place, configuring the Primavera solution was actually quite simple. Specific templates were set up for different types of shipments, and dashboards were implemented to provide executives with clear overviews of every project in the system. AmeriCares’ Loop reports that system planning, refining, and testing, followed by writing up documentation and training, took approximately four months. The system went live in spring 2011 at AmeriCares’ Connecticut headquarters. While the nonprofit has an international presence, with warehouses in Europe and offices in Haiti, India, Japan, and Sri Lanka, most donated medicines come from U.S. entities and are shipped from the U.S. out to the rest of the world. In addition, all shipments are tracked from the U.S. office. AmeriCares doesn’t expect the Primavera system to take months off the shipping time, especially for sea containers. However, any time saved is still important because it will allow aid to be delivered to people more quickly at a lower overall cost. “If we can trim a day or two here or there, that can translate into lives that we’re saving, especially in emergency situations,” says Sears. A Cultural Change Beyond the measurable benefits that come with IT-driven process improvement, AmeriCares management is seeing a change in culture as a result of the Primavera project. One change has been treating every shipment of aid as a project, and everyone involved with facilitating shipments as a project manager. “This is a revolutionary concept for us,” says McDermott. 
“Before, we were used to thinking we were doing logistics—getting a container from point A to point B without looking at it as one project and really understanding what it meant to manage it.” AmeriCares staff is also happy to report that collaboration within the organization is much more efficient. When someone creates a shipment in the Primavera system, the same shared template is used, which means anyone can log in to the system to see the status of a shipment. Knowledgeable staff can access a shipment project to help troubleshoot a problem. Management can easily check the status of projects across the organization. “Dashboards are really useful,” says McDermott. “Instead of going into the details of each project, you can just see the high-level real-time information at a glance.” The new system is helping team members focus on proactively managing shipments rather than simply reacting when problems occur. For example, when a container is shipped, documents must be included for customs clearance. Now, the shipping template has built-in reminders to prompt team members to ask for copies of these documents from freight forwarders and to follow up with partners to discover if a shipment is on time. In the past, staff may not have worked on securing these documents until they’d been notified a shipment had arrived in-country. Another benefit of capturing and adopting best practices within the Primavera system is that staff training is easier. “Capturing the processes in documented steps and milestones allows us to teach new staff members how to do their jobs faster,” says Sears. “It provides them with the knowledge of their predecessors so they don’t have to keep reinventing the wheel.” With the Primavera system already generating positive results, management is eager to take advantage of advanced capabilities. Loop is working on integrating the company’s proprietary inventory management system with the Primavera system so that when logistics or warehousing operators input data, the information will automatically go into the Primavera system. In the past, this information had to be manually keyed into spreadsheets, often leading to errors. Mining Historical Data Another feature on the horizon for AmeriCares is utilizing Primavera P6 Professional Project Management reporting capabilities. As the system begins to include more historical data, management soon will be able to draw on this information to conduct analysis that has not been possible before and create customized reports. For example, at the beginning of the shipment process, staff will be able to use historical data to more accurately estimate how long the approval process should take for a particular country. This could help ensure that food and medicine with limited shelf lives do not get stuck in customs or used beyond their expiration dates. The historical data in the Primavera system will also help AmeriCares with better planning year to year. The nonprofit’s staff has always put together a plan at the beginning of the year, but this has been very challenging simply because it is impossible to predict disasters. Now, management will be able to look at historical data and see trends and statistics as they set current objectives and prepare for future need. In addition, this historical data will provide AmeriCares management with the ability to review year-end data and compare actual project results with goals set at the beginning of the year—to see if desired outcomes were achieved and if there are areas that need improvement. 
It’s this type of information that is so valuable to donors. And, according to York, project management software can play a critical role in generating the data to help nonprofits sustain and grow. “It is important to invest in systems to help replicate, expand, and deliver services,” says York. “Project management software can help because it encourages nonprofits to examine program or service changes and how to manage moving forward.” Sears believes that AmeriCares donors will support the return on investment the organization will achieve with the Primavera solution. “It won’t be financial returns, but rather how many more people we can help for a given dollar or how much more quickly we can respond to a need,” says Sears. “I think donors are receptive to such arguments.” And for AmeriCares, it is all about the future and increasing results. The project management environment currently may be quite simple, but IT staff plans to expand the complexity and functionality as the organization grows in its knowledge of project management and the goals it wants to achieve. “As we use the system over time, we’ll continue to refine our best practices and accumulate more data,” says Sears. “It will advance our ability to make better data-driven decisions.”

    Read the article

  • Centralized Windows/Mac Patch Management that is easy to use

    - by BiggsTRC
    I'm looking for advice on which patch management solutions you would recommend based upon your experience, and which ones you would not recommend. We have a mixed network of Windows and Mac clients. Our central servers are all Windows servers, although I have considered putting in a Mac server to better handle our Mac clients. The issue we are facing currently is that we need to maintain the patches on all of our third-party applications. Right now we use WSUS, which handles patching of Windows and some Microsoft products, but that is about it. I need something to cover the other applications, specifically things like Adobe products (Reader, Flash, Dreamweaver, etc.). Our network isn't that big (maybe 200 clients) and I don't have a person to dedicate just to patching and maintaining a patch management solution, so very large and complicated solutions like System Center are most likely out. I have recently been looking at Dell's Kace K1000 solution (http://www.kace.com/products/systems-management-appliance/). It seems simple and it provides a lot of tools in one package that I would like/need as well. I like the fact that it is self-contained in an appliance and that it is designed for environments like mine. However, I'm not sure if it is the best solution. I've also looked a bit at Shavlik's Netchk solution (http://www.shavlik.com/netchk-protect.aspx), but I don't need an anti-virus product. However, it looks like they might have a very good patch database. My question is this: What are your thoughts on these two products? Are there better products out there? Are there issues that I'm not considering? I want something that is very good at patching a broad range of products, that is simple to use, that takes a minimal amount of management (like WSUS), and that (hopefully) works with both Mac and Windows.

    Read the article

  • SQL SERVER – Another lesser known feature of SQL Server Management Studio 2012 – Guest Post by Balmukund Lakhani

    - by Pinal Dave
    This is a fantastic blog post from my dear friend Balmukund ( blog | twitter | facebook ). He presented a fantastic session at our last UG meeting, and there were lots of requests from attendees that he blog about it. Well, here is the blog post about that very popular UG session. Let us read the entire blog post in the voice of Balmukund himself. In one of my previous guest blogs on SQL Authority, I wrote about the “Additional Connection Parameter” tab of the login screen in SQL Server Management Studio (a.k.a. SSMS). Along similar lines, this blog is going to show a lesser-known new feature of the main login screen (“Connect to Server”) of SSMS 2012. You might have seen the screen below countless times, and you might wonder what there is to blog about in such a simple screen. Well, continue reading and you will get the answer. Many times a DBA has to log in to a production server from a non-regular machine, maybe a developer’s workstation. You log in to SQL, do your work, and close Management Studio. Do you know that your server name is saved by Management Studio? It is, of course, a very useful feature, because you may not want to type the server name or IP address every time: whichever servers you have connected to are stored by Management Studio. But sometimes it’s annoying! What would you do if you wanted SQL Server Management Studio to forget “all” the servers listed in the Server name drop-down? To do that, you need to know how and where the list is stored. You can use one of my favorite tools from Sysinternals, called Process Monitor (also known as ProcMon), and easily figure out that it is stored in a file under your Windows user profile. For SQL 2008 R2 Management Studio the file is %appdata%\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin, and for SQL Server 2012, ProcMon shows the path as %appdata%\Microsoft\Microsoft SQL Server\110\Tools\Shell\SqlStudio.bin. So far, you might wonder, where is the new feature? I have been asked by many users how to delete entries from the SSMS “Connect to Server” server name list. Well, unofficially, you can directly delete the file we found via ProcMon; note that deleting the file to get rid of the server list is not officially supported by Microsoft. A better way to achieve this is provided in SSMS 2012. To delete a server from the list, highlight the name you want to delete (via keyboard or mouse) and then press the Delete key. Multi-select is not possible, so entries have to be removed one by one, but you can delete as many as you want. I have deleted a few from the first screenshot, and here is the modified version. This is not available in SQL 2008 R2 and its previous versions; the feature came from feedback given to the SQL Server product group. Hope you have learned something new today! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
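
    For illustration only, here is a minimal sketch of the unofficial clean-up approach described in the post, written in Python purely as an assumed helper (neither Python nor this script appears in the original post). It deletes the SqlStudio.bin file that ProcMon revealed for SSMS 2012; close Management Studio first, and remember that Microsoft does not officially support this.

        import os

        # Location of the SSMS 2012 "Connect to Server" history file,
        # as identified with Process Monitor in the post above.
        bin_path = os.path.expandvars(
            r"%appdata%\Microsoft\Microsoft SQL Server\110\Tools\Shell\SqlStudio.bin"
        )

        # Deleting the file wipes the entire server name list (unofficial and
        # unsupported). Close SSMS first so the file is not locked or simply
        # rewritten when SSMS exits.
        if os.path.exists(bin_path):
            os.remove(bin_path)
            print("Cleared SSMS 2012 server history:", bin_path)
        else:
            print("Nothing to clear; file not found at", bin_path)

    The supported route in SSMS 2012 remains the in-dialog Delete key described above, which removes entries one at a time rather than wiping the whole list.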

    Read the article

  • What micro web-framework has the lowest overhead but includes templating

    - by Simon Martin
    I want to rewrite a simple, small (10-page) website, and besides a contact form it could be written in pure HTML. It is currently built with classic ASP and Dreamweaver templates. The reason I'm not simply writing 10 HTML pages is that I want to keep the layout all in one place, so I would need either includes or a master page. I don't want to use Dreamweaver templates, or batch processing (like org-mode), because I want to be able to edit using Notepad (or Visual Studio), since occasionally I might need to edit a file on the server (Go Daddy's IIS admin interface will let me edit text). I don't want to use ASP.NET MVC or WebForms (which I use in my day job) because I don't need all the overhead they bring with them when essentially I'm serving up 9 static files, 1 contact form, and 1 list of clubs (which I aim to filter with jQuery). The shared hosting package I have on Go Daddy seems to take a long time to spin up when serving aspx files. Currently the clubs page is driven from an MS SQL database that I try to keep up to date by manually checking the dojo locator on the main HQ pages and editing the entries myself, which is again way over the top. I aim to get a text file with the club details (probably in JSON or XML format) and use that as the source for the clubs page. There will need to be a bit of programming for this, as the HQ site is unable to provide an extract/feed, so something will have to scrape the site periodically to update my clubs persistence file. I'd like that to be automated, but I'm happy to have it triggered on a visit to the clubs page so I don't need to worry about scheduling a job. I would probably have a separate process that updates the persistence file and has nothing to do with the rest of the site. Ideally I'd like to use Mercurial (or Git) to publish. I know Bitbucket (and GitHub) both serve static-page sites, so they wouldn't work in this scenario (dynamic pages and a contact form), but that's the model I'd like to use if there is such a thing. My requirements are: a simple templating system, one place to define headers, footers, menu, etc., that can be edited using just Notepad; a very minimal/lightweight framework, since I don't need a monster for 10 pages; and it must run either on IIS7 (shared Go Daddy Windows hosting) or another free host.
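
    For illustration of the kind of setup the question describes (this is an editorial sketch, not part of the original post): Flask is one example of a micro framework with built-in Jinja2 templating, where a single layout file holds the header, footer, and menu and every page extends it. The file names, the clubs.json format, and the choice of Flask itself are assumptions made for the example, and whether it could run on the Go Daddy shared Windows/IIS7 plan is a separate question.

        # app.py - a minimal sketch, assuming Flask is installed and a
        # clubs.json file sits next to this script.
        import json
        from flask import Flask, render_template

        app = Flask(__name__)

        @app.route("/")
        def home():
            # templates/home.html extends templates/layout.html, so the
            # header, footer, and menu are defined in exactly one place.
            return render_template("home.html")

        @app.route("/clubs")
        def clubs():
            # clubs.json is the persistence file a separate scraper keeps
            # up to date; this page just renders whatever is in it.
            with open("clubs.json", encoding="utf-8") as f:
                club_list = json.load(f)
            return render_template("clubs.html", clubs=club_list)

        if __name__ == "__main__":
            app.run(debug=True)

    Other lightweight options (Bottle or web.py on the Python side, or ASP.NET Web Pages with Razor if staying on IIS) would follow the same shape: one layout file, a handful of page templates, and a small data file for the clubs list.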

    Read the article

< Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >