Search Results

Search found 16903 results on 677 pages for 'single responsibility'.


  • SQL SERVER – Sharing your ETL Resources Across Applications with Ease

    - by pinaldave
    Frequently an organization will find that the same resources are used in multiple ETL applications, for example, the same database, general-purpose processing logic, or file system locations. Creating an easy way to reuse these resources across multiple applications would increase efficiency and reduce errors. Moreover, not every ETL developer has the same skill set: one developer may be more adept at writing code while another is more comfortable configuring database connections. Real productivity gains come when these developers are able to work independently while still making their work available to others assigned to the same project. These are the benefits of a centralized version control system.

    Of course, most version control systems could be used to store and serve files, but the real need is to store and serve entire ETL applications so that each developer's ongoing work can immediately benefit from another developer's completed work. In other words, the version control system needs to be tightly integrated with the tools used to develop the ETL application. The following screen shot shows such a tool.

    [Screen shot: desktop ETL tool that tightly integrates with a central version control system]

    Developers can check out or commit entire projects or just a single artifact. Each artifact may be managed independently, so if you need to go back to an earlier version of one artifact, changes you may have made to other artifacts are not lost. By being tightly integrated into the graphical environment used to create and edit the project artifacts, it is extremely easy and straightforward to move your files to and from the version control system, and there is no dependency on another vendor's version control system. The built-in version control system is optimized for managing the artifacts of ETL applications.

    It is equally important that the version control system supports all of the actions one typically performs, such as rollbacks, locking and unlocking of files, and the ability to resolve conflicts. Note that this particular ETL tool also has the capability to switch back and forth between multiple version control systems.

    It also needs to be easy to determine the status of an artifact: not just that it has been committed or modified, but when and by whom. Generally you must query the version control system for this information, but having it displayed within the development environment is more desirable.

    Whose ETL tool works in this fashion? Last month I mentioned the data integration solution offered by expressor software. The version control features I described in this post are all available in their just-released expressor 3.1 Standard Edition through the integration of their expressor Studio development environment with a centralized metadata repository and version control system. You can download their Studio application, which is free, or evaluate the full Standard Edition on your own hardware. It may be worth your time.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Move Database Files MDF and LDF to Another Location

    - by pinaldave
    When a novice DBA or developer creates a database, they use SQL Server Management Studio. Additionally, the T-SQL script to create a database is very easy as well: you can just write CREATE DATABASE DatabaseName and it will create a new database for you. The point to remember here is that it will create the database at the default location specified for the SQL Server instance (this default location can be changed, and we will see that in future blog posts).

    Now, once the database goes into production it will start to grow. It is not common to keep the database files at the same location where the OS is installed; usually database files are on a SAN, a separate disk array, or SSDs. This is done for performance and manageability reasons. The challenge comes up when a database that was installed at the unpreferred default location needs to move to a different location. Here is a quick tutorial on how you can do it.

    Let us assume we have two folders, loc1 and loc2, and we want to move the database files from loc1 to loc2.

    USE MASTER;
    GO
    -- Take database into single user mode
    -- if you are facing errors.
    -- This may terminate your active transactions for the database.
    ALTER DATABASE TestDB
    SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    GO
    -- Detach DB
    EXEC MASTER.dbo.sp_detach_db @dbname = N'TestDB'
    GO

    Now move the files from loc1 to loc2. You can then reattach the files with their new locations.

    -- Move MDF File from Loc1 to Loc2
    -- Re-attach DB
    CREATE DATABASE [TestDB] ON
    ( FILENAME = N'F:\loc2\TestDB.mdf' ),
    ( FILENAME = N'F:\loc2\TestDB_log.ldf' )
    FOR ATTACH
    GO

    Well, we are done. There is a little warning here for you: ROLLBACK IMMEDIATE may terminate your active transactions, so do not use it casually. Use it only if you are confident the transactions are not needed, or if for some reason there is a connection to the database which you are not able to kill manually after review.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
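    After the reattach, a quick way to confirm the move worked is to query the catalog (a minimal sketch; TestDB and the F:\loc2 paths are the example values used above):

    -- Confirm the files now point at the new location
    SELECT name, physical_name, state_desc
    FROM sys.master_files
    WHERE database_id = DB_ID(N'TestDB');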

    Read the article

  • links for 2011-03-15

    - by Bob Rhubart
    Dr. Frank Munz: Resize AWS EC2 Cloud Instances
    Dr. Munz says: "You cannot dynamically resize a running cloud instance. E.g. there is no API call to ask for 2.2 GHz CPU speed instead of 1.8 GHz or to dynamically add another 3.5 GB of RAM." (tags: oracle cloud amazon ec2)

    Roddy Rodstein: Oracle VM Manager Architecture and Scalability
    Rodstein says: "Oracle VM Manager can be installed in an all-in-one configuration using the default Oracle 10g Express Database or in a more traditional two tier architecture with an OC4J web tier and a 10 or 11g database tier." (tags: oracle otn virtualization oraclevm)

    Mark Nelson: Getting started with Continuous Integration for SOA projects
    Nelson says: "I am exploring how to use Maven and Hudson to create a continuous integration capability for SOA and BPM projects. This will be the first post of several on this topic, and today we will look at setting up some simple continuous integration for a single SOA project." (tags: oracle maven hudson soa bpm)

    5 New Java Champions (The Java Source)
    Tori Wieldt shares the big news. Congratulations to new Java Champs Jonas Bonér, James Strachan, Rickard Oberg, Régina ten Bruggencate, and Clara Ko. (tags: oracle java)

    Alert for Forms customers running Oracle Forms 10g (Grant Ronald's Blog)
    Ronald says: "While you might have been happily running your Forms 10g applications for about 5 years or so now, the end of premier support is creeping up and you need to start planning for a move to Oracle Forms 11g." (tags: oracle oracleforms)

    Brenda Michelson: Enterprise Architecture Rant #4,892
    "I'm increasingly concerned about the macro-direction of our field, as we continue to suffer ivory tower enterprise architecture punditry, rigid frameworks and endless philosophical waxing." - Brenda Michelson (tags: entarch enterprisearchitecture ivorytower)

    Amitabh Apte: Enterprise Architecture - Different Perspectives
    "Business does not need Enterprise Architecture," says Apte, "it needs value and outcomes from the EA function." (tags: entarch enterprisearchitecture)

    First Ever MySQL on Windows Online Forum - March 16, 2011 (Oracle's MySQL Blog)
    Monica Kumar shares the details. (tags: oracle mysql mswindows)

    Jeff Davies: Running Multiple WebLogic and OSB Domains
    "There is a small 'gotcha' if you want to create multiple domains on a development machine," says Jeff Davies. But don't worry - there's a solution. (tags: oracle soa osb weblogic servicebus)

    The Arup Nanda Blog: Good Engineering
    "Engineering is not about being superficially creative," Nanda says, "it's about reliability and trustworthiness." (tags: oracle engineering software technology)

    Welcome to the SOA & E2.0 Partner Community Forum (SOA Partner Community Blog) (tags: ping.fm)

    Read the article

  • JCP EC Nominations and Meet the Candidates Call

    - by heathervc
    The Nominations period for the 2012 JCP EC Elections closes tomorrow, 11 October, at midnight Pacific time. Eligible JCP Members (all current JSPA 2 signers) may nominate themselves. You will need your Elections credentials, which were sent to the primary contacts of all eligible JCP Members via email last week, to complete the nomination.

    This year all ratified candidates (there are 4 proposed) and elected candidates (there are 7 so far) will appear on one ballot; the top 2 candidates will win elected seats. This year, the selected EC Members will serve a single-year term. Following the 2012 Elections, there will be one merged EC (approved through JSR 355), and a new JCP version, JCP 2.9, will be in effect. In 2013, all EC members will stand for election to complete the merge process described in the JCP 2.9 process document.

    All of the candidates' nomination materials are now available. The ratified candidates are: Cinterion, Credit Suisse, Fujitsu and HP. The elected candidates are: Cisco Systems, CloudBees, Giuseppe Dell'Abate, London Java Community, MoroccoJUG, Software AG, and Zero Turnaround.

    Next week, on 18 October, we will hold an open teleconference for the Java Community to meet the candidates and ask questions regarding their nominations. We hope you will be able to participate in the call. Should the time be inconvenient, a recording will be made available for download, and candidate questions may be posted on this blog entry or sent to [email protected].

    Topic: Meet the EC Candidates
    Date: Thursday, October 18, 2012
    Time: 9:30 am, Pacific Daylight Time (San Francisco, GMT-07:00)
    Meeting Number: 807 818 225
    Meeting Password: MeetEC

    To join the online meeting (now from mobile devices):
    1. Go to https://jcp.webex.com/jcp/j.php?ED=186721592&UID=0&PW=NMmUzNjY5ZTMw&RT=MiM0
    2. If requested, enter your name and email address.
    3. If a password is required, enter the meeting password: MeetEC
    4. Click "Join".
    To view in other time zones or languages, please click the link: https://jcp.webex.com/jcp/j.php?ED=186721592&UID=0&PW=NMmUzNjY5ZTMw&ORT=MiM0

    To join the audio conference only:
    +1 (866) 682-4770
    Outside the US: global access numbers https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=6279803 or +1 (408) 774-4073
    Conference code: 9454597
    Security code: JCPEC (52732)

    For assistance:
    1. Go to https://jcp.webex.com/jcp/mc
    2. On the left navigation bar, click "Support".

    Read the article

  • Book review: Microsoft System Center Enterprise Suite Unleashed

    - by BuckWoody
    I know, I know: what's a database guy doing reading a book on System Center? Well, I need it from time to time. System Center is actually a collection of about 7 different products that you can use to manage and monitor your software and hardware, from drive space through Microsoft Office, UNIX systems, and yes, SQL Server. It's that last part I care about the most, and so I've dealt with Data Protection Manager and System Center Operations Manager (I call it SCOM) in SQL Server. But I wasn't familiar with the rest of the suite, nor was I as familiar as I needed to be with the "Essentials" release, a separate product that groups together the main features of System Center into a single offering for smaller organizations. These companies usually run with a smaller IT shop, so they sometimes opt for this product to help them monitor everything, including SQL Server.

    So I picked up "Microsoft System Center Enterprise Suite Unleashed" by Chris Amaris and a cast of others. I don't normally like to get a technical book by multiple authors; I just find that most of the time it's quite jarring to switch from author to author, but I think this group did pretty well here. The first chapter on introducing System Center has helped me talk with others about what the product does, and which pieces fit well together with SQL Server. The writing is well done, and I didn't find a jump from author to author as I went along. The information is sequential, meaning that they lead you from install to configuration and then use. It's very much a concepts-and-how-to book, and a big one at that: over 950 pages of learning! It was a pretty quick read, though, since I skipped the installation parts and there are lots of screenshots.

    While I'm not sure you'd be an expert on the product when you finish reading this book, I would say you're more than halfway there. It suits someone who learns through examples best, since it has a lot of step-by-step examples. I do recommend that you take a look if you have to interact with this product, or even if you are a smaller shop and you're the primary IT resource. The last few chapters deal with System Center Essentials, and honestly it was the best part of the book for me.

    Read the article

  • Unification of TPL TaskScheduler and RX IScheduler

    - by JoshReuben
    using System;
    using System.Collections.Generic;
    using System.Reactive.Concurrency;
    using System.Security;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Windows.Threading;

    namespace TPLRXSchedulerIntegration
    {
        public class MyScheduler : TaskScheduler, IScheduler
        {
            private readonly Dispatcher _dispatcher;
            private readonly DispatcherScheduler _rxDispatcherScheduler;
            //private readonly TaskScheduler _tplDispatcherScheduler;
            private readonly SynchronizationContext _synchronizationContext;

            public MyScheduler(Dispatcher dispatcher)
            {
                _dispatcher = dispatcher;
                _rxDispatcherScheduler = new DispatcherScheduler(dispatcher);
                //_tplDispatcherScheduler = FromCurrentSynchronizationContext();
                _synchronizationContext = SynchronizationContext.Current;
            }

            #region RX
            public DateTimeOffset Now
            {
                get { return _rxDispatcherScheduler.Now; }
            }

            public IDisposable Schedule<TState>(TState state, DateTimeOffset dueTime, Func<IScheduler, TState, IDisposable> action)
            {
                return _rxDispatcherScheduler.Schedule(state, dueTime, action);
            }

            public IDisposable Schedule<TState>(TState state, TimeSpan dueTime, Func<IScheduler, TState, IDisposable> action)
            {
                return _rxDispatcherScheduler.Schedule(state, dueTime, action);
            }

            public IDisposable Schedule<TState>(TState state, Func<IScheduler, TState, IDisposable> action)
            {
                return _rxDispatcherScheduler.Schedule(state, action);
            }
            #endregion

            #region TPL
            /// Simply posts the tasks to be executed on the associated SynchronizationContext
            [SecurityCritical]
            protected override void QueueTask(Task task)
            {
                _dispatcher.BeginInvoke((Action)(() => TryExecuteTask(task)));
                //TryExecuteTaskInline(task, false);
                //task.Start(_tplDispatcherScheduler);
                //m_synchronizationContext.Post(s_postCallback, (object)task);
            }

            /// The task will be executed inline only if the call happens within the associated SynchronizationContext
            [SecurityCritical]
            protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
            {
                if (SynchronizationContext.Current != _synchronizationContext)
                {
                    SynchronizationContext.SetSynchronizationContext(_synchronizationContext);
                }
                return TryExecuteTask(task);
            }

            // not implemented
            [SecurityCritical]
            protected override IEnumerable<Task> GetScheduledTasks()
            {
                return null;
            }

            /// Implements the MaximumConcurrencyLevel property for this scheduler class.
            /// By default it returns 1, because a <see cref="T:System.Threading.SynchronizationContext"/> based
            /// scheduler only supports execution on a single thread.
            public override Int32 MaximumConcurrencyLevel
            {
                get { return 1; }
            }

            //// preallocated SendOrPostCallback delegate
            //private static SendOrPostCallback s_postCallback = new SendOrPostCallback(PostCallback);
            //// this is where the actual task invocation occurs
            //private static void PostCallback(object obj)
            //{
            //    Task task = (Task) obj;
            //    // calling ExecuteEntry with double execute check enabled because a user implemented SynchronizationContext could be buggy
            //    task.ExecuteEntry(true);
            //}
            #endregion
        }
    }

    What design pattern did I use here?
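    For context, here is a minimal usage sketch (my own illustration, not from the original post; it assumes a WPF app where statusLabel is some Label on the window, and the usual System.Reactive.Linq and System.Windows usings):

    // Hypothetical usage - statusLabel is an assumed UI element.
    var scheduler = new MyScheduler(Application.Current.Dispatcher);

    // TPL side: route tasks through the unified scheduler.
    var factory = new TaskFactory(scheduler);
    factory.StartNew(() => statusLabel.Content = "updated on the UI thread");

    // Rx side: observe a sequence on the same scheduler.
    Observable.Timer(TimeSpan.FromSeconds(1))
              .ObserveOn(scheduler)
              .Subscribe(_ => statusLabel.Content = "tick on the UI thread");

    As for the closing question, a reasonable answer is the Adapter pattern: one class adapting a single Dispatcher to both the TPL's TaskScheduler contract and Rx's IScheduler contract.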

    Read the article

  • VLC will sometimes have issues displaying video in fullscreen. What could cause this? How would I troubleshoot the issue?

    - by George Marian
    Recently VLC has been having issues displaying video in fullscreen mode. AFAIK, nothing has changed with the video card drivers and it's certainly the same version of VLC. (/me shakes a fist at the repository maintainers)

    This has worked without issue in the past. In fact, I've had as many as 6 instances of VLC running, each playing a video. One was always fullscreen on my second monitor, while the others were tiled on my primary monitor. I was able to toggle any of the other 5 into fullscreen mode and the video displayed without issue.

    Lately, I've been having trouble running 2 instances in fullscreen mode. (Sometimes, even a single instance will not display the video in fullscreen.) VLC will continue to play the video, but in fullscreen mode I see nothing but a black screen. Sometimes, the video will display if I maximize the VLC window. Other times, I have to settle for a smaller sized window.

    I don't know if this is pertinent, but sometimes changing the min/max state of a Firefox window (Minefield, specifically) seemed to allow the troublesome instance to display the video in fullscreen mode. However, that did not prove to be a consistent workaround. Sometimes, it seemed that closing a Firefox window did the trick, though that isn't consistently successful either. (I futzed with Firefox because, with the crazy number of windows and tabs that I normally have open, it regularly hogs about 1 GB of RAM.)

    Another bit of funkiness that comes to mind is the fact that my secondary monitor is considered the primary on boot-up. I use xrandr to designate the real 1st monitor as primary after boot-up, as suggested by someone in a question I asked on the Unix & Linux SE site.

    Specs:
    Ubuntu 10.10 w/ Gnome and Compiz
    8GB RAM
    AMD Phenom II 965 Black Edition
    Asus M4A79 Deluxe mobo
    XFX ATI Radeon HD 5750 w/ 1GB RAM
    VLC is configured to use the hardware overlay for video (as per the default setting)

    Does anyone have an idea what may cause this issue or how I may go about troubleshooting it?

    Update: Right now I have 2 instances of VLC playing, each in fullscreen mode on a separate monitor. This is what I see: [screenshot not included in this excerpt]

    Read the article

  • American Modern Insurance Group recognized at 2010 INN VIP Best Practices Awards

    - by [email protected]
    Below: Helen Pitts (right), Oracle Insurance, congratulates Bruce Weisgerber, Munich Re, as he accepts a VIP Best Practices Award on behalf of American Modern Insurance Group.

    Oracle Insurance Senior Product Marketing Manager Helen Pitts is attending the 2010 ACORD LOMA Insurance Forum this week at the Mandalay Bay Resort in Las Vegas, Nevada, and will be providing updates from the show floor.

    This is one of my favorite seasons of the year: insurance trade show season. It is a time to reconnect with peers, visit with partners, make new industry connections, and celebrate our customers' achievements. It's especially meaningful when we can share the experience of having one of our Oracle Insurance customers recognized for being an innovator in its business and in the industry.

    Congratulations to American Modern Insurance Group, part of the Munich Re Group. American Modern earned an Insurance Networking News (INN) 2010 VIP Best Practice Award yesterday evening during the 2010 ACORD LOMA Insurance Forum. The award recognizes an insurer's best practice for use of a specific technology and the role, if feasible, that ACORD data standards played as a part of their business and technology. American Modern received an Honorable Mention for leveraging the Oracle Documaker enterprise document automation solution to:

    • Improve the quality of communications with customers in high value, high-touch lines of business
    • Convert thousands of page elements or "forms" from their previous system, with near pixel-perfect accuracy
    • Increase efficiency and reusability by storing all document elements (fonts, logos, approved wording, etc.) in one place
    • Issue on-demand documents, such as address changes or policy transactions, to multiple recipients at once
    • Consolidate all customer communications onto a single platform
    • Gain the ability to send documents to multiple recipients at once, further improving efficiency
    • Empower agents to produce documents in real time via the Web, such as quotes, applications and policy documents, improving carrier-agent relationships

    Munich Re's Bruce Weisgerber accepted the award on behalf of American Modern from Lloyd Chumbly, vice president of standards at ACORD. In a press release issued after the ceremony, Chumbly noted, "This award embodies a philosophy of efficiency--working smarter with standards, these insurers represent the 'best of the best' as chosen by a body of seasoned insurance industry professionals."

    We couldn't agree with you more, Lloyd. Congratulations again to American Modern on your continued innovation and success. You're definitely a VIP in our book! To learn more about how American Modern is putting its enterprise document automation strategy into practice, click here to read a case study.

    Helen Pitts is senior product marketing manager for Oracle Insurance.

    Read the article

  • How do you structure computer science University notes?

    - by Sai Perchard
    I am completing a year of postgraduate study in CS next semester. I am finishing a law degree this year, and I will use this to briefly explain what I mean when I refer to the 'structure' of University notes.

    My preferred structure for authoring law notes:

    Word
    Two columns
    0.5cm margins (top, right, bottom, middle, left)
    Body text (10pt, regular), 3 levels of headings (14/12/10pt, bold), 3 levels of bulleted lists
    Color A background for cases
    Color B background for legislation

    I find that it's crucial to have a good structure from the outset. My key advice to a law student would be to ensure styles allow cases and legislation to be easily identified from supporting text, and not to include too much detail regarding the facts of cases. More than 3 levels of headings is too deep. More than 3 levels of a bulleted list is too deep.

    In terms of CS, I am interested in similar advice; for example, any strategies that have been successfully employed regarding structure, and general advice regarding note taking. Has LaTeX proved better than Word? Code would presumably need to be stylistically differentiated and use a monospaced font; perhaps code could be written in TextMate so that it could be copied to retain syntax highlighting? (Are notes even that useful in a CS degree? I am tempted to simply use a textbook. They are crucial in law.)

    I understand that different people may employ varying techniques and that people will have personal preferences; however, I am interested in what these different techniques are.

    Update: Thank you for the responses so far. To clarify, I am not suggesting that the approach should be comparable to the one I employ for law; I could have been clearer. The consensus so far seems to be: just learn it. The structure of notes, and notes themselves, are not generally relevant. This is what I was alluding to when I said I was tempted to just use a textbook.

    Re the comment that said textbooks are generally useless: I strongly disagree. Sure, perhaps the recommended textbook is useless. But if I'm going to learn a programming language, I will (1) identify what I believe to be the best textbook, and (2) read it. I was unsure if the combination of theory with code meant that lecture notes may be a more efficient way to study for an exam. I imagine that would depend on the subject. For a subject specifically on a programming language, reading a textbook and coding would be my preferred approach. But I was unsure if, given a subject containing substantive theory that may not be covered in a single textbook, people may have preferences regarding note taking and structure.

    Read the article

  • Freshen the RTS genre

    - by William Michael Thorley
    This isn't really a question, but a request for feedback on RPS (Rock, Paper, Scissors), an RTS (Real Time Strategy) game. A demo version is out.

    The game is simple. It is an RTS. Why has it been made? Many if not most RTSs are about economy and large numbers of unit types. The genre hasn't actually developed its gameplay drastically from the very first RTSs produced; some lessons have been learned, but the games are really very similar to how they have always been. RPS brings new gameplay to the RTS genre through three means:

    • New combat mechanics: RPS has two unique modes (as well as the old favourite) of resolving weapon fire. These change how combat happens, and make application of the correct units vital to success. From this comes the requirement to run intel on your enemies.
    • Fixed resource economy: Each player has a fixed amount of energy, which means there is a definite end to the game. You can attrition your enemy and try to outlast them, or try to outspend your opponent and destroy them. There is a limit to how fast ships can be built, through the generation of construction blocks, but energy is the hard limit on economy.
    • Game modes: Game modes add victory conditions and new game pieces. The game is overseen by a controller which literally runs the game. Games are no longer "line them up, gun them down." This means that new tactics must be employed, making skirmish games fresh with novel tactics without adding huge amounts of new game units to learn.

    I've produced RPS from the ground up. I will be running a Kickstarter in the near future, but right now I want feedback and input from the game developing community: regarding the concepts, where RPS is going, the game modes, the combat mechanics, and how it plays. RPS will give fresh gameplay to the genre, so it must be right. It works over the internet or a LAN and supports single player games. Get it. Play it. Tell me about your games. Thank you.

    Demo: https://dl.dropbox.com/u/51850113/RPS%20Playtest.zip
    Tutorials: https://dl.dropbox.com/u/51850113/RPSGamePlay.zip

    Read the article

  • Best Platform/Engine for turn based Client/Server Android game

    - by Paradine
    I'm currently designing a turn based game for tablets, initially for Android, with porting to iOS considered later in the design. I'm having trouble narrowing down the available technologies to even know where to spend my research time. I am hoping that if I explain what I am trying to achieve, someone may be able to suggest a platform and/or engine.

    I've looked into some of the open source engines ( http://www.cuteandroid.com/ten-open-source-android-2d-or-3d-game-engine-for-android-developers ) and some appear to handle much of what I might require, although with a higher focus on graphics than I need. Mages looks interesting, although development appears to have ceased. If I could somehow leverage GoogleApps that would be excellent.

    Here is what I am trying to achieve:

    PvP turn based strategy game over the internet - minimal animation and bandwidth required
    Players match up online using the MetaGame system
    MatchID created on the Resolution Server and the game starts
    Clients have a 30 second countdown to select a MoveString
    Client sends a small, secure, timestamped and MatchID-tagged MoveString to the Resolution Server
    Resolution Server looks up the MoveString for each player, resolves, and updates each player's status in the MatchID on the server
    Resolution Server updates client views
    Repeat until victory conditions are met - MatchID closed, rewards earned in the MetaGame

    There will also need to be a full social and account system and a MetaGame backend, but these could run on separate systems. The tablet in offline mode would offer catalog browsing and perhaps single player AI, but I'm focusing on the Resolution Server at this point.

    I'm not even certain if I would be looking at an Android app or a web app at this stage! I want a custom GUI, so I guess an app, but since I have little animation a web app might also work. Probably some combination of both. There will be very small overhead in data between client and server: essentially a small text string every 30 seconds sent to the Resolution Server, which looks up the effect and applies it to the opponent's string and determines some results to apply to the match. The client view is updated minimally with the results (only 5 in-game integers are tracked), perhaps triggering small animations/popups on the client to show the end result, e.g. an explosion.

    If you have suggestions for a good technology or platform for best achieving the Resolution Server, I'd love to hear them. Also, if you have experience with open source engines and could narrow down which (if any) might be most suitable, that would be a big help. Thanks in advance.

    Read the article

  • Introduction to Lean Software Development and Kanban Systems

    - by Ben Griswold
    Last year I took myself through a crash course on Lean Software Development and Kanban Systems in preparation for an in-house presentation. I learned a bunch. In this series, I'll be sharing what I learned with you.

    If your career looks anything like mine, you have probably been affiliated with a company or two which pushed requirements gathering and documentation to the nth degree. To add insult to injury, they probably added planning process (documentation, requirements, policies, meetings, committees) to the extent that it possibly retarded any progress. In my opinion, the typical company resembles the quote from Tom DeMarco: "It isn't enough just to do things right - we also had to say in advance exactly what we intended to do and then do exactly that."

    In the 1980s, Toyota turned the tables and revolutionized the automobile industry with their approach of "Lean Manufacturing." A massive paradigm shift hit factories throughout the US and Europe. Mass production and scientific management techniques from the early 1900s were questioned as Japanese manufacturing companies demonstrated that 'Just-in-Time' was a better paradigm. The widely adopted Japanese manufacturing concepts came to be known as 'lean production'.

    Lean Thinking capitalizes on the intelligence of frontline workers, believing that they are the ones who should determine and continually improve the way they do their jobs. Lean puts its main focus on people and communication: if the people who produce the software are respected and communicate efficiently, it is more likely that they will deliver a good product and the final customer will be satisfied.

    In time, the abstractions behind lean production spread to logistics, and from there to the military, to construction, and to the service industry. As it turns out, the principles of lean thinking are universal and have been applied successfully across many disciplines. Lean has been adopted by companies including Dell, FedEx, LensCrafters, LLBean, SW Airlines, Digital River and eBay.

    Lean thinking got its name from a 1990s best seller called The Machine That Changed the World: The Story of Lean Production. This book chronicles the movement of automobile manufacturing from craft production to mass production to lean production. The names most associated with lean software development are Tom and Mary Poppendieck. Here's one of their books: Implementing Lean Software Development: From Concept to Cash.

    Our in-house presentations are supposed to run no more than 45 minutes. I really cranked and got through my 87 slides in just under an hour. Of course, I had to cheat a little: I only covered the 7 principles and a single practice. In the next part of the series, we'll dive into Principle #1: Eliminate Waste.

    And I am going to be a little obnoxious about listing my Lean and Kanban references with every series post. The references are great and they deserve this sort of attention.

    Read the article

  • ASP.Net 4.5 Garbage Collection Improvement

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/06/24/asp.net-4.5-garbage-collection-improvement.aspx

    I just read Five Great .NET Framework 4.5 Features on CodeProject by Shivprasad Koirala. Feature 5 in his article mentions the GC background cleanup and has a good explanation of the work the GC has to do for ASP.NET on the server.

    "Garbage collector is one real heavy task in a .NET application. And it becomes heavier when it is an ASP.NET application. ASP.NET applications run on the server and a lot of clients send requests to the server thus creating loads of objects, making the GC really work hard for cleaning up unwanted objects."

    "To overcome the above problem, server GC was introduced. In server GC there is one more thread created which runs in the background. This thread works in the background and keeps cleaning…objects thus minimizing the load on the main GC thread. Due to double GC threads running, the main application threads are less suspended, thus increasing application throughput. To enable server GC, we need to use the gcServer XML tag and enable it to true."

    <configuration>
      <runtime>
        <gcServer enabled="true"/>
      </runtime>
    </configuration>

    This is not done by default. The MSDN information page says: "There are only two garbage collection options, workstation or server. For single-processor computers, the default workstation garbage collection should be the fastest option. Either workstation or server can be used for two-processor computers. Server garbage collection should be the fastest option for more than two processors. Use the GCSettings.IsServerGC property to determine if server garbage collection is enabled."

    "In the .NET Framework 4 and earlier versions, concurrent garbage collection is not available when server garbage collection is enabled. Starting with the .NET Framework 4.5, server garbage collection is concurrent. To use non-concurrent server garbage collection, set the <gcServer> element to true and the <gcConcurrent> element to false."

    So if you're using ASP.NET 4.5 and have a multi-core server, you should try turning on Server Garbage Collection and do some profiling to see if it improves the performance of your site.
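    To verify which mode you actually got, a quick console check works (a minimal sketch; GCSettings lives in System.Runtime):

    using System;
    using System.Runtime;

    class GCModeCheck
    {
        static void Main()
        {
            // True when <gcServer enabled="true"/> took effect
            Console.WriteLine("Server GC: " + GCSettings.IsServerGC);
            // Latency mode reflects the concurrent/non-concurrent setting
            Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
        }
    }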

    Read the article

  • Oracle Launches New Oracle Database 12c Administrator Certifications

    - by Brandye Barrington
    Today Oracle University announces the release of new Oracle Database 12c Administrator certifications. The new Oracle Database 12c certifications emphasize the foundational and advanced skills needed by Database Administrators and will prepare DBAs to leverage powerful new management and consolidation capabilities, resulting in an even more valuable credential for customers and partners.

    ORACLE CERTIFIED ASSOCIATE (OCA)
    The Oracle Certified Associate (OCA) for Oracle Database 12c objectives measure IT professionals' mastery of day-to-day administration skills and their ability to manage the challenges they're likely to encounter on the job. This credential focuses on SQL skills, operational administration of the Oracle Database including performance and space management, and installing, patching and upgrading the Oracle Database. Earning the OCA credential requires successful completion of two exams: 1Z0-061 - Oracle Database 12c: SQL Fundamentals and 1Z0-062 - Oracle Database 12c: Installation and Administration. The OCA certification track also allows for several alternate exams which can be substituted for 1Z0-061.

    ORACLE CERTIFIED PROFESSIONAL (OCP)
    Building on the competencies in the Oracle Database 12c OCA certification, the Oracle Certified Professional (OCP) for Oracle Database 12c certification includes advanced knowledge and skills required of top-performing database administrators. The OCP credential focuses on developing and implementing backup and recovery strategies, designing consolidation strategies to exploit multitenant container and pluggable databases, and a thorough understanding of how CDB/PDBs fit into the DBaaS cloud-computing model. Today, Oracle is releasing 1Z0-060 - Upgrade to Oracle Database 12c, which allows Oracle Certified Professionals with credentials in Oracle 9i, Oracle Database 10g or Oracle Database 11g to upgrade to Oracle Database 12c with a single exam. The upgrade exam focuses on designing consolidation strategies to exploit multitenant container and pluggable databases, implementing Oracle 12c feature-rich ILM support, optimizing SQL execution using dynamic swapping of sub plans, implementing real-time data redaction within databases, as well as exploiting many additional performance, backup and recovery, security and partitioning enhancements. The exam also includes a thorough review of core DBA skills. Visit the OCP certification track for more details on the new upgrade exam as well as alternate certification paths.

    ORACLE CERTIFIED MASTER (OCM)
    The Oracle Certified Master (OCM) for Oracle Database 12c, a very challenging and elite top-level certification, certifies the most highly skilled and experienced database experts. Further information on the 12c OCM level will be announced as exam development concludes.

    To date, there have been more than 1.6 million Oracle certifications granted worldwide. Explore these certification tracks, exam requirements and objectives, and start toward earning your exciting new Oracle Database 12c certification credentials from Oracle.

    Read the article

  • Easy Scaling in XAML (WPF)

    - by Robert May
    Ran into a problem that needed solving that was kind of fun. I'm not a XAML guru, and I'm sure there are better solutions, but I thought I'd share mine.

    The problem was this: our designer had, appropriately, designed the system for a 1920 x 1080 screen resolution. This is for a full screen, touch screen device (think kiosk) which has that resolution, but we also wanted to demo the device on a tablet (currently using the AWESOME Samsung tablet given out at Microsoft Build). When you'd run it on that tablet, things were ugly because it was at a lower resolution than the target device.

    Enter scaling. I did some research and found out that I probably just needed to monkey with the LayoutTransform of some grid somewhere. This project is using MVVM and has a navigation container that we built that lives on a single root view. User controls are then loaded into that view as navigation occurs.

    In the parent grid of the root view, I added the following XAML:

    <Grid.LayoutTransform>
      <ScaleTransform ScaleX="{Binding ScaleWidth}" ScaleY="{Binding ScaleHeight}" />
    </Grid.LayoutTransform>

    And then in the root View Model, I added the following code:

    /// <summary>
    /// The required design width
    /// </summary>
    private const double RequiredWidth = 1920;

    /// <summary>
    /// The required design height
    /// </summary>
    private const double RequiredHeight = 1080;

    /// <summary>Gets the ActualHeight</summary>
    public double ActualHeight
    {
        get { return this.View.ActualHeight; }
    }

    /// <summary>Gets the ActualWidth</summary>
    public double ActualWidth
    {
        get { return this.View.ActualWidth; }
    }

    /// <summary>
    /// Gets the scale for the height.
    /// </summary>
    public double ScaleHeight
    {
        get { return this.ActualHeight / RequiredHeight; }
    }

    /// <summary>
    /// Gets the scale for the width.
    /// </summary>
    public double ScaleWidth
    {
        get { return this.ActualWidth / RequiredWidth; }
    }

    Note that View.ActualWidth and View.ActualHeight are just pointing directly at FrameworkElement.ActualWidth and FrameworkElement.ActualHeight.

    That's it. Just calculate the ratio and bind the scale transform to it. Hopefully you'll find this useful.

    Technorati Tags: WPF, XAML
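    One wrinkle worth noting: bindings to ScaleWidth and ScaleHeight only refresh if the view model raises change notifications when the view is resized. A minimal sketch of that wiring (my assumption about how the hookup could look, not code from the post; it presumes the view model implements INotifyPropertyChanged and View is the root FrameworkElement):

    // Hypothetical: subscribe in the view model's constructor.
    this.View.SizeChanged += (s, e) =>
    {
        // Re-evaluate the computed scale properties on every resize.
        OnPropertyChanged("ScaleWidth");
        OnPropertyChanged("ScaleHeight");
    };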

    Read the article

  • Oracle’s Sun Server X4-8 with Built-in Elastic Computing

    - by kgee
    We are excited to announce the release of Oracle's new 8-socket server, Sun Server X4-8. It's the most flexible 8-socket x86 server Oracle has ever designed, and also the most powerful. Not only does it use the fastest Intel® Xeon® E7 v2 processors, but also its memory, I/O and storage subsystems are all designed for maximum performance and throughput. Like its predecessor, the Sun Server X4-8 uses a "glueless" design that allows for maximum performance for Oracle Database, while also reducing power consumption and improving reliability.

    The specs are pretty impressive. Sun Server X4-8 supports 120 cores (or 240 threads), 6 TB memory, 9.6 TB HDD capacity or 3.2 TB SSD capacity, contains 16 PCIe Gen 3 I/O expansion slots, and allows for up to 6.4 TB Sun Flash Accelerator F80 PCIe Cards. The Sun Server X4-8 is also the most dense x86 server with its 5U chassis, allowing 60% higher rack-level core and DIMM slot density than the competition.

    There has been a lot of innovation in Oracle's x86 product line, but the latest and most significant is a capability called elastic computing. This new capability is built into each Sun Server X4-8.

    Elastic computing starts with the Intel processor. While Intel provides a wide range of processors, each with a fixed combination of core count, operational frequency, and power consumption, customers have been forced to make tradeoffs when they select a particular processor. They have had to make educated guesses on which particular processor (core count/frequency/cache size) will be best suited for the workload they intend to execute on the server.

    Oracle and Intel worked jointly to define a new processor, the Intel Xeon E7-8895 v2 for the Sun Server X4-8, that has unique characteristics and effectively combines the capabilities of three different Xeon processors into a single processor. Oracle system design engineers worked closely with Oracle's operating system development teams to achieve the ability to vary the core count and operating frequency of the Xeon E7-8895 v2 processor over time without the need for a system level reboot.

    Along with the new processor, enhancements have been made to the system BIOS, Oracle Solaris, and Oracle Linux, which allow the processors in the system to dynamically clock up to faster speeds as cores are disabled and to reach higher maximum turbo frequencies for the remaining active cores. One customer, a stock market trading company, will take advantage of the elastic computing capability of Sun Server X4-8 by repurposing servers between daytime stock trading activity and nighttime stock portfolio processing, daily, to achieve maximum performance of each workload.

    To learn more about Sun Server X4-8, you can find more details including the data sheet and white papers here.

    Josh Rosen is a Principal Product Manager for Oracle's x86 servers, focusing on Oracle's operating systems and software. He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • SQLAuthority News – Weekend Experiment with NuoDB – Points to Ponder and Whitepaper

    - by pinaldave
    This weekend I downloaded the latest beta version of NuoDB. I found it much improved, with a better UI. I was very impressed, as the installation was very smooth and I was up and running in less than 5 minutes with the product. The tools related to the administration of NuoDB seem to have gotten a makeover during this beta release. As per their claim, they now support the Solaris platform and have improved the native MacOS installation. I have neither a Mac nor Solaris; I wish I could have experimented with those. I would appreciate it if anyone out there can confirm how the installation goes on these platforms.

    I have previously blogged about my experiments with NuoDB here:

    SQL SERVER – Weekend Project – Experimenting with ACID Transactions, SQL Compliant, Elastically Scalable Database
    SQL SERVER – Beginning NuoDB – Who will Benefit and How to Start
    SQL SERVER – Follow up on Beginning NuoDB – Who will Benefit and How to Start – Part 2

    I am very impressed with the product so far and I have decided to understand it further and deeper. Here are a few of the questions to which I am going to try to find answers with regards to NuoDB. Just so it is clear: NuoDB is not NoSQL; as a matter of fact, it follows all the ACID properties of the database.

    If ACID properties are crucial, why do many NoSQL products not adhere to them? (There are a few out there that do follow ACID, but not all.)
    I do understand the scalability of the database, but is elasticity crucial for the database, and if yes, how? (Elasticity is where the workload on the database fluctuates heavily and more than a single database server is needed.)
    How has NuoDB built a scalable, elastic and 100% ACID compliant database which supports multiple platforms?
    How does NoSQL compare to NuoDB's new architecture?

    In the coming weeks, I am going to explore the above concepts and dive deeper into understanding them. Meanwhile, I have read the following white paper written by experts at the University of California at Santa Barbara: Database Scalability, Elasticity, and Autonomy in the Cloud. It is a very interesting read and a great starter on the subject.

    Additionally, since my questions also touch on NoSQL, this weekend I started to learn about NoSQL from Pluralsight's online learning library. I will share my experience very soon.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology Tagged: NuoDB

    Read the article

  • ODI 11g – How to Load Using Partition Exchange

    - by David Allan
    Here we will look at how to load large volumes of data efficiently into the Oracle database using a mixture of CTAS and partition exchange loading. The example we will leverage was posted by Mark Rittman a couple of years back on interval partitioning; you can find that posting here. The best thing about ODI is that you can encapsulate all those 'how to' blog posts and scripts into templates that can be reused: the templates are, of course, Knowledge Modules.

    The interface design to mimic Mark's posting is shown below.

    The IKM I have constructed performs a simple series of steps: a CTAS to create the stage table to use in the exchange, then locking the partition (to ensure it exists; it will be created if it doesn't), then exchanging the partition into the target table. You can find the IKM Oracle PEL.xml file here. The IKM performs the following steps and is meant to illustrate what can be done.

    When you use the IKM in an interface, you configure the options for hints (for parallelism levels etc.), initial extent size, next extent size and the partition variable.

    The KM has an option where the name of the partition can be passed in, so if you know the name of the partition then set the variable to the name. If you have interval partitioning you probably don't know the name, so you can use the FOR clause. In my example I set the variable to use the date value of the source data:

    FOR (TO_DATE(''01-FEB-2010'',''dd-MON-yyyy''))

    Using a variable lets me invoke the scenario many times, loading different partitions of the same target table. Below you can see where this is defined within ODI; I had to double the single quotes since this is placed inside the execute immediate tasks in the KM.

    Note also that this example interface uses the LKM Oracle to Oracle (datapump), so this illustration uses a lot of the high performing Oracle database capabilities: it uses Data Pump to unload, then a CreateTableAsSelect (CTAS) is executed on the external table based on top of the Data Pump export. This table is then exchanged into the target. The IKM and illustrations above are using ODI 11.1.1.6, which was needed to get around some bugs in earlier releases with how the variable is handled... as far as I remember.
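    In plain SQL, the sequence the IKM automates looks roughly like this (an illustrative sketch with made-up table and column names, not the code the KM generates):

    -- 1. CTAS: build and load the stage table for the exchange
    CREATE TABLE stg_sales AS
      SELECT * FROM src_sales
      WHERE sale_date = TO_DATE('01-FEB-2010','dd-MON-yyyy');

    -- 2. Lock the target partition; with interval partitioning this
    --    forces the partition into existence if it is not there yet
    LOCK TABLE sales PARTITION FOR (TO_DATE('01-FEB-2010','dd-MON-yyyy'))
      IN SHARE MODE;

    -- 3. Swap the loaded stage table in as that partition
    ALTER TABLE sales
      EXCHANGE PARTITION FOR (TO_DATE('01-FEB-2010','dd-MON-yyyy'))
      WITH TABLE stg_sales;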

    Read the article

  • You do not need a separate SQL Server license for a Standby or Passive server - this Microsoft White Paper explains all

    - by tonyrogerson
    If you were in any doubt at all whether you need to license standby / passive failover servers, the white paper "Do Not Pay Too Much for Your Database Licensing" will settle those doubts. I've had debates before with people thinking you can only have a single instance as a standby machine; that's just wrong. It would mean you couldn't have a scenario where you had a 2 node active/passive cluster with database mirroring and log shipping (a total of 4 SQL Server instances). In that set up you only need to buy one physical license, so long as the standby nodes have the same or fewer physical processors (cores are irrelevant).

    So next time your supplier suggests you need a license for your standby box, tell them you don't, and educate them by pointing them to the white paper. For clarity, I've copied the extract below.

    Extract from "Do Not Pay Too Much for Your Database Licensing":

    Standby Server
    Customers often implement a standby server to make sure the application continues to function in case the primary server fails. The standby server continuously receives updates from the primary server and will take over the role of primary server in case of failure in the primary server. Following are comparisons of how each vendor supports standby server licensing.

    SQL Server
    Customers do not need to license a standby (or passive) server provided that the number of processors in the standby server is equal to or less than those in the active server.

    Oracle DB
    Oracle requires the customer to fully license both active and standby servers even though the standby server is essentially idle most of the time.

    IBM DB2
    IBM licensing on a standby server is quite complicated and is different for every edition of DB2. For Enterprise Edition, a minimum of 100 PVUs or 25 Authorized Users is needed to license a standby server.

    The following graph compares prices based on a database application with two processors (dual-core) and 25 users with one standby server.

    [chart snipped]

    Note: All prices are based on newest Intel Xeon Nehalem processor database pricing for purchases within the United States and are in United States dollars. Pricing is based on information available on vendor Web sites for Enterprise Edition.

    Microsoft SQL Server Enterprise Edition
    25 users (CALs) x $164 / CAL + $8,592 / Server = $12,692 (no need to license the standby server)

    Oracle Enterprise Edition (base license without options)
    Named User Plus minimum (25 Named Users Plus per Core) = 25 x 2 = 50 Named Users Plus x $950 / Named User Plus x 2 servers = $95,000

    IBM DB2 Enterprise Edition (base license without feature pack)
    Need to purchase 125 Authorized Users (400 PVUs / 100 PVUs = 4 x 25 = 100 Authorized Users + 25 Authorized Users for the standby server) = 125 Authorized Users x $1,040 / Authorized User = $130,000

    Read the article

  • Implementing set operations in TSQL

    - by dotneteer
    SQL excels at operating on datasets. In this post, I will discuss how to implement basic set operations in Transact SQL (TSQL). The operations that I am going to discuss are union, intersection and complement (subtraction).

    Implementing set operations using UNION, INTERSECT and EXCEPT

    We can use the TSQL keywords union, intersect and except to implement set operations. Since we are in an election year, I will use voter records on propositions as an example. We create the following table and insert 6 records into it:

    declare @votes table (VoterId int, PropId int)
    insert into @votes values (1, 30)
    insert into @votes values (2, 30)
    insert into @votes values (3, 30)
    insert into @votes values (4, 30)
    insert into @votes values (4, 31)
    insert into @votes values (5, 31)

    Voters 1, 2, 3 and 4 voted for proposition 30, and voters 4 and 5 voted for proposition 31.

    The following TSQL statement implements union using the union keyword. The union returns voters who voted for either proposition 30 or 31 (with the sample data above, voters 1 through 5):

    select VoterId from @votes where PropId = 30
    union
    select VoterId from @votes where PropId = 31

    The following TSQL statement implements intersection using the intersect keyword. The intersection returns voters who voted for both proposition 30 and 31 (here, only voter 4):

    select VoterId from @votes where PropId = 30
    intersect
    select VoterId from @votes where PropId = 31

    The following TSQL statement implements complement using the except keyword. The complement returns voters who voted for proposition 30 but not 31 (here, voters 1, 2 and 3):

    select VoterId from @votes where PropId = 30
    except
    select VoterId from @votes where PropId = 31

    Implementing set operations using joins

    An alternative way to implement set operations in TSQL is to use full outer join, inner join and left outer join.

    The following TSQL statement implements union using a full outer join:

    select Coalesce(A.VoterId, B.VoterId)
    from (select VoterId from @votes where PropId = 30) A
    full outer join (select VoterId from @votes where PropId = 31) B
    on A.VoterId = B.VoterId

    The following TSQL statement implements intersection using an inner join:

    select Coalesce(A.VoterId, B.VoterId)
    from (select VoterId from @votes where PropId = 30) A
    inner join (select VoterId from @votes where PropId = 31) B
    on A.VoterId = B.VoterId

    The following TSQL statement implements complement using a left outer join:

    select Coalesce(A.VoterId, B.VoterId)
    from (select VoterId from @votes where PropId = 30) A
    left outer join (select VoterId from @votes where PropId = 31) B
    on A.VoterId = B.VoterId
    where B.VoterId is null

    Which one to choose? Just keep two things in mind: the union/intersect/except technique treats an entire record as a member, while the join technique allows the member to be specified in the "on" clause. However, with joins it is necessary to use the Coalesce function to project the sets on the two sides of the join into a single set.

    Read the article

  • SQLAuthority News – Updates on Contests, Books and SQL Server

    - by pinaldave
    There are lots of things happening on this blog and I feel it is sometimes difficult to keep up. One of the suggestions I keep receiving is to have a single page one can visit to see all the updates. I did consider it at some point, but in the era of RSS feeds it is difficult to draw a proper audience to such a page. Here are a few updates on the various contests and book giveaways from recent times.

    Combo set of 5 Joes 2 Pros Books - 1 for YOU and 1 for a Friend: I have received so many entries for this contest. Many have sent me email asking if this contest can be extended by a couple of days. For that reason, the deadline for this contest is now Nov 10th, 7 AM. You can send your entries by that time. The prize, 2 combo sets of Joes 2 Pros, is worth USD 444. If you have not taken part in the contest, please take part now.

    Guess What is in the Box?: There were many entries for this contest. We played this contest on the blog as well as Facebook. The answer to this contest was announced within 2 days, in the blog post announcing my new book. The winner was Manas Dash from Bangalore. He answered "The box will contain SQL book authored by Vinod and Pinal". This was the closest answer we received.

    Win 5 SQL Programming Books Contest will have its winners announced by Nov 15th, and winners will be sent email.

    Win 5 SQL Wait Stats Books Contest is closed and winners have been sent their awards.

    My third book, SQL Server Interview Questions and Answers, ran out of stock in India within 36 hours of its launch. We are working very hard to make it available again. Thank you again for the excellent support! Without your participation, none of the giveaways would have any significance.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Steps to deploying on Windows Azure

    - by Vincent Grondin
    Alright, these steps might be a little detailed and a few might not be necessary, but it is still a pretty accurate road map to deploying on Azure...

    1) Open your solution.
    2) Rebuild ALL.
    3) Right click on your Azure project and click "Publish".
    4) It should open a Windows Explorer window with your package to be uploaded (.cspkg) and its associated configuration (.cscfg) to be uploaded too. Keep it open, you'll need that path later on...
    5) It should also open a browser asking you to log in to your Passport account, please do so.
    6) After this you will be redirected to the Azure Portal, where you will see your Azure project name below the "Project Name" section. Click on it.
    7) You should then be redirected to a detailed view of your account on Azure, where you will create a new service by clicking the hyperlink in the top right corner.
    8) Choose the right service type for you, most likely the "Hosted Service" type.
    9) Choose a "Label" name and click "Next".
    10) Choose a name for your service and validate that the name is available in the cloud by clicking the "Check Availability" button.
    11) At the bottom of this same page, you can choose to create a group for your service, use no group, or join an existing group. Creating a group means that all applications belonging to the same group incur no cost for exchanging data with other applications of that group. Most of the time, when you create a single application, creating a group is not necessary. You should choose a region that's close to your own region.
    12) On the next window, you should see a "Production" environment and a "Staging" environment. Beware, because "Staging" and "Production" are two different environments in the cloud, and applications in "Staging", even when not running, continue to rack up charges... Choose an environment and click "Deploy".
    13) In the following window, browse to the path where your .cspkg resides and then do the same thing with your .cscfg file. Choose a name for your label and click "Deploy"...
    14) From now on, the clock is ticking, and unless you have free Azure hours, your credit card is being billed...
    15) Click on the "Run" button to start your application.
    16) Be patient.... be very patient...
    17) Once your application has finished starting, you should see a GREEN circle on the left side of the screen indicating that your application is READY. Click the URL to test your application, and remember that if your application is a service, you have to hit the "svc" class behind the link you see there. Something like http://testvince2.cloudapp.net/service1.svc (this is a fictional link).
    18) Hopefully your application will show up or, in the case of a service, you will see your service's WSDL, meaning that everything is working fine.

    Happy cloud computing all!

    Read the article

  • Create Image Maps with GIMP

    - by SGWellens
    Having a clickable image in a web page is not a big deal. Having an image in a web page with clickable hotspots is a big deal. The powerful GIMP editor has a tool to make creating clickable hotspots much easier.

    GIMP stands for GNU Image Manipulation Program. Its home page and download links are here: http://www.gimp.org/ (it is completely free). Beware: GIMP is an extraordinarily advanced and powerful image editor. If you wish to use it for general image editing tasks, you have a steep learning curve to climb. FYI: I used it to create the shadows you see on the images in the original post. Fortunately, the tool to make Image Maps is separate from the main program.

    To start, open an image with GIMP or drag and drop an image onto the GIMP main window. I'm using the image of a bar graph. Next, we have to find the Image Map tool and launch it (Filters->Web->Image Map…). Why is the Image Map tool under Filters and not Tools? I don't know. It's a mystery, much like the Loch Ness Monster, the Bermuda Triangle, or why my socks keep disappearing when I do laundry. I swear I've got twenty single unmatched socks. But I digress…

    If we click the blue 'I' button, we can add information to the Image Map. Now we'll use the rectangle tool to create some clickable hotspots: select the Blue Rectangle tool, drag a rectangle, and click when done. You can also make circle/oval and polygon areas, and you can edit all the parameters of an image map area after drawing it, including the rectangle settings (for fine tweaking) and the JavaScript functions (it's up to you to write them). My example ended up with two rectangle areas and one polygon area.

    When you hit Save, a map file is generated. Paste its contents into a web page and you are almost there. I made some tweaks before it became usable:

    - Replaced &apos; with apostrophes in the JavaScript functions.
    - Changed the image path so it would find the image in my images directory.
    - Tweaked the href urls.
    - Added Title="Some Text" to get tool tips.
    - Cleaned out the comments.

    The final markup includes a JavaScript function:

        function ImageMapMouseHover(Msg) {
            $("#Label1").html(Msg);
        }

    It may seem like a lot of bother but the tool does the heavy lifting, i.e. the coordinates. Getting the regions positioned and sized is easy using a visual tool, and much better than doing it by hand. This, of course, isn't a full treatise on the tool, but it should give you enough information to decide if it's helpful.

    I hope someone finds this useful.

    Steve Wellens

    Read the article

  • The winning combination: Oracle VM Server for x86 + Oracle Sun Fire HW

    - by Karim Berrah
    You might be wondering why Oracle VM Server for x86 (OVM/x86 here and below) should be seriously considered as a nice (from a business point of view) alternative to standard hypervisors if you are virtualizing Oracle software, especially if you are planning to move to Oracle x86 hardware (rackmount or blades). Well, let's see some "not well known" facts that might interest you and help you save more money for your entire company (and not only the virtualization team).

    Fact 1: OVM/x86 is considered a hard partitioning technology (check page 2 of the Oracle Server Partitioning Licensing Policies), so if you are buying new servers based on the latest Intel Xeon E7 CPUs (10 cores per socket) and have licensing issues deploying further Oracle software because you are using a hypervisor not recognized as a hard partitioning technology (like VMware), then check here how to do it with OVM. This might help you continue to deploy your Oracle DB instances on new x86 hardware (12-core, 40-core or 64-core servers) in a reasonable way, without having to pay licenses for 12, 40 or 64 CPUs. You might also consider migrating your legacy Oracle DBs to a virtualized environment like OVM/x86 and recover some CPU licenses that can be reused somewhere else in production.

    Fact 2: OVM/x86 is free to use, without any extra license for any specific feature (live migration, high availability, embedded management console). If you want to use it on non-Oracle hardware, there is a support fee per system and per year that is much below VMware support (Oracle VM Premier Limited Support for systems up to 2 CPUs, and Oracle VM Premier Support for any bigger system, independently of the number of populated sockets).

    Fact 3: Support is included with your Oracle x86 hardware support (OPS for Systems), and you can re-install Oracle Linux, Oracle Solaris or Oracle VM Server for x86 on your system without being charged, while keeping the same support level.

    Fact 4: It is less expensive to virtualize Oracle Linux or Oracle Solaris on OVM/x86 with Oracle hardware than with any similar solution using VMware, because all the VMs are then supported and licensed when you buy Oracle hardware with OPS.

    Fact 5: Oracle VM Templates bring you many virtual machines already installed, patched and optimized for various Oracle applications. And to be more specific, those templates are fully supported by Oracle, which is not really true when it comes to other hypervisors. By an optimized VM kernel, I mean PV drivers, OVM-ready kernels in the VM, a single source clock for all the VMs, better memory management in the VM...

    Fact 6: There are no extra costs for a management console. OVM comes with a free OVM Manager package for Linux.

    More info:
    - Latest announcement of OVM/x86 update 2.2.2
    - A short flash demo of OVM Server for x86
    - A short flash demo on OVM Templates and Virtual Assembly Builder
    - Oracle Linux Support and Oracle VM Support Global Price List
    - ISVs: Benefits for Independent Software Vendors (ISVs) in using OVM/x86
    - Consultant Services: Advanced Customer Services for OVM/x86
    - Technical Features
    - Best practices and Guidelines for OVM with Oracle Blades
    - Reduce TCO and get more Value from your x86 Infrastructure

    Read the article

  • Ghost team foundation build controllers

    - by Martin Hinshelwood
    Quite often after an upgrade there are things left over. Most of the time they are easy to delete, but sometimes it takes a little effort. Even rarer are those times when something just will not go away no matter how hard you try.

    We have had a ghost team build controller hanging around for a while now, and it has defeated my best efforts to get rid of it. The build controller was from our old TFS server from before our TFS 2010 Beta 2 upgrade, and it was really starting to annoy me. Every time I try to delete it, I get the message:

        "Controller cannot be deleted because there are builds in progress" - Manage Build Controller dialog

    Figure: Deleting a ghost controller does not always work.

    I ended up checking all of our 172 Team Projects for the queued build, but did not find anything. Jim Lamb pointed me to the "tbl_BuildQueue" table in the Team Project Collection database, and sure enough, there was the nasty little beggar.

    Figure: The ghost build was easily spotted.

    Adam Cogan asked me: "Why did you suspect this one?" Well, there are a number of things that led me to suspect it:

    - QueueId is very low: look at the other items, they are in the thousands, not single digits.
    - ControllerId: I know there is only one legitimate controller, and I am assuming that 6 relates to "zzUnicorn".
    - DefinitionId: this is a very low number, and when I looked it up in "tbl_BuildDefinition" it did not exist.
    - QueueTime: as we did not upgrade to TFS 2010 until late 2009, a date of 2008 for a queued build is very suspect.
    - Status: a status of 2 means that it is still queued.

    This build must have been queued long ago, when we were using TFS 2008, probably a beta, and it never got cleaned up. As controllers are new in TFS 2010, the upgrade would have created the "zzUnicorn" controller to handle any build servers that already existed. I had previously deleted the agent, but leaving the controller just looks untidy.

    Now that the ghost build has been identified, there are two options:

    1. Delete the row. I would not recommend ever deleting anything from the database to achieve something in TFS. It is really not supported.
    2. Set the Status to cancelled (recommended). This is the best option, as TFS will then clean it up itself (see the sketch below).

    So I updated the Status of this build to the cancelled value, and sure enough it disappeared after a couple of minutes; I was then able to delete the "zzUnicorn" controller.

    Figure: Almost completely clean.

    Now all I have to do is get rid of that untidy "zzBunyip" agent, but that will require rewriting one of our build scripts, which will have to wait for now.

    Technorati Tags: ALM,TFBS,TFS 2010
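    For reference, here is a minimal sketch of the kind of update described above, run against the Team Project Collection database. The table and column names come from this post; the QueueId is hypothetical, and the numeric value for "cancelled" is an assumption based on the TFS 2010 QueueStatus enumeration, so verify it for your version before attempting anything like this. And remember, as noted above, editing these tables directly is not supported.

        -- Find the suspect row: anything queued before the TFS 2010 upgrade is suspect.
        select QueueId, ControllerId, DefinitionId, QueueTime, Status
        from tbl_BuildQueue
        where QueueTime < '2010-01-01'
        order by QueueTime

        -- Mark the ghost build as cancelled so TFS cleans it up itself.
        -- 16 is an assumed value for QueueStatus.Canceled; verify it first.
        update tbl_BuildQueue
        set Status = 16
        where QueueId = 1   -- hypothetical QueueId of the ghost row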

    Read the article
