Search Results

Search found 25570 results on 1023 pages for 'low level api'.


  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best contact centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service. Today there are multiple channels through which customers can communicate with you: phone, web, chat and social, to name a few. But regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer, and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimise the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference.

    Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if/then/else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, different levels of agent, from a novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round.

    Agent Scripting: Sometimes, agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities.

    Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides to quickly craft chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves, without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert.

    Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you can design workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes.
    This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win! I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort.

    Five top tips to take away:
    1. Start by working out “why” and “how” customers are contacting you.
    2. Implement a clean and relevant agent desktop to support your agents.
    3. If your workspaces are getting complicated, consider using Desktop Workflow to streamline the interaction.
    4. Enhance your knowledgebase with Guides. Agents can access them proactively, and they can be published on your web pages for customers to help themselves.
    5. Script any complex, critical or sensitive interactions to ensure consistency and accuracy.

    Desktop optimisation is an ongoing process, so continue to monitor and incorporate feedback from your agents and your customers to keep your Contact Centre successful.

    Want to learn more? Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the 2-day Oracle RightNow Customer Portal Design and Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.

    Useful resources:
    - Review the Best Practice Guide
    - Review the Tune-up Guide

    About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management. She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class:
    - RightNow Customer Service Administration (3 days)
    - RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days)
    - RightNow Analytics (2 days)
    - RightNow Chat Cloud Service Administration (2 days)

    Read the article

  • Maintaining packages with code - Adding a property expression programmatically

    Every now and then I've come across scenarios where I need to update a lot of packages all in the same way. The usual scenario revolves around a group of packages all having been built off the same package template, and something needs to be updated to keep up with new requirements, a new logging standard for example. You'd probably start by updating your template package, but then you need to address all your existing packages. Often this can run into the hundreds of packages, and clearly that's not a job anyone wants to do by hand. I normally solve the problem by writing a simple console application that looks for files and patches any package it finds, and it is an example of this that I thought I'd tidy up a bit and publish here.

    This sample will look at the package and find any top-level Execute SQL Tasks, and change the SQL Statement property to use an expression. It is very simplistic, working on top-level tasks only, so nothing inside a Sequence Container or Loop will be checked, but obviously the code could be extended for this if required. The code that actually sets the expression is shown below; the rest is just wrapper code to find the package and to find the task.

        /// <summary>
        /// The CreationName of the Tasks to target, e.g. Execute SQL Task
        /// </summary>
        private const string TargetTaskCreationName =
            "Microsoft.SqlServer.Dts.Tasks.ExecuteSQLTask.ExecuteSQLTask, Microsoft.SqlServer.SQLTask, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

        /// <summary>
        /// The name of the task property to target.
        /// </summary>
        private const string TargetPropertyName = "SqlStatementSource";

        /// <summary>
        /// The property expression to set.
        /// </summary>
        private const string ExpressionToSet = "@[User::SQLQueryVariable]";

        ....

        // Check if the task matches our target task type
        if (taskHost.CreationName == TargetTaskCreationName)
        {
            // Check for the target property
            if (taskHost.Properties.Contains(TargetPropertyName))
            {
                // Get the property, check for an expression, and set the expression if not found
                DtsProperty property = taskHost.Properties[TargetPropertyName];
                if (string.IsNullOrEmpty(property.GetExpression(taskHost)))
                {
                    property.SetExpression(taskHost, ExpressionToSet);
                    changeCount++;
                }
            }
        }

    This is a console application, so to specify which packages you want to target you have three options:

    Find all packages in the current folder, the default behaviour if no arguments are specified:
        TaskExpressionPatcher.exe

    Find all packages in a specified folder, pass the folder as the argument:
        TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\

    Find a specific package, pass the file path as the argument:
        TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\Package.dtsx

    The code was written against SQL Server 2005, but just change the reference to Microsoft.SQLServer.ManagedDTS to be the SQL Server 2008 version and it will work fine. If you get an error Microsoft.SqlServer.Dts.Runtime.DtsRuntimeException: The package failed to load due to error 0xC0011008… then check that the package is from the correct version of SSIS compared to the referenced assemblies, 2005 vs 2008 in other words.

    Download Sample Project: TaskExpressionPatcher.zip (6 KB)
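    For illustration, the wrapper side might look something like the sketch below. This assumes the standard Microsoft.SqlServer.Dts.Runtime object model; the method and variable names are illustrative, not taken from the sample project.

        using Microsoft.SqlServer.Dts.Runtime;

        // Load a package, patch any matching top-level tasks, and save it back.
        private static void PatchPackage(string packagePath)
        {
            Application application = new Application();
            Package package = application.LoadPackage(packagePath, null);
            int changeCount = 0;

            foreach (Executable executable in package.Executables)
            {
                // Only plain tasks are TaskHosts; Sequence Containers and Loops
                // are different executable types, so they are skipped here, just
                // as the sample only patches top-level tasks.
                TaskHost taskHost = executable as TaskHost;
                if (taskHost == null)
                {
                    continue;
                }

                // ... the CreationName / property / expression check shown above,
                // incrementing changeCount when an expression is set ...
            }

            // Only write the file back if something actually changed.
            if (changeCount > 0)
            {
                application.SaveToXml(packagePath, package, null);
            }
        }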

    Read the article

  • Value of Certification Proven Again

    - by Paul Sorensen
    Hi Everyone, A recent article in Certification Magazine spells out the value of certification for IT professionals, and provides some detailed insight on the state of the economy and what it means to IT professionals. A few of the key findings (among many) by Certification Magazine:
    - Even in this tough economy, IT worker salaries grew at about 9%.
    - Many respondents reported that they received raises after earning a new certification.
    - Many people reported that they had earned one or two new credentials within the last year.
    - Salaries for more entry-level certifications were stagnant over the last year or so.
    I encourage you to take a look at the article. It is very encouraging to see that companies and individuals still recognize and value the hard work that goes into getting certified. Thank you,

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar: they taste so nice and sweet, but they'll rot your teeth if you eat them too much. I can't deny that they are useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But I also can't deny that they tend to be overused and abused to willy-nilly extend every living type.

    So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything, whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit. So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth. You just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules-of-thumb around them. A good extension method should:

    1. Apply to any possible instance of the type it extends.
    2. Simplify logic and improve readability/maintainability.
    3. Apply to the most specific type or interface applicable.
    4. Be isolated in a namespace so that it does not pollute IntelliSense.

    So let's look at a few examples in relation to these rules. The first rule, to me, is the most important of all. Once again, it bears repeating: a good extension method should apply to all possible instances of the type it extends. It should feel like the long lost relative that should have been included in the original class but somehow was missing from the family tree.

    Take this nifty little int extension. I saw this once in a blog and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:

        public static class IntExtensions
        {
            public static int Seconds(this int num)
            {
                return num * 1000;
            }

            public static int Minutes(this int num)
            {
                return num * 60000;
            }
        }

    This is so you could do things like:

        ...
        Thread.Sleep(5.Seconds());
        ...
        proxy.Timeout = 1.Minutes();
        ...

    Awww, you say, that's cute! Well, that's the problem: it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeSpan.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:

        total += numberOfTodaysOrders.Seconds();

    which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints, it applies to ints that are representative of time that you want to convert to milliseconds. Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense and b) calling Seconds() off something to pass to something that does not take milliseconds will be off by a factor of 1000 or worse. Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.
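    As an aside, one way to keep the readable candy while still honoring the first rule is to return a strongly typed TimeSpan instead of a bare int, so the result can only be used where a duration is expected. This variant is a sketch of mine, not something from the original post:

        public static class TimeSpanExtensions
        {
            // Returning a duration type rather than a raw millisecond count
            // means the result cannot be mistaken for an ordinary number.
            public static TimeSpan Seconds(this int num)
            {
                return TimeSpan.FromSeconds(num);
            }

            public static TimeSpan Minutes(this int num)
            {
                return TimeSpan.FromMinutes(num);
            }
        }

    Now Thread.Sleep(5.Seconds()) still reads naturally, because Thread.Sleep has a TimeSpan overload, but total += numberOfTodaysOrders.Seconds() no longer compiles when total is a numeric type.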
    For example, this is one of my personal favorites:

        public static bool IsBetween<T>(this T value, T low, T high)
            where T : IComparable<T>
        {
            return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;
        }

    This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:

        if (response.Employee.Address.YearsAt >= 2
            && response.Employee.Address.YearsAt <= 10)
        {
        ...
        }

    Now, you can instead type:

        if (response.Employee.Address.YearsAt.IsBetween(2, 10))
        {
        ...
        }

    Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc. -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.

    Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying that if you have something that applies to enumerables, you should create an extension for List, Array, Dictionary, etc. (though you may have reasons for doing so), but that you should beware of making things TOO general. For example, let's say we had an extension method like this:

        public static T ConvertTo<T>(this object value)
        {
            return (T)Convert.ChangeType(value, typeof(T));
        }

    This lets you do more fluent conversions like:

        double d = "5.0".ConvertTo<double>();

    However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:

        object value = new Employee();
        ...
        // InvalidCastException because typeof(IEmployee) != typeof(Employee)
        IEmployee emp = value.ConvertTo<IEmployee>();

    Yes, that's a downfall of working with IConvertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:

        public static T ConvertTo<T>(this IConvertible value)
        {
            return (T)Convert.ChangeType(value, typeof(T));
        }

    This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.

    Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in their own sub-namespace, something akin to:

        namespace Shared.Core.Extensions { ... }

    This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense; you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.
    To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too.

    Also, just as a personal preference, extension methods that aren't simply syntactical shortcuts I like to put in a static utility class, and then have extension methods for the syntactical candy. For instance, I think it imaginable that any object could be converted to XML:

        namespace Shared.Core
        {
            // A collection of XML utility classes
            public static class XmlUtility
            {
                ...
                // Serialize an object into an XML string
                public static string ToXml(object input)
                {
                    var xs = new XmlSerializer(input.GetType());

                    // Use new UTF8Encoding() here, not Encoding.UTF8. The latter includes
                    // the BOM, which screws up subsequent reads; the former does not.
                    using (var memoryStream = new MemoryStream())
                    using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))
                    {
                        xs.Serialize(xmlTextWriter, input);
                        return Encoding.UTF8.GetString(memoryStream.ToArray());
                    }
                }
                ...
            }
        }

    I also wanted to be able to call this from an object like:

        value.ToXml();

    But here's the problem: if I made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects, which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense; then, in my extensions namespace, I add the syntactical candy:

        namespace Shared.Core.Extensions
        {
            public static class XmlExtensions
            {
                public static string ToXml(this object value)
                {
                    return XmlUtility.ToXml(value);
                }
            }
        }

    So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use it as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle: XmlUtility is responsible for converting objects to XML, and XmlExtensions is responsible for extending object's interface with ToXml().
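    A quick usage sketch (the caller and its order variable are hypothetical, not from the post) shows the two call styles side by side:

        // Without the extensions namespace in scope: explicit utility call
        string xml1 = XmlUtility.ToXml(order);

        // With 'using Shared.Core.Extensions;' in scope: extension method call
        string xml2 = order.ToXml();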

    Read the article

  • Getting a solid understanding of Linux fundamentals

    - by JoshEarl
    I'm delving into the Linux world again as a diversion from my Microsoft-centric day job, and every time I tackle a new project I find it a frustrating exercise in trial and error. One thing that I always try to do when learning something new is figure out what the big pieces are and how they work together. I haven't yet come across a resource that explains Linux at this level. Resources seem to be either aimed at the barely computer literate crowd (Linux doesn't bite. Promise!) or the just compile the kernel and make your own distro crowd. I'm looking for a "JavaScript: The Good Parts" type of road map that doesn't necessarily answer all my questions so much as help me understand what questions I need to be asking. Any suggestions?

    Read the article

  • ASP.NET 4 Website Fails to Start on Your TFS 2010 Server?

    - by jdanforth
    Getting a “Could not find permission set named ‘ASP.Net’” error on your TFS 2010 server? It may have to do with the fact you’re trying to run ASP.NET as a child site of a SharePoint web site. The problem is described in the “ASP.NET 4 breaking changes” document:   This error occurs because the ASP.NET 4 code access security (CAS) infrastructure looks for a permission set named ASP.Net. However, the partial trust configuration file that is referenced by WSS_Minimal does not contain any permission sets with that name. Currently there is not a version of SharePoint available that is compatible with ASP.NET 4. As a result, you should not attempt to run an ASP.NET 4 Web site as a child site underneath SharePoint Web sites.   There is a workaround you could try by setting this in your web.config, if you know what you’re doing: <trust level="Full" originUrl="" />

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:
    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: it's really hard to get right, and it's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:
    - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach.
    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:
    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process.
    - Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization.
    - We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:
    - Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.
    We imagined the stub library feature working as follows:
    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.
    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:
    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.
    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:
    - The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.
    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code.
    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.
    At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built:
    - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
    - A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
    - Subsequent passes run lint, and do packaging.
    Given this structure, the additions to use stub objects are:
    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
    - A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule.
    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.
    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:
    - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
    - It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical.
    - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.
    Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with:

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with:

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Google I/O 2010 - Geospatial apps for desktop and mobile

    Google I/O 2010 - Map once, map anywhere: Developing geospatial applications for both desktop and mobile (Geo 201, Mano Marks). As the number of desktop and mobile platforms proliferates, the cost of developing and maintaining multiple versions of an application continues to increase. This session illustrates how the JS Maps API can be used to simplify cross-platform geospatial application development by enabling a single implementation to be shared across multiple platforms, while maintaining a native look and feel. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers

    Read the article

  • DBCC CHECKDB (BatmanDb, REPAIR_ALLOW_DATA_LOSS) – Are you Feeling Lucky?

    - by David Totzke
    I’m currently working for a client on a PowerBuilder to WPF migration. It’s one of those “I could tell you, but I’d have to kill you” kind of clients, and the quick-lime pits are currently occupied by the EMC tech…but I’ve said too much already. At approximately 3 or 4 pm that day, users of the Batman[1] application here in Gotham[1] started to experience problems accessing the application. Batman[2] is a document management system here that also integrates with the ERP system. Very little goes on here that doesn’t involve Batman in some way. The errors being received seemed to point to network issues (TCP protocol error, connection forcibly closed by the remote host, etc…) but the real issue was much more insidious. Connecting to the database via SSMS and performing selects on certain tables underlying the application areas that were having problems started to reveal the issue. You couldn’t do a SELECT * FROM MyTable without it bombing and giving the same error noted above. A run of DBCC CHECKDB revealed 14 tables with corruption. One of the tables with issues was the Document table. Pretty central to a “document management” system. Information was obtained from IT that a single drive in the SAN went bad in the night. A new drive was in place and was working fine. The partition that held the Batman database is configured for RAID Level 5, so a single drive failure shouldn’t have caused any trouble, and yet the database was corrupted. They do hourly incremental backups here, so the first thing done was to try a restore. A restore of the most recent backup failed, so they worked backwards until they hit a good point. This successful restore was for a backup at 3AM – a full day behind. This time also roughly corresponds with the time the SAN started to report the drive failure. The plot thickens… I got my hands on the output from DBCC CHECKDB and noticed a pattern. What’s sad is that nobody who should have noticed the pattern in the DBCC output did notice. There was a rush to do things to try and recover the data before anybody really understood what was wrong with it in the first place. Cooler heads must prevail in these circumstances; some investigation should be done and a plan of action laid out, or you could end up making things worse[3]. DBCC CHECKDB also told us that: repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB. Yikes. That means that the database is so messed up that you’re definitely going to lose some stuff when you repair it to get it back to a consistent state. All the more reason to do a little more investigation into the problem. Rescuing this database is preferable to having to export all of the data possible from this database into a new one. This is a fifteen year old application with about seven hundred tables. There are TRIGGERS everywhere, not to mention the referential integrity constraints to deal with. Only fourteen of the tables have an issue. We have a good backup that is missing the last 24 hours of business, which means we could have a “do-over” of yesterday, but that’s not a very palatable option either. All of the affected tables had TEXT columns, and all of the errors were about LOB data types and orphaned off-row data, which basically means TEXT, IMAGE or NTEXT columns. If we did a SELECT on an affected table and excluded those columns, we got all of the rows. We exported that data into a separate database. Things are looking up.
    Working on a copy of the production database, we then ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS and that “fixed” everything up. The allow data loss option will delete the bad rows. This isn’t too horrible, as we have all of those rows minus the text fields from our earlier export. Now I could LEFT JOIN to the exported data to find the missing rows and INSERT them minus the TEXT column data. We had the restored data from the good 3AM backup that we could now JOIN to and, with fingers crossed, recover the missing TEXT column information. We got lucky in that all of the affected rows were old, and in the end we didn’t lose anything. :O All of the row counts along the way worked out and it looks like we dodged a major bullet here. We’ve heard back from EMC and it turns out the SAN firmware that they were running here is apparently buggy. This thing is only a couple of months old. Grrr…. They dispatched a technician that night to come and update it. That explains why RAID didn’t save us. All-in-all this could have been a lot worse. Given the root cause here, they basically won the lottery in not losing anything.

    Here are a few links to some helpful posts on the SQL Server Engine blog. I love the title of the first one:
    - Which part of 'REPAIR_ALLOW_DATA_LOSS' isn't clear?
    - CHECKDB (Part 8): Can repair fix everything? (in fact, read the whole series)
    - Ta da! Emergency mode repair (we didn’t have to resort to this one, thank goodness)

    Dave
    Just because I can…

    [1] Names have been changed to protect the guilty.
    [2] I'm Batman.
    [3] And if I'm the coolest head in the room, you've got even bigger problems...

    Read the article

  • libssh2 and simultaneous connections

    - by Florian Margaine
    I'm writing a node.js C++ module using the C library libssh2. The module is supposed to be a bridge to connect to SSH over HTTPS. Right now, I'm still in the design/learning phase of the V8 API and C++, and I have a design question: libssh2 is a C library, and all its methods are global. From what I see in the examples, libssh2 can only handle one connection at a time. If I want to allow simultaneous connections to different SSH servers, do I have to fork a process to completely separate the libssh2 "instances", or is spawning a thread enough? I don't know enough about the separation limit used there. Any idea on how to handle this is appreciated.

    Read the article

  • SUPINFO International University in Mauritius

    For a while now I've been considering picking up my activities as a student, and I'd like to get a degree in Computer Science.

    Personal motivation
    I mean, after all these years as a professional software (and database) developer, I have the personal urge to complete this part of my education. Having various certifications by Microsoft and being awarded as a Microsoft Most Valuable Professional (MVP) twice looks pretty awesome on a resume, but having a "proper" degree would just complete my package. During the last couple of years I already got in touch with C-SAC (a local business school with degree courses), the University of Mauritius and BCS, the Chartered Institute for IT, to check the options to enroll as an experienced software developer. Quite frankly, it was kind of alienating to receive that feedback: Start from scratch! No, seriously? Spending x amount of years to sit for courses that might be outdated and form part of your daily routine? Probably being in an awkward situation in which your professional expertise might exceed the lecturers' knowledge? I don't know... but if that's the path to walk... well, then I might have to go for it.

    SUPINFO International University
    Some weeks ago I was contacted by the General Manager, Education Recruitment and Development of Medine Education Village, Yamal Matabudul, to have a chat on how the local IT scene, namely the Mauritius Software Craftsmanship Community (MSCC), could assist in their plans to promote their upcoming campus. Medine went into partnership with the French-based SUPINFO International University, and Mauritius will be the 36th location world-wide for SUPINFO. Actually, the concept of SUPINFO is very similar to the common understanding of an apprenticeship in Germany. Not only does a student enroll into the programme, but they will also be placed into various internships as part of the curriculum. It's a big advantage in my opinion, as the person stays in touch with the daily procedures and workflows in the real world of IT. Statements like "We just received a 'crash course' of information and learned new technology which is equivalent to 1.5 months of lectures at the university" wouldn't form part of the experience of such an education.

    Open Day at the Medine Education Village
    Last Saturday, Medine organised their Open Day, and it was the official inauguration of the SUPINFO campus in Mauritius. It's now listed on their website, too - but be warned, the site is mainly in French, although the courses are all done in English. Not only was it a big opportunity to "hang out" on the campus of Medine, but it was great to see the first professional partners for their internship programme, too. Oh, just for the record, IOS Indian Ocean Software Ltd. will also be among the future employers for SUPINFO students. More about that in an upcoming blog entry.

    [Photo: Open Day at Medine Education Village - SUPINFO International University in Mauritius]

    Mr Alick Mouriesse, President of SUPINFO, arrived the previous day, and he gave all attendees a great overview of the roots of SUPINFO, the general development of the educational syllabus, and their high emphasis on partnerships with local IT companies in order to assist their students in getting future jobs, but also to feel the heartbeat of technology live. Something which is completely missing in classic institutions of tertiary education in Computer Science. And since I was on tour with my children, as usual during weekends, he also talked about the outlook of having a SUPINFO campus in Mauritius.
    Apart from the close connection to IT companies and the internships provided to students, SUPINFO clearly works on an international level, meaning students of SUPINFO can move around the globe and continue their studies seamlessly. For example, you might enroll for your first year in France, then do your 2nd and 3rd years in Canada or any other country with a SUPINFO campus to earn your bachelor's degree, and then live and study in Mauritius for the next 2 years to achieve a master's degree.

    Having a chat with Dale Smith, Expand Technologies, after his interesting session on Technological Entrepreneurship - TechPreneur

    More questions by other craftsmen of the Mauritius Software Craftsmanship Community

    And of course, this concept works in any direction, giving Mauritian students a huge (!) opportunity to live, study and work abroad. Thanks to this, Medine has already announced that there will be new facilities near Cascavelle to provide dormitories and other amenities for international students coming to our island. Awesome!

    Okay, but why SUPINFO? Well, coming back to my original statement - I'd like to get a degree in Computer Science - SUPINFO has a process called Validation of Acquired Experience (VAE), which is tailor-made for employees in the field of IT and allows you to enroll in their course programme. I already got in touch with their online support chat but was only redirected to some FAQs on their website, unfortunately. So, during the Open Day I seized the opportunity to have a one-on-one conversation with Alick Mouriesse, and he clearly encouraged me to gather my certifications and working experience. SUPINFO does an individual evaluation before assigning a course level, and hopefully my chances of getting some modules credited ahead of my studies look better than with the other institutes. Don't get me wrong, I don't want to take the easy route, but why should someone sit through "Database 101" or "Principles of OOP" when applying and preaching database normalisation and practising Clean Code Developer are like flesh and blood? Anyway, I'll be off to get my transcripts of certificates together with my course assignments from the old days at university. Yes, I studied Applied Chemistry for a couple of years before crossing over into IT, and into software development in particular... ;-)

    Read the article

  • Microsoft Press Deal of the day 4/Sep/2012 - Programming Microsoft® SQL Server® 2012

    - by TATWORTH
    Today's deal of the day from Microsoft Press at http://shop.oreilly.com/product/0790145322357.do?code=MSDEAL is Programming Microsoft® SQL Server® 2012 "Your essential guide to key programming features in Microsoft® SQL Server® 2012 Take your database programming skills to a new level—and build customized applications using the developer tools introduced with SQL Server 2012. This hands-on reference shows you how to design, test, and deploy SQL Server databases through tutorials, practical examples, and code samples. If you’re an experienced SQL Server developer, this book is a must-read for learning how to design and build effective SQL Server 2012 applications."

    Read the article

  • MeeGo returns next month on Jolla's first smartphone; the mobile OS's successor is called "Sailfish"

    Only one smartphone ever shipped with MeeGo - such is Nokia's record after the merger of Maemo with Moblin (Intel's former mobile OS): the N9. And it is that very N9 that served as the basis for the Lumia 800 under Windows Phone, signing the death warrant of MeeGo on the struggling Finnish giant's smartphones. Some would even say it isn't really MeeGo but rather Harmattan, an OS designed to bridge the transition between Maemo and MeeGo: take Maemo 5, throw out GTK+ so that only Qt remains, bolt the MeeGo APIs on top, and you get Harmattan, officially branded "MeeGo 1.2 Harmattan"...

    Read the article

  • WPF: Timers

    - by Ilya Verbitskiy
    I believe that sooner or later your WPF application will need to execute something periodically, and today I would like to discuss how to do that. There are two possible solutions: you can use the classic System.Threading.Timer class, or the System.Windows.Threading.DispatcherTimer class, which is part of WPF. I have created an application to show you how to use the API.

    Let's take a look at how you can implement a timer using the System.Threading.Timer class. First of all, it has to be initialized.

        private Timer timer;

        public MainWindow()
        {
            // Form initialization code

            timer = new Timer(OnTimer, null, Timeout.InfiniteTimeSpan, Timeout.InfiniteTimeSpan);
        }

    Timer's constructor accepts four parameters. The first one is the callback method which is executed when the timer ticks; I will show it to you soon. The second parameter is a state object which is passed to the callback; it is null here because there is nothing to pass this time. The third parameter is the amount of time to delay before the callback is invoked for the first time. I use the System.Threading.Timeout helper class to represent an infinite timeout, which simply means the timer is not going to start at the moment. And the final, fourth parameter represents the time interval between invocations of the callback; an infinite timespan here means the callback method would be executed just once. Well, the timer has been created. Let's take a look at how you can start it.

        private void StartTimer(object sender, RoutedEventArgs e)
        {
            timer.Change(TimeSpan.Zero, new TimeSpan(0, 0, 1));

            // Disable the start buttons and enable the reset button.
        }

    The timer is started by calling its Change method. It accepts two arguments: the amount of time to delay before invoking the callback method, and the time interval between invocations of the callback. TimeSpan.Zero means we start the timer immediately, and TimeSpan(0, 0, 1) tells the timer to tick every second. There is one method that hasn't been shown yet: the callback method OnTimer, which does a simple task - it shows the current time in the center of the screen. Unfortunately, you cannot simply write something like this:

        clock.Content = DateTime.Now.ToString("hh:mm:ss");

    The reason is that Timer runs the callback method on a separate thread, and it is not possible to access GUI controls from a non-GUI thread. You can avoid the problem using the System.Windows.Threading.Dispatcher class.

        private void OnTimer(object state)
        {
            Dispatcher.Invoke(() => ShowTime());
        }

        private void ShowTime()
        {
            clock.Content = DateTime.Now.ToString("hh:mm:ss");
        }

    You can build a similar application using the System.Windows.Threading.DispatcherTimer class. The class represents a timer which is integrated into the Dispatcher queue. This means that your callback method is executed on the GUI thread, and you can write code which updates your GUI components directly.

        private DispatcherTimer dispatcherTimer;

        public MainWindow()
        {
            // Form initialization code

            dispatcherTimer = new DispatcherTimer { Interval = new TimeSpan(0, 0, 1) };
            dispatcherTimer.Tick += OnDispatcherTimer;
        }

    DispatcherTimer has a nicer and cleaner API. All you need is to specify the tick interval and the Tick event handler. Then you just call the Start method to start the timer.

        private void StartDispatcher(object sender, RoutedEventArgs e)
        {
            dispatcherTimer.Start();

            // Disable the start buttons and enable the reset button.
        }

    And, since the Tick event handler is executed on the GUI thread, the code which sets the actual time is straightforward.

        private void OnDispatcherTimer(object sender, EventArgs e)
        {
            ShowTime();
        }

    We're almost done. Let's take a look at how to stop the timers. It is easy with the DispatcherTimer:

        dispatcherTimer.Stop();

    And it is slightly more complicated with the Timer; you should use the Change method again:

        timer.Change(Timeout.InfiniteTimeSpan, Timeout.InfiniteTimeSpan);

    So what is the best way to add a timer to an application? The DispatcherTimer has the simpler interface, but its advantage is also its disadvantage: you should not use it if your Tick event handler executes time-consuming operations, because your window freezes while the event handler method is executing. You should think about using System.Threading.Timer in that case. The code is available on GitHub.
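    If you would rather keep the DispatcherTimer API even though a tick does heavy work, here is a minimal sketch of one common workaround. This is an addition to the post above: it assumes .NET 4.5+ for async/await and reuses the clock and dispatcherTimer names from the example. The idea is to push the expensive part to the thread pool and let only the final UI update run on the dispatcher.

        // requires: using System; using System.Threading; using System.Threading.Tasks;
        dispatcherTimer.Tick += async (sender, e) =>
        {
            // the handler returns to the dispatcher at the await,
            // so the window stays responsive while the work runs
            string text = await Task.Run(() =>
            {
                Thread.Sleep(500); // stand-in for a time-consuming operation
                return DateTime.Now.ToString("hh:mm:ss");
            });

            clock.Content = text; // the continuation resumes on the UI thread
        };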

    Read the article

  • SQLAuthority News – #TechEdIn – TechEd India 2012 Memories and Photos

    - by pinaldave
    TechEd India 2012 was held in Bangalore from March 21 to 23, 2012. Just like every year, the event was bigger, grander and more inspiring.

    Pinal Dave at TechEd India 2012

    Family Event

    Every single year, TechEd is a special affair for my entire family. Four months before the start of TechEd, I usually start to build a mental image of the event and think about various things. For the most part, what excites me most is presenting a session and meeting friends. Seriously, I start thinking about my session 4 months before the event! I work on my presentation day and night. I want to make sure that what I present is accurate and that I have experienced it firsthand. My wife and my daughter also contribute to my efforts. For us, TechEd is a family event, and the two of them feel equally responsible as well. They give up their family time so I can bring out the best content for the community.

    Pinal, Shaivi and Nupur at TechEd India 2012

    Guinea Pigs (My Experiment Victims)

    I do not rehearse my session, ever. However, I test my demos almost every single day, right up to the moment I have to present them. I sometimes go over a demo more than 2-3 times a day even though the event is more than a month away. I have two "guinea pigs": 1) Nupur Dave and 2) Vinod Kumar. When I am at home, I present my demos to my wife Nupur. At times I feel that people often back up their demos, but in my case, I have backup demo presenters. In the office during lunch time, I present the demos to Vinod. I am sure he could walk through my demos with his eyes closed.

    Pinal and Vinod at TechEd India 2012

    My Sessions

    I have been determined to present my sessions in a real and practical manner. I prefer to present a subject that I myself would be eager to attend and sit through if I were in the audience. Keeping that principle in mind, I created two sessions this year.

    SQL Server Misconception and Resolution

    Pinal and Vinod at TechEd India 2012

    We believe all kinds of stuff - that the earth is flat, that the forbidden fruit was an apple, that the big bang theory explains the origin of the universe, and so many other things. Just like these, we have plenty of misconceptions in SQL Server as well. I have had this dream of co-presenting a session with Vinod Kumar for the past 3 years. I have been asking him every year if we could present a session together, but we never got it to work out, until this year. Fortunately, we got the chance to stand on the same stage and present a single subject. I believe that Vinod Kumar and I have an excellent synergy when we are working together. We know each other's strengths and weaknesses. We know when the other person will speak and when he will keep quiet. The reason behind this synergy is that we have worked on 2 video learning courses (SQL Server Indexes and SQL Server Questions and Answers) and authored 1 book (SQL Server Questions and Answers) together.

    Crowd Outside Session Hall

    This session was inspired by the "Laurel and Hardy" show, so we performed a role-play of those famous characters. We had an excellent time on stage and, for sure, the audience had a wonderful time, too. We had an extremely large audience for this session and had a great time interacting with them.

    Speed Up! - Parallel Processes and Unparalleled Performance

    Pinal Dave at TechEd India 2012

    I wanted to approach this session at level 400, and I was very determined to do so. The biggest challenge I had was that this was a 60-minute session with a very generic audience profile, so I had to present at level 100 as well as at 400. I worked hard to tune up these demos. I wanted to make sure that my messages would land perfectly in the minds of the attendees, so that when they walked out of the session they could use the knowledge I shared on their own servers. After the session, I felt extreme satisfaction as I received lots of positive feedback at the event. At one point, so many people rushed towards me that I was a bit scared the stage might break and someone would get injured. Fortunately, nothing like that happened, and I was able to shake hands with everybody.

    Pinal Dave at TechEd India 2012

    Crowd rushing to Pinal at TechEd India 2012

    Networking

    This is one of the primary reasons many of us visit the annual TechEd event. I had a fantastic time meeting SQL Server enthusiasts. It was a terrific time meeting old friends, user group members, MVPs and SQL enthusiasts. I have taken many photographs with lots of people, but I have received very few back. If you are reading this blog and have a photo of us at the event, would you please send it to me so I can keep it in my memory lane?

    SQL Track Speakers: Jacob and Pinal at TechEd India 2012

    SQL Community: Pinal, Tejas, Nakul, Jacob, Balmukund, Manas, Sudeepta, Sahal at TechEd India 2012

    Star Speakers: Amit and Balmukund at TechEd India 2012

    TechEd Rockstars: Nakul, Tejas and Pinal at TechEd India 2012

    I guess TechEd is a mix of family affair and culture for me! Hamara TechEd (Our TechEd) Please tell me which photo you like the most!

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • Understanding The Very Nature Of Linux - Becoming Core Programmer

    - by MrWho
    Well, I want to know how exactly I should start, and get onto the right path, to become a core programmer and also gain a decent understanding of Linux infrastructure and fundamentals. I know my question may seem general, but that's not because of an inability to ask a question. I'm just confused. I've programmed in a few languages and have got my hands dirty with code, so I'm aware of the big picture of what programmers actually do. Now I want to go deeper and start my studies at a different level than I used to learn at before: I want to become an advanced core programmer and learn where it all really starts. I'd like to know, bit by bit, what today's operating systems like Linux have been built on. I DO really need good references; books would be preferred for learning the fundamentals. If someone could tell me the general path of what I'm supposed to do, it would be really appreciated.

    Read the article

  • Web development starting a career [closed]

    - by user985482
    Hi, I am in the 3rd and last year of a college informatics programme, and I am interested in following a career in web development when I finish (2 more months). From what I understand, these days to get hired you need to know a variety of technologies - at least that is the case in Romania. Most of the jobs I have seen, even at entry level, ask you to know the following:

    HTML/CSS
    JavaScript, a framework (preferably jQuery), Ajax
    a server-side language, in my case PHP, and a framework
    SQL and an RDBMS, in my case MySQL
    a CMS, in my case WordPress

    My question is: how well should I, or anyone looking to get hired as a web developer for their first job, know these technologies in order to get hired, and what else should we aim to learn to have a better chance of getting hired? I don't know if the question is right for this forum, but I believe this could help many students, and anyone taking an interest in web development, to know what they should expect from their employers when they try to find work.

    Read the article

  • Friday Fun: Splash Back

    - by Asian Angel
    The best part of the week has finally arrived, so why not take a few minutes to have some quick fun? In this week's game you get to play with alien goo as you work to clear the game board and reach as high a level as possible.

    Read the article

  • Advice on SCRUM for the solitary developer [closed]

    - by ProfK
    Possible Duplicate: Agile for the Solo Developer

    I am looking for advice on the SCRUM process for a solitary developer. Most SCRUM resources I see focus on its use in a team environment, hence my question here. I'd like some guidance on structuring and managing my projects for SCRUM, with me as a solitary developer and business owner, but still occasionally including my clients for input and feedback. Areas I'm not clear on include resolving my backlog into 'sprintable' project areas and stories, defining user stories properly so they can be digested by developer-level users, defining feasible sprints for a single developer, etc. Essentially I'm looking for advice on moving from using SCRUM in a team/office environment, with colleagues and a project manager, and using chaos/cowboy-coding on my own, to assuming the role of PM myself and adopting SCRUM for work on my own. Any advice is welcome.

    Read the article

  • Adding new events to be handled by script part without recompilation of c++ source code

    - by paul424
    As the title says, I want to write an event system whose handling methods are written in an external scripting language, namely AngelScript. The AS methods would have access to the game's world-modifying API (which has to be registered with the AngelScript machine as the game starts). I have come to this problem: if we use AngelScript for rapid prototyping of the game world's behavior, we would like to be able to define new event types without recompiling the main binary. Is that even possible? Don't stick to AngelScript - I'm thinking about any possible scripting language. The main thought is this: if some really new event is to be constructed, we must monitor some information coming from the binary, for example some variable's value changing, and the events are then just triggered by conditions on those changes. We might be monitoring a creature's HP by having a function called on each change; there we could define new events such as "creature hurt", "creature killed", "creature healed", or "creature killed and tormented to pieces" (like getting HP very far below 0). Are there any other ideas for constructing new events on the scripting side?
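    To make the core idea concrete - new event types as data, not compiled code - here is a language-agnostic sketch, written in C# purely for illustration (the names are invented; this is not AngelScript's actual API). The binary exposes only a low-level "value changed" hook, while the script side registers named event types as conditions over those changes:

        using System;
        using System.Collections.Generic;

        class EventRegistry
        {
            // each script-defined event type is just (name, condition, handler)
            private readonly List<(string Name, Func<int, int, bool> When, Action<string> Fire)> triggers =
                new List<(string, Func<int, int, bool>, Action<string>)>();

            // called from the scripting side to define a brand-new event type
            public void Define(string name, Func<int, int, bool> when, Action<string> fire) =>
                triggers.Add((name, when, fire));

            // called from the binary whenever a monitored value (e.g. a creature's HP) changes
            public void OnValueChanged(int oldHp, int newHp)
            {
                foreach (var t in triggers)
                    if (t.When(oldHp, newHp))
                        t.Fire(t.Name);
            }
        }

        class Demo
        {
            static void Main()
            {
                var registry = new EventRegistry();

                // these event types exist only as runtime data - no recompilation needed
                registry.Define("CreatureKilled", (o, n) => o > 0 && n <= 0, name => Console.WriteLine(name));
                registry.Define("CreatureTormented", (o, n) => n < -50, name => Console.WriteLine(name));

                registry.OnValueChanged(10, -60); // fires both handlers
            }
        }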

    Read the article

  • AndEngine; Box2D - high speed body overlapping, prismatic joints

    - by Visher
    I'm trying to make a good suspension for my car game, but some problems with it are making me nervous. At the beginning I tried to build it out of just one prismatic joint/revolute joint per wheel, but surprisingly the prismatic joint, which should only move along the Y axis, also moves along the X axis if the car travels very fast - or even at low speeds if setContinuousPhysics is true. This causes the wheels to "shift back", moving them away from the axle. Now I've tried to add some bodies that will keep everything in place: the suspension helper collides with the spring only, and the wheel doesn't collide with the spring, the helper or the vehicle body. This is how I create those elements:

        // chassis
        rect = new Rectangle(1100, 1350, 200, 50, getVertexBufferObjectManager());
        rect.setColor(Color.RED);
        scene.attachChild(rect);
        //rect.setRotation(90);

        // spring anchors
        Rectangle miniRect1 = new Rectangle(1102, 1355, 30, 50, getVertexBufferObjectManager());
        miniRect1.setColor(0, 0, 1, 0.5f);
        miniRect1.setVisible(true);
        scene.attachChild(miniRect1);

        Rectangle miniRect2 = new Rectangle(1268, 1355, 30, 50, getVertexBufferObjectManager());
        miniRect2.setColor(0, 0, 1, 0.5f);
        miniRect2.setVisible(true);
        scene.attachChild(miniRect2);

        rectBody = PhysicsFactory.createBoxBody(physicsWorld, rect, BodyDef.BodyType.DynamicBody,
                PhysicsFactory.createFixtureDef(10.0f, 0.01f, 10.0f));
        rectBody.setUserData("car");

        Body miniRect1Body = PhysicsFactory.createBoxBody(physicsWorld, miniRect1, BodyDef.BodyType.DynamicBody,
                PhysicsFactory.createFixtureDef(10.0f, 0.01f, 10.0f));
        miniRect1Body.setUserData("suspension");

        Body miniRect2Body = PhysicsFactory.createBoxBody(physicsWorld, miniRect2, BodyDef.BodyType.DynamicBody,
                PhysicsFactory.createFixtureDef(10.0f, 0.01f, 10.0f));
        miniRect2Body.setUserData("suspension");

        physicsWorld.registerPhysicsConnector(new PhysicsConnector(rect, rectBody, true, true));
        physicsWorld.registerPhysicsConnector(new PhysicsConnector(miniRect1, miniRect1Body, true, true));
        physicsWorld.registerPhysicsConnector(new PhysicsConnector(miniRect2, miniRect2Body, true, true));

        // prismatic joints acting as springs
        // note: enableLimit is true but lowerTranslation/upperTranslation are never set,
        // so they keep their default of 0 and the joint cannot translate at all;
        // the joint axis is also usually given as a unit vector such as (0, 1)
        PrismaticJointDef miniRect1JointDef = new PrismaticJointDef();
        miniRect1JointDef.initialize(rectBody, miniRect1Body, miniRect1Body.getWorldCenter(), new Vector2(0.0f, 0.3f));
        miniRect1JointDef.collideConnected = false;
        miniRect1JointDef.enableMotor = true;
        miniRect1JointDef.maxMotorForce = 15;
        miniRect1JointDef.motorSpeed = 5;
        miniRect1JointDef.enableLimit = true;
        physicsWorld.createJoint(miniRect1JointDef);

        PrismaticJointDef miniRect2JointDef = new PrismaticJointDef();
        miniRect2JointDef.initialize(rectBody, miniRect2Body, miniRect2Body.getWorldCenter(), new Vector2(0.0f, 0.3f));
        miniRect2JointDef.collideConnected = false;
        miniRect2JointDef.enableMotor = true;
        miniRect2JointDef.maxMotorForce = 15;
        miniRect2JointDef.motorSpeed = 5;
        miniRect2JointDef.enableLimit = true;
        physicsWorld.createJoint(miniRect2JointDef);

        scene.attachChild(karoseriaSprite);

        // suspension helpers, welded to the chassis
        Rectangle r1 = new Rectangle(1050, 1300, 52, 150, getVertexBufferObjectManager());
        r1.setColor(0, 1, 0, 0.5f);
        r1.setVisible(true);
        scene.attachChild(r1);
        Body r1body = PhysicsFactory.createBoxBody(physicsWorld, r1, BodyDef.BodyType.DynamicBody,
                PhysicsFactory.createFixtureDef(10.0f, 0.001f, 0.01f));
        r1body.setUserData("suspensionHelper");
        physicsWorld.registerPhysicsConnector(new PhysicsConnector(r1, r1body, true, true));
        WeldJointDef r1jointDef = new WeldJointDef();
        r1jointDef.initialize(r1body, rectBody, r1body.getWorldCenter());
        physicsWorld.createJoint(r1jointDef);

        Rectangle r2 = new Rectangle(1132, 1300, 136, 150, getVertexBufferObjectManager());
        r2.setColor(0, 1, 0, 0.5f);
        r2.setVisible(true);
        scene.attachChild(r2);
        Body r2body = PhysicsFactory.createBoxBody(physicsWorld, r2, BodyDef.BodyType.DynamicBody,
                PhysicsFactory.createFixtureDef(10.0f, 0.001f, 0.01f));
        r2body.setUserData("suspensionHelper");
        physicsWorld.registerPhysicsConnector(new PhysicsConnector(r2, r2body, true, true));
        WeldJointDef r2jointDef = new WeldJointDef();
        r2jointDef.initialize(r2body, rectBody, r2body.getWorldCenter());
        physicsWorld.createJoint(r2jointDef);

        Rectangle r3 = new Rectangle(1298, 1300, 50, 150, getVertexBufferObjectManager());
        r3.setColor(0, 1, 0, 0.5f);
        r3.setVisible(true);
        scene.attachChild(r3);
        Body r3body = PhysicsFactory.createBoxBody(physicsWorld, r3, BodyDef.BodyType.DynamicBody,
                PhysicsFactory.createFixtureDef(1f, 0.01f, 0.01f));
        r3body.setUserData("suspensionHelper");
        physicsWorld.registerPhysicsConnector(new PhysicsConnector(r3, r3body, true, true));
        WeldJointDef r3jointDef = new WeldJointDef();
        r3jointDef.initialize(r3body, rectBody, r3body.getWorldCenter());
        physicsWorld.createJoint(r3jointDef);

        MouseJointDef md = new MouseJointDef(); // currently unused

        // wheels
        Sprite wheel1 = new Sprite(
                miniRect1.getX() + miniRect1.getWidth() / 2 - wheelTexture.getWidth() / 2,
                miniRect1.getY() + miniRect1.getHeight() - wheelTexture.getHeight() / 2,
                wheelTexture, engine.getVertexBufferObjectManager());
        scene.attachChild(wheel1);
        Body wheel1body = PhysicsFactory.createCircleBody(physicsWorld, wheel1, BodyDef.BodyType.DynamicBody,
                PhysicsFactory.createFixtureDef(10.0f, 0.01f, 5.0f));
        wheel1body.setUserData("wheel");
        Shape wheel1shape = wheel1body.getFixtureList().get(0).getShape();
        wheel1shape.setRadius(wheel1shape.getRadius() * (3.0f / 4.0f));
        physicsWorld.registerPhysicsConnector(new PhysicsConnector(wheel1, wheel1body, true, true));

        Sprite wheel2 = new Sprite(
                miniRect2.getX() + miniRect2.getWidth() / 2 - wheelTexture.getWidth() / 2,
                miniRect2.getY() + miniRect2.getHeight() - wheelTexture.getHeight() / 2,
                wheelTexture, engine.getVertexBufferObjectManager());
        scene.attachChild(wheel2);
        Body wheel2body = PhysicsFactory.createCircleBody(physicsWorld, wheel2, BodyDef.BodyType.DynamicBody,
                PhysicsFactory.createFixtureDef(10.0f, 0.01f, 5.0f));
        wheel2body.setUserData("wheel");
        Shape wheel2shape = wheel2body.getFixtureList().get(0).getShape();
        wheel2shape.setRadius(wheel2shape.getRadius() * (3.0f / 4.0f));
        physicsWorld.registerPhysicsConnector(new PhysicsConnector(wheel2, wheel2body, true, true));

        // revolute joints attaching the wheels; the rear one is motorised
        // note: motorSpeed/maxMotorTorque have no effect unless enableMotor is
        // set to true, here or later at runtime
        RevoluteJointDef frontWheelRevoluteJointDef = new RevoluteJointDef();
        frontWheelRevoluteJointDef.initialize(wheel1body, miniRect1Body, wheel1body.getWorldCenter());
        frontWheelRevoluteJointDef.collideConnected = false;

        RevoluteJointDef rearWheelRevoluteJointDef = new RevoluteJointDef();
        rearWheelRevoluteJointDef.initialize(wheel2body, miniRect2Body, wheel2body.getWorldCenter());
        rearWheelRevoluteJointDef.collideConnected = false;
        rearWheelRevoluteJointDef.motorSpeed = 2050;
        rearWheelRevoluteJointDef.maxMotorTorque = 3580;

        physicsWorld.createJoint(frontWheelRevoluteJointDef);
        Joint j = physicsWorld.createJoint(rearWheelRevoluteJointDef);
        rearWheelRevoluteJoint = (RevoluteJoint) j;

        r1body.setBullet(true);
        r2body.setBullet(true);
        r3body.setBullet(true);
        miniRect1Body.setBullet(true);
        miniRect2Body.setBullet(true);
        rectBody.setBullet(true);

    At low speeds it's OK, but at high speed the vehicle can even flip around on flat ground. Is there a way to make this work better?

    Read the article

  • Strategy to find bottleneck in a network

    - by Simone
    Our enterprise is having some problems when the number of incoming requests goes beyond a certain amount. To make things simpler: we have N websites that use, among other things, a local web service. This service is hosted by IIS, and it's a .NET 4.0 (C#) application executed in a farm. It's REST-oriented, built around OpenRasta. As already mentioned, by stress testing it with JMeter we've found that beyond a certain number of requests the service's performance drops. This service is, in turn, itself a client of 3 other distinct web services and also a client of a DB server, so it's not very clear what the real culprit of this abrupt decay is. Those 3 other web services are installed in our farm too, and are clients of other DB servers (and possibly of services that are outside my team's control). What strategy do you suggest to try to locate where the bottleneck(s) are? Do you have any high-level suggestions?
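    One low-effort first step - a minimal sketch with hypothetical names, not a definitive diagnosis - is to time every downstream hop per request, so that under the JMeter load the slow dependency (one of the 3 services or the DB) shows up directly in the logs:

        using System;
        using System.Diagnostics;

        static class Instrumentation
        {
            // wraps any dependency call and logs it when it is slow;
            // the 200 ms threshold is an arbitrary placeholder
            public static T Timed<T>(string dependency, Func<T> call)
            {
                var sw = Stopwatch.StartNew();
                try
                {
                    return call();
                }
                finally
                {
                    sw.Stop();
                    if (sw.ElapsedMilliseconds > 200)
                        Trace.TraceWarning("{0} took {1} ms", dependency, sw.ElapsedMilliseconds);
                }
            }
        }

        // hypothetical usage inside the REST handler:
        // var stock = Instrumentation.Timed("InventoryService", () => inventoryClient.Get(id));

    Correlating those timings with the JMeter request rate usually narrows the culprit to one hop before any deeper profiling is needed.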

    Read the article

  • 3d transformation of game world keeping gameplay 2d - COCOS2D 2.0

    - by samfisher
    Using: COCOS2D + iOS. I want to rotate the game world, maybe loading another .tmx file for other dimensions when the user wants to switch dimension. The effect I am looking for is something like this: CLICK HERE. What I have thought of till now: rotating the CCCamera will be mandatory. Question: how will I keep the other part of the level in place while the camera rotates/is rotating? I can load a CCSprite and rotate it accordingly in the 3rd dimension. Phew..!! Question: when the camera and world are rotated, will the player controls still work properly? I think not...? I think a better option would be to check out COCOS3D... there I could implement a 3d world... right?? Question: I'm not sure how well 2d dynamics will work there, as I want to use Box2d as the physics engine.. could anyone provide suggestions? Regards, Sam

    Read the article

  • How to achieve highly accurate car physics such as Liveforspeed?

    - by Kim Jong Woo
    Live for Speed is a racing simulator with an amazing amount of realistic physics. For example, tires get warm, and a tire actually deforms when you turn corners. You need to play this game with a mouse at the minimum, because it almost drives like the real thing. Anyhow, how does one achieve that level of physics simulation? Are there off-the-shelf solutions out there? If not, how does one start with simulating real-world physics as closely as possible? I would love to be able to work on an open-source, car-physics-focused game. Imagine: with more passionate developers, it could keep things going.
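    Just to hint at where such simulations typically start (this is not how Live for Speed actually works - its model is proprietary): a classic first building block is a slip-based tire curve such as Pacejka's "magic formula". A toy sketch in C#, with placeholder coefficients rather than real tire data:

        using System;

        static class TireModel
        {
            // Pacejka "magic formula": F = D * sin(C * atan(B*x - E*(B*x - atan(B*x))))
            // B, C, D, E are tire-dependent; these values are illustrative only
            public static double LateralForce(double slipAngleRad)
            {
                const double B = 10.0; // stiffness factor
                const double C = 1.9;  // shape factor
                const double D = 1.0;  // peak value (normalised)
                const double E = 0.97; // curvature factor

                double x = B * slipAngleRad;
                return D * Math.Sin(C * Math.Atan(x - E * (x - Math.Atan(x))));
            }
        }

    A fuller simulation then makes those coefficients depend on state such as load and tire temperature, which is where effects like warming tires come from.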

    Read the article

  • Is there a plugin directory software like Firefox Addon or Wordpress Plugin Directory

    - by lulalala
    Nowadays browsers and content management systems all have plugin/module/addon/theme systems that let users extend them at will. Developers can also submit plugins to share them with others. The most famous examples are Firefox Add-ons and the WordPress Plugin Directory. I want to ask whether there is an open-source web system specifically designed for this need - hosting plugins. Not just one plugin, but all plugins for one piece of software. Ideally it should allow developers to upload a plugin, keep a public version for each update, and let client software check which plugins are available through the directory's API. As an example, a CMS could have one such directory to host/display theme files. Also, is there a better keyword to describe this kind of system?
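    To make the "client software checks through the directory's API" part concrete, here is a hypothetical sketch in C# - the endpoint URL and response shape are invented, since every real directory defines its own API:

        using System;
        using System.Net;

        static class PluginDirectoryClient
        {
            // asks the (imaginary) directory for a plugin's latest published version;
            // an imagined response could be {"slug":"my-theme","latest":"1.4.2"}
            public static string FetchLatestVersionJson(string pluginSlug)
            {
                using (var client = new WebClient())
                {
                    return client.DownloadString(
                        "https://plugins.example.org/api/v1/" + pluginSlug + "/latest");
                }
            }
        }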

    Read the article

< Previous Page | 462 463 464 465 466 467 468 469 470 471 472 473  | Next Page >