Search Results

Search found 59278 results on 2372 pages for 'time estimation'.


  • Launch Invitation: Introducing Oracle WebLogic Server 12c

    - by JuergenKress
    Introducing Oracle WebLogic Server 12c, the #1 application server across conventional and cloud environments. Please join Hasan Rizvi on December 1 as he unveils the next generation of the industry's #1 application server and cornerstone of Oracle's cloud application foundation: Oracle WebLogic Server 12c. Hear, with your fellow IT managers, architects, and developers, how the new release of Oracle WebLogic Server is:

    • Designed to help you seamlessly move into the public or private cloud with an open, standards-based platform
    • Built to drive higher value from your current infrastructure and significantly reduce development time and cost
    • Optimized to run your solutions for Java Platform, Enterprise Edition (Java EE), Oracle Fusion Middleware, and Oracle Fusion Applications
    • Enhanced with transformational platforms and technologies such as Java EE 6, Oracle's Active GridLink for RAC, Oracle Traffic Director, and Oracle Virtual Assembly Builder

    Don't miss this online launch event. Register now.

    Executive Overview: Thurs., December 1, 2011, 10 a.m. PT / 1 p.m. ET. Presented by Hasan Rizvi, Senior Vice President, Product Development, Oracle.

    Today most businesses have the ambition to move to a cloud infrastructure, yet IT needs to maintain and invest in the current infrastructure that supports today's business. With Oracle WebLogic, the #1 app server in the marketplace, we provide you with the best of both worlds. The enhancements in WebLogic 12c provide significant benefits that drive higher value from your current infrastructure while significantly reducing development time and cost. In addition, with WebLogic you are cloud-ready: you can move your existing applications as-is to a high-performance engineered system, Exalogic, and instantly experience performance and scalability improvements that are orders of magnitude higher. A WebLogic-Exalogic combination may provide your private cloud infrastructure. Moreover, you can develop and test your applications on Oracle's recently announced Public Cloud offering, the Java Cloud Service, and seamlessly move them to your on-premise infrastructure for production deployments.

    Developer Deep-Dive: Thurs., December 1, 2011, 11 a.m. PT / 2 p.m. ET. See demos and interact with experts via live chat. Presented by Will Lyons, Director, Oracle WebLogic Server Product Management, Oracle.

    Modern Java development looks very different from even a few years ago. Technology innovation, the ecosystem of tools, and their integration with Java standards are changing how development is done. Cloud computing is causing developers to re-evaluate their development platforms and deployment options, and business users are demanding faster time to market without sacrificing application performance and reliability. Find out in this session how Oracle WebLogic Server 12c enables rapid development of modern, lightweight Java EE 6 applications, and learn how you can leverage the latest development technologies, tools, and standards when deploying to Oracle WebLogic Server across both conventional and cloud environments.

    Don't miss this online launch event. Register now. For regular information, become a member of the WebLogic Partner Community by registering at http://www.oracle.com/partners/goto/wls-emea

    Tags: Hasan Rizvi, Oracle, WebLogic 12c, OPN, WebLogic Community, Jürgen Kress

    Read the article

  • How to recognize special function keys on keyboard

    - by NikolaiDante
    I have a Microsoft Digital Media 3000 Keyboard. None of the function keys or other special keys seem to do anything. What do I need to do to get them working? (At the very least F2, as not having a shortcut to rename a file is driving me mad.) If I run xev and press F2 I get the following output in the terminal:

        KeyPress event, serial 36, synthetic NO, window 0x4800001, root 0x15d, subw 0x0, time 42858728, (674,456), root:(1034,588), state 0x10, keycode 139 (keysym 0xff65, Undo), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False

        KeyRelease event, serial 36, synthetic NO, window 0x4800001, root 0x15d, subw 0x0, time 42858912, (674,456), root:(1034,588), state 0x10, keycode 139 (keysym 0xff65, Undo), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False
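    (Editor's note: the xev output above shows the key generating keycode 139, which is currently bound to the Undo keysym rather than F2. One possible workaround, suggested here as an assumption based purely on that output and not part of the original question, is to rebind the keycode with xmodmap:)

        # rebind keycode 139 (currently Undo) to the F2 keysym
        xmodmap -e 'keycode 139 = F2'

    (To make such a remap persistent, it can typically be placed in ~/.Xmodmap, which many desktop sessions load at login.)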

    Read the article

  • Windows 8: Everything from design, build, and how to sell a Metro style app

    - by Thomas Mason
    For me, there are a lot of similarities between an application developed for Windows Phone and a Metro style app developed for Windows 8. A Windows Phone 7 application (rather than an XNA game) is built in .NET and XAML against a subset of the .NET Framework, and the application has a lifecycle which needs to be conscious of battery life and so is split into foreground/background pieces. The application is sandboxed in terms of its interactions with the local device and is packaged with a manifest which describes those interactions. The app needs to be aware of network connectivity status, and its work on the network is done asynchronously to preserve the user experience. The app is packaged and deployed to a Marketplace where the user browses to find the app, read reviews, perhaps purchase it, and then install it and receive updates over time.

    Quite a lot of those statements are as true of a Windows 8 Metro style app as they are of a Windows Phone app, so a Windows Phone app developer already has a good head start when it comes to building Metro style apps for Windows 8. With that in mind, there is an event to help developers with a Windows Phone app in Marketplace begin the process of looking at Windows 8 and whether they can get a quick win by bringing their Phone application onto Windows. The idea of the event is to provide a space where developers can get together over two days and take the time out to look at what it means to take their app from Windows Phone to Windows 8.

    Kicking off on Saturday 16th June at 10am, we are told there will be plenty of power sockets, WiFi, whiteboards, drinks, pizza, games, prizes and some quiet space that you can work in, including people on hand with Windows Phone and Windows 8 experience to help everything along the way. There will be an attendee-voted schedule of talks, but these will be kept out of your way if you just want to get on and code. There will also be information about submitting your app to the Windows Store.

    If you have a Windows Phone app in Marketplace, now's a great time to look at getting it onto Windows 8. Sign up. Bring your laptop. Bring your app. Bring Windows 8 and Visual Studio 11. And everyone will do their best to help you get your app onto Windows 8.

    Location & venue: TBA, but it will be in central London, accessible by major railway and underground transportation. Day 1: Saturday 16th June, 10am – 9pm. Day 2: Sunday 17th June, 10am – 4pm.

    Read the article

  • GDD-BR 2010 [2E] Building Business Apps using Google Web Toolkit and Spring Roo

    GDD-BR 2010 [2E] Building Business Apps using Google Web Toolkit and Spring Roo. Speaker: Chris Ramsdale. Track: Cloud Computing. Time: 14:40 - 15:25. Room: sala[2]. Level: 201. Who says you can't build rich web apps for your business? Follow along in this session to learn how you can use the latest integrated set of tools from Google and VMware to take your internal business apps into the cloud. We'll cover how to get started using GWT with Spring Roo and SpringSource Tool Suite (STS), as well as the new data presentation widgets and MVP framework that will be available in the 2.1 release of GWT. From: GoogleDevelopers (video, 45:56)

    Read the article

  • C#: Why Decorate When You Can Intercept

    - by James Michael Hare
    We've all heard of the old Decorator design pattern (here) or used it at one time or another, either directly or indirectly. A decorator is a class that wraps a given abstract class or interface and presents the same (or a superset) public interface, but "decorated" with additional functionality. As a really simplistic example, consider System.IO.BufferedStream: it is itself a descendant of System.IO.Stream and wraps the given stream with buffering logic while still presenting System.IO.Stream's public interface:

        Stream buffStream = new BufferedStream(rawStream);

    Now, let's take a look at a custom-code example. Let's say that we have a class in our data access layer that retrieves a list of products from a database:

        // a class that handles our CRUD operations for products
        public class ProductDao
        {
            ...

            // a method that would retrieve all available products
            public IEnumerable<Product> GetAvailableProducts()
            {
                var results = new List<Product>();

                // must create the connection
                using (var con = _factory.CreateConnection())
                {
                    con.ConnectionString = _productsConnectionString;
                    con.Open();

                    // create the command
                    using (var cmd = _factory.CreateCommand())
                    {
                        cmd.Connection = con;
                        cmd.CommandText = _getAllProductsStoredProc;
                        cmd.CommandType = CommandType.StoredProcedure;

                        // get a reader and pass back all results
                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                results.Add(new Product
                                {
                                    Name = reader["product_name"].ToString(),
                                    ...
                                });
                            }
                        }
                    }
                }

                return results;
            }
        }

    Yes, you could use EF or any of a myriad of other choices for this sort of thing, but the germane point is that you have some operation that takes a non-trivial amount of time. What if, during the production day, I notice that my application is performing slowly and I want to see how much of that slowness is in the query versus my code? Well, I could easily wrap the logic block in a System.Diagnostics.Stopwatch and log the results to log4net or another logging flavor of choice:

        // a class that handles our CRUD operations for products
        public class ProductDao
        {
            private static readonly ILog _log = LogManager.GetLogger(typeof(ProductDao));
            ...

            // a method that would retrieve all available products
            public IEnumerable<Product> GetAvailableProducts()
            {
                var results = new List<Product>();
                var timer = Stopwatch.StartNew();

                // must create the connection
                using (var con = _factory.CreateConnection())
                {
                    con.ConnectionString = _productsConnectionString;

                    // and all that other DB code...
                    ...
                }

                timer.Stop();

                if (timer.ElapsedMilliseconds > 5000)
                {
                    _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms",
                        timer.ElapsedMilliseconds);
                }

                return results;
            }
        }

    In my eye, this is very ugly. It violates the Single Responsibility Principle (SRP), which says that a class should only ever have one responsibility, where responsibility is often defined as a reason to change.
    This class (and in particular this method) has two reasons to change: if the method of retrieving products changes, and if the method of logging changes. Well, we could "simplify" this using the Decorator design pattern (here). If we followed the pattern to the letter, we'd need to create a base decorator that implements the DAO's public interface and forwards to the wrapped instance. So let's assume we break out the ProductDao interface into IProductDao using your refactoring tool of choice (Resharper is great for this). Now ProductDao will implement IProductDao and get rid of all logging logic:

        public class ProductDao : IProductDao
        {
            // this reverts back to the original version except for the interface added
        }

    And we create the base decorator that also implements the interface and forwards all calls:

        public class ProductDaoDecorator : IProductDao
        {
            private readonly IProductDao _wrappedDao;

            // constructor takes the dao to wrap
            public ProductDaoDecorator(IProductDao wrappedDao)
            {
                _wrappedDao = wrappedDao;
            }

            ...

            // and then all methods just forward their calls
            public IEnumerable<Product> GetAvailableProducts()
            {
                return _wrappedDao.GetAvailableProducts();
            }
        }

    This defines our base decorator. Then we can create decorators that add the items of interest, and for any methods we don't decorate we get the default behavior, which just forwards the call to the wrapped instance via the base decorator:

        public class TimedThresholdProductDaoDecorator : ProductDaoDecorator
        {
            private static readonly ILog _log = LogManager.GetLogger(typeof(TimedThresholdProductDaoDecorator));

            public TimedThresholdProductDaoDecorator(IProductDao wrappedDao) :
                base(wrappedDao)
            {
            }

            ...

            public IEnumerable<Product> GetAvailableProducts()
            {
                var timer = Stopwatch.StartNew();

                var results = _wrappedDao.GetAvailableProducts();

                timer.Stop();

                if (timer.ElapsedMilliseconds > 5000)
                {
                    _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms",
                        timer.ElapsedMilliseconds);
                }

                return results;
            }
        }

    Well, it's a bit better. Now the logging is in its own class, and the database logic is in its own class. But we've essentially multiplied the number of classes: we now have three classes and one interface! Now if you want to do that same logging decoration on all your DAOs, imagine the code bloat! Sure, you can simplify and avoid creating the base decorator, or chuck it all and just inherit directly. But regardless, all of these have the problem of tying the logging logic into the code itself.

    Enter the interceptors. Things like this are, to me, a perfect example of when it's good to write an interceptor using your class library of choice. Sure, you could design your own perfectly generic decorator with delegates and all that, but personally I'm a big fan of Castle's DynamicProxy (here), which is actually used by many projects including Moq.
    What DynamicProxy allows you to do is intercept calls into any object by wrapping it on the fly with a proxy that intercepts the method and allows you to add functionality. Essentially, the code would now look like this using DynamicProxy:

        // Note: I like hiding DynamicProxy behind the scenes so users
        // don't have to explicitly add a reference to Castle's libraries.
        public static class TimeThresholdInterceptor
        {
            // Our logging handle
            private static readonly ILog _log = LogManager.GetLogger(typeof(TimeThresholdInterceptor));

            // Handle to Castle's proxy generator
            private static readonly ProxyGenerator _generator = new ProxyGenerator();

            // generic form for those who prefer it
            public static TInterface Create<TInterface>(object target, TimeSpan threshold)
            {
                return (TInterface)Create(typeof(TInterface), target, threshold);
            }

            // form that uses a Type instead
            public static object Create(Type interfaceType, object target, TimeSpan threshold)
            {
                return _generator.CreateInterfaceProxyWithTarget(interfaceType, target,
                    new TimedThreshold(threshold));
            }

            // The interceptor that is created to intercept the interface calls.
            // Hidden as a private inner class so as not to expose Castle libraries.
            private class TimedThreshold : IInterceptor
            {
                // The threshold as a positive timespan that triggers a log message.
                private readonly TimeSpan _threshold;

                // interceptor constructor
                public TimedThreshold(TimeSpan threshold)
                {
                    _threshold = threshold;
                }

                // Intercept functor for each method invocation
                public void Intercept(IInvocation invocation)
                {
                    // time the method invocation
                    var timer = Stopwatch.StartNew();

                    // the Castle magic that tells the method to go ahead
                    invocation.Proceed();

                    timer.Stop();

                    // check if threshold is exceeded
                    if (timer.Elapsed > _threshold)
                    {
                        _log.WarnFormat("Long execution in {0} took {1} ms",
                            invocation.Method.Name,
                            timer.ElapsedMilliseconds);
                    }
                }
            }
        }

    Yes, it's a bit longer, but notice that this class deals ONLY with logging long method calls, with no DAO interface leftovers, and that it can be used to time ANY class that has an interface or virtual methods. Personally, I like to wrap and hide the usage of DynamicProxy and IInterceptor so that anyone who uses this class doesn't need to know to add a Castle library reference. As far as they are concerned, they're using my interceptor. If I change to a new library because a better one comes along, they're insulated.

    Now, all we have to do to use this is to tell it to wrap our ProductDao and it does the rest:

        // wraps a new ProductDao with a timing interceptor with a threshold of 5 seconds
        IProductDao dao = TimeThresholdInterceptor.Create<IProductDao>(new ProductDao(), TimeSpan.FromSeconds(5));

    Automatic decoration of all methods! You can even refine the proxy so that it only intercepts certain methods. This is ideal for so many things. These are just some of the interceptors we've dreamed up and use:

    • Log parameters and returns of methods to XML for auditing.
    • Block invocations to methods and return a default value (stubbing); a sketch of this idea follows below.
    • Throw an exception if certain methods are called (good for blocking access to deprecated methods).
    • Log entrance and exit of a method and the duration.
    • Log a message if a method takes more than a given time threshold to execute.

    Whether you use DynamicProxy or some other technology, I hope you see the benefits this adds.
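    (Editor's note: as a minimal sketch of the stubbing idea mentioned in the list above, and not code from the original article, an interceptor that blocks the real call and returns a default value might look like the following; it assumes the same Castle DynamicProxy IInterceptor interface used earlier:)

        // Hypothetical stubbing interceptor: blocks the real call and returns a default value.
        // Would live as another private inner class alongside TimedThreshold.
        private class StubbingInterceptor : IInterceptor
        {
            public void Intercept(IInvocation invocation)
            {
                // Deliberately skip invocation.Proceed() so the target is never called,
                // then hand back the default value for the method's return type.
                var returnType = invocation.Method.ReturnType;
                invocation.ReturnValue = returnType.IsValueType && returnType != typeof(void)
                    ? Activator.CreateInstance(returnType)
                    : null;
            }
        }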
Does it completely eliminate all need for the Decorator pattern? No, there may still be cases where you want to decorate a particular class with functionality that doesn't apply to the world at large. But for all those cases where you are using Decorator to add functionality that's truly generic, I strongly suggest you give this a try!

    Read the article

  • higher cpu usage in ubuntu 12.04 than windows 7

    - by Medya
    Hi, I have an Intel Core i5 with 6GB of RAM running Ubuntu 12.04 64-bit. I noticed that whenever I run Chrome (which is the fastest browser for me in Linux) and watch YouTube, the CPU usage for Chrome is at least 13% and sometimes even 30%. But in Windows 7 the same thing (YouTube in Chrome) rarely uses more than 6% of my CPU. I also notice my laptop is very hot in Ubuntu 12.04 and the fan is working all the time, while in Windows the laptop is silent, the fan doesn't make much noise, and it's not as warm as in Linux. Is it like that for everyone, or is it just me?

    Read the article

  • E-Business Suite - NLS and MLS

    - by Robert Story
    Upcoming Webcasts. Title: E-Business Suite - NLS and MLS. Date: April 28, 2010. Time: 1:00 pm EDT, 10:00 am PDT, 07:00 pm CET, 06:00 pm UK. Click here to register for this session. Date: April 28, 2010. Time: 06:00 pm Australia, 5:00 pm Japan, 1:30 pm India, 10:00 am CET, 09:00 am UK. Click here to register for this session. Product Family: EBS Translations.

    Summary: This 1.5 hour session is recommended for technical and functional users who are interested in getting a generic overview of the NLS and MLS implementation of the E-Business Suite. Topics will include: an introduction to NLS and MLS, the translation synchronization patch, and known issues. A short, live demonstration (only if applicable) and a question and answer period will be included.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Tellago announces SQL Server 2008 R2 BI quick adoption programs

    - by Vishal
    During the last year, we (Tellago) have been involved in various business intelligence initiatives that leverage some emerging BI techniques such as self-service BI and complex event processing (CEP). Specifically, in the last few months, we have partnered with Microsoft to deliver a series of events across the country where we present the different technologies of the SQL Server 2008 R2 BI stack, such as PowerPivot, StreamInsight, Ad-Hoc Reporting and Master Data Services. As part of those events, we try to go beyond the traditional technology presentation and provide a series of best practices and lessons we have learned on real-world BI projects that leverage these technologies.

    Now that SQL Server 2008 R2 has been released to manufacturing, we have launched a series of quick adoption programs designed to help customers understand how they can embrace the newest additions to Microsoft's BI stack as part of their IT initiatives. The programs are also designed to help customers understand how the new SQL Server features interact with established technologies such as SQL Server Analysis Services or SQL Server Integration Services. We try to keep these adoption programs very practical by doing a lot of prototyping and design sessions that give our customers a practical glimpse of the capabilities of the technologies and how they can fit in their enterprise architecture roadmap.

    Here is our official announcement (you can blame my business partner, BI enthusiast, and Tellago's CEO Elizabeth Redding for the marketing pitch ;)):

    Tellago Marks Microsoft's SQL Server 2008 R2 Launch With Business Intelligence Quick Adoption Program. Microsoft launched SQL Server 2008 R2 last week, which delivers several breakthrough business intelligence (BI) capabilities that enable organizations to:

    • Efficiently process, analyze and mine data
    • Improve IT and developer efficiency
    • Enable highly scalable and well-managed business intelligence on a self-service basis for business users

    The release offers a new feature called PowerPivot, which enables self-service BI by connecting business users directly to enterprise data sources and providing improved reporting and analytics. The release also offers Master Data Management, which helps enterprises centrally manage critical data assets company-wide and across diverse systems, enabling increased integrity of information over time. Finally, the release includes StreamInsight, a framework for implementing complex event processing (CEP) applications on the Microsoft platform. With StreamInsight, IT organizations can implement the infrastructure to process a large volume of events in near real time, execute continuous queries against event streams and enable real-time business intelligence.

    As a thought leader in the business intelligence community, Tellago has recognized the occasion by launching a series of quick adoption programs to enable the adoption of this new BI technology stack in your enterprise. Our quick adoption programs are designed to help you:

    • Brainstorm BI solution options
    • Architect initial infrastructure components
    • Prototype key features of a solution

    As a 2-3 day program, our approach is more efficient and cost-effective than a traditional proof of concept because it allows you to understand the new SQL Server 2008 R2 feature set while seeing directly how you can leverage it for your business intelligence needs.

    If you are interested in learning more about the BI capabilities of Microsoft's business intelligence stack, including SQL Server 2008 R2, we can help. As industry experts and software content advisers to Microsoft, Tellago is the place where ideas meet technology expertise. Let us help you see for yourself the advantages that you can gain from Microsoft's SQL Server 2008 R2. Email or call for more information - [email protected] or 847-925-2399.

    Read the article

  • How to set wifi driver settings to prefer 5 GHz channel above 2.4 GHz

    - by Wouter
    Currently I'm in a new building at my university. In this building my wifi often breaks down and then restores the connection again. This is really irritating since it happens a lot. Now, as a coincidence, there were some tech guys running around here asking everyone if the wifi was doing fine. I told them that my wifi drops all the time and then reconnects. They figured out that my wifi is switching all the time between the 2.4 GHz channel and the 5 GHz channel. They asked me if I could access the driver settings of my wireless card. Unfortunately I don't know how to do this in either Linux or Windows. And unfortunately again, they only knew the Windows solution xD. So I hope somebody can tell me how to tell my wifi that it should stay on the 5 GHz network and not disconnect and switch to the 2.4 GHz channel?

    Read the article

  • How to become a solid python web developer [closed]

    - by Estarius
    Possible Duplicate: How do I learn Python from zero to web development?

    I started Python recently with the goal of becoming a solid developer and eventually building a web application. However, as time goes by I wonder whether I am being optimal about how I will achieve my goal. I would compare it to a game, for example: to get better you must spend time playing and trying new things. However, if you just log in and sit in the lobby chatting, you are most likely not progressing. So far, this is my plan (feel free to comment on or judge it):

    1. Review basic programming concepts
    2. Start coding slowly in Python
    3. Once comfortable in Python, learn about web development in Python
    4. Learn about those things we hear about: SQLAlchemy, MVC, TDD, Git, Agile (group project)

    To achieve these things, I started the Learn Python the Hard Way exercises, which I am doing at a rate of 5 per day. I also started to read Think Python at the same time, and I am planning to move on to Dive Into Python. As far as my research goes, these, along with the Python documentation, are usually what is most recommended for learning Python. I consider this enough to get points 1 and 2 done.

    While learning Python is really great, my goal remains to do quality web development. I know there are books about Django etc., however I would like to become comfortable with any Python web development. This means without a framework and with a framework... any framework, and then being able to choose the one which best fits our needs. For this I would like to know if some people have suggestions. Should I just get a book on Django, and will it apply to everything? What would be the best method to go from Python to web Python and not end up creating crappy code which would turn into nightmares for other programmers?

    Then finally, those "things we hear about". While I understand what they all do basically, I am fairly sure that, like everything, there are good and wrong ways of making use of them. Should I go through at least a whole book on each before starting to use them, or keep to their respective online documentation? Is there some kind of documentation which links their use to Python? Also, from looking at Django and Pyramid, they seem to use something other than MVC; while the Django model looks similar, the Pyramid one seems to cut out a whole part of it... Is learning MVC still worth it?

    Sorry for the wall of text. Thanks in advance!

    Read the article

  • How to recreate spfile on Exadata?

    - by Bandari Huang
    Copy the spfile from the ASM diskgroup to local disk by using the ASMCMD command line tool:

        ASMCMD> pwd
        +DATA_DM01/EDWBASE
        ASMCMD> ls -l
        Type           Redund  Striped  Time  Sys  Name
                                              Y    CONTROLFILE/
                                              Y    DATAFILE/
                                              Y    ONLINELOG/
                                              Y    PARAMETERFILE/
                                              Y    TEMPFILE/
                                              N    spfileedwbase.ora => +DATA_DM01/EDWBASE/PARAMETERFILE/spfile.355.800017117
        ASMCMD> cp +DATA_DM01/EDWBASE/spfileedwbase.ora /home/oracle/spfileedwbase.ora.bak

    Copy the contents of spfileedwbase.ora.bak to initedwbase.ora, excluding any garbled characters. Using the resulting initedwbase.ora, start one of the RAC instances to the mount phase:

        SQL> startup mount pfile=/home/oracle/initedwbase.ora

    Ensure one of the database instances is mounted before attempting to recreate the spfile:

        SQL> select INSTANCE_NAME, HOST_NAME, STATUS from v$instance;

        INSTANCE_NAME HOST_NAME  STATUS
        ------------- ---------  ------
        edwbase1      dm01db01   MOUNTED

    Create the new spfile:

        SQL> create spfile='+DATA_DM01/EDWBASE/spfileedwbase.ora' from pfile='/home/oracle/initedwbase.ora';

    ASMCMD will show that a new spfile has been created, as the alias spfileedwbase.ora now points to a new spfile under the PARAMETERFILE directory in ASM:

        ASMCMD> pwd
        +DATA_DM01/EDWBASE
        ASMCMD> ls -l
        Type           Redund  Striped  Time  Sys  Name
                                              Y    CONTROLFILE/
                                              Y    DATAFILE/
                                              Y    ONLINELOG/
                                              Y    PARAMETERFILE/
                                              Y    TEMPFILE/
                                              N    spfileedwbase.ora => +DATA_DM01/EDWBASE/PARAMETERFILE/spfile.356.800013581

    Shut down the instance and restart the database with srvctl, using the newly created spfile:

        SQL> shutdown immediate
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.
        SQL> exit
        [oracle@dm01db01 ~]$ srvctl start database -d edwbase
        [oracle@dm01db01 ~]$ srvctl status database -d edwbase
        Instance edwbase1 is running on node dm01db01
        Instance edwbase2 is running on node dm01db02

    ASMCMD will now show a number of spfiles in the PARAMETERFILE directory for this database. The spfile containing the parameter preventing startups should be removed from ASM. In this case the file spfile.355.800017117 can be removed because spfile.356.800013581 is the current spfile:

        ASMCMD> pwd
        +DATA_DM01/EDWBASE
        ASMCMD> cd PARAMETERFILE
        ASMCMD> ls -l
        Type           Redund  Striped  Time             Sys  Name
        PARAMETERFILE  UNPROT  COARSE   FEB 19 08:00:00  Y    spfile.355.800017117
        PARAMETERFILE  UNPROT  COARSE   FEB 19 08:00:00  Y    spfile.356.800013581
        ASMCMD> rm spfile.355.800017117
        ASMCMD> ls
        spfile.356.800013581

    Reference: Recreating the Spfile for RAC Instances Where the Spfile is Stored in ASM [ID 554120.1]

    Read the article

  • Java 7 update 6 installation fails on Windows 7 when Chrome is default browser

    - by ali1234
    I am configuring a brand new Lenovo U410 system with Windows 7 Home Premium for a user. I received the system direct from the shop. As part of the configuration I installed Java using the online installer, and this worked correctly. Later, due to a mistake I made, I needed to restore the system to factory default. The factory restore FORMATS C:\ and puts back (supposedly) the exact factory configuration. However, after doing this, I was no longer able to install Java successfully using the same method I used before. Now, whenever I attempt to use the online Java installer, the following happens. First of all, a window always appears: "Welcome to Java", "Downloading Java Installer...". After a short time this window disappears and then one of three things happens:

    1. The very first time I do this after a factory reset, I get a Windows error report, which contains this information:

        Application Name: JavaSetup7u5.exe
        Application Version: 7.0.50.6
        Application Timestamp: 4feacd84
        Fault Module Name: JavaIC.dll
        Fault Module Version: 9.9.9.9
        Fault Module Timestamp: 4f2343d6
        Exception Offset: 000052cb
        Exception Code: c0000417
        Exception Data: 00000000
        OS Version: 6.1.7600.2.0.0.768.3
        Locale ID: 1033
        Additional Information 1: 773c
        Additional Information 2: 773cd78cf06816f8246f359fa270f3bb
        Additional Information 3: f51a
        Additional Information 4: f51aaea7d22f36fa9e3a626b5a5cd1c3

    2. Subsequent runs produce this error message: "Error: Java(TM) installer - Downloaded file C:\Users\\AppData\Local\Temp\fx-runtime.exe is corrupt."

    3. Nothing happens at all.

    I believe the second outcome is a red herring. Running the installer again causes a different error because the files were downloaded and the installer crashed before it could clean up. This isn't the actual problem, as when this happens the installer deletes the downloaded files, and then when you run it for the third time, it downloads everything again and does the JavaIC.dll crash. I suspect the downloader is appending to the existing files or something, causing the corruption.

    I have tried all of the above as Administrator and as a normal user. I have tried resetting the system to factory defaults several times. I have tried downloading with Chrome and Internet Explorer 9. I have tried uninstalling all anti-virus software and disabling the Windows firewall entirely. The only thing which makes a difference is running the installer in Windows XP compatibility mode, which allows the installation to complete. I know I can work around this error by using the offline installer, so please don't post that as an answer. I am looking for an explanation of the root cause. Additionally, if I use the offline installer, the updater does not work. The updater also does not work if I install in XP mode. The updater fails because it works by just downloading the newest online setup and running it. Also remember that the installers are digitally signed. The signatures verify correctly, so there is no way in hell that this is caused by corrupted downloads.

    Some theories I have:

    The Java setup files on java.com actually changed between the first successful install and my later attempts. This seems unlikely, as none of the version numbers have changed. However, I have seen a couple of reports of this error which showed up in the past 24 hours. This looks like the most likely explanation right now: http://www.oracle.com/us/corporate/press/1735645 - Oracle released 7 update 6 two days ago. Careful inspection of the installers reveals that they are in fact attempting to download .6, not .5 as the download page claims. (Not actually correct: only the update tool tries to install 7u6; the online installer still tries 7u5. However, 7u6 being released two days ago is too much of a coincidence to ignore. Update: the 7u6 online installer is available from the Oracle Technology Network. It crashes in exactly the same way.)

    The factory reset software uses GMT-8 and I am on GMT-1. As a result, after a factory reset, any software which cares to check would think that the system was restored 7 hours in the future, due to Windows' awful policy of storing local time in the system clock. This could be confusing a certificate check or similar. Update: I discovered that this does cause Windows Update to fail. The workaround, setting the clock back before starting the factory reset, does not enable Java to install correctly.

    The factory reset image isn't really the same as what is installed in the main partition when you buy the system. Naughty Lenovo.

    The installer appears to crash while installing or displaying something to do with the Ask.com toolbar. That seems to be what JavaIC.dll does.

    Microsoft's Patch Tuesday was the 14th. Some update in that could be causing this. However, I'm factory resetting the machine every time, so unless the patches get slipstreamed into the recovery image, or there is some mechanism by which they get silently installed even if updates are disabled, I don't see how this can be the cause.

    Major breakthrough: the default browser on Lenovo systems is Google Chrome. I noticed that the JavaIC.dll "sponsor check" actually checks your default browser in order to decide which sponsor ad to display. Normally that would get you the Ask toolbar on IE9. But that toolbar doesn't work on Chrome, so the installer tries to display a different ad, and the different ad is what causes the crash. Changing the default browser to IE9 allows the installer to run correctly. So this looks like a genuine bug in the sponsor-ad code in the installer, caused by the combination of Google Chrome as default browser and not being in the US. (The installer also checks your location using an IP geolocation service and displays different ads based on that.)

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost the performance of a simple RSS-feed aggregator. I will use only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects the performance of the feed aggregator.

    Feed aggregator

    Our feed aggregator works as follows:

    • Load the list of blogs
    • Download each RSS feed
    • Parse the feed XML
    • Add new posts to the database

    Our feed aggregator is run by a task scheduler every 15 minutes, for example. We will start our journey with a serial implementation of the feed aggregator. The second step is to use task parallelism to parallelize feed downloading and parsing. Our last step is to use data parallelism to parallelize the database operations. We will use the Stopwatch class to measure how much time it takes for the aggregator to download and insert all posts from all registered blogs. After every run we empty the posts table in the database.

    Serial aggregation

    Before doing parallel stuff, let's take a look at the serial implementation of the feed aggregator. All tasks happen one after another.

        internal class FeedClient
        {
            private readonly INewsService _newsService;
            private const int FeedItemContentMaxLength = 255;

            public FeedClient()
            {
                ObjectFactory.Initialize(container =>
                {
                    container.PullConfigurationFromAppConfig = true;
                });

                _newsService = ObjectFactory.GetInstance<INewsService>();
            }

            public void Execute()
            {
                var blogs = _newsService.ListPublishedBlogs();

                for (var index = 0; index < blogs.Count; index++)
                {
                    ImportFeed(blogs[index]);
                }
            }

            private void ImportFeed(BlogDto blog)
            {
                if (blog == null)
                    return;
                if (string.IsNullOrEmpty(blog.RssUrl))
                    return;

                var uri = new Uri(blog.RssUrl);
                SyndicationContentFormat feedFormat;

                feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

                if (feedFormat == SyndicationContentFormat.Rss)
                    ImportRssFeed(blog);
                if (feedFormat == SyndicationContentFormat.Atom)
                    ImportAtomFeed(blog);
            }

            private void ImportRssFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = RssFeed.Create(uri);

                foreach (var item in feed.Channel.Items)
                {
                    SaveRssFeedItem(item, blog.Id, blog.CreatedById);
                }
            }

            private void ImportAtomFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = AtomFeed.Create(uri);

                foreach (var item in feed.Entries)
                {
                    SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
                }
            }
        }

    The serial implementation of the feed aggregator downloads and inserts all posts in 25.46 seconds.

    Task parallelism

    Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from the MSDN page Task Parallelism (Task Parallel Library) and the Wikipedia page Task parallelism. Although finding the parts of code that can run safely in parallel without synchronization issues is not an easy task, we are lucky this time. Feed import and parsing is a perfect candidate for parallel tasks. We can safely parallelize the feed import because the importing tasks don't share any resources and therefore need no synchronization.
    After getting the list of blogs we iterate through the collection and start a new TPL task for each blog feed.

        internal class FeedClient
        {
            private readonly INewsService _newsService;
            private const int FeedItemContentMaxLength = 255;

            public FeedClient()
            {
                ObjectFactory.Initialize(container =>
                {
                    container.PullConfigurationFromAppConfig = true;
                });

                _newsService = ObjectFactory.GetInstance<INewsService>();
            }

            public void Execute()
            {
                var blogs = _newsService.ListPublishedBlogs();
                var tasks = new Task[blogs.Count];

                for (var index = 0; index < blogs.Count; index++)
                {
                    tasks[index] = new Task(ImportFeed, blogs[index]);
                    tasks[index].Start();
                }

                Task.WaitAll(tasks);
            }

            private void ImportFeed(object blogObject)
            {
                if (blogObject == null)
                    return;
                var blog = (BlogDto)blogObject;
                if (string.IsNullOrEmpty(blog.RssUrl))
                    return;

                var uri = new Uri(blog.RssUrl);
                SyndicationContentFormat feedFormat;

                feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

                if (feedFormat == SyndicationContentFormat.Rss)
                    ImportRssFeed(blog);
                if (feedFormat == SyndicationContentFormat.Atom)
                    ImportAtomFeed(blog);
            }

            private void ImportRssFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = RssFeed.Create(uri);

                foreach (var item in feed.Channel.Items)
                {
                    SaveRssFeedItem(item, blog.Id, blog.CreatedById);
                }
            }

            private void ImportAtomFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = AtomFeed.Create(uri);

                foreach (var item in feed.Entries)
                {
                    SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
                }
            }
        }

    You should notice the first signs of the power of TPL: we made only minor changes to our code to parallelize blog feed aggregation. On my machine this modification gives a noticeable performance boost - the time is now 17.57 seconds.

    Data parallelism

    There is one more way to parallelize activities. The previous section introduced task (or operation) based parallelism; this section introduces data based parallelism. Per the MSDN page Data Parallelism (Task Parallel Library), data parallelism refers to scenarios in which the same operation is performed concurrently on the elements of a source collection or array. In our code we have independent collections we can process in parallel - the imported feed entries. As checking for a feed entry's existence and inserting it if it is missing from the database doesn't affect other entries, the imported feed entries collection is an ideal candidate for parallelization.
        internal class FeedClient
        {
            private readonly INewsService _newsService;
            private const int FeedItemContentMaxLength = 255;

            public FeedClient()
            {
                ObjectFactory.Initialize(container =>
                {
                    container.PullConfigurationFromAppConfig = true;
                });

                _newsService = ObjectFactory.GetInstance<INewsService>();
            }

            public void Execute()
            {
                var blogs = _newsService.ListPublishedBlogs();
                var tasks = new Task[blogs.Count];

                for (var index = 0; index < blogs.Count; index++)
                {
                    tasks[index] = new Task(ImportFeed, blogs[index]);
                    tasks[index].Start();
                }

                Task.WaitAll(tasks);
            }

            private void ImportFeed(object blogObject)
            {
                if (blogObject == null)
                    return;
                var blog = (BlogDto)blogObject;
                if (string.IsNullOrEmpty(blog.RssUrl))
                    return;

                var uri = new Uri(blog.RssUrl);
                SyndicationContentFormat feedFormat;

                feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

                if (feedFormat == SyndicationContentFormat.Rss)
                    ImportRssFeed(blog);
                if (feedFormat == SyndicationContentFormat.Atom)
                    ImportAtomFeed(blog);
            }

            private void ImportRssFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = RssFeed.Create(uri);

                feed.Channel.Items.AsParallel().ForAll(a =>
                {
                    SaveRssFeedItem(a, blog.Id, blog.CreatedById);
                });
            }

            private void ImportAtomFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = AtomFeed.Create(uri);

                feed.Entries.AsParallel().ForAll(a =>
                {
                    SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);
                });
            }
        }

    We made a small change again, and as a result we parallelized the checking and saving of feed items. This change was data-centric, as we applied the same operation to all elements in a collection. On my machine I got better performance again: the time is now 11.22 seconds.

    Results

    Summarizing the measurement results (numbers are given in seconds): serial - 25.46, task parallelism - 17.57, task plus data parallelism - 11.22. As we can see, with task parallelism feed aggregation takes about 25% less time than in the original case. When adding data parallelism to task parallelism, our aggregation takes about 2.3 times less time than in the original case.

    More about TPL and PLINQ

    Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code that can safely go parallel, and even then you have to measure the effects of parallel processing to find out if the parallel code performs better. If you are not careful, the troubles you face later are worse than the ones you have seen before (imagine an error that occurs on average only once per 10,000 runs). Parallel programming is something that is hard to ignore; effective programs are able to use the multiple cores of modern processors. Using TPL you can also set the degree of parallelism, so your application doesn't use all the computing cores and leaves one or more of them free for the host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In the next major version, all .NET languages will have built-in support for parallel programming, and there will also be new language constructs that support parallel programming.
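    (Editor's note: as a hedged illustration of the degree-of-parallelism point above, and not code from the original post, capping the number of cores might look like the following; the ProcessorCount - 1 cap is an assumption about what "leaving one core free" could mean, and the blogs/feed/blog names are reused from the listings above:)

        // Sketch: cap parallelism so at least one core stays free for the host system.
        var maxDegree = Math.Max(1, Environment.ProcessorCount - 1);

        // TPL: Parallel.ForEach accepts a ParallelOptions with MaxDegreeOfParallelism.
        var options = new ParallelOptions { MaxDegreeOfParallelism = maxDegree };
        Parallel.ForEach(blogs, options, blog => ImportFeed(blog));

        // PLINQ: the equivalent knob is WithDegreeOfParallelism.
        feed.Channel.Items
            .AsParallel()
            .WithDegreeOfParallelism(maxDegree)
            .ForAll(item => SaveRssFeedItem(item, blog.Id, blog.CreatedById));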
    Currently you can download Visual Studio Async to get some idea of what is coming.

    Conclusion

    Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it much easier for us. In this posting we started with a feed aggregator that imports feed items in serial mode. In two steps we parallelized feed importing and entry inserting, gaining a 2.3x improvement in performance. Although this number is specific to my test environment, it shows clearly that parallel programming can raise the performance of your application significantly.

    Read the article

  • Google I/O 2012 - Designing for the Other Half: Sexy Isn't Always Pink

    Google I/O 2012 - Designing for the Other Half: Sexy Isn't Always Pink. Leah Busque, Sepideh Nasiri, Jess Lee, Tracy Chou, Margaret Wallace. Women control 80 percent of consumer spending and drive the majority of user activity on many of the largest social networks. Female gamers over 55 spend the most time online gaming of any demographic. Are you thinking about how your product or business is attracting and engaging women? Hear from our panel on the technologies winning over female users that aren't so pink. From: GoogleDevelopers (video, 59:33)

    Read the article

  • Score Awesome Games on the Cheap with Humble Indie Bundle 6

    - by Jason Fitzpatrick
    It's Humble Indie Bundle time of year again: score six great games at a name-your-own price, including the acclaimed action-RPG Torchlight. The Humble Indie Bundle combines games from independent development houses into a big promotional pack where gamers can name their own price and choose how much of that price goes to the developers or to gaming-related charities. Included in this bundle are: Dustforce, Rochard, Shatter, S.P.A.Z., Torchlight, and Vessel. Torchlight 2, the follow-up to the wildly popular Torchlight, is set for release in a scant two days - now is the perfect time to pick up a copy of Torchlight on the cheap and get yourself up to speed. The Humble Indie Bundle is cross-platform and DRM-free. Grab a copy and enjoy it on your Windows, Mac, or Linux machine without any registration hassles. The Humble Indie Bundle 6

    Read the article

  • Exam 70-541 - TS: Microsoft Windows SharePoint Services 3.0 - Application Development

    - by DigiMortal
    Today I passed Microsoft exam 70-541: Microsoft Windows SharePoint Services 3.0 - Application Development. This exam gives you the MCTS certificate. In this posting I will talk about the exam and also give some suggestions about books to read when preparing for it.

    About the exam

    This exam was a good one, I think. The questions were not hard, but also not too easy - just enough to make sure you really know what you are doing when working with SharePoint, or at least that you know how things work. After a couple of years of active SharePoint coding, this exam needs no additional preparation. The questions covered very different topics, like alerts, features, web parts, site definitions, event receivers, workflows, web services and deployments. There are 59 questions in the exam (this information is available on the internet) and you have a little more than two hours. It took me about 40 minutes to answer and review the questions. I strongly suggest you study the parts of WSS 3.0 you don't know yet and write some code to find out how to use these things through the SharePoint API.

    Good reading

    For people with less experience there are some good books to suggest. Take one or both of these books, because there are no official study materials or training kits available for this exam. One of my colleagues, who is less experienced than me, suggested Inside Microsoft Windows SharePoint Services 3.0 by Ted Pattison and Daniel Larson. He told me that he found this book the most useful for passing this exam. When I started with SharePoint Services 3.0, my first book was Developer's Guide To The Windows SharePoint Services v3 Platform by Todd C. Bleeker. It helped me get started, and later it was my main handbook for some time. Of course, there are many other good books, and I suggest you take what you can find. Before buying something, I suggest you discuss it with people who have read the book before - and make sure you mention that you are preparing for the exam.

    Conclusion

    If you are an experienced SharePoint developer, this exam needs no preparation. Okay, some preparation is always good, but if you don't have time you are still able to pass. If you are not an experienced SharePoint developer, then study before taking this exam - it is not easy stuff for novices. But if you pass this exam you can proudly say: yes, I know something about SharePoint! :)

    Read the article

  • What should you bring to the table as a Software Architect?

    - by Ahmad Mageed
    There have been many questions with good answers about the role of a Software Architect (SA) on Stack Overflow and Programmers SE. I am trying to ask a slightly more focused question than those. The very definition of an SA is broad, so for the sake of this question let's define an SA as follows: a Software Architect guides the overall design of a project, gets involved with coding efforts, conducts code reviews, and selects the technologies to be used. In other words, I am not talking about managerial rest and vest at the crest (further rhyming words elided) types of SAs. If I were to pursue any type of SA position, I don't want to be away from coding. I might sacrifice some time to interface with clients and business analysts etc., but I am still technically involved and not just aware of what's going on through meetings.

    With these points in mind, what should an SA bring to the table? Should they come in with a mentality of "laying down the law" (so to speak) and enforcing the usage of certain tools to fit "their way", i.e., coding guidelines, source control, patterns, UML documentation, etc.? Or should they specify initial direction and strategy, then be laid back and jump in as needed to correct the ship's direction? Depending on the organization this might not work. An SA who relies on TFS to enforce everything may struggle to implement their plan at an employer that only uses StarTeam. Similarly, an SA needs to be flexible depending on the stage of the project: on a fresh project they have more choices, whereas they might have fewer for existing projects.

    Here are some SA stories I have experienced, as a way of sharing some background, in the hope that answers to my questions might also shed some light on these issues:

    I've worked with an SA who code reviewed literally every single line of the team's code. The SA would do this not just for our project but for other projects in the organization (imagine the time spent on this). At first it was useful for enforcing certain standards, but later it became crippling. FxCop was how the SA would find issues. Don't get me wrong: it was a good way to teach junior developers and force them to think about the consequences of their chosen approach, but for senior developers it was seen as somewhat draconian.

    One particular SA was against the use of a certain library, claiming it was slow. This forced us to write tons of code to achieve things differently, while the other library would've saved us a lot of time. Fast forward to the last month of the project, and the clients were complaining about performance. The only solution was to change certain functionality to use the originally ignored approach, despite early warnings from the devs. By that point a lot of code was thrown out and not reusable, leading to overtime and stress. Sadly, the estimates used for the project were based on the old approach, which my project was forbidden from using, so they weren't an appropriate basis for estimation. I would hear the PM say "we've done this before", when in reality they had not, since we were using a new library and the devs working on it were not the same devs used on the old project.

    Another SA would enforce the usage of DTOs, DOs, BOs, service layers and so on for all projects. New devs had to learn this architecture, and the SA adamantly enforced the usage guidelines; exceptions were made only when it was absolutely difficult to follow them. The SA was set in their approach: classes for DTOs and all CRUD operations were generated via CodeSmith, and database schemas were another similar ball of wax. Having used this setup everywhere, the SA was not open to new technologies such as LINQ to SQL or Entity Framework.

    I am not using this post as a platform for venting. There were positive and negative aspects to my experiences with the SA stories mentioned above. My questions boil down to: What should an SA bring to the table? How can they strike a balance in their decision making? Should one approach an SA job (as defined earlier) with the mentality that they must enforce certain ground rules? Anything else to consider? Thanks! I'm sure these job tasks easily extend to senior devs or technical leads, so feel free to answer in that capacity as well.

    Read the article

  • TechEd 2012: Day 3 &ndash; Morning TFS

    - by Tim Murphy
    My morning sessions for day three were dominated by Team Foundation Server. This has been a hot topic for our clients lately, so the topic really struck a chord.

    The speaker for the first session was from Boeing. It was nice to hear how a company mixes both agile and waterfall project management. The approaches that he presented were very pragmatic. For their needs, reporting is the crucial part of their decision to use TFS. This was interesting, since reporting is probably the last aspect that most shops would think about. The challenge of getting users to adopt TFS was brought up by the audience. As with the other discussion points, he took a very level-headed stance: the approach he prescribed was to eat the elephant a bite at a time instead of all at once. If you try to convert your entire shop at once, the culture shock will most likely kill the effort. Another key point he reminded us of is that you need to make sure that standards and compliance are taken into account when you set up TFS. If you don't implement the tool and the processes around it in a way that complies with the standards bodies that govern your business, you are in for a world of hurt. Ultimately, the reason they chose TFS was that it was the first tool that incorporated all the ALM features they needed, and it reduced licensing cost compared with all the different tools they would otherwise need to buy to complete the same tasks. They got to this point by doing an industry evaluation. Although TFS came out on top, he said that it still has a big gap in the Java area. Of course, in this market there are vendors helping to close that gap.

    The second session was on how continuous feedback in agile is a new focus in VS2012. The problems they intended to address included cycle time, average time to repair, and root cause analysis. The speakers fired features at us as if from a machine gun. I will just say that I am looking forward to digging into the product after seeing this presentation. Beyond that, I will simply list some of the key features that caught my attention:

    • The ability to link documents into tasks as artifacts
    • A web access portal
    • PowerPoint storyboards
    • Exploratory testing
    • Request feedback (allows users to record notes, screenshots and video/audio)

    See you after the second half.

    Tags: TechEd, TechEd 2012, TFS, Team Foundation Server

    Read the article

  • GDD-BR 2010 [2F] Storage, BigQuery and Prediction APIs

    GDD-BR 2010 [2F] Storage, BigQuery and Prediction APIs. Speaker: Patrick Chanezon. Track: Cloud Computing. Time slot: F [15:30 - 16:15]. Room: 2. Level: 101. Google is expanding its storage products by introducing Google Storage for Developers. It offers a RESTful API for storing and accessing data at Google. Developers can take advantage of the performance and reliability of Google's storage infrastructure, as well as its advanced security and sharing capabilities. We will demonstrate key functionality of the product as well as customer use cases. Google relies heavily on data analysis and has developed many tools to understand large datasets. Two of these tools are now available to developers on a limited sign-up basis: (1) BigQuery, for interactive analysis of very large data sets, and (2) the Prediction API, for making informed predictions from your data. We will demonstrate their use and give instructions on how to get access. From: GoogleDevelopers. Time: 39:27.
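As a rough illustration of the RESTful style of the storage API mentioned above, the sketch below performs a plain HTTP GET for an object. It assumes the storage.googleapis.com endpoint and an OAuth access token supplied via an environment variable; the bucket and object names are hypothetical, and the exact endpoint and auth scheme should be checked against the current documentation:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class StorageGetExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical bucket and object; substitute your own.
            URL url = new URL("https://storage.googleapis.com/my-bucket/my-object.txt");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Authenticate with a previously obtained OAuth access token.
            conn.setRequestProperty("Authorization",
                    "Bearer " + System.getenv("ACCESS_TOKEN"));
            // Read the object contents from the response body.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }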

    Read the article

  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best contact centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness, and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service. Today there are multiple channels through which customers can communicate with you: phone, web, chat and social, to name a few. But regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer, and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimise the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference.

Contextual Workspaces: the screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if/then/else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, different levels of agent, from novice to the most experienced, get a screen that is relevant to their role and responsibilities and that ensures the job is done quickly and efficiently the first time round.

Agent Scripting: sometimes agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents capture the right level of information and guides them step by step, ensuring no mistakes, inconsistencies or missed opportunities.

Guided Assistance: this is typically used to solve common troubleshooting issues, displaying a series of question-and-answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides for quickly crafting chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves, without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert.

Desktop Workflow: take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you design workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes.
This means your agents can save time and provide a better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you: win, win, win!

I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort.

Five top tips to take away:
1. Start by working out “why” and “how” customers are contacting you.
2. Implement a clean and relevant agent desktop to support your agents.
3. If your workspaces are getting complicated, consider using Desktop Workflow to streamline the interaction.
4. Enhance your knowledge base with Guides. Agents can access them proactively, and they can be published on your web pages for customers to help themselves.
5. Script any complex, critical or sensitive interactions to ensure consistency and accuracy.

Desktop optimisation is an ongoing process, so continue to monitor and incorporate feedback from your agents and your customers to keep your contact centre successful.

Want to learn more? Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the 2-day Oracle RightNow Customer Portal Design and Dynamic Agent Desktop Administration class. There you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.

Useful resources: Review the Best Practice Guide. Review the tune-up guide.

About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management. She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class: RightNow Customer Service Administration (3 days), RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days), RightNow Analytics (2 days), RightNow Chat Cloud Service Administration (2 days).

    Read the article

  • Visual Web Developer 2010 Express, automated testing, and SVN

    - by Mr. Jefferson
We have an HTML designer who is not a developer but needs to modify .aspx files from our ASP.NET 2.0 projects from time to time in order to get CSS to work properly with them. Currently this involves giving her the .aspx page by itself, which she opens and edits via Visual Studio 2008 (her computer used to be a developer's). I'm considering setting her up with Visual Web Developer 2010 Express and Subversion access so she can be more independent, but I wanted to make sure VWD Express will work properly with what we do. So: Does VWD 2010 Express support automated tests? If not, what happens when it opens a solution file that includes a test project, modifies it, and saves it? Are there any potential snags with setting up AnkhSVN with VWD 2010 Express?

    Read the article

  • How can I merge two Subversion branches to one working copy without committing?

    - by Eric Belair
My current Subversion workflow is like so: the trunk is used to make small content changes and bug fixes to the main source code, while branches are used for adding and editing enhancements and projects. Trunk changes are made, tested, committed and deployed pretty quickly, whereas enhancements and projects need additional user testing and approval. At times, I have two branches that need testing and approval at the same time. I don't want to merge to the trunk and commit until the changes are fully tested and approved. What I need to do is merge both branches to one working copy without any commits. I am using TortoiseSVN, and when I try to merge the second branch, I get an error message: "Cannot merge into a working copy that has local modifications". Is there a way I can do this without committing either merge?
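One possible workaround, sketched here with hypothetical repository URLs: perform both merges back to back into a single clean checkout using command-line Subversion, which (unlike the TortoiseSVN dialog) warns about, rather than refuses, a merge into a locally modified working copy. That behaviour is an assumption worth verifying on a throwaway checkout first:

    # Start from a clean checkout of trunk (no local modifications yet).
    svn checkout http://svn.example.com/repo/trunk wc-integration
    cd wc-integration

    # Merge the first branch; this leaves only local modifications.
    svn merge ^/branches/project-a

    # Merge the second branch into the now-modified working copy.
    # Overlapping changes may require manual conflict resolution here.
    svn merge ^/branches/project-b

    # Test the combined result; nothing has been committed.
    # Discard everything afterwards if desired:
    svn revert -R .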

    Read the article

  • JRockit Virtual Edition Debug Key

    - by changjae.lee
There are a few keys that can help with debugging the JRVE environment in the console. You can type each key in the JRVE console to see what's happening under the hood:

key '1' : System information
key '5' : Enable shutdown
key '7' : Start JRockit Management Server (port 7091)
key '8' : Statistics Counters
key '9' : Full Thread Dump
key '0' : Status of debug keys

Below is sample output for each key.

Debug-key '1' pressed
============ JRockitVE System Information ============
JRockitVE version : 11.1.1.3.0-67-131044
Kernel version : 6.1.0.0-97-131024
JVM version : R27.6.6-28_o-125824-1.6.0_17-20091214-2104-linux-ia32
Hypervisor version : Xen 3.4.0
Boot state : 0x007effff
Uptime : 0 days 02:04:31
CPU : uniprocessor @2327 Mhz
CPU usage : 0% ctx/s: 285 preempt/s: 0 migrations/s: 0
Physical pages : 82379/261121 (321/1020 MB)
Network info : 10.179.97.64 (10.179.97.64/255.255.254.0)
GateWay : 10.179.96.1
MAC address : 00:16:3e:7e:dc:78
Boot options :
vfsCwd : /application/user_projects/domains/wlsve_domain
mainArgs : java -javaagent:/jrockitve/services/sshd/sshd.jar -cp /jrockitve/jrockit/lib/tools.jar:/jrockitve/lib/common.jar:/application/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/application/wlserver_10.3/server/lib/weblogic.jar -Dweblogic.Name=WlsveAdmin -Dweblogic.Domain=wlsve_domain -Dweblogic.management.username=weblogic -Dweblogic.management.password=welcome1 -Dweblogic.management.GenerateDefaultConfig=true weblogic.Server
consLog : /jrockitve/log/jrockitve.log
mounts : ext2 / dev0;
posixLocale : en_US
posixTimezone : Asia/Seoul
posixEncoding : ISO-8859-1
Local disk : Size: 1024M, Used: 728M, Free: 295M
========================================================

Debug-key '5' pressed
Shutdown enabled.

Debug-key '7' pressed
[JRockit] Management server already started. Ignoring request.
Debug-key '8' pressed
Starting stat recording

Debug-key '8' pressed
========= Statistics Counters for the last second =========
dev.eth0_rx.cnt : 22 packets
dev.eth0_rx_bytes.cnt : 2704 bytes
dev.net_interrupts.cnt : 22 interrupts
evt.timer_ticks.cnt : 123 ticks
hyper.priv_entries.cnt : 144 entries
schedule.context_switches.cnt : 271 switches
schedule.idle_cpu_time.cnt : 997318849 nanoseconds
schedule.idle_cpu_time_0.cnt : 997318849 nanoseconds
schedule.total_cpu_time.cnt : 1000031757 nanoseconds
time.system_time.cnt : 1000 ns
time.timer_updates.cnt : 123 updates
time.wallclock_time.cnt : 1000 ns
=======================================

Debug-key '9' pressed
===== FULL THREAD DUMP ===============
Fri Jun 4 08:22:12 2010
BEA JRockit(R) R27.6.6-28_o-125824-1.6.0_17-20091214-2104-linux-ia32
"Main Thread" id=1 idx=0x4 tid=1 prio=5 alive, in native, waiting
-- Waiting for notification on: weblogic/t3/srvr/T3Srvr@0x646ede8[fat lock]
at jrockit/vm/Threads.waitForNotifySignal(JLjava/lang/Object;)Z(Native Method)
at java/lang/Object.wait(J)V(Native Method)
at java/lang/Object.wait(Object.java:485)
at weblogic/t3/srvr/T3Srvr.waitForDeath(T3Srvr.java:919)
^-- Lock released while waiting: weblogic/t3/srvr/T3Srvr@0x646ede8[fat lock]
at weblogic/t3/srvr/T3Srvr.run(T3Srvr.java:479)
at weblogic/Server.main(Server.java:67)
at jrockit/vm/RNI.c2java(IIIII)V(Native Method)
-- end of trace
"(Signal Handler)" id=2 idx=0x8 tid=2 prio=5 alive, in native, daemon

Open lock chains
================
Chain 1:
"ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'" id=23 idx=0x50 tid=20 waiting for java/lang/String@0x630c588 held by:
"ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" id=24 idx=0x54 tid=21 (active)
===== END OF THREAD DUMP ===============

Debug-key '0' pressed
Debug-keys enabled

Happy Cloud Walking :)
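The debug keys above are specific to the JRockit VE console. As a point of comparison, a full thread dump in the spirit of key '9' can be captured from inside any Java application using only the standard library; a minimal sketch:

    import java.util.Map;

    public class ThreadDumpExample {
        public static void main(String[] args) {
            // Enumerate every live thread and print its current stack trace,
            // roughly what debug key '9' prints at the console.
            Map<Thread, StackTraceElement[]> dump = Thread.getAllStackTraces();
            for (Map.Entry<Thread, StackTraceElement[]> entry : dump.entrySet()) {
                Thread t = entry.getKey();
                System.out.printf("\"%s\" id=%d daemon=%b state=%s%n",
                        t.getName(), t.getId(), t.isDaemon(), t.getState());
                for (StackTraceElement frame : entry.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }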

    Read the article

  • XAF DSL Tool Needs a new Team Lead

    - by Patrick Liekhus
I have enjoyed my time on this project and have used it in several production projects.  However, with the enhancements in Visual Studio 2010 and the Entity Framework, it no longer makes sense for me to support the DSL tool.  With that said, I am looking for someone with an interest in continuing the project, if anyone so desires.  My attention has moved to a new project, Entity Framework Extensions for XAF, where we are converting the current DSL tool into Entity Framework extensions.  The same code generation and everything else still work; however, the visual design surface is much easier to work with.  If you have any questions, please let me know.  Also, please take a moment to look at the new project, as this is where all my effort will be focused going forward. Thanks again for all the support of my vision thus far, and enjoy.

    Read the article

  • When will UDS sponsorship final list be announced?

    - by Manish Sinha
UDS-O is being held in Budapest, Hungary, and sponsorship applications are open until the 28th of March. My question is: when will the final list be announced? Will there be more than a one-month period for people to apply for a visa? For example, I live in India, and when we apply for a Schengen visa for the first time, it takes around two weeks to get a visa interview appointment (the period depends on the rush), then another week for the interview and for the visa to be stamped. AFAIK the application, interview and collection have to be done in person. This is what I have heard, at least. So will Canonical give at least one month for people in the eastern part of the world, or in countries where the Schengen countries have tougher visa rules? It would be great if they could finalize the names a bit faster and announce the list by the first week of April. I am hoping Jorge or Jono might come and answer this.

    Read the article
