Search Results


  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store, so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We could also provide persistent Sets, Maps and so on in this way.) So here's a skeleton implementation:

        class DbBackedList implements List<String> {
            private DbBackedList() {}

            /** Returns a list, possibly non-empty */
            public static List<String> getList() {
                return new DbBackedList();
            }

            public String get(int index) {
                return Db.getTable().getRow(index).asString(); // may throw DbExceptions!
            }

            // add(String), add(int, String), etc.
            ...
        }

    My problem lies with the fact that the underlying DB API may encounter connection errors that the List interface does not specify it should throw. Does this violate Liskov's Substitution Principle (LSP)?

    Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly-specified Exception there is determined by the inserted value, and so strengthens the precondition. In my case the connection/read error is unpredictable and due to external factors, and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError, which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.)

    Is this therefore a valid argument for throwing an extra error, and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not in violation of LSP? If this does indeed violate LSP and is thus not practically usable, I have provided two less-palatable alternative solutions as answers that you can comment on, see below.

    Footnote: Use Cases

    In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and to allow regular List operations such as search, subList and iteration.

    Another, more adventurous, use case is as a slot-in replacement for libraries that work with basic Lists. E.g. if we have a third-party task queue that usually works with a plain List:

        new TaskWorkQueue(new ArrayList<String>()).start()

    which is susceptible to losing its whole queue in the event of a crash, and we just replace this with:

        new TaskWorkQueue(new DbBackedList()).start()

    we get instant persistence and the ability to share the tasks amongst more than one machine.

    In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to throw and crash the program (e.g. if we can't change the TaskWorkQueue code).
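    If the unchecked route is defensible, a minimal sketch of it looks like the following. It assumes the Db API from the skeleton above throws a checked DbException; DbListException and rowCount() are hypothetical names introduced here for illustration only:

        import java.util.AbstractList;

        // Unchecked wrapper so List's method signatures stay unchanged; callers
        // that know about the DB backing can catch it, others will crash, much
        // like the OutOfMemoryError analogy above.
        class DbListException extends RuntimeException {
            DbListException(Throwable cause) { super(cause); }
        }

        class DbBackedList extends AbstractList<String> {
            @Override
            public String get(int index) {
                try {
                    return Db.getTable().getRow(index).asString();
                } catch (DbException e) { // hypothetical checked exception
                    throw new DbListException(e);
                }
            }

            @Override
            public int size() {
                return Db.getTable().rowCount(); // hypothetical accessor
            }
        }

    Extending AbstractList keeps the sketch honest: get(int) and size() are enough for a read-only List, and the remaining mutating methods inherit UnsupportedOperationException until implemented.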

    Read the article

  • A Generic Boolean Value Converter

    - by codingbloke
    At fairly regular intervals a question like this one appears on Stack Overflow: Silverlight Bind to inverse of boolean property value. The same answers also regularly appear. They all involve an implementation of IValueConverter and basically include the same boilerplate code. The required output type sometimes varies; other examples that have passed by are Boolean-to-Brush and Boolean-to-String conversions. Yet the code remains pretty much the same. There is therefore a good case for creating a generic Boolean-to-value converter to contain this common code and then just specialising it for use in Xaml. Here is the basic converter:

    BoolToValueConverter

        using System;
        using System.Windows.Data;

        namespace SilverlightApplication1
        {
            public class BoolToValueConverter<T> : IValueConverter
            {
                public T FalseValue { get; set; }
                public T TrueValue { get; set; }

                public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
                {
                    if (value == null)
                        return FalseValue;
                    else
                        return (bool)value ? TrueValue : FalseValue;
                }

                public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
                {
                    return value.Equals(TrueValue);
                }
            }
        }

    With this generic converter in place, it is easy to create a set of converters for various types. For example, here are all the converters mentioned so far:

    Value Converters

        using System;
        using System.Windows;
        using System.Windows.Media;

        namespace SilverlightApplication1
        {
            public class BoolToStringConverter : BoolToValueConverter<String> { }
            public class BoolToBrushConverter : BoolToValueConverter<Brush> { }
            public class BoolToVisibilityConverter : BoolToValueConverter<Visibility> { }
            public class BoolToObjectConverter : BoolToValueConverter<Object> { }
        }

    With the specialised converters created, they can be specified in a Resources property on a user control like this:

        <local:BoolToBrushConverter x:Key="Highlighter" FalseValue="Transparent" TrueValue="Yellow" />
        <local:BoolToStringConverter x:Key="CYesNo" FalseValue="No" TrueValue="Yes" />
        <local:BoolToVisibilityConverter x:Key="InverseVisibility" TrueValue="Collapsed" FalseValue="Visible" />
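    As a usage sketch (assuming a hypothetical boolean view-model property named IsBusy, which is not part of the original post), the resources above can then be consumed like this:

        <Border Background="{Binding IsBusy, Converter={StaticResource Highlighter}}">
            <TextBlock Text="{Binding IsBusy, Converter={StaticResource CYesNo}}" />
        </Border>

    When IsBusy is true the border is highlighted yellow and the text reads "Yes"; when false, the highlight is transparent and the text reads "No".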

    Read the article

  • SQL SERVER – Free eBook Download – EPUB, MOBI, PDF Format

    - by pinaldave
    Microsoft has recently released free eBooks on various Microsoft technologies. The best part is that all these books are available in ePub, Mobi and PDF. You can download them to your local machine or eBook reader and read them. This is a great start, as many important subjects are now covered and converted into eBooks. I personally read through a few of the books and found they are very comprehensive and detailed. The goal is not to cover a complete technology in a single book, but rather to pick a single topic and discuss it in detail. The sources of the books are white papers, the TechNet wiki, as well as Books Online, and the source is clearly listed right below each book title.

    Following are the books available for SQL Server technology, and I encourage all of you to have a look at them as they are great resources:

    * Master Data Services Capacity Guidelines
    * Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery
    * Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide
    * QuickStart: Learn DAX Basics in 30 Minutes
    * SQL Server 2012 Transact-SQL DML Reference

    You can download the above eBooks from here. This is indeed a great attempt, as each book talks about a single subject in depth, keeping the author focused on a single, simple subject. I have previously written two books focusing on a single subject each, and I had great pleasure writing them as well. Writing on a focused subject gives the author complete freedom to explore that single subject, without the burden of covering everything associated with that technology at large.

    Just like the eBooks mentioned earlier, my SQL Server Wait Stats book was inspired by my article series on SQL Wait Stats. The latest book, SQL Server Interview Questions and Answers, was derived from my article series on SQL Interview Q and A. Writing a book is an absolutely different exercise from writing blog posts. When I was converting my blog posts to books, I ended up writing 50% new material and removing much of the repetitive content that shows up in a blog series. It was indeed fun to write a focused book, and at the same time it was a great learning experience as an individual.

    Reference: TechNet Wiki, Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Get 2 of My Books FREE at Koenig Tech Day – Where Technologies Converge!

    - by pinaldave
    As a regular reader of my blog, you must be aware that I love to write books and talk about the various subjects of my books. The founders of Koenig Solutions are very old friends of mine; I have known them for many years, and they have been the biggest supporters of my books. This coming weekend they have a technology event at their Bangalore location. Every attendee of the technology event will get a set of two books worth Rs. 450: 'SQL Server Interview Questions And Answers' and 'SQL Wait Stats Joes 2 Pros'. I am going to cover a couple of topics from the books and present as well. I am very confident that every attendee will have a great time.

    I will be covering the following subject:

    SQL Server Tricks and Tips for Blazing Fast Performance

    Slow running queries (SQL) are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how SQL Server has been set up. The session will focus on ways of identifying problems that slow down SQL Server, and tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. After the session is over, I will point to the exact locations in the book where you can continue further learning. I am pretty excited; this is more like a book reading, but in an entirely different format.

    The one-day event will cover four technologies in four separate interactive sessions:

    * Microsoft SQL Server
    * Security
    * VMware/Virtualization
    * ASP.NET MVC

    Date of the event: Dec 15, 2012, 9 AM to 6 PM.

    Location of the event: Koenig Solutions Ltd. # 47, 4th Block, 100 feet Road, 3rd Floor, Opp to Shanthi Sagar, Koramangala, Bangalore- 560034. Mobile: 09008096122. Office: 080-41127140.

    Organizers have informed me that there are very limited seats for this event, and the technical session based on my book will start sharp at 9 AM. If you show up late, there are chances that you will not get a seat. Registration for the event is a MUST. Please visit this link for further information.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Any suggested approaches to track bugs/defects?

    - by deostroll
    What is the best way to track defect sources in TFS? We have various teams for a project, like the vulnerability team, the customer, pre-sales, etc. We give them a build and these teams independently test it. They do not have access to our TFS system, so they usually send in their defects via email, usually in an Excel format. Our testing team takes these up and logs them into TFS. Sometimes they modify the original defect description (in Excel) and add the expected/actual results. Sometimes they fail to cite the source.

    I am talking about managing the various sources as such. Is there a way we can add these sources into TFS and actually link each particular source with its defects, with individual comments associated with them (saying where in the source we can find the actual material for the defect)?

    Edit: I don't know if there is a way to manage various sources. Consider this: the vulnerability assessment team has come up with defects/suggestions. They captured them in an Excel file and passed that on to the testing team (in my case). The testing team takes the responsibility of elaborating on each defect and logging it in TFS. Now say the Excel file came with 20 defect items. This is my source. (It answers the question: where did this defect come from?) So ultimately, when I am looking at a bug, I know where it came from - I'll ultimately be looking at the email sent from the VA team, which contains the Excel file, or the Excel file itself sent by the VA team. It may be one of the 20 items in that Excel file. How should the tester link to this source just once? On the contrary, it does not make sense for the tester to attach the same Excel file 20 times (i.e. attach the same file for each of the 20 defects while logging them into TFS), right? I hope you get my point.

    Read the article

  • What Would a CyberWar Do To Your Business?

    - by [email protected]
    In mid-February the Bipartisan Policy Center in the United States hosted Cyber ShockWave, a simulation of how the country might respond to a catastrophic cyber event. An attack takes place, they can't isolate where it came from or who did it, there are simulated press reports and market impacts... and the participants in the exercise have to brief the President and advise him/her on what to do. Last week, former Department of Homeland Security Secretary Michael Chertoff, who participated in the exercise, summarized his findings in Federal Computer Weekly.

    The article, given FCW's readership and the topic, is obviously focused on the public sector and US federal policies. However, it touches on some broader issues that impact the private sector as well, and which are applicable to any government and country/region, such as:

    * How would the US (or any) government collaborate to identify and defeat such an attack? Chertoff calls this out as a current gap. How do the public and private sector collaborate today? How would the massive and disparate collection of agencies and companies act together in a crunch?

    * What would the impact on industries and global economies be? Chertoff, and a companion article in Government Computer News, only touch briefly on the subject, focusing on the impact on capital markets. "There's no question this has a disastrous impact on the economy," said Stephen Friedman, former director of the National Economic Council under President George W. Bush, who played the role of treasury secretary. "You have financial markets shut down at this point, ordinary transactions are dramatically depleted, there's no question that this has a major impact on consumer confidence."

    That Got Me Thinking

    * How would it impact Oracle's customers? I know they have business continuity plans - is this one of their scenarios? What if it's not? How would it impact manufacturing lines, ATM networks, customer call centers...

    * How would it impact me and the companies I rely on? The supermarket down the street, my Internet Service Provider, the service station where I bought gas last night.

    I sure don't have any answers, and neither do Chertoff or the participants in the exercise. "I have to tell you that ... we are operating in a bit of unchartered territory," said Jamie Gorelick, a former deputy attorney general who played the role of attorney general in the exercise. But it is a good thing that governments and businesses are considering this scenario and doing what they can to prevent it from happening.

    Read the article

  • ArchBeat Link-o-Rama for November 8, 2012

    - by Bob Rhubart
    Webcast: Meeting Customer Expectations in the New Age of Retail
    Keep your eye on this live webcast as Sanjeev Sharma (Principal Product Director, Oracle Exalogic), Kelly Goetsch (Senior Principal Product Manager, Oracle Commerce), and Dan Conway (Senior Product Manager, Oracle Retail) offer real-world examples of business value derived by running customer-facing applications on Oracle Engineered Systems. Live, Thursday Nov 8, 10am PT / 1pm ET.

    Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger
    "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." - Irving Wladawsky-Berger

    SOA Still Not Dead: Ratification of Governance Standard Highlights SOA's Continued Relevance
    So just about the time I dig into Google Trends to learn that the conversation about governance peaked in 2004, along comes this InfoQ article by Richard Seroter. And of course you've already listened to the OTN ArchBeat Podcast about governance, right? Right?

    Implications of Java 6 End of Public Updates for Oracle E-Business Suite Users | Steven Chan
    The short version is: "Nothing will change for EBS users after February 2013." According to Steven Chan, "EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6." You'll find additional information on Steven's blog.

    ADF Mobile Custom Javascript - iFrame Injection | John Brunswick
    The ADF Mobile Framework provides a range of out-of-the-box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer.

    How Data and BPM are married to get the right information to the right people at the right time | Leon Smiers
    "Business Process Management... supports a large group of stakeholders within an organization, all with different needs," says Oracle ACE Leon Smiers. "End-to-end processes typically run across departments, stakeholders and applications, and can often have a long life-span. So how do organizations provide all stakeholders with the information they need?" Leon provides answers in this post.

    Thought for the Day
    "(When) asking skilled architects... what they do when confronted with highly complex problems... (they) would most likely answer, 'Just use Common Sense.' (A) better expression than 'common sense' is 'contextual sense' - a knowledge of what is reasonable within a given context. Practicing architects through education, experience and examples accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem..." - Eberhardt Rechtin (January 16, 1926 - April 14, 2006)
    Source: SoftwareQuotes.com

    Read the article

  • Should library classes be wrapped before using them in unit testing?

    - by Songo
    I'm doing unit testing, and in one of my classes I need to send a mail from one of the methods, so using constructor injection I inject an instance of the Zend_Mail class, which is part of the Zend Framework. Example:

        class Logger
        {
            private $mailer;

            function __construct(Zend_Mail $mail)
            {
                $this->mailer = $mail;
            }

            function toBeTestedFunction()
            {
                // Some code
                $this->mailer->setTo('some value');
                $this->mailer->setSubject('some value');
                $this->mailer->setBody('some value');
                $this->mailer->send();
                // Some code
            }
        }

    However, unit testing demands that I test one component at a time, so I need to mock the Zend_Mail class. In addition, I'm violating the Dependency Inversion Principle, as my Logger class now depends on a concretion, not an abstraction. Does that mean that I can never use a library class directly and must always wrap it in a class of my own? Example:

        interface Mailer
        {
            public function setTo($to);
            public function setSubject($subject);
            public function setBody($body);
            public function send();
        }

        class MyMailer implements Mailer
        {
            private $mailer;

            function __construct()
            {
                $this->mailer = new Zend_Mail; // The class isn't injected this time
            }

            function setTo($to)
            {
                $this->mailer->setTo($to);
            }

            // implement the rest of the interface functions similarly
        }

    And now my Logger class can be happy :D

        class Logger
        {
            private $mailer;

            function __construct(Mailer $mail)
            {
                $this->mailer = $mail;
            }

            // rest of the code unchanged
        }

    Questions: Although I solved the mocking problem by introducing an interface, I have created a totally new class, MyMailer, that now needs to be unit tested, although it only wraps Zend_Mail, which is already unit tested by the Zend team. Is there a better approach to all this? Zend_Mail's send() function can actually take a Zend_Transport object when called (i.e. public function send($transport = null)). Does this make the idea of a wrapper class more appealing? The code is in PHP, but answers don't have to be. This is more of a design issue than a language-specific one.
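    For completeness, here is a hedged sketch of what the interface buys you in a test, assuming PHPUnit's classic mocking API; the test class and expectation are illustrative, not part of the original question:

        // Minimal sketch: mock the Mailer interface and assert that
        // toBeTestedFunction() actually triggers a send().
        class LoggerTest extends PHPUnit_Framework_TestCase
        {
            public function testToBeTestedFunctionSendsTheMail()
            {
                $mailer = $this->getMock('Mailer');
                $mailer->expects($this->once())
                       ->method('send');

                $logger = new Logger($mailer);
                $logger->toBeTestedFunction();
            }
        }

    The same mock would be awkward against Zend_Mail directly, which is the practical argument for the thin interface.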

    Read the article

  • Need some concrete examples on user stories, tasks and how they relate to functional and technical specifications

    - by gideon
    A little heads-up: I'm the only lonely dev building/planning/mocking my project as I go. I've come up with a preview release that does only the core aspects of the system, with good business value, and I've coded most of the UI as dirty throwaway mockups over nicely abstracted and very minimal base code. In the end I know quite well what my clients want on the whole. I can't take agile-ish cowboying anymore, because I'm completely disorganized and have no paper plan, and since my clients are happy, things are getting more complex with more features and ideas. So I started using and learning Agile & Scrum.

    Here are my problems:

    1. I know what a functional spec is (sample). Do all user stories and/or scenarios become part of the functional spec?

    2. I know what user stories and tasks are. Are these kinds of user stories? I don't see any business-value reason added to them. I made a mind map using FreeMind, and I had problems like this: Actor: Finance Manager. Can add a financial plan into the system because... well, that's the point of it? What business-value reason do I add for things like this? Example: a user needs to be able to add a blog article (in the blogger app) because..?? It's the point of a blogger app; it centers around that feature?

    3. How do I go into the finer details and system definitions? Actor: Finance Manager. Action: adds a finance plan. This adding is a complicated process with lots of steps. What user story will describe what a finance plan in the system is? I can add it into the functional spec under definitions, explaining what a finance plan is and how one needs to add it into the system, but how do I get to the backlog planning from there? Example: a blog article is mostly a textual document that can be written in rich text in the system. To add a blog article one must... But how do you create backlog lists/features out of this? Where are the user stories for what a blog article is and how one adds/removes it?

    4. Finally, I'm a little confused about the relations between functional specs and user stories. Will my spec contain user stories in it, with UI mockups? And will these user stories then branch out into tasks which will make up something like a technical specification? Example: EditorUser can add a blog article. Use XML to store the blog article. Add a form to add a blog. Add Windows Live Writer support. Those would be agile tasks, but would they also be part of, or form, the technical specs?

    Some concrete examples/answers to my questions would be nice!!

    Read the article

  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    (I posed this question on superuser.com and haven't received any answers or comments, then I came across this site, so my apologies to anyone who has seen this already.)

    I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar - and seemingly every system dialog - have forgotten the out-of-the-box "Ambiance" theme they conform to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does.

    I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf*, .gnome* and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around, I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it.

    Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matt white.

    This last time (just yesterday), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am wondering: how do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, noting that blowing away all config files - as suggested in many places online - fails to achieve this?

    It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.
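    For reference, a hedged sketch of re-applying the theme keys explicitly via GConf (the GNOME 2 configuration store used by Ubuntu 10.10), which is sometimes more targeted than deleting the .gconf* directories; the key names are the standard GNOME 2 ones, but whether this sticks across reboots is exactly what the question is about:

        # Re-apply the stock Ambiance theme explicitly instead of wiping config.
        gconftool-2 --type string --set /desktop/gnome/interface/gtk_theme "Ambiance"
        gconftool-2 --type string --set /apps/metacity/general/theme "Ambiance"
        gconftool-2 --type string --set /desktop/gnome/interface/icon_theme "ubuntu-mono-dark"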

    Read the article

  • Mock RequireJS define dependencies with config.map

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/18/mock-requirejs-define-dependencies-with-config.map.aspx

    I had a module dependency that I'm pulling down with RequireJS that I needed to use and write tests against. In this case, I don't care about the actual implementation of the module (it's simple enough that I'm just avoiding some AJAX calls). EDIT: make sure you look at the bottom example after the edit before using the config.map approach. I found that there is an easier way.

    I did not want to change the constructor of the consumer, as I had a chain of changes that would have to be made, and that would have been too invasive for this task. I found a question on StackOverflow with a short but helpful answer from "Artem Oboturov". We can use the config.map from RequireJS to achieve this. Here is some code:

    A module example ("usefulModule" in Common/Modules/usefulModule.js):

        define([], function() {
            "use strict";

            var testMethod = function() {
                // ...
            };

            // add more functionality of the module

            return {
                testMethod: testMethod
            };
        });

    A consumer of usefulModule example:

        define([
            "Common/Modules/usefulModule"
        ], function(usefulModule) {
            "use strict";

            var consumerModule = function() {
                var self = this;
                // add functionality of the module
            };
        });

    Using config.map in the html of the test runner page (and in your Karma config -> I'm still trying to figure this out):

        map: {
            '*': {
                // replace usefulModule with a mock
                'Common/Modules/usefulModule': '/Tests/Specs/Common/usefulModuleMock.js'
            }
        }

    With the new mapping, Require will load usefulModuleMock.js from Tests/Specs/Common instead of the real implementation. Some of the answers on StackOverflow mentioned Squire.js, which looked interesting, but I wasn't ready to introduce a new library at this time. That's all you need to be able to mock a dependency in RequireJS. However, there are many good cases when you should pass it in through the constructor instead of this approach.

    EDIT: After all that, here's another, probably better way:

    The consumer class, updated:

        define([
            "Common/Modules/usefulModule"
        ], function(UsefulModule) {
            "use strict";

            var consumerModule = function() {
                var self = this;
                self.usefulModule = new UsefulModule();
                // add functionality of the module
            };
        });

    Jasmine test:

        define([
            "consumerModule",
            "/UnitTests/Specs/Common/Mocks/usefulModuleMock.js"
        ], function(consumerModule, UsefulModuleMock) {
            describe("when mocking out the module", function() {
                it("should probably just override the property", function() {
                    var consumer = new consumerModule();
                    consumer.usefulModule = new UsefulModuleMock();
                });
            });
        });

    Thanks for letting me think out loud :-).
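    For the Karma part left open above, here is a hedged sketch of a test-main.js using the standard karma-requirejs bootstrap with the same mock mapping; the paths and the Spec-file naming convention are illustrative and depend on your basePath:

        // Collect the spec files Karma is serving (names are an assumption).
        var allTestFiles = Object.keys(window.__karma__.files).filter(function (file) {
            return /Spec\.js$/.test(file);
        });

        require.config({
            baseUrl: '/base', // Karma serves project files under /base
            map: {
                '*': {
                    // same substitution as the test-runner page above
                    'Common/Modules/usefulModule': 'Tests/Specs/Common/usefulModuleMock'
                }
            },
            deps: allTestFiles,
            callback: window.__karma__.start // run the tests once everything is loaded
        });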

    Read the article

  • Controlling text that appears in google search results

    - by Mick
    I have recently made a simple (pure HTML) website. The most important key phrase that I want to capture is "full reserve banking". Currently, if I type "full reserve banking" (without quotes) into Google, my site appears as the 7th item on the first page. I am reasonably happy with this, as the site is so new. But one frustration is that the text that Google displays in relation to my site is rather misleading. The main message I would like to get across is that my site is "A collection of resources for anyone interested in this alternative monetary system." and I have this as the first line of text on the page. Unfortunately, this important sentence is nowhere to be seen in the Google search result. So my question is: is there anything I can do to fix this error?

    Edit: I noticed that someone edited this question to remove the name of the website. I was very keen to leave it in, because being able to look at it makes it far easier to diagnose what I did wrong. Indeed, the answer suggested by "Su" clearly shows that they looked at my website and analyzed what it was doing, which helped them give a clearer answer. If I am breaking some policy by including the name, then please explain what this policy is in a comment.

    Edit: I have now made a series of changes to my meta descriptions, as inspired by the answers given here. On the homepage I now have the text:

        <META NAME="description" CONTENT="A collection of resources for anyone interested in Full Reserve Banking. What it is, how it works, web resources, organisations, research papers etc.">

    I am now very excited to see what will happen after the next visit by the Google robots.

    Edit: Result! I just did a Google search for "full reserve banking", and the text that appeared was:

        Full Reserve Banking: The definitive resource.
        A collection of resources for anyone interested in Full Reserve Banking. What it is, how it works, web resources, organisations, research papers etc.
        www.fullreservebanking.com/ - Cached

    By the way, I did originally have a meta description, but it was too short; it just said "full reserve banking". Google obviously decided this was too little and so chose to use its own algorithms to cook up a different sentence from the main text.

    Read the article

  • Is reliance on parametrized queries the only way to protect against SQL injection?

    - by Chris Walton
    All I have seen on SQL injection attacks seems to suggest that parametrized queries, particularly ones in stored procedures, are the only way to protect against such attacks. While I was working (back in the Dark Ages), stored procedures were viewed as poor practice, mainly because they were seen as less maintainable, less testable, highly coupled, and locking a system into one vendor (this question covers some other reasons).

    Although when I was working, projects were virtually unaware of the possibility of such attacks, various rules were adopted to secure the database against corruption of various sorts. These rules can be summarised as:

    * No client/application had direct access to the database tables.
    * All accesses to all tables were through views (and all updates to the base tables were done through triggers).
    * All data items had a domain specified.
    * No data item was permitted to be nullable - this had implications that had the DBAs grinding their teeth on occasion, but it was enforced.
    * Roles and permissions were set up appropriately - for instance, a restricted role to give only views the right to change the data.

    So is a set of (enforced) rules such as this (though not necessarily this particular set) an appropriate alternative to parametrized queries in preventing SQL injection attacks? If not, why not? Can a database be secured against such attacks by database-specific measures alone?

    EDIT: Emphasis of the question changed slightly, in the light of the initial responses received. Base question unchanged.

    EDIT2: The approach of relying on parametrized queries seems to be only a peripheral step in defense against attacks on systems. It seems to me that more fundamental defenses are both desirable and may render reliance on such queries unnecessary, or less critical, even to defend specifically against injection attacks. The approach implicit in my question was based on "armouring" the database, and I had no idea whether it was a viable option. Further research has suggested that there are such approaches. I have found the following sources that provide some pointers to this type of approach:

    * http://database-programmer.blogspot.com
    * http://thehelsinkideclaration.blogspot.com

    The principal features I have taken from these sources are:

    * An extensive data dictionary, combined with an extensive security data dictionary
    * Generation of triggers, queries and constraints from the data dictionary
    * Minimize code and maximize data

    While the answers I have had so far are very useful and point out difficulties arising from disregarding parametrized queries, ultimately they do not answer my original question(s) (now emphasised in bold).
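    To make the "armouring" concrete, here is a minimal sketch of the first, second and last rules above in generic SQL; all object and role names are illustrative, and the exact REVOKE/GRANT syntax varies by vendor:

        -- Base table: domains specified, nothing nullable.
        CREATE TABLE accounts_base (
            account_id INTEGER       NOT NULL PRIMARY KEY,
            balance    NUMERIC(12,2) NOT NULL CHECK (balance >= 0)
        );

        -- Clients see only the view, never the base table.
        CREATE VIEW accounts AS
            SELECT account_id, balance
            FROM accounts_base;

        REVOKE ALL   ON accounts_base FROM app_role;
        GRANT SELECT ON accounts      TO   app_role;

    Note that this limits what a successful injection can touch; it does not by itself stop injected SQL from running with whatever rights app_role retains, which is the gap the answers about parametrized queries keep pointing at.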

    Read the article

  • How can I set the date format to my country setting?

    - by Jamina Meissner
    I am German, but I use only English software. Hence, I am also using English Ubuntu. It's not because I don't know how to install German Ubuntu; it's because I prefer to work in an English software environment. However, I would like to keep the date & time format in the German format, just as I use a German keyboard layout in English Ubuntu.

    I can set the time format to 24h time. But how can I set the date format to the German format? It is irritating for me to have the day number between the month and the time numbers: in other words, instead of "Oct 14 15:16" I want it to display "14 Okt 15:16" or (if only English is available) "14 Oct 15:16" or "14th Oct 15:16". At least the number of the day should be displayed before the month.

    In Windows, it was no problem to choose time/date/currency settings according to a chosen country. Where can I do this in Ubuntu? The best would be if I could freely enter the date/time format myself with variables (DD.MM hh.mm.ss etc). I found answers for Ubuntu 11.04, but not for Ubuntu 12.04. I am using Ubuntu 12.04, 64-bit. Keep in mind that I am a beginner, so I'd like to be able to do this via the GUI, if possible.

    EDIT: I found the answer in a forum:

    1. Go to System Settings... and choose Language Support. There are two tabs, Language and Regional Formats. You are by default on the Language tab.
    2. On the Language tab, click Install / Remove Languages. A window with a list of languages opens.
    3. Mark the language(s) you want to add for your time/date/currency format, then click Apply Changes. Ubuntu will now download and install the additional language files, as well as help files of other applications in this language, so don't be irritated.
    4. When Ubuntu has finished applying the changes, switch to the Regional Formats tab. (Do not change the language for menus and windows on the Language tab if you only want to change the date/time/unit format.) There you can choose from the dropdown list the language for your preferred format for date/time/currency/units.
    5. Log out and log in again to have the changes take effect.
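    For command-line-minded readers, a hedged sketch of the equivalent change on Ubuntu 12.04, assuming only the date/time format should be German while everything else stays English; ~/.pam_environment is the per-user mechanism the GUI writes to:

        # Make sure the German locale exists, then set only LC_TIME for this user.
        sudo locale-gen de_DE.UTF-8
        echo 'LC_TIME=de_DE.UTF-8' >> ~/.pam_environment

        # Log out and back in, then verify:
        locale | grep LC_TIME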

    Read the article

  • I need to move an entity to the mouse location after I right-click

    - by I.Hristov
    Well, I've read the related questions and answers but still can't find a way to move my champion to the mouse position after a right-button mouse click. I use this code at the top:

        float speed = (float)1 / 3;

    And this is in my void Update:

        // check if right mouse button is clicked
        if (mouse.RightButton == ButtonState.Released && previousButtonState == ButtonState.Pressed)
        {
            // gets the position of the mouse in mousePosition
            mousePosition = new Vector2(mouse.X, mouse.Y);

            // gets the current position of champion (the drawRectangle)
            currentChampionPosition = new Vector2(drawRectangle.X, drawRectangle.Y);

            // move champion to mouse position:
            // handles the case when the mouse position is really close to current position
            if (Math.Abs(currentChampionPosition.X - mousePosition.X) <= speed &&
                Math.Abs(currentChampionPosition.Y - mousePosition.Y) <= speed)
            {
                drawRectangle.X = (int)mousePosition.X;
                drawRectangle.Y = (int)mousePosition.Y;
            }
            else if (currentChampionPosition != mousePosition)
            {
                drawRectangle.X += (int)((mousePosition.X - currentChampionPosition.X) * speed);
                drawRectangle.Y += (int)((mousePosition.Y - currentChampionPosition.Y) * speed);
            }
        }
        previousButtonState = mouse.RightButton;

    What that code does at the moment is, on a click, bring the sprite 1/3 of the distance to the mouse, but only once. How do I make it move consistently all the time? It seems I am not updating the sprite at all.

    EDIT: I added the Vector2 as Nick said, and with speed changed to 50 it should be OK. I tried it with if ButtonState.Pressed and it works while pressing the button. Thanks. However, I wanted it to start moving on a single click, and it should keep moving until it reaches mousePosition. The edit of Nick's post says to create another Vector2, but I already have the one called mousePosition. Not sure how to use another one.

        // gets a Vector2 direction to move *by Nick Wilson
        Vector2 direction = mousePosition - currentChampionPosition;

        // make the direction vector a unit vector
        direction.Normalize();

        // multiply with speed (number of pixels)
        direction *= speed;

        // move champion to mouse position
        if (currentChampionPosition != mousePosition)
        {
            drawRectangle.X += (int)(direction.X);
            drawRectangle.Y += (int)(direction.Y);
        }
        previousButtonState = mouse.RightButton;
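    A hedged sketch of one common fix follows: remember the click as a target, then steer toward it on every Update tick regardless of the current button state. The field names mirror the snippets above; target and moving are new, illustrative names:

        private Vector2 target;   // where the champion is headed
        private bool moving;

        // in Update(GameTime gameTime):
        if (mouse.RightButton == ButtonState.Released && previousButtonState == ButtonState.Pressed)
        {
            target = new Vector2(mouse.X, mouse.Y); // latch the click once
            moving = true;
        }
        previousButtonState = mouse.RightButton;

        if (moving)
        {
            Vector2 position = new Vector2(drawRectangle.X, drawRectangle.Y);
            Vector2 direction = target - position;

            if (direction.Length() <= speed)
            {
                // close enough: snap to the target and stop
                drawRectangle.X = (int)target.X;
                drawRectangle.Y = (int)target.Y;
                moving = false;
            }
            else
            {
                direction.Normalize();
                position += direction * speed; // speed = pixels per frame
                drawRectangle.X = (int)position.X;
                drawRectangle.Y = (int)position.Y;
            }
        }

    One caveat: with an int draw rectangle, a speed below 1 (like the original 1/3) truncates to zero movement each frame, so either keep speed at a few pixels per frame or track the position in a float Vector2 and only cast when drawing.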

    Read the article

  • What can I do to make sure my code gets maintained in a developer light environment?

    - by asjohnson
    I am a contract data analyst, so I bounce between jobs every 3-6 months, which I find to be a good fit for me, but it leads to some problems when it comes to coding. I mostly do statistics (I've asked a similar question on Cross Validated, but the answers there are not relevant here), but I have also found out that the business world loves Excel, and loves copying and pasting the same thing over and over again even more. This led me to learn how to write VBA scripts and then VB.NET programs to automate as many of these reports as I can.

    I am certain my programs are not the most elegant, but I put a good bit of effort into making sure they work under as many cases as I can test. I add in exceptions and try to code so the program can handle changes in the files that it processes, but there is a limit: if you remove a huge portion of the data, there is a good chance my program is going to trip up, which I accept will inevitably happen. Usually a pretty minor change in the code fixes the problem, and I do try to comment my code and make it readable under the assumption that some other person will have to read it some day.

    My problem is that I generally get put on teams of folks with essentially no experience with programming (like VBA would be a huge stretch for anyone I work directly with). I am wondering what I should be doing, as the person that wrote the code, to do my best to keep it maintained. I have two approaches in mind (outlined next), but would be very happy to get any advice.

    Solution 1: Find the more tech-savvy coworkers and run them through the programs and what basic changes can be made. Honestly, automating Excel is about as easy as it gets when it comes to programming, so I feel like I could teach someone the basics of maintaining it pretty quickly.

    Solution 2: Get in touch with the IT department and show them what is going on, and maybe they will be able to help. The problem here is that the IT department is constantly swamped (as I'm sure many of you know) and I feel like kind of a jerk for dumping more things on them.

    I do leave my personal email address with places and am willing to answer quick questions via email, but I view the need for more exhaustive maintenance as something of an inevitability and would like to make sure I do my due diligence to get it done. I imagine some combination of the two approaches outlined above, but is there any kind of heads-up I should give IT? I feel like I would be annoyed if I started getting requests to fix a program I had never seen, from some random guy who is no longer there.

    Read the article

  • Is anyone doing "real" TDD with Visual-C++, and if yes, how do they do it?

    - by Martin
    Test Driven Development implies writing the test before the code and following a certain cycle:

    * Write test
    * Check test (run)
    * Write production code
    * Check test (run)
    * Clean up production code
    * Check test (run)

    As far as I'm concerned, this is only possible if your development solution allows you to very quickly switch between the production and test code, and to execute the test for a certain production code part extremely quickly.

    Now, while there exist lots of unit testing frameworks for C++ (I'm using Boost.Test at the moment), it does seem that there doesn't really exist any decent (for native C++) Visual Studio (plugin) solution that makes the TDD cycle bearable, regardless of the framework used. "Bearable" means that it's a one-click action to run a test for a certain cpp file without having to manually set up a separate testing project, etc. "Bearable" also means that a simple test starts (linking!) and runs very quickly.

    So, what tools (plugins) and techniques are out there that make the TDD cycle possible for native C++ development with Visual Studio? Note: I'm fine with free or "commercial" tools. Please: no framework recommendations. (Unless the framework has a dedicated Visual Studio plugin and you want to recommend the plugin.)

    Edit Note: The answers so far have provided links on how to integrate a unit testing framework into Visual Studio. The resources more or less describe how to get the UT framework to compile and get your first tests running. This is not what this question is about. I'm of the opinion that to really work productively, having the unit tests in a manually maintained(!), separate vcproj from your production classes will add so much overhead that TDD "isn't possible". As far as I am aware, you do not add extra "projects" to a Java or C# solution to enable unit tests and TDD, and for good reason. This should be possible with C++ given the right tools, but it seems (this question is about) that there are very few tools for TDD/C++/VS.

    Googling around, I've found one tool, VisualAssert, that seems to aim in the right direction. However, afaik, it doesn't seem to be in widespread use (compared to CppUnit, Boost.Test etc.).

    Edit: I would like to add a comment to the context for this question. I think it does a good job of outlining (part of) the problem: (comment by Billy ONeal) "Visual Studio does not use 'build scripts' that are reasonably editable by the user. One project produces one binary. Moreover, Java has the property that Java never builds a complete binary -- the binary you build is just a ZIP of the class files. Therefore it's possible to compile separately then JAR together manually (using e.g. 7z). C++ and C# both actually link their binaries, so generally speaking you can't write a script like that. The closest you can get is to compile everything separately and then do two linkings (one for production, one for testing)."
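    For readers who have not used the framework mentioned above, a minimal Boost.Test module looks like this; it uses the header-only variant so a single cpp file compiles into a runnable test binary, and the add function is a stand-in for real production code:

        // Minimal Boost.Test sketch: one translation unit, one test executable.
        #define BOOST_TEST_MODULE ProductionSmokeTests
        #include <boost/test/included/unit_test.hpp>

        int add(int a, int b) { return a + b; } // stand-in production code

        BOOST_AUTO_TEST_CASE(add_sums_its_arguments)
        {
            BOOST_CHECK_EQUAL(add(2, 2), 4);
            BOOST_CHECK_EQUAL(add(-1, 1), 0);
        }

    The friction the question describes is exactly what is left around this: wiring such files into a second vcproj, linking it, and running it on every save.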

    Read the article

  • Rails - How to use modal form to add object in one model, then reflect that change on main page?

    - by Jim
    I'm working on a Rails app and I've come across a situation where I'm unsure of the cleanest way to proceed. I posted a question on SO with code samples and such - it has received no answers, and the more I think about the problem, the more I think I might be approaching this the wrong way. (See the SO question at http://stackoverflow.com/questions/9521319/how-to-reference-form-when-rendering-partial-from-js-erb-file)

    So, in more of a generic architecture-type question: right now I have a form where a user can add a new recipe. The form also allows the user to select ingredients (it uses a collection_select which contains Ingredient.all). The catch is, I'd like the user to be able to add a new ingredient on the fly, without leaving the recipe form. Using a hidden div and some jQuery/AJAX, I have a link the user can click to pop up a modal form containing ingredients/new.html.erb, which is a simple form. When that form is submitted, I call ingredients/create.js.erb to validate that the ingredient was saved and hide the modal div. Now I am back on my recipe form, but my collection_select hasn't updated. It seems I have a few choices here:

    1. Try to re-render the collection_select portion of the form so it grabs a new list of ingredients. This was the method I was attempting when I wrote the SO question. The problem I run into is that the partial I use for the collection_select needs the parent form passed in, and when I try to render from the JS file I don't know how to pass it the form object.

    2. Reload the recipe form. This works (the collection_select now contains the new ingredient), but the user loses any progress they made on the recipe form. I would need a way to persist the form data - I thought about manually passing the values back and forth, but that is sloppy and there has to be a better way...

    3. Try to manually insert the tags using jQuery - this would be simple, but because I'm allowing for multiple ingredients to be added, I can't be certain what ID to target.

    Now, I can't be the only person to have this issue, so is there an easier way I'm missing? I like option 2 above, but I don't know if there's an easy way to grab the entire params hash as if I had submitted the main recipe form. Hopefully someone can point me in the right direction so I can find an answer to this... If this doesn't make any sense at all, let me know - I can post code samples if you want, but most of the pertinent code is up on the SO question. Thanks!
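    For option 3, a hedged sketch of what ingredients/create.js.erb could append; the modal div id, the CSS class and the instance variable are illustrative, not taken from the app:

        // Hide the modal, then add the freshly created ingredient to every
        // ingredient dropdown currently on the page.
        $('#ingredient-modal').hide();
        $('select.ingredient-select').append(
            $('<option>', {
                value: '<%= @ingredient.id %>',
                text: '<%= j @ingredient.name %>'
            })
        );

    Targeting a shared class instead of an ID sidesteps the "can't be certain what ID to target" problem: every select rendered for the recipe gets the class, and the append hits them all.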

    Read the article

  • Avoiding Duplicate Content Penalties on a Corporate/Franchise website

    - by heath
    My question is really an extension of a previous question that was ported from Stack Overflow and closed, so I cannot edit it. The basic gist is that a regional franchise company has decided to force all independent stores into one website look; they currently all have their own domains and completely different websites. After reading the helpful answers and looking over some links provided, I think my solution is to put a 301 on each franchise store site (acme-store1.com, acme-store2.com, etc.) back to the main corporate site (acme.com).

    All of the company history, product info, etc. (about 90% of the entire site) applies to all stores. However, each store should have some exclusive content, such as staff, location pictures, exclusive events and promotions, etc. I originally thought that I would simply do something like acme.com/store1/staff, acme.com/store2/staff, etc. for the store-exclusive content, and then acme.com/our-company, for example, would cover all stores. However, I now see two issues that I don't know how to solve:

    1. They want to see site stats based on which store site the visitor came from. If a user comes from acme-store1.com, is redirected to acme.com and hits several pages, don't I need to somehow keep that original site in the new URL to track each page in that user's session and show they originally came from acme-store1.com?

    2. Each store is still independently owned and is essentially still in competition with the other stores, albeit less than with other brands. This is important because each store would like THEIR contact info, links to their social media pages, their mailing-list sign-up and customer requests on EVERY page. So if a user originally goes to acme-store1.com and is redirected to acme.com, it should still look to the user like it's all about store 1, even though 90% of the content will be exactly the same as on the store 2, store 3 and corporate sites. For example, acme.com/our-company would have the same company history and the same header/footer/navigation, BUT depending on the original site the user came from, it would display the contact info and links of THAT store. If someone came directly to the corporate site, it would display corporate contact info and links (the corporate site has its own as well).

    I was considering that all redirects would be to store1.acme.com, store2.acme.com, etc. (or acme.com/store1), and then I could dynamically add the contact info and appropriate links based on the subdomain or subfolder (sketched below). But then I have to worry about duplicate content penalties because, again, about 90% of the text in these "subdomains" is all the same.

    For reference, this is a PHP5 site. I've already written a compact framework utilizing templates and mod_rewrite that I've used for other sites. Is this an easy fix that I'm just not grasping? Any suggestions?
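    As a hedged sketch of that subdomain idea (store data, host names and markup are invented for illustration), the shared page can swap in the store's contact block and declare a canonical URL, so the duplicated 90% is attributed to one address:

        <?php
        // Pick the store from the subdomain; fall back to corporate.
        $stores = array(
            'store1' => array('name' => 'Acme Store 1', 'phone' => '555-0101'),
            'store2' => array('name' => 'Acme Store 2', 'phone' => '555-0102'),
        );
        $parts = explode('.', $_SERVER['HTTP_HOST']);
        $store = isset($stores[$parts[0]]) ? $stores[$parts[0]] : null;
        ?>
        <?php if ($store): ?>
          <!-- rel=canonical tells search engines which copy of the shared page is primary -->
          <link rel="canonical" href="http://acme.com<?php echo htmlspecialchars($_SERVER['REQUEST_URI']); ?>" />
          <div class="store-contact">
            <?php echo htmlspecialchars($store['name']); ?>, tel: <?php echo $store['phone']; ?>
          </div>
        <?php endif; ?>

    The subdomain also answers the analytics question, since it survives in every page view's hostname.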

    Read the article

  • Installed LibreOffice 4 with ppa, how do I remove it and go back to LibreOffice 3?

    - by MMA
    EDIT: This question is not at all a duplicate of "How to downgrade from LibreOffice 4.0 to 3.6?" That question talks about downgrading from a specific version of LibreOffice, namely from 4.0 to 3.6, and the solutions mentioned are not the ones I am looking for. They will work, but I wanted a general solution for going from a higher version to a lower one without using a PPA or downloading .deb files. The solutions there suggest either downloading .deb files for LibreOffice 3.6 or adding a repository for it. Furthermore, some of the answers put out-of-proportion (though applicable) stress on the use of Synaptic, rather than a general command-line solution. That made me wonder: if I took a fresh computer today and installed Ubuntu 12.04 on it, LibreOffice installation would work without a hitch. So why can I not install LibreOffice on my 12.04 machine today from the simple command line?

    This answer to my question clarified everything. I need to use ppa-purge, which resets all packages from a PPA to the standard versions released for my distribution. Basically, it is a way to restore my system back to the way it was before I installed packages from a PPA. This article further elaborates on the idea. The above-mentioned answer worked perfectly for me. Actually, this was an education for me, since it taught me how to downgrade a package that was added via a PPA.

    I had upgraded from LibreOffice 3 to LibreOffice 4 using the PPA. Now, since I found that LibreOffice 4 has some issues, including handling my native language, I want to move back to LibreOffice 3. In order to accomplish this, I removed the LibreOffice config directory from my home directory and then purged LibreOffice from my machine:

        sudo apt-get purge libreoffice-*

    Then I removed the relevant PPAs using the sudo apt-add-repository --remove command, and then ran sudo apt-get update. Now, when I try to install LibreOffice using the command

        sudo apt-get install libreoffice

    I get an avalanche of output about unmet dependencies, something like:

        The following packages have unmet dependencies:
         libreoffice : Depends: libreoffice-core (= 1:3.5.7-0ubuntu4) but it is not going to be installed
        (snipped)

    If I dig into the issue further, by using the command

        sudo apt-get install libreoffice-core

    I get:

        The following packages have unmet dependencies:
         libreoffice-core : Depends: libreoffice-common (> 1:3.5.7) but it is not going to be installed
                            Depends: libexttextcat0 (>= 2.2-8) but it is not going to be installed
                            Depends: ure (>= 3.5.7~) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Could you please tell me how I can install LibreOffice 3 on my machine? I am using Ubuntu 12.04 LTS.
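    For anyone landing here with the same problem, a hedged sketch of the ppa-purge route described above; the PPA name assumes the usual LibreOffice PPA and may differ on your machine:

        # Install ppa-purge, then revert the PPA's packages to the Ubuntu versions.
        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:libreoffice/ppa

        # Reinstall the stock packages afterwards if anything is missing.
        sudo apt-get update
        sudo apt-get install libreoffice

    The key difference from a plain purge is that ppa-purge downgrades each package back to the archive version instead of just removing it, which avoids the held-broken-packages state shown above.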

    Read the article

  • The Buzz at the JavaOne Bookstore

    - by Janice J. Heiss
    I found my way to the JavaOne bookstore, a hub of activity. Who says brick-and-mortar bookstores are dead? I asked what was hot and got two answers: Hadoop in Practice by Alex Holmes was doing well. And Scala for the Impatient by noted Java Champion Cay Horstmann also seemed to be a fast seller.

    Hadoop in Practice
    Hadoop is a framework that organizes large clusters of computers around a problem. It is touted as especially effective for large amounts of data, and is used by such companies as Facebook, Yahoo, Apple, eBay and LinkedIn. Hadoop in Practice collects nearly 100 Hadoop examples and presents them in a problem/solution format, with step-by-step explanations of solutions and designs. It's very much a participatory book, intended to make developers more at home with Hadoop.

    The author, Alex Holmes, is a senior software engineer with more than 15 years of experience developing large-scale distributed Java systems. For the last four years, he has gained expertise in Hadoop, solving Big Data problems across a number of projects. He has presented at JavaOne and Jazoon and is currently a technical lead at VeriSign. At this year's JavaOne, he is presenting a session with VeriSign colleague Karthik Shyamsunder called "Java: A Perfect Platform for Data Science", where they will explain how the Java platform has emerged as a perfect platform for practicing data science, and also talk about such technologies as Hadoop, Hive, Pig, HBase, Cassandra, and Mahout.

    Scala for the Impatient
    San Jose State University computer science professor and Java Champion Cay Horstmann is the principal author of the highly regarded Core Java. Scala for the Impatient is a basic, practical introduction to Scala for experienced programmers. Horstmann has a presentation summarizing the themes of his book on his website. On the final page he offers an enticing summary of his conclusions:

    * Widespread dissatisfaction with Java + XML + IDEs -- "Don't make me eat Elephant again"
    * A separate language for every problem domain is not efficient -- it takes time to master the idioms
    * "JavaScript Everywhere" isn't going to scale
    * The trend is towards languages with more expressive power, less boilerplate
    * Will Scala be the "one ring to rule them"? Maybe -- if it succeeds in industry, and if student-friendly subsets and tools are created

    The popularity of both books echoed comments by IBM Distinguished Engineer Jason McGee, who closed his part of the Sunday JavaOne keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them - JavaScript, JRuby, Scala, Python and so forth. Java developers increasingly must know the strengths and weaknesses of such languages going forward.

    Read the article

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server-side) on a new forum (a brand new product from a technical point of view) we launched in Germany a few months ago, and I'm quite surprised by the results I get. I monitor our response time using Apache logs and our own implementation of a Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms, where our old product was responding in about 1050 ms. On the other side, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today, where it was 700 three months ago with our old product.

    I figured that GWT was taking client-side metrics into account, so I added some measures to our Boomerang beacon, and everything looks just fine. I have also run some random pages through YSlow and Google's Page Speed, and everything looks better than it was before. We even have an 82% score on Google's Page Speed tool, which is quite cool for a site with some ads in it :)

    Lately, we signed a deal with Akamai to use two of their products: CDN for our static files (we were using another CDN before, but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems that these changes have improved response time from 650 ms to about 500 ms, which is good (still not great, but it is definitely an improvement). But Webmaster Tools continues to report an increasing average response time, where we see it decreasing over the same period.

    Have you ever seen the same kind of weird behavior on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools, so that we could improve our site and constantly check that it is what Google wants?

    Edit 2011/07/26: Thanks for your answers, guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one, for now. We probably found an issue on our side with some very slow pages (around 3000 ms!!) and we are trying to fix them. I'll keep you posted as soon as I have more info. Thanks again!
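    On the monitoring question, a hedged sketch using the Navigation Timing API, which exposes the kind of whole-page timings (network plus client side) that GWT's Site Performance appears to fold in; the exact formula Google uses is not public, so treat the split below as an approximation:

        // Log a rough backend/frontend split once the page has fully loaded.
        window.addEventListener('load', function () {
            var t = window.performance.timing;
            var backend  = t.responseEnd - t.navigationStart;    // DNS, connect, server, transfer
            var frontend = t.loadEventStart - t.responseEnd;     // parse, subresources, render
            console.log('backend ms:', backend, 'frontend ms:', frontend);
        });

    Shipping these two numbers through the existing Boomerang beacon would let you compare your own full-page trend against the GWT graph instead of against server-side Apache times alone.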

    Read the article

  • Need help fixing DPKG errors after update from 12.04 to 12.10

    - by James Wulfe
    So I was doing fine, then I upgraded my system to 12.10, and now I can't get my system to update all of its packages properly, no matter what I do. What is happening here, and how do I fix it? If I had known 12.10 would be this much of a hassle, I would never have upgraded.

    Here is a sampling of the output from "apt-get -f install". It should also be noted that it is just these six packages; no other packages have given me this kind of trouble, at least so far. It was just five, but then I got an update for Unity, and now unity-common has joined the troublemakers, which prevents me from upgrading the actual unity package, as unity-common is a dependency of it.

    Preparing to replace usb-modeswitch-data 20120120-0ubuntu1 (using .../usb-modeswitch-data_20120815-1_all.deb) ...
    /var/lib/dpkg/info/usb-modeswitch-data.prerm: 4: /var/lib/dpkg/info/usb-modeswitch-data.prerm: dpkg-maintscript-helper: Input/output error
    dpkg: warning: subprocess old pre-removal script returned error exit status 2
    dpkg: trying script from the new package instead ...
    /var/lib/dpkg/tmp.ci/prerm: 4: /var/lib/dpkg/tmp.ci/prerm: dpkg-maintscript-helper: Input/output error
    dpkg: error processing /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb (--unpack):
    subprocess new pre-removal script returned error exit status 2
    /var/lib/dpkg/info/usb-modeswitch-data.postinst: 7: /var/lib/dpkg/info/usb-modeswitch-data.postinst: dpkg-maintscript-helper: Input/output error
    dpkg: error while cleaning up:
    subprocess installed post-installation script returned error exit status 2
    Errors were encountered while processing:
    /var/cache/apt/archives/network-manager_0.9.6.0-0ubuntu7_i386.deb
    /var/cache/apt/archives/pcmciautils_018-8_i386.deb
    /var/cache/apt/archives/unity-common_6.10.0-0ubuntu2_all.deb
    /var/cache/apt/archives/whoopsie_0.2.7_i386.deb
    /var/cache/apt/archives/usb-modeswitch_1.2.3+repack0-1ubuntu3_i386.deb
    /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    I would also like to note that I have cleaned the apt cache both through the terminal and manually, I have tried installing the packages manually through dpkg (both from /var/cache/apt/archives/ and from my own manually downloaded .deb files), I have tried dpkg-reconfigure, and I have used BleachBit to clean my system. I have also tested both my HDD and memory and found no significant errors that would explain the input/output errors.

    Quite frankly, I am just out of options and have grown tired of trying to google a solution to this mess, but I still do not wish to resort to backing up settings and reinstalling the system. Any help would be appreciated. I am only interested in answers; please leave your feelings about grammar, punctuation, and bias towards how a "post should look" at the door. If you don't have something to contribute towards solving my problem, then you are just contributing to it. Thank you.
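    One hedged, editorial aside: since the question describes manually reinstalling the cached .debs one at a time, that retry can at least be scripted so each package's dpkg exit status is captured for comparison across runs. The Python sketch below is only an aid to that bookkeeping, not a fix for the underlying input/output errors; the package names are taken from the error output above, it assumes the .debs are still present in /var/cache/apt/archives/, and it must run as root.

        # reinstall_cached.py -- retry "dpkg -i" on the failing cached .debs
        # and record each exit status (illustrative sketch; run as root)
        import glob
        import subprocess

        PACKAGES = ["network-manager", "pcmciautils", "unity-common",
                    "whoopsie", "usb-modeswitch", "usb-modeswitch-data"]

        for name in PACKAGES:
            for deb in sorted(glob.glob("/var/cache/apt/archives/%s_*.deb" % name)):
                result = subprocess.run(["dpkg", "-i", deb])
                status = "ok" if result.returncode == 0 else "failed (%d)" % result.returncode
                print("%s -> %s" % (deb, status))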

    Read the article

  • How to move a website and domain name without experiencing downtime for emails or site?

    - by user4842
    Okay, I have a pretty complex problem, so I'll get right to it. I'm a designer who built a new website for my client. Their old site is hosted at GoDaddy, as well as their email. The problem is, the guy who built the original site decided to put the original domain name and hosting under HIS personal GoDaddy account. That turned out to be a bad move for several reasons.

    Here's how it's all tied together. The original domain name, www.domainoriginal.com, was actually purchased at Network Solutions. The original web designer pointed the nameservers from Network Solutions to his GoDaddy account, where the email and hosting are set up. The new domain name, www.domainnew.com, was purchased under a new and separate GoDaddy account belonging to the company, and the new website was built on a third-party platform (Big Commerce). So www.domainnew.com is already pointed to the new website using A records at the new GoDaddy account. All is fine there.

    However, they still need www.domainoriginal.com to point to the NEW website as well. (The old site can simply be deleted; it is NOT important.) AND they want to keep their old email addresses intact and working, but under the NEW GoDaddy account.

    Obviously, I have no DNS control at Network Solutions, and I have no idea what kind of control I have at GoDaddy under the old account, because the web designer will not let me see inside his account. But he and GoDaddy both tell me nothing can be done other than to repoint the nameservers to Network Solutions, then repoint the old domain's A record to my new website, www.domainnew.com, and point the MX records to GoDaddy. I'm told the downtime would be 24-48 hours if I do this.

    Ideally, we'd like to do a domain name transfer and get www.domainoriginal.com into the new GoDaddy account created by the company, but I'm told this could take up to 7 days. Does this mean the site and email will be down for 7 days? And would any emails sent during this time be lost forever? If I do this, how long can I expect the site and email to be down, and will the emails be permanently lost? I've gotten different answers from everybody at GoDaddy, so I kind of don't trust them anymore...

    Any help would be greatly appreciated.

    Thanks,
    Tyson
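    As an editorial aside on the downtime question: during either cutover, DNS propagation can be watched rather than guessed at. The Python sketch below is illustrative only; the expected address and polling interval are placeholders, it checks only the A record as seen by the local resolver, and MX records would need a DNS library or a tool such as nslookup.

        # dnswatch.py -- poll the local resolver until the domain's A record
        # resolves to the expected new address (placeholder values throughout)
        import socket
        import time

        DOMAIN = "domainoriginal.com"   # the domain being repointed
        EXPECTED_IP = "203.0.113.10"    # placeholder for the new host's address
        INTERVAL_SECONDS = 300          # poll every five minutes

        while True:
            try:
                infos = socket.getaddrinfo(DOMAIN, 80, socket.AF_INET)
                addresses = {info[4][0] for info in infos}
            except socket.gaierror as exc:
                addresses = set()
                print("lookup failed: %s" % exc)
            print("%s -> %s" % (DOMAIN, sorted(addresses)))
            if EXPECTED_IP in addresses:
                print("A record now resolves to the new address locally.")
                break
            time.sleep(INTERVAL_SECONDS)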

    Read the article

  • Oracle Service Cloud May 2014 Release – Focus on your driving by JP Saunders

    - by Tuula Fai
    The next time you’re twiddling dials on your car’s dashboard to get the air to blow in the right direction, and the right song to play on the stereo, while pulling on wires to charge your phone and punching in passwords to re-sync your hands-free headset to take a call, consider this: does having a better dashboard UI in your car improve your driving performance?

    The Tesla car has one of the most modern and intuitive dashboards in any commercial car today. It is actually based on the design of a smartphone, and can download apps and updates directly from the cloud. The 17” touchscreen, Linux-based dashboard totally integrates all channels and devices, allowing the driver to focus on the smooth driving and power of this luxury (toy) car. What the folks at Tesla didn't do was avoid the complexity of our needs. Instead, they streamlined them. And while we might not all be able to afford a Tesla, their approach demonstrates that a modern UI can ultimately make a positive difference in our lives and businesses.

    This is why the productivity and effectiveness of a Modern Contact Center is many times greater than that of a traditional contact center. Agents in a Modern Contact Center get to focus on the task at hand, the customer engagement, rather than stumbling their way through Lego blocks of complexity. The Oracle Service Cloud is a modern approach to customer service that empowers your agents to achieve greater focus on improving your operational and strategic success through streamlined business processes.

    Here are some of the May 2014 release highlights for the Oracle Service Cloud:

    Performance Enhanced Desktop UI
    A modern agent desktop interface that streamlines clumsy tasks, logins, screens and workflows, optimized for agent and system performance. Improvements include faster drag-and-drop configurable views, saved searches, and improved caching for high-speed performance even during disconnected or slow internet access.

    Customer Experience Routing
    A streamlined, automatic way to connect the right customer need to the best agent skills, based on multidimensional variables such as product skills, language skills, workload, and call volume, to optimize the connection and resolution experience.

    On-The-Go Mobile
    Improvements to the Agent mobile app that extend connectivity to websites, and customer surveys that are mobile-ready and rendered for any device, ensuring the customer’s voice is captured while the insight is still top of mind.

    Infused Social Engagement
    Enhancements to infused social capabilities that allow agents to respond in social threads directly from within the agent desktop, with the information becoming part of the incident record for automatic actions (such as reply or escalate) triggered off the response.

    Front-End Siebel Contact Center
    The market-leading online Web Customer Self-Service interface from the Oracle Service Cloud is now out-of-the-box ready for Oracle Siebel customers. Deploy a new online web self-service interface in a matter of weeks to let customers self-serve and self-solve, with escalated incidents routed directly into the Oracle Siebel Contact Center.

    For more information on the latest enhancements for the Oracle Service Cloud, please see the Oracle Service Cloud May 2014 Capabilities and Benefits.

    Related blogs: Oracle Service Cloud Feb 2014

    Read the article
