Search Results

Search found 39786 results on 1592 pages for 'back button'.


  • REST and PayPal

    - by Nikolay Fominyh
    Is it OK for a REST API to return a redirect to a third party, or should it deal only with resources? Consider the following scenario:
    1. The user gets to the payment page.
    2. The user clicks the "Pay using PayPal" button.
    3. The API queries PayPal for a redirect URL.
    4. The API returns the redirect URL in its response, and a client-side redirect follows.
    5. The user completes the PayPal routine and returns with a token.
    6. The client queries the API with the token.
    7. The API checks the token and adds the money.
    Is this scenario too complex for a REST architecture?
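
    For illustration, a rough C# sketch of the two API calls in that flow (ASP.NET Web API style; PaymentRequest and IPayPalGateway are hypothetical names standing in for whatever PayPal wrapper is used, not an actual PayPal SDK):

        using System.Web.Http;

        public class PaymentRequest { public decimal Amount { get; set; } public string ReturnUrl { get; set; } }

        // Hypothetical abstraction over PayPal's API.
        public interface IPayPalGateway
        {
            string CreateRedirectUrl(decimal amount, string returnUrl);
            bool ValidateToken(string token);
        }

        public class PaymentsController : ApiController
        {
            private readonly IPayPalGateway _payPal;
            public PaymentsController(IPayPalGateway payPal) { _payPal = payPal; }

            // POST /api/payments: the redirect URL is itself the resource the client
            // asked for; the client performs the actual redirect.
            [HttpPost]
            public IHttpActionResult Post(PaymentRequest request)
            {
                string redirectUrl = _payPal.CreateRedirectUrl(request.Amount, request.ReturnUrl);
                return Ok(new { redirectUrl });
            }

            // POST /api/payments/confirm: the client posts back the token from PayPal.
            [HttpPost, Route("api/payments/confirm")]
            public IHttpActionResult Confirm([FromBody] string token)
            {
                if (!_payPal.ValidateToken(token))
                    return BadRequest("Invalid PayPal token");
                // Token checks out: credit the account here.
                return Ok();
            }
        }

    Treating the redirect URL as a resource returned by the API keeps the redirect decision on the client, which is arguably the more RESTful reading of the scenario.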

    Read the article

  • Restore default keyboard shortcut for Workspace Switcher/Show Desktop

    - by To Do
    I tried setting the keyboard shortcut for Hide normal windows (Show desktop) to Super + S. It didn't work, and now whenever I press Super + S, I get the workspace switcher. I tried setting Hide normal windows back to Ctrl + Super + S, but it doesn't work; I'm still getting the workspace switcher. How can I reset these two settings? I use Show desktop quite a lot, and it is quite annoying not being able to do it.

    Read the article

  • What causes Nautilus to restart whenever I kill it?

    - by fred.bear
    In htop, I kill Nautilus, and within one second it's back, with a new PID! The restarted Nautilus shows in the process list, but has no GUI until I manually launch Nautilus... I've heard mention that Nautilus works in lockstep with the desktop... maybe that is the reason(?). Is there some sort of "watchdog" program keeping an eye on some distro-critical programs? This doesn't seem like a Linux kernel issue, so I just wonder what is happening here.

    Read the article

  • Can not login after removing broken packages

    - by devin
    I just updated my Ubuntu to the latest version. After updating, every time I try to remove or add anything, I get this error: "Errors were encountered while processing: E: Sub-process /usr/bin/dpkg returned an error code (1)". The package manager notified me that all my GNOME packages were broken and that I couldn't make any updates until I deleted them, so I deleted all the GNOME packages. Now I cannot log in anymore; after entering my password, it flashes right back to the login screen.

    Read the article

  • My program at #MIX10

    - by Laurent Bugnion
    Getting ready to fly to Vegas for MIX10 is really an exciting time! It is also a very busy time, because we are working on a few projects that will be shown on stage, I have my presentation to prepare, and of course, as always, the book… though these days it has been a bit on the back burner, to be honest ;) I arrive in Vegas on Sunday evening around 10PM, so I won’t be able to make it to the traditional IdentityMine dinner this year. I am sure it will be fun nonetheless!

    My session: Understanding the MVVM pattern
    http://live.visitmix.com/MIX10/Sessions/EX14
    My session is scheduled on the first day, which is awesome, so I am crossing my fingers and hoping that the MIX team doesn’t change it at the last minute… The session will take place on Monday, the 15th of March, at 2PM in Room Lagoon F. Important: remember that the USA moves to summer time on Sunday, so don’t forget to adjust your watches!

    Ask the Experts
    On Monday evening, I will attend the Ask the Experts event, which takes place between 5PM and 6:30PM in the main meal hall. This will be a great occasion to grab a beer and talk about code.

    The Commons
    MIX has a great place called the Commons, a relaxed location to chill between sessions and meet tons of interesting people. I love the Commons and plan to spend a lot of time there to meet as many people as I can.

    Parties
    I was invited to a few parties and will do my best to avoid conflicts :) I plan to be at the following events: the Silverlight Mixers on Monday evening, the Insiders MIX party on Tuesday, and the Silverlight partner happy hour, also on Tuesday. This is a lot of fun, but at the same time we all know that the best value of a conference is meeting people face to face. This is just the right occasion.

    And on Thursday…
    On Thursday I will be attending a Silverlight event at the Luxor. It will be a very busy day, and a perfect way to end the conference. I fly back home on Friday morning, but due to a long stop in Washington DC (where I intend to go downtown and take pictures… except if the weather is bad, in which case I will probably go to the museum of flight), I will reach home only on Sunday.

    Getting hold of me
    The best way to reach me during MIX is to send me a message on Twitter. I will regularly tweet my location at the conference, so make sure to come and meet me. I am eager to make new friends, to talk about the fantastic work we did in WPF and Silverlight over the past year, and to hear your war stories!
    http://www.twitter.com/lbugnion

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • SharePoint Designer 2010 Workflow Email Link To Item

    - by Brian Jackett
    In this post I’ll walk you through the process of sending an email that contains a link to the current item from a SharePoint Designer 2010 workflow. This is a process that has been published on many other forums and blogs, but many that I have seen are more complex than seems necessary.

    Problem
    A common request from SharePoint users is to get an email which contains a link to review/approve/edit the workflow item. SharePoint list items contain an automatic property for Url Path, but unfortunately that Url is not properly formatted to retrieve the item if you include it directly in the message body. I tried a few solutions suggested on other blogs and forums that took a substring of the Url Path property, concatenated the display form view Url, and mixed in some other strings. While I was able to get this working in some scenarios, I still had issues in general.

    Solution
    My solution involves adding a hyperlink to the message body. This ended up being far easier than I had expected, and fairly intuitive once I found the correct property to use. Follow these steps to see what I did. First add a “Send an Email” action to your workflow. Edit the action to pull up the email configuration dialog, then click the “Add hyperlink” button. When prompted for the address of the link, click the fx button to perform a lookup. Choose Workflow Context from the “data source” dropdown, choose Current Item URL from the “field from source” dropdown, and click OK. The end result will be a hyperlink added to your email pointing to the current workflow item. Note: this link points to the non-modal dialog display form (a display form similar to what you had in 2007).

    Conclusion
    In this post I walked you through the steps to create a SharePoint Designer 2010 workflow with an email that contains a link to the current item. While there are many other options for accomplishing this out on the web, I found this to be a more concise process and easier to understand. Hopefully you found this helpful as well. Feel free to leave any comments or feedback if you’ve found other ways that were helpful to you.

    -Frog Out

    Read the article

  • indexing and crawling

    - by ricky
    Hello. My site is dailytopup.com. Earlier, my site was indexed immediately whenever I posted anything, but last month my website crashed due to a server problem and I didn't have a backup at the time, so I recovered everything from cached copies. Before doing that, I removed the old URLs from Webmaster Tools and then reposted. But since then my website is not indexed properly, which results in no optimization. Every time I have to use Fetch as Google, but this is not that effective. Can you please tell me where I'm lacking, or what I should do now?

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee
    When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that Database Mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products, that would track all these things for you. But at that moment, we had no resort but to write our own PowerShell scripts to do it.

    Now, it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster... we don't need backups"
    Sadly, I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backups, fine. How often do I take a backup?
    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for.

    Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore
    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

    Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?
    This is another topic where everybody has a favorite choice, so I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it’s also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
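
    The post mentions home-grown PowerShell scripts that interrogate each server for backup failures; as a rough C# stand-in for that idea (the connection string and the 24-hour threshold are assumptions for the example), one could query msdb directly:

        using System;
        using System.Data.SqlClient;

        // Sketch: list databases whose last full backup is missing or older than 24 hours,
        // by interrogating the server instead of waiting for it to send an alert.
        class BackupCheck
        {
            static void Main()
            {
                const string query = @"
                    SELECT d.name, MAX(b.backup_finish_date) AS last_full_backup
                    FROM sys.databases d
                    LEFT JOIN msdb.dbo.backupset b
                        ON b.database_name = d.name AND b.type = 'D'  -- 'D' = full backup
                    WHERE d.name <> 'tempdb'
                    GROUP BY d.name
                    HAVING MAX(b.backup_finish_date) IS NULL
                        OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());";

                using (var conn = new SqlConnection("Server=myServer;Integrated Security=true"))
                using (var cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine("{0}: last full backup {1}",
                                reader.GetString(0),
                                reader.IsDBNull(1) ? "never" : reader.GetDateTime(1).ToString());
                }
            }
        }

    Run from a central server against each instance in turn, this is the "ask each one if an issue had occurred" pattern the post describes.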

    Read the article

  • Goodbye my beloved Nexus One, hello Windows Phone 7

    - by George Clingerman
    Last night my wife’s Nexus One finally bit the dust. You may not know, but I’ve been nursing her Nexus One along for quite a while after her screen shattered. I was able to replace it on my own (go me!) but little quirks have been popping up and the phone was quickly deteriorating. Lately it’s been the power button. Wifey would often have to press the power button several times to get her phone to turn on, and last night it just wouldn’t wake up again. I took it apart and tried my best to see if I could somehow make it live once again, but no luck this time. It was finally ready to retire. We looked at first for a replacement phone for her, but she wasn’t really seeing anything she liked. So I decided to make the ultimate sacrifice and offer up my much loved Nexus One, and I would then get a new Windows Phone 7 device. I love T-Mobile for my service, so my choices were immediately limited to basically just a single phone: the HTC HD7. I read reviews and they were all over the board, from people loving to people hating the phone, but I decided, hey, why not, let’s take this plunge. And I did. I’ve only had the phone for about two days now, so below is my list of first-reaction pros/cons. These are basically things I’ve missed or things I’ve noticed that I really like about my new Windows Phone.

    Cons:
    * No Google Talk - I used this a LOT on my Nexus. I’ve found an application called “Flory” but it’s just an OK substitute, not the same as the full-featured GTalk I had on my Nexus.
    * Seesmic is limited - I loved the way Seesmic worked on my Nexus. It was my mobile Twitter client of choice. Everything about it worked really well. On Windows Phone 7 it’s just OK. I don’t get notification of new tweets, and it’s several clicks to even see a new tweet. It definitely needs some more development before it has the same features as it did on my Nexus.
    * Buttons don’t give great feedback - I’d read this in the reviews of the HTC HD7, and I’m finding it true myself. Pressing the buttons on the side of the phone and the power button on the top is finicky, and I have to be looking at my phone to make sure I actually got them to press.
    * Web browsing is slow - I’m not sure what’s up with this. I’m connected to my wireless network at my house, but it’s noticeably slower on my WP7 device than on my Nexus. I even switched back to verify, and it’s definitely true. Retrieving tweets, hitting up the XNA forums, and just general web activities are all much slower on my WP7. I can’t think of any reason this would be true, but it almost seems like it’s not using my wireless for everything.

    Pros:
    * It’s pretty - the phone is really gorgeous. I loved the form of my Nexus One, but the HTC HD7 is just as pretty, maybe even prettier! It’s got a nice large, bright screen. It feels good in my hand. And it even has a little kickstand to set the phone up for movie watching. Definitely a gorgeous phone.
    * LIVE integration - I lost a lot of nice integration with Google services, but I gained a lot of integration with LIVE services that I also use. Now I can see when I get new GMail messages AND Hotmail messages. And having the Xbox LIVE integration is admittedly cool as well.
    * Tile notifications rock - the Windows Phone 7 commercials are TRYING to get this message out, but they’re doing a really poor job of it. Tile notifications really do save you from your phone. I have a whole little mini informational dashboard at a glance. I unlock my phone, and at a glance I can see new IMs, new mail messages, software updates, etc., all just letting me know in the tiles I have arranged. That’s pretty cool.
    * The interface works really well - I feel super hip and cool swiping and sliding things around on my Windows Phone 7. Everything works that way, and it’s great and fast and really good looking. I’m all about me feeling cool.
    * I’m gaming more - I had gotten a few games on my Nexus One, but there really weren’t a lot of good developers flocking to the service. Just browsing through the Windows Phone 7 marketplace, I’m already seeing a ton of games I want to try and buy. And I sat down and beat Pixel Man 0 just yesterday on my phone. I’m already gaming more than I did on my Nexus One.
    * Netflix integration is fantastic - it works just like it does on my Xbox 360, and I love having this feature on my phone.
    * It’s basically a Zune - I’ve been taking my Zune to work and listening to music off of that while I code. I no longer need to take it with me; now I just sync songs onto my phone and it’s my new Zune. I freaking love that. One less device to carry around.

    All in all, my cons have really little to do with the phone (just the buttons and the web browsing) and more to do with the applications needing to catch up a bit to what I’m used to. And the pros ARE phone specific, so I’m seeing that as a good sign that I’m going to be very happy with my Windows Phone 7. So Wifey is happy having her Nexus One again, and I’m happy with my new Windows Phone 7. Life is good. Now I just need to make a game to pay for it…

    Read the article

  • Are there any drawbacks to the Major.Minor.YMDD.Build version strategy?

    - by Chu
    I'm trying to come up with a good version strategy to fit our specific needs. We've proposed settling on this, and I wanted to ask the question to see if anyone's experience would suggest avoiding it or altering it in any way. Here's our proposal. Versions are released in this format: MAJOR.MINOR.YMDD.BN. Here it is broken out:
    * MAJOR & MINOR are typical; we'll increase MINOR when we feel code and new feature sets warrant it, once every few months most likely. MAJOR will increase ~yearly.
    * YMDD: Y will be the last digit of the current year, so "1" for 2011, "2" for 2012, etc. A non-padded month will be used to keep the number smaller (9 instead of 09, for example). DD of course is the day, padded with a zero for days under 10.
    * BN: BN is the build number and increases by one any time we make a change to a branch of the code represented by the build.
    For example: if we were to make a build today, our release would be version 5.0.1707.1. I release to QA today, and 3 days from now QA finds that a change broke the save functionality on a page. Instead of changing our current development code, I'd go back to the code that I used to create version 5.0.1707.1, make the fix there, then increase the BN portion of the version and re-release 5.0.1707.2 back to QA. In short, any time a change is made to a branched version that isn't the active dev branch, we'd use the original version number and increase only the BN portion (even if the change happened 3 days, 3 weeks or 3 months from the initial release of that version). Any time we make a new release from our active dev branch, we'd come up with a new version based on the M/D of the release using the outlined strategy. We do this once every 2-3 weeks. Are there holes or pitfalls with this? If so, what are they? Thanks
    EDIT: To clarify one point that I didn't get out very well - Oct/Nov/Dec will be two digits; it's only the year that won't be. So 9 for Sept, 10 for Oct, 11 for Nov, etc.
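
    A minimal C# sketch of the proposed scheme (the major/minor/build values in the example simply mirror the 5.0.1707.1 example above):

        using System;

        static class VersionScheme
        {
            // Builds a MAJOR.MINOR.YMDD.BN version string per the proposed strategy.
            public static string Make(int major, int minor, DateTime date, int buildNumber)
            {
                // Y = last digit of the year, M = non-padded month, DD = zero-padded day.
                string ymdd = string.Format("{0}{1}{2:00}", date.Year % 10, date.Month, date.Day);
                return string.Format("{0}.{1}.{2}.{3}", major, minor, ymdd, buildNumber);
            }
        }

        // Example: VersionScheme.Make(5, 0, new DateTime(2011, 7, 7), 1) returns "5.0.1707.1".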

    Read the article

  • Saving user details on OnSuspending event for Metro Style Apps

    - by nmarun
    I recently started getting to know Metro Style Apps on Windows 8. It looks pretty interesting so far, and VS2011 definitely makes it easier to learn and create Metro Style Apps. One of the features available to developers is the ability to save user data so it can be retrieved the next time the app is run, whether it was closed by the user or launched back from a suspended state. Here’s a little history on this whole ‘suspended’ state of a Metro Style app: once the user says, ‘alt+tab...(read more)
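
    The article's own sample sits behind the link, but as a rough sketch of the idea, simple values can be persisted when the suspension event fires (the "LastSearch" setting name is a hypothetical example):

        using Windows.ApplicationModel;
        using Windows.Storage;

        partial class App
        {
            // Fires when Windows is about to suspend the app; a good place to save user data.
            private void OnSuspending(object sender, SuspendingEventArgs e)
            {
                var deferral = e.SuspendingOperation.GetDeferral();

                // LocalSettings survives suspension, termination, and the next launch.
                ApplicationData.Current.LocalSettings.Values["LastSearch"] = "back button";

                deferral.Complete();
            }
        }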

    Read the article

  • Cannot add custom keyboard shortcut

    - by krasilich
    I'm trying to assign a custom shortcut to launch my own script (I don't need it in a terminal window, just launched). So I went to Settings - Keyboard - Shortcuts - Custom Shortcuts, pressed the "+" button, and entered the name of the shortcut and the command itself. But then I cannot bind keys to that shortcut. I selected the new row and pressed the keys I wanted (Shift + F3), and nothing happened. I can change the keys for system shortcuts, but I don't have any luck with my own. Any ideas?

    Read the article

  • How to make changes in gconf-editor permanent

    - by Kristal
    Every time I start my computer, I need to manually unset the value of button_layout in gconf-editor under /apps/metacity/general to move the close, minimize and maximize buttons to the right side of the window, but every time I restart my computer it changes back to the left side. I've tried to right-click the setting and choose "set as default", but this doesn't work. How can I make this change permanent?

    Read the article

  • File / Application association using a custom command is gone?

    - by Christian Vielma
    In previous Ubuntu releases, when you wanted to select/change the application used to open a specific file (right-click / Open With Other Application, or Properties), you were able to write a custom command to open the file. This was very useful, but now, in 11.10, I can't find this option; it only shows me a list of applications and a button to look for applications on the Internet. Is there a way to restore the field for writing custom commands to open files?

    Read the article

  • Gnome Shell Thunderbird Mail Notification

    - by Nerdfest
    Does anyone know of a way to get persistent panel notifications in Gnome 3 on Oneiric? It's one of the few things holding me back from using Gnome 3 regularly. I've actually found a way of moving the notifications from the (usually hidden) bottom bar to the top, but it does not move the Thunderbird icon. The icon also only tends to appear the first time mail is received. I'm very surprised this basic piece of functionality doesn't exist for Gnome Shell.

    Read the article

  • Cannot install Skype on Ubuntu 12.04 64-bit. Help?

    - by Yogesh Dhamija
    When I try to install it from the Software Center, it displays an error:
    The following packages have unmet dependencies: skype: Depends: skype-bin but it is a virtual package
    Running sudo apt-get install skype gives the same error. When I download it from the official website, the .deb file opens in the Software Center and displays "Cannot install 'ia32-libs'", and doesn't even show the install button. Please help. How can I install Skype?

    Read the article

  • Why is my system freezing when I switch users

    - by ZeroDivide
    Hello. I've recently upgraded from 13.04 to 13.10 64-bit. I'm running AMD graphics with the proprietary drivers. I have two user accounts: mine (administrator) and my girlfriend's (standard). My girlfriend clicks "switch user" from my lock screen and logs in fine. I then try to click "switch user" from her lock screen, and everything goes black. Then the monitor blinks on and off with just a single cursor. I have no way to access the terminal, the system is unresponsive, and I have to hit the power button. Even Ctrl + Alt + F4 or Ctrl + Alt + T doesn't get me a terminal. When I press the power button on my system, it does start printing out the shutdown sequence on the monitor.

    Here is my .xsession-errors:
    Script for ibus started at run_im.
    Script for auto started at run_im.
    Script for default started at run_im.

    Here is hers:
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd main process ended, respawning
    init: at-spi2-registryd respawning too fast, stopped
    init: logrotate main process (4726) killed by TERM signal
    init: upstart-dbus-session-bridge main process (4865) terminated with status 1
    init: gnome-settings-daemon main process (4843) terminated with status 1
    init: gnome-session main process (4852) terminated with status 1
    init: unity-panel-service main process (4863) killed by KILL signal

    I found some advice in a forum to look for at-spi2-registryd in my system logs; perhaps it will be useful. Executing this:
    sudo grep -r at-spi2-registryd /var/log/*
    produces this:
    /var/log/lightdm/x-1-greeter.log:** (at-spi2-registryd:4384): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
    /var/log/lightdm/x-1-greeter.log:** (at-spi2-registryd:4384): WARNING **: Unable to register client with session manager
    /var/log/lightdm/x-2-greeter.log.old:** (at-spi2-registryd:7447): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
    /var/log/lightdm/x-2-greeter.log.old:** (at-spi2-registryd:7447): WARNING **: Unable to register client with session manager
    /var/log/lightdm/x-0-greeter.log:** (at-spi2-registryd:1378): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
    /var/log/lightdm/x-0-greeter.log:** (at-spi2-registryd:1378): WARNING **: Unable to register client with session manager
    /var/log/lightdm/x-0-greeter.log.old:** (at-spi2-registryd:1357): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
    /var/log/lightdm/x-0-greeter.log.old:** (at-spi2-registryd:1357): WARNING **: Unable to register client with session manager

    Any ideas what is going on?

    Read the article

  • The Best How-To Geek Articles for March 2012

    - by Asian Angel
    March was a busy month here at HTG, where we covered topics such as properly scanning photos (and getting better images), the best tips for securing your data, identifying network abuse with Wireshark, and more. Join us as we look back at the most popular articles from this past month.
    * How to Own Your Own Website (Even If You Can’t Build One) Pt 1
    * What’s the Difference Between Sleep and Hibernate in Windows?
    * Screenshot Tour: XBMC 11 Eden Rocks Improved iOS Support, AirPlay, and Even a Custom XBMC OS

    Read the article

  • Sound works but cannot change settings or volume

    - by W. Conrad Walden
    The volume is always at max. The volume indicator shows three dashes and does not show anything when I click on it. Clicking the "Sound" button in Settings causes it to recursively reopen Settings. pulseaudio --start doesn't help.
    EDIT: This only started happening after the upgrade to 13.10.
    EDIT 2: This is in Unity, yes.
    EDIT 3: killall unity-panel-service works and does reload the panels, but the problems above persist.

    Read the article

  • Why do we (really) program to interfaces?

    - by Kyle Burns
    One of the earliest lessons I was taught in Enterprise development was "always program against an interface".  This was back in the VB6 days, and I quickly learned that no code would be allowed to move to the QA server unless my business objects and data access objects were each defined as an interface with a matching implementation class.  Why?  "It's more reusable" was one answer.  "It doesn't tie you to a specific implementation" was a slightly more knowing answer.  And let's not forget the discussion-ending "it's a standard".  The problem with these responses was that senior people didn't really understand the reason we were doing the things we were doing, and because of that, we were entirely unable to realize the intent behind the practice - we simply used interfaces and had a bunch of extra code to maintain to show for it.

    It wasn't until a few years later that I finally heard the term "Inversion of Control".  Simply put, Inversion of Control takes the creation of objects that used to be within the control (and therefore a responsibility) of your component and moves it to some outside force.  For example, consider the following code, which follows the old "always program against an interface" rule in the manner of many corporate development shops:

    ICatalog catalog = new Catalog();
    Category[] categories = catalog.GetCategories();

    In this example, I met the requirement of the rule by declaring the variable as ICatalog, but I didn't achieve "it doesn't tie you to a specific implementation" because I explicitly created an instance of the concrete Catalog object.  If I want to test the functionality of the code I just wrote, I have to have an environment in which Catalog can be created along with any of the resources upon which it depends (e.g. configuration files, database connections, etc.) in order to test my functionality.  That's a lot of setup work, and one of the things that I think ultimately discourages real buy-in of unit testing in many development shops.

    So how do I test my code without needing Catalog to work?  A very primitive approach I've seen is to change the line that instantiates catalog to read:

    ICatalog catalog = new FakeCatalog();

    Once the test is run and passes, the code is switched back to the real thing.  This obviously poses a huge risk of introducing test code into production, and in my opinion is worse than just keeping the dependency and its associated setup work.  Another popular approach is to make use of Factory methods, which use an object whose "job" is to know how to obtain a valid instance of the object.  Using this approach, the code may look something like this:

    ICatalog catalog = CatalogFactory.GetCatalog();

    The code inside the factory is responsible for deciding "what kind" of catalog is needed.  This is a far better approach than the previous one, but it does make projects grow considerably, because now in addition to the interface, the real implementation, and the fake implementation(s) for testing, you have added a minimum of one factory (or at least a factory method) for each of your interfaces.  Once again, developers say "that's too complicated and has me writing a bunch of useless code" and quietly slip back into just creating a new Catalog and chalking any test failures up to "it will probably work on the server".

    This is where software intended specifically to facilitate Inversion of Control comes into play.  There are many libraries that take on the Inversion of Control responsibilities in .Net, and most of them have their own pros and cons.  From this point forward I'll discuss concepts from the standpoint of the Unity framework produced by Microsoft's Patterns and Practices team.  I'm primarily focusing on this library because questions about it inspired this posting.

    At Unity's core, and that of most any IoC framework, is a catalog or registry of components.  This registry can be configured either through code or using the application's configuration file, and in the simplest terms it says "interface X maps to concrete implementation Y".  It can get much more complicated, but I want to keep things at the "what does it do" level instead of "how does it do it".  The object that exposes most of the Unity functionality is the UnityContainer.  This object exposes methods to configure the catalog as well as the Resolve<T> method, which is used to obtain an instance of the type represented by T.  When using the Resolve<T> method, Unity does not necessarily have to just "new up" the requested object; it can also track the dependencies of that object and ensure that the entire dependency chain is satisfied.

    There are three basic ways that I have seen Unity used within projects: classes directly using the Unity container, classes requiring injection of dependencies, and classes making use of the Service Locator pattern.

    The first usage of Unity is when classes are aware of the Unity container and directly call its Resolve method whenever they need the services advertised by an interface.  The upside of this approach is that IoC is utilized, but the downside is that every class has to be aware that Unity is being used and is tied directly to that implementation.

    Many developers don't like the idea of as close a tie to a specific IoC implementation as is represented by using Unity within all of your classes, and for the most part I agree that this isn't a good idea.  As an alternative, classes can be designed for Dependency Injection.  Dependency Injection is where a force outside the class itself manipulates the object to provide implementations of the interfaces that the class needs to interact with the outside world.  This is typically done either through constructor injection, where the object has a constructor that accepts an instance of each interface it requires, or through property setters accepting the service providers.  When using dependency injection, I lean toward the use of constructor injection because I view the constructor as a much better way to "discover" what is required for the instance to be ready for use.  During resolution, Unity looks for an injection constructor and will attempt to resolve instances of each interface required by the constructor, throwing an exception if unable to meet the advertised needs of the class.  The upside of this approach is that the needs of the class are very clearly advertised and the class is unaware of which IoC container (if any) is being used.  The downside is that you're required to maintain the objects passed to the constructor as instance variables throughout the life of your object, and objects that coordinate with many external services require a lot of additional constructor arguments (this gets ugly and may indicate a need for refactoring).

    The final way that I've seen and used Unity is to make use of the ServiceLocator pattern, of which the Patterns and Practices team has also provided a Unity-compatible implementation.  When using the ServiceLocator, your class calls ServiceLocator.Retrieve in places where it would have called Resolve on the Unity container.  Like using Unity directly, it ties you directly to the ServiceLocator implementation and makes your code aware that dependency injection is taking place, but it does have the upside of giving you the freedom to swap out the underlying IoC container if necessary.  I'm not hugely concerned with hiding IoC entirely from the class (I view this as a "nice to have"), so the single biggest problem that I see with the ServiceLocator approach is that it provides no way to proactively advertise needs in the way that constructor injection does, allowing more opportunity for difficult-to-track runtime errors.

    This blog entry has not been intended in any way to be a definitive work on IoC, but rather as something to spur thought about why we program to interfaces and some ways to reach the intended value of the practice instead of having it just complicate your code.  I hope that it helps somebody begin or continue a journey away from being a "Cargo Cult Programmer".
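
    As a minimal sketch of the constructor injection approach described above (the CatalogViewer class and the stubbed-out Catalog are illustrative additions, not from the original post):

        using Microsoft.Practices.Unity;

        public class Category { public string Name { get; set; } }

        public interface ICatalog { Category[] GetCategories(); }

        public class Catalog : ICatalog
        {
            // Real data access would live here; stubbed for the sketch.
            public Category[] GetCategories() { return new Category[0]; }
        }

        // Constructor injection: the class advertises its needs in the constructor
        // and never "news up" a concrete Catalog itself.
        public class CatalogViewer
        {
            private readonly ICatalog _catalog;
            public CatalogViewer(ICatalog catalog) { _catalog = catalog; }
            public int CountCategories() { return _catalog.GetCategories().Length; }
        }

        class Program
        {
            static void Main()
            {
                // The registry: "interface X maps to concrete implementation Y".
                var container = new UnityContainer();
                container.RegisterType<ICatalog, Catalog>();

                // Unity finds CatalogViewer's injection constructor and satisfies it.
                var viewer = container.Resolve<CatalogViewer>();
                System.Console.WriteLine(viewer.CountCategories());
            }
        }

    In a test, the same CatalogViewer can be handed a fake ICatalog directly, with no container and no environment setup, which is the payoff the post describes.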

    Read the article

  • Acer Allionone Z5810 Touchscreen Issues 12.04

    - by Johannes
    I have an Acer Allionone Z5810, and I can't get the touchscreen to work after I install 12.04.

    Here is the lsusb output:
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 001 Device 003: ID 0596:0508 MicroTouch Systems, Inc.
    Bus 001 Device 004: ID 04b8:0005 Seiko Epson Corp. Printer
    Bus 001 Device 005: ID 07ca:1336 AVerMedia Technologies, Inc.
    Bus 002 Device 003: ID 04ca:0058 Lite-On Technology Corp.
    Bus 002 Device 004: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
    Bus 002 Device 005: ID 04f2:b23f Chicony Electronics Co., Ltd

    xinput --list:
    Virtual core pointer                          id=2   [master pointer (3)]
      ↳ Virtual core XTEST pointer                id=4   [slave pointer (2)]
      ↳ Lite-On Technology Corp. Wireless Device  id=9   [slave pointer (2)]
      ↳ Lite-On Technology Corp. Wireless Device  id=10  [slave pointer (2)]
    Virtual core keyboard                         id=3   [master keyboard (2)]
      ↳ Virtual core XTEST keyboard               id=5   [slave keyboard (3)]
      ↳ Power Button                              id=6   [slave keyboard (3)]
      ↳ Power Button                              id=7   [slave keyboard (3)]
      ↳ Lite-On Technology Corp. Wireless Device  id=8   [slave keyboard (3)]
      ↳ USB 2.0 camera                            id=11  [slave keyboard (3)]
      ↳ AT Translated Set 2 keyboard              id=12  [slave keyboard (3)]

    My xorg.conf contains:
    # nvidia-xconfig: X configuration file generated by nvidia-xconfig
    # nvidia-xconfig: version 304.48 (buildmeister@swio-display-x86-rhel47-04.nvidia.com) Sun Sep 9 21:31:39 PDT 2012

    Section "ServerLayout"
        Identifier "Layout0"
        Screen 0 "Screen0"
        InputDevice "Keyboard0" "CoreKeyboard"
        InputDevice "Mouse0" "CorePointer"
        InputDevice "TouchScreen"
    EndSection

    Section "Files"
    EndSection

    Section "InputDevice"
        Identifier "TouchScreen"
        Driver "microtouch"
        Option "Type" "finger"
        Option "Device" "/dev/ttyS3"
        Option "ScreenNo" "0"
        Option "MinX" "0"
        Option "MaxX" "16383"
        Option "MinY" "0"
        Option "MaxY" "16383"
        Option "SendCoreEvents" "yes"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Mouse0"
        Driver "mouse"
        Option "Protocol" "auto"
        Option "Device" "/dev/psaux"
        Option "Emulate3Buttons" "no"
        Option "ZAxisMapping" "4 5"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Keyboard0"
        Driver "kbd"
    EndSection

    Section "Monitor"
        Identifier "Monitor0"
        VendorName "Unknown"
        ModelName "Unknown"
        HorizSync 28.0 - 33.0
        VertRefresh 43.0 - 72.0
        Option "DPMS"
    EndSection

    Section "Device"
        Identifier "Device0"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device "Device0"
        Monitor "Monitor0"
        DefaultDepth 24
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Read the article

  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
    So in my previous posts I showed you how I used EDMX to quickly build my business objects within XPO and XAF.  But how do you test whether your business objects are actually doing what you want, and verify that your business logic is correct?  Well, I was reading my monthly MSDN magazine late last year and came across an article about using SpecFlow and WatiN to build BDD tests.  So why not use these same techniques to write SpecFlow-style scripts and have them generate EasyTest scripts for use with XAF?  Let me outline and show a few things below.  I plan on releasing this code in a short while; I just wanted to preview what I was thinking.

    Before we begin…
    First, if you have not read the article in MSDN, here is the link to the article where I found my inspiration.  It covers an overview of BDD vs. TDD, how to write some of the SpecFlow syntax, and how to use the “Steps” logic to create your own tests.

    Second, if you have not heard of EasyTest from DevExpress, I strongly recommend you review it here.  It basically takes the power of XAF and the beauty of your application and allows you to create text-based files to execute automated commands within your application.

    Why would we do this?  Because, as you will see below, the Cucumber syntax is easier for business analysts to interpret and digest the business rules from.  You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls, located here.  The basics of the syntax are Given X, When Y, Then Z.  For example: Given I am at the login screen, When I enter my login credentials, Then I expect to see the home screen.  Pretty easy syntax to follow.

    Finally, we will need to download and install SpecFlow.  You can find it on their website here.  Once you have this installed, let’s write our first test.

    Let’s get started…
    So where to start?  Create a new testing project within your solution.  I typically name this with a naming convention similar to that used by XAF: the project name plus .FunctionalTests (i.e. AlbumManager.FunctionalTests).  Remove the basic test that is created for you.  We will not use the default test, but rather create our own SpecFlow “Feature” files.  Add a new item to your project and select the SpecFlow Feature file under C#.  Name your feature files, as you do your class files, after the test they are performing.

    Now you can crack open your new feature file and write the actual test.  Make sure to have your Ninja Scrolls from above, as they provide valuable resources on how to write your test syntax.  In the test below, you can see how I defined the documentation in the Feature section.  This is strictly for our purposes of readability and does not affect the test.  The next section is the Scenario Outline, which is considered a test template.  You can see the brackets <> around the fields that will be filled in for each test.  So in the example below: Given I am starting a new test and the application is open means I want a new EasyTest file and the Windows application generated by XAF is open.  Next, When I am at the Albums screen tells XAF to navigate to the Albums list view.  And I click the New:Album button tells XAF to click the New button on the list grid.  And I enter the following information tells XAF which fields to complete with the mapped values.  And I click the Save and Close button causes the record to be saved and the detail form to be closed.  Then I verify results tests the input data against what is visible in the grid, to ensure that your record was created.

    The Scenarios section gives each test a unique name and then fills in the values for each test.  This way you can use the same test to make multiple passes with different data.

    Almost there.  Now we must save the feature file, and the BDD tests will be written using standard unit test syntax.  This is all handled for you by SpecFlow, so just save the file.  What you will see in your Test List Editor is a unit test for each of the scenarios you just built.  You can now use standard unit testing frameworks to execute the test as you desire.  As you would expect, these BDD SpecFlow tests can be automated into your build process to ensure that your business requirements are satisfied each and every time.

    How does it work?
    What we have done is to intercept the testing logic at runtime and interpret the SpecFlow syntax into EasyTest syntax.  This is the basic StepDefinitions that we are working on now.  We expect to put these on CodePlex within the next few days.  You can always override them and make your own rules as you see fit for your project.  Follow the MSDN magazine article above to start your own.  You can see part of our implementation below.

    As you can gather from the MSDN article and the code sample below, we have created our own common rules to build the above syntax.  The code implementation for these rules basically saves your information from the feature file into an EasyTest file format.  It then executes the EasyTest file and parses the XML results of the test.  If the test succeeds, the test is passed.  If the test fails, the EasyTest failure message is logged and the screenshot (as captured by EasyTest) is saved for your review.

    Again, we are working on getting this code ready for mass consumption, but at this time it is not ready.  We will post another message when it is ready, with all the details about usage and setup.  Thanks
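
    The post's actual StepDefinitions are not reproduced here, so as a stand-in, here is a minimal C# sketch of what SpecFlow step definitions in this spirit might look like (the step regexes mirror the scenario above; the translation to EasyTest commands is only hinted at in comments, since the real rules were to be published on CodePlex):

        using System.Collections.Generic;
        using TechTalk.SpecFlow;

        // Sketch: each step records what it needs so the run can later be written
        // out as an EasyTest script, executed, and its XML results parsed.
        [Binding]
        public class EasyTestSteps
        {
            private readonly List<string> _steps = new List<string>();

            [Given(@"I am starting a new test and the application is open")]
            public void GivenIAmStartingANewTest()
            {
                _steps.Add("new-test"); // would emit the EasyTest commands that launch the app
            }

            [When(@"I am at the (.*) screen")]
            public void WhenIAmAtTheScreen(string screen)
            {
                _steps.Add("navigate:" + screen); // would emit an EasyTest navigation command
            }

            [When(@"I click the (.*) button")]
            public void WhenIClickTheButton(string button)
            {
                _steps.Add("click:" + button); // would emit an EasyTest action command
            }

            [Then(@"I verify results")]
            public void ThenIVerifyResults()
            {
                // Write the accumulated steps to an EasyTest file, run it, parse the
                // XML results, and fail the unit test if EasyTest reports a failure.
            }
        }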

    Read the article

  • Retrieve Windows 8 Product Key from mainboard

    - by Brewer Gorge
    My new laptop came preinstalled with Windows 8. Naively, as I am, I just formatted the hard drive and installed fine old Ubuntu. Now I want to install Windows 8 for dual boot again, but I have no DVD, and to download the ISO one needs a product key. That key is not on the back of the laptop anymore, but somewhere on the mainboard. Is there any way to recover the product key from the mainboard using Ubuntu?

    Read the article
