Search Results

Search found 14987 results on 600 pages for 'standard dialog'.


  • Internet Explorer 9: cannot open file on download, Ctrl+J doesn't work, can't open list of downloads

    - by simonsabin
    If any of the above symptoms are causing you a problem, i.e.:
    1. You download a file and the download dialog disappears.
    2. You select Open when you download a file and nothing happens.
    3. View Downloads doesn't work.
    4. Ctrl+J (View Downloads) doesn't work.
    The solution is to clear your download history. See "IE9 - View downloads / Ctrl+J do not open. I cannot open any file. But SAVE function still work fine. 64 bit version." for details; the answer is provided by Steven S. on June 20th. I hope that...(read more)

    Read the article

  • Screen magnifier wrecks login screen — how to get rid of it?

    - by Jonik
    In the Ubuntu / GNOME login screen, I tried some of the different accessibility settings out of curiosity. Big mistake! Enabling the "use screen magnifier" (or whatever) option breaks the view horribly and makes it impossible to even access the settings again. Yes, I tried to access every corner of the screen using the mouse, but there's just chaos everywhere. Fortunately, I can still log in (pressing "Esc" makes the normal login dialog appear on the left-hand monitor). My question is: how do I disable the "magnifier" option outside of the login screen itself? (By editing some config file, perhaps?) I don't care about getting the magnifier mode to work properly; just make it go away altogether, please.

    Read the article

  • Change the Default Font Size in Word

    - by Matthew Guay
    Are you frustrated by always having to change the font size before you create a document in Word? Here's how you can end that frustration and set your favorite default font size once and for all!

    Microsoft changed the default font to 11 point Calibri in Word 2007 after years of 12 point Times New Roman being the default. Although it can be easily overlooked, there are ways in Word to change the default settings to anything you want. Whether you want to change your default to 12 point Calibri or to 48 point Comic Sans, here's how to change your default font settings in Word 2007 and 2010.

    Changing Default Fonts in Word

    To change the default font settings, click the small box with an arrow in the bottom right corner of the Font section of the Home tab in the Ribbon.

    In the Font dialog box, choose the default font settings you want. Notice that the Font box says "+Body"; this means that the font will be chosen by the document style you choose, and you are only selecting the default font style and size. So, if your style uses Calibri, then your font will be Calibri at the size and style you chose. If you'd prefer to choose a specific font to be the default, just select one from the drop-down box and this selection will override the font selection in your document style.

    Here we left all the default settings, except we selected 12 point font in the Latin text box (this is your standard body text; users of Asian languages such as Chinese may see a box for Asian languages). When you've made your selections, click the "Set as Default" button in the bottom left corner of the dialog.

    You will be asked to confirm that you want these settings to be made default. In Word 2010, you will be given the option to set these settings for this document only or for all documents. Click the bullet beside "All documents based on the Normal.dotm template?", and then click OK. In Word 2007, simply click OK to save these settings as default.

    Now, whenever you open Word or create a new document, your default font settings should be set exactly to what you want. Simply repeat these steps if you want to change your default font settings again.

    Editing your default template file

    Another way to change your default font settings is to edit your Normal.dotm file. This file is what Word uses to create new documents; it basically copies the formatting in this document each time you make a new document.

    To edit your Normal.dotm file, enter the following in the address bar in Explorer or in the Run prompt:

    %appdata%\Microsoft\Templates

    This will open your Office Templates folder. Right-click on the Normal.dotm file, and click Open to edit it. Note: do not double-click on the file, as this will only create a new document based on Normal.dotm, and any edits you make will not be saved in this file.

    Now, change any font settings as you normally would. Remember: anything you change or enter in this document will appear in any new document you create using Word.

    If you want to revert to your default settings, simply delete your Normal.dotm file. Word will recreate it with the standard default settings the next time you open Word.

    Please note: changing your default font size will not change the font size in existing documents, so these will still show the settings you used when they were created. Also, some add-ins can affect your Normal.dotm template. If Word does not seem to remember your font settings, try disabling Word add-ins to see if this helps.
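
    If you would rather script this change than click through the dialogs, the Normal template can also be edited programmatically. Below is a minimal, untested C# sketch using the Word interop assemblies (it assumes Word and the Office primary interop assemblies are installed, plus C# 4.0 or later for the optional interop parameters; the font values are just examples):

        using Word = Microsoft.Office.Interop.Word;

        class SetDefaultFont
        {
            static void Main()
            {
                var app = new Word.Application();
                try
                {
                    // Open Normal.dotm itself for editing, not a document based on it
                    // (the same distinction the article makes about double-clicking).
                    Word.Document normal = app.NormalTemplate.OpenAsDocument();

                    // The Normal style drives default body text in new documents.
                    Word.Style style = normal.Styles[Word.WdBuiltinStyle.wdStyleNormal];
                    style.Font.Name = "Calibri";
                    style.Font.Size = 12;

                    normal.Save();
                    normal.Close();
                }
                finally
                {
                    app.Quit();
                }
            }
        }
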
    Conclusion

    Sometimes it's the small things that can be the most frustrating. Getting your default font settings the way you want is a great way to take away a frustration and make you more productive.

    And here's a quick question: do you prefer the new default 11 point Calibri, or do you prefer 12 point Times New Roman or some other combination? Sound off in the comments, and let the world know your favorite font settings.

    Read the article

  • How to Fix "Read-only file system" error when I run something as sudo and try to make a folder/file?

    - by Andrew
    When I try to save something or rename a file/folder, it says "Read-only file system". If I run something as root in the terminal, it says:

    sudo: unable to open /var/lib/sudo/"My User Name"/0: Read-only file system
    W: Not using locking for read only lock file /var/lib/dpkg/lock
    E: Unable to write to /var/cache/apt/
    E: The package lists or status file could not be parsed or opened.

    When I make a folder, the details in the Nautilus error dialog say:

    Error creating directory: Read-only file system

    I would show you a picture of it, but it isn't even letting me save onto my flash drive. Please help me.

    Read the article

  • Azure Storage Explorer

    - by kaleidoscope
    Azure Storage Explorer – another way to work with the data behind your services in the cloud.

    Azure Storage Explorer is a useful GUI tool for inspecting and altering the data in your Azure cloud storage projects, including the logs of your cloud-hosted applications. All three types of cloud storage can be viewed: blobs, queues, and tables. You can also create or delete blob/queue/table containers and items. Text blobs can be edited, and all data types can be imported/exported between the cloud and local files. Table records can be imported/exported between the cloud and spreadsheet CSV files.

    Why Azure Storage Explorer?

    Azure Storage Explorer is a licensed CodePlex project provided by Neudesic, a Microsoft partner. It is a simple UI that requires you to input your blob storage name, access key and endpoints in the Storage Settings dialog.

    For more details please refer to the link: http://azurestorageexplorer.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=35189

    Anish, S
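
    For a sense of what the tool does under the covers, the same storage account data can be read programmatically. Here is a minimal C# sketch using the classic Microsoft.WindowsAzure.StorageClient library from that SDK era (the account name and key are placeholders; they are the same values you would enter in the Storage Settings dialog):

        using System;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class ListContainers
        {
            static void Main()
            {
                // Placeholder credentials: substitute your own account name and access key.
                CloudStorageAccount account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<your-key>");

                CloudBlobClient blobClient = account.CreateCloudBlobClient();

                // Enumerate blob containers: the same list the Explorer GUI shows.
                foreach (CloudBlobContainer container in blobClient.ListContainers())
                {
                    Console.WriteLine(container.Name);
                }
            }
        }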

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx

    On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but one of them stood out: remote debug support for Windows Azure Cloud Service (a.k.a. WACS).

    Live Debug is a Nightmare for Cloud Applications

    When we are developing against the public cloud, debugging might be the most difficult task, especially after the application has been deployed. In order to minimize the debugging effort, Microsoft provided a local emulator for cloud service and storage once the Windows Azure platform was announced. By using the local emulator, developers can run their application on a local machine with almost the same behavior as on Windows Azure, and it can be debugged easily and quickly. But once we deploy our application to Azure, we have to use logs and the diagnostic monitor to debug, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug" which allows any workstation, under any user, to attach to a remote process. This is less secure compared to authenticated remote debug, but much easier and simpler to use. Now, with Windows Azure SDK 2.2, we can attach to our application on Windows Azure from our local machine, and it's very easy.

    How to Use the Remote Debugger

    First, let's create a new Windows Azure Cloud Project in Visual Studio and select ASP.NET Web Role. Then create an ASP.NET WebForm application. Then right-click on the cloud project and select "publish". In the publish dialog we need to make sure the application will be built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop as I will log into the virtual machine later in this post; it's NOT necessary for remote debug. Then select the "advanced settings" tab and make sure "Enable Remote Debugger for all roles" is checked. In WACS, a cloud service can have one or more roles, and each role can have one or more instances. The remote debugger will be enabled for all roles and all instances if we check this box; currently there's no way for us to specify which role(s) and which instance(s) to enable. Finally, click the "publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger.

    Attaching to the remote process is easy. Open the "Server Explorer" window in Visual Studio and expand the "Cloud Services" node, find the cloud service, role and instance we have just published and want to debug, right-click on the instance and select "attach debugger". Then after a while (it depends on how fast our Internet connection to the Windows Azure data center is) Visual Studio will switch to debug mode. Let's add a breakpoint in the default web page's form load function and refresh the page in the browser to see what happens. We can see that our application was stopped at the breakpoint; the call stack and watch features are all available to use. Now let's hit F5 to continue, and back in the browser we will find the page was rendered successfully.

    What's Under the Hood

    The remote debugger is a WACS plugin. When we check "enable remote debugger" in the publish dialog, Visual Studio adds two cloud configuration settings in the CSCFG file. Since they are appended at deployment, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them there.

    At the same time, Visual Studio generates a certificate and includes it in the package for the remote debugger. If we go to the Azure management portal we will find a certificate under our application which was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop, there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger; as below, you can see there are two new ports opened on my application. Finally, in our WACS virtual machine, the Windows Azure system copies the remote debug component based on which version of Visual Studio we are using, and starts it. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.

    Summary

    In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2: the remote debugger. It allows us to attach to our application on a Windows Azure virtual machine from our local machine once it has been deployed. The remote debugger is powerful and easy to use, but it brings more security risk. And since it's only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publish our beta version to the Azure staging slot), rather than for production.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • OCS 2007 R2 User Properties Error Message

    - by BWCA
    When I attempted to configure one of our users' Meeting settings using the Microsoft Office Communications Server 2007 R2 Administration Tool, I received a "Validation failed – Validation failed with HRESULT = 0XC3EC7E02" dialog box error message. I received the same error message when I tried to configure the user's Telephony and Other settings.

    Using ADSI Edit, I compared the settings of a user that I had no problems configuring and the user that I had problems configuring. For the user I had problems configuring, I noticed a trailing space after the last phone number digit in the user's msRTCSIP-Line attribute. After I removed the trailing space from the attribute and waited for Active Directory replication to complete, I was able to configure the user's Meeting settings (and Telephony/Other settings) without any problems.

    If you get the error message, check your users' msRTCSIP-xxxxx attributes in Active Directory using ADSI Edit for any trailing spaces, typos, or other mistakes.
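
    If you have many users to check, scanning for the trailing space programmatically beats clicking through ADSI Edit user by user. Here is a minimal, hypothetical C# sketch using System.DirectoryServices (run it on a domain-joined machine with appropriate rights; the attribute name comes from the article, everything else is illustrative):

        using System;
        using System.DirectoryServices;

        class FindTrailingSpaces
        {
            static void Main()
            {
                // Find every object in the default domain that has msRTCSIP-Line set.
                using (DirectorySearcher searcher = new DirectorySearcher("(msRTCSIP-Line=*)"))
                {
                    searcher.PropertiesToLoad.Add("msRTCSIP-Line");
                    searcher.PropertiesToLoad.Add("sAMAccountName");

                    using (SearchResultCollection results = searcher.FindAll())
                    {
                        foreach (SearchResult result in results)
                        {
                            string line = (string)result.Properties["msRTCSIP-Line"][0];

                            // Flag values with leading or trailing whitespace.
                            if (line != line.Trim())
                            {
                                Console.WriteLine("{0}: \"{1}\"",
                                    result.Properties["sAMAccountName"][0], line);
                            }
                        }
                    }
                }
            }
        }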

    Read the article

  • IntelliTrace Causing Slow WPF Debugging in Visual Studio 2010

    - by WeigeltRo
    Just a quick note to myself (and others who may stumble across this blog entry via a web search): if a WPF application runs slowly inside the debugger of Visual Studio 2010, but perfectly fine without a debugger (e.g. by hitting Ctrl-F5), then the reason may be IntelliTrace. In my case, switching off IntelliTrace (only available in the Ultimate Edition of Visual Studio 2010) helped get rid of the sluggish behavior of a DataGrid. In the "Tools" menu select "Options", in the Options dialog click "IntelliTrace", and then uncheck "Enable IntelliTrace". Note that I do not have access to Visual Studio 2012 at the time of this writing, thus I cannot make a statement about its debugging behavior.

    Read the article

  • NX/SSH remote access with Remmina

    - by Niklas
    After many days and a lot of frustration, I managed to get FreeNX to work on my home server. I can connect to it with NoMachine's Linux client, but I want to use Remmina for this purpose. The problem is that I don't know exactly how to connect to an NX server with the program. In the connection dialog, I've chosen SSH as the protocol, and I've correctly added the IP and port. Under "SSH Authentication" I've added my user name on the server, chosen "identity file", and selected the SSH key I generated (which works with nxclient). (When am I supposed to provide my password for the user on the server?) When I try to connect I get the message:

    SSH public key authentication failed: Public key file doesn't exist

    Why do I get this message? How should I proceed to get the authentication working? Thank you for your time!

    Read the article

  • Ubuntu 11.04: Add right-click menu item "Compress as ZIP"

    - by Ananthavel Sundararajan
    Step 1: I want to add a "Compress as ZIP" menu item on right-click. I know I can change the default compress format to "ZIP" using gconf-editor, but I want to add a new menu item that compresses to ZIP without opening any other option dialog.

    Step 2: I want to compress a file as ZIP and rename it as an "epub". Please let me know: is it possible to zip and rename by adding a single menu item?

    I'm using Ubuntu 11.04 and installed "Nautilus-Action-Configurations", but was unsuccessful.

    N.B. I have read this Ask Ubuntu Q&A; I don't want a new window to open to choose the format. It should be saved as a ZIP straight away.

    Read the article

  • How to cancel Update Manager downloading flashplugin-installer?

    - by kev
    Update Manager has been frozen for about 60 minutes while downloading flashplugin-installer, and the Cancel button of the "Applying Changes" dialog is disabled. When I click the top-left x, it doesn't respond. How do I cancel downloading flashplugin-installer?

    ...SKIP...
    Setting up libxatracker1 (8.0.2-0ubuntu3.1) ...
    Setting up update-manager-core (1:0.156.14.5) ...
    Setting up update-manager (1:0.156.14.5) ...
    Setting up update-notifier-common (0.119ubuntu8.4) ...
    flashplugin-installer: downloading http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.2.202.236.orig.tar.gz

    Read the article

  • Issue 15: SVP Focus

    - by rituchhibber
    SVP FOCUS -- Chris Baker, SVP, Oracle Worldwide ISV-OEM-Java Sales

    Chris Baker is the Global Head of ISV/OEM Sales, responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills.

    "By taking part in marketing activities, our partners accelerate their sales cycles."

    Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA?

    Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straightforward. We want all of our ISV partners and OEMs to concentrate on the things that they do best: building applications to meet the unique industry and functional requirements of their customers. We want to ensure that we deliver a best-in-class application platform so ISVs are free to concentrate their effort on their application functionality and user experience. We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification, because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. It's really important that the architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success.

    How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need?

    We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems, including Exadata and Exalogic. So, for example, they can become 'Database Ready', which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready or Oracle Solaris Ready, which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic.

    Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies, which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme.

    How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly?

    One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build into our products to help them fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented in an on-premise environment. But it wasn't just the configuration of the application that took the time; it was actually the infrastructure: the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry.

    What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships?

    My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the machine-to-machine (M2M) market, or 'The Internet of Things'. Over the next few years, many millions, indeed billions, of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications.

    The only way that our partners will be able to provide a single-vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data: a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world based on Oracle.

    What opportunities are immediately opened to new ISV partners joining the OPN?

    As you know, OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation.

    Finally, are there any other messages that you would like to share with the Oracle ISV community?

    The crucial message that I always like to reinforce is architecture, architecture and architecture! The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "I will be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud". The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that: to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous; just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them.

    Oracle OpenWorld 2010

    Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register now for Oracle OpenWorld 2010! Interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Just enter your contact information and we will contact you at a later date. Also see: How to Exhibit at Oracle OpenWorld 2010, Sponsorship Opportunities at Oracle OpenWorld 2010, and Advertising Opportunities at Oracle OpenWorld 2010.

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
    As if Windows security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts for logon. When I set up my Windows 8 machine, I originally set it up with a 'real', non-Live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network, so I can log on to network resources properly. At some point later I decided to set up Windows Live security just to see how it changes things.

    Windows wants you to use Windows Live

    Windows 8 logins are required in order for the Windows RT account info to work. Not that I care: since installing Windows 8 I've maybe spent 10 minutes with Windows RT because, well, it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate, I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway, I set up the Windows Live account to see if that changes things. It does: I do get all my Live logins to work from the Live account, so Twitter and Facebook posts, pictures and calendars all show up on live tiles on the Start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live user account; this all feels like strong-arming you into moving into Microsoft's walled garden, and that's probably what it's meant to do.

    Who am I?

    The real problem to me, though, is that these Windows Live and raw Windows user accounts are a bit unpredictable, especially when it comes to developer information about the account and which credentials to use. So for example Windows reports folder security like this: notice it's showing my Windows Live account. Now, if I go to Edit and try to add my Windows user account (rstrahl), it'll just automatically show up as the Live account. On the other hand, the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account and have to log in to Windows with my Live account, what do you suppose this returns?

    Console.WriteLine(Environment.UserName);

    It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings and the desktop console altogether run under that account. If I look in Task Manager (or Process Explorer for me), I see everything running on the desktop shell with my login running under my Windows user account. I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account; it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back, I see no mention anywhere of the raw Windows account; everything refers to the Live account. Right then, clear as potato soup!

    So this is who you really are!

    The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local Web application in IIS that uses Windows Authentication. I tried to log in with my real Windows account login because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But it failed. I checked my IIS settings and my app's login settings, and I just could not for the life of me get into the site with my Windows username. That is, until I finally realized that I should try using my Windows Live credentials instead. And that worked.

    So now in this Windows Authentication dialog I had to type in my Live ID and password, which is just weird. Then in IIS, if I look at a trace page (or in my case my app's status page), I see that the logged-on account is my Windows user account. What's really annoying about this is that in some places it uses the Live account and in other places it uses my Windows account. If I remote desktop into my Web server online, I have to use the local authentication dialog, but I have to put in my real Windows credentials, not the Live account. Oh yes, it's all so terribly intuitive and logical…

    So in summary, when you log on with a Live account you are actually mapped to an underlying Windows user. In any application, if you check the user name it'll be the underlying user account (not sure what happens in a Windows RT app or even what mechanism is used there to get the user name info). When logging on to local machine resources with user name and password, you have to use your Live IDs, even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior…

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows.
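
    If you want to see this split for yourself, both pieces of identity information are one call away. Here is a minimal console sketch using standard .NET APIs (nothing here is specific to the author's setup):

        using System;
        using System.Security.Principal;

        class WhoAmI
        {
            static void Main()
            {
                // The process owner's plain user name: on a Live-linked machine
                // this still reports the underlying Windows account (e.g. "rstrahl").
                Console.WriteLine("Environment.UserName: " + Environment.UserName);

                // DOMAIN\user (or MACHINE\user) for the current Windows identity;
                // again the raw Windows account, not the Live ID.
                Console.WriteLine("WindowsIdentity.Name: " + WindowsIdentity.GetCurrent().Name);
            }
        }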

    Read the article

  • Polishing the MonologFX API

    - by HecklerMark
    Earlier this week, I released "into the wild" a new JavaFX 2.x dialog library, MonologFX, that incorporated some elements of DialogFX and new features I'd been working on over time. While I did try to get the API to a point of reasonable completion (nothing is ever truly "finished", of course!), there was one bit of functionality that I'd included without providing any real "polish": that of the button icons. Good friend and fellow JFXtras teammate José Pereda Llamas suggested I fix that oversight and provide an update (thanks much, José!), thus this post. If you'd like to take a peek at the new streamlined syntax, I've updated the earlier post; please click here if you'd like to review it. If you want to give MonologFX a try, just point your browser to GitHub to download the updated code and/or .jar. All the best,Mark

    Read the article

  • JSON error Caused by: java.lang.NullPointerException

    - by user3821853
    I'm trying to make a register page on Android using JSON. Every time I press the register button on the AVD, I get the error "Unfortunately, database has stopped". I also get an error in my logcat that I cannot understand. This is my code; please, someone help me. This is my Register.java:

        import android.app.Activity;
        import android.app.ProgressDialog;
        import android.os.AsyncTask;
        import android.os.Bundle;
        import android.util.Log;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;
        import android.widget.EditText;
        import android.widget.Toast;
        import org.apache.http.NameValuePair;
        import org.apache.http.message.BasicNameValuePair;
        import org.json.JSONException;
        import org.json.JSONObject;
        import java.util.ArrayList;
        import java.util.List;

        public class Register extends Activity implements OnClickListener {

            private EditText user, pass;
            private Button mRegister;

            // Progress Dialog
            private ProgressDialog pDialog;

            // JSON parser class
            JSONParser jsonParser = new JSONParser();

            // php register script
            //
            // localhost:
            // testing on your device
            // put your local ip instead; on windows, run CMD > ipconfig
            // or in mac's terminal type ifconfig and look for the ip under en0 or en1
            // private static final String REGISTER_URL = "http://xxx.xxx.x.x:1234/webservice/register.php";
            //
            // testing on Emulator:
            private static final String REGISTER_URL = "http://10.0.2.2:1234/webservice/register.php";
            //
            // testing from a real server:
            // private static final String REGISTER_URL = "http://www.mybringback.com/webservice/register.php";

            // ids
            private static final String TAG_SUCCESS = "success";
            private static final String TAG_MESSAGE = "message";

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                // TODO Auto-generated method stub
                super.onCreate(savedInstanceState);
                setContentView(R.layout.register);
                user = (EditText) findViewById(R.id.username);
                pass = (EditText) findViewById(R.id.password);
                mRegister = (Button) findViewById(R.id.register);
                mRegister.setOnClickListener(this);
            }

            @Override
            public void onClick(View v) {
                // TODO Auto-generated method stub
                new CreateUser().execute();
            }

            class CreateUser extends AsyncTask<String, String, String> {

                @Override
                protected void onPreExecute() {
                    super.onPreExecute();
                    pDialog = new ProgressDialog(Register.this);
                    pDialog.setMessage("Creating User...");
                    pDialog.setIndeterminate(false);
                    pDialog.setCancelable(true);
                    pDialog.show();
                }

                @Override
                protected String doInBackground(String... args) {
                    // TODO Auto-generated method stub
                    // Check for success tag
                    int success;
                    String username = user.getText().toString();
                    String password = pass.getText().toString();
                    try {
                        // Building Parameters
                        List<NameValuePair> params = new ArrayList<NameValuePair>();
                        params.add(new BasicNameValuePair("username", username));
                        params.add(new BasicNameValuePair("password", password));
                        Log.d("request!", "starting");

                        // Posting user data to script
                        JSONObject json = jsonParser.makeHttpRequest(REGISTER_URL, "POST", params);

                        // full json response
                        Log.d("Registering attempt", json.toString());

                        // json success element
                        success = json.getInt(TAG_SUCCESS);
                        if (success == 1) {
                            Log.d("User Created!", json.toString());
                            finish();
                            return json.getString(TAG_MESSAGE);
                        } else {
                            Log.d("Registering Failure!", json.getString(TAG_MESSAGE));
                            return json.getString(TAG_MESSAGE);
                        }
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                    return null;
                }

                protected void onPostExecute(String file_url) {
                    // dismiss the dialog once product deleted
                    pDialog.dismiss();
                    if (file_url != null) {
                        Toast.makeText(Register.this, file_url, Toast.LENGTH_LONG).show();
                    }
                }
            }
        }

    This is my JSONParser.java:

        import android.util.Log;
        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.NameValuePair;
        import org.apache.http.client.ClientProtocolException;
        import org.apache.http.client.entity.UrlEncodedFormEntity;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.client.utils.URLEncodedUtils;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.json.JSONException;
        import org.json.JSONObject;
        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.io.UnsupportedEncodingException;
        import java.util.List;

        public class JSONParser {

            static InputStream is = null;
            static JSONObject jObj = null;
            static String json = "";

            // constructor
            public JSONParser() {
            }

            public JSONObject getJSONFromUrl(final String url) {
                // Making HTTP request
                try {
                    // Construct the client and the HTTP request.
                    DefaultHttpClient httpClient = new DefaultHttpClient();
                    HttpPost httpPost = new HttpPost(url);

                    // Execute the POST request and store the response locally.
                    HttpResponse httpResponse = httpClient.execute(httpPost);

                    // Extract data from the response.
                    HttpEntity httpEntity = httpResponse.getEntity();

                    // Open an inputStream with the data content.
                    is = httpEntity.getContent();
                } catch (UnsupportedEncodingException e) {
                    e.printStackTrace();
                } catch (ClientProtocolException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }

                try {
                    // Create a BufferedReader to parse through the inputStream.
                    BufferedReader reader = new BufferedReader(new InputStreamReader(is, "iso-8859-1"), 8);

                    // Declare a string builder to help with the parsing.
                    StringBuilder sb = new StringBuilder();

                    // Declare a string to store the JSON object data in string form.
                    String line = null;

                    // Build the string until null.
                    while ((line = reader.readLine()) != null) {
                        sb.append(line + "\n");
                    }

                    // Close the input stream.
                    is.close();

                    // Convert the string builder data to an actual string.
                    json = sb.toString();
                } catch (Exception e) {
                    Log.e("Buffer Error", "Error converting result " + e.toString());
                }

                // Try to parse the string to a JSON object
                try {
                    jObj = new JSONObject(json);
                } catch (JSONException e) {
                    Log.e("JSON Parser", "Error parsing data " + e.toString());
                }

                // Return the JSON Object.
                return jObj;
            }

            // function get json from url
            // by making HTTP POST or GET method
            public JSONObject makeHttpRequest(String url, String method, List<NameValuePair> params) {
                // Making HTTP request
                try {
                    // check for request method
                    if (method == "POST") {
                        // request method is POST
                        // defaultHttpClient
                        DefaultHttpClient httpClient = new DefaultHttpClient();
                        HttpPost httpPost = new HttpPost(url);
                        httpPost.setEntity(new UrlEncodedFormEntity(params));
                        HttpResponse httpResponse = httpClient.execute(httpPost);
                        HttpEntity httpEntity = httpResponse.getEntity();
                        is = httpEntity.getContent();
                    } else if (method == "GET") {
                        // request method is GET
                        DefaultHttpClient httpClient = new DefaultHttpClient();
                        String paramString = URLEncodedUtils.format(params, "utf-8");
                        url += "?" + paramString;
                        HttpGet httpGet = new HttpGet(url);
                        HttpResponse httpResponse = httpClient.execute(httpGet);
                        HttpEntity httpEntity = httpResponse.getEntity();
                        is = httpEntity.getContent();
                    }
                } catch (UnsupportedEncodingException e) {
                    e.printStackTrace();
                } catch (ClientProtocolException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }

                try {
                    BufferedReader reader = new BufferedReader(new InputStreamReader(is, "iso-8859-1"), 8);
                    StringBuilder sb = new StringBuilder();
                    String line = null;
                    while ((line = reader.readLine()) != null) {
                        sb.append(line + "\n");
                    }
                    is.close();
                    json = sb.toString();
                } catch (Exception e) {
                    Log.e("Buffer Error", "Error converting result " + e.toString());
                }

                // try parse the string to a JSON object
                try {
                    jObj = new JSONObject(json);
                } catch (JSONException e) {
                    Log.e("JSON Parser", "Error parsing data " + e.toString());
                }

                // return JSON String
                return jObj;
            }
        }

    And this is my error:

        08-18 23:40:02.381 2000-2018/com.example.blackcustomzier.database E/Buffer Error? Error converting result java.lang.NullPointerException: lock == null
        08-18 23:40:02.381 2000-2018/com.example.blackcustomzier.database E/JSON Parser? Error parsing data org.json.JSONException: End of input at character 0 of
        08-18 23:40:02.391 2000-2018/com.example.blackcustomzier.database W/dalvikvm? threadid=15: thread exiting with uncaught exception (group=0xb0f37648)
        08-18 23:40:02.391 2000-2018/com.example.blackcustomzier.database E/AndroidRuntime? FATAL EXCEPTION: AsyncTask #4
        java.lang.RuntimeException: An error occured while executing doInBackground()
            at android.os.AsyncTask$3.done(AsyncTask.java:299)
            at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:352)
            at java.util.concurrent.FutureTask.setException(FutureTask.java:219)
            at java.util.concurrent.FutureTask.run(FutureTask.java:239)
            at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)
            at java.lang.Thread.run(Thread.java:841)
        Caused by: java.lang.NullPointerException
            at com.example.blackcustomzier.database.Register$CreateUser.doInBackground(Register.java:108)
            at com.example.blackcustomzier.database.Register$CreateUser.doInBackground(Register.java:74)
            at android.os.AsyncTask$2.call(AsyncTask.java:287)
            at java.util.concurrent.FutureTask.run(FutureTask.java:234)
            at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)
            at java.lang.Thread.run(Thread.java:841)
        08-18 23:40:02.501 2000-2000/com.example.blackcustomzier.database W/EGL_emulation? eglSurfaceAttrib not implemented
        08-18 23:40:02.591 2000-2000/com.example.blackcustomzier.database W/EGL_emulation? eglSurfaceAttrib not implemented
        08-18 23:40:02.981 2000-2000/com.example.blackcustomzier.database E/WindowManager? Activity com.example.blackcustomzier.database.Register has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView{b1294c60 V.E..... R......D 0,0-1026,288} that was originally added here
        android.view.WindowLeaked: Activity com.example.blackcustomzier.database.Register has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView{b1294c60 V.E..... R......D 0,0-1026,288} that was originally added here
            at android.view.ViewRootImpl.<init>(ViewRootImpl.java:345)
            at android.view.WindowManagerGlobal.addView(WindowManagerGlobal.java:239)
            at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:69)
            at android.app.Dialog.show(Dialog.java:281)
            at com.example.blackcustomzier.database.Register$CreateUser.onPreExecute(Register.java:85)
            at android.os.AsyncTask.executeOnExecutor(AsyncTask.java:586)
            at android.os.AsyncTask.execute(AsyncTask.java:534)
            at com.example.blackcustomzier.database.Register.onClick(Register.java:70)
            at android.view.View.performClick(View.java:4240)
            at android.view.View.onKeyUp(View.java:7928)
            at android.widget.TextView.onKeyUp(TextView.java:5606)
            at android.view.KeyEvent.dispatch(KeyEvent.java:2647)
            at android.view.View.dispatchKeyEvent(View.java:7343)
            at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393)
            at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393)
            at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393)
            at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393)
            at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchKeyEvent(PhoneWindow.java:1933)
            at com.android.internal.policy.impl.PhoneWindow.superDispatchKeyEvent(PhoneWindow.java:1408)
            at android.app.Activity.dispatchKeyEvent(Activity.java:2384)
            at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchKeyEvent(PhoneWindow.java:1860)
            at android.view.ViewRootImpl$ViewPostImeInputStage.processKeyEvent(ViewRootImpl.java:3791)
            at android.view.ViewRootImpl$ViewPostImeInputStage.onProcess(ViewRootImpl.java:3774)
            at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379)
            at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429)
            at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398)
            at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3483)
            at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3406)
            at android.view.ViewRootImpl$AsyncInputStage.apply(ViewRootImpl.java:3540)
            at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379)
            at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429)
            at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398)
            at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3406)
            at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379)
            at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429)
            at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398)
            at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3516)
            at android.view.ViewRootImpl$ImeInputStage.onFinishedInputEvent(ViewRootImpl.java:3666)
            at android.view.inputmethod.InputMethodManager$PendingEvent.run(InputMethodManager.java:1982)
            at android.view.inputmethod.InputMethodManager.invokeFinishedInputEventCallback(InputMethodManager.java:1698)
            at android.view.inputmethod.InputMethodManager.finishedInputEvent(InputMethodManager.java:1689)
            at android.view.inputmethod.InputMethodManager$ImeInputEventSender.onInputEventFinished(InputMethodManager.java:1959)
            at android.view.InputEventSender.dispatchInputEventFinished(InputEventSender.java:141)
            at android.os.MessageQueue.nativePollOnce(Native Method)
            at android.os.MessageQueue.next(MessageQueue.java:132)
            at android.os.Looper.loop(Looper.java:124)
            at android.app.ActivityThread.main(ActivityThread.java:5103)
            at java.lang.reflect.Method.invokeNative(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:525)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCal

    Please help me to solve this. Thanks!

    Read the article

  • 7 Ubuntu File Manager Features You May Not Have Noticed

    - by Chris Hoffman
    The Nautilus file manager included with Ubuntu includes some useful features you may not notice unless you go looking for them. You can create saved searches, mount remote file systems, use tabs in your file manager, and more. Ubuntu's file manager also includes built-in support for sharing folders on your local network – the Sharing Options dialog creates and configures network shares compatible with both Linux and Windows machines.

    Read the article

  • Oracle Database Information Now Available on the Oracle Mobile Application

    - by jgelhaus
    Now, wherever you are, you can stay connected to the Oracle Database team by downloading the free Oracle mobile application. It will help you keep up with the latest Oracle Database news, blogs, social media, video, and much more while you are on the move!

    News—Track Oracle Database news.
    Blogs—Participate in an ongoing dialog with our Oracle Database bloggers.
    Social—Keep up with events, webcasts and other announcements via the Oracle Database social channels.
    Video—See clips of webcasts, executive addresses and keynotes, Oracle Database customers, and much, much more.

    Read the article

  • Kile does not get keyboard input or respond to Alt+F

    - by vishvAs vAsuki
    I use 11.10 with Unity. I use two KDE apps a lot: Kile and Kate. Since the latest upgrade, I have observed these problems:

    a] I can never use Alt+F to select the File menu (it works with other apps), yet when I press Alt, Kile's menu comes up in the status bar.

    b] A few times every day, while using Kile (I haven't used Kate as much recently), the keyboard stops responding (just in the 'editor' portion, rather than in structure-tree navigation). I cannot enter any text, but the mouse works fine. I usually restart Kile to continue working. One inconsistent way to reproduce it (multiple times today): I cut text from one open document using Ctrl-X, switched to another document in the same Kile window using Ctrl-Tab, and then this happened.

    c] Sometimes, in Kate, I can type in the editor but the keyboard won't work in the 'File open' dialog.

    How do I fix the above?

    Read the article

  • playonlinux is unable to find 32bits / 64bits OpenGL library

    - by footy
    When I open a fresh installation of PlayOnLinux, it shows the two dialog boxes mentioned in the title:

    playonlinux is unable to find 32bits OpenGL library
    playonlinux is unable to find 64bits OpenGL library

    I am using Ubuntu 12.04 (and am new to it) and would like to know how to solve this problem.

    EDIT: terminal output:

    ~$ playonlinux
    [main] Message: PlayOnLinux (4.1.8) is starting
    [clean_tmp] Message: Cleaning temp directory
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    [Check_OpenGL] Warning:
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    [Check_OpenGL] Warning:
    [main] Message: Filesystem is compatible
    [install_plugins] Message: Checking plugin: Capture...
    [maj_check] Message: Web version : 1349866727
    [maj_check] Message: Current local version : 1349563245
    [maj_check] Message: Updating list
    [install_plugins] Message: Checking plugin: ScreenCap...
    [install_plugins] Message: Checking plugin: PlayOnLinux Vault...
    /usr/share/playonlinux/bash/startup_after_server: line 38: [: : integer expression expected
    /usr/share/playonlinux/bash/startup_after_server: line 38: [: : integer expression expected
    [POL_Config_Write] Message: Config write: LAST_TIMESTAMP 1349866727

    Read the article

  • Disk utility running for a long time

    - by Nik
    I had a spare, old external USB hard disk which stopped working a long time back due to improper removal. I tried fixing it in Windows (when I did not have Ubuntu yet); however, I couldn't fix it. So I tried formatting the disk and creating a new partition using the Disk Utility in Ubuntu. However, it seems to keep running without any indication of progress whatsoever. I have no idea what is happening. I have attached a screenshot below to show what I mean. As you can see from the screenshot above, it seems to be working on it, but there is no indication of the progress or what it is doing. Any attempt to exit it or do any other task, like safe removal or formatting the partition, leads to the following dialog box. What should I do? Wait, or restart the computer?

    Read the article

  • SOA Suite 11g Database Growth Management

    - by JuergenKress
    This whitepaper, "Oracle SOA Suite 11g Database Growth Management", has been written to highlight the need to implement an appropriate strategy to manage the growth of the SOA 11g database. The advice presented should facilitate better dialog between SOA and database administrators when planning database and host requirements.

    Whitepaper: Oracle SOA Suite 11g Database Growth Management
    Advisor Webcast: "Oracle SOA Suite 11g Database Growth Management", April 11th 2012
    Author: Michael Bousamra
    Contributing Authors: Deepak Arora, Sai Sudarsan Pogaru

    SOA Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Sharing my home folders with other users on the same PC

    - by Stephen Myall
    After reviewing similar questions on the same subject, I'm still none the wiser. I want to share my music, pictures and video folders with other users on my PC. I am using 11.10 and will be upgrading to 12.04. The method I have tried is to right-click on the folder (as administrator), select "Sharing Options", check all the necessary fields and give the share a name like "music-shared". Another dialog then pops up and I select "Set Nautilus Permissions". When the other user logs on, they go to their Home folder, click on the network and can see the "music-shared" folder, but they get a message that they do not have the necessary permissions to view the content. I'm sure I'm missing something simple. My Home folder is encrypted and I am willing to unencrypt it to make this work. Unlike other questions on this site, I don't have a separate partition, etc. I would be grateful for any help.

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately can cause code to be reusable or not, regardless of the language or methodology.

    But it did get me thinking… we always used to say that as part of our mantra as to why object-oriented programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It's not that I don't like reuse; it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse!

    Furthermore, the code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse, one that includes standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times where I want to kick off a thread to handle a task, and when I want to rein that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

        public static class ThreadExtensions
        {
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }
                return isJoined;
            }
        }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable.
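
    For illustration, here is how such an extension method might be consumed: a short, hypothetical caller (not from the original article) that assumes the ThreadExtensions class above is in scope:

        using System;
        using System.Threading;

        class Worker
        {
            static void Main()
            {
                // A worker that takes longer than we are willing to wait.
                Thread thread = new Thread(() => Thread.Sleep(TimeSpan.FromSeconds(10)));
                thread.Start();

                // Wait up to five seconds for a graceful finish, then force an Abort.
                bool finished = thread.JoinOrAbort(TimeSpan.FromSeconds(5));
                Console.WriteLine(finished ? "Joined cleanly." : "Timed out and aborted.");
            }
        }
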
Some of those things are standard OO principles, and some are kind-of home-grown litmus tests:

Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility).

Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and is itself very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn’t subject to changes as the business itself changes.

Open-Closed Principle (OCP) – This class is inherently closed to change.

Small and Stable Problem Domain – This method only cares about System.Threading.Thread.

All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality.  That’s not to say they must use every method, but they shouldn’t be using a method just to get half of its result.

Cost of Reuse vs. Cost to Recreate – Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as “ready-to-go”, already unit tested (important!), and available through a standard release cycle (very important!).

Okay, all seems good there; now let’s look at an entity and DAO.  I don’t know about you all, but there have been times I’ve been in organizations that get the grand idea that all DAOs and entities should be standardized and shared.  While this may work for small or static organizations, it’s near ludicrous for anything large or volatile.

namespace Shared.Entities
{
    public class Account
    {
        public int Id { get; set; }

        public string Name { get; set; }

        public Address HomeAddress { get; set; }

        public int Age { get; set; }

        public DateTime LastUsed { get; set; }

        // etc, etc, etc...
    }
}

...

namespace Shared.DataAccess
{
    public class AccountDao
    {
        public Account FindAccount(int id)
        {
            // dao logic to query and return account
        }

        ...
    }
}

Now to be fair, I’m not saying there doesn’t exist an organization where some entities may be extremely static and unchanging.  But at best such entities and DAOs will be problematic cases of reuse.  Let’s examine those same tests:

Single Responsibility Principle (SRP) – The reasons for these classes to change are strongly dependent on what the definition of an account is, which can change over time and may have multiple influences depending on the number of systems an account covers.

Stable Dependencies Principle (SDP) – These classes depend on the data model beneath them, which in turn is largely dependent on the business definition of an account, and that definition can be inherently unstable.

Open-Closed Principle (OCP) – This class is not really closed for modification.  Every time the account definition changes, you’d need to modify this class.

Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large.  What if you are designing a system that aggregates account information from several sources?

All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data?  Should you have to take the hit of looking up all the other data?  On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for?  Neither is really a great solution.
Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate them as needed.

It’s no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO.  In general, the smaller and more stable an abstraction is, the higher its level of reuse.  When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects.  Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc).

As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn’t “smell” right.  We ended up having session after session of design meetings to try and find the right way to share these data access components.  When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: “Reuse is hard...”  And that’s when I realized that while reuse is an awesome goal and we should strive to make code maintainable, often times you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn’t.

Think about the times you’ve worked in a company where in the design session people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable.  Then think about how quickly that design became obsolete.  Many times I set out to do a project and think, “yes, this is the best design, I can extend it easily!” only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid.  Code, in general, tends to rust and age over time.  As such, writing reusable code can often be difficult, many times ends up being a futile exercise, and worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

So what do I think are reusable components?

Generic Utility Classes – These tend to be small classes that assist in a task and have no business context whatsoever.

Implementation Abstraction Frameworks – Home-grown frameworks that try to isolate changes to third-party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc).  A minimal sketch of such a layer appears at the end of this article.

Simplification and Uniformity Frameworks – To some extent this is similar to an abstraction framework, but there may be one chosen provider plus a development-shop mandate to perform certain complex tasks in a certain way.  Or, perhaps, to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop’s method of encryption).

And what are less reusable?

Application and Business Layers – These tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency.  They may be reused across applications but are also very volatile.
Entities and Data Access Layers – These tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

So what’s the big lesson?  Reuse is hard.  In fact it’s damn hard.  And much of the time I’m not convinced we should focus too hard on it.  If you’re designing a utility or framework, then by all means design it for reuse.  But you must also set down a good versioning, release, and documentation process to maximize your chances.  For anything else, design it to be maintainable and extendable, but don’t waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.
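As promised above, here is a minimal sketch of an implementation abstraction framework; the IMessagePublisher interface and the console-backed implementation are hypothetical illustrations of the pattern, not a real library:

using System;

// A small, stable messaging abstraction: callers depend on this interface
// rather than on any particular messaging product.
public interface IMessagePublisher
{
    void Publish(string topic, string message);
}

// One possible implementation; a JMS- or MSMQ-backed publisher could be
// swapped in later without touching any calling code.
public class ConsoleMessagePublisher : IMessagePublisher
{
    public void Publish(string topic, string message)
    {
        Console.WriteLine("[{0}] {1}", topic, message);
    }
}

The value is in the shape, not this particular implementation: because callers take a dependency only on the small, stable interface, the underlying product can change without rippling through the codebase, which is exactly what the SDP and OCP litmus tests above reward.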

    Read the article

  • Thunderbird/Firefox with shared profiles (Lubuntu+WinXP)

    - by Ray
I use Thunderbird and Firefox both on WinXP and Lubuntu 11.10. The profile folders are on the NTFS partition of Windows, and I'm sure that I've edited the profile paths correctly. Windows doesn't show any problems, but Lubuntu shows an error dialog saying that "another instance of Thunderbird/Firefox is already running...". The funny thing is that as soon as I've opened the profile folders in my file manager, Firefox and Thunderbird can be started. I don't have to change any files or anything (I have made some experiments with .parentlock). Do you have an idea how I could solve this problem, as I don't want to have to open the profile folders after every reboot? Thanks in advance!

    Read the article

  • Package Manager Console For More Than Managing Packages

    - by Steve Michelotti
Like most developers, I prefer to not have to pick up the mouse if I don’t have to. I use the Executor launcher for almost everything, so it’s extremely rare for me to ever click the “Start” button in Windows. I also use shortcut keys when I can so I don’t have to pick up the mouse. By now most people know that the Package Manager Console that comes with NuGet is PowerShell embedded inside of Visual Studio. It is based on its PowerConsole predecessor, which was the first (that I’m aware of) to embed PowerShell inside of Visual Studio and give access to the Visual Studio automation DTE object. It does this through an inherent $dte variable that is automatically available and ready for use. This variable is also available inside of the NuGet Package Manager Console.

Adding a new class file to a Visual Studio project is one of those mundane tasks that should be easier. First I have to pick up the mouse. Then I have to right-click where I want the file to go and select “Add –> New Item…” or “Add –> Class…”. If you know the Ctrl+Shift+A shortcut, then you can avoid the mouse for adding a new item, but you have to manually assign a shortcut for adding a new class. At that point it pops up a dialog just so I can enter the name of the class I want. Since this is one of the most common tasks developers do, I figure there has to be an easier way – one that avoids picking up the mouse and popping up dialogs. This is where your embedded PowerShell prompt in Visual Studio comes in.

The first thing you should do is assign a keyboard shortcut so that you can get a PowerShell prompt (i.e., the Package Manager Console) quickly without ever picking up the mouse. I assign “Ctrl+P, Ctrl+M” because “P + M” stands for “Package Manager”, so it is easy to remember.

At this point I can type this command to add a new class:

PM> $dte.ItemOperations.AddNewItem("Code\Class", "Foo.cs")

which will result in the class being added. At this point I’ve satisfied my original goal of not having to pick up a mouse and not having the “Add New Item” dialog pop up. However, having to remember that $dte method call is not very user-friendly at all. The best thing to do is to make this a reusable function that always loads when Visual Studio starts up. There is a $profile variable that you can use to figure out where that location is on your machine:

PM> $profile
C:\Users\steve.michelotti\Documents\WindowsPowerShell\NuGet_profile.ps1

If the NuGet_profile.ps1 file does not already exist, you can just create it yourself and place it in that directory. Now you can put a function inside like this:

function addClass($className)
{
    if ($className.EndsWith(".cs") -eq $false) {
        $className = $className + ".cs"
    }

    $dte.ItemOperations.AddNewItem("Code\Class", $className)
}

Since it’s in the NuGet_profile.ps1 file, this function will automatically be available every time I start Visual Studio. Now I can simply do this:

PM> addClass Foo

At this point, we have a *very* nice developer experience. All I did to add a new class was: “Ctrl+P, Ctrl+M”, then “addClass Foo”. No mouse, no pop-up dialogs, no complex commands to remember. In fact, PowerShell gives you auto-completion as well. If I type “addc” followed by [TAB], then IntelliSense pops up with my custom function in the list. Now I can type the next letter “c” and [TAB] to auto-complete the command.
And if that’s still too many keystrokes for you, then you can create your own custom PowerShell alias for your function like this:

PM> Set-Alias addc addClass
PM> addc Foo

While all this is very useful, I did run into some issues which prompted me to customize even further. The addClass command adds the new class file to the currently active directory. Depending on your context, this may not be what you want. For example, by convention all view model objects go in the “Models” folder in an MVC project. So if the current document is in the Controllers folder, addClass will add your class to that folder, which is not what you want. You want it to always go in the “Models” folder if you are adding a new model to an MVC project. For this situation, I added a new function called “addModel” which looks like this:

1: function addModel($className)
2: {
3:     if ($className.EndsWith(".cs") -eq $false) {
4:         $className = $className + ".cs"
5:     }
6:
7:     $modelsDir = $dte.ActiveSolutionProjects[0].UniqueName.Replace(".csproj", "") + "\Models"
8:     $dte.Windows.Item([EnvDTE.Constants]::vsWindowKindSolutionExplorer).Activate()
9:     $dte.ActiveWindow.Object.GetItem($modelsDir).Select([EnvDTE.vsUISelectionType]::vsUISelectionTypeSelect)
10:     $dte.ItemOperations.AddNewItem("Code\Class", $className)
11: }

First I figure out the path to the Models directory on line #7. Then I activate the Solution Explorer window on line #8. Then I make sure the Models directory is selected so that my context is correct when I add the new class, and it will be added to the Models directory as desired.

These are just a couple of examples of things you can do with the PowerShell prompt you have available in the Package Manager Console. As developers we spend so much time in Visual Studio – why would you not customize it so that you can work in whatever way you want to work?! The next time you’re not happy about the way Visual Studio makes you do a particular task, automate it! The sky is the limit.

    Read the article
