Search Results

Search found 8637 results on 346 pages for 'mind blank'.


  • Reliance on Outlook (been a looong time, I know)

    - by AndyScott
    Do you feel that your development group is too reliant on Outlook? Have you reached the point where you have to search your email for pertinent information when asked? What are you using? I realized things had gotten out of hand a couple of weeks ago over a weekend. I was at my in-laws' house (in the country: no PC, no laptop, no internet connection) and I got an email on my phone that I needed to reply to, but I couldn't send it without deleting items from my inbox/sent items/etc. Now mind you, I have rules set up to move stuff into folders, and items more than a month old are automatically moved to the PST; but I generally don't manually move items to a PST until I have had a chance to 'work' the item. Please don't bother mocking my process, it's just the way I work. That being said, it was a frustrating exercise of 'I need all this information, what can I afford to lose?'. I work on an international project (think lots of customers), and conversations in 9 or 10 different directions about 10-20 different things are not abnormal for a given day. I have found myself looking data up in Outlook because that's where it is. I think I have now reached the point where I don't feel Outlook is up to the task of organizing the data it contains. When you have that many emails (200 or so a day), information gets lost at times, and I find Outlook's search capabilities lacking. Additionally, I find that any organizational 'system' of sorting emails that can cover multiple topics is a lost cause. But at the same time, the old process of taking the information I got from emails and moving it into another 'notes' type of program has proved too time consuming. Does anyone out there have a better type of system? (Comments about the capacity of my brain and its ability to recall information are not needed.)

    Read the article

  • Nautilus will not start, no right-click menu, apps will not run due to 64-bit libicui18n.so.48 overwritten with 32-bit, Ubuntu 12.04

    - by Dewb
    Yeah, so I broke my system trying to install support for 32-bit apps. Somehow the 32-bit set of libicu*.so.48 files replaced my 64-bit files and now nothing works. I know what the problem is and how to fix it; what I need is some help. I need some kind soul running a 64-bit install of Ubuntu 12.04 to simply go into their /usr/lib dir and copy the files of the form libicu*.so.48* (8 of them I think, maybe 16; I don't know how many there are supposed to be, since mine were overwritten). Anyway, if you could put those files in an archive (zip or tar preferred) and then email/share the link, I'd be grateful. I was able to get replacement files installed, but they're from the latest edition of Fedora, and while I can now use my computer, it's not that stable: things don't start smoothly every time like they used to. It would be a simple matter to put things back the way they were if someone wouldn't mind helping out. Also, any help setting up the libraries related to 32-bit apps would be great as well, as I apparently screwed that up royally. Thanks for any help or advice.
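
    A minimal sketch of what is being asked for, assuming a healthy 64-bit Ubuntu 12.04 box (on a multiarch system the files may live in /usr/lib/x86_64-linux-gnu rather than /usr/lib):

        # On the working machine: bundle up the 64-bit libicu 48 libraries
        cd /usr/lib
        tar czf ~/libicu48-amd64.tar.gz libicu*.so.48*

        # On the broken machine, after fetching the archive:
        sudo tar xzf libicu48-amd64.tar.gz -C /usr/lib
        sudo ldconfig   # refresh the dynamic linker cache so the restored libraries are found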

    Read the article

  • How can I activate a Panel-icon via a script (or get its screen co-ordinates; to click it)?

    - by fred.bear
    This question is in the context of the Lucid 10.04 desktop (i.e. no Unity). I do most screen navigation via the keyboard (not the mouse), so I'm looking for a script solution for re-activating an app which has been "minimized" to the Panel's Notification Area. I'll use Skype as an example. wmctrl allows me enough access to normally-minimized windows, but when Skype is "minimized" to the Notification Area, it simply goes "off the radar" as far as wmctrl is concerned. Bearing in mind that icon positions in the Notification Area can vary, is there some way to determine the screen co-ordinates of Skype's Panel icon, so I can "click" it using xdotool (or a similar utility)? Or maybe there is a more direct way to activate the "dormant" Skype? ... (and I don't mean the mouse ;) Here is the script so far. Hopefully it makes clear what I'm trying to do:

        #!/bin/bash
        procname="skype-wrapper"
        windmask="Skype™"
        if [[ $(pgrep -x -n -c "$procname") == 1 ]] ; then
          wintitle="$(wmctrl -l | grep "$windmask" | head -n 1 | sed -n "s/^.\+${HOSTNAME} \(.*\)/\1/p")"
          if [ "$wintitle" = "" ] ; then
            echo "Click on Skype's Panel-icon to show the main window"
            ###############################################################
            # How can I find the screen co-ordinates of Skype's Panel Icon?
            ###############################################################
          else
            # Skype is running, and has (at least) one visible window which matches $windmask. Activate it.
            wmctrl -a "$wintitle"
          fi
        else
          # The process is not currently running. Start it.
          ("$procname" &)
        fi
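
    One hedged possibility, assuming the tray icon is an ordinary embedded X window whose name contains "skype" (worth verifying with xwininfo first, and the grep may need tightening if a hidden main window also matches):

        # Locate the tray icon's absolute position in the X window tree, then click it
        line=$(xwininfo -root -tree | grep -i 'skype' | head -n 1)
        # xwininfo prints geometry as WxH+relX+relY followed by an absolute +X+Y pair
        pos=$(echo "$line" | grep -oE '\+[0-9]+\+[0-9]+' | tail -n 1)
        x=$(echo "$pos" | cut -d+ -f2)
        y=$(echo "$pos" | cut -d+ -f3)
        xdotool mousemove "$x" "$y" click 1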

    Read the article

  • No boot option after installing Ubuntu 12.04 LTS on Acer Aspire 4736z with Live CD

    - by utamaku
    I am facing a problem. After installing Ubuntu 12.04 LTS using the Live CD, at reboot there is no boot option to choose the OS; I get logged directly into Windows 7. Before that I was having an issue with 'nomodeset', if I'm not mistaken. After ticking [x] on nomodeset I could install Ubuntu, but got stuck again at the partitioning step. So I made two partitions for Ubuntu: one ext3 partition for / and one for swap. That let me proceed to the end-of-installation dialogue, after which the system wanted to reboot. It took some time and then got stuck at that screen doing nothing; it didn't reboot at all. I forced a shutdown and rebooted, which again logged me directly into Windows 7 without a boot option. In Windows 7 the partitions for Ubuntu are gone. I tried the boot-repair thing and it doesn't help; it just shows a blank screen with a _ cursor (a terminal, I guess). I typed boot-repair but got the same result. I am using an Acer Aspire 4736z. Please, somebody help me with this issue.
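
    For reference, if the Ubuntu partitions still exist, the standard way to put GRUB back from the Live CD looks roughly like this (a sketch only; /dev/sda1 as the Ubuntu root partition is an assumption, so check the real layout with sudo fdisk -l first):

        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt grub-install /dev/sda   # reinstall GRUB to the disk's MBR
        sudo chroot /mnt update-grub             # regenerate the menu; should detect Windows 7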

    Read the article

  • UCS Documentation: Home1 or Home2? You Decide.

    - by joesciallo
    How you go about finding information can be a very personal affair. We each have our own style of locating content; we each have our particular bent. But when it comes to getting started finding technical information, sometimes the simplest ways are best, especially if you are relatively new to a product or technology and you just need to get going quickly with that Installation Guide or those Release Notes. With that in mind, I recently created an alternative home page for the Unified Communications Suite documentation. You can now have your Home2, in addition to the original, more wiki-fied Home1. I would recommend Home2 for those who are used to, and are more comfortable with, the spreadsheet-like view of manuals: here you'll find those familiar titles and the option to either view on the wiki or download the equivalent PDF file. Once you get familiar with what guides are available for the UCS component products, then move on to the Home1 layout, and start using some of the more advanced techniques for finding content in a wiki. Either way, you should be able to locate the "thing" you are looking for.

    Read the article

  • Now that Apple's intending to deprecate Java on OS X, what language should I focus on?

    - by Smalltown2000
    After getting shot down on SO, I'll try this here: I'm sure you'll all know of Apple's recent announcement to deprecate Java on OS X (such as discussed here). I've recently come back to programming in the last year or so, having originally learnt on ye olde BASIC many years ago. I have a Mac at home and a PC at work, and whilst I have Windows and Ubuntu installed on my Mac as VMs, I chose to focus my "relearning" on VB first (as it was closest to BASIC) and then rapidly moved to Java, as it was cross-platform (with minimal effort) and so was easiest for working on code from both OSes. So my question: if the winds of change on Mac are blowing away from Java in this post-Sun era, what would be the best language to focus my new efforts on? Please note, this isn't a general "which language is better?" thread, nor an opportunity for the associated flame-war; there are plenty of those and it's not the point. I realise that in the long term one shouldn't be allegiant to an individual language, so, taking this as an excuse, the question is specifically: which is going to be the quickest to be productive in, given my background, while bearing in mind minimum portability rewrites (aspiration rather than requirement) and long-term value of usage? To that end I see the main options as:

        C# - Closest in "style" to Java, but M$ dependent (unless you consider Mono of course)
        C++ - Hugely complex, but if even slightly conquered, then a win? Is it worth the climb up the learning curve?
        VB.Net - I already have the background so it's easiest to go back to, but who uses VB for .Net these days? Surely if using a CLI language I should use C#...
        Python - Cross-platform, but what about UI for the end-user?

    EDIT: As a usage priority, I envision desktop application programming, though the ability to branch out in the future is always desirable. I guess graphics are the next direction once the basics are in place.

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user12707
    I'm working in an area related to simulation and trying to design a data structure that can include random variables within matrices. I am currently coding in MATLAB. To motivate this, say I have the following matrix: [a b; c d]. I want a data structure that allows a, b, c, d to be either real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 20 and SD 40. The data structure I have in mind would store no fixed value for d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1) draw, obtain a value for d using an inverse CDF, and then spit out an actual matrix. I have several ideas for doing this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application it's important that the structure is as "lean" as possible, since I will be working with very large matrices and memory will be an issue.

    Read the article

  • Hosting woes

    Unfortunately quite a few people have noticed our recent hosting problems, but if you are reading this they should all be over, so please accept our apologies. Our former web host decided to migrate to a new platform; it had all sorts of great features, but on reflection hosting wasn't one of them. We knew it was coming, and had even been proactive and requested several dates on their migration control panel so I could be around to check things afterwards. The dates came and went without anything happening, so we sat back and carried on for a couple of months, thinking they'd get back to us when they were ready. Then out of the blue I got an email saying it had happened! Now this is what I call timing: I had client work to complete, a 50-minute presentation to write, and there was a little conference called SQLBits that I help organise at the end of the week; and then our hosting provider decides to migrate our sites. Unfortunately they only migrated parts of the sites; they forgot things like the database for SQLDTS. The database eventually appeared, but the data didn't. Then the data pitched up, but without the stored procedures. I was even asked if I could perform a backup and send it to them, as they were getting timeout errors. Never mind the issues of performing a native backup on a hosted server; whilst I could have done something, the question actually left me speechless. So you cannot access your own SQL server and you expect me to be able to help? This site was there, but hadn't been set up as an IIS application, so all path references were wrong, which meant no CSS, and all the internal navigation and links were broken. The new improved hosting platform's control panel didn't appear to like setting applications: it said it would (you'd have to wait 2 hours of course), then just decided not to bother after all. So needless to say, after a very successful SQLBits I focused my attention on finding a new web host, and here we are again. Sorry it took so long.

    Read the article

  • How do I get my Intel HD graphics to work alongside my HD7850, as my second (HDMI out) monitor?

    - by AlexTes
    Title says it all. Further info: Motherboard: http://www.asrock.com/mb/Intel/Z77%20Pro3/ Processor: http://ark.intel.com/products/65520/Intel-Core-i5-3570K-Processor-%286M-Cache-up-to-3_80-GHz%29 So currently my main screen is running on my HD7850; I got the drivers from the AMD website. I have looked through dozens of questions here, and I'm about to try booting Ubuntu from a stick to see if the xorg-edgers drivers might help. When booting, all action goes down on the very screen I'm trying to get to work. EDIT: never mind this, it seems to be special boot magic; the screen only displays white-line errors once the GUI of Ubuntu has kicked in and everything graphical is happening through my graphics card again. The second monitor is connected through HDMI (motherboard) to DVI, so unless having multiple displays is a huge deal, the solution hopefully isn't that complicated. I just feel I'm missing something simple. If this really is complicated, I should probably just hook up the display to my graphics card. My CPU is usually the one chilling out, though, so I'd like to try to get that to work; also because I don't want to buy an extra cable, and this setup makes me feel warm and fuzzy inside. Tell me what to try or look up, I'll be most appreciative. Thank you! UPDATE: The x-swat PPA installed some Intel stuff. Booting with one monitor plugged into the motherboard gives nothing. Doing it with the PC already on gives the purple "Ubuntu" boot/shutdown screen with 5 dots.
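
    A hedged place to start (this assumes xrandr 1.4 or newer and the open-source drivers; PRIME output routing generally does not work under the proprietary fglrx driver mentioned above, so this is a sketch rather than a promise):

        xrandr --listproviders               # does X see both the Radeon and the Intel GPU?
        # If two providers are listed, make the Intel GPU's outputs usable from the main GPU:
        xrandr --setprovideroutputsource 1 0
        xrandr --auto                        # enable the newly available outputs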

    Read the article

  • Trouble installing from disk

    - by SuperNatural
    I'm writing this in desperation; Windows is slowly killing me and I need to change my home PC's OS to Ubuntu 11.04 as soon as possible. I created a USB flash drive to install Ubuntu, twice, and both times it failed to begin the install on restart of my PC. I read on another forum that you might have to change the boot sequence in the BIOS, but pressing F2 to enter it didn't work. After a lot of cursing, I made myself an Ubuntu install CD and booted. To my excitement, it now displayed... Try Ubuntu and Install Ubuntu. I clicked Install Ubuntu, which led me to the "Preparing to install Ubuntu" display; I checked "download updates while installing" and clicked Forward. The very next display is "Allocate drive space". I assume there are meant to be drives listed, but mine is just a blank box, and underneath, all the options to create a new partition table, add, change, delete and revert are greyed out. There is a drop-down menu labelled "Device for boot loader installation", but the only option is /dev/sda. When I click Install, a "no root file system" error comes up telling me to correct this from the partitioning menu. I am extremely frustrated. Please, can anyone help me?
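
    One hedged thing to check from the live session: stale fake-RAID metadata on the disk is a classic reason for the installer's drive list to come up empty. The erase step below is destructive if the disk really is part of a RAID set, so list first and be sure (/dev/sda is an assumption):

        sudo apt-get install dmraid   # available in the live session with networking up
        sudo dmraid -r                # list any RAID metadata found on the disks
        sudo dmraid -rE /dev/sda      # erase it so the partitioner sees a plain disk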

    Read the article

  • Password correct? Then redirect [migrated]

    - by RevCity
    So I have this code and I need it to redirect only when the correct password is entered. The only problem is that I don't know what to add that will make it redirect only if the password is "hello". I understand that using "view-source" would reveal the password; I don't mind that, it's the way I want it. I basically just need to know what to add and where to add it. So: redirect if "hello" is typed into the password field; do nothing if anything else is put into the password field.

        <div class="wrapper">
          <form class="form1" action="http://google.com">
            <div class="formtitle">Enter the password to proceed</div>
            <div class="input nobottomborder">
              <div class="inputtext">Password: </div>
              <div class="inputcontent">
                <input type="password" /><br/>
              </div>
            </div>
            <div class="buttons">
              <input class="orangebutton" type="submit" value="Login" />
            </div>
          </form>
        </div>

    I hope this was clear and could be understood.

    Read the article

  • What is a quick and easy way to make a minimal news blog that pulls RSS feeds? I have 2 days [on hold]

    - by user44188
    My boss wants me to make a website that pulls news from various RSS feeds from all over the web. I need to pull something together that looks pro, quick! I started by going to ThemeForest and looked around forever, but nothing really looked right. I need something mostly built like this already that I can just alter into our site. I can do most CMSs, Photoshop, and some code; I used to do this freelance years ago, but it's not really my job now. It just sort of came up suddenly, so I want to pull through. This is a good example of the overall structure I had in mind, but it just isn't clean enough. All of the news feeds will essentially be about the same criteria, but will pertain to different geographic areas. It would be a huge plus if I could segregate the news visually in some clever way based on geography. (Like a map?) I'm definitely open to all suggestions. I have to get this done by Friday!
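
    For perspective, the feed-pulling itself is the trivial part; the hard part is the presentation layer. A throwaway sketch with standard tools (the feed URL is a placeholder):

        # Pull the headline titles out of one RSS feed; crude, but fine for a prototype
        curl -s 'http://example.com/news/feed.rss' \
          | grep -oE '<title>[^<]*</title>' \
          | sed 's/<[^>]*>//g'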

    Read the article

  • What makes Ubuntu awesome [closed]

    - by Shagun
    My question may sound stupid or inappropriate for this site, in which case I apologize beforehand. This thing has bothered me for quite some time, so please correct me if there is anything inappropriate: I have been using Ubuntu for the past year and I know how awesome it is and in what ways it is better than Windows. But around 2 weeks ago some of my friends asked me to show them something on Ubuntu, or tell them something about Ubuntu, that makes people prefer it over Windows. I tried to convince them with things like its being open source, that most supercomputers run on Linux, that it's unaffected by viruses, and other stuff, but they seemed unconvinced. Maybe what they were looking for was some mind-boggling feature which only Ubuntu (Linux) has. Since that day I have been thinking, but I still don't have anything that will show them the true power of Linux. Please suggest your response to such a situation, as it troubles me that I am not able to explain to them something that I myself believe in. Thank you. PS: I am not looking for a theoretical answer, but would like to hear of one such application that it, and only it, provides.

    Read the article

  • Installed Geany has no options

    - by arundex
    I'm new to the Geany IDE. I installed Geany from the Ubuntu Software Centre, and the window has no options other than opening a new file. I can't find any Preferences or Tools options for configuring it either. I heard it is a full-fledged IDE. Also, judging from the screenshots available in the Software Centre, my Geany installation seems to be missing almost everything. I'm not able to post screenshots, but my interface just has three buttons: create a new file, open an existing file, and quit. Everything else is inactive. I accidentally closed the sidepane, and I can't find any option to bring that back either. EDIT: What am I missing in my Geany installation? PS: I tried installing from source from the Geany website, but it reported an error saying GTK files were not found. I removed Geany from the Software Centre and reinstalled several times; it installed without problems, but with the aforementioned issues, i.e. I have nothing in my interface. Also, even after reinstalling, Geany somehow remembers to hide the sidepane by default, so I'm not able to see it at all. I also added the Geany PPA repository manually for the latest fixes, but when I reinstall from the Software Centre I still get a plain, blank Geany interface. Thanks.
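
    A hedged first thing to try: Geany keeps per-user state in ~/.config/geany, and a broken file there survives any number of reinstalls (which would explain why the hidden sidebar keeps coming back). Moving it aside forces fresh defaults:

        mv ~/.config/geany ~/.config/geany.bak   # keep a backup rather than deleting it
        geany                                    # should now start with stock menus, toolbar and sidebar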

    Read the article

  • My computer is broken after recent update attempt to 14.04

    - by user317550
    So it all started on a day much like today (because it is today, but that's not the point), when I got a notification telling me I hadn't upgraded to 14.04. Not due to lack of trying, however. It offered to upgrade me itself. Now keep in mind, I've tried very hard to upgrade my OS from 12.04 to 14.04 many times; I believe, due to tinkering where there shouldn't be tinkering, my BIOS is messed up, so upgrading has been essentially impossible. But I wasn't about to stop it from updating for me, thinking it didn't have too much to do with the BIOS, as it doesn't reboot until after. So I let it go about its business, and some time later I look back at it, and my Unity sidebar is gone, and any time there's text on screen it shows as those box things. The real bottom line is that I want to know my options. All of them. I would love to be able to keep the stuff on my hard drive, so a hard drive swap may be an option if you say that would work. I just need my computer back. Let me know if I left anything out. Peace! B^)
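
    A hedged sketch of the usual first aid for an interrupted release upgrade, run from a recovery-mode root shell or a text console (Ctrl+Alt+F1); this assumes the package system was left half-configured rather than anything being wrong with the disk:

        sudo dpkg --configure -a      # finish configuring any half-installed packages
        sudo apt-get update
        sudo apt-get -f install       # repair broken dependencies
        sudo apt-get dist-upgrade     # pull in whatever the interrupted upgrade missed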

    Read the article

  • Trouble installing Ubuntu

    - by CV13
    I have a blank 1TB hard drive that I have run Ubuntu on before. I recently formatted it and am trying to reinstall Ubuntu 12.04 LTS on there. Using the Universal USB Installer and the 64-bit ISO file, I booted up my computer with only the 1TB hard drive connected. I go through the installation process normally, until I restart my computer at the end. Once restarted, I start to experience the problem dealt with here. When I boot normally, it goes to a black screen with a blinking cursor. When I select the "Recovery Mode" option, a bunch of lines scroll across the screen, the last of which is "hostap_pci: Registered netdevice wifi0"; it then stops there with a blinking cursor. When I follow the instructions on the page I linked to (replacing "quiet splash" with "nomodeset"), a bunch of lines scroll through after I press Ctrl+X. The last line displayed is:

        Adding 8386556k swap on /dev/sda5. Priority:-1 extents:1 across: 8386556k

    It then stops there with a blinking cursor. How do I fix this problem?
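
    A side note, sketched for completeness: if nomodeset does eventually get the system booting, it can be made permanent from the installed system (the quoted string assumes the stock 12.04 default in /etc/default/grub):

        sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"/' /etc/default/grub
        sudo update-grub   # rewrite /boot/grub/grub.cfg with the new kernel options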

    Read the article

  • What is the fastest way for me to become a full stack developer? [on hold]

    - by user136368
    I run a small web design firm. I have an overview of HTML, CSS, JS, PHP (Laravel), and MySQL, and I did a few courses on Codecademy. I wanted to build a web app in the company, but I find that I am severely crippled by my lack of programming expertise. I want to become a full stack developer who can build a prototype on his own. I cannot spend 5-10k USD on the boot camp courses. Can someone suggest structured courses which can help me become a full stack developer? I found the following websites, but I do not know if they can help me become one. My goal: be able to make a working prototype of the ideas I come up with. (This is my primary goal. I do not want to be the lead developer; I just want to be able to make a prototype.) Several questions I have in mind: Will it be fine if I stick to PHP (Laravel)? Should I be using RoR? I have come across a few online resources: Codecademy, Code School, Treehouse, and The Odin Project. These are within my affordability range. I can commit 2-3 months to learning programming intensively. What do you suggest I do?

    Read the article

  • Social Network (Help) [on hold]

    - by brunocascio
    I am facing a great "problem", so to speak, and I need opinions to decide. The problem is to create a social network without knowing the number of users who will use it (but planning as if there will be enough of them). The question is which language and framework to use; I do not mind having to learn new technologies and/or languages. I am choosing among:

        PHP (Laravel, Symfony, other?)
        Ruby (Ruby on Rails 4?)
        Javascript (Ember, Express, Locomotive, other?)
        Python (Django)
        Java (Grails, Play, other?)

    I have experience in PHP and its frameworks; I developed part of a project in Symfony, but I got tired of having to do a thousand configurations for everything. I know very little about Ruby, but it looks very easy; I can't speak to its performance. Javascript's paradigm is hard for me to get used to, and I don't know if it is at all wise to cover everything with Javascript. Django and Python: very poor knowledge. Java: experience in data structures and Android, but not the web. Regarding the database(s): in my head I have MongoDB, and it would take a lot to change my mind to another database as far as documentation, ease and performance go. But the frameworks' support for it is not at all clear. I also thought of mixing technologies, using one technology for the backend and another for the frontend, as I read the new social network Origo does: they use Symfony for REST and Javascript for the frontend (Backbone, Underscore and RequireJS). What do you recommend?

    Read the article

  • What language and tools can I use to create a simple game with child-lock (capture all key presses) for Windows? [closed]

    - by scw
    I'm writing an open source program that changes colors and plays sounds when keys are pressed. I want it to run in full-screen mode and have a child-lock so kids can't exit accidentally; I want it to capture all keys, including Ctrl+Alt+Delete. (So it's partially a game, but partially a Windows utility.) My target OS is Windows 7 (32 & 64 bit), keeping Windows 8 in mind. My options:

        Visual Studio using .NET C# Windows Forms - the devil I know, but not a "game" platform, which is why I'm asking this question.
        Visual Studio & XNA - I have never used XNA and am not sure of its capabilities or future support.
        Python - What flavor, what modules, what IDE? I've never done anything with Python, but I found a couple of similar open source projects in Python.
        Something else that I don't know about?

    Any input is appreciated.

    Read the article

  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    So far in this guide we have an IRM server up and running; however, I skipped over SSL configuration in the previous article because I wanted to cover it in more detail now. You can, if you wish, not bother setting up SSL, but considering this is a security technology it is worthwhile doing.

    Contents:
    - Setting up a one-way, self-signed SSL certificate in WebLogic
    - Setting up an official SSL certificate in Apache 2.x
    - Configuring Apache to proxy traffic to the IRM server

    There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly with the WebLogic Server running the IRM service. However, in a production environment, and for some proof-of-concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service, and this article will cover the configuration of SSL in WebLogic and also in Apache. As in past articles, we are going to use two host names in the configuration below: irm.company.com will refer to the public Apache server, and irm.company.internal will refer to the internal WebLogic IRM server.

    Setting up a one-way, self-signed SSL certificate in WebLogic

    First let's look at creating a simple self-signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment; the downside is that no browser is going to trust this certificate, and you'll need to manually install it onto any machine communicating with the server. This is fine for development, or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use, and I explain that in the next section. For now, let's go through creating, installing and testing a self-signed certificate. We use a Java library to create the certificates; open a console and run the following commands. Note you should choose your own secure passwords wherever you see password below.

        [oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh
        [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
        [oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo"
        [oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password
        [oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA

    We now have two Java keystores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain the keys and certificates we will use in WebLogic Server. Next we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running, log into the WebLogic Console at http://irm.company.internal:7001/console and do the following: in the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1.
    If the IRM server is running, shut it down either by hitting CONTROL + C in the console window it was started from, or by switching to the CONTROL tab, selecting IRM_server1 and then selecting the Shutdown menu, then Force Shutdown Now. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses its own demo identity and trust; we are now going to switch to the self-signed ones we've just created. Select the Change button, switch to Custom Identity and Custom Trust, and hit Save. Now complete the resulting fields; the settings I've used on my evaluation server are below.

    Identity
    - Custom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks
    - Custom Identity Keystore Type: JKS
    - Custom Identity Keystore Passphrase: password
    - Confirm Custom Identity Keystore Passphrase: password

    Trust
    - Custom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks
    - Custom Trust Keystore Type: JKS
    - Custom Trust Keystore Passphrase: password
    - Confirm Custom Trust Keystore Passphrase: password

    Now click on the SSL tab for IRM_server1 and enter the alias and passphrase; in my demo the details are:

    Identity
    - Private Key Alias: trustself
    - Private Key Passphrase: password
    - Confirm Private Key Passphrase: password

    And hit Save. Now let's test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server; a quick reminder on how to do this:

        [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin
        [oracle@irm /] ./startManagedWeblogic IRM_server1

    Once it is running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.internal:16101/irm_rights. Note that in the example image on the right the port is 7002, because that system has the IRM services installed on the Admin server, which isn't typical (or advisable); your system is going to have a separate managed server listening on port 16101. Once you open this address you will notice that your browser complains that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and, more annoyingly, from the IRM Desktop. If you plan on always using a self-signed certificate, it is worth adding it to the Windows certificate store so that you do not keep being told the certificate is not trusted when accessing sealed content. Follow these instructions (which are for Internet Explorer 8; they may vary for your version of IE):

    1. Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.internal:16101/irm_rights.
    2. IE will complain about the certificate; click on Continue to this website (not recommended).
    3. From the IE Tools menu select Internet Options, and from the resulting dialog select Security, then click on Trusted Sites and then the Sites button.
    4. Add to the list of trusted sites a URL which matches the server you are accessing, e.g. https://irm.company.internal/, and select OK.
    5. Now refresh the page you were accessing; next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self-signed certificate, and the Install Certificate... button should be enabled.
    Click on this to start the wizard. Click Next and you'll be asked where the certificate should be installed; change the option to Place all certificates in the following store, select Browse, choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate; answer Yes. You also need to import the root signing certificate into the same location, so once again select the red Certificate Error option, and this time when viewing the certificate switch to the Certification Path tab, where you should see a CertGenCAB certificate. Select this, click on View Certificate, and go through the same process as above to import that certificate into the store. Finally, close all instances of the IE browser and access the IRM server URL again; this time you should not receive any errors.

    Setting up an official SSL certificate in Apache 2.x

    At this point we have an IRM server that you can communicate with over SSL. However, this certificate isn't trusted by any browser, because its path of trust doesn't end in a recognized certificate authority (CA), and you are communicating directly with the WebLogic Server over a non-standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy it to the WebLogic server. The diagram below shows a very simplified view of this type of deployment. What I'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server, and to use a real SSL certificate from an official CA. The first step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service on Oracle Enterprise Linux 5 update 3 with Apache 2.2.3, which comes with the OpenSSL and mod_ssl components. Before purchasing an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign; for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL and the commands are:

        [oracle@irm /] cd /usr/bin
        [oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048
        Generating RSA private key, 2048 bit long modulus
        ............................+++
        .........+++
        e is 65537 (0x10001)
        Enter pass phrase for irm-apache-server.key:
        Verifying - Enter pass phrase for irm-apache-server.key:
        [oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr
        Enter pass phrase for irm-apache-server.key:
        You are about to be asked to enter information that will be incorporated
        into your certificate request.
        What you are about to enter is what is called a Distinguished Name or a DN.
        There are quite a few fields but you can leave some blank.
        For some fields there will be a default value;
        if you enter '.', the field will be left blank.
        -----
        Country Name (2 letter code) [GB]:US
        State or Province Name (full name) [Berkshire]:CA
        Locality Name (eg, city) [Newbury]:San Francisco
        Organization Name (eg, company) [My Company Ltd]:Oracle
        Organizational Unit Name (eg, section) []:Security
        Common Name (eg, your name or your server's hostname) []:irm.company.com
        Email Address []:[email protected]

        Please enter the following 'extra' attributes
        to be sent with your certificate request
        A challenge password []:testing
        An optional company name []:

    You must make sure to remember the pass phrase you used in the initial key generation; you will need it later when configuring Apache. In the /usr/bin directory there are now two new files. irm-apache-server.csr contains our certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files: your server certificate, and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust. Next we need to configure Apache to use these files. Typically there is an ssl.conf file where all the SSL configuration is done; on my Oracle Enterprise Linux server this file is located at /etc/httpd/conf.d/ssl.conf, and I've added the following lines.

        <VirtualHost irm.company.com>
          # Setup SSL for irm.company.com
          ServerName irm.company.com
          SSLEngine On
          SSLCertificateFile /oracle/secure/irm.company.com.crt
          SSLCertificateKeyFile /oracle/secure/irm.company.com.key
          SSLCertificateChainFile /oracle/secure/gd_bundle.crt
        </VirtualHost>

    After restarting Apache (apachectl restart) I can now attempt to connect to the Apache server in a web browser at https://irm.company.com/. If all is configured correctly I should see an Apache test page delivered over HTTPS.

    Configuring Apache to proxy traffic to the IRM server

    The final piece in setting up SSL is to have Apache proxy requests for the IRM server, but do so securely. The requests to Apache will be over HTTPS using a legitimate certificate, but we can also configure Apache to proxy these requests internally across to the IRM server using SSL with the self-signed certificate we generated at the start of this article. To do this proxying we use the WebLogic Web Server plugin for Apache, which you can download here from Oracle. Download the zip file and extract it onto the server. The extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32-bit on a 64-bit Oracle Enterprise Linux server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see the version and whether it is 32- or 64-bit. Mine is a 32-bit server, so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. Then, from the resulting lib folder, copy the file mod_wl.so into /usr/lib/httpd/modules/. First we want to test that the plugin will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom.
        LoadModule weblogic_module modules/mod_wl.so

        <IfModule mod_weblogic.c>
           WebLogicHost irm.company.internal
           WebLogicPort 16100
           WLLogFile /tmp/wl-proxy.log
        </IfModule>

        <Location /irm_rights>
           SetHandler weblogic-handler
        </Location>
        <Location /irm_desktop>
           SetHandler weblogic-handler
        </Location>
        <Location /irm_sealing>
           SetHandler weblogic-handler
        </Location>
        <Location /irm_services>
           SetHandler weblogic-handler
        </Location>

    Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic managed server. Note that above I have included all four of the Locations you might wish to proxy: /irm_rights is the URL to the management website, /irm_desktop is the URL the IRM Desktop uses to communicate, /irm_sealing is for web services based document sealing, and /irm_services is for IRM server web services. The last two are typically only used when you have the IRM server integrated with another application, and it is unlikely you'd be accessing these resources from the public-facing Apache server; however, just in case, I've mentioned them above.

    Now let's enable SSL communication from Apache to WebLogic. In the zip file we extracted there were some more modules we need to copy into the Apache folder. Looking back in the lib folder that we extracted, copy the following files into /usr/lib/httpd/modules/:

        libwlssl.so
        libnnz11.so
        libclntsh.so.11.1

    The documentation states that this should be all you need to do, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH pointing to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the error:

        [crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0

    So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined; if so, simply add this path to it.

        LD_LIBRARY_PATH=/usr/lib/httpd/modules/
        export LD_LIBRARY_PATH

    The WebLogic plugin uses an Oracle Wallet to store the required certificates, so you'll need to copy the self-signed certificate from the IRM server over to the Apache server. Copy MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates, in my example /oracle/secure. It's worth mentioning that these files should ONLY be readable by root (the user Apache runs as). Now let's create an Oracle Wallet and import the self-signed certificate from the IRM server. The orapki tool was included in the bin folder of the Apache 1.1 plugin zip you extracted.

        orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only
        orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only

    Finally, change the httpd.conf to reflect that we want the WebLogic Apache plugin to use HTTPS/SSL rather than plain HTTP.

        <IfModule mod_weblogic.c>
           WebLogicHost irm.company.internal
           WebLogicPort 16101
           SecureProxy ON
           WLSSLWallet /oracle/secure/my-wallet
           WLLogFile /tmp/wl-proxy.log
        </IfModule>

    Then restart Apache once more and go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights.
    At this point you have a fully functional Oracle IRM service; the next step is to create a sealed document and test the entire system.
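
    As an optional extra (not part of the original steps), a quick hedged way to sanity-check each SSL hop from a console is to have openssl print the certificate chain each listener presents:

        openssl s_client -connect irm.company.com:443 </dev/null          # Apache, CA-signed certificate
        openssl s_client -connect irm.company.internal:16101 </dev/null   # WebLogic, self-signed certificate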

    Read the article

  • The Art of Productivity

    - by dwahlin
    Getting things done has always been a challenge regardless of gender, age, race, skill, or job position. No matter how hard some people try, they end up procrastinating tasks until the last minute. Some people simply focus better when they know they’re out of time and can’t procrastinate any longer. How many times have you put off working on a term paper in school until the very last minute? With only a few hours left, your mental energy and focus seem to kick into high gear, especially as you realize that you either get the paper done now or risk failing. It’s amazing how a little pressure can turn into a motivator and allow our minds to focus on a given task. Some people seem to specialize in procrastinating just about everything they do, while others tend to be the “doers” who get a lot done and ultimately rise up the ladder at work. What’s the difference between these types of people? Is it pure laziness, or are other factors at play? I think that some people are certainly more motivated than others, but I also think a lot of it is based on the process that “doers” tend to follow, whether knowingly or unknowingly. While I’ve certainly fought battles with procrastination, I’ve always had a knack for being able to get a lot done in a relatively short amount of time. I think a lot of my “get it done” attitude goes back to the strong work ethic my parents instilled in me at a young age. I remember my dad saying, “You need to learn to work hard!” when I was around 5 years old. I remember that moment specifically because I was on a tractor with him the first time I heard it, while he was trying to move some large rocks into a pile. The tractor was big but so were the rocks, and my dad had to balance the tractor perfectly so that it didn’t tip forward too far. It was challenging, somewhat tedious work, but my dad finished the task and taught me a few important lessons along the way, including persistence, the importance of having a skill, and getting the job done right without skimping along the way. In this post I’m going to list a few of the techniques and processes I follow that I hope may be beneficial to others. I blogged about the general concept back in 2009 but thought I’d share some updated information and lessons learned since then. Most of the ideas that follow came from learning and refining my daily work process over the years. However, since most of the ideas are common sense (at least in my opinion), I suspect they can be found in other productivity processes that are out there. Let’s start off with one of the most important yet simple tips.

    Start Each Day with a List

    What are you planning to get done today? Do you keep track of everything in your head or rely on your calendar? While most of us think that we’re pretty good at managing “to do” lists strictly in our heads, you might be surprised at how effective writing out lists can be. By writing out tasks you’re forced to focus on the most important tasks to accomplish that day, commit yourself to those tasks, and have an easy way to track what was supposed to get done and what actually got done. Start every morning by making a list of specific tasks that you want to accomplish throughout the day. I’ll even go so far as to fill in times when I’d like to work on tasks if I have a lot of meetings or other events tying up my calendar on a given day.
    I’m not a big fan of using paper, since I type a lot faster than I write (plus I write like a 3rd grader, according to my wife), so I use the Sticky Notes feature available in Windows. Here’s an example of yesterday’s sticky note: What do you add to your list? That’s the subject of the next tip.

    Focus on Small Tasks

    It’s no secret that focusing on small, manageable tasks is more effective than trying to focus on large and more vague tasks. When you make your list each morning, only add tasks that you can accomplish within a given time period. For example, if I only have 30 minutes blocked out to work on an article, I don’t list “Write Article”. If I do that, I’ll end up wasting 30 minutes stressing about how I’m going to get the article done in 30 minutes and ultimately get nothing done. Instead, I’ll list something like “Write Introductory Paragraphs for Article”. The next day I may add “Write first section of article” or something similarly small and manageable, something I’m confident that I can get done. You’ll find that once you’ve knocked out several smaller tasks, it’s easy to continue completing others, since you want to keep the momentum going. In addition to keeping my tasks focused and small, I also make a conscious effort to limit my list to 4 or 5 tasks initially. I’ve found that if I list more than 5 tasks I feel a bit overwhelmed, which hurts my productivity. It’s easy to add additional tasks as you complete others, and you get the added confidence boost of knowing that you’re being productive and getting things done as you remove tasks and add others.

    Getting Started is the Hardest (Yet Easiest) Part

    I’ve always found that getting started is the hardest part and one of the biggest contributors to procrastination. Getting started working on tasks is a lot like pushing a large rock to the bottom of a hill: it’s difficult to get the rock rolling at first, but once you manage to get it rocking some, it’s really easy to get it rolling on its way to the bottom. As an example, I’ve written 100s of articles for technical magazines over the years and have really struggled with the initial introductory paragraphs. Keep in mind that these are the paragraphs that don’t really add that much value (in my opinion anyway); they introduce the reader to the subject matter and nothing more. What a waste of time for me to sit there stressing about how to start the article! On more than one occasion I’ve spent more than an hour trying to come up with 2-3 paragraphs of text. Talk about a productivity killer! Whether you’re struggling with a writing task, some code for a project, an email, or other tasks, jumping in without thinking too much is the best way to get started, I’ve found. I’m not saying that you shouldn’t have an overall plan when jumping into a task, but on some occasions you’ll find that if you simply jump in and stop worrying about doing everything perfectly, things will flow more smoothly. For my introductory paragraph problem, I give myself 5 minutes to write out some general concepts about what I know the article will cover, and then spend another 10-15 minutes going back and refining that information. That way I actually have some ideas to work with rather than a blank sheet of paper. If I still find myself struggling, I’ll write the rest of the article first and then circle back to the introductory paragraphs once I’m done. To sum this tip up: jump into a task without thinking too hard about it.
    It’s better to get the rock at the top of the hill rocking a little than to do nothing at all, and you can always go back and refine your work.

    Learn a Productivity Technique and Stick to It

    There are a lot of different productivity programs and seminars out there being sold by companies. I’ve always laughed at how much money people spend on some of these motivational programs/seminars, because I think that being productive isn’t that hard if you create a reusable set of steps and processes to follow. That’s not to say that some of these programs/seminars aren’t worth the money, of course, because I know they’ve definitely benefited some people who have a hard time getting things done and staying focused. One of the best productivity techniques I’ve ever learned is called the “Pomodoro Technique”, and it’s completely free. This technique is an extremely simple way to manage your time without having to remember a bunch of steps, color coding mechanisms, or other processes. The technique was originally developed by Francesco Cirillo in the 80s and can be implemented with a simple timer. In a nutshell, here’s how the technique works:

    1. Pick a task to work on
    2. Set the timer to 25 minutes and work on the task
    3. Once the timer rings, record your time
    4. Take a 5 minute break
    5. Repeat the process

    Here’s why the technique works well for me:

    - It forces me to focus on a single task for 25 minutes. In the past I had no time goal in mind and just worked aimlessly on a task until I got interrupted or bored.
    - 25 minutes is a small enough chunk of time for me to stay focused.
    - Any distractions that may come up have to wait until after the timer goes off. If the distraction is really important then I stop the timer and record my time up to that point.
    - When the timer is running I act as if I only have 25 minutes total for the task (like you’re down to the last 25 minutes before turning in your term paper, frantically working to get it done), which helps me stay focused and turns it into a “beat the clock” type of game. It’s actually kind of fun if you treat it that way, and it really helps me focus on the task at hand.
    - I automatically know how much time I’m spending on a given task (more on this later) by using this technique.
    - I know that I have 5 minutes after each pomodoro (the 25 minute sprint) to waste on anything I’d like, including visiting a website or stepping away from the computer, which also helps me stay focused while the 25 minute timer is counting down.

    I use this technique so much that I decided to build a program for Windows 8 called Pomodoro Focus (I plan to blog about how it was built in a later post). It’s a Windows Store application that allows people to track tasks, productive time spent on tasks, interruption time experienced while working on a given task, and the number of pomodoros completed. If a time estimate is given when the task is initially created, Pomodoro Focus will also show the task completion percentage. I like it because it allows me to track my tasks, time spent on tasks (very useful in the consulting world), and even how much time I wasted on tasks (pressing the pause button while working on a task starts the interruption timer). I recently added a new feature that charts productive and interruption time for tasks, since I wanted to see how productive I was from week to week and month to month. A few screenshots from the Pomodoro Focus app are shown next; I had a lot of fun building it and use it myself as I work on tasks.
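
    For what it’s worth, the core loop needs almost no machinery. A hypothetical shell sketch (assuming the common notify-send desktop notifier is installed) shows how little the technique actually requires:

        #!/bin/bash
        # Toy pomodoro loop: 25 minute sprint, 5 minute break, repeat until interrupted
        while true; do
            echo "Pomodoro started at $(date +%H:%M)"
            sleep $((25 * 60))
            notify-send "Pomodoro complete" "Record your time, then take a 5 minute break"
            sleep $((5 * 60))
            notify-send "Break over" "Pick the next task"
        done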
    There are certainly many other productivity techniques and processes out there (and a slew of books describing them), but the Pomodoro Technique has been the simplest and most effective technique I’ve ever come across for staying focused and getting things done.

    Persistence is Key

    Getting things done is great, but one of the biggest lessons I’ve learned in life is that persistence is key, especially when you’re trying to get something done that at times seems insurmountable. Small tasks ultimately lead to larger tasks getting accomplished; however, it’s not all roses along the way, as some of the smaller tasks may come with their own share of bumps and bruises that lead to discouragement about the end goal and whether or not it is worth achieving at all. I’ve been on several long-term projects over my career as a software developer (I have one personal project going right now that fits well here) and found that repeating “Persistence is the key!” over and over to myself really helps. Not every project turns out to be successful, but if you don’t show persistence through the hard times you’ll never know if you succeeded or not. Likewise, if you don’t persistently stick to the process of creating a daily list, following a productivity process, and so on, then the odds of consistently staying productive aren’t good.

    Track Your Time

    How much time do you actually spend working on various tasks? If you don’t currently track the time spent answering emails, on phone calls, and working on various tasks, then you might be surprised to find that a task you thought was going to take 30 minutes ultimately took 2 hours. If you don’t track the time you spend working on tasks, how can you expect to learn from your mistakes, optimize your time better, and become more productive? That’s another reason why I like the Pomodoro Technique: it makes it easy to stay focused on tasks while also tracking how much time I’m working on a given task.

    Eliminate Distractions

    I blogged about this final tip several years ago but wanted to bring it up again. If you want to be productive (and ultimately successful at whatever you’re doing) then you can’t waste a lot of time playing games or on Twitter, Facebook, or other time-sucking websites. If you see an article you’re interested in that has no relation at all to the tasks you’re trying to accomplish, bookmark it and read it when you have some spare time (such as during a pomodoro break). Fighting the temptation to check your friends’ status updates on Facebook? Resist the urge and realize how much those types of activities are hurting your productivity and taking away from your focus. I’ll admit that eliminating distractions is still tough for me personally and something I have to constantly battle. But I’ve made a conscious decision to cut back on my visits and updates to Facebook, Twitter, Google+ and other sites. Sure, my Klout score has suffered as a result lately, but does anyone actually care about those types of scores aside from your online “friends” (few of whom you’ve actually met in person)? :-) Ultimately it comes down to self-discipline and how badly you want to be productive and successful in your career, life goals, hobbies, or whatever you’re working on. Rather than having your homepage take you to a time-wasting news site, game site, social site, picture site, or the like, how about adding something like the following as your homepage? Every time your browser opens you’ll see a personal message which helps keep you on the right track.
    You can download my uber-sophisticated homepage here if interested.

    Summary

    Is there a single set of steps that, if followed, can ultimately lead to productivity? I don’t think so, since one size has never fit all. Every person is different, works in their own unique way, and has their own set of motivators, distractions, and more. While I certainly don’t consider myself to be an expert on the subject of productivity, I do think that if you learn what steps work best for you and gradually refine them over time, you can come up with a personal productivity process that can serve you well. Productivity is definitely an “art” that anyone can learn with a little practice and persistence. You’ve seen some of the steps that I personally like to follow, and I hope you find some of them useful in boosting your productivity. If you have others you use, please leave a comment. I’m always looking for ways to improve.

    Read the article

  • SOA Suite Integration: Part 1: Building a Web Service

    - by Anthony Shorten
Over the next few weeks I will be posting blog entries outlining the SOA Suite integration of the Oracle Utilities Application Framework. This will illustrate how easy it is to integrate by providing some samples. I will use a consistent set of features as examples. The examples will be simple, and while they will not illustrate ALL the possibilities, they will illustrate the relative ease of integration. Think of them as a foundation. You can obviously build upon them. Now, to ease a few customers' minds, this series will certainly feature the latest version of SOA Suite and the latest version of Oracle Utilities Application Framework, but the principles will apply to past versions of both those products. So if you have Oracle SOA Suite 10g or are a customer of Oracle Utilities Application Framework V2.1 or above, most of what I will show you will work with those versions. It is just easier in Oracle SOA Suite 11g and Oracle Utilities Application Framework V4.x. This first posting will not feature SOA Suite at all but concentrate on the capability for the Oracle Utilities Application Framework to create Web Services you can use for integration. The XML Application Integration (XAI) component of the Oracle Utilities Application Framework allows product objects to be exposed as XML-based transactions or as Web Services (or both). XAI was written before Web Services became fashionable and has allowed customers of our products to provide a consistent interface into and out of our product line. XAI has been enhanced over the last few years to take advantage of the maturing landscape of Web Services in the marketplace, to a point where it is now easier to integrate with SOA infrastructure. There are a number of object types that can be exposed as Web Services:

Maintenance Objects – These are the lowest level objects that can be exposed as Web Services. Customers of past versions of the product will be familiar with XAI services based upon Maintenance Objects, as they used to be the only method of generating Web Services. These are still supported for backward compatibility but are starting to become less popular, as they were strict in their structure and were solely attribute based. To generate Maintenance Object based Web Services definitions you need to use the XAI Schema Editor component.

Business Objects – In Oracle Utilities Application Framework V2.1 we introduced the concept of Business Objects. These are site- or industry-specific objects that are based upon Maintenance Objects. These allow sites to respecify, in configuration, the structure and elements of a Maintenance Object and other Business Objects (they are true objects with support for inheritance, polymorphism, encapsulation etc.). These can be exposed as Web Services.

Business Services – As with Business Objects, we introduced Business Services in Oracle Utilities Application Framework V2.1, which allowed application services and query zones to be expressed as custom services. These can then be exposed as Web Services via the Business Service definition.

Service Scripts – As with Business Objects and Business Services, we introduced Service Scripts in Oracle Utilities Application Framework V2.1. These allow services and/or objects to be combined into complex objects, or simply expose common routines as callable scripts. These can also be defined as Web Services.

For the purpose of this series we will restrict ourselves to Business Objects. The techniques can apply to any of the objects discussed above.
Now, let's get to the important bit of this blog post, the creation of a Web Service. To build a Business Object, you first log on to the product and navigate to the Administration Menu by selecting the Admin Menu from the Menu action at the top left of the screen (next to Home). A popup menu will appear with the menus available. If you do not see the Admin menu then you do not have authority to use it. Here is an example: Navigate to the B menu and select the + symbol next to the Business Object menu item. This indicates that you want to ADD a new Business Object. This menu will appear if you are running Alphabetic mode in your installation (I almost forgot that point). You will be presented with the Business Object maintenance screen. You will fill out the following on the first tab (at a minimum):

Business Object – The name of the Business Object. Typically you will make it descriptive and also prefix it with CM to denote it as a customization (you can easily find it if you prefix it). As I am running this on my personal copy of the product, I will use my initials as the prefix and call the sample Web Service “AS-User”.

Description – A short description of the object to tell others what it is used for. For my example, I will use “Anthony Shorten’s User Object”.

Detailed Description – You can add a long description to help other developers understand your object. I am just going to specify “Anthony Shorten’s Test Object for SOA Suite Integration”.

Maintenance Object – As this Business Service is going to be based upon a Maintenance Object, I will specify the desired Maintenance Object. In this example, I have decided to use the Framework object USER. Now, I chose this for a number of reasons. It is meaningful, simple, and is across all our product lines. I could choose ANY maintenance object I wished to expose (including any custom ones, if I had them).

Parent Business Object – If I were not using a Maintenance Object but building a child Business Object against another Business Object, then I would specify the Parent Business Object here. I am not using Parents, so I will leave this blank. You either use Parent Business Object or Maintenance Object, not both.

Application Service – Business Objects, like other objects, are subject to security. You can attach an Application Service to an object to specify which groups of users (remember, services are attached to user groups, not users) have appropriate access to the object. I will use a default service provided with the product, F1-DFLTS, as this is just a demonstration, so I do not have to be too sophisticated about security.

Instance Control – This controls whether the object can create instances. You can specify a Business Object purely to hold rules. I am being simple here, so I will set it to Allow New Instances to allow the Business Object to be used to create, read, update and delete user records.

The rest of the tab I will leave empty as I want this to be a very simple object. Other options allow lots of flexibility. The contents should look like this: Before saving your work, you need to navigate to the Schema tab and specify the contents of your object. I will save some time. When you create an object the schema will only contain the basic root elements of the object (in fact only the schema tag is visible). When you go to the Schema tab, you will see a BO Schema zone on the dashboard with a solitary button. This will allow you to generate the schema from our metadata.
Click on the Generate button to generate a basic schema from the metadata. You will now see a schema with the element tags and references to the metadata of the Maintenance Object (in the mapField attribute). I could spend a while outlining all the ways you can change the schema with defaults, formatting, tagging, etc., but the online help has plenty of great examples to illustrate this. You can use the Schema Tips zone for more details of the available customizations. Note: The tags are generated from the language pack you have installed. The sample is English so the tags are in English (which is the base language of all installations). If you are using a language pack then the tags will be generated in the language of the user that generated the object. At this point you can save your Business Object by pressing the Save action. You now have a basic Business Object based on the USER maintenance object ready for use, but it is not defined as a Web Service yet. To do this you need to define the newly created Business Object as an XAI Inbound Service. The easiest and quickest way is to select + for XAI Inbound Service from the context menu on the Business Object maintenance screen. This will prepopulate the service definition with the following:

Adapter – This will be set to Business Adaptor. This indicates that the service is either Business Object, Business Service or Service Script based.

Schema Type – Whether the object is a Business Object, Business Service or Service Script. In this case it is a Business Object.

Schema Name – The name of the object. In this case it is the Business Object AS-User.

Active – Set to Yes. This means the service is available upon startup automatically. You can enable and disable services as needed.

Transaction Type – A default transaction type, as this is a Business Object service. More about this in later postings. In our case we use the default Read. This means that if we only specify data and not a transaction type, then the product will assume you want to issue a read against the object.

You need to fill in the following:

XAI Inbound Service – The name of the Web Service. Usually people use the same name as the underlying object, as in this example, but this can match your site's interfacing standards. By the way, you can define multiple XAI Inbound Services/Web Services against the same object if you want.

Description and Detail Description – Documentation for your Web Service. I just supplied some basic documentation for this demonstration.

You can now save the service definition. Note: There are lots of other options on this screen that allow the behavior of your service to be specified. I will leave them blank for now. When you save the service you are issued with two new pieces of information. XAI Inbound Service Id is a randomly generated identifier used internally by the XAI Servlet. WSDL URL is the WSDL standard URL used for integration. We will take advantage of that in later posts. An example of the definition is shown below: Now you have defined the service, but it will only be available upon the next server restart or when you flush the data cache. XAI Inbound Services are cached for performance, so the cache needs to be told of this new service. To refresh the cache you can use the Admin –> X –> XAI Command menu item. From the command dropdown select Refresh Registry and press Send Command. You will see an XML of the command sent to the server (the presence of the XML means it is finished).
If you have an error around the authorization, then check your default user and password settings on the XAI Options menu item. Be careful with flushing the cache, as the cache is shared (unless of course you are the only Web Service user on the system – in that case it only affects you). The Web Service is NOW available to be used. To perform a simple test of your new Web Service, navigate to the Admin –> X –> XAI Submission menu item. You will see an open XML request tab. You need to type in the request XML you want to test in the Main tab. The first tag is the XAI Inbound Service name and the elements are as per your schema (minus the schema tag itself, as that is only used internally). My example is as follows (I want to return the details of user SYSUSER) – remember to close tags. Hitting the Save button will issue the XML and return the response according to the Business Object schema. Now, before you panic: you may have noticed that it did not ask for credentials. It propagates the online credentials to the service call on this function. You now have a Web Service you can use for integration. We will reuse this information in subsequent posts. The process I just described can be used for ANY object in the system you want to expose. This whole process at a minimum can take under a minute. Obviously I only showed the basics, but you can at least get an appreciation of the ease of defining a Web Service (just by using a browser). The next posts now build upon this. Hope you enjoyed the post.
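As a quick preview of the kind of integration the next posts will cover, here is a rough sketch of invoking the new service from a simple client. To be clear, this is my own illustrative sketch rather than anything from the product documentation: the endpoint URL is a placeholder (your XAI servlet URL will differ), and the element names are assumptions based on the AS-User example above.

// Minimal sketch: POST an XAI request document to the XAI servlet.
// The endpoint and element names below are placeholders/assumptions.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class XaiClientSketch
{
    static async Task Main()
    {
        // Hypothetical XAI servlet endpoint - substitute your environment's URL.
        const string endpoint = "http://myserver:6500/spl/XAIApp/xaiserver";

        // The outer tag is the XAI Inbound Service name; inner elements follow
        // the Business Object schema. With no transaction type specified, the
        // service's default (Read) applies, as configured earlier.
        const string requestXml =
            "<AS-User>" +
            "<user>SYSUSER</user>" +
            "</AS-User>";

        using var client = new HttpClient();
        using var content = new StringContent(requestXml, Encoding.UTF8, "text/xml");

        HttpResponseMessage response = await client.PostAsync(endpoint, content);

        // The response echoes the schema populated with the user's details.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}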

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column…

Example Table

Here’s a T-SQL script to create that table and populate it with 1,000 rows:

CREATE TABLE dbo.LOBtest
(
    pk INTEGER IDENTITY NOT NULL,
    some_value INTEGER NULL,
    lob_data VARCHAR(MAX) NULL,
    another_column CHAR(5) NULL,
    CONSTRAINT [PK dbo.LOBtest pk]
        PRIMARY KEY CLUSTERED (pk ASC)
);
GO

DECLARE @Data VARCHAR(MAX);
SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

WITH Numbers (n) AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
    FROM master.sys.columns C1, master.sys.columns C2
)
INSERT LOBtest WITH (TABLOCKX)
    (some_value, lob_data)
SELECT TOP (1000)
    N.n, @Data
FROM Numbers N
WHERE N.n <= 1000;

Test 1: A Simple Update

Let’s run a query to subtract one from every value in the some_value column:

UPDATE dbo.LOBtest WITH (TABLOCKX)
SET some_value = some_value - 1;

As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT).

Test 2: Simple Update with an Index

Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column:

CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)]
ON dbo.LOBtest (some_value)
INCLUDE (lob_data)
WITH (FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON);

This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again:

UPDATE dbo.LOBtest WITH (TABLOCKX)
SET some_value = some_value - 1;

We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_value in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_value columns.  Cool.
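Incidentally, if you want to check the operator output lists on your own system rather than taking my word for it, you can capture the actual execution plan together with the I/O and timing numbers. A quick sketch (my addition here, using the same update as above):

SET STATISTICS XML, IO, TIME ON;

UPDATE dbo.LOBtest WITH (TABLOCKX)
SET some_value = some_value - 1;

SET STATISTICS XML, IO, TIME OFF;

-- The STATISTICS XML result includes the actual plan; each operator's
-- OutputList element names exactly the columns it carries, so you can
-- confirm that lob_data is absent from the scan in tests 1 and 2.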
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique.

Test 3: Simple Update with a Unique Index

Here’s the script to create a new unique index, and drop the old one:

CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)]
ON dbo.LOBtest (some_value)
INCLUDE (lob_data)
WITH (FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON);
GO

DROP INDEX [IX dbo.LOBtest some_value (lob_data)]
ON dbo.LOBtest;

Remember that SQL Server only enforces uniqueness on index keys (the some_value column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see:

UPDATE dbo.LOBtest WITH (TABLOCKX)
SET some_value = some_value - 1;

Whoa!  Now look at the elapsed time and logical reads:

Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.
CPU time = 172 ms, elapsed time = 16172 ms.

Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column?

The Query Plan

The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads?

Updates that Sneakily Read Data

We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky!

Why the LOB Data Is Needed

This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates.

Beating the Set-Based Update with a Cursor

One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data:

SET NOCOUNT ON;
SET STATISTICS XML, IO, TIME OFF;

DECLARE @PK INTEGER, @StartTime DATETIME;
SET @StartTime = GETUTCDATE();

DECLARE curUpdate CURSOR
    LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS
FOR
    SELECT L.pk
    FROM LOBtest L
    ORDER BY L.pk ASC;

OPEN curUpdate;

WHILE (1 = 1)
BEGIN
    FETCH NEXT FROM curUpdate INTO @PK;

    IF @@FETCH_STATUS = -1 BREAK;
    IF @@FETCH_STATUS = -2 CONTINUE;

    UPDATE dbo.LOBtest
    SET some_value = some_value - 1
    WHERE CURRENT OF curUpdate;
END;

CLOSE curUpdate;
DEALLOCATE curUpdate;

SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE());

That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!). I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead.

Clustered Indexes

A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes:

IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL
    DROP TABLE dbo.LOBtest;
GO

CREATE TABLE dbo.LOBtest
(
    pk INTEGER NOT NULL,
    some_value INTEGER NULL,
    lob_data VARCHAR(MAX) NULL,
    another_column CHAR(5) NULL,
    CONSTRAINT [PK dbo.LOBtest pk]
        PRIMARY KEY CLUSTERED (pk ASC)
);
GO

DECLARE @Data VARCHAR(MAX);
SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

WITH Numbers (n) AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
    FROM master.sys.columns C1, master.sys.columns C2
)
INSERT LOBtest WITH (TABLOCKX)
    (pk, some_value, lob_data)
SELECT TOP (1000)
    N.n, N.n, @Data
FROM Numbers N
WHERE N.n <= 1000;

Now here’s a query to modify the cluster keys:

UPDATE dbo.LOBtest
SET pk = pk + 1;

The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain):

Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.
Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.
SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms.

Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient.) There are lots more fun combinations to try that I don’t have space for here.
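If you want to try one of those combinations yourself, here is a sketch of the non-unique clustered index variant mentioned above (my addition; the index name is made up): the table is rebuilt with a plain clustered index in place of the clustered primary key, so the cluster key no longer has to be unique.

-- Rebuild the table with a non-unique clustered index instead of
-- a clustered primary key (repopulate it as before afterwards).
IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL
    DROP TABLE dbo.LOBtest;
GO

CREATE TABLE dbo.LOBtest
(
    pk INTEGER NOT NULL,
    some_value INTEGER NULL,
    lob_data VARCHAR(MAX) NULL,
    another_column CHAR(5) NULL
);
GO

CREATE CLUSTERED INDEX [CX dbo.LOBtest pk]
ON dbo.LOBtest (pk ASC);
GO

-- Re-run: UPDATE dbo.LOBtest SET pk = pk + 1;
-- The plan should now carry just the cluster key (plus uniqueifier),
-- with no LOB reads.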
Final Thoughts

If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps.

Paul White
Email: [email protected]
Twitter: @PaulWhiteNZ

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and there was even acknowledgment that users would get the data they want, even if it meant they would have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.

Collaborative BI

I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time.
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."

The Microsoft BI Stack in General

A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"

Expo Hall

I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.

Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

    Read the article

  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
In the first week of concurrent collections, I began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  The last post discussed the ConcurrentDictionary<T>.  Finally this week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux.

Recap

As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization.  With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace.  Of these, the final concurrent collection we will examine is the ConcurrentBag and a very useful wrapper class called the BlockingCollection. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this informative whitepaper by the Microsoft Parallel Computing Platform team here.

ConcurrentBag<T> – Thread-safe unordered collection

Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries.  Items can be added and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order.  This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness – that is, it's possible new items could be consumed before older items given the right circumstances for a period of time. So why would you ever want a container that can be unfair?  Well, to look at it another way, you can use a ConcurrentQueue and get the fairness, but it comes at a cost in that the ordering rules and synchronization required to maintain that ordering can affect scalability a bit.  Thus sometimes the bag is great when you want the fastest way to get the next item to process, and don't care what item it is or how long it's been waiting. The way that the ConcurrentBag works is to take advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0) so that each thread using the bag has a list local to just that thread.  This means that adding to or removing from a thread-local list requires very low synchronization.  The problem comes in where a thread goes to consume an item but its local list is empty.  In this case the bag performs "work-stealing" where it will rob an item from another thread that has items in its list.  This requires a higher level of synchronization, which adds a bit of overhead to the take operation. So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less-than-ideal in situations where some threads are dedicated producers and the other threads are dedicated consumers, because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items.
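Before we get to the curiosities and the longer worked example below, here is a minimal sketch of the basic API (my own snippet, not from the original post) – note that the order in which items come back out is deliberately unspecified:

using System;
using System.Collections.Concurrent;

static class BagBasics
{
    static void Main()
    {
        var bag = new ConcurrentBag<string>();

        // Add() places items on the calling thread's local list - very cheap.
        bag.Add("alpha");
        bag.Add("beta");
        bag.Add("gamma");

        // TryTake() prefers the local list and steals from other threads
        // only when the local list is empty; retrieval order is unspecified.
        while (bag.TryTake(out var item))
        {
            Console.WriteLine(item);
        }
    }
}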
Like the other concurrent collections, there are some curiosities to keep in mind:

IsEmpty(), Count, ToArray(), and GetEnumerator() lock the collection – Each of these needs to take a snapshot of the whole bag, thus they tend to be more expensive and cause Add() and Take() operations to block.

ToArray() and GetEnumerator() are static snapshots – Because each is based on a snapshot, it will not show subsequent updates after the snapshot.

Add() is lightweight – Since it adds to the thread-local list, there is very little overhead on Add.

TryTake() is lightweight if items are in the thread-local list – As long as items are in the thread-local list, TryTake() is very lightweight, much more so than ConcurrentStack() and ConcurrentQueue(); however, if the local thread list is empty, it must steal work from another thread, which is more expensive.

Remember, a bag is not ideal for all situations; it is mainly ideal for situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it will complete.  The main point is that the bag works best when each thread both takes and adds items. For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold.  Yes, obviously we could easily do this with a log function, but bear with me while I use this contrived example for simplicity. So let's say we have a work function that will take a Tuple out of a bag; this Tuple will contain two ints.  The first int is the original number, and the second int is the current power (exponent) of that number.  So we could load our bag with the initial values (let's say we want to know the highest power of each of 2, 3, 5, and 7 under 100):

var bag = new ConcurrentBag<Tuple<int, int>>
{
    Tuple.Create(2, 1),
    Tuple.Create(3, 1),
    Tuple.Create(5, 1),
    Tuple.Create(7, 1)
};

Then we can create a method that, given the bag, will take out an item and try the next power:

public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int, int>> bag, int threshold)
{
    Tuple<int, int> pair;

    // while there are items to take, this will prefer local first, then steal if no local
    while (bag.TryTake(out pair))
    {
        // look at next power
        var result = Math.Pow(pair.Item1, pair.Item2 + 1);

        if (result < threshold)
        {
            // if smaller than threshold bump power by 1
            bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1));
        }
        else
        {
            // otherwise, we're done
            Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.",
                pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold);
        }
    }
}

Now that we have this, we can load up this method as an Action into our Tasks and run it:

// create array of tasks, start all, wait for all
var tasks = new[]
{
    new Task(() => FindHighestPowerUnder(bag, 100)),
    new Task(() => FindHighestPowerUnder(bag, 100)),
};

Array.ForEach(tasks, t => t.Start());

Task.WaitAll(tasks);

Totally contrived, I know, but keep in mind the main point!  When you have a thread or task that operates on an item, and then puts it back for further consumption – or decomposes an item into further sub-items to be processed – you should consider a ConcurrentBag, as the thread-local lists will allow for quick processing.
However, if you need ordering or if your processes are dedicated producers or consumers, this collection is not ideal.  As with anything, you should performance test, as your mileage will vary depending on your situation!

BlockingCollection<T> – A producers & consumers pattern collection

The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producers and consumers paradigm to any collection that implements the interface IProducerConsumerCollection<T>.  If you don't specify one at the time of construction, it will use a ConcurrentQueue<T> as its underlying store. If you don't want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not).  In addition, you are of course free to create your own implementation of the interface. So, for those who don't remember the classical producers and consumers computer-science problem, the gist of it is that you have one (or more) processes that are creating items (producers) and one (or more) processes that are consuming these items (consumers).  Now, the crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size.  Thus if a producer creates an item, but there is no space to store it, it must wait until an item is consumed.  Also if a consumer goes to consume an item and none exists, it must wait until an item is produced. The BlockingCollection makes it trivial to implement any standard producers/consumers process set by providing that "bin" where the items can be produced into and consumed from with the appropriate blocking operations.  In addition, you can specify whether the bin should have a limited size or can be (theoretically) unbounded, and you can specify timeouts on the blocking operations. As far as your choice of "bin", for the most part the ConcurrentQueue is the right choice because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced.  You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter. So let's look at some of the methods of note in BlockingCollection:

BoundedCapacity returns the capacity of the "bin" – If the bin is unbounded, the capacity is int.MaxValue.

Count returns an internally-kept count of items – This makes it O(1), but if you modify the underlying collection directly (not recommended) it is unreliable.

CompleteAdding() is used to cut off further adds – This sets IsAddingCompleted and begins to wind down consumers once empty.

IsAddingCompleted is true when producers are "done" – Once you are done producing, you should complete the add process to alert consumers.

IsCompleted is true when producers are "done" and the "bin" is empty – Once you mark the producers done, and all items are removed, this will be true.

Add() is a blocking add to the collection – If the bin is full, it will wait till space frees up.

Take() is a blocking remove from the collection – If the bin is empty, it will wait until an item is produced or adding is completed.

GetConsumingEnumerable() is used to iterate and consume items – Unlike the standard enumerator, this one consumes the items instead of just iterating.

TryAdd() attempts an add but does not block completely – If adding would block, it returns false instead; you can specify a TimeSpan to wait before stopping.
TryTake() attempts a take but does not block completely – Like TryAdd(), if taking would block, it returns false instead; you can specify a TimeSpan to wait.

Note the use of CompleteAdding() (shown in the producer below) to signal the BlockingCollection that nothing else should be added.  This means that any attempts to TryAdd() or Add() after the collection is marked completed will throw an InvalidOperationException.  In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty, and then Take() will throw the InvalidOperationException and TryTake() will return false. So let's create a simple program to try this out.  Let's say that you have one process that will be producing items, but a slower consumer process that handles them.  This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded):

var bin = new BlockingCollection<int>(5);

Now, we create a method to produce items:

public static void ProduceItems(BlockingCollection<int> bin, int numToProduce)
{
    for (int i = 0; i < numToProduce; i++)
    {
        // try for 10 ms to add an item
        while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10)))
        {
            Console.WriteLine("Bin is full, retrying...");
        }
    }

    // once done producing, call CompleteAdding()
    Console.WriteLine("Adding is completed.");
    bin.CompleteAdding();
}

And one to consume them:

public static void ConsumeItems(BlockingCollection<int> bin)
{
    // This will only be true if CompleteAdding() was called AND the bin is empty.
    while (!bin.IsCompleted)
    {
        int item;

        if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10)))
        {
            Console.WriteLine("Bin is empty, retrying...");
        }
        else
        {
            Console.WriteLine("Consuming item {0}.", item);
            Thread.Sleep(TimeSpan.FromMilliseconds(20));
        }
    }
}

Then we can fire them off:

// create one producer and two consumers
var tasks = new[]
{
    new Task(() => ProduceItems(bin, 20)),
    new Task(() => ConsumeItems(bin)),
    new Task(() => ConsumeItems(bin)),
};

Array.ForEach(tasks, t => t.Start());

Task.WaitAll(tasks);

Notice that the producer is faster than the consumer, thus it should be hitting a full bin often and displaying the message after it times out on TryAdd():

Consuming item 0.
Consuming item 1.
Bin is full, retrying...
Bin is full, retrying...
Consuming item 3.
Consuming item 2.
Bin is full, retrying...
Consuming item 4.
Consuming item 5.
Bin is full, retrying...
Consuming item 6.
Consuming item 7.
Bin is full, retrying...
Consuming item 8.
Consuming item 9.
Bin is full, retrying...
Consuming item 10.
Consuming item 11.
Bin is full, retrying...
Consuming item 12.
Consuming item 13.
Bin is full, retrying...
Bin is full, retrying...
Consuming item 14.
Adding is completed.
Consuming item 15.
Consuming item 16.
Consuming item 17.
Consuming item 19.
Consuming item 18.

Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true, and the consumers will exit.

Summary

The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items.  In this way, it will choose to consume its own work if available, and then steal if not.
However, in situations where you want fair consumption or ordering, or in situations where the producers and consumers are distinct processes, the bag is not optimal. The BlockingCollection is a great wrapper around the concurrent queue, stack, and bag that allows you to add producer and consumer semantics easily, including waiting when the bin is full or empty. That's the end of my dive into the concurrent collections.  I'd also strongly recommend, once again, that you read this excellent Microsoft white paper that goes into much greater detail on the efficiencies you can gain using these collections judiciously (here).

Technorati Tags: C#, .NET, Concurrent Collections, Little Wonders

    Read the article

< Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >