Search Results

Search found 33819 results on 1353 pages for 'jump list launcher'.


  • Keeping up with Technology

    - by kennedysteve
    If you're like me, you have a hard time keeping up with all the technologies out there. The reality is there are too many new ones (languages, methodologies, tools, etc.). One of the ways I try to keep up with everything is by using good ol' RSS feeds in conjunction with Google Reader. Google Reader is both an online aggregator of RSS feeds and has a good companion app on Google Android. The nicest part of Google Reader for me is the "All Listings" view, which gives me a reverse-chronological view of ALL articles (mixed together) regardless of the actual RSS feed. This way, I get to see the newest articles first. I can then choose to hide the articles I've viewed, etc. Here is a list of my RSS feeds. Admittedly, some of these are all over the spectrum, but you might find one or two interesting.
    - .NET Rocks! | RSS = http://feeds.feedburner.com/netRocksFullMp3Downloads | Main Web Site = http://www.dotnetrocks.com
    - Channel 9 | RSS = http://channel9.msdn.com/Feeds/RSS | Main Web Site = http://channel9.msdn.com/
    - CodePlex | RSS = http://www.codeplex.com/site/feeds/rss | Main Web Site = http://www.codeplex.com
    - Connected Show Developer Podcast! | RSS = http://feeds.connectedshow.com/ConnectedShow | Main Web Site = http://www.ConnectedShow.com/
    - dnrTV | RSS = http://feeds.feedburner.com/DnrtvWmv?format=xml | Main Web Site = http://dnrtv.com
    - ebookshare | RSS = http://www.ebookshare.net/feed/ | Main Web Site = http://www.ebookshare.net
    - Geekswithblogs.net | RSS = http://feeds.feedburner.com/geekswithblogs | Main Web Site = http://geekswithblogs.net/mainfeed.aspx
    - Gmail Blog | RSS = http://feeds.feedburner.com/OfficialGmailBlog?format=xml | Main Web Site = http://gmailblog.blogspot.com/
    - Google Mobile Blog | RSS = http://feeds.feedburner.com/OfficialGoogleMobileBlog | Main Web Site = http://googlemobile.blogspot.com/
    - Herding Code | RSS = http://feeds.feedburner.com/herdingcode | Main Web Site = http://herdingcode.com
    - LearnVisualStudio.NET Videos | RSS = http://www.learnvisualstudio.net/videos.rss | Main Web Site = http://www.learnvisualstudio.net/
    - Microsoft Learning Upcoming Titles | RSS = http://learning.microsoft.com/rss/en-US/upcomingtitles?brand=Learning | Main Web Site = http://learning.microsoft.com:80/rss/en-US/upcomingtitles?brand=Learning
    - MS On-demand Webcasts | RSS = http://www.microsoft.com/communities/rss.aspx?&Title=On-Demand+Webcasts&RssTitle=Microsoft+Webcasts%3A+On-Demand+Webcasts&CMTYSvcSource=MSCOMMedia&WebNewsURL=http%3A%2F%2Fwww.microsoft.com%2Fevents%2FEventDetails.aspx&CMTYRawShape=list&Params=+%0D%0A%09~CMTYDataSvcParams%5E%0D%0A%09~arg+Name%3D'EventType'+Value%3D'OnDemandWebcast'%2F%5E%0D%0A%09~arg+Name%3D'ProviderID'+Value%3D'A6B43178-497C-4225-BA42-DF595171F04C'%2F%5E%0D%0A%09~arg+Name%3D'StartDate'+Value%3D'06%2F30%2F2006'%2F%5E%0D%0A%09~arg+Name%3D'EndDate'+Value%3D'Now%2B0'%2F%5E%0D%0A%09~%2FCMTYDataSvcParams%5E+&NumberOfItems=100 | Main Web Site = http://www.microsoft.com/events/default.mspx
    - MS Podcasts for Devs | RSS = http://www.microsoft.com/events/podcasts/default.aspx?podcast=rss&audience=Audience-e5381407-359f-4922-97d0-0237af790eee&pageId=x40 | Main Web Site = http://www.microsoft.com/events/podcasts/default.aspx?audience=Audience-e5381407-359f-4922-97d0-0237af790eee&pageId=x40&WT.rss_ev=f
    - MSDN Blogs | RSS = http://blogs.msdn.com/b/mainfeed.aspx?Type=BlogsOnly | Main Web Site = http://blogs.msdn.com/b/
    - MSDN Radio | RSS = http://www.microsoft.com/events/podcasts/default.aspx?topic=&audience=&view=&pageId=x73&seriesID=Series-b9139976-8d48-4249-9b89-ccd17891de1e.xml&podcast=rss&type=wma | Main Web Site = http://www.microsoft.com/events/podcasts/default.aspx?seriesID=Series-b9139976-8d48-4249-9b89-ccd17891de1e.xml&pageId=x73&WT.rss_ev=f
    - O'Reilly Deal of the Day | RSS = http://feeds.feedburner.com/oreilly/ebookdealoftheday | Main Web Site = http://oreilly.com
    - O'Reilly New Books | RSS = http://feeds.feedburner.com/oreilly/newbooks | Main Web Site = http://oreilly.com/
    - Safari Books Online | RSS = http://my.safaribooksonline.com/rss | Main Web Site = http://my.safaribooksonline.com/
    - ScottGu's Blog | RSS = http://weblogs.asp.net/scottgu/rss.aspx | Main Web Site = http://weblogs.asp.net/scottgu/default.aspx
    - SourceForge Community Blog | RSS = http://sourceforge.net/blog/feed/ | Main Web Site = http://sourceforge.net/blog
    - Stack Overflow | RSS = http://blog.stackoverflow.com/feed/ | Main Web Site = http://blog.stackoverflow.com
    - Stepcase Lifehack | RSS = http://www.lifehack.org/feed/ | Main Web Site = http://www.lifehack.org
    - TechNet Radio | RSS = http://www.microsoft.com/events/podcasts/default.aspx?topic=&audience=&view=&pageId=x73&seriesID=Series-cc4e3db2-9212-43c5-a57b-d43fa31e6452.xml&podcast=rss&type=wma | Main Web Site = http://www.microsoft.com/events/podcasts/default.aspx?seriesID=Series-cc4e3db2-9212-43c5-a57b-d43fa31e6452.xml&pageId=x73&WT.rss_ev=f
    - Wrox All New Titles | RSS = http://www.wrox.com/WileyCDA/feed/RSS_WROX_ALLNEW.xml | Main Web Site = http://www.wrox.com
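    Incidentally, the "newest first, across every feed" view that makes Reader so handy is easy to approximate in a few lines of Python. A minimal sketch, assuming the third-party feedparser package is installed; the two URLs are taken from the list above:

      import feedparser  # third-party: pip install feedparser

      FEEDS = [
          "http://feeds.feedburner.com/netRocksFullMp3Downloads",
          "http://feeds.feedburner.com/herdingcode",
      ]

      entries = []
      for url in FEEDS:
          feed = feedparser.parse(url)
          # keep only entries that carry a parseable publication date
          entries.extend(e for e in feed.entries if getattr(e, "published_parsed", None))

      # reverse-chronological view across all feeds, like Reader's all-items list
      for e in sorted(entries, key=lambda e: e.published_parsed, reverse=True)[:20]:
          print(e.title, "-", e.link)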

    Read the article

  • Build Explorer version 1.1 for Visual Studio Team Explorer is released

    - by terje
    Our free extension to Visual Studio, the folder-based Build Explorer version 1.1, has now been released and uploaded to the Visual Studio Gallery and CodePlex. We have collected up a few changes and bug fixes, as follows:

    Changes:
    - Queue Default Builds can now be optionally fully enabled, fully disabled, or enabled just for leaf nodes (that is, disabled for folders). If you have a large number of builds, it was pretty scary to be able to launch all of them with just one click. However, it is nice to avoid having the dialog box pop up when you want to run just a single build. That is the reasoning behind the third choice here.
    - Auto fill-in of the builds at startup and refresh. This was a request that came up a lot, and the old behavior was also irritating to us. When the Team Project is opened, the Build Explorer will now start by itself and fill up its tree, so you no longer need to click the node. There was also quite a bit of flashing while the tree filled up; this has been reduced to a single top-level fill before the node collapses. The speed of the buildup of the tree has also been increased.
    - The "All Build Definitions" node is now shown at the top of the list.
    - A login box appeared in certain cross-domain situations. This was a fix for the TF30063 authentication problem we had in the beginning. Hopefully the new code fixes it properly, so that both the login box and the TF30063 error are gone forever; our testing so far seems to indicate it works. If anyone still hits a real problem here, there are two workarounds: 1) turn off the auto refresh to reduce the issue; if that doesn't fix it, 2) reinstall the former version (go to the CodePlex download site if you don't have it anymore), write a comment to this blog post with a description of what happens, and I will send a temporary fix asap.

    Bug fixes:
    - The folder name matching was case sensitive, so "Application.CI" and "application.CI" created two different folders.
    - "View All Builds" was not shown for leaf nodes, and "View Builds" didn't work in all cases. There were some inconsistencies here, which have been fixed.
    - Partly fixed: the context menu to queue a new build for disabled builds should be removed. That turned out to be a difficult one and is still on the list, but the command will no longer do anything for a disabled build.
    - Using Queue Default Builds on a folder that had disabled builds below it produced an error box and ruined the whole experience.

    As a result of these fixes, some new options have been introduced. The first two settings, the separator symbol and the handling of Queue Default Builds, are set per Team Project and stored in TFS source control under the BuildProcessTemplates folder, with the name Inmeta.VisualStudio.BuildExplorer.Settings.xml. The next two settings need some explanation: they control the auto update of the build folders and are stored in the local registry per user, under the key HKEY_CURRENT_USER\Software\Inmeta\BuildExplorer. The first option, "Use Timed Refresh at Startup", if turned off, means you will need to click the node as was done in version 1.0. The second option is a timed value: the delay between when the Build Explorer node is created and when the scanning of the build folders starts. It is assumed that this is enough time, and the tests so far indicate it is. If you have very many builds and you see that the explorer doesn't get them all, try increasing this value, and of course notify me of your case, either here or on the Visual Studio Gallery site.
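    For the curious, those per-user settings can also be inspected outside the dialog with a few lines of Python. A minimal sketch: the key path is the one named above, but no value names are assumed, since the post does not spell them out; the loop just prints whatever the extension has written:

      import winreg

      # Per-user Build Explorer settings, at the key described above
      PATH = r"Software\Inmeta\BuildExplorer"
      with winreg.OpenKey(winreg.HKEY_CURRENT_USER, PATH) as key:
          i = 0
          while True:
              try:
                  name, value, _type = winreg.EnumValue(key, i)
              except OSError:  # no more values under this key
                  break
              print(name, "=", value)
              i += 1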

    Read the article

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product, Data Quality Services (DQS). Among the many new features is the addition of this integrated, knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third-party providers hosted in the Azure Datamarket marketplace. In this review, I use SQL Server 2012 Data Quality Services and subscribe to a third-party data cleansing product through the Datamarket to showcase this unique capability.

    Crucial Questions

    For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third-party knowledge base from the acknowledged USPS-certified address accuracy experts at Melissa Data.

    Reference Data Services

    Within DQS there is a handy feature to add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services, including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization. These capabilities extend the data quality that DQS has natively by quite a bit.

    For my address correction review, I needed to first sign up with a reference data provider on the Azure Data Market site. For this example, I used Melissa Data's Address Check service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired reference data provider, I navigated my browser to the Account Keys page within My Account to view the generated account key, which I then inserted into the DQS Client under Configuration in the Administration area.

    Step-by-Step Guide

    That was all it took to hook the subscribed provider, Melissa Data, directly into my DQS Client. The next step was to attach and map my reference data, from the newly acquired reference data provider, to a domain in my knowledge base. On the DQS Client home screen, I selected "New Knowledge Base" under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a name and description for my new knowledge base, then proceeded to the Domain Management screen. Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data: Name, Address, Suite, City, State, Zip. I added a Suite column in my domain because Melissa Data has the ability to return missing suites based on last name or company. And that's a great benefit of using these third-party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third-party data providers, I can actually improve my data.

    Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and, under the Reference Data tab, added my third-party reference data provider (Melissa Data's Address Check) and mapped each domain that I had to the provider's schema. Now that my composite domain has been married to the reference data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data. My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes, with progress indicators showing that it was working. When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested, new, invalid, corrected (automatically), and correct. Accepting the suggestions provided by Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality straightforward for any IT professional.

    Final Note

    As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it's easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: DQS

    Read the article

  • No HDMI Audio with GeForce 9600GT and nForce board

    - by Bobby
    I've been trying to get HDMI with sound working for the last few days, and I'm a little bit out of ideas. (I've verified that the hardware/setup works via Windows.) aplay does not list my HDMI device:

      $ aplay -l
      **** List of PLAYBACK Hardware Devices ****
      card 0: NVidia [HDA NVidia], device 0: ALC662 rev1 Analog [ALC662 rev1 Analog]
        Subdevices: 1/1
        Subdevice #0: subdevice #0
      card 0: NVidia [HDA NVidia], device 1: ALC662 rev1 Digital [ALC662 rev1 Digital]
        Subdevices: 1/1
        Subdevice #0: subdevice #0

    I've already compiled the ALSA drivers (1.0.24) from a snapshot (with --with-oss=no) and added the line

      options snd-hda-intel model=auto   # tried 3stack-dig and 6stack-dig too

    to /etc/modprobe.d/alsa-base.conf. Still, the device does not show up. If it is important: the HDMI TV is at the moment not configured to be part of the X session (I've tried that too, at least with an X restart, and it didn't change anything). What did I miss?

      $ lspci
      00:00.0 Host bridge: nVidia Corporation Device 07c3 (rev a2)
      00:00.1 RAM memory: nVidia Corporation nForce 630i memory controller (rev a2)
      00:01.0 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
      00:01.1 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
      00:01.2 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
      00:01.3 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
      00:01.4 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
      00:01.5 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
      00:01.6 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
      00:02.0 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
      00:03.0 ISA bridge: nVidia Corporation MCP73 LPC Bridge (rev a2)
      00:03.1 SMBus: nVidia Corporation MCP73 SMBus (rev a1)
      00:03.2 RAM memory: nVidia Corporation MCP73 Memory Controller (rev a1)
      00:03.4 RAM memory: nVidia Corporation MCP73 Memory Controller (rev a1)
      00:04.0 USB Controller: nVidia Corporation GeForce 7100/nForce 630i USB (rev a1)
      00:04.1 USB Controller: nVidia Corporation MCP73 [nForce 630i] USB 2.0 Controller (EHCI) (rev a1)
      00:08.0 IDE interface: nVidia Corporation MCP73 IDE (rev a1)
      00:09.0 Audio device: nVidia Corporation MCP73 High Definition Audio (rev a1)
      00:0a.0 PCI bridge: nVidia Corporation MCP73 PCI Express bridge (rev a1)
      00:0b.0 PCI bridge: nVidia Corporation MCP73 PCI Express bridge (rev a1)
      00:0c.0 PCI bridge: nVidia Corporation MCP73 PCI Express bridge (rev a1)
      00:0d.0 PCI bridge: nVidia Corporation MCP73 PCI Express bridge (rev a1)
      00:0e.0 IDE interface: nVidia Corporation MCP73 IDE (rev a2)
      00:0f.0 Ethernet controller: nVidia Corporation MCP73 Ethernet (rev a2)
      02:00.0 VGA compatible controller: nVidia Corporation G94 [GeForce 9600 GT] (rev a1)

      $ aplay -L
      default
      pulse
          Playback/recording through the PulseAudio sound server
      front:CARD=NVidia,DEV=0
          HDA NVidia, ALC662 rev1 Analog
          Front speakers
      surround40:CARD=NVidia,DEV=0
          HDA NVidia, ALC662 rev1 Analog
          4.0 Surround output to Front and Rear speakers
      surround41:CARD=NVidia,DEV=0
          HDA NVidia, ALC662 rev1 Analog
          4.1 Surround output to Front, Rear and Subwoofer speakers
      surround50:CARD=NVidia,DEV=0
          HDA NVidia, ALC662 rev1 Analog
          5.0 Surround output to Front, Center and Rear speakers
      surround51:CARD=NVidia,DEV=0
          HDA NVidia, ALC662 rev1 Analog
          5.1 Surround output to Front, Center, Rear and Subwoofer speakers
      surround71:CARD=NVidia,DEV=0
          HDA NVidia, ALC662 rev1 Analog
          7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
      iec958:CARD=NVidia,DEV=0
          HDA NVidia, ALC662 rev1 Digital
          IEC958 (S/PDIF) Digital Audio Output
      dmix:CARD=NVidia,DEV=0
          HDA NVidia, ALC662 rev1 Analog
          Direct sample mixing device
      dmix:CARD=NVidia,DEV=1
          HDA NVidia, ALC662 rev1 Digital
          Direct sample mixing device
      dsnoop:CARD=NVidia,DEV=0
          HDA NVidia, ALC662 rev1 Analog
          Direct sample snooping device
      dsnoop:CARD=NVidia,DEV=1
          HDA NVidia, ALC662 rev1 Digital
          Direct sample snooping device
      hw:CARD=NVidia,DEV=0
          HDA NVidia, ALC662 rev1 Analog
          Direct hardware device without any conversions
      hw:CARD=NVidia,DEV=1
          HDA NVidia, ALC662 rev1 Digital
          Direct hardware device without any conversions
      plughw:CARD=NVidia,DEV=0
          HDA NVidia, ALC662 rev1 Analog
          Hardware device with all software conversions
      plughw:CARD=NVidia,DEV=1
          HDA NVidia, ALC662 rev1 Digital
          Hardware device with all software conversions

    Read the article

  • SQL SERVER – What is a Technology Evangelist?

    - by pinaldave
    When you hear that someone is an "evangelist", the first thing that might pop into your mind is the Christian church. In fact, the term did come from Christianity, and basically means someone who spreads the news about their faith. In the technology world, the same definition holds true. Technology evangelists are individuals who, professionally or in their spare time, spread the news about the latest new products. Sounds like a salesperson, right? No, the two are absolutely different. Salespeople also keep up to date with a large number of people, and like to convince others to buy their product – and some will go to any lengths to sell! An evangelist, on the other hand, is brutally honest about the product, even if it sometimes means not making a sale. An evangelist is out there to tell the TRUTH; a salesperson needs to make sales. An evangelist offers a solution independent of the technology used – a salesperson offers a particular technology.

    With this definition in mind, you can probably think of a few technology evangelists you already know. Maybe it's a relative or a neighbor, someone who loves keeping up with the latest trends and is always willing to tell you about them if you ask even the simplest question. In fact, they probably are evangelists and don't even know it. For a long time, the work of technology evangelism was in the hands of community members and community technology leaders. Luckily, various organizations have now understood the importance of the community and of helping the community reach its goals. This has led them to create the role of "Technology Evangelist".

    Let me talk about one of the most famous evangelists of SQL Server technology. A Technology Evangelist belongs only to the technology, above any country, race, or location; they are dedicated to the technology. Vinod Kumar is such a man, and he has given a lot to the community. For years he was a Technology Evangelist for Microsoft, and maintained a blog dedicated to spreading his enthusiasm for his favorite products. He is one of the most respected evangelists in the field, and has done a lot of work to define the job for other professionals. Vinod's career has since progressed to the Microsoft Technology Center (read his post), but he continues to be a strong presence in the evangelism community. I have a lot of respect for Vinod. He has done a lot for the community and for technology evangelism. Everybody dreams of serving the community the way he does, and he is a great role model for evangelists everywhere.

    On his blog, Vinod created one of the best descriptions of a Technology Evangelist. It defined the position and also made the distinction between evangelist and salesperson extremely clear. I will include the highlights of that list here, because no one can say it better than Vinod:
    - Bundle of energy – Passion is their middle name
    - Wonderful storytellers
    - Empathy, Trust, Loyalty, Openness, Accessibility and Warmth
    - Technology Enthusiast – Doers
    - Love people, people and more people – Community oriented
    - Unique Style and Leadership qualities!!!
    - Self-Confident, Self-Motivated but a student
    (To read the full list, see: Evangelism Beyond Borders with Evangelists.)

    His blog is a must-read for anyone interested in technology evangelism as a career or simply a hobby. His advice about how to gain an audience and become a trusted advisor is the best in the business. I think there is an evangelist in everyone; I, too, consider myself a technology evangelist. Regular readers of this blog will recognize that I am dedicated to bringing information to the masses, and that I pride myself on being brutally honest and giving every product fair consideration. I can think of no better way to close than this: "Once an Evangelist – Always an Evangelist!" Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, MVP, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Evangelist

    Read the article

  • Web Services for Info Explorer Zones

    - by Anthony Shorten
    One of the most interesting uses for XAI and configurable objects is the exposure of a query portal as a Web Service. Let me illustrate this with an example. Say you have an interface that requires a list of data from a number of product tables. In the past you would have had to build a Java program to do this with SQL and then use an application service, but it is now possible with just configuration.

    The first step in the process is to create the SQL you want to use for the interface. It can be any valid static SQL, or it can use host variables for the WHERE clause (we call that filtered). Once you are happy with the SQL (and it performs acceptably) you can incorporate that SQL into an Info Explorer Zone. You can use any of the explorer zone types, but I typically recommend F1-DE-SINGLE, as it supports a single SQL statement with multiple filters (up to 15) as well as hidden filters (up to 5). Hidden filters are typically not displayed in the UI as criteria (remember, explorer zones can be used on the user interface as well), but for web services they can be used as normal filters (which means you can use up to 20 filters all up).

    Once you are happy with the zone, you need to define it as a Business Service. We have a generic service called FWLZDEXP which allows an explorer zone to be defined as a Business Service. If you open any Business Service based upon FWLZDEXP you will see some examples. The schema is standard and pretty self-explanatory in terms of structure. The schema pattern looks like this:
    - Zone element: maps to the ZONE_CD element, and the default value is the zone name you just created. This links the business service to the zone.
    - Filter elements: you name the filters as you like, but the mapField is set to Fx_VALUE, where x is the filter number corresponding to the filter element in the zone definition.
    - Hidden filter elements: you name the filters as you like, but the mapField is set to Hx_VALUE, where x is the filter number corresponding to the hidden filter element in the zone definition.
    - results group: this holds the elements of the result set. Each element in your result set has a tag name and is linked to the COL_VALUE mapField, and the row element lists the SEQNO of the column. This corresponds to the column number in the result set of the zone.

    The product includes an example schema for the F1-USGRACML zone, which returns the access modes for a user group and application service filters. In that example, the userGroup and applicationService elements are the filters, and the rows contain a list of accessModeDescr; a reconstructed sketch of such a schema appears at the end of this post. This is just a simple example to illustrate the point. There are lots of examples in the product that you can investigate. One recommendation, to save time: copy the schema from one of the examples rather than typing it from scratch. You can simply modify the tags and other elements to suit your needs.

    Once the Business Service is defined, it can simply be exposed as a Web Service by registering an XAI Inbound Service using the Business Service definition as a basis. You now have a Web Service based upon an Info Explorer Zone. This is one of my favorite components, as it allows interfaces to be simplified.

    This will be my last blog entry for this year. I hope you all have a great and safe Christmas and an even greater new year. Next year promises to be an exciting year and I look forward to communicating exciting developments we are working on at the moment as they are released.
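    To make the mapping concrete, here is a rough, hand-written sketch of what a schema following the pattern above could look like for the F1-USGRACML case. Treat it as an illustration of the mapping rules only; it is not copied from the product, and the exact element syntax may differ, so copy a shipped example as suggested above:

      <schema>
          <!-- links the business service to the zone -->
          <zone mapField="ZONE_CD" default="F1-USGRACML"/>
          <!-- filters: Fx_VALUE matches filter x in the zone definition -->
          <userGroup mapField="F1_VALUE"/>
          <applicationService mapField="F2_VALUE"/>
          <!-- result set: each element is tied to COL_VALUE by the column's SEQNO -->
          <results type="list">
              <accessModeDescr mapField="COL_VALUE" row="1"/>
          </results>
      </schema>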

    Read the article

  • Throttling in OSB

    - by Knut Vatsendvik
    Technorati Tags: soa, integration, osb, throttling, overload protection

    A common problem with integration is the risk of overloading a particular web service. When the capacity of a web service is reached and it continues to accept connections, it will most likely start to deteriorate. Fortunately, Oracle Service Bus gives you two techniques for protecting against this: you can either limit the concurrent number of requests for a Business Service (outbound requests) or limit the number of threads processing the requests for a Proxy Service (inbound requests).

    Limiting the Concurrent Number of Requests

    Limiting the concurrent requests for a Business Service cannot be set at design time, so you have to use the built-in Oracle Service Bus Administration Console to do it (/sbconsole). Follow these steps to enable it:
    1. In Change Center, click Create to start a new session.
    2. Select Project Explorer, and navigate to the Business Service you want to limit.
    3. Select the Operational Settings tab of the View a Business Service page.
    4. In this tab, under Throttling, select the Enable check box. By enabling throttling you:
       - specify a value for Maximum Concurrency;
       - specify a positive integer value for Throttling Queue, to backlog messages that have exceeded the concurrency limit;
       - specify the maximum time in milliseconds (Message Expiration) a message can spend in the throttling queue.
    5. Click Update.
    6. Click Activate in Change Center to activate the new settings.

    If you re-publish the service, it will not overwrite these settings; only renaming or moving the resource will. Please note that a throttling queue is an in-memory queue: messages placed in this queue are not recoverable when a server fails or when you restart a server.

    Limiting the Number of Threads

    A better approach, in my opinion, is to limit the number of threads that can work with requests. Follow these steps to do it:
    1. Open the WebLogic Server Console (/console).
    2. In Change Center, click Create to start a new session.
    3. In the left pane, expand Environment and select Work Managers.
    4. In the Global Work Managers page, click New.
    5. Click the Work Manager radio button, then click Next.
    6. Enter a Name for the new Work Manager, and click Next.
    7. In the Available Targets list, select the server instances or clusters on which you will deploy applications that reference the Work Manager.
    8. Click Finish. The new Work Manager now appears in the Global Work Managers page.
    9. Select the new Work Manager.
    10. Right next to the Maximum Threads Constraint drop-down box, click New.
    11. Click the Maximum Threads Constraint radio button, then click Next.
    12. Enter a Name and a thread Count, the maximum number of threads to allocate for requests, then click Next.
    13. In the Available Targets list, select the server instances or clusters, as before.
    14. Click Finish.
    15. Click Save.
    16. Click Activate in Change Center to activate your changes. A restart may be necessary.

    Puh! Almost there. Start a new session, go to the Service Bus Console (/sbconsole), and find your consuming Proxy Service:
    1. Click the Edit button of the Transport Configuration tab.
    2. Click Next.
    3. Set the Dispatch Policy to the new Work Manager.
    4. Click Last.
    5. Click Save.
    6. Click Activate in Change Center to activate your changes.
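    If you have to repeat the Work Manager setup across environments, the console steps above can also be scripted with WLST (WebLogic's Jython shell). A minimal sketch under stated assumptions: the credentials, URL, domain, server, and object names below are placeholders, and exact MBean paths can vary between WebLogic versions:

      # run with: wlst.sh create_throttle_wm.py
      connect('weblogic', 'welcome1', 't3://localhost:7001')   # placeholder credentials/URL
      edit()
      startEdit()
      cd('/SelfTuning/osb_domain')                             # placeholder domain name
      wm = cmo.createWorkManager('ThrottleWM')
      mtc = cmo.createMaxThreadsConstraint('ThrottleMaxThreads')
      mtc.setCount(10)                                         # max threads working requests
      mtc.addTarget(getMBean('/Servers/osb_server1'))          # placeholder target server
      wm.setMaxThreadsConstraint(mtc)
      wm.addTarget(getMBean('/Servers/osb_server1'))
      save()
      activate()

    The Dispatch Policy that points the Proxy Service at the Work Manager still has to be set in the Service Bus Console, as in the steps above.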

    Read the article

  • How to use crontab, .netrc, and git push?

    - by Jon
    Hi all, I am in the process of automating the backups from various servers to a central point, then pushing those config changes into a git repo so I can track any changes over time. The rest of the scripts are working well; I can copy / rsync the files across the network to a central point. The last script is to get the config files put into / updated in the repository. The script is as follows:

      #!/bin/bash
      clear
      SERVERNAME="betty"
      SCRIPTDIR="/home/jon"
      GITROOT="/tmp/git"
      TEMPROOT="/tmp/backups"
      BACKUPROOTDIR="/mnt/backups"

      echo " - running as user: $UID"
      echo "backingup git config on $SERVERNAME"
      echo ""
      # check to see if root backup folder exists, otherwise create it.
      if [ -d $GITROOT ]; then
        rm -rf $GITROOT
      fi
      mkdir $GITROOT
      cd $GITROOT
      echo " - testing if home is where I think it should be!"
      echo $HOME
      echo " - testing if it can see netrc"
      tail $HOME/.netrc
      git clone http://192.168.10.97:8000/repositories/HOH-config-backups.git
      cd HOH-config-backups
      echo " - copy Configuration Folders across"
      cp -r $BACKUPROOTDIR/Configuration/* $GITROOT/HOH-config-backups/
      cp -r $BACKUPROOTDIR/scripts $GITROOT/HOH-config-backups/
      git add .
      git commit -a -m "committing any new configuration changes!"
      git push origin master
      echo ""
      echo "Git repo updated"
      echo ""
      echo " - backing up this script"
      FIREWIGSCRIPTLOC="$BACKUPROOTDIR/scripts/$SERVERNAME"
      if [ ! -d $FIREWIGSCRIPTLOC ]; then
        mkdir $FIREWIGSCRIPTLOC
      fi
      cp /home/jon/gitConfig.sh $FIREWIGSCRIPTLOC

    The git repo is on a different machine in the network, using Apache and HTTP-backend.exe (the smart HTTP protocol). If I run this script as me, "jon", it works. If I run it in crontab, it fails. git uses the /home/jon/.netrc file for authentication:

      machine 192.168.10.97
      login gitconfig
      password 1234579

    The log from crontab is:

      TERM environment variable not set.
       - running as user: 1000
      backingup git config on betty
       - testing if home is where I think it should be!
      /home/jon
       - testing if it can see netrc
      machine 192.168.10.97
      login gitconfig
      password 1234579
      got 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd
      walk 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd
      got be880f2d306778a538d592e7a02eb19f416612f7
      got bd387e8def9f77aafa798bf53e80d949aba443e8
      got 1bc1a59e12775841d4c59d77c63b8a73823138c2
      walk bd387e8def9f77aafa798bf53e80d949aba443e8
      Getting alternates list for http://192.168.10.97:8000/repositories/HOH-config-backups.git
      got 030512237bca72faf211e0e8ec2906164eac34f6
      got 9bc2f575240bc1f61ff7d69777ce1a165d06b184
      got b8400f7f01429104a9d4786a6bb1a16d293e37c1
      got 2403b5bf611010e0b401f776f0e23b09ce744838
      got 1a27944c48269ef3608a8f2466e43402d06faac0
      got b686f45b7d57af4fa8ca0d528bb85216d6247e19
      Getting pack list for http://192.168.10.97:8000/repositories/HOH-config-backups.git
      Getting index for pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff
      Getting pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff which contains ff84d6d48e9326066438d167a10251218d612b3d
      walk b686f45b7d57af4fa8ca0d528bb85216d6247e19
      got 364e30daec17814073e668f490bb84af891fe1f7
      got 23f6497e7f9b80e0d90adad73bd0407a0e5ac6ce
      got 9e77c47574b5e23ea669afe0c23ab235e4917ee1
      got 6654e0d328a216b3783e98c47206cb2d01b3353d
      got 28821ffd437d2689ffb82c6e4b9c3f5372c95c4b
      got 8c384a24f645389e4d4b08013c79e9e73a658342
      got d203be0123736ee025ce20c081f1489098648dfc
      got 1852603bf7709e71417d8ccec02390279d533642
      got fb753a26b20b04694419fce8ecdaa8dbec105cf1
      got 736028997cd84dd1c135f57e9d246674b9cd0b9d
      got 7af836249e20096d0476a548d5be702a071cdd4b
      got 240dc39d9db50df63073fc7927b2d002dfa0f54c
      got 93abd36e3935a01011eb753b635a1a0e984bf31e
      got c6269e28fecf4d8d0d98b9358aecb3acff02df44
      got b0aa29432f73e64032682a351d436c24b14078ab
      walk 240dc39d9db50df63073fc7927b2d002dfa0f54c
      got 58fb66d9f35f8a5e32ff4683309c5f0c2a3a03c5
      got 0da2def4de0565483cdbe6b87418ee2beb122e58
      got 0f6a86c6f87ed52ad2ed01e5c6edd661d364930c
      got 437a93d27b5bb89c739a0564a34a616e832c3ebe
      got fe0385abe5c0acd8462268dac330bae00e934f1b
      got 24259f8f5c5c9ee974a75fe3d1e07c02e3e20fe9
      got d29f624bf1a5eceedaa86c10fee35f62747c7d04
      got 0154e4c987132585ea7a92b77d02dba285512d6b
      got eda8bf526567c25ee70addb2ad3c3c6aa57eac77
      got 9f3d9d7262d66f9fa4f6a13b7c86199953f4bc4e
      got 8e20881e19667aa22245d0598646991067455a4d
      got abb1123145689b35eb19519952c71253ee45fa98
      got dfeff593c79b4156ce2ce1adf043d0e80356488c
      got e20c5b48b1d360e0bcf34189e3f3d2bbf23e92cc
      got b13eb81cc274780322ecf786372320343926bec9
      walk 8de83868b3fac748b0a55eba16c8f668ec852abb
      got b5961421bbc42afe7a07cc1c8b615aba26ba74d7
      got 2650ba819019df4193b482733e29ca79b29f3f2c
      got b3111e1be8103e91803a97a817ed81f28025aca1
      got b060be934d709684f5eb5dad3c03932a3589e864
      got cf70d2043f081d7a4438e9d5a290a9f986c84060
      got 80bf0f1cc836feab86d6935bb7968d8555a8d531
      got da318d167920e34bc6573e4fc236249ccbbee316
      got d82ac853d387b760149599e6e1ab96403f6ec672
      got 0005f691d1f46550fdb4e56025f52e30a5b18cc2
      Initialized empty Git repository in /tmp/git/HOH-config-backups/.git/
       - copy Configuration Folders across
      Created commit 424df2f: committing any new configuration changes!
       3 files changed, 55 insertions(+), 1 deletions(-)
       create mode 100755 scripts/betty/gitConfig.sh
      error: Cannot access URL http://192.168.10.97:8000/repositories/HOH-config-backups.git/, return code 22
      error: failed to push some refs to 'http://192.168.10.97:8000/repositories/HOH-config-backups.git'

      Git repo updated

       - backing up this script
      cp: cannot create regular file `/mnt/backups/scripts/betty/gitConfig.sh': Permission denied

    My crontab is:

      # m h dom mon dow command
      04 * * * * /home/jon/gitConfig.sh > /tmp/gitconfig.log 2>&1

    I open it by doing $ crontab -e, i.e. not as root. I am a bit confused as to why it is not running as my user (or what user id 1000 is). Not sure what I need to do to get the push with git to work within crontab.

    edit: found out about the userid:

      jon@betty:~$ id
      uid=1000(jon) gid=1000(jon) groups=4(adm),20(dialout),24(cdrom),46(plugdev),109(sambashare),114(lpadmin),115(admin),1000(jon)

    Here is my $HOME/.gitconfig file:

      [user]
          name = Jon Hawkins
          email = [email protected]

    Thanks
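    A side note for anyone reproducing this: since git reads $HOME/.netrc as shown above, Python's standard netrc module is a quick way to confirm that the file parses and which login would be picked for the host. A minimal sketch; the IP is the repo host from the script:

      import netrc
      import os

      # netrc.netrc() parses $HOME/.netrc, the same file git consults above
      print("HOME =", os.environ.get("HOME"))
      auth = netrc.netrc().authenticators("192.168.10.97")
      if auth:
          login, _account, _password = auth
          print("login found for 192.168.10.97:", login)  # password deliberately not printed
      else:
          print("no .netrc entry for 192.168.10.97")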

    Read the article

  • Spending the summer at camp… Web Camp, that is

    - by Jon Galloway
    Microsoft is sponsoring a series of Web Camps this summer: free two-day events being held worldwide, and I'm really excited to be taking part. The camp is targeted at a broad range of developer backgrounds and experience. Content builds from 101-level introductory material to 200-300 level coverage, but we hit some advanced bits (e.g. MVC 2 features, jQuery templating, IIS 7 features, etc.) that advanced developers may not yet have seen. We start with a lap around ASP.NET and Web Forms, then move on to building an application with ASP.NET MVC 2, jQuery, and Entity Framework 4, and finally deploy to IIS. I got to spend some time working with James before the first Web Camp refining the content, and I think he's packed about as much goodness into the time available as is scientifically possible. The content is really code focused – we start with File/New Project and spend the day building a real, working application.

    The second day of the Web Camp gives attendees an opportunity to get hands-on. There are two options:
    - Join a team and build an application of your choice
    - Work on a lab or tutorial

    James Senior and I kicked off the fun with the first Web Camp in Toronto a few weeks ago. It was sold out, lots of fun, and by all accounts a great way to spend two days. I'm really enthusiastic about the format. Rather than just listening to speakers and then forgetting everything in a few days, attendees actually build something of their choice. They get an opportunity to pitch projects they're interested in, form teams, and build it – getting experience with "real world" problems, with all the help they need from experienced developers. James got help on the second day's practical part from the good folks that run Startup Weekend. Startup Weekend is a fantastic program that gathers developers together to build cool apps in a weekend, so their input on how to organize successful teams for weekend projects was invaluable. Nick Seguin joined us in Toronto, and in addition to making sure that everything flowed smoothly, he just added a lot of fun and excitement to the event, reminding us all about how much fun it is to come up with a cool idea and just build it.

    In addition to the Toronto camp, I'll be at the Mountain View, London, Munich, and New York camps over the next month. London is sold out, but the rest still have space available, so come join us! Here's the full list, with notes on the ones I'll be at because - you know - it's my blog. The whole speaker list is great, including Scott Guthrie, Scott Hanselman, James Senior, Rachel Appel, Dan Wahlin, and Christian Wenz.
    - Toronto, May 7-8 (James Senior and I were thrown out on our collective ears)
    - Moscow, May 19
    - Beijing, May 21-22
    - Shanghai, May 24-25
    - Mountain View, May 27-28 (I'm speaking with Rachel Appel)
    - Sydney, May 28-29
    - Singapore, June 04-05
    - London, June 04-05 (I'm speaking with Christian Wenz – SOLD OUT)
    - Munich, June 07-08 (I'm speaking with Christian Wenz)
    - Chicago, June 11-12
    - Redmond, WA, June 18-19
    - New York, June 25-26 (I'm speaking with Dan Wahlin)
    Come say hi!

    Read the article

  • Using the new CSS Analyzer in JavaFX Scene Builder

    - by Jerome Cambon
    As you know, JavaFX provides many properties from the API that you can set to customize your components or make them behave as you want. For instance, for a Button, you can set its font or its max size. Using Scene Builder, these properties can be explored and modified using the Inspector.

    However, JavaFX also provides many other properties for fine-grained customization of your components: the CSS properties. These properties are typically set from a CSS stylesheet. For instance, you can set a background image on a Button, change the Button corners, etc. Using Scene Builder, until now, you could set a CSS property using the Inspector's Style and Stylesheet editors, but you had to go to the JavaFX CSS documentation to find out which CSS properties can be applied to a given component. Happily, Scene Builder 1.1 recently added a very interesting new feature: the CSS Analyzer. It allows you to explore all the CSS properties available for a JavaFX component, and helps you build your CSS rules.

    A very simple example: make a Button rounded

    Let's take a very simple example: you would like to customize your Buttons to make them rounded. First, enable the CSS Analyzer using the 'View->Show CSS Analyzer' menu. Grow the main window and the CSS Analyzer to get more room. Then, drop a Button from the Library onto the Content View: the CSS Analyzer now shows the Button's CSS properties.

    As you can see, there is a '-fx-background-radius' CSS property that allows you to define the radius of the background (note that you can get the associated CSS documentation by clicking on the property name). You can experiment with this by setting the Button's Style property from the Inspector. As you can see in the CSS doc, one can set the same radius for all 4 corners with a single number. Once the style value is applied, the Button is rounded, as expected. Look at the CSS Analyzer: the '-fx-background-radius' property now has 2 entries, the default one and the one we just entered from the Style property. The new value "wins": it overrides the default one and becomes the actual value (to highlight this, the cell background turns blue).

    Now, you will certainly prefer to apply this new style to all the Buttons of your FXML document, and have a CSS rule for this. To do this, save your document first, and create a css file in the same directory as the new document. Create an empty css file (e.g. test.css), and attach it to the root AnchorPane by first selecting the AnchorPane, then using the Stylesheets editor from the Inspector. Add the corresponding CSS rule to your new test.css file from your preferred editor (NetBeans for me ;-) and save it:

      .button {
          -fx-background-radius: 10px;
      }

    Now, select your Button and have a look at the CSS Analyzer. As you can see, the Button is inheriting the CSS rule (since the Button is a child of the AnchorPane) and still has its inline Style. The inline Style "wins", since it has precedence over the stylesheet. The CSS Analyzer columns are displayed in precedence order. Note the small right-arrow icons, which let you jump to the source of a value (either test.css, or the Inspector in this case). Of course, unless you want to set a specific background radius for this particular Button, you can remove the inline Style from the Inspector.

    Changing the color of a TitledPane arrow

    In some cases, it can be useful to select the inner element you want to style directly from the Content View. Drop a TitledPane onto the Content View. Then select the CSS cursor from the CSS Analyzer (the other cursor, on the left, brings you back to 'standard' selection), which allows you to select an inner element. Select the TitledPane arrow, which gets a yellow background, and the Styleable Path is updated.

    To define a new CSS rule, you can first copy the Styleable Path, then paste it in your test.css file. Then add an entry to set -fx-background-color to red. You should have something like:

      .titled-pane:expanded .title .arrow-button .arrow {
          -fx-background-color: red;
      }

    As soon as test.css is saved, the change is taken into account in Scene Builder. You can also use the Styleable Path to discover all the inner elements of TitledPane, by clicking on the arrow icon.

    More details

    You can see the CSS Analyzer in action (and many other features) in the JavaOne BOF: BOF4279 - In-Depth Layout and Styling with the JavaFX Scene Builder, presented by my colleague Jean-Francois Denise. On the right-hand side, click on the Media link to go to the video (streaming) of the presentation. The Scene Builder support of CSS starts at 9:20; the CSS Analyzer presentation starts at 12:50.

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having the option to use a distributed environment can be much faster for the deployment and even the development process.

    Implementation: When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) provider to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows:
    1. Requirements
    2. Analysis
    3. Design and Development
    4. Implementation
    5. Testing
    6. Deployment to Production
    7. Maintenance

    In an on-premise environment, this often equates to the following process map:
    - Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    - Analysis: Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.
    - Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    - Implementation: Code checked into the main branch. Code forked as needed.
    - Testing: Code deployed to on-premise testing servers. If no server capacity is available, more resources are procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. testing performed.
    - Deployment to Production: Server team involved to select platform and environments with available capacity. If no server capacity is available, the standard budgeting and procurement process is followed, and systems are built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.
    - Maintenance: Code changes evaluated and altered according to need.

    In a distributed computing environment like Windows Azure, the process maps a bit differently:
    - Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    - Analysis: Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.
    - Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    - Implementation: Code checked into the main branch. Code forked as needed.
    - Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. testing performed.
    - Deployment to Production: Code deployed to Azure. Point-in-time backup and recovery plans configured and put into place. (HA/DR and automated backups are already present in the Azure fabric.)
    - Maintenance: Code changes evaluated and altered according to need.

    This means that several steps can be removed or expedited. It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further, since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the "Azure Marketplace": in effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt in to their current code, possibly saving development time.

    Resources:
    - Whitepaper download - What is ALM? http://go.microsoft.com/?linkid=9743693
    - Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
    - LiveMeeting recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • Install GIMP 2.7.1 on Lucid Lynx using PPA

    - by Vivek
    GIMP lovers are going to be disappointed to hear that GIMP is going away in the next release of the much-awaited Ubuntu 10.04. Today we take a look at installing it on Lucid Lynx using a PPA. The reasons cited by the developers for getting rid of it are that GIMP is too professional a piece of software to be included in the regular desktop version of Ubuntu, that it takes up too much space on the disk, and that it's too complicated for regular users. If you can't live without it… let's see how to install GIMP 2.7.1 on Lucid Lynx (currently in Alpha). The new version of GIMP supports single-window mode, and we will also see how to enable that feature.

    First we need to add the official GIMP 2.7.1 PPA to the software sources of Ubuntu 10.04, by opening a terminal window and typing the following command:

      sudo sh -c "echo 'deb http://ppa.launchpad.net/matthaeus123/mrw-gimp-svn/ubuntu lucid main' >> /etc/apt/sources.list"

    Now that we have added the PPA, we need to add the GPG key, so type the following in your terminal window:

      sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 405A15CB

    Next up we have to update the software repository:

      sudo apt-get update

    All that is left is to install GIMP 2.7.1 by typing in the following:

      sudo apt-get install gimp

    Type 'y' (for yes) when prompted to install GIMP. Once GIMP is installed, you can start it by going to Applications > Graphics > GNU Image Manipulation Program. You now have your favorite GIMP on your favorite Ubuntu 10.04. GIMP still comes with its default 3-window layout, which can clog up your lower panel in Ubuntu 10.04. However, you can now run GIMP in single-window mode by going to Windows > Single-Window Mode. That's all! Now you have GIMP running in single-window mode, with less hassle than managing 3 windows. It's unfortunate that GIMP will not be included by default, but by following these instructions, you'll be able to enjoy using it in Ubuntu 10.04.

    Read the article

  • Training on Demand Certification Packages for DBAs

    - by Antoinette O'Sullivan
    The demand for Database Administrators continues to grow.* Almost two-thirds of IT hiring managers indicate that they highly value certifications in validating IT skills and expertise.**

    * Job satisfaction and DBA work growth rate: CNN Money's 2011 Best Jobs in America survey.
    ** Survey among nearly 1,700 respondents by CompTIA, the nonprofit trade association for the IT industry, cited in Certification Magazine, Feb. 14th, 2012.

    Get Certified with Training on Demand

    Are you an experienced database professional eager to achieve certification? Is time your most precious resource? Then try our new Training On Demand Certification Value Packages with a 20% discount. These all-in-one packages give you everything you need to get certified with success.

    Why Training On Demand:
    - Expert training from Oracle's top instructors
    - Sophisticated streaming video recording
    - Available for 90 days, 24 hours a day, 7 days a week
    - Whiteboarding and training labs for hands-on experience
    - Start, stop, pause, jump or rewind sections of the course as needed
    - Oracle University instructor Q&A
    - A full-text search that leads to the right video fragment in a matter of seconds (watch this demo to see how it works)

    Additional certification resources:
    - Benefits of Oracle Certification
    - Database Certification Paths
    - Available Database Certification Exams

    Getting certified has never been easier! For assistance, contact your local Oracle University Service Desk.

    Many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database 11g training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL Database will broaden your career path with growing job demand.
    These Value Packages are also available in the following training formats: In-Class, Live Virtual Class, Self-Study and Training On Demand. Savings per format (each package also includes a FREE exam retake): the In-Class Edition saves 5%; the Live Virtual Class, Self-Study and Training On Demand Editions each save 20%. The formats each package is offered in are listed below.

    MySQL Database Administration Value Packages:
    - MySQL Database Administrator Certification Value Package: In-Class | Live Virtual Class | Self-Study | Training On Demand

    MySQL Developer Value Packages:
    - MySQL Developer Certification Value Package: In-Class | Live Virtual Class

    Oracle Database 10g Value Packages:
    - Oracle Database 10g Administrator Certified Associate Certification Value Package: In-Class | Live Virtual Class | Self-Study
    - Oracle Database 10g Administrator Certified Professional Certification Value Package: In-Class | Live Virtual Class | Self-Study

    Oracle Database 11g Value Packages:
    - Oracle Database 11g Administrator Certified Associate Certification Value Package: In-Class | Live Virtual Class | Self-Study | Training On Demand
    - Oracle Database 11g Administrator Certified Professional Certification Value Package: In-Class | Live Virtual Class | Self-Study | Training On Demand
    - Exam Prep Seminar Value Package: Oracle Database Admin 1: Training On Demand
    - Oracle Database 11g Administrator Certified Professional UPGRADE Certification Value Package: Training On Demand
    - Oracle Real Application Clusters 11g and Grid Infrastructure Administration Certified Expert Certification Value Package: Training On Demand
    - Exam Prep Seminar Value Package: Oracle Database Admin 2: Training On Demand
    - Exam Prep Seminar Value Package: Oracle RAC 11g and Grid Infrastructure Administration: Training On Demand
    - Exam Prep Seminar Value Package: Upgrade Oracle Certified Professional (OCP) to Oracle Database 11g: Training On Demand

    SQL and PL/SQL Value Packages:
    - Oracle Database SQL Expert Certification Value Package: In-Class | Live Virtual Class | Self-Study | Training On Demand
    - Exam Prep Seminar Value Package: Oracle Database SQL: Training On Demand

    View our Certification Value Packages. Mention this code at the time of booking: E1245.

    For a full list of MySQL training courses and events, go to http://oracle.com/education/mysql.

    Read the article

  • Initial Review - Mastering Unreal Technology I: Introduction to Level Design with Unreal Engine 3

    - by Matt Christian
Recently I purchased 3 large volumes on using the Unreal 3 Engine to create levels and custom games. This past weekend I cracked the spine of the first and started reading. Here are my early impressions (I'm ~250 pages into it; with appendices it's about 900).

Pros

Interestingly, the book starts with an overview of the Unreal engines leading up to Unreal 3 (including Gears of War) and follows with some discussion on planning a mod and what goes into the game development process. This is a nice introduction to the book and is much preferred to a simple chapter detailing what is on the included CD, how to install and set up UnrealEd, etc. While the chapter on Unreal history and planning could be considered 'fluff', it's much less 'fluffy' than what most books provide.

I need to mention one thing here that is pretty crucial to the way I'm going to continue reviewing this book. Most technical books like this are used as a shelf reference: a thick volume you use for looking up techniques every now and again. Even so, I prefer reading from cover to cover, including chapters I may already be knowledgeable on (I'm sure this is typical for most people). If there were a chapter on installing UnrealEd (the previously mentioned 'fluff'), I would probably force myself to read it, even though I've installed the game and engine multiple times on different systems.

Chapter 3 is where we really get to the introduction piece of UnrealEd, creating your first basic level. This large chapter details creating two small rooms, adding static meshes, adding lighting, creating and adding particle emitters, creating a door that animates with Unreal Matinee and Kismet, static meshes with physics, and other little additions to make your level look less beginner-like. This really is a chapter that overviews the entire rest of the book, as each following chapter details the creation and intermediate usage of Static Meshes, Materials, Lighting, etc.

One other very nice part of this book is the way the tutorials are set up. Each tutorial builds off the previous one and all are step-by-step. If you haven't completed one yet, you can find all the starting files on the CD that comes with the book.

Cons

While the description of the overview chapter (Chapter 3) is fresh in your mind, let me start the cons by saying this chapter is structured in an extremely confusing way for the noob. At one point, you end up creating a door mesh and setting it up as an InteropMesh so that it is ready to be animated, only to switch to particles and spend a good portion of time working on a different piece of the level. Yes, this is actually how I develop my levels (jumping back and forth), but it's very odd for a book to jump out of sequence.

The next item might be a positive or a negative depending on your skill level with UnrealEd. Most of the introduction to the editor layout is found in one of the appendices instead of before Chapter 3. For new readers, this might lead to confusion, as Appendix A would typically be read between Chapters 2 and 3. However, this is a positive for those with some experience in UnrealEd, as they don't have to force themselves through a 'learn every editor button' chapter. I'm listing this in the Cons section because the book is an 'Introduction to...' and is probably directed toward a lot of very beginner developers.

Finally, there's a lack of general description of a lot of the underlying engine and of what each piece in UnrealEd is or does.
Sometimes you'll be performing tutorial after tutorial with barely a paragraph in between describing ANY of what you've just done. Tutorial 1.1 Step 6 says to press Button X, so you do. But why? This is partly a problem with the structure of the tutorials rather than the content of the book. Since the tutorials are so focused on a step-by-step (or procedural) description of a process, you learn the process and not why you're doing it. For example, you might learn how to size a material to a surface, but you will only learn which buttons to press and not what each one does. This becomes extremely apparent in the chapter on Static Meshes, as most of the chapter is spent in 3D Studio Max. Since there are books on 3DSM and modelling, this book really only tells you the steps and says to go grab a book on modelling if you're really interested in 3DSM. Again, I've learned the process to develop my own meshes in 3DSM, but I don't know the why behind the steps.

Conclusion

So far the book is very good, though I would have a hard time recommending it to a complete beginner. I would suggest anyone looking at this book (obviously including the other 2, more advanced volumes) pick up a copy of UDK or Unreal 3 (available online or via download services such as Steam) and watch some online tutorials and play with it first. You'll find plenty of online videos available that were created by the authors and may serve as a better introduction to the editor.

    Read the article

  • BizTalk Server Monitoring – SharePoint Web Part

    - by SURESH GIRIRAJAN
    I have worked with customers using BizTalk as shared infrastructure in the enterprise, where two or more BizTalk apps run on it for different business groups. These customers are not using the BizTalk ESB portal, even though they are using the BizTalk ESB exception framework. So the main issue for all these business groups is that they have no visibility into the BizTalk apps running in production, even though they have SCOM and other monitoring in place. I am going to list a few issues I am trying to address and how I mitigate them: first on the list is how to get visibility into production, then how to provision access to the BizTalk resources with minimal effort, and how we can take advantage of the resources we have today.

I was working on creating REST data services for BizTalk RFID a year ago, which are available on CodePlex. I thought to extend that idea and take advantage of the BizTalk Data Services also available on CodePlex. I have extended the BizTalk data services and will upload the updated service soon.

Let me walk through how my solution works. As a first step I am using the BizTalk data service (a REST service), which exposes most of the BizTalk artifacts as resources, such as Applications, Orchestrations, Send ports, Receive ports, Host instances and In-process instances.

(Screenshot: BizTalk Server Monitoring – SharePoint Web Part)

I am hosting the BizTalk data service in IIS, with the application pool configured to run under BizTalk administrator credentials. With this setup I make the service accessible anonymously.

As the next step of this solution, I have created a SharePoint Visual Web Part which consumes the BizTalk data service and displays all the BizTalk application and platform settings in read-only mode, even though the BizTalk data services also offer actions such as starting and stopping Orchestrations, Send ports, Receive locations, Host instances, etc.

(Screenshots: Host Instances; BizTalk Applications; BizTalk Running / Suspended Instances)

This BizTalk Monitoring SharePoint web part is then added to SharePoint, which eliminates the need to grant access to BizTalk users explicitly: when a BizTalk contractor or a BizTalk application user needs access to the BizTalk environment, all they need is access to the SharePoint website. You can configure the web part to point to different endpoints based on your environment. I am keeping it read-only to make it easier for the users and for provisioning. This removes the dependency on a BizTalk admin, at least for viewing BizTalk application status, errors, etc. If we need to make any changes to a BizTalk application, then it is the application owner's responsibility to coordinate with the BizTalk admins. There are options like the BizTalk ESB portal, BizTalk 360, etc., but this is one approach to reduce the number of steps required to give access to BizTalk application users and to maximize the resources we have in the enterprise today. You can also expose this data service through Azure Service Bus and access it from other apps such as mobile devices, or create a web site hosted in Azure, etc.

One last thing: I have tested this only with BizTalk Server 2010 on an x64 VM, but it should work on other versions. I will try to upload the code shortly with instructions on how to set it up. I welcome thoughts and suggestions. Hope this helps.
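To give a feel for the consuming side, here is a minimal sketch, in Python rather than the web part's managed code, of calling such a REST data service. The base URL and the "HostInstances" resource name are assumptions for illustration, not the actual URIs of the CodePlex project:

# Minimal sketch of polling a BizTalk Data Services-style REST endpoint.
# BASE_URL and the resource path are hypothetical; substitute the URIs
# exposed by your own deployment of the data service.
import requests

BASE_URL = "http://biztalk-server/BizTalkDataServices"  # hypothetical host/path

def get_resource(path):
    """Fetch one resource feed as JSON (such services often also serve Atom)."""
    response = requests.get(
        f"{BASE_URL}/{path}",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # List host instances; the exact payload shape depends on the service.
    print(get_resource("HostInstances"))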

    Read the article

  • Read Mobi eBooks on Kindle for PC

    - by Matthew Guay
    Do you use your PC as an eBook reader? Kindle for PC makes it easy to read thousands of books from the Kindle Store on your computer. What you may not know is that it also works with the .mobi format, so you can increase the number of books you can read.

Amazon has jumpstarted the eBook market with their popular Kindle device. Last fall Amazon unveiled Kindle for PC, and we reviewed how you can Read Kindle Books On Your Computer with Kindle for PC. Whether or not you own a Kindle or other eBook reader, this is a great way to take advantage of the thousands of eBooks available from the Kindle Store today. It supports the azw, prc, and tpz formats, which are sold from the Kindle Store, but it also supports Mobipocket (.mobi) eBooks that are not DRM protected. Here's how you can add them to Kindle for PC so you can easily read them on your PC.

Getting Started

First, make sure you have Kindle for PC (link below) installed on your computer. Sign in with your Amazon account when you first run it. Kindle for PC lets you easily read eBooks downloaded from the Kindle Store, but it doesn't have any way to add other eBooks directly from the program. To add eBooks, you can sometimes download and double-click on the books, and they will open in Kindle for PC and be automatically added to the library. However, this does not always seem to work. So instead, browse to your Documents folder (simply click on the Documents link on your Start menu), and double-click on the My Kindle Content folder. This folder contains all the Kindle books you have downloaded. If you have other eBooks you would like to add to Kindle for PC, simply drag-and-drop or copy and paste them into this folder. Here we have a .mobi-formatted book downloaded from Project Gutenberg that we're dragging into the folder.

Now, close and reopen Kindle for PC. It should now show your new eBook right beside the eBooks you have downloaded from the Kindle Store. These eBooks work just the same as the ones downloaded from the Kindle Store, and you can change the font size and add bookmarks just as with other eBooks. The eBooks downloaded this way may show up with either an Amazon logo or a mobile device icon. You should only see the mobile device icon on .mobi files formatted for mobile devices; other ones should show up with the Amazon logo. In this screen, Pilgrim's Progress is a standard .mobi book, The Adventures of Sherlock Holmes is a Mobipocket book, and the others are downloaded from the Kindle Store.

Conclusion

This is a great way to read eBooks from across the internet on Kindle for PC. Wikipedia's Kindle page has a list of websites that offer eBooks formatted for the Kindle, so be sure to check it out for more books.
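If you have a whole folder of DRM-free .mobi files, the drag-and-drop step can also be scripted. Here is a minimal sketch assuming the default My Kindle Content location under Documents and a Downloads source folder; both paths are assumptions to adjust for your own machine:

# Copy DRM-free .mobi eBooks into Kindle for PC's content folder.
# SOURCE and KINDLE assume default Windows locations; adjust as needed.
import shutil
from pathlib import Path

SOURCE = Path.home() / "Downloads"                      # where your .mobi files live
KINDLE = Path.home() / "Documents" / "My Kindle Content"

for book in SOURCE.glob("*.mobi"):
    print(f"Copying {book.name} ...")
    shutil.copy2(book, KINDLE / book.name)

print("Done - restart Kindle for PC to see the new books.")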
Links:
- Download Kindle for PC
- List of websites that offer eBooks that will work on Kindle (via Wikipedia)

    Read the article

  • Unique Business Value vs. Unique IT

    - by barry.perkins
    When the age of computing started, technology was new, exciting, full of potential and had a long way to grow. Vendor architectures were proprietary, limited in function at first, and grew in capability and complexity over time. There were few if any "standards", let alone "open standards", and the concepts of "open systems" and "open architectures" were far in the future. Companies employed intelligent, talented and creative people to implement the best possible solutions for their company. At first, those solutions were "unique" to each company. As time progressed, standards emerged, companies shared knowledge, the business capability supplied by technology grew, and companies continued to expand their use of technology. Taking advantage of change required companies to struggle through periodic "revolutionary" change cycles: costly changes that were fraught with risk, resulted in solutions with ever shorter half-lives, and frequently required altering existing business processes and retraining employees and partner businesses. The pace of technological invention and implementation grew at an ever-increasing rate, making the "revolutionary" approach based upon "proprietary" or "closed" architectures or technologies no longer viable. Concurrent with the advancement of technology, the rate of change in business increased, leading us to the incredibly fast-paced, highly charged and competitive global economy that we have today, where the most successful companies are those that are good at implementing, leveraging and exploiting change.

Fast forward to today, a world where dramatic changes in business and technology happen continually, a world where "evolutionary" change is crucial. Companies can no longer afford to build "unique IT", nor can they afford regular intervals of "revolutionary" change, with the associated costs and risks. Human ingenuity was once again up to the task, turning technology into a platform that supports business through evolutionary change by employing "open": open standards, open systems, open architectures and open solutions. Employing "open" enables companies to implement systems based upon technology, capability and standards that will evolve over time, providing a solid platform upon which a company can drive business needs, requirements, functions and processes down into the technology, rather than exposing technology to the business. This allows companies to focus on providing "unique business value" rather than "unique IT".

The big question! Does moving from "older" technology that no longer meets the needs of today's business to new "open" technology require yet another "revolutionary" change? A "revolutionary" change with a short half-life, camouflaging reality with great marketing? The answer is "perhaps". With the endless options available to choose from, it is entirely possible to implement a solution that works well today but in 5 years' time becomes yet another albatross for the company to bear. Some solutions may look good today, solving a budget challenge by reducing cost or solving a specific tactical challenge, but result in highly complex environments that may be difficult to manage and maintain and that limit the future potential of your business. Put differently, some solutions might push today's challenge into the future, resulting in a more complex and expensive solution. There is no such thing as a "one size fits all" IT solution for business.
If all companies implemented business solutions based upon technology that required, or forced, the same business processes across all businesses in an industry, it would be extremely difficult to show competitive advantage through "unique business value". It would be equally difficult to "evolve" to meet or exceed business needs and keep up with today's rapid pace of change. How does one ensure that they do not jump from one trap directly into another? Or, to put it positively, there are solutions available today that can address these challenges and issues. How does one ensure that the buying decision of today will serve the business well for years into the future?

Intelligent & Informed decisions - "buying right"

In a previous blog entry, we discussed the value of linking tactical to strategic. The key is driving the focus to what is best for your business: handling today's tactical issues while also aligning with a roadmap/strategy that is tightly aligned with your strategic business objectives. When considering the plethora of possible options that provide various approaches to solving today's complex business problems, it is extremely important to ensure that the vendors supplying those options focus on what is best for your business, supply sufficient information, provide adequate answers to questions, address challenges, issues, concerns and objections honestly and openly, and supply solutions that are tailored for, and deliver the most business value possible for, your business. Here are a few questions to consider relative to the proposed options that should help ensure that today's solution doesn't become tomorrow's problem. Do the proposed solutions:

- Solve the problem(s) you are trying to address?
- Provide a solid foundation upon which to grow/enhance your business?
- Provide tactical gains that align with and enable your strategic business goals/objectives?
- Provide an infrastructure that can be leveraged with subsequent projects?
- Solve problems for the business overall and the lines of business, or just for IT?
- Simplify your current environment?
- Provide the basis for business efficiency, agility and clarity; for governance, risk and compliance; and for real-time business visibility and trend analysis?

Does your IT staff have the knowledge and experience to successfully manage the proposed systems once they are deployed in production? Done well, you will be presented with options tailored to your business that enable you to drive the "unique business value" necessary to help your business stand out from others, creating a distinct competitive advantage and delivering what your customers need, when they need it, so you can attract new customers and new business and grow top-line revenue, all at a cost that provides a strong Return on Investment/Return on Assets. The net result is growth with managed cost, providing significantly improved profit margin and shareholder value.

    Read the article

  • Am I right about the differences between Floyd-Warshall, Dijkstra's and Bellman-Ford algorithms?

    - by Programming Noob
    I've been studying the three and I'm stating my inferences from them below. Could someone tell me if I have understood them accurately enough or not? Thank you.

1. Dijkstra's algorithm is used only when you have a single source and you want to know the smallest path from one node to another, but it fails in cases like this.

2. Floyd-Warshall's algorithm is used when any of the nodes can be a source, so you want the shortest distance to reach any destination node from any source node. This only fails when there are negative cycles.

3. Bellman-Ford is used like Dijkstra's, when there is only one source. It can handle negative weights, and its working is the same as Floyd-Warshall's except for one source, right? (This is the most important one; I mean, this is the one I'm least sure about.)

If you need to have a look, the corresponding algorithms are (courtesy Wikipedia):

Bellman-Ford:

procedure BellmanFord(list vertices, list edges, vertex source)
   // This implementation takes in a graph, represented as lists of vertices
   // and edges, and modifies the vertices so that their distance and
   // predecessor attributes store the shortest paths.

   // Step 1: initialize graph
   for each vertex v in vertices:
       if v is source then v.distance := 0
       else v.distance := infinity
       v.predecessor := null

   // Step 2: relax edges repeatedly
   for i from 1 to size(vertices)-1:
       for each edge uv in edges:   // uv is the edge from u to v
           u := uv.source
           v := uv.destination
           if u.distance + uv.weight < v.distance:
               v.distance := u.distance + uv.weight
               v.predecessor := u

   // Step 3: check for negative-weight cycles
   for each edge uv in edges:
       u := uv.source
       v := uv.destination
       if u.distance + uv.weight < v.distance:
           error "Graph contains a negative-weight cycle"

Dijkstra:

function Dijkstra(Graph, source):
    for each vertex v in Graph:            // Initializations
        dist[v] := infinity ;              // Unknown distance function from source to v
        previous[v] := undefined ;         // Previous node in optimal path from source

    dist[source] := 0 ;                    // Distance from source to source
    Q := the set of all nodes in Graph ;   // All nodes in the graph are unoptimized - thus are in Q
    while Q is not empty:                  // The main loop
        u := vertex in Q with smallest distance in dist[] ;  // Start node in first case
        if dist[u] = infinity:
            break ;                        // all remaining vertices are inaccessible from source

        remove u from Q ;
        for each neighbor v of u:          // where v has not yet been removed from Q.
            alt := dist[u] + dist_between(u, v) ;
            if alt < dist[v]:              // Relax (u,v,a)
                dist[v] := alt ;
                previous[v] := u ;
                decrease-key v in Q;       // Reorder v in the Queue
    return dist;

Floyd-Warshall:

/* Assume a function edgeCost(i,j) which returns the cost of the edge from i to j
   (infinity if there is none).
   Also assume that n is the number of vertices and edgeCost(i,i) = 0
*/

int path[][];
/* A 2-dimensional matrix. At each step in the algorithm, path[i][j] is the shortest path
   from i to j using intermediate vertices (1..k-1). Each path[i][j] is initialized to
   edgeCost(i,j).
*/

procedure FloydWarshall ()
    for k := 1 to n
        for i := 1 to n
            for j := 1 to n
                path[i][j] = min ( path[i][j], path[i][k]+path[k][j] );
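Not part of the original question, but as a quick sanity check of points 1 and 3: below is a direct Python transcription of the Bellman-Ford pseudocode above, run on a tiny graph with a negative (but non-cyclic) edge, exactly the kind of input where plain Dijkstra can return a wrong answer:

import math

def bellman_ford(num_vertices, edges, source):
    """edges is a list of (u, v, weight) tuples; vertices are 0..num_vertices-1."""
    dist = [math.inf] * num_vertices
    dist[source] = 0
    # Step 2 of the pseudocode: relax every edge |V| - 1 times.
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Step 3: one more pass; any further improvement means a negative-weight cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("Graph contains a negative-weight cycle")
    return dist

# 0 -> 1 -> 2 costs 1 + (-2) = -1, shorter than the direct 0 -> 2 edge of weight 0.
# Dijkstra would finalize vertex 2 at distance 0 and never revisit it.
edges = [(0, 1, 1), (1, 2, -2), (0, 2, 0)]
print(bellman_ford(3, edges, source=0))   # prints [0, 1, -1]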

    Read the article

  • Dual Boot Oracle Solaris 11/11 and Linux (Ubuntu 11.10/grub2)

    - by HartmutStreppel
    After having worked with OpenSolaris on my laptop first, then with an upgrade to Oracle Solaris 11 Express, I finally did a fresh install of Oracle Solaris 11/11 when it became available. I am not a big fan of upgrades, as I know that I am not the perfect administrator and my system gets spoiled with unclean configurations, outdated packages and wrong settings that cannot be reversed. So I prefer to start from scratch. Especially with Oracle Solaris 11 I wanted to have a system just like a customer would have it in production. The installation was smooth - more or less, if I had only read the documentation a bit better in advance.

For a number of reasons I prefer a dual boot system. The most important one is that, especially with mobile devices, you often run into network problems, and you have a hard time figuring out where the problem is: in your laptop hardware, in the OS you are running, or really within the network. If you have an alternate OS to boot, you can exclude the OS and your hardware. This makes you feel better. The second OS should be a Linux variant, and for some not so obvious reason I decided to go with the latest Ubuntu release (11.10). It replaced a very old openSUSE installation that had not been booted for a while.

I knew that it was probably best to install Ubuntu first and then Oracle Solaris 11, as this would put the right boot information for Oracle Solaris into the MBR and onto the root partition. But then, how to enable dual boot with the 2 OSes? Searching the web, one mainly finds information about dual boot of Linux and Linux, or of Linux and Windows. I do not want to explain which wrong configurations I worked through; I prefer to explain the final setup, which is extremely simple, and I am wondering why this is not covered as the easiest solution for most dual boot setups. I use chainloading from and to both OSes, with the only disadvantage that I have to confirm two grub menus each time I want to boot the "other" OS. Still, there were some hurdles to jump over:

- Ubuntu did not like having its boot blocks placed on the partition instead of the disk; I must admit that I do not fully understand why. But using the --force option you can get that done.
- Ubuntu needs an active partition; that was easy to achieve.
- grub2 uses a different numbering scheme for the partitions. That is in the docs, if you read them.

BTW: The usual disclaimer is valid. There is no guarantee that what I describe works or works well. Please back up your data carefully before trying any of this.

So, Oracle Solaris 11 is installed on the first partition and Ubuntu on the third. With Ubuntu, things initially were a bit more complicated, as I did not know how to boot it, and the live CD did not offer the capability to boot the on-disk image (at least I did not find it). So I booted the live CD, mounted the Ubuntu installation at /mnt, and wrote the boot blocks into the partition. This is something that does not seem to be recommended; at least grub-install refrained from doing what I intended. After a bit more research I was bold enough to use the --force option and wrote the boot blocks to /dev/sda3 using:

grub-install --boot-directory=/mnt/boot --force --no-floppy /dev/sda3

So, I now had a system with the Solaris boot loader in the MBR, Solaris-specific boot blocks on the Solaris root partition and Ubuntu-specific boot blocks in the Ubuntu partition. I just had to chain them together and I was done.
Oracle Solaris 11: I have added the following lines to /rpool/boot/grub/menu.lst (be aware of the /rpool!):

title Ubuntu 11.10
root (hd0,2)
makeactive
chainloader +1
boot

The Ubuntu root file system sits on the third partition (/dev/sda3).

Ubuntu: I have added the following lines to /etc/grub.d/40_custom:

menuentry "Solaris 11/11" {
      set root=(hd0,1)
      chainloader +1
}

Two things need to be mentioned: a) grub2 starts numbering partitions with 1, so my /dev/sda1 is partition 1. b) Oracle Solaris boots without the partition being made active (btw: the command to make a partition active with grub2 is "parttool (hd0,1) boot+", which currently does not work for me). As debugging grub is a bit complicated, I used the grub CLI to perform some tests and also used a tool that I found on sourceforge.net that was able to prepare a list of all boot loaders on all partitions. This told me that the basic setup was correct. Unfortunately I lost it in the live CD environment. I hope this is helpful for some of the readers. Hartmut
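Because the 0-based versus 1-based partition numbering in point (a) is the easiest detail to get wrong, here is a tiny throwaway Python helper, not from the original post, that maps a Linux device name to both grub notations:

def grub_names(device):
    """Map e.g. '/dev/sda3' to its legacy grub and grub2 partition names."""
    disk_letter = device[len("/dev/sd")]           # 'a' -> first disk
    part_number = int(device[len("/dev/sda"):])    # Linux numbers partitions from 1
    disk = ord(disk_letter) - ord("a")             # grub disks are 0-based in both versions
    legacy = f"(hd{disk},{part_number - 1})"       # legacy grub: partitions 0-based
    grub2 = f"(hd{disk},{part_number})"            # grub2: partitions 1-based
    return legacy, grub2

print(grub_names("/dev/sda3"))   # ('(hd0,2)', '(hd0,3)') - matches the menu.lst entry above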

    Read the article

  • Remove Programs from the Open With Menu in Explorer

    - by Matthew Guay
    Would you like to clean up the Open with menu in Windows Explorer? Here's how you can remove program entries you don't want in this menu on any version of Windows.

Have you ever accidentally opened an MP3 with Notepad, or a ZIP file with Word? If so, you're also likely irritated that these programs now show up in the Open with menu in Windows Explorer every time you select one of those files. Whenever you open a file type with a particular program, Windows will add an entry for it to the Open with menu. Usually this is helpful, but it can also clutter up the menu with wrong entries.

On our computer, we have tried to open a PDF file with Word and Notepad, neither of which can actually view the PDF itself. Let's remove these entries. To do this, we need to remove the registry entries for these programs. Enter regedit in your Start menu search or in the Run command to open the Registry Editor. Back up your registry first just in case, so you can roll back any changes you make if you accidentally delete the wrong value. Now, browse to the following key:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\

Here you'll see a list of all the file extensions that are registered on your computer. Browse to the file extension you wish to edit, click the white triangle beside it to see the subfolders, and select OpenWithList. In our test, we want to change the programs associated with PDF files, so we select the OpenWithList folder under .pdf. Notice the names of the programs under the Data column on the right. Right-click the value for the program you don't want to see in the Open with menu and select Delete. Click Yes at the prompt to confirm that you want to delete this value. Repeat these steps with all the programs you want to remove from this file type's Open with menu. You can go ahead and remove entries from other file types as well if you wish.

Once you've removed the entries you didn't want to see, check out the Open with menu in Explorer again. Now it will be much more streamlined and will only show the programs you want to see.

Conclusion

This simple trick can help you keep your Open with menu tidy and only show the programs you want in the list. It can be irritating to accidentally open files in programs that can't even read them. This trick works in all versions of Windows, including 2000, XP, Vista, and Windows 7.
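If you have several extensions to clean up, the same edit can be scripted. Below is a minimal Python sketch using the standard winreg module; ".pdf" and the commented-out value name are placeholders, and as with the manual method you should back up your registry first:

# List, and optionally delete, "Open with" entries for one file extension.
# EXT is a placeholder; Windows also keeps an MRUList value ordering the
# entries, which is left untouched here.
import winreg

EXT = ".pdf"   # the file extension to clean up
KEY_PATH = (r"Software\Microsoft\Windows\CurrentVersion"
            r"\Explorer\FileExts\{}\OpenWithList".format(EXT))

def list_entries():
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        i = 0
        while True:
            try:
                name, data, _ = winreg.EnumValue(key, i)
            except OSError:          # no more values under this key
                break
            if name != "MRUList":
                print(f"{name} = {data}")
            i += 1

def remove_entry(value_name):
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.DeleteValue(key, value_name)

list_entries()          # e.g. shows: a = WINWORD.EXE, b = notepad.exe, ...
# remove_entry("b")     # uncomment with the letter of the unwanted entry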

    Read the article

  • Calling XAI Inbound Services from Oracle BI Publisher

    - by ACShorten
    Note: This technique requires Oracle BI Publisher 1.1.3.4.1, which supports Service Complex Types.

Note: Web Services require credentials for authentication. The defaults for the product installation are used in this article; if your site uses alternative values, substitute those alternatives where applicable.

Note: Examples shown in this article are for illustrative purposes only.

When building a report in Oracle BI Publisher it may be necessary to call an XAI Inbound Service to get information via the object, rather than directly querying the database tables, for various reasons:

- The CLOB fields used in the object are accessible for a report. Note: CLOB fields cannot be used as criteria in the current release.
- Objects can take advantage of algorithms to format or calculate additional data that is not stored in the database directly. For example, information format strings can be automatically generated by the object, which gives consistent information between a report and the online screens.

To use this facility the following process must be performed:

1. Ensure that the product group, cisusers by default, is enabled for the SPLServiceBean in the console. This allows BI Publisher access to call Web Services directly. To ensure this, follow the instructions below:
   - Log on to the Oracle WebLogic server console using an appropriate administrator account. By default the user system or weblogic is provided for this purpose.
   - Navigate to the Security Realms section and select your configured realm. This is set to myrealm by default.
   - In the Roles and Policies section, expand the SPLService section of the Deployments option to reveal the SPLServiceBean roles.
   - If there is no role associated with the SPLServiceBean, create a new EJB role and specify the cisusers role, by default.
   - Add a Role Condition to the role just created, with a Predicate List of Group, and specify cisusers as the Group Argument Name.
   - Save all your changes.

2. The XAI Inbound Services to be used by BI Publisher must be defined prior to using the interface. Refer to the XAI Best Practices (Doc Id: 942074.1) from My Oracle Support, or the online help, for more information about this process.

3. Inside BI Publisher create your report according to the BI Publisher documentation. When specifying the dataset, under the Data Model Report option, specify the following to use an XAI Inbound Service as a data source:

- Type: Web Service
- Complex Type: true
- Username: Any valid user name within the product. This user MUST have security access to the objects referenced in the XAI Inbound Service.
- Password: The authentication password for Username.
- Timeout: The timeout, in seconds, set for the Web Service call. For example, 60 seconds.
- WSDL URL: Use the WSDL URL on the XAI Inbound Service definition as your WSDL URL. By default it is in the format http://<host>:<port>/<server>/XAIApp/xaiserver/<service>?WSDL, where <host> is the host name of the Web Application Server, <port> is the port allocated to the Web Application Server for product access, <server> is the server context for the server, and <service> is the XAI Inbound Service name. Note: Customers using secure transmission should substitute https for http and use the HTTPS port allocated to the product at installation time.
- Web Service: Select the name of the service that shows in the drop-down menu. If no service name shows up, it means that Publisher could not establish a connection with the server, or could not use the WSDL provided in the above URL, in order to get the service name. See the BI Publisher server log for more information.
- Method: Select the name of the method that shows in the drop-down menu. A method name should show in the Method drop-down menu once the Web Service name is selected.

Additionally, filters can be used from the Web Service; these can be generated, required or optional, from the WSDL in the Parameter List.
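As a small illustration of how the placeholders above combine, here is a sketch that assembles a default-format WSDL URL; the host, port, server context and service name are made-up values, not defaults from any real installation:

# Illustration only: assembling the default WSDL URL from its parts.
# All four values below are hypothetical; use your own environment's values.
host, port = "ouaf-host.example.com", 6500
server, service = "ouaf", "CMGetAccountDetails"

wsdl_url = f"http://{host}:{port}/{server}/XAIApp/xaiserver/{service}?WSDL"
print(wsdl_url)
# http://ouaf-host.example.com:6500/ouaf/XAIApp/xaiserver/CMGetAccountDetails?WSDL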

    Read the article

  • Expert F# – Pattern Matching with Adam and Eve

    - by MarkPearl
    So I am loving my Expert F# book. I wish I had more time with it, but the little time I get I really enjoy. However, today I was completely stumped by what the book was trying to get across with regards to pattern matching. On page 38 (chapter 3), it briefly describes F# option values. On this page it gives a code snippet along the lines of the code below, and then goes on to speak briefly about pattern matching...

open System

type 'a option =
    | None
    | Some of 'a

let people = [ ("Adam", None);
               ("Eve", None);
               ("Cain", Some("Adam", "Eve"));
               ("Abel", Some("Adam", "Eve")) ]

let showParents(name, parents) =
    match parents with
    | Some(dad, mum) -> printfn "%s has father %s, mother %s" name dad mum
    | None -> printfn "%s has no parents!" name

Console.WriteLine(showParents("Adam", None))

Originally when I read this code I think I misunderstood the purpose of the example. I for some reason thought that the showParents function would magically parse the people array, look for a match on the name, and then show the parents. But obviously it cannot do this, since there is no reference to the people array in the showParents method. After rereading the page I realized that I had just combined the two segments of code together, possibly incorrectly, and that a better example would have been a code snippet like the following:

let showParents(name, parents) =
    match parents with
    | Some(dad, mum) -> printfn "%s has father %s, mother %s" name dad mum
    | None -> printfn "%s has no parents!" name

Console.WriteLine(showParents("Adam", None))
Console.WriteLine(showParents("Cain", Some("Adam", "Eve")))
Console.ReadLine()

However, what if I wanted a function that was passed a list of people and a name, and would then show the parents of that name if there were any, and if not would show that they had no parents... that doesn't seem too difficult, does it? Let's look at my very unoptimized noob F# code to try and achieve this...

open System

let people = [ ("Adam", None);
               ("Eve", None);
               ("Cain", Some("Adam", "Eve"));
               ("Abel", Some("Adam", "Eve")) ]

//
// returns the name of the person
//
let showName(person : string * (string * string) option) =
    let name = fst(person)
    name

//
// Returns a string with the parents' details or not
//
let showParents(itemData : string * (string * string) option) =
    let name = fst(itemData)
    let parents = snd(itemData)
    match parents with
    | Some(dad, mum) -> "Father " + dad + " and Mother " + mum
    | None -> "Has no parents!"

//
// Prints the details
//
let showDetails(person : string * (string * string) option) =
    Console.WriteLine(showName(person))
    Console.WriteLine(showParents(person))

//
// Check if the name matches the first portion of person;
// if so, return true, else return false
//
let nameMatch(name : string, person : string * (string * string) option) =
    match name with
    | x when x = fst(person) -> true
    | _ -> false

//
// Searches a list of people and looks for a match of names
//
let findPerson(name : string, people : (string * (string * string) option) list) =
    let o = Seq.tryFind(fun x -> nameMatch(name, x)) people
    if Option.isSome o then o
    else Option.None

//
// Try to find a person; if found, show their details,
// else show no match
//
let FoundPerson = findPerson("Cain", people)
match FoundPerson with
| None -> Console.WriteLine("Not found")
| Some(x) -> showDetails(x)

Console.ReadLine()

So, my code isn't the cleanest, but it did teach me a bit more F#. The area that I learnt about was the option keyword.
The challenge was that it had to react accordingly both when no match for the name is found, and when a name is found but the person doesn't have parents. I'm pretty sure I can optimize this code quite a bit more, and I think I may come back to it sometime in the future and relook at it, but for now at least I was able to achieve what I wanted... and my brain has gone just that wee little bit more functional.

    Read the article

  • Unable to apt-get upgrade in ubuntu 11.10

    - by blackhole
    These are the errors shown by the different clients.

Update Manager:

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 968, in simulate
    trans.unauthenticated = self._simulate_helper(trans)
  File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1092, in _simulate_helper
    return depends, self._cache.required_download, \
  File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 235, in required_download
    pm.get_archives(fetcher, self._list, self._records)
SystemError: E:Method has died unexpectedly!, E:Sub-process returned an error code (100), E:Method /usr/lib/apt/methods/ did not start correctly

Synaptic Package Manager:

E: Method has died unexpectedly!
E: Sub-process returned an error code (100)
E: Method /usr/lib/apt/methods/ did not start correctly
E: Unable to lock the download directory

Command: sudo apt-get upgrade

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  libfreetype6 libfreetype6-dev
2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Failed to exec method /usr/lib/apt/methods/
E: Method has died unexpectedly!
E: Sub-process returned an error code (100)
E: Method /usr/lib/apt/methods/ did not start correctly

Can anyone tell me how to resolve these issues? I have no volatile packages or anything, so I am even posting a preview of my sources.list file:

# deb cdrom:[Ubuntu 10.10 _Maverick Meerkat_ - Release i386 (20101007)]/ maverick main restricted

# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://in.archive.ubuntu.com/ubuntu/ oneiric main restricted

## Major bug fix updates produced after the final release of the
## distribution.
deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates main restricted

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://in.archive.ubuntu.com/ubuntu/ oneiric universe
deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates universe

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://in.archive.ubuntu.com/ubuntu/ oneiric multiverse
deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates multiverse

## Uncomment the following two lines to add software from the 'backports'
## repository.
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
# deb http://in.archive.ubuntu.com/ubuntu/ maverick-backports main restricted universe multiverse
# deb-src http://in.archive.ubuntu.com/ubuntu/ maverick-backports main restricted universe multiverse

## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
deb http://archive.canonical.com/ubuntu oneiric partner
deb-src http://archive.canonical.com/ubuntu oneiric partner

## This software is not part of Ubuntu, but is offered by third-party
## developers who want to ship their latest software.
deb http://extras.ubuntu.com/ubuntu oneiric main
deb-src http://extras.ubuntu.com/ubuntu oneiric main

deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security main restricted
deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security universe
deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security multiverse
# deb http://archive.canonical.com/ lucid partner

    Read the article

  • Listen to Online Radio with Antenna

    - by Asian Angel
    Are you looking for some fresh new music to listen to at home or at work? With Antenna you can listen to online radio stations from all over the world. Note: Requires Adobe AIR (download link at the bottom of the article).

Antenna in Action

Once you have completed the installation and started Antenna up, this is the window that you will see. The left side holds a "browsing pane" where you can search for the stations that you would like to listen to, using the various categories. Based on the stations that you choose, the background map will change location to match the stations' locations. Here is a closer look at the "Categories Bar".

For our first example we used the "Country" category to find our first station to listen to. When you choose a country you will be presented with a list of the stations available for that country. To start listening to a particular station, just double-click on the appropriate entry line. A closer look at the "browser pane" with our first station playing: notice the "Reliability Indicator" that is available for each listing. Some stations may be better than others, and you can use this indicator to choose the best streaming stations from the list.

In the upper left corner you will notice three icons; each opens a small pop-up window with a specific purpose. The first icon opens the "About" window. If you need to contact Antenna's creator or would like to place a request for a station to be added to the app, this is the best way to do it. The second icon opens an Antenna-specific chat window. The third icon allows you to set a default location and make adjustments to some of the app's settings.

Recording Audio

The "Recording Function" is the only area where we experienced some "quirkiness" with the app. To start recording, press the round white button. Note: Based on feedback on the app creator's webpage, some people have experienced the same problem as we did during our tests, with the app failing to complete the recordings. Hopefully this bug will be fixed in the next release. Once recording has started, the button will turn red; click on the button again to stop recording. Once you have stopped recording, you will see the following message window appear, and the main window will be shaded over with a whitish color until you click "OK".

Conclusion

Regardless of the slight quirkiness in recording online music, Antenna more than makes up for it with its terrific selection of online stations and streaming capability. New fresh music for you to listen to is only a click or two away.

Links: Download Antenna (Antenna Homepage) | Download Antenna at Softpedia | Download Adobe AIR

    Read the article

  • What I Expect From Myself This Year

    - by Lee Brandt
    I am making it a point not to call them resolutions, because the word has become an institution and is beginning to have no meaning. That's why I end up not keeping my resolutions, I think. So in the spirit of holding myself to my own commitments, I will make a plan and some realistic goals.

1.) Lose weight. Everyone has this on their list, but I am going to be conservative and specific. I currently weigh 393 lbs. (yeah, I know). So I plan to lose 10 lbs. per month; that's 1 lb. every three days, which shouldn't be difficult if I stick to my diet and exercise plan. How do I do this?
- Diet: vegetarian. Since I already know I have high blood pressure and borderline high cholesterol, a meat-free diet is in order. I was vegan for a little over 2 years in 2006-2008; I think I can handle vegetarian.
- Exercise: at least 3 times a week (preferably every day) for 30 minutes. It has to be something that gets my heart rate up or burns in my muscles. I can walk for cardio to start, plus mild calisthenics (girly push-ups, crunches, etc.).
- Move: I spend all my time behind the computer. I have recently started to use a slight variation of the Pomodoro Technique (my Pomodoros are 50 minutes instead of 25). During my 10 minutes every hour to answer emails, chats, etc., I will take a few minutes to stretch.

2.) Get my wife pregnant. We've been talking about it for years. Now that she is done with graduate school and I have a great job, now's the time. We'll be the oldest parents in the PTA most likely, but I don't care.

3.) Blog more. Another favorite among bloggers, but I do have about six drafts for blog posts started. The topics are there; all I need to do is flesh out the posts. This can be the first hour of any computer time I have after work: as soon as I am done exercising, shower and post.

4.) Speak less. Most people want to speak more. I want to concentrate on the places that I enjoy and that can really use the speakers (like local code camps), rather than trying to be some national speaker. I love speaking at conferences, but I need to spend some more time at home if we're going to get pregnant.

5.) Read more. I got a Kindle for Christmas and I am loving it so far. I have almost finished Treasure Island and am getting ready to pick my next book. I will probably read a lot of classics, for 2 reasons: (1) they teach deep lessons and (2) most are free for the Kindle.

6.) Find my religion. I was raised Southern Baptist, but I want to find my own way. I've been wanting to go to the local Unitarian church, so I will make a point to go before the end of March. I also want to add a few religious books to my reading list. My boss bought me a copy of Lee Strobel's The Case for Christ: A Journalist's Personal Investigation of the Evidence for Jesus, and I have a copy of Bruce Feiler's Abraham: A Journey to the Heart of Three Faiths. I will start there.

Seems like a lot now that I spell it out like this. But these are only starters. I am forty years old. I cannot keep living like I am twenty anymore. So here we go, 2011.

    Read the article

< Previous Page | 595 596 597 598 599 600 601 602 603 604 605 606  | Next Page >